├── LICENSE ├── README.md ├── appserver-scripts ├── balancer-manager.py ├── deploy.sh ├── deploy_wrapper_standalone.sh ├── env-qa-test.sh ├── env-security-check.sh ├── schedule_deploy └── sslTest.java ├── bin ├── README.md ├── SimpleHTTPSServer.py ├── asm_translate.py ├── bandmeter.sh ├── cert ├── check_courses ├── checkip ├── fpaste ├── gitar.sh ├── gpg_decrypt_individual_files.sh ├── gpg_encrypt_individual_files.sh ├── gpg_sign_sha1sums.sh ├── gpg_validate_sha1sums.sh ├── gpg_verify_checksums.sh ├── headers ├── knownhosts.sh ├── logchecker.py ├── ls.py ├── ls2.py ├── man2pdf ├── missing_from_all_clusters.sh ├── pytailuntil.py ├── servercount ├── sort_clusters ├── update_hostname.sh └── wasted-ram-updates.py ├── dotfiles ├── .bash_aliases ├── .bashrc_custom ├── .gitconfig ├── .vimperatorrc ├── .vimrc └── gpg.conf ├── icinga ├── README.md ├── plugins │ ├── check_db_connections │ ├── check_last_fsck │ ├── check_md_raid │ ├── check_mem │ ├── gitlab_status │ ├── jvm_health │ │ ├── README.md │ │ ├── jvm_health.py │ │ └── parsegarbagelogs.py │ ├── ssl_check │ ├── ssl_check.sh │ └── test.sh ├── sbin │ └── munin-cgi.py └── scripts │ ├── check-icinga-address-against-hostname.sh │ └── report-status.py ├── init.d ├── jboss └── tomcat ├── live_trends ├── README ├── generate_aligned_csv_file.py ├── killall_jobs ├── run_tests ├── run_tests_manually ├── trendprograms │ ├── 15minute_load_live_trend │ ├── 1minute_load_live_trend │ ├── 5minute_load_live_trend │ ├── httpd_memory_live_trend │ ├── jvm_memory_live_trend │ ├── jvm_oracleconnections_live_trend │ ├── open_file_descriptors_live_trend │ └── thread_dump_count_live_trend └── view_result_file └── munin └── plugins ├── README.md └── java_vm_time /LICENSE: -------------------------------------------------------------------------------- 1 | All unlicensed scripts/programs use the following license. A local LICENSE 2 | file in the directory or LICENSE declaration in script comments supersedes 3 | this License. 4 | 5 | The MIT License (MIT) 6 | 7 | Copyright (c) 2012 Samuel Gleske, Drexel University 8 | 9 | Permission is hereby granted, free of charge, to any person obtaining 10 | a copy of this software and associated documentation files (the "Software"), 11 | to deal in the Software without restriction, including without limitation 12 | the rights to use, copy, modify, merge, publish, distribute, sublicense, 13 | and/or sell copies of the Software, and to permit persons to whom the 14 | Software is furnished to do so, subject to the following conditions: 15 | 16 | The above copyright notice and this permission notice shall be included in 17 | all copies or substantial portions of the Software. 18 | 19 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 20 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 21 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 22 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 23 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 24 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 25 | SOFTWARE. 26 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # drexel-university 2 | 3 | This is a list of scripts I created while working at Drexel University. All are licensed under the MIT license. 4 | 5 | ## List of README files.
6 | 7 | * [bin/README.md](bin/README.md) 8 | * [icinga/plugins/jvm_health/README.md](icinga/plugins/jvm_health/README.md) 9 | * [icinga/README.md](icinga/README.md) 10 | * [live_trends/README](live_trends/README) 11 | * [munin/plugins/README.md](munin/plugins/README.md) 12 | 13 | Generate bullet list of readme files: 14 | 15 | find | grep -i readme | while read x;do readme="$(echo $x | sed 's#^\.##' | sed 's#^/##')";echo "* [$readme]($readme)";done 16 | 17 | # ./appserver-scripts/ 18 | 19 | Some of these scripts help me automate deployments to app servers. 20 | 21 | # ./bin/ 22 | 23 | These are user bin scripts I put in my ~/bin directory. 24 | 25 | # ./dotfiles/ 26 | 27 | Some common dotfiles which I personally like to customize. 28 | 29 | # ./icinga/ 30 | 31 | Some Nagios/Icinga scripts which I maintain and use within our monitoring system, Icinga. 32 | 33 | # ./init.d/ 34 | 35 | This is a set of service/daemon scripts I wrote for different software. 36 | Things like tomcat startup scripts and JBoss startup scripts (or any software) 37 | will be in here. 38 | 39 | # ./live_trends/ 40 | 41 | I wrote a set of scripts to record live performance data of a system 42 | using simple bash scripts (or any language). I did this because I 43 | needed something better than munin, which only recorded 44 | once every 5 minutes. 45 | 46 | # ./munin/ 47 | 48 | Scripts which I've written or maintained for munin monitoring at Drexel. We use munin for performance trending. 49 | -------------------------------------------------------------------------------- /appserver-scripts/balancer-manager.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | ''' balancer-manager for managing Apache mod_proxy_balancer nodes 3 | 4 | If, when using the httpd server, you get a 404 with https urls, then add the following to the virtualhost. 5 | SSLCACertificateFile /etc/httpd/ssl.crt/incommon-ca.crt''' 6 | 7 | # Created by Sam Gleske (sag47@drexel.edu) 8 | # Fri Mar 8 12:31:54 EST 2013 9 | # Copyright 2013 Drexel University 10 | # Red Hat Enterprise Linux Server release 6.3 (Santiago) 11 | # Linux 2.6.32-279.14.1.el6.x86_64 12 | # Python 2.6.6 13 | 14 | VERSION = '0.1.0' 15 | USER_AGENT = 'balancer-manager.py/' + VERSION 16 | MANAGER_URL = 'https://somehost.com/balancer-manager' 17 | 18 | # variable cleanup 19 | if MANAGER_URL[-1:] != '/': 20 | MANAGER_URL = MANAGER_URL + '/' 21 | 22 | #imports 23 | import os,urllib2,sys,re 24 | from optparse import OptionParser, OptionGroup, SUPPRESS_HELP 25 | 26 | def url_request(url): 27 | req = urllib2.Request(url=url, headers={'User-agent': USER_AGENT}) 28 | try: 29 | f = urllib2.urlopen(req) 30 | except urllib2.URLError, e: 31 | if hasattr(e, 'reason'): 32 | print >> sys.stderr, "Error: %s" % e.reason 33 | elif hasattr(e,'code'): 34 | print >> sys.stderr, "Server Error: %d - %s. If using https then it could be a certificate problem." % (e.code, e.msg) 35 | print >> sys.stderr, "Attempted to connect to %s" % url 36 | sys.exit(1) 37 | return f.read() 38 | 39 | def get_session_nonce(): 40 | ''' Get the nonce value for submitting management forms. This is a uuid. ''' 41 | #Use one of the nodes to do a poor man's search through html using regex for a uuid 42 | uuids=re.findall(r'.*&nonce=([-a-f0-9]{36}).*',url_request(MANAGER_URL)) 43 | if len(uuids) <= 0:
44 | print >> sys.stderr, "Could not obtain nonce uuid. Double check balancer-manager url!\n%s" % MANAGER_URL 45 | sys.exit(1) 46 | return uuids[0] 47 | 48 | def build_urls_to_call(options,routes): 49 | '''build a list of urls to be called for disabling/enabling route workers''' 50 | uuid=get_session_nonce() 51 | page=url_request(MANAGER_URL) 52 | url_list=[] 53 | for route in routes: 54 | worker="" 55 | for line in page.split('\n'): 56 | r=re.compile('.*([^<]*).*>' + route + '<.*') 57 | if len(re.findall(r,line)) > 0: 58 | worker=re.findall(r,line)[0] 59 | if len(worker) > 0: 60 | if options.disable: 61 | url="%s?lf=1&ls=0&wr=%s&rr=&dw=%s&w=%s&b=%s&nonce=%s" % (MANAGER_URL,route,"Disable",urllib2.quote(worker),options.cluster,uuid) 62 | elif options.enable: 63 | url="%s?lf=1&ls=0&wr=%s&rr=&dw=%s&w=%s&b=%s&nonce=%s" % (MANAGER_URL,route,"Enable",urllib2.quote(worker),options.cluster,uuid) 64 | url_list.append(url) 65 | else: 66 | print >> sys.stderr, "Could not determine worker for route: %s" % route 67 | sys.exit(1) 68 | return url_list 69 | 70 | 71 | def main(): 72 | ''' main function for processing balancers based on options ''' 73 | 74 | #CONFIGURE OPTIONS 75 | usage="""\ 76 | Usage: %%prog [OPTIONS] -c CLUSTER NODEROUTE [NODEROUTE...] 77 | 78 | Description: 79 | %%prog can be used to change the balancing scheme on mod_proxy_balancer 80 | httpd using the balancer-manager web interface. It is recommended that 81 | you restrict the /balancer-manager/ url to localhost. 82 | 83 | Manager URL: 84 | %s 85 | 86 | Examples: 87 | %%prog --disable-routes -c my_cluster route2 route3""" % MANAGER_URL 88 | parser = OptionParser(usage=usage,version='%prog ' + VERSION) 89 | parser.add_option('', '--debug', dest='debug', help=SUPPRESS_HELP, action="store_true", default=False) 90 | managerProg_group = OptionGroup(parser, "Balancer Manager Options") 91 | managerProg_group.add_option('-c','--cluster',dest="cluster", help="specify the cluster to be managed", metavar="CLUSTER") 92 | managerProg_group.add_option('-d','--disable-routes', dest="disable",help="disable route for balancer",action="store_true",default=False) 93 | managerProg_group.add_option('-e','--enable-routes', dest="enable",help="enable route for balancer",action="store_true",default=False) 94 | parser.add_option_group(managerProg_group) 95 | parser.set_defaults(cluster=None) 96 | (options, routes) = parser.parse_args() 97 | 98 | 99 | 100 | 101 | 102 | #option checking 103 | if options.cluster == None: 104 | parser.error("OOPS - must specify cluster") 105 | if len(routes) <= 0: 106 | parser.error("OOPS - there are no routes to enable or disable") 107 | if options.disable and options.enable: 108 | parser.error("OOPS - --disable and --enable are incompatible options") 109 | elif not options.disable and not options.enable: 110 | parser.error("OOPS - --disable or --enable option required") 111 | 112 | #process options and start modifying the cluster using GET request URL calls to balancer-manager 113 | for url in build_urls_to_call(options,routes): 114 | url_request(url) 115 | 116 | 117 | 118 | if __name__ == '__main__': 119 | try: 120 | main() 121 | except KeyboardInterrupt: 122 | print "\ninterrupted."
123 | sys.exit(1) 124 | -------------------------------------------------------------------------------- /appserver-scripts/deploy_wrapper_standalone.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske (sag47@drexel.edu) 3 | #Sun Mar 17 01:05:41 EDT 2013 4 | #Red Hat Enterprise Linux Server release 5.5 (Tikanga) 5 | #Linux 2.6.18-194.11.4.el5 x86_64 6 | 7 | #warning: this script is still immature and designed to work 8 | #just with jboss. Unlike the deploy.sh script, which is 9 | #highly configurable for any app server, it will require 10 | #extensive modification for your build environment. 11 | 12 | #CI server to connect to get env.sh file 13 | ci_server="ci.server.com" 14 | #workspace path where the env.sh is located 15 | ci_workspace="/opt/jenkins_builder/jobs/some_job_name/workspace" 16 | #where do we pull the war files from? 17 | test_server="test.server.com" 18 | #this is where war files will be deployed to. From test to prod. 19 | prod_server="prod.server.com" 20 | #staging directory where all the files will be kept. This should be the same as in deploy.sh 21 | stage="/opt/staging" 22 | #prepend the subject of the email 23 | email_subject_prepend="$HOSTNAME: " 24 | #remove trailing slash from ci_workspace if there is one. 25 | ci_workspace="${ci_workspace%/}" 26 | 27 | 28 | if [ "${HOSTNAME}" = "${prod_server}" ];then 29 | cd "${stage}" 30 | 31 | #download env.sh from the CI server using sftp 32 | sftp ${ci_server} &> /dev/null <<EOF 33 | get ${ci_workspace}/env.sh 34 | EOF 35 | 36 | if [ ! -f ./env.sh ];then 37 | echo "env.sh failed to download from ${ci_server}" > /dev/stderr 38 | exit 1 39 | fi 40 | if ! ./env-security-check.sh < ./env.sh;then 41 | exit 1 42 | fi 43 | . ./env.sh 44 | #force variables for env.sh security purposes 45 | stage="/app/stage" 46 | appsprofile="/app/jboss/server/default" 47 | deploydir="deploy" 48 | libdir="lib" 49 | export debug dryrun email enable_colors force_restart lib_files timeout war_files 50 | for x in ${war_files};do 51 | sftp ${test_server}:"${appsprofile}/${deploydir}/${x}" ./ 52 | done 53 | for x in ${lib_files};do 54 | sftp ${test_server}:"${appsprofile}/${libdir}/${x}" ./ 55 | done 56 | #create a schedule if we're not doing an instant deployment 57 | /usr/bin/at ${TIME} < "\${dlog}";then 65 | exit 1 66 | fi 67 | . ./env.sh 68 | #force variables for env.sh security purposes 69 | appsprofile="/app/jboss/server/default" 70 | deploydir="deploy" 71 | libdir="lib" 72 | enable_colors="false" 73 | 74 | export debug dryrun email enable_colors force_restart lib_files timeout war_files 75 | 76 | logdir="/app/jboss/server/default/logs" 77 | logs=() 78 | logs+=("/app/jboss/server/default/log/server.log") 79 | logs+=("/app/jboss/server/default/log/boot.log") 80 | for x in \${war_files};do 81 | logs+=("\${logdir}/\${x/%.war/.log}") 82 | done 83 | echo "=== START DEPLOYMENT ===" >> "\${dlog}" 84 | date >> "\${dlog}" 85 | #append both stderr and stdout to the dlog file 86 | sudo "\${rootdir}"/deploy.sh >> "\${dlog}" 2>&1 87 | date >> "\${dlog}" 88 | echo "=== END DEPLOYMENT ===" >> "\${dlog}" 89 | cat "\${dlog}" | mail -s "${email_subject_prepend}deployment deploy.log" \${email} 90 | 91 | # 92 | # Mail Logs for analyzing 5 minutes after the deployment 93 | # 94 | 95 | function maillogs() { 96 | for log in \${logs[@]};do 97 | echo "sending mail deployment \$(basename \${log})" 98 | if [ -f "\${log}" ];then 99 | tail -n 5000 \${log} | mail -s "${email_subject_prepend}deployment \$(basename \${log})" \${email} 100 | else 101 | echo "ERROR \${log} file does not exist!" | mail -s "${email_subject_prepend}ERROR deployment \$(basename \${log})" \${email}
102 | fi 103 | done 104 | } 105 | if [ ! -z "\$email" ];then 106 | sleep 300 && maillogs 107 | fi 108 | EOF 109 | fi 110 | -------------------------------------------------------------------------------- /appserver-scripts/env-qa-test.sh: -------------------------------------------------------------------------------- 1 | # environment file only, no executable code; used to test against env-security-check.sh 2 | 3 | #quality testing 4 | #the following tests blank lines 5 | 6 | #the following line tests blank lines with a mix of tabs and spaces 7 | 8 | var="$(test)" 9 | something=(ls) 10 | echo "I'm bad!" 11 | TEST_TWO="farts" 12 | PATH="evil" 13 | USER="mrclean" 14 | first_arg="$1" 15 | #valid entry 16 | trest="three" 17 | something2="`ls`" 18 | -------------------------------------------------------------------------------- /appserver-scripts/env-security-check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske (sag47@drexel.edu) 3 | #Fri Mar 8 17:32:54 EST 2013 4 | #Red Hat Enterprise Linux Server release 6.3 (Santiago) 5 | #Linux 2.6.32-279.14.1.el6.x86_64 6 | #this runs an execution security check on the env.sh file. 7 | #there should only be static environment variables specified in the env.sh file 8 | # 9 | #To run a QA test on this file run the following command 10 | # ./env-security-check.sh < ./env-qa-test.sh 11 | 12 | #allow users to export environment variables; if set to false then the export command will fail the security check. 13 | allow_variable_exporting="true" 14 | 15 | #exit code, this may get modified 16 | result=0 17 | 18 | while read line;do 19 | if echo "${line}" | grep '^\s*#' &> /dev/null;then 20 | #check for comments 21 | continue 22 | elif echo "${line}" | grep '^\s*$' &> /dev/null;then 23 | #check for blank line 24 | continue 25 | elif echo "${line}" | grep '$(' &> /dev/null || echo "${line}" | grep '`' &> /dev/null;then 26 | echo 'security failure: $(list) or `list` command substitution execution is not allowed in env.sh' > /dev/stderr 27 | result=1 28 | elif echo "${line}" | grep '(' &> /dev/null;then 29 | echo 'security failure: (list) subshell execution is not allowed in env.sh' > /dev/stderr 30 | result=1 31 | elif ! echo "${line}" | grep '^\s*[a-zA-Z_0-9]*=' &> /dev/null;then 32 | if "${allow_variable_exporting}";then 33 | if ! echo "${line}" | grep '^export ' &> /dev/null;then 34 | echo 'security failure: command execution detected.' > /dev/stderr 35 | result=1 36 | fi 37 | else 38 | echo 'security failure: command execution detected.'
> /dev/stderr 39 | result=1 40 | fi 41 | elif echo "${line}" | grep '$[0-9]' &> /dev/null;then 42 | echo 'security failure: assignment of shell script arguments to variables is not allowed in env.sh' > /dev/stderr 43 | result=1 44 | fi 45 | #system_variables list obtained using following command 46 | #env | grep '^[a-zA-Z_]*=' | cut -d= -f1 | tr '\n' ' ' | sed 's/\(.*\)/\1\n/' 47 | system_variables="appsprofile backupdir deploydir libdir second_stage stage HOSTNAME SHELL TERM HISTSIZE USER JAVA_OPTS LS_COLORS TERMCAP PATH MAIL STY PWD JAVA_HOME LANG HISTCONTROL HOME SHLVL LOGNAME CVS_RSH WINDOW LESSOPEN G_BROKEN_FILENAMES _ OLDPWD" 48 | for var in ${system_variables};do 49 | if echo "${line}" | grep "^\s*${var}=" &> /dev/null;then 50 | echo "security failure: assignment of system variable ${var} is not allowed in env.sh" > /dev/stderr 51 | result=1 52 | fi 53 | done 54 | done 55 | if [ "${result}" -eq "1" ];then 56 | echo 'security notice: env.sh is not a shell script, only an environment file.' > /dev/stderr 57 | else 58 | echo "env.sh security check pass" 59 | fi 60 | exit ${result} 61 | -------------------------------------------------------------------------------- /appserver-scripts/schedule_deploy: -------------------------------------------------------------------------------- 1 | # Created by Sam Gleske 2 | # 3 | # Deployment scheduler - this handles the details of an after-hours deployment 4 | # must be compatible with /bin/sh 5 | # 6 | # Dependencies: 7 | # deploy.sh - my application server deployment script 8 | # 9 | # Usage: 10 | # Schedule a job to run at 5:30pm 11 | # at 17:30 < schedule_deploy 12 | # List and then delete a scheduled job number 13 | # at -l 14 | # atrm 2 15 | # 16 | 17 | # 18 | # Configuration Variables 19 | # 20 | rootdir="/opt/staging" 21 | dlog="$rootdir/deploy.log" 22 | email="some_email@domain.com" 23 | logs="/opt/jboss/server/default/log/server.log 24 | /opt/jboss/server/default/log/boot.log" 25 | 26 | # 27 | # Deployment Logic 28 | # 29 | 30 | echo "=== START DEPLOYMENT ===" >> $dlog 31 | date >> $dlog 32 | $rootdir/deploy.sh >> $dlog 33 | date >> $dlog 34 | echo "=== END DEPLOYMENT ===" >> $dlog 35 | cat $dlog | mail -s "deployment deploy.log" $email 36 | 37 | # 38 | # Mail Logs for analyzing 5 minutes after the deployment 39 | # 40 | 41 | maillogs() { 42 | for log in $logs;do 43 | echo "sending mail deployment $(basename $log)" 44 | if [ -f "$log" ];then 45 | tail -n 5000 $log | mail -s "deployment $(basename $log)" $email 46 | else 47 | echo "ERROR $log file does not exist!" | mail -s "ERROR deployment $(basename $log)" $email
48 | fi 49 | done 50 | } 51 | 52 | sleep 300 && maillogs 53 | -------------------------------------------------------------------------------- /appserver-scripts/sslTest.java: -------------------------------------------------------------------------------- 1 | //javac 1.6.0_26 2 | import java.net.*; 3 | import java.io.*; 4 | import java.security.*; 5 | import javax.net.ssl.*; 6 | 7 | public class sslTest { 8 | public static void main(String[] args) { 9 | if (args.length < 1) { 10 | System.out.println("Usage: java sslTest somehost [someport]"); 11 | return; 12 | } 13 | 14 | int port = 443; // default https port 15 | if (args.length > 1) { 16 | port = Integer.parseInt(args[1]); 17 | } 18 | 19 | 20 | String host = args[0]; 21 | 22 | try{ 23 | Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider()); 24 | SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault(); 25 | 26 | SSLSocket socket = (SSLSocket) factory.createSocket(host, port); 27 | SSLSession session = socket.getSession(); 28 | System.out.println("Protocol is " + session.getProtocol()); 29 | // just close it 30 | System.out.println("Closing connection."); 31 | socket.close(); 32 | }catch (IOException e) { 33 | System.err.println(e); 34 | } 35 | } 36 | } 37 | -------------------------------------------------------------------------------- /bin/README.md: -------------------------------------------------------------------------------- 1 | # How do I use some of these scripts? 2 | 3 | ---- 4 | ## man2pdf 5 | 6 | [man2pdf](man2pdf) is a simple script which converts man pages to PDF format. This is useful for reading man pages on the go in an e-book reader. The permissions of the script should be 755. 7 | 8 | Example, converting the bash man page into PDF. 9 | 10 | chmod 755 man2pdf 11 | ./man2pdf bash 12 | 13 | ---- 14 | ## clusterssh helper scripts 15 | 16 | I use the following helper scripts to maintain the `/etc/clusters` file: 17 | 18 | * `knownhosts.sh` 19 | * `missing_from_all_clusters.sh` 20 | * `servercount` 21 | * `sort_clusters` 22 | 23 | I maintain my `/etc/clusters` file with a standardized naming convention. The first line has an `All_clusters` alias. Its only purpose is to be an alias for all aliases in the `/etc/clusters` file. From there every alias starts with one of two standardized prefixes: `cluster_` or `host_`. 24 | 25 | Here is a sample `/etc/clusters` file using that naming convention. 26 | 27 | All_clusters cluster_website cluster_dns host_Config_management 28 | 29 | cluster_website host1.domain.com host2.domain.com host3.domain.com 30 | 31 | cluster_dns ns1.domain.com ns2.domain.com 32 | 33 | host_Config_management someconfigmanagement.domain.com 34 | 35 | `knownhosts.sh` - This script reads a list of host names from stdin, queries the ssh fingerprint, and checks to see if that known host exists in `~/.ssh/known_hosts`. If it exists then it outputs nothing. If any are missing (or possibly incorrect) then it will output only the problem hosts. If no hosts have any problems then it exits with a proper success exit code. This can be used with `servercount`. 36 | 37 | `missing_from_all_clusters.sh` - This goes through the `/etc/clusters` file for all of the aliases and checks to make sure that all aliases are added to `All_clusters`. If there is no alias then it will output the problem entry. There will be no output if all entries are properly accounted for.
38 | 39 | `servercount` - This goes through the `/etc/clusters` file and displays a list of host names only (with no aliases). This will consist of one host per line. 40 | 41 | `sort_clusters` - As you keep adding aliases to `/etc/clusters` you will eventually need to alphabetically sort the aliases in the file. This will sort the aliases. It also sorts the list of aliases on the `All_clusters` line at the top of the file. 42 | 43 | ### Example usage 44 | 45 | Get a head count of the number of servers in the clusters file. 46 | 47 | servercount | wc -l 48 | 49 | Check that there aren't any bad `known_hosts` fingerprints for clusters host names. 50 | 51 | servercount | knownhosts.sh 52 | 53 | Generate a list of ip addresses associated with all of the hosts. 54 | 55 | servercount | while read server;do dig +short ${server};done 56 | servercount | while read server;do echo "$(dig +short ${server}) ${server}";done 57 | 58 | The remaining scripts are fairly standalone. 59 | 60 | ---- 61 | ## wasted-ram-updates.py 62 | 63 | Ever hear about Linux being able to update without ever needing to be restarted (with the exception of a few critical packages)? Ever wonder how to figure out which services need to actually be restarted after a large update of hundreds of packages? With `wasted-ram-updates.py` you no longer need to wonder. 64 | 65 | `wasted-ram-updates.py` helps to resolve these questions by showing which running processes are using files in memory that have been deleted on disk. This lets you know that there is likely an outdated library being used. If you restart the daemon associated with this process then it will use the updated copy of the library.
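If you want a feel for the underlying idea, you can approximate the check by hand from a shell. The following is only a rough sketch of the concept (the script itself groups and summarizes the results far more usefully):

    # crude approximation: list PIDs that still map files deleted on disk
    grep -l '(deleted)' /proc/[0-9]*/maps 2>/dev/null | cut -d/ -f3 | sort -un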
66 | 67 | ### List of packages which require a restart 68 | 69 | Over time I've encountered a small list of critical packages which require a restart of the Linux OS. Here's a non-comprehensive list of which I'm aware. Feel free to open an [issue](https://github.com/sag47/drexel-university/issues) letting me know of another package which requires a system reboot. 70 | 71 | * dbus (used by `/sbin/init` which is pid 1) 72 | * glibc 73 | * kernel 74 | 75 | Other than that you should be able to simply restart the associated service. 76 | 77 | _Please note: some programs regularly create and delete temporary files which will show up in `wasted-ram-updates.py`. This is normal and does not require a service restart for this case._ 78 | 79 | ### Example usage 80 | 81 | Just display an overall summary 82 | 83 | wasted-ram-updates.py summary 84 | 85 | Organize the output by deleted file handle (I've found this to be less useful for accomplishing a system update). 86 | 87 | wasted-ram-updates.py 88 | 89 | Organize the output by process ID and show a hierarchy of deleted file handles as children to the PIDs. This is the most useful command for determining which services to restart. 90 | 91 | wasted-ram-updates.py pids 92 | 93 | ---- 94 | ## gitar.sh - A simple deduplication and compression script 95 | This is an [original idea by Tom Dalling](http://tomdalling.com/blog/random-stuff/using-git-for-hacky-archive-deduplication/). 96 | 97 | [gitar.sh](https://github.com/samrocketman/drexel-university/blob/main/bin/gitar.sh) is a simple deduplication and compression script. It uses git to deduplicate data and then other compression algorithms to compress data. This short script was made for when you have to compress a large number of duplicated files. It also comes with a handy little utility, `gintar.sh`, for decompressing the archive on the receiving end. See the Usage section for more information. 98 | 99 | `gitar.sh` assumes you do not have [lrzip](https://github.com/ckolivas/lrzip) readily available. lrzip can compress better and deduplicate better than this script. Also, this script has known limitations which do not apply to lrzip, such as not being able to compress git repositories. gitar.sh is meant as a quick hack 'n slash dedupe and compress. See the benchmarks for when I tested gitar.sh against other compression methods. 100 | 101 | ### Compression options 102 | You can set different compression options with the `compression_type` environment variable. The numbers are ordered from minimum compression to max compression. 103 | 104 | 1. no optimization, just git deduplication+tar 105 | 2. deduplication+optimized git+tar compression 106 | 3. deduplication+optimized git+tar+gzip compression 107 | 4. deduplication+optimized git+tar+bzip2 compression 108 | 5. deduplication+optimized git+tar+lzma compression 109 | 110 | Sample bash command for setting compression. 111 | 112 | export compression_type=3 113 | 114 | ### Known Limitations/Won't Fix 115 | * Not able to compress git repositories or nested git repositories. 116 | * If you're using `gitar.sh` on a directory that contains wholly unique data and no duplicates then the result will actually be slightly larger than using `tar` with `bzip2` or `gzip` due to the metadata of `git`. 117 | * When extracting a `.gitar` archive using `tar` it is possible to destroy a git repository in the current working directory if one previously exists. 118 | * It is assumed you will be compressing a single directory. If you need to compress multiple directories or files then place them all inside of a directory to be gitar'd. 119 | * You must compress a directory located in the current directory or located in a child. If the path you're compressing is located above the current working directory in a parent directory then this will fail because git can't do that. 120 | 121 | ### Usage 122 | Simply compress a directory. 123 | 124 | gitar.sh "somedirectory" 125 | 126 | Subshell the compression of an archive and set the compression type to 2 127 | 128 | (export compression_type=2; gitar.sh "somedirectory") 129 | 130 | Decompress a `gitar` archive. 131 | 132 | tar -xf "somefile.gitar" && ./gintar.sh
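For the curious, the heart of the trick can be sketched in a few lines of bash. This is a simplified illustration of the idea only, not the actual script (`gitar.sh` adds the compression types described above, safety checks, and the bundled `gintar.sh` extraction helper):

    # deduplicate: git stores identical file contents only once
    git init "somedirectory"
    (cd "somedirectory" && git add -A && git commit -q -m archive)
    # repack the objects into a single delta-compressed pack
    (cd "somedirectory" && git gc --aggressive)
    # archive only the git object database; on the receiving end,
    # extracting it and running `git reset --hard` restores the files
    tar -cf "somedirectory.gitar" -C "somedirectory" .git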
133 | 134 | ### Benchmarks of gitar.sh hack vs other utilities 135 | 136 | #### Environment 137 | 138 | * [Tested Repository](https://github.com/tomdalling/opengl-series) with git directories removed. 139 | * 3rd Gen Intel® Core™ i7-3770 (Quad Core, 3.40GHz, 8MB L2) 140 | * 8GB RAM, NON-ECC, 1600MHZ DDR3,2DIMM 141 | * 512GB Samsung 840 Pro Series 2.5" SSD 142 | 143 | Some file system stats for my system using `dd`. 144 | 145 | #READ RATE 146 | $ dd if=./coursematerial.tar.gz of=/dev/null 147 | 1386091+1 records in 148 | 1386091+1 records out 149 | 709678734 bytes (710 MB) copied, 4.02507 s, 176 MB/s 150 | #WRITE RATE 151 | $ dd if=/dev/zero of=./test2 152 | 827170+0 records in 153 | 827170+0 records out 154 | 423511040 bytes (424 MB) copied, 18.1395 s, 23.3 MB/s 155 | 156 | Now on to the good stuff.... 157 | 158 | #### Compression ratios 159 | For the ratios I used max compression for all utilities (`gzip -9`, `bzip2 -9`, `compression_type=5` for `gitar.sh`, and `lrztar -z`, respectively). 160 | 161 | Size Name Type % of original size 162 | 132M opengl-series Uncompressed 100.0% 163 | 95M opengl-series.tar tar 72.0% 164 | 30M opengl-series.tgz tar+gzip 22.7% 165 | 27M opengl-series.tbz2 tar+bzip2 20.5% 166 | 5.8M opengl-series.gitar git+tar+lzma 4.4% 167 | 4.4M opengl-series.tar.lrz tar+lrzip 3.3% 168 | 169 | Compression ratios for different gitar levels. 170 | 171 | Size Name Type % of original size 172 | 132M opengl-series Uncompressed 100.0% 173 | 95M 0-opengl-series.tar tar 72.0% 174 | 9.0M 1-opengl-series.gitar.dedupeonly gitar-1 6.8% 175 | 6.3M 2-opengl-series.gitar.optimized gitar-2 4.8% 176 | 5.9M 3-opengl-series.gitar.gzip gitar-3 4.5% 177 | 5.9M 4-opengl-series.gitar.bzip2 gitar-4 4.5% 178 | 5.8M 5-opengl-series.gitar.lzma gitar-5 4.4% 179 | 180 | #### Compression times 181 | I used the `time` utility and took an average of 3 runs for each. 182 | 183 | Name Type real value (from time command) 184 | opengl-series Uncompressed 0m0.000s (no command was executed) 185 | opengl-series.tar tar 0m0.707s 186 | opengl-series.tgz tar+gzip 0m7.200s 187 | opengl-series.tbz2 tar+bzip2 0m11.521s 188 | opengl-series.gitar git+tar+lzma 0m5.292s 189 | opengl-series.tar.lrz tar+lrzip 0m24.338s 190 | 191 | Compression times for different gitar levels. 192 | 193 | Name Type real value (from time command) 194 | opengl-series Uncompressed 0m0.000s (no command was executed) 195 | opengl-series.tar tar 0m0.707s 196 | opengl-series.gitar gitar-1 0m1.077s 197 | opengl-series.gitar gitar-2 0m3.818s 198 | opengl-series.gitar gitar-3 0m4.000s 199 | opengl-series.gitar gitar-4 0m4.632s 200 | opengl-series.gitar gitar-5 0m5.292s 201 | 202 | #### Benchmark Conclusion 203 | If you want the absolute best compression ratio with the best deduplication then `lrzip` is the utility for you. However, based on my benchmarks `gitar.sh` achieved a similar ratio in less than a quarter of the time. `gitar.sh` is a little less user friendly and there are some known drawbacks to using it. However, if none of these known issues are a problem then `gitar.sh` is great for fast, highly compressed archives. When in doubt about any of the known issues or you're looking for the best compression ratios then `lrzip` is the best candidate. 204 | -------------------------------------------------------------------------------- /bin/SimpleHTTPSServer.py: -------------------------------------------------------------------------------- 1 | ''' 2 | SimpleHTTPSServer.py - simple HTTP server supporting SSL. 3 | 4 | You can copy this to /usr/lib/python2.7/ and then run the following command 5 | python -m SimpleHTTPSServer 6 | It will then serve directory listing web pages for $PWD. 7 | 8 | - replace fpem with the location of your .pem server file. 9 | - the default port is 8000. 10 | 11 | usage: python SimpleHTTPSServer.py 12 | ''' 13 | import os,sys 14 | from SocketServer import BaseServer 15 | from BaseHTTPServer import HTTPServer 16 | from SimpleHTTPServer import SimpleHTTPRequestHandler 17 | from OpenSSL import SSL 18 | sys.path.reverse() 19 | sys.path.append("/home/sam/sandbox/simple python ssl server/Python 2.7") 20 | sys.path.reverse() 21 | import socket 22 | 23 | 24 | class SecureHTTPServer(HTTPServer): 25 | def __init__(self, server_address, HandlerClass): 26 | BaseServer.__init__(self, server_address, HandlerClass) 27 | ctx = SSL.Context(SSL.SSLv23_METHOD) 28 | #server.pem's location (containing the server private key and 29 | #the server certificate).
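#If you do not yet have a combined .pem file, a self-signed one
#suitable for testing can be generated with a standard openssl
#command, for example:
# openssl req -new -x509 -keyout server.pem -out server.pem -days 365 -nodes
#(example only; use a properly signed certificate in production)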
30 | #fpem = './localhost.pem' 31 | fpem = '/home/sam/certs/farcry.irt.drexel.edu/farcry.irt.drexel.edu.pem' 32 | ctx.use_privatekey_file (fpem) 33 | ctx.use_certificate_file(fpem) 34 | self.socket = SSL.Connection(ctx, socket.socket(self.address_family,self.socket_type)) 35 | self.server_bind() 36 | self.server_activate() 37 | def shutdown_request(self,request): request.shutdown() 38 | 39 | 40 | class SecureHTTPRequestHandler(SimpleHTTPRequestHandler): 41 | def setup(self): 42 | self.connection = self.request 43 | self.rfile = socket._fileobject(self.request, "rb", self.rbufsize) 44 | self.wfile = socket._fileobject(self.request, "wb", self.wbufsize) 45 | 46 | 47 | def test(HandlerClass = SecureHTTPRequestHandler, 48 | ServerClass = SecureHTTPServer): 49 | server_address = ('', 8000) # (address, port) 50 | httpd = ServerClass(server_address, HandlerClass) 51 | sa = httpd.socket.getsockname() 52 | print "Serving HTTPS on", sa[0], "port", sa[1], "..." 53 | print "Use ^C to shut down server." 54 | try: 55 | httpd.serve_forever() 56 | except KeyboardInterrupt: 57 | print "Shutting down server..." 58 | 59 | 60 | if __name__ == '__main__': 61 | test() 62 | -------------------------------------------------------------------------------- /bin/asm_translate.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #Sam Gleske 3 | #Thu Mar 27 14:48:54 EDT 2014 4 | #Ubuntu 12.04.4 LTS \n \l 5 | #Linux 3.8.0-37-generic x86_64 6 | #Python 2.7.3 7 | #Translates assembly comments from cz to en. 8 | #Requires google translate python module - https://github.com/terryyin/google-translate-python 9 | #Discussion - https://www.linuxquestions.org/questions/linux-newbie-8/script-to-translate-comments-to-english-4175499642/ 10 | 11 | import argparse 12 | from os.path import isfile 13 | from sys import argv 14 | from sys import exit 15 | from sys import stdin 16 | from translate import Translator 17 | 18 | #select language defaults or use arguments 19 | DEFAULT_FROM_LANG="cz" 20 | DEFAULT_TO_LANG="en" 21 | 22 | def main(asmfile="",from_lang="cz",to_lang="en"): 23 | if len(asmfile) > 0 and not isfile(asmfile): 24 | print "%s is not a file!" % asmfile 25 | exit(1) 26 | tl=Translator(from_lang=from_lang,to_lang=to_lang) 27 | #read from stdin or a file 28 | if len(asmfile) == 0: 29 | data=stdin.read() 30 | else: 31 | with open(asmfile,'r') as f: 32 | data=f.read() 33 | #try translating comments otherwise simply output the line 34 | for x in data.split('\n'): 35 | parts=x.split(';',1) 36 | if len(parts) > 1: 37 | parts[1]=tl.translate(parts[1]) 38 | print ';'.join(parts) 39 | else: 40 | print x 41 | 42 | if __name__ == '__main__': 43 | try: 44 | parser = argparse.ArgumentParser(description='Translate assembly code comments. From %s to %s by default.' % (DEFAULT_FROM_LANG,DEFAULT_TO_LANG)) 45 | parser.add_argument(dest="asmfile",nargs="?",default="",type=str,help="Optional asm file to read. Otherwise read from stdin.") 46 | parser.add_argument("--from-lang",dest="from_lang",default=DEFAULT_FROM_LANG,help="Translate from language.") 47 | parser.add_argument("--to-lang",dest="to_lang",default=DEFAULT_TO_LANG,help="Translate to language.") 48 | args = parser.parse_args() 49 | main(asmfile=args.asmfile,from_lang=args.from_lang,to_lang=args.to_lang) 50 | except KeyboardInterrupt,e: 51 | print "User aborted." 
52 | exit(1) 53 | -------------------------------------------------------------------------------- /bin/bandmeter.sh: -------------------------------------------------------------------------------- 1 | #Author szboardstretcher @ linuxquestions.org 2 | #Contributor sag47 @ linuxquestions.org 3 | #Wed Sep 18 09:16:15 EDT 2013 4 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 5 | 6 | #DESCRIPTION 7 | # A simple bandwidth rate usage script for a device. This script 8 | # will only output changes in bandwidth and samples at every second. 9 | # If it doesn't output at all then it is assumed the value is the same 10 | # for every second since the last output. 11 | #USAGE 12 | # ./bandmeter.sh eth0 13 | 14 | #do some error checking 15 | if [ -z "${1}" ];then 16 | echo "Device not specified!" 1>&2 17 | echo "Usage: $(basename ${0}) device" 1>&2 18 | exit 1 19 | elif [ ! -e "/sys/class/net/${1}" ];then 20 | echo "Error: The device you specified does not exist!" 1>&2 21 | echo "List of devices:" 1>&2 22 | (cd /sys/class/net/ && ls -1) | while read device;do 23 | echo " ${device}" 1>&2 24 | done 25 | echo "Usage: $(basename ${0}) device" 1>&2 26 | exit 1 27 | fi 28 | R1=0 29 | T1=0 30 | while true;do 31 | #R2 and T2 are now the old values from the last second 32 | R2="${R1}" 33 | T2="${T1}" 34 | #date of right now in seconds since 1970-01-01 00:00:00 UTC 35 | DATE="$(date +%s)" 36 | R1="$(cat /sys/class/net/${1}/statistics/rx_bytes)" 37 | T1="$(cat /sys/class/net/${1}/statistics/tx_bytes)" 38 | TBPS="$(expr ${T1} - ${T2})" 39 | RBPS="$(expr ${R1} - ${R2})" 40 | TKBPS="$(expr ${TBPS} / 1024)" 41 | RKBPS="$(expr ${RBPS} / 1024)" 42 | current_message="tx ${1}: ${TKBPS} kB/s rx ${1}: ${RKBPS} kB/s" 43 | #If the last message is not the same as the current message then output the current message 44 | if [ ! "${last_message}" = "${current_message}" ];then 45 | echo "${DATE};${current_message}" 46 | fi 47 | last_message="${current_message}" 48 | sleep 1 49 | done 50 | -------------------------------------------------------------------------------- /bin/cert: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 4 | #DESCRIPTION: 5 | # Create and renew X.509 SSL certificates for Drexel University, Medaille College, and Cabrini College 6 | 7 | #debugging enable 8 | set -e 9 | 10 | PROGNAME="${0##*/}" 11 | PROGVERSION="0.1.1" 12 | 13 | #program default variables 14 | create=false 15 | renew=false 16 | drexel_s=false 17 | medaille_s=false 18 | cabrini_s=false 19 | 20 | # 21 | # ARGUMENT HANDLING 22 | # 23 | 24 | #Short options are one letter. 
If an argument follows a short opt then put a colon (:) after it 25 | SHORTOPTS="hvc:r:" 26 | LONGOPTS="help,version,create:,renew:,du,me,cc" 27 | usage() 28 | { 29 | cat <&2 60 | exit 1 61 | ;; 62 | -v|--version) 63 | echo "${PROGNAME} ${PROGVERSION}" 1>&2 64 | exit 1 65 | ;; 66 | -c|--create) 67 | create=true 68 | cname="${2}" 69 | shift 2 70 | ;; 71 | -r|--renew) 72 | renew=true 73 | cname="${2}" 74 | shift 2 75 | ;; 76 | --du) 77 | drexel_s=true 78 | ;; 79 | --me) 80 | medaille_s=true 81 | ;; 82 | --cc) 83 | cabrini_s=true 84 | ;; 85 | --) 86 | shift 87 | break 88 | ;; 89 | *) 90 | shift 91 | ;; 92 | esac 93 | done 94 | 95 | # 96 | # Program functions 97 | # 98 | 99 | function preflight(){ 100 | STATUS=0 101 | if [ "${create}" = "true" -a "${renew}" = "true" ] || [ "${create}" = "false" -a "${renew}" = "false" ];then 102 | echo "Must choose --create or --renew." 103 | STATUS=1 104 | elif [ -z "${cname}" ];then 105 | echo "CNAME may not be null." 106 | STATUS=1 107 | fi 108 | if ${renew} && [ ! -f "${cname}.key" ];then 109 | echo "${cname}.key does not exist!" 110 | echo "Perhaps you need to --create a new CSR?" 111 | STATUS=1 112 | fi 113 | return ${STATUS} 114 | } 115 | 116 | # 117 | # Main execution 118 | # 119 | 120 | #Run a preflight check on options for compatibility. 121 | if ! preflight 1>&2;then 122 | echo "Command aborted due to previous errors." 1>&2 123 | echo "Perhaps try --help option." 1>&2 124 | exit 1 125 | fi 126 | 127 | if ${drexel_s};then 128 | #drexel subject 129 | SUBJECT="/C=US/ST=Pennsylvania/L=Philadelphia/O=Drexel University/OU=IRT/CN=${cname}" 130 | elif ${medaille_s};then 131 | #medaille subject 132 | SUBJECT="/C=US/ST=New York/L=Buffalo/O=Medaille College/OU=IRT/CN=${cname}" 133 | elif ${cabrini_s};then 134 | #cabrini subject 135 | SUBJECT="/C=US/ST=Pennsylvania/L=Radnor/O=Cabrini College/OU=IRT/CN=${cname}" 136 | fi 137 | DEFAULT_SUBJECT="/C=US/ST=Pennsylvania/L=Philadelphia/O=Drexel University/OU=IRT/CN=${cname}" 138 | SUBJECT="${SUBJECT:-${DEFAULT_SUBJECT}}" 139 | 140 | if ${create};then 141 | #new certificate 142 | openssl req -out ${cname}.csr -new -newkey rsa:2048 -nodes -subj "${SUBJECT}" -keyout ${cname}.key 143 | echo "${cname}.csr has been generated!" 1>&2 144 | elif ${renew};then 145 | #renew certificate 146 | openssl req -out "${cname}.csr" -new -subj "${SUBJECT}" -key "${cname}.key" 147 | echo "${cname}.csr has been generated!" 1>&2 148 | else 149 | echo "You should never see this message!" 1>&2 150 | exit 1 151 | fi 152 | openssl req -noout -text -in "${cname}.csr" 153 | -------------------------------------------------------------------------------- /bin/check_courses: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Sam Gleske 3 | # Script for Drexel CS265 class 4 | # GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu) 5 | # diff (GNU diffutils) 3.0 6 | # Linux tux64-14.cs.drexel.edu 3.2.0-40-generic x86_64 7 | 8 | #Generate a class list file. If there are differences between 9 | #the last run and the current run then email the student there 10 | #are new courses. 11 | #Note: First run will always email student. 12 | 13 | #The script should be 755 permissions. 14 | 15 | #Add it to cron. 16 | #crontab -e 17 | #0 12 * * * /home/user/bin/check_courses 18 | 19 | #remember which machine you create the cron job on. Cron jobs are not replicated 20 | #to all of the tux machines. If you don't want to remember then run the following 21 | #command in vim at the top of this script. 
It will insert the system you're on. 22 | # :r!uname -nrms 23 | 24 | #email address you wish to be notified of new assignment dates 25 | email_address="your@email.com" 26 | #temporary directory where you wish current assignments to be published. One could also choose ~/public_html so the assignments can be accessed from the web. 27 | tmp_dir="$HOME/tmp" 28 | #Name of the persistent file containing assignment due dates. 29 | assignment_file="current_assignments.txt" 30 | #The submit_cli command and arguments 31 | submit_cli_command="/usr/local/bin/submit_cli --list -ccs265" 32 | 33 | #use absolute paths for binaries 34 | if [ ! -d "${tmp_dir}" ];then 35 | /bin/mkdir "${tmp_dir}" 36 | fi 37 | 38 | ${submit_cli_command} > "${tmp_dir}/tempclasslist" 39 | 40 | /usr/bin/diff "${tmp_dir}/tempclasslist" "${tmp_dir}/${assignment_file}" > /dev/null 2>&1 41 | if [ "$?" -ne "0" ];then 42 | /usr/sbin/sendmail $email_address << EOF 43 | Subject: New CS265 assignments available 44 | New assignments listed in submit_cli. 45 | $(${submit_cli_command}) 46 | EOF 47 | /bin/mv -f "${tmp_dir}/tempclasslist" "${tmp_dir}/${assignment_file}" 48 | fi 49 | /bin/rm -f "${tmp_dir}/tempclasslist" 50 | -------------------------------------------------------------------------------- /bin/checkip: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Author: Sam Gleske 3 | #Origin: https://github.com/sag47/drexel-university/ 4 | #Description: 5 | # Check your public IP Address rather than your private. 6 | curl http://checkip.dyndns.org/ 2> /dev/null | sed 's#.*Current IP Address: \(.*\)</body>.*#\1#' 7 | -------------------------------------------------------------------------------- /bin/fpaste: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | '''fpaste - a cli frontend for the fpaste.org pastebin''' 3 | # 4 | # Copyright 2008, 2010 Fedora Unity Project (http://fedoraunity.org) 5 | # Author: Jason 'zcat' Farrell 6 | # 7 | # This program is free software: you can redistribute it and/or modify 8 | # it under the terms of the GNU General Public License as published by 9 | # the Free Software Foundation, either version 3 of the License, or 10 | # (at your option) any later version. 11 | # 12 | # This program is distributed in the hope that it will be useful, 13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 | # GNU General Public License for more details. 16 | # 17 | # You should have received a copy of the GNU General Public License 18 | # along with this program. If not, see <http://www.gnu.org/licenses/>.
19 | VERSION = '0.3.7' 20 | USER_AGENT = 'fpaste/' + VERSION 21 | SET_DESCRIPTION_IF_EMPTY = 1 # stdin, clipboard, sysinfo 22 | FPASTE_URL = 'http://fpaste.org/' 23 | 24 | import os, sys, urllib, urllib2, subprocess 25 | from optparse import OptionParser, OptionGroup, SUPPRESS_HELP 26 | 27 | def is_text(text, maxCheck = 100, pctPrintable = 0.75): 28 | '''returns true if maxCheck evenly distributed chars in text are >= pctPrintable% text chars''' 29 | # e.g.: /bin/* ranges between 19% and 42% printable 30 | from string import printable 31 | nchars = len(text) 32 | if nchars == 0: 33 | return False 34 | ncheck = min(nchars, maxCheck) 35 | inc = float(nchars)/ncheck 36 | i = 0.0 37 | nprintable = 0 38 | while i < nchars: 39 | if text[int(i)] in printable: 40 | nprintable += 1 41 | i += inc 42 | pct = float(nprintable) / ncheck 43 | return (pct >= pctPrintable) 44 | 45 | 46 | def confirm(prompt = "OK?"): 47 | '''prompt user for yes/no input and return True or False''' 48 | prompt += " [y/N]: " 49 | try: 50 | ans = raw_input(prompt) 51 | except EOFError: # already read sys.stdin and hit EOF 52 | # rebind sys.stdin to user tty (unix-only) 53 | try: 54 | mytty = os.ttyname(sys.stdout.fileno()) 55 | sys.stdin = open(mytty) 56 | ans = raw_input() 57 | except: 58 | print >> sys.stderr, "could not rebind sys.stdin to %s after sys.stdin EOF" % mytty 59 | return False 60 | 61 | if ans.lower().startswith("y"): 62 | return True 63 | else: 64 | return False 65 | 66 | 67 | def paste(text, options): 68 | '''send text to fpaste.org and return the URL''' 69 | import re 70 | if not text: 71 | print >> sys.stderr, "No text to send." 72 | return False 73 | 74 | params = urllib.urlencode({'title': options.desc, 'author': options.nick, 'lexer': options.lang, 'content': text, 'expire_options': options.expires}) 75 | pasteSizeKiB = len(params)/1024.0 76 | 77 | if pasteSizeKiB >= 512: # 512KiB appears to be the current hard limit (20110404); old limit was 16MiB 78 | print >> sys.stderr, "WARNING: your paste size (%.1fKiB) is very large and may be rejected by the server. A pastebin is NOT a file hosting service!" % (pasteSizeKiB) 79 | # verify that it's most likely *non-binary* data being sent. 80 | if not is_text(text): 81 | print >> sys.stderr, "WARNING: your paste looks a lot like binary data instead of text." 82 | if not confirm("Send binary data anyway?"): 83 | return False 84 | 85 | req = urllib2.Request(url=FPASTE_URL, data=params, headers={'User-agent': USER_AGENT}) 86 | if options.proxy: 87 | if options.debug: 88 | print >> sys.stderr, "Using proxy: %s" % options.proxy 89 | req.set_proxy(options.proxy, 'http') 90 | 91 | print >> sys.stderr, "Uploading (%.1fKiB)..." % pasteSizeKiB 92 | 93 | try: 94 | f = urllib2.urlopen(req) 95 | except urllib2.URLError, e: 96 | if hasattr(e, 'reason'): 97 | print >> sys.stderr, "Error Uploading: %s" % e.reason 98 | elif hasattr(e, 'code'): 99 | print >> sys.stderr, "Server Error: %d - %s" % (e.code, e.msg) 100 | if options.debug: 101 | print f.read() 102 | return False 103 | 104 | url = f.geturl() 105 | if re.match(FPASTE_URL + '?.+', url): 106 | return url 107 | elif urllib2.urlparse.urlsplit(url).path == '/static/limit/': 108 | # instead of returning a 500 server error, fpaste.org now returns "http://fedoraunity.org/static/limit/" if paste too large 109 | print >> sys.stderr, "Error: paste size (%.1fKiB) exceeded server limit. %s" % (pasteSizeKiB, url) 110 | return False 111 | else: 112 | print >> sys.stderr, "Invalid URL '%s' returned. This should not happen. 
Use --debug to see server output" % url 113 | if options.debug: 114 | print f.read() 115 | return False 116 | 117 | 118 | def sysinfo(show_stderr = False, show_successful_cmds = True, show_failed_cmds = True): 119 | '''returns commonly requested (and some fedora-specific) system info''' 120 | # 'ps' output below has been anonymized: -n for uid vs username, and -c for short processname 121 | 122 | # cmd name, command, command2 fallback, command3 fallback, ... 123 | cmdlist = [ 124 | ('OS Release', '''lsb_release -ds''', '''cat /etc/*-release | uniq''', 'cat /etc/issue', 'cat /etc/motd'), 125 | ('Kernel', '''uname -r ; cat /proc/cmdline'''), 126 | ('Desktop(s) Running', '''ps -eo comm= | egrep '(gnome-session|kdeinit|xfce.?-session|fluxbox|blackbox|hackedbox|ratpoison|enlightenment|icewm-session|od-session|wmaker|wmx|openbox-lxde|openbox-gnome-session|openbox-kde-session|mwm|e16|fvwm|xmonad|sugar-session)' '''), 127 | ('Desktop(s) Installed', '''ls -m /usr/share/xsessions/ | sed 's/\.desktop//g' '''), 128 | ('SELinux Status', '''sestatus''', '''/usr/sbin/sestatus''', '''getenforce''', '''grep -v '^#' /etc/sysconfig/selinux'''), 129 | ('SELinux Error Count', '''selinuxenabled && (grep avc: /var/log/messages; ausearch -m avc -ts today)2>/dev/null|egrep -o "comm=\\"[^ ]+"|sort|uniq -c|sort -rn'''), 130 | ('CPU Model', '''grep 'model name' /proc/cpuinfo | awk -F: '{print $2}' | uniq -c | sed -re 's/^ +//' ''', '''grep 'model name' /proc/cpuinfo'''), 131 | ('64-bit Support', '''grep -q ' lm ' /proc/cpuinfo && echo Yes || echo No'''), 132 | ('Hardware Virtualization Support', '''egrep -q '(vmx|svm)' /proc/cpuinfo && echo Yes || echo No'''), 133 | ('Load average', '''uptime'''), 134 | ('Memory usage', '''free -m''', 'free'), 135 | #('Top', '''top -n1 -b | head -15'''), 136 | ('Top 5 CPU hogs', '''ps axuScnh | awk '$2!=''' + str(os.getpid()) + '''' | sort -rnk3 | head -5'''), 137 | ('Top 5 Memory hogs', '''ps axuScnh | sort -rnk4 | head -5'''), 138 | ('Disk space usage', '''df -hT''', 'df -h', 'df'), 139 | ('Block devices', '''blkid''', '''/sbin/blkid'''), 140 | ('PCI devices', '''lspci''', '''/sbin/lspci'''), 141 | ('USB devices', '''lsusb''', '''/sbin/lsusb'''), 142 | ('DRM Information', '''grep drm /var/log/dmesg'''), 143 | ('Xorg modules', '''grep LoadModule /var/log/Xorg.0.log | cut -d \\" -f 2 | xargs'''), 144 | ('GL Support', '''glxinfo | egrep "OpenGL version|OpenGL renderer"'''), 145 | ('Xorg errors', '''grep '^\[.*(EE)' /var/log/Xorg.0.log'''), 146 | ('Kernel buffer tail', '''dmesg | tail'''), 147 | ('Last few reboots', '''last -x -n10 reboot runlevel'''), 148 | ('YUM Repositories', '''yum -C repolist''', '''ls -l /etc/yum.repos.d''', '''grep -v '^#' /etc/yum.conf'''), 149 | ('YUM Extras', '''yum -C list extras'''), 150 | ('Last 20 packages installed', '''rpm -qa --nodigest --nosignature --last | head -20''')] 151 | #('Installed packages', '''rpm -qa --nodigest --nosignature | sort''', '''dpkg -l''') ] 152 | si = [] 153 | 154 | print >> sys.stderr, "Gathering system info", 155 | for cmds in cmdlist: 156 | cmdname = cmds[0] 157 | cmd = "" 158 | for cmd in cmds[1:]: 159 | sys.stderr.write('.') # simple progress feedback 160 | p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 161 | (out, err) = p.communicate() 162 | if p.returncode == 0 and out: 163 | break 164 | else: 165 | if show_stderr: 166 | print >> sys.stderr, "sysinfo Error: the cmd \"%s\" returned %d with stderr: %s" % (cmd, p.returncode, err) 167 | print >> sys.stderr, "Trying next fallback 
cmd..." 168 | if out: 169 | if show_successful_cmds: 170 | si.append( ('%s (%s)' % (cmdname, cmd), out) ) 171 | else: 172 | si.append( ('%s' % cmdname, out) ) 173 | else: 174 | if show_failed_cmds: 175 | si.append( ('%s (failed: "%s")' % (cmdname, '" AND "'.join(cmds[1:])), out) ) 176 | else: 177 | si.append( ('%s' % cmdname, out) ) 178 | 179 | # public SMOLT url 180 | try: 181 | sys.path.append('/usr/share/smolt/client') 182 | from smolt import get_profile_link, getPubUUID 183 | from smolt_config import get_config_attr 184 | smoonurl = get_config_attr("SMOON_URL", "http://smolts.org/") 185 | pubuuid = getPubUUID() 186 | puburl = get_profile_link(smoonurl, pubuuid)+os.linesep 187 | except: 188 | puburl = None 189 | si.insert(2, ('Smolt Profile URL', puburl) ) 190 | 191 | sys.stderr.write("\n") 192 | 193 | # return in readable indented format 194 | sistr = "=== fpaste %s System Information (fpaste --sysinfo) ===\n" % VERSION 195 | for cmdname, output in si: 196 | sistr += "* %s:\n" % cmdname 197 | if not output: 198 | sistr += " N/A\n\n" 199 | else: 200 | for line in output.split('\n'): 201 | sistr += " %s\n" % line 202 | 203 | return sistr 204 | 205 | 206 | def generate_man_page(): 207 | '''TODO: generate man page from usage''' 208 | pass 209 | 210 | 211 | def summarize_text(text): 212 | # use beginning/middle/end content snippets as a description summary. 120 char limit 213 | # "36chars ... 36chars ... 36chars" == 118 chars 214 | # TODO: nuking whitespace in huge text files might be expensive; optimize for b/m/e segments only 215 | sniplen = 36 216 | seplen = len(" ... ") 217 | tsum = "" 218 | text = " ".join(text.split()) # nuke whitespace 219 | tlen = len(text) 220 | 221 | if tlen < sniplen+seplen: 222 | tsum += text 223 | if tlen >= sniplen+seplen: 224 | tsum += text[0:sniplen] + " ..." 225 | if tlen >= (sniplen*2)+seplen: 226 | tsum += " " + text[tlen/2-(sniplen/2):(tlen/2)+(sniplen/2)] + " ..." 
227 | if tlen >= (sniplen*3)+(seplen*2): 228 | tsum += " " + text[-sniplen:] 229 | #print >> sys.stderr, str(len(tsum)) + ": " + tsum 230 | 231 | return tsum 232 | 233 | 234 | 235 | def main(): 236 | validExpiresOpts = [ '3600', '10800', '43200', '86400' ] 237 | validSyntaxOpts = [ 'abap', 'antlr', 'antlr-as', 'antlr-cpp', 'antlr-csharp', 'antlr-java', 'antlr-objc', 'antlr-perl', 'antlr-python', 'antlr-ruby', 'apacheconf', 'applescript', 'as', 'as3', 'aspx-cs', 'aspx-vb', 'basemake', 'bash', 'bat', 'bbcode', 'befunge', 'boo', 'brainfuck', 'c', 'c-objdump', 'cheetah', 'clojure', 'common-lisp', 'console', 'control', 'cpp', 'cpp-objdump', 'csharp', 'css', 'css+django', 'css+erb', 'css+genshitext', 'css+mako', 'css+myghty', 'css+php', 'css+smarty', 'cython', 'd', 'd-objdump', 'delphi', 'diff', 'django', 'dpatch', 'dylan', 'erb', 'erl', 'erlang', 'evoque', 'fortran', 'gas', 'genshi', 'genshitext', 'glsl', 'gnuplot', 'groff', 'haskell', 'html', 'html+cheetah', 'html+django', 'html+evoque', 'html+genshi', 'html+mako', 'html+myghty', 'html+php', 'html+smarty', 'ini', 'io', 'irc', 'java', 'js', 'js+cheetah', 'js+django', 'js+erb', 'js+genshitext', 'js+mako', 'js+myghty', 'js+php', 'js+smarty', 'jsp', 'lhs', 'lighty', 'llvm', 'logtalk', 'lua', 'make', 'mako', 'matlab', 'matlabsession', 'minid', 'modelica', 'moocode', 'mupad', 'mxml', 'myghty', 'mysql', 'nasm', 'newspeak', 'nginx', 'numpy', 'objdump', 'objective-c', 'ocaml', 'perl', 'php', 'pot', 'pov', 'prolog', 'py3tb', 'pycon', 'pytb', 'python', 'python3', 'ragel', 'ragel-c', 'ragel-cpp', 'ragel-d', 'ragel-em', 'ragel-java', 'ragel-objc', 'ragel-ruby', 'raw', 'rb', 'rbcon', 'rebol', 'redcode', 'rhtml', 'rst', 'scala', 'scheme', 'smalltalk', 'smarty', 'sourceslist', 'splus', 'sql', 'sqlite3', 'squidconf', 'tcl', 'tcsh', 'tex', 'text', 'trac-wiki', 'vala', 'vb.net', 'vim', 'xml', 'xml+cheetah', 'xml+django', 'xml+erb', 'xml+evoque', 'xml+mako', 'xml+myghty', 'xml+php', 'xml+smarty', 'xslt', 'yaml' ] 238 | validClipboardSelectionOpts = [ 'primary', 'secondary', 'clipboard' ] 239 | ext2lang_map = { 'sh':'bash', 'bash':'bash', 'bat':'bat', 'c':'c', 'h':'c', 'cpp':'cpp', 'css':'css', 'html':'html', 'htm':'html', 'ini':'ini', 'java':'java', 'js':'js', 'jsp':'jsp', 'pl':'perl', 'php':'php', 'php3':'php', 'py':'python', 'rb':'rb', 'rhtml':'rhtml', 'sql':'sql', 'sqlite':'sqlite3', 'tcl':'tcl', 'vim':'vim', 'xml':'xml' } 240 | 241 | usage = """\ 242 | Usage: %%prog [OPTION]... [FILE]... 243 | send text file(s), stdin, or clipboard to the %s pastebin and return the URL. 
244 | 245 | Examples: 246 | %%prog file1.txt file2.txt 247 | dmesg | %%prog 248 | (prog1; prog2; prog3) | fpaste 249 | %%prog --sysinfo -d "my laptop" --confirm 250 | %%prog -n codemonkey -d "problem with foo" -l python foo.py""" % FPASTE_URL 251 | 252 | parser = OptionParser(usage=usage, version='%prog '+VERSION) 253 | parser.add_option('', '--debug', dest='debug', help=SUPPRESS_HELP, action="store_true", default=False) 254 | parser.add_option('', '--proxy', dest='proxy', help=SUPPRESS_HELP) 255 | 256 | # pastebin-specific options first 257 | fpasteOrg_group = OptionGroup(parser, "fpaste.org Options") 258 | fpasteOrg_group.add_option('-n', dest='nick', help='your nickname; default is "%default"', metavar='"NICKNAME"') 259 | fpasteOrg_group.add_option('-d', dest='desc', help='description of paste; default appends filename(s)', metavar='"DESCRIPTION"') 260 | fpasteOrg_group.add_option('-l', dest='lang', help='language of content for syntax highlighting; default is "%default"; use "list" to show all ' + str(len(validSyntaxOpts)) + ' supported langs', metavar='"LANGUAGE"') 261 | fpasteOrg_group.add_option('-x', dest='expires', help='time before paste is removed; default is %default seconds; valid options: ' + ', '.join(validExpiresOpts), metavar='EXPIRES') 262 | parser.add_option_group(fpasteOrg_group) 263 | # other options 264 | fpasteProg_group = OptionGroup(parser, "Input/Output Options") 265 | fpasteProg_group.add_option('-i', '--clipin', dest='clipin', help='read paste text from current X clipboard selection', action="store_true", default=False) 266 | fpasteProg_group.add_option('-o', '--clipout', dest='clipout', help='save returned paste URL to X clipboard', action="store_true", default=False) 267 | fpasteProg_group.add_option('', '--selection', dest='selection', help='specify which X clipboard to use. valid options: "primary" (default; middle-mouse-button paste), "secondary" (uncommon), or "clipboard" (ctrl-v paste)', metavar='CLIP') 268 | fpasteProg_group.add_option('', '--fullpath', dest='fullpath', help='use pathname VS basename for file description(s)', action="store_true", default=False) 269 | fpasteProg_group.add_option('', '--pasteself', dest='pasteself', help='paste this script itself', action="store_true", default=False) 270 | fpasteProg_group.add_option('', '--sysinfo', dest='sysinfo', help='paste system information', action="store_true", default=False) 271 | fpasteProg_group.add_option('', '--printonly', dest='printonly', help='print paste, but do not send', action="store_true", default=False) 272 | fpasteProg_group.add_option('', '--confirm', dest='confirm', help='print paste, and prompt for confirmation before sending', action="store_true", default=False) 273 | parser.add_option_group(fpasteProg_group) 274 | 275 | parser.set_defaults(desc='', nick='', lang='text', expires=max(validExpiresOpts), selection='primary') 276 | (options, args) = parser.parse_args() 277 | 278 | if options.lang.lower() == 'list': 279 | print 'Valid language syntax options:' 280 | for opt in validSyntaxOpts: 281 | print opt 282 | sys.exit(0) 283 | if options.clipin: 284 | if not os.access('/usr/bin/xsel', os.X_OK): 285 | # TODO: try falling back to xclip or dbus 286 | parser.error('OOPS - the clipboard options currently depend on "/usr/bin/xsel", which does not appear to be installed') 287 | if options.clipin and args: 288 | parser.error("Sending both clipboard contents AND files is not supported. 
Use -i OR filename(s)") 289 | for optk, optv, opts in [('language', options.lang, validSyntaxOpts), ('expires', options.expires, validExpiresOpts), ('clipboard selection', options.selection, validClipboardSelectionOpts)]: 290 | if optv not in opts: 291 | parser.error("'%s' is not a valid %s option.\n\tVALID OPTIONS: %s" % (optv, optk, ', '.join(opts))) 292 | 293 | fileargs = args 294 | if options.fullpath: 295 | fileargs = [os.path.abspath(x) for x in args] 296 | else: 297 | fileargs = [os.path.basename(x) for x in args] # remove potentially non-anonymous path info from file path descriptions 298 | 299 | #guess lang for some common file extensions, if all file exts similar, and lang not changed from default 300 | if options.lang == 'text': 301 | all_exts_similar = False 302 | for i in range(0, len(args)): 303 | all_exts_similar = True 304 | ext = os.path.splitext(args[i])[1].lstrip(os.extsep) 305 | if i > 0 and ext != ext_prev: 306 | all_exts_similar = False 307 | break 308 | ext_prev = ext 309 | if all_exts_similar and ext in ext2lang_map.keys(): 310 | options.lang = ext2lang_map[ext] 311 | 312 | # get input from mutually exclusive sources, though they *could* be combined 313 | text = "" 314 | if options.clipin: 315 | xselcmd = 'xsel -o --%s' % options.selection 316 | #text = os.popen(xselcmd).read() 317 | p = subprocess.Popen(xselcmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 318 | (text, err) = p.communicate() 319 | if p.returncode != 0: 320 | if options.debug: 321 | print >> sys.stderr, err 322 | parser.error("'xsel' failure. this usually means you're not running X") 323 | if not text: 324 | parser.error("%s clipboard is empty" % options.selection) 325 | if SET_DESCRIPTION_IF_EMPTY and not options.desc: 326 | #options.desc = '%s clipboard' % options.selection 327 | options.desc = summarize_text(text) 328 | elif options.pasteself: 329 | text = open(sys.argv[0]).read() 330 | options.desc = 'fpaste-' + VERSION 331 | options.lang = 'python' 332 | options.nick = 'Fedora Unity' 333 | elif options.sysinfo: 334 | text = sysinfo(options.debug) 335 | if SET_DESCRIPTION_IF_EMPTY and not options.desc: 336 | options.desc = 'fpaste --sysinfo' 337 | elif not args: # read from stdin if no file args supplied 338 | try: 339 | text += sys.stdin.read() 340 | except KeyboardInterrupt: 341 | print >> sys.stderr, "\nUSAGE REMINDER:\n fpaste waits for input when run without file arguments.\n Paste your text, then press <Ctrl-D> on a new line to upload.\n Try `fpaste --help' for more information.\nExiting..."
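#For reference, piping stdin avoids this interactive wait entirely, e.g.:
#  dmesg | fpaste
#  fpaste <<< 'one-liner'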
342 | sys.exit(1) 343 | if SET_DESCRIPTION_IF_EMPTY and not options.desc: 344 | options.desc = summarize_text(text) 345 | else: 346 | if not options.desc: 347 | options.desc = '%s' % (' + '.join(fileargs)) 348 | else: 349 | options.desc = '%s: %s' % (options.desc, ' + '.join(fileargs)) 350 | for i, f in enumerate(args): 351 | if not os.access(f, os.R_OK): 352 | parser.error("file '%s' is not readable" % f) 353 | if (len(args) > 1): # separate multiple files with header 354 | text += '#' * 78 + '\n' 355 | text += '### file %d of %d: %s\n' % (i+1, len(args), fileargs[i]) 356 | text += '#' * 78 + '\n' 357 | text += open(f).read() 358 | 359 | if options.debug: 360 | print 'nick: "%s"' % options.nick 361 | print 'desc: "%s"' % options.desc 362 | print 'lang: "%s"' % options.lang 363 | print 'text (%d): "%s ..."' % (len(text), text[:80]) 364 | 365 | if options.printonly or options.confirm: 366 | try: 367 | if is_text(text): 368 | print text # when piped to less, sometimes fails with [Errno 32] Broken pipe 369 | else: 370 | print "DATA" 371 | except IOError: 372 | pass 373 | if options.printonly: # print only what would be sent, and exit 374 | sys.exit(0) 375 | elif options.confirm: # print what would be sent, and ask for permission 376 | if not confirm("OK to send?"): 377 | sys.exit(1) 378 | 379 | url = paste(text, options) 380 | if url: 381 | # try to save URL in clipboard, and warn but don't error 382 | if options.clipout: 383 | xselcmd = 'xsel -i --%s' % options.selection 384 | #os.popen(xselcmd, 'wb').write(url) 385 | p = subprocess.Popen(xselcmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE) 386 | (out, err) = p.communicate(input=url) 387 | if p.returncode != 0: 388 | if options.debug: 389 | print >> sys.stderr, err 390 | #parser.error("'xsel' failure. this usually means you're not running X") 391 | print >> sys.stderr, "WARNING: URL not saved to clipboard" 392 | 393 | print url 394 | else: 395 | sys.exit(1) 396 | 397 | if options.pasteself: 398 | print >> sys.stderr, "install fpaste to local ~/bin dir by running: mkdir -p ~/bin; curl " + url + "raw/ -o ~/bin/fpaste && chmod +x ~/bin/fpaste" 399 | 400 | sys.exit(0) 401 | 402 | 403 | if __name__ == '__main__': 404 | try: 405 | if '--generate-man' in sys.argv: 406 | generate_man_page() 407 | else: 408 | main() 409 | except KeyboardInterrupt: 410 | print "\ninterrupted." 411 | sys.exit(1) 412 | -------------------------------------------------------------------------------- /bin/gitar.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #original concept by Tom Dalling 4 | #source: http://tomdalling.com/blog/random-stuff/using-git-for-hacky-archive-deduplication/ 5 | #Fri Aug 9 11:04:46 EDT 2013 6 | #Ubuntu 12.04.2 LTS 7 | #Linux 3.8.0-27-generic x86_64 8 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 9 | 10 | #KNOWN ISSUES/WON'T FIX: 11 | # - Not able to compress git repositories or nested git repositories. 12 | # - When extracting a gitar archive using tar it is possible to destroy a git repository 13 | # in the current working directory. 14 | # - It is assumed you will be compressing a single directory. If you need to compress 15 | # multiple directories or files then place them all inside of a directory to be gitar'd. 16 | # - You must compress a directory located in the current directory or located in a child. 
17 | # If the path you're compressing is located above the current working directory in a 18 | # parent then this will fail because git can't do that. 19 | 20 | #DESCRIPTION: 21 | # This short script was made for when you have to compress a large amount of duplicated 22 | # files. This assumes you do not have lrzip readily available. Learn more about lrzip at 23 | # https://github.com/ckolivas/lrzip 24 | # 25 | # lrzip can compress better and deduplicate better than this script. Also, this script 26 | # has known limitations which are not bound to lrzip such as not being able to compress 27 | # git repositories. This is meant as a quick hack 'n slash dedupe and compress. 28 | 29 | #USAGE: 30 | # Subshell the compressing of an archive and set the compression level to 2 31 | # (export compression_type=2; gitar.sh "somedirectory") 32 | 33 | ###################################################################### 34 | # List of global options 35 | ###################################################################### 36 | 37 | #compression types ordered from least to greatest 38 | # 1 - no optimization, just git deduplication+tar 39 | # 2 - deduplication+optimized git+tar compression 40 | # 3 - deduplication+optimized git+tar+gzip compression 41 | # 4 - deduplication+optimized git+tar+bzip2 compression 42 | # 5 - deduplication+optimized git+tar+lzma compression 43 | compression_type="${compression_type:-3}" 44 | 45 | #Compression list helps to make the logic more human readable (e.g. in preflight check function) 46 | compression_list[1]="dedupe_only" 47 | compression_list[2]="optimized" 48 | compression_list[3]="gzip" 49 | compression_list[4]="bzip2" 50 | compression_list[5]="lzma" 51 | 52 | testmode="false" 53 | 54 | ###################################################################### 55 | # List of functions 56 | ###################################################################### 57 | 58 | function err(){ 59 | echo "${1}" 1>&2 60 | } 61 | function write_gintar(){ 62 | #copies the currently running program to gintar.sh for unarchiving later 63 | cp "$0" "./gintar.sh" 64 | #grab the current compression_type out of $0 65 | sed -i '0,/compression_type="${compression_type:-[0-9]}"/{s#\(compression_type="${compression_type:-\)[0-9]\(}"\)#\1'${compression_type}'\2#}' "./gintar.sh" 66 | } 67 | function preflight(){ 68 | STATUS=0 69 | if [ ! -x "$(which basename)" ];then 70 | err "basename executable is missing: GNU coreutils package" 71 | STATUS=1 72 | fi 73 | if [ ! -x "$(which git)" ];then 74 | err "git executable is missing: git package" 75 | STATUS=1 76 | fi 77 | if [ ! -x "$(which tar)" ];then 78 | err "tar executable is missing: tar package" 79 | STATUS=1 80 | fi 81 | #prerequisite only based on the current algorithm (gzip, bzip2, or lzma) 82 | if [ "gzip" = "${compression_list[compression_type]}" ] && [ ! -x "$(which gzip)" ];then 83 | err "gzip executable is missing: gzip package" 84 | STATUS=1 85 | fi 86 | if [ "bzip2" = "${compression_list[compression_type]}" ] && [ ! -x "$(which bzip2)" ];then 87 | err "bzip2 executable is missing: bzip2 package" 88 | STATUS=1 89 | fi 90 | if [ "lzma" = "${compression_list[compression_type]}" ] && [ ! -x "$(which lzma)" ];then 91 | err "lzma executable is missing: xz-lzma package" 92 | STATUS=1 93 | fi 94 | #method specific preflight check based on the script name 95 | if [ "${BASENAME}" = "gitar.sh" ];then 96 | if ! ${testmode} && [ ! -d "${1}" ];then 97 | err "ERROR: ${1} must be a directory!" 
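#For reference, a minimal round-trip sketch of the archive/extract cycle
#("projectdir" is a placeholder directory name):
# (export compression_type=3; gitar.sh projectdir)   #writes projectdir.gitar.gz
# mkdir unpack && cd unpack
# tar -xzf ../projectdir.gitar.gz && ./gintar.sh     #restores projectdir/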
98 | exit 1 99 | fi 100 | if [ -d ".git" ];then 101 | err "The current directory must not be a git repository!" 102 | STATUS=1 103 | elif [ ! -z "$(find "${1}" -type d -name .git 2> /dev/null | head -n1 2> /dev/null)" ];then 104 | err "Error, a nested git repository was found. This is not recommended so will abort." 105 | err "$(find "${1}" -type d -name .git | head -n1)" 106 | err "" 107 | err "To find potential problems like this run: " 108 | err "find \"${1}\" -type d -name .git" 109 | STATUS=1 110 | fi 111 | if [ "${compression_list[compression_type]}" = "dedupe_only" -o "${compression_list[compression_type]}" = "optimized" ] && [ -f "${1}.gitar" ];then 112 | err "${1}.gitar already exists! Aborting..." 113 | STATUS=1 114 | elif [ "${compression_list[compression_type]}" = "gzip" ] && [ -f "${1}.gitar.gz" ];then 115 | err "${1}.gitar.gz already exists! Aborting..." 116 | STATUS=1 117 | elif [ "${compression_list[compression_type]}" = "bzip2" ] && [ -f "${1}.gitar.bz2" ];then 118 | err "${1}.gitar.bz2 already exists! Aborting..." 119 | STATUS=1 120 | elif [ "${compression_list[compression_type]}" = "lzma" ] && [ -f "${1}.gitar.lzma" ];then 121 | err "${1}.gitar.lzma already exists! Aborting..." 122 | STATUS=1 123 | fi 124 | elif [ "${BASENAME}" = "gintar.sh" ];then 125 | if [ ! "${0%/*}" = "${PWD}" -a ! "${0%/*}" = "." ];then 126 | err "This script must be run from the same working directory!" 127 | err "e.g. ./gintar.sh" 128 | STATUS=1 129 | elif [ ! -d ".git" ];then 130 | err "Missing .git directory. Was this really from gitar?" 131 | STATUS=1 132 | fi 133 | fi 134 | return ${STATUS} 135 | } 136 | function gitar(){ 137 | STATUS=0 138 | git init 139 | if [ ! "$?" = "0" ];then 140 | STATUS=1 141 | fi 142 | git add "${1}" 143 | if [ ! "$?" = "0" ];then 144 | STATUS=1 145 | fi 146 | git commit -m "gitar commit" 147 | if [ ! "$?" = "0" ];then 148 | STATUS=1 149 | fi 150 | if [ ! "${compression_list[compression_type]}" = "dedupe_only" ];then 151 | git gc --aggressive 152 | if [ ! "$?" = "0" ];then 153 | STATUS=1 154 | fi 155 | fi 156 | write_gintar 157 | if [ ! "$?" = "0" ];then 158 | STATUS=1 159 | fi 160 | if [ "${compression_list[compression_type]}" = "dedupe_only" -o "${compression_list[compression_type]}" = "optimized" ];then 161 | tar -cf "${1}".gitar .git gintar.sh 162 | elif [ "${compression_list[compression_type]}" = "gzip" ];then 163 | #tar -czf "${1}".gitar .git gintar.sh 164 | tar -cf - .git gintar.sh | gzip -9 - > "${1}".gitar.gz 165 | elif [ "${compression_list[compression_type]}" = "bzip2" ];then 166 | #tar -cjf "${1}".gitar .git gintar.sh 167 | tar -cf - .git gintar.sh | bzip2 -9 - > "${1}".gitar.bz2 168 | elif [ "${compression_list[compression_type]}" = "lzma" ];then 169 | tar -cf - .git gintar.sh | lzma -9 - > "${1}".gitar.lzma 170 | else 171 | err "Invalid compression type specified in gitar.sh. Choose" 172 | err "compression_type=[1-5] where 1 is least and 5 is most compression." 173 | err "1 - no optimization, just git deduplication+tar" 174 | err "2 - deduplication+optimized git+tar compression" 175 | err "3 - deduplication+optimized git+tar+gzip compression" 176 | err "4 - deduplication+optimized git+tar+bzip2 compression" 177 | err "5 - deduplication+optimized git+tar+lzma compression" 178 | STATUS=1 179 | fi 180 | if [ ! "$?" = "0" ];then 181 | STATUS=1 182 | fi 183 | rm -rf ./.git ./gintar.sh 184 | if [ ! "$?" 
= "0" ];then 185 | STATUS=1 186 | fi 187 | return ${STATUS} 188 | } 189 | function gintar_ls(){ 190 | git show --pretty="format:" --name-only $(git log | awk '$1 == "commit" { print $2}') 191 | exit 192 | } 193 | function gintar(){ 194 | STATUS=0 195 | git reset --hard 196 | if [ ! "$?" = "0" ];then 197 | STATUS=1 198 | fi 199 | rm -rf ./.git ./gintar.sh 200 | if [ ! "$?" = "0" ];then 201 | STATUS=1 202 | fi 203 | return ${STATUS} 204 | } 205 | function success(){ 206 | if [ "${BASENAME}" = "gitar.sh" ];then 207 | err "" 208 | err "SUCCESS!" 209 | err "" 210 | err "Your gitar archive is ready. To decompress run the following commands." 211 | if [ "gzip" = "${compression_list[compression_type]}" ];then 212 | err "tar -xzf \"${1}.gitar.gz\" && ./gintar.sh" 213 | elif [ "bzip2" = "${compression_list[compression_type]}" ];then 214 | err "tar -xjf \"${1}.gitar.bz2\" && ./gintar.sh" 215 | elif [ "lzma" = "${compression_list[compression_type]}" ];then 216 | err "tar -xJf \"${1}.gitar.lzma\" && ./gintar.sh" 217 | else 218 | err "tar -xf \"${1}.gitar\" && ./gintar.sh" 219 | fi 220 | err "" 221 | elif [ "${BASENAME}" = "gintar.sh" ];then 222 | err "Successfully extracted!" 223 | fi 224 | exit 0 225 | } 226 | 227 | ###################################################################### 228 | # Application testing functions 229 | ###################################################################### 230 | 231 | function run_tests(){ 232 | testmode="true" 233 | preflight 234 | echo -n "Cloning opengl-series.git... " 1>&2 235 | git clone https://github.com/tomdalling/opengl-series.git &>/dev/null && err "success" || err "failed" 236 | echo -n "Cleaning up git directories... " 1>&2 237 | rm -rf opengl-series/.git && err "success" || err "failed" 238 | err "Running compression tests:" 239 | echo -n " 0-opengl-series.tar... " 1>&2 240 | tar -cf 0-opengl-series.tar opengl-series &> /dev/null && err "success" || err "failed" 241 | #run compression tests with each type of compression 242 | export compression_type=1 243 | try_compress 244 | try_decompress 245 | export compression_type=2 246 | try_compress 247 | try_decompress 248 | export compression_type=3 249 | try_compress 250 | try_decompress 251 | export compression_type=4 252 | try_compress 253 | try_decompress 254 | export compression_type=5 255 | try_compress 256 | try_decompress 257 | exit 1 258 | } 259 | function try_compress(){ 260 | STATUS=0 261 | filename="${compression_type}-opengl-series.gitar.${compression_list[compression_type]}" 262 | echo -n " ${filename}... " 1>&2 263 | "${0}" opengl-series &> /dev/null 264 | if [ ! "$?" -eq "0" ];then 265 | STATUS=1 266 | fi 267 | mv -f opengl-series.gitar* "${filename}" &> /dev/null 268 | if [ ! "$?" -eq "0" ];then 269 | STATUS=1 270 | fi 271 | if [ "${STATUS}" -eq "0" ];then 272 | err "success" 273 | else 274 | err "failed" 275 | err "For more information run the following." 276 | err "(export compression_type=${compression_type};bash -x $0 opengl-series)" 277 | fi 278 | return ${STATUS} 279 | } 280 | function try_decompress(){ 281 | STATUS=0 282 | filename="${compression_type}-opengl-series.gitar.${compression_list[compression_type]}" 283 | echo -n " ${filename} decompress... " 1>&2 284 | mkdir -p "/tmp/${filename}" &> /dev/null 285 | pushd "/tmp/${filename}" &> /dev/null 286 | tar -xf ~1/"${filename}" &> /dev/null 287 | if [ ! "$?" -eq "0" ];then 288 | STATUS=1 289 | fi 290 | ./gintar.sh &> /dev/null 291 | if [ ! "$?"
-eq "0" ];then 292 | STATUS=1 293 | fi 294 | if [ -d "./.git" ];then 295 | STATUS=1 296 | fi 297 | if [ -f "./gintar.sh" ];then 298 | STATUS=1 299 | fi 300 | popd &> /dev/null 301 | if [ "${STATUS}" -eq "0" ];then 302 | err "success" 303 | else 304 | err "failed" 305 | err "For more information run the following." 306 | err '(mkdir /tmp/'${filename}';export compression_type=${compression_type};pushd /tmp/'${filename}';tar -xf ~1/'${filename}';bash -x ./gintar.sh)' 307 | fi 308 | return ${STATUS} 309 | } 310 | function clean_tests(){ 311 | echo -n "Cleaning up gitar.sh test data..." 1>&2 312 | rm -rf .git opengl-series 0-opengl-series.tar 313 | for x in 1 2 3 4 5;do 314 | filename="${x}-opengl-series.gitar.${compression_list[x]}" 315 | rm -f "${filename}" 316 | rm -rf "/tmp/${filename}" 317 | done 318 | err "done" 319 | exit 1 320 | } 321 | 322 | ###################################################################### 323 | # Main execution logic 324 | ###################################################################### 325 | 326 | #execute the script based on the basename. 327 | BASENAME="$(basename ${0})" 328 | 329 | #remove possible trailing slash 330 | INPUT="${1%/}" 331 | 332 | if [ "${BASENAME}" = "gitar.sh" ];then 333 | #start deduplication and compression into an archive 334 | if [ "$#" == "0" ];then 335 | err "You must provide an argument!" 336 | err "Help: gitar.sh somedirectory" 337 | exit 1 338 | elif [ ! -e "test" ] && [ "${INPUT}" = "test" ];then 339 | #this helps me test the program 340 | run_tests 341 | elif [ ! -e "clean-test" ] && [ "${INPUT}" = "clean-test" ];then 342 | #this cleans up the test data 343 | clean_tests 344 | fi 345 | preflight "${INPUT}" && gitar "${INPUT}" && success "${INPUT}" 346 | err "A problem has occurred when creating the gitar archive." 347 | exit 1 348 | elif [ "${BASENAME}" = "gintar.sh" ];then 349 | #do the gintar.sh action to unarchive 350 | if [ "${INPUT}" = "ls" ];then 351 | preflight && gintar_ls 352 | else 353 | preflight && gintar && success 354 | fi 355 | err "Something has gone very wrong during extraction!" 356 | err "For more verbosity run..." 357 | err "bash -x ./gintar.sh" 358 | exit 1 359 | else 360 | err "Unknown method invoked. This file must be named gitar.sh or gintar.sh" 361 | exit 1 362 | fi 363 | -------------------------------------------------------------------------------- /bin/gpg_decrypt_individual_files.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #github.com/sag47 4 | #Sat Mar 29 19:51:06 EDT 2014 5 | #Fedora release 16 (Verne) 6 | #Linux 3.6.11-4.fc16.x86_64 x86_64 7 | #GNU bash, version 4.2.37(1)-release (x86_64-redhat-linux-gnu) 8 | #gpg (GnuPG) 1.4.13 9 | #DESCRIPTION 10 | # This script will decrypt all *.gpg files located in a sub directory. 11 | 12 | #remove the original encrypted file?; this value can be overridden by environment 13 | remove_original="${remove_original:-true}" 14 | 15 | #DO NOT EDIT ANY MORE VARIABLES 16 | #exit on first error 17 | set -e 18 | 19 | #this will individually decrypt all files in the folder; this value can be overridden by environment 20 | if [ -z "${folder_to_decrypt}" ];then 21 | folder_to_decrypt="${1}" 22 | fi 23 | 24 | if [ -z "${folder_to_decrypt}" -o ! -d "${folder_to_decrypt}" ];then 25 | echo "Must provide a valid folder as an argument!" 
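#For example, to decrypt a tree while keeping the encrypted originals
#("backup/" is a placeholder path):
# remove_original=false gpg_decrypt_individual_files.sh backup/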
26 | exit 1 27 | fi 28 | 29 | #decrypt all individually encrypted files in the folder 30 | find "${folder_to_decrypt}" -type f -name '*.gpg' | while read x;do 31 | echo "${x}" | gpg --multifile --decrypt -- 32 | echo -n "decrypted ${x}" 33 | if ${remove_original};then 34 | rm -f -- "${x}" 35 | echo " and removed original." 36 | else 37 | echo "" 38 | fi 39 | done 40 | 41 | #NOTE THIS METHOD CAUSES TWICE THE SPACE NEEDED FOR DECRYPTION 42 | #primarily because it decrypts all of the files... and then removes the encrypted originals 43 | #decrypt all individually encrypted files in the folder 44 | #find "${folder_to_decrypt}" -type f -name '*.asc' | gpg --multifile --decrypt 45 | #remove encrypted files 46 | #find "${folder_to_decrypt}" -type f -name '*.asc' -exec rm -f {} \; 47 | #remove sha1sum.txt checksum files 48 | #find "${folder_to_decrypt}" -type f -name 'sha1sum.txt' -exec rm -f {} \; 49 | -------------------------------------------------------------------------------- /bin/gpg_encrypt_individual_files.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #github.com/sag47 4 | #Sat Mar 29 14:13:27 EDT 2014 5 | #Fedora release 16 (Verne) 6 | #Linux 3.6.11-4.fc16.x86_64 x86_64 7 | #GNU bash, version 4.2.37(1)-release (x86_64-redhat-linux-gnu) 8 | #gpg (GnuPG) 1.4.13 9 | #DESCRIPTION 10 | # This script will individually encrypt files in a folder 11 | # using gpg and then generate a sha1sum of all encrypted 12 | # files (*.gpg). 13 | #USAGE 14 | # gpg_encrypt_individual_files.sh dir/ 15 | 16 | #a space separated list of recipient key IDs; this value can be overridden by environment 17 | recipient_list="${recipient_list:-}" 18 | 19 | #remove the original unencrypted file?; this value can be overridden by environment 20 | remove_original="${remove_original:-true}" 21 | 22 | #skip files that already have an encrypted equivalent? When false, an existing encrypted equivalent is replaced; this value can be overridden by environment 23 | skip_encrypted="${skip_encrypted:-false}" 24 | 25 | #create checksums for the files in each directory. Each directory will contain a file called sha1sum.txt; this value can be overridden by environment 26 | create_checksums="${create_checksums:-true}" 27 | 28 | #Do you want to sign the files too?; this value can be overridden by environment 29 | sign_encrypted_file="${sign_encrypted_file:-false}" 30 | 31 | #DO NOT EDIT ANY MORE VARIABLES 32 | #this will individually encrypt all files in the folder; this value can be overridden by environment 33 | if [ -z "${folder_to_encrypt}" ];then 34 | folder_to_encrypt="${1}" 35 | fi 36 | 37 | #if the end of the file or name matches an ignore_rule then it will not be encrypted; this value can be overridden by environment 38 | if [ "${#ignore_rules[@]}" -eq "0" ];then 39 | ignore_rules=('.gpg' 'sha1sum.txt' '.checksumrequired') 40 | fi 41 | 42 | 43 | if [ -z "${folder_to_encrypt}" -o ! -d "${folder_to_encrypt}" ];then 44 | echo "Must provide a valid folder as an argument!" 45 | exit 1 46 | elif [ -z "${recipient_list}" ];then 47 | echo "recipient_list environment variable is not set!" 48 | echo "You should set it in ~/.bashrc and/or ~/.bash_profile." 49 | echo "It is a space separated list of GPG key IDs."
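#For reference, a sketch of the full workflow with the companion scripts
#(key IDs and "documents/" are placeholders):
# export recipient_list="0xDEADBEEF 0xCAFEF00D"
# gpg_encrypt_individual_files.sh documents/
# gpg_sign_sha1sums.sh documents/       #sign each sha1sum.txt
# gpg_verify_checksums.sh documents/    #later: verify checksums...
# gpg_validate_sha1sums.sh documents/   #...and signatures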
50 | exit 1 51 | fi 52 | 53 | #disable globbing 54 | set -o noglob 55 | #exit script on first error 56 | set -e 57 | 58 | #build a recipient list 59 | recipients="" 60 | for x in ${recipient_list};do recipients="${recipients} --recipient ${x}";done 61 | 62 | #build the ignore rules for the find command 63 | ignore_expression="" 64 | for x in ${ignore_rules[@]};do ignore_expression="${ignore_expression} -path *${x} -prune -o "; done 65 | 66 | #use find command to find files to encrypt 67 | find "${folder_to_encrypt}" ${ignore_expression} -type f -print | while read x;do 68 | dir="${x%/*}" 69 | file="${x##*/}" 70 | ( 71 | #subshell 72 | #exit subshell on first error 73 | set -e 74 | cd "${dir}" 75 | #remove encrypted file equivalent if it exists 76 | ! ${skip_encrypted} && [ -f "${file}.gpg" ] && rm -f -- "${file}.gpg" 77 | if ! ${skip_encrypted} || [ "${skip_encrypted}" = "true" -a ! -f "${file}.gpg" ];then 78 | #sign (-s) and encrypt (-e) the file (output is filename.gpg) 79 | if ${sign_encrypted_file};then 80 | gpg -s -e ${recipients} -- "${file}" 81 | else 82 | gpg -e ${recipients} -- "${file}" 83 | fi 84 | echo -n "encrypted ${x}" 85 | if ${remove_original};then 86 | rm -f -- "${file}" 87 | echo " and removed original." 88 | else 89 | echo "" 90 | fi 91 | touch -- "${file}.gpg.checksumrequired" 92 | fi 93 | ) 94 | done 95 | ${create_checksums} && echo "Checksumming files in ${folder_to_encrypt}..." 96 | #create a checksum of each file in the folder but ONLY do it to encrypted files that have changed (i.e. has a *.gpg.checksumrequired file) 97 | ${create_checksums} && find "${folder_to_encrypt}" -type d | while read x;do 98 | ( 99 | #subshell 100 | #exit subshell on first error 101 | set -e 102 | cd "${x}" 103 | #create sha1sum.txt file if it doesn't exist 104 | [ ! -f "sha1sum.txt" ] && touch sha1sum.txt 105 | find . -maxdepth 1 -type f -name '*.gpg.checksumrequired' -printf '%f\n' | while read esum;do 106 | #delete the old entry from the sha1sum.txt file 107 | echo "Checksumming ${x}/${esum%\.checksumrequired}" 108 | expression="$(echo "${esum%\.checksumrequired}" | sed 's/\[/\\\[/g' | sed 's/\]/\\\]/g')" 109 | sed -i "/\w\s\+${expression}/ d" sha1sum.txt 110 | #append the new sum to the sha1sum.txt file 111 | sha1sum -- "${esum%\.checksumrequired}" >> sha1sum.txt 112 | #remove the checksumrequired file 113 | rm -f -- "${esum}" 114 | done 115 | ) 116 | done 117 | -------------------------------------------------------------------------------- /bin/gpg_sign_sha1sums.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #Wed Jun 18 22:57:45 EDT 2014 4 | #Fedora release 16 (Verne) 5 | #Linux 3.6.11-4.fc16.x86_64 x86_64 6 | #GNU bash, version 4.2.37(1)-release (x86_64-redhat-linux-gnu) 7 | #sha1sum (GNU coreutils) 8.12 8 | #gpg (GnuPG) 1.4.13 9 | 10 | #USAGE: 11 | # gpg_sign_sha1sums.sh DIRECTORY 12 | 13 | #DESCRIPTION 14 | # This script will iterate through a gpg_encrypt_individual_files.sh 15 | # encrypted directory and sign all of the sha1sum.txt files. This 16 | # is intended for ensuring the integrity of all checksummed files 17 | # when, for example, uploading your encrypted files to a cloud 18 | # filesharing service. 19 | 20 | if [ -z "$1" -o ! -d "$1" ];then 21 | echo "Error: must provide a directory as an argument."
1>&2 22 | exit 1 23 | fi 24 | 25 | find "$1" -type f -name 'sha1sum.txt' | while read x;do 26 | gpg --output "$x.sig" --detach-sign "$x" 27 | echo "Signed $x" 1>&2 28 | done 29 | -------------------------------------------------------------------------------- /bin/gpg_validate_sha1sums.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #Wed Jun 18 22:57:45 EDT 2014 4 | #Fedora release 16 (Verne) 5 | #Linux 3.6.11-4.fc16.x86_64 x86_64 6 | #GNU bash, version 4.2.37(1)-release (x86_64-redhat-linux-gnu) 7 | #sha1sum (GNU coreutils) 8.12 8 | #gpg (GnuPG) 1.4.13 9 | 10 | #USAGE: 11 | # gpg_validate_sha1sums.sh DIRECTORY 12 | 13 | #DESCRIPTION 14 | # This script will iterate through a gpg_encrypt_individual_files.sh 15 | # encrypted directory and validate the sha1sum.txt.sig signatures. 16 | # This is intended to check the signatures against all signed 17 | # sha1sum.txt files and fail the validation if no signatures are 18 | # provided. 19 | 20 | if [ -z "$1" -o ! -d "$1" ];then 21 | echo "Error: must provide a directory as an argument." 1>&2 22 | exit 1 23 | fi 24 | 25 | find "$1" -type d | while read x;do 26 | set -e 27 | pushd "$x" &> /dev/null 28 | if [ ! -f sha1sum.txt.sig ];then 29 | echo -e "\nLocation: $x" 1>&2 30 | echo -e "Error: No sha1sum.txt.sig!\n" 1>&2 31 | exit 1 32 | fi 33 | if ! gpg --verify "sha1sum.txt.sig";then 34 | echo -e "\nLocation: $x\nError: sha1sum.txt.sig contains a bad signature.\n" 1>&2 35 | exit 1 36 | fi 37 | popd &> /dev/null 38 | done || exit 1 39 | 40 | echo -e "\nAll signatures exist and are valid!\n" 1>&2 41 | -------------------------------------------------------------------------------- /bin/gpg_verify_checksums.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #Wed Jun 18 22:57:45 EDT 2014 4 | #Fedora release 16 (Verne) 5 | #Linux 3.6.11-4.fc16.x86_64 x86_64 6 | #GNU bash, version 4.2.37(1)-release (x86_64-redhat-linux-gnu) 7 | #sha1sum (GNU coreutils) 8.12 8 | #gpg (GnuPG) 1.4.13 9 | 10 | #USAGE: 11 | # gpg_verify_checksums.sh DIRECTORY 12 | 13 | #DESCRIPTION 14 | # This program will iterate through an encrypted directory structure 15 | # to verify the checksums of all of the contents. This is to ensure that 16 | # the contents of a gpg_encrypt_individual_files.sh encrypted 17 | # directory maintain their integrity. This script eases that process. 18 | 19 | if [ -z "$1" -o ! -d "$1" ];then 20 | echo "Error: must provide a directory as an argument." 1>&2 21 | exit 1 22 | fi 23 | 24 | find "$1" -type d | while read x;do 25 | pushd "$x" &> /dev/null 26 | if [ ! -f sha1sum.txt ];then 27 | echo -e "\nLocation: $x" 1>&2 28 | echo -e "Error: No sha1sum.txt!\n" 1>&2 29 | exit 1 30 | fi 31 | if [ "$(find . -maxdepth 1 -type f | grep -v 'sha1sum\.txt\.sig' | wc -l)" -gt "1" ];then 32 | if !
sha1sum -c sha1sum.txt;then 33 | echo -e "\nLocation: $x" 1>&2 34 | echo "sha1sum failed:" 1>&2 35 | sha1sum -c sha1sum.txt 2> /dev/null | grep FAILED 36 | echo 37 | exit 1 38 | fi 39 | fi 40 | popd &> /dev/null 41 | done || exit 1 42 | 43 | echo -e "\nAll checksums exist and passed!\n" 1>&2 44 | -------------------------------------------------------------------------------- /bin/headers: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Tue Aug 6 10:35:08 EDT 2013 4 | #Linux 3.8.0-27-generic x86_64 5 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 6 | 7 | HEADERS_VERSION="v0.1.0" 8 | 9 | #Description: 10 | # Pulls http headers from a url while discarding content using curl 11 | 12 | function err() { 13 | #echo to stderr rather than stdout 14 | echo "${1}" 1>&2 15 | } 16 | function getheaders() { 17 | #http request and response headers 18 | #the 2>&1 1>/dev/null wires the default stdout to /dev/null and the default stderr to stdout so it can be processed by grep 19 | curl -v -o /dev/null "${1}" 2>&1 1>/dev/null | \grep "^\(<\|>\)" 20 | } 21 | function displayhelp() { 22 | err "headers ${HEADERS_VERSION} by Sam Gleske" 23 | err "" 24 | err "SYNOPSIS" 25 | err " headers [-h|--help] URL" 26 | err "" 27 | err "DESCRIPTION" 28 | err " This program uses curl to obtain the headers of an http request" 29 | err " while discarding the response from the request. If you want to get" 30 | err " the full http response then using curl is best (see CURL NOTES)." 31 | err "" 32 | err "OPTIONS" 33 | err " -h,--help Display this help guide." 34 | err " URL A standard curl url." 35 | err "" 36 | err "USAGE" 37 | err " Take a file name as an argument." 38 | err " headers http://server.com/file.html" 39 | err " Process stdin assuming all input are URLs" 40 | err " echo http://server.com/file.html | headers" 41 | err " Interactive command line with one URL per line" 42 | err " headers" 43 | err "" 44 | err "CURL NOTES" 45 | err " To get the full http request and response use verbose" 46 | err " curl -v http://server.com/file.html" 47 | err " To get the full response only printed to stdout" 48 | err " curl -D - http://server.com/file.html" 49 | } 50 | function main() { 51 | if [ "${1}" = "-h" -o "${1}" = "--help" ];then 52 | displayhelp 53 | exit 1 54 | elif [ ! "${#}" = "0" ];then 55 | while (( "$#" ));do 56 | getheaders "${1}" 57 | shift 58 | done 59 | else 60 | #assume since there's no arguments for help and no arguments then we must want to process stdin 61 | while read line;do 62 | getheaders "${line}" 63 | done 64 | fi 65 | } 66 | 67 | main $* 68 | -------------------------------------------------------------------------------- /bin/knownhosts.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Tue Aug 6 14:12:05 EDT 2013 4 | #Ubuntu 12.04.2 LTS 5 | #Linux 3.8.0-27-generic x86_64 6 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 7 | #OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012 8 | 9 | #Find known_hosts key issues given a list of server names in stdin 10 | 11 | STATUS=0 12 | 13 | while read line;do 14 | if ! 
ssh-keygen -H -F $line | grep "$(ssh-keyscan -t rsa ${line} 2>/dev/null | awk '{print $3}')" &> /dev/null;then 15 | echo $line 16 | STATUS=1 17 | fi 18 | done 19 | 20 | exit ${STATUS} 21 | -------------------------------------------------------------------------------- /bin/logchecker.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Created by Sam Gleske (sag47@drexel.edu) 3 | # Created 5 Oct 2011 4 | # 5 | # Usage: logchecker.py -s list [-h] [options] < logchecker.mbox 6 | # 7 | # Description: 8 | # Sam's python script for filtering the filtered logs emailed by logchecker from scribe. 9 | # Searches logchecker logs for specific servers. 10 | # Logchecker emails must be exported into an mbox format. 11 | # 12 | # KMail exports to mbox by default. 13 | # Thunderbird requires add-on ImportExportTools 14 | # http://nic-nac-project.de/~kaosmos/mboximport-en.html 15 | 16 | from sys import stdin 17 | from sys import stderr 18 | from sys import exit 19 | from optparse import OptionParser 20 | import re 21 | from os.path import isfile 22 | 23 | """ 24 | Global Variables 25 | Filters comprise of a file of OR expressions. All expressions in the filters file will be matched against a line at once. 26 | """ 27 | PROG_VERSION="v0.4" 28 | 29 | #Setting ENABLE_FILTERING=True forces filters to be on, otherwise it will be optional depending on the options passed to the program. 30 | ENABLE_FILTERING=True 31 | FILTERS_FILE="/home/sam/.config/logchecker.filters" #for a better description of the contents of FILTERS_FILE see process_filters_file() comment block written below 32 | #Setting ENABLE_SERVERS_FILE=True forces servers file to be processed for filtering by server name, otherwise it will be optional depending on the options passed to the program. 33 | ENABLE_SERVERS_FILE=True 34 | SERVERS_FILE="/home/sam/.config/logchecker.servers" #for a better description of the contents of SERVERS_FILE see process_servers_file() comment block written below 35 | #global variable filters is created by process_filters_file() function 36 | #filters = "" 37 | 38 | 39 | def main(): 40 | """ 41 | main() function used as the main entry point. 42 | 43 | Reads stdin of program and processes it 44 | """ 45 | global ENABLE_FILTERING,ENABLE_SERVERS_FILE,FILTERS_FILE,filters,sl 46 | #parsing the arguments for -w and -c 47 | #see docs http://docs.python.org/library/optparse.html 48 | parser = OptionParser( 49 | usage = "usage: %prog [-h] [options] < logchecker.mbox", 50 | version = "%prog " + PROG_VERSION + " created by Sam Gleske (sag47@drexel.edu)", 51 | description="This script filters logchecker logs from mbox files exported from mail clients by using stdin. Filter logs for specific servers. You must export the logchecker emails into the mbox format. It does not matter if you include the Logchecker Summary or not; it will be ignored." 52 | ) 53 | parser.add_option("-s","--server",action="store",type="str",dest="servers",default=False,help="Comma Separated list of servers in the logchecker list to filter for.",metavar="list") 54 | parser.add_option("-n","--servers-file",action="store",type="str",dest="SERVERS_FILE",default=None,help="One per line list of servers in a file to filter for. 
(Similar to -s)",metavar="FILE") 55 | parser.add_option("-f","--filters-file",action="store",type="str",dest="FILTERS_FILE",default=None,help="FILE contains filters which will be used to locally filter out logs line by line.",metavar="FILE") 56 | parser.add_option("-d","--disable-filters",action="store_true",dest="DISABLE_FILTERING",default=False,help="Disable line by line filtering no matter what options are passed.") 57 | parser.add_option("-i","--invert-servers",action="store_true",dest="invert",default=False,help="Invert the list of servers to include only servers *not* in the list.") 58 | (options,args) = parser.parse_args() 59 | 60 | if not ENABLE_FILTERING: 61 | ENABLE_FILTERING = bool(options.FILTERS_FILE) 62 | FILTERS_FILE = options.FILTERS_FILE 63 | if not ENABLE_SERVERS_FILE: 64 | ENABLE_SERVERS_FILE = bool(options.SERVERS_FILE) 65 | SERVERS_FILE = options.SERVERS_FILE 66 | 67 | if options.DISABLE_FILTERING: 68 | ENABLE_FILTERING = False 69 | 70 | sl = [] 71 | if bool(options.servers): 72 | sl = options.servers.split(',') 73 | 74 | #start processing data from stdin 75 | data = stdin.read() 76 | 77 | #process the filters file if filtering is enabled 78 | if ENABLE_FILTERING: 79 | process_filters_file() 80 | #process the servers file if it is enabled 81 | if ENABLE_SERVERS_FILE: 82 | process_servers_file() 83 | 84 | #split mbox file up into separate messages. Messages will be handled individualy 85 | #docs http://docs.python.org/library/re.html 86 | messages = re.split(r'From [-a-zA-Z0-9\.]+@[a-zA-Z0-9\.]+\s+[,a-zA-Z]{3,4}\s*[0-9]*\s*[a-zA-Z]{3}\s+[0-9]+\s+[0-9:]+\s+[-0-9]+',data) 87 | 88 | for msg in messages: 89 | #handle the data within each individual message 90 | splitdata = re.split(r'={4}=+',msg) 91 | 92 | #filter logs only for servers which I am concerned (only servers in the server list sl) 93 | x=1 94 | while x < len(splitdata): 95 | if not splitdata[x].split('\'')[1] in sl: 96 | notfound = True 97 | if len(sl) == 0: 98 | notfound = False 99 | else: 100 | for i in range(len(splitdata[x].split('\'')[1].split('.'))): 101 | if options.invert: 102 | if not splitdata[x].split('\'')[1].split('.')[i] in sl: 103 | notfound = False 104 | else: 105 | if splitdata[x].split('\'')[1].split('.')[i] in sl: 106 | notfound = False 107 | if notfound: 108 | x=x+2 109 | continue 110 | header = '\t\t' + "="*40 + splitdata[x] + "="*40 111 | if ENABLE_FILTERING: 112 | linebyline = splitdata[x+1].split('\n') 113 | #filter out logs by line 114 | #remove lines which match the filters similar to a NOT filter for regex 115 | #filters is a global variable 116 | linebyline = [line for line in linebyline if not filters.match(line)] 117 | #filter out empty lines. If there are less than 2 fields in the list then it means the list is empty because it contains ['\t\t'] 118 | #this if statement is necessary because there is no point in printing a server name if there are no logs in it 119 | if len(filter(len,linebyline)) < 2: 120 | x=x+2 121 | continue 122 | print header 123 | for line in linebyline: 124 | print line 125 | else: 126 | print header 127 | print splitdata[x+1] 128 | x=x+2 129 | 130 | def process_filters_file(): 131 | """ 132 | This is only done once and executed by the main() function. 133 | 134 | ABOUT THIS FILTERS_FILE 135 | I use this file as a way to filter servers even more. Not all admins wish to filter as much as I do so this is one way in which I can filter my local copy of logs without affecting the view of other admins. 
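    For example, a FILTERS_FILE might contain (these patterns are hypothetical):
        # ignore routine cron session noise
        .*CRON.*pam_unix\(cron:session\).*
        .*rsyslogd.*was HUPed.*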
This is a more 136 | line by line log filter in addition to the hostname filter for logchecker.py. 137 | 138 | RULES FOR FORMATTING THIS FILTERS_FILE 139 | Comments are lines that start with a hash #. Nested hashes are not evaluated as comments. 140 | Each line is an expression which will be evaluated as an OR regex. 141 | The entire file is a string of OR regexes which will be compiled into a single regex to match against entire single lines. 142 | Blank lines will be ignored. 143 | Spaces on blank lines are also ignored. 144 | """ 145 | global filters 146 | if not isfile(FILTERS_FILE): 147 | stderr.write("STDERR: Filters file does not exist: " + FILTERS_FILE + "\n") 148 | stderr.write("STDERR: Try -h or --help options\n") 149 | stderr.write("STDERR: Alternatively configure the FILTERS_FILE variable at the top of logchecker.py or disable filtering by setting ENABLE_FILTERING=False\n") 150 | stderr.write("STDERR: Exiting.\n") 151 | exit(1) 152 | f = open(FILTERS_FILE,'r') 153 | filters = f.read() 154 | f.close() 155 | #remove comments from the filters list. 156 | #this basically splits the file into a list, remove all lines that start with a hash (#) and also if they contain only spaces. 157 | filters = [expr for expr in filters.split('\n') if not re.match(r'^#.*|^\s*$',expr)] 158 | #remove all entries in the filters list which are empty 159 | filters = filter(len,filters) 160 | #and rejoin the list using pipes as a separator (|) so that the filters can be compiled into a regex object 161 | #add ^expr$ to each expression so that it is matched against the whole line 162 | filters = '|'.join(filters) 163 | #filters = '$|^'.join(filters) 164 | #filters = '^' + filters + '$' 165 | #turn the string of expressions into a regex object 166 | filters = re.compile(filters) 167 | def process_servers_file(): 168 | """ 169 | This is only done once and executed by the main() function. 170 | 171 | ABOUT THIS SERVERS_FILE 172 | I use this file as a way to make filtering server names even easier. Rather than passing in a giant list to the -s 173 | option one could opt-in to use the SERVERS_FILE instead. 174 | 175 | RULES FOR FORMATTING THIS SERVERS_FILE 176 | Comments are lines that start with a hash #. Nested hashes are not evaluated as comments. 177 | Each line is a server name which will be displayed in the logchecker logs. 178 | The host name can be just the leading server name, e.g. myhost, or the FQDN, e.g. myhost.server.com. 179 | Blank lines will be ignored. 180 | Spaces on blank lines are also ignored. 181 | """ 182 | global sl 183 | if not isfile(SERVERS_FILE): 184 | stderr.write("STDERR: Servers file does not exist: " + SERVERS_FILE + "\n") 185 | stderr.write("STDERR: Try -h or --help options\n") 186 | stderr.write("STDERR: Alternatively configure the SERVERS_FILE variable at the top of logchecker.py or disable filtering by setting ENABLE_SERVERS_FILE=False\n") 187 | stderr.write("STDERR: Exiting.\n") 188 | exit(1) 189 | f = open(SERVERS_FILE,'r') 190 | servers = f.read() 191 | f.close() 192 | #remove comments from the filters list. 193 | #this basically splits the file into a list, remove all lines that start with a hash (#) and also if they contain only spaces. 
194 | servers = [expr for expr in servers.split('\n') if not re.match(r'^#.*|^\s*$',expr)] 195 | #remove all entries in the filters list which are empty 196 | servers = filter(len,servers) 197 | #Append servers to list 198 | sl = sl + servers 199 | 200 | 201 | if __name__ == "__main__": 202 | try: 203 | main() 204 | except IOError: 205 | pass 206 | #cleaning up open file handles 207 | stdin.close() 208 | stderr.close() 209 | 210 | 211 | 212 | 213 | 214 | 215 | 216 | # CHANGELOG 217 | # Wed Feb 12 17:43:51 EST 2014 v0.4 released 218 | # Added --invert-servers option so that only servers *not* in the list are displayed. 219 | # Tue Feb 11 13:13:00 EST 2014 v0.3 released 220 | # Added --servers-file option. Ability to specify a file of hostnames to search for. 221 | # Wed May 30 19:50:19 EDT 2012 v0.2 released 222 | # Added two options (--filters-file and --disable-filters). Ability to filter logs line by line to cut out noise. 223 | # Second option is to disable filtering. 224 | # Created 5 Oct 2011 v0.1 released 225 | -------------------------------------------------------------------------------- /bin/ls.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #Author: Sam Gleske 3 | #Origin: https://github.com/sag47/drexel-university/ 4 | #Thu Nov 8 10:13:27 EST 2012 5 | #Python 2.7.3 6 | #Description: Speed up the listing of a directory when there are too many files for ls to handle. 7 | #Usage: 8 | # echo * | ls.py 9 | 10 | #" ".join(str.split(' ')[1:3]) 11 | import sys 12 | from os.path import * 13 | from optparse import OptionParser 14 | 15 | #command options 16 | parser=OptionParser() 17 | parser.add_option("-0","--null",action="store_true",dest="separator",default=False,help="Use null separator instead of new lines.") 18 | parser.add_option("-d","--dirs",action="store_true",dest="dirs_only",default=False,help="Only display directories.") 19 | parser.add_option("-f","--files",action="store_true",dest="files_only",default=False,help="Only display files.") 20 | 21 | (options,args)=parser.parse_args() 22 | if options.dirs_only and options.files_only: 23 | sys.stderr.write("Can't specify -f and -d options together. 
See ls.py --help.\n") 24 | sys.exit(1) 25 | if options.separator: 26 | separator="\0" 27 | else: 28 | separator="\n" 29 | 30 | #Check the existing path against options 31 | def checkfile(value): 32 | if (not options.dirs_only) and (not options.files_only): 33 | return True 34 | elif options.dirs_only and isdir(value): 35 | return True 36 | elif options.files_only and isfile(value): 37 | return True 38 | else: 39 | return False 40 | 41 | #do all initial calculations 42 | files=sys.stdin.read() 43 | files=files.split() 44 | length=len(files) 45 | 46 | start=0 47 | end=1 48 | 49 | 50 | 51 | while end <= length: 52 | if exists(" ".join(files[start:end])): 53 | if checkfile(" ".join(files[start:end])): 54 | sys.stdout.write( "%s%s" % (" ".join(files[start:end]),separator) ) 55 | start,end=end,end+1 56 | else:#skip file 57 | start,end=end,end+1 58 | continue 59 | else: 60 | end=end+1 61 | 62 | #successful run 63 | sys.exit(0) 64 | -------------------------------------------------------------------------------- /bin/ls2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #Author: Sam Gleske 3 | #Origin: https://github.com/sag47/drexel-university/ 4 | #Thu Nov 8 10:13:27 EST 2012 5 | #Python 2.7.3 6 | # 7 | #Description: 8 | # Speed up the listing of a directory when there are too many files for ls to handle. 9 | # In addition to the options this script has two modes: multiline and singleline. 10 | # These two modes are selected automatically based on received input but it handles 11 | # the processing of files differently. In multiline mode, each newline (\n) is processed 12 | # as a whole file path. In singleline mode, spaces are processed as a whole file path 13 | # and is expanded dynamically to account for spaces in file names. 14 | # 15 | # Please note: the -e option does not work with single line mode (handled gracefully). 16 | # This version is much slower than ls.py because it attempts to determine the mode for 17 | # multiline or singleline processing. 18 | # 19 | #Usage: 20 | # List out a very large directory (singleline mode determined) 21 | # echo * | ls2.py 22 | # Show non-existent files in a file list (multiline mode determined) 23 | # cat filelist | ls2.py -e 1> /dev/null 24 | 25 | #" ".join(str.split(' ')[1:3]) 26 | import sys,re 27 | from os.path import * 28 | from optparse import OptionParser 29 | 30 | #command options 31 | parser=OptionParser() 32 | parser.add_option("-0","--null",action="store_true",dest="separator",default=False,help="Use null separator instead of new lines.") 33 | parser.add_option("-d","--dirs",action="store_true",dest="dirs_only",default=False,help="Only display directories.") 34 | parser.add_option("-f","--files",action="store_true",dest="files_only",default=False,help="Only display files.") 35 | parser.add_option("-e","--err",action="store_true",dest="show_err_files",default=False,help="Output non-existing paths to stderr.") 36 | 37 | (options,args)=parser.parse_args() 38 | if options.dirs_only and options.files_only: 39 | sys.stderr.write("Can't specify -f and -d options together. 
See ls.py --help.\n") 40 | sys.exit(1) 41 | if options.separator: 42 | separator="\0" 43 | else: 44 | separator="\n" 45 | 46 | #Check the existing path against options 47 | def checkfile(value): 48 | if (not options.dirs_only) and (not options.files_only): 49 | return True 50 | elif options.dirs_only and isdir(value): 51 | return True 52 | elif options.files_only and isfile(value): 53 | return True 54 | else: 55 | return False 56 | 57 | #do all initial calculations 58 | files=sys.stdin.read() 59 | #determine if the file list is all on one line or if a multiline file list (determine mode) 60 | if ((files.split("\n")[1:] == ['']) and len(files.split("\n")) <= 2) or len(files.split("\n")) < 2: 61 | multiline_mode=False 62 | files=files.split() 63 | else: 64 | files=files.split("\n") 65 | files=filter(lambda x: not re.match(r'^\s*$', x), files) #remove empty list entries 66 | multiline_mode=True 67 | length=len(files) 68 | 69 | start=0 70 | end=1 71 | 72 | 73 | 74 | while end <= length: 75 | if exists(" ".join(files[start:end])): 76 | if checkfile(" ".join(files[start:end])): 77 | sys.stdout.write( "%s%s" % (" ".join(files[start:end]),separator) ) 78 | start,end=end,end+1 79 | else: 80 | start,end=end,end+1 #skip file (filtered out by checkfile) 81 | continue 82 | else: 83 | if multiline_mode: 84 | if options.show_err_files: 85 | sys.stderr.write( "%s%s" % (" ".join(files[start:end]),separator) ) 86 | start,end=end,end+1 #next file 87 | else: 88 | end=end+1 89 | 90 | #successful run 91 | sys.exit(0) 92 | -------------------------------------------------------------------------------- /bin/man2pdf: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | 4 | man -t "${1}" | ps2pdf - > "${1}.pdf" 5 | -------------------------------------------------------------------------------- /bin/missing_from_all_clusters.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Tue Aug 6 14:31:01 EDT 2013 4 | #Ubuntu 12.04.2 LTS 5 | #Linux 3.8.0-27-generic x86_64 6 | #GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu) 7 | 8 | #This is to show which aliases are missing from the All_clusters alias located at the top of my /etc/clusters file. 9 | for x in $(tail -n $(( $(wc -l /etc/clusters | cut -d\ -f1)-1 )) /etc/clusters | grep -v '^$' | cut -d\ -f1);do if ! head -n1 /etc/clusters | grep $x &>/dev/null;then echo $x;fi;done 10 | -------------------------------------------------------------------------------- /bin/pytailuntil.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #Created by Sam Gleske 3 | #Created on Fri Nov 30 14:38:16 EST 2012 4 | # 5 | #Tested: 6 | # Python 2.4.3 (#1, Jun 11 2009, 14:09:37) 7 | # Python 2.7.3 8 | # 9 | #Description: 10 | # "tail -f" a file until a phrase has been displayed and then exit. 11 | # 12 | #Modified from pytailer. 
13 | http://code.google.com/p/pytailer/source/browse/src/tailer/__init__.py 14 | 15 | import time 16 | 17 | class Tailer(object): 18 | line_terminators = ('\r\n', '\n', '\r') 19 | def __init__(self, file, read_size=1024, end=False): 20 | self.read_size = read_size 21 | self.file = file 22 | self.start_pos = self.file.tell() 23 | if end: 24 | self.seek_end() 25 | def seek_end(self): 26 | self.seek(0, 2) 27 | def seek(self, pos, whence=0): 28 | self.file.seek(pos, whence) 29 | def follow(self, delay=1.0): 30 | """\ 31 | Iterator generator that returns lines as data is added to the file. 32 | Based on: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/157035 33 | """ 34 | trailing = True 35 | while 1: 36 | where = self.file.tell() 37 | line = self.file.readline() 38 | if line: 39 | if trailing and line in self.line_terminators: 40 | # This is just the line terminator added to the end of the file 41 | # before a new line, ignore. 42 | trailing = False 43 | continue 44 | if line[-1] in self.line_terminators: 45 | line = line[:-1] 46 | if line[-1:] == '\r' and '\r\n' in self.line_terminators: 47 | # found crlf; strip the leftover carriage return as well 48 | line = line[:-1] 49 | trailing = False 50 | yield line 51 | else: 52 | trailing = True 53 | self.seek(where) 54 | time.sleep(delay) 55 | def __iter__(self): 56 | return self.follow() 57 | def close(self): 58 | self.file.close() 59 | 60 | def help(): 61 | print """Name: 62 | pytailuntil - good for tailing service startup logs 63 | 64 | Synopsis: 65 | python pytailuntil.py /path/to/file.log "phrase to find" 66 | 67 | Description: 68 | "tail -f" a file until a phrase has been displayed and then exit. 69 | 70 | Modified from pytailer. 71 | http://code.google.com/p/pytailer/source/browse/src/tailer/__init__.py""" 72 | 73 | def _main(filepath,phrase): 74 | import re 75 | tailer = Tailer(open(filepath,'rb')) 76 | phrase_regex = re.compile(phrase) 77 | try: 78 | try: 79 | tailer.seek_end() 80 | for line in tailer.follow(delay=1.0): 81 | print line 82 | if re.search(phrase_regex,line) is not None: 83 | break 84 | except KeyboardInterrupt: 85 | pass 86 | finally: 87 | tailer.close() 88 | 89 | def main(): 90 | import sys 91 | from os.path import isfile 92 | if len(sys.argv) < 3 or sys.argv[1] == "-h" or sys.argv[1] == "--help": 93 | help() 94 | sys.exit() 95 | if isfile(sys.argv[1]): 96 | _main(sys.argv[1],sys.argv[2]) 97 | else: 98 | print >>sys.stderr, 'File does not exist, try --help' 99 | sys.exit(1) 100 | 101 | if __name__ == '__main__': 102 | main() 103 | -------------------------------------------------------------------------------- /bin/servercount: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #your /etc/clusters cssh file must have all aliases designated with one of the following prefixes: cluster_ host_ All_ 4 | 5 | #output the servers 6 | #cat /etc/clusters | tr ' ' '\n' | grep -v '^$\|cluster_\|host_\|All_' | sort -u | wc -l 7 | cat /etc/clusters | tr ' ' '\n' | grep -v '^$\|cluster_\|host_\|All_' | sort -u 8 | -------------------------------------------------------------------------------- /bin/sort_clusters: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #Sanity configuration checking 4 | if [ -z "$(head -n1 /etc/clusters | sed 's/\s\+//g')" ];then 5 | echo "The first line of /etc/clusters must not be blank" 6 | exit 1 7 | elif [ !
"$(head -n1 /etc/clusters | awk '{print $1}')" = "All_clusters" ];then 8 | echo "All_clusters must be the first entry in /etc/clusters" 9 | exit 1 10 | fi 11 | 12 | #sort all of the entries on the first line 13 | head -n1 /etc/clusters | tr ' ' '\n' | sort | tr '\n' ' ' | sed 's/ \(.*\)/\1\n\n/' > /tmp/clusters 14 | 15 | #sort all following lines 16 | grep -v '^All_clusters' /etc/clusters | sort | grep -v '^$' | while read line;do echo -e "${line}\n";done >> /tmp/clusters 17 | 18 | #overwrite the current cluster configuration 19 | cat /tmp/clusters > /etc/clusters 20 | \rm /tmp/clusters 21 | -------------------------------------------------------------------------------- /bin/update_hostname.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #Fri Feb 14 15:46:16 EST 2014 4 | #Ubuntu 13.10 5 | #Linux 3.11.0-12-generic x86_64 6 | #GNU bash, version 4.2.45(1)-release (x86_64-pc-linux-gnu) 7 | #DESCRIPTION: 8 | # To be used in Ubuntu Server VM. When cloning a VM the hostname 9 | # should be updated. This is a small script to update the hostname. 10 | 11 | if [ ! "$USER" = "root" ];then 12 | echo "Must be root!" 13 | exit 1 14 | fi 15 | if [ -z "$1" ];then 16 | echo "Must provide hostname as arg!" 17 | echo "Usage:" 18 | echo " update_hostname.sh myhost" 19 | exit 1 20 | fi 21 | currenthost="$(head -n1 /etc/hostname)" 22 | sed -i 's/'"$currenthost"'/'"$1"'/g' /etc/hostname 23 | sed -i 's/'"$currenthost"'/'"$1"'/g' /etc/hosts 24 | hostname $1 25 | -------------------------------------------------------------------------------- /bin/wasted-ram-updates.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python -t 2 | # -*- coding: utf-8 -*- 3 | #Author: James Antill 4 | #Contributors: 5 | # Sam Gleske 6 | 7 | #Source1: http://www.redhat.com/archives/rhl-list/2009-July/msg00228.html 8 | #Source2: http://markmail.org/message/dodinyrhwgey35mh 9 | #Acquired Fri Nov 9 09:09:28 EST 2012 10 | 11 | #Successfully tested environments: 12 | # Ubuntu 12.04.1 LTS (Kubuntu) 13 | # Python 2.7.3 14 | # Red Hat Enterprise Linux Server release 6.3 (Santiago) 15 | # Python 2.6.6 16 | 17 | #DESCRIPTION: 18 | # Check /proc directory for programs with open file handles to 19 | # deleted files. Meaning the program is using the file in memory but the 20 | # it has been removed from the filesystem (such as a library). 21 | # 22 | # The purpose of this script is to safely determine which services should 23 | # be restarted after updating many packages on a server. 24 | # 25 | # Note that a few entries in this list are "normal" e.g. a program opens a 26 | # file for temporary storage, and then deletes it (but keeps the handle), 27 | # but most will indicate a process that needs to be restarted to reload 28 | # files from a new package. 29 | 30 | 31 | #USAGE: 32 | # Run with default behavior. This will group by file name. 33 | # python wasted-ram-updates.py 34 | # Group output by pids so that you can see which files the process has open. 35 | # python wasted-ram-updates.py pids 36 | # Only show a summary. 37 | # python wasted-ram-updates.py summary 38 | 39 | import os 40 | import sys 41 | 42 | 43 | # Decent (UK/US English only) number formatting. 
44 | import locale 45 | locale.setlocale(locale.LC_ALL, '') 46 | 47 | #help documentation 48 | if len(sys.argv) > 1 and (sys.argv[1] == "-h" or sys.argv[1] == "--help"): 49 | print """ 50 | NAME: 51 | wasted-ram-updates.py - discovers programs with open file handles to 52 | deleted files. 53 | 54 | SYNOPSIS: 55 | wasted-ram-updates.py [OPTIONS] 56 | 57 | DESCRIPTION: 58 | The purpose of this script is to safely determine which services should 59 | be restarted after updating many packages on a server. 60 | 61 | Note that a few entries in this list are "normal" e.g. a program opens a 62 | file for temporary storage, and then deletes it (but keeps the handle), 63 | but most will indicate a process that needs to be restarted to reload 64 | files from a new package. 65 | 66 | OPTIONS: 67 | By default executing this program with no arguments will organize the 68 | output by file handle to deleted file. 69 | 70 | -h, --help 71 | show this help documentation 72 | 73 | pids 74 | Organize the output by PID 75 | 76 | summary 77 | Only displays summary information. 78 | 79 | EXAMPLES: 80 | wasted-ram-updates.py 81 | Run with default behavior. This will group by file name. 82 | 83 | wasted-ram-updates.py pids 84 | Group output by pids so that you can see which files the process 85 | has open. 86 | 87 | wasted-ram-updates.py summary 88 | Only show a summary. 89 | 90 | AUTHORS: 91 | Written by James Antill 92 | Contributors: 93 | Sam Gleske 94 | 95 | LINKS: 96 | Source 1: 97 | http://www.redhat.com/archives/rhl-list/2009-July/msg00228.html 98 | Source 2: 99 | http://markmail.org/message/dodinyrhwgey35mh 100 | """ 101 | sys.exit(0) 102 | 103 | def loc_num(x): 104 | """ Return a string of a number in the readable "locale" format. """ 105 | return locale.format("%d", int(x), True) 106 | def kmgtp_num(x): 107 | """ Return a string of a number in the MEM size format, Ie. "30 MB". """ 108 | ends = [" ", "K", "M", "G", "T", "P"] 109 | while len(ends) and x > 1024: 110 | ends.pop(0) 111 | x /= 1024 112 | return "%u %s" % (x, ends[0]) 113 | 114 | def cmdline_from_pid(pid): 115 | """ Fetch command line from a process id. """ 116 | try: 117 | cmdline= open("/proc/%i/cmdline" %pid).readlines()[0] 118 | return " ".join(cmdline.split("\x00")).rstrip() 119 | except: 120 | return "" 121 | 122 | pids = {} 123 | for d in os.listdir("/proc/"): 124 | try: 125 | pid = int(d) 126 | pids[pid] = lambda x: x 127 | pids[pid].files = set() 128 | pids[pid].vsz = 0 129 | pids[pid].s_size = 0 130 | pids[pid].s_rss = 0 131 | pids[pid].s_shared_clean = 0 132 | pids[pid].s_shared_dirty = 0 133 | pids[pid].s_private_clean = 0 134 | pids[pid].s_private_dirty = 0 135 | pids[pid].referenced = 0 136 | pids[pid].name = cmdline_from_pid(pid) 137 | except: 138 | pass 139 | 140 | def map_sz(x): 141 | """ Work out vsz from mapping range. 
""" 142 | (beg, end) = x.split('-') 143 | return (int(end, 16) - int(beg, 16)) 144 | 145 | files = {} 146 | for pid in pids.keys(): 147 | try: 148 | try: 149 | lines = open("/proc/%d/smaps" % pid).readlines() 150 | smaps = True 151 | except: 152 | lines = open("/proc/%d/maps" % pid).readlines() 153 | smaps = False 154 | 155 | off = 0 156 | while off < len(lines): 157 | line = lines[off] 158 | off += 1 159 | try: 160 | int(line[0]) 161 | except: 162 | continue 163 | 164 | data = line.split(None, 5) 165 | try: 166 | ino = int(data[4]) 167 | dev = int(data[3].split(":", 2)[0], 16) 168 | except: 169 | print "DBG: Bad line:", lines[off - 1] 170 | print "DBG: data=", data 171 | continue 172 | 173 | if dev == 0: 174 | continue 175 | if ino == 0: 176 | continue 177 | if '(deleted)' not in data[5]: 178 | continue 179 | 180 | key = "%s:%d" % (data[3], ino) 181 | if key not in files: 182 | files[key] = lambda x: x # Hack 183 | 184 | files[key].s_size = 0 185 | files[key].s_rss = 0 186 | files[key].s_shared_clean = 0 187 | files[key].s_shared_dirty = 0 188 | files[key].s_private_clean = 0 189 | files[key].s_private_dirty = 0 190 | files[key].referenced = 0 191 | 192 | files[key].vsz = 0 193 | files[key].pids = set() 194 | files[key].name = data[5] 195 | 196 | num = map_sz(data[0]) 197 | pids[pid].vsz += num 198 | pids[pid].files.update([key]) 199 | files[key].vsz += num 200 | files[key].pids.update([pid]) 201 | try: 202 | if smaps: 203 | off += 1 204 | num = int(lines[off].split(None, 3)[1]) 205 | pids[pid].s_size += num 206 | files[key].s_size += num 207 | off += 1 208 | num = int(lines[off].split(None, 3)[1]) 209 | pids[pid].s_rss += num 210 | files[key].s_rss += num 211 | off += 1 212 | num = int(lines[off].split(None, 3)[1]) 213 | pids[pid].s_shared_clean += num 214 | files[key].s_shared_clean += num 215 | off += 1 216 | num = int(lines[off].split(None, 3)[1]) 217 | pids[pid].s_shared_dirty += num 218 | files[key].s_shared_dirty += num 219 | off += 1 220 | num = int(lines[off].split(None, 3)[1]) 221 | pids[pid].s_private_clean += num 222 | files[key].s_private_clean += num 223 | off += 1 224 | num = int(lines[off].split(None, 3)[1]) 225 | pids[pid].s_private_dirty += num 226 | files[key].s_private_dirty += num 227 | off += 1 228 | try: 229 | num = int(lines[off].split(None, 3)[1]) 230 | pids[pid].referenced += num 231 | files[key].referenced += num 232 | off += 1 233 | except: 234 | pass 235 | except: 236 | print "DBG: Bad data:", lines[off - 1] 237 | 238 | except: 239 | pass 240 | 241 | vsz = 0 242 | s_size = 0 243 | s_rss = 0 244 | s_shared_clean = 0 245 | s_shared_dirty = 0 246 | s_private_clean = 0 247 | s_private_dirty = 0 248 | referenced = 0 249 | 250 | out_type = "files" 251 | if len(sys.argv) > 1 and sys.argv[1] == "pids": 252 | out_type = "pids" 253 | if len(sys.argv) > 1 and sys.argv[1] == "summary": 254 | out_type = "summary" 255 | 256 | for x in files.values(): 257 | vsz += x.vsz 258 | s_size += x.s_size 259 | s_rss += x.s_rss 260 | s_shared_clean += x.s_shared_clean 261 | s_shared_dirty += x.s_shared_dirty 262 | s_private_clean += x.s_private_clean 263 | s_private_dirty += x.s_private_dirty 264 | referenced += x.referenced 265 | 266 | if out_type == "files": 267 | print "%5sB:" % kmgtp_num(x.vsz), x.name, 268 | print "\ts_size = %5sB" % kmgtp_num(x.s_size * 1024) 269 | print "\ts_rss = %5sB" % kmgtp_num(x.s_rss * 1024) 270 | print "\ts_shared_clean = %5sB" % kmgtp_num(x.s_shared_clean * 1024) 271 | print "\ts_shared_dirty = %5sB" % kmgtp_num(x.s_shared_dirty * 1024) 272 | print 
"\ts_private_clean = %5sB" % kmgtp_num(x.s_private_clean * 1024) 273 | print "\ts_private_dirty = %5sB" % kmgtp_num(x.s_private_dirty * 1024) 274 | print "\treferenced = %5sB" % kmgtp_num(x.referenced * 1024) 275 | for pid in frozenset(x.pids): 276 | print "\t\t", pid, pids[pid].name 277 | 278 | 279 | for pid in pids.keys(): 280 | if not pids[pid].vsz: 281 | del pids[pid] 282 | 283 | if out_type == "pids": 284 | for pid in pids.keys(): 285 | print "%5sB:" % kmgtp_num(pids[pid].vsz), pid, pids[pid].name 286 | print "\ts_size = %5sB" % kmgtp_num(pids[pid].s_size * 1024) 287 | print "\ts_rss = %5sB" % kmgtp_num(pids[pid].s_rss * 1024) 288 | print "\ts_shared_clean = %5sB" % kmgtp_num(pids[pid].s_shared_clean * 1024) 289 | print "\ts_shared_dirty = %5sB" % kmgtp_num(pids[pid].s_shared_dirty * 1024) 290 | print "\ts_private_clean = %5sB" % kmgtp_num(pids[pid].s_private_clean * 1024) 291 | print "\ts_private_dirty = %5sB" % kmgtp_num(pids[pid].s_private_dirty * 1024) 292 | print "\treferenced = %5sB" % kmgtp_num(pids[pid].referenced * 1024) 293 | for key in pids[pid].files: 294 | print "\t\t", files[key].name, 295 | 296 | print "\ 297 | ==============================================================================" 298 | print "files = %8s" % loc_num(len(files)) 299 | print "pids = %8s" % loc_num(len(pids.keys())) 300 | print "vsz = %5sB" % kmgtp_num(vsz) 301 | print "\ 302 | ------------------------------------------------------------------------------" 303 | print "s_size = %5sB" % kmgtp_num(s_size * 1024) 304 | print "s_rss = %5sB" % kmgtp_num(s_rss * 1024) 305 | print "s_shared_clean = %5sB" % kmgtp_num(s_shared_clean * 1024) 306 | print "s_shared_dirty = %5sB" % kmgtp_num(s_shared_dirty * 1024) 307 | print "s_private_clean = %5sB" % kmgtp_num(s_private_clean * 1024) 308 | print "s_private_dirty = %5sB" % kmgtp_num(s_private_dirty * 1024) 309 | print "referenced = %5sB" % kmgtp_num(referenced * 1024) 310 | print "\ 311 | ==============================================================================" 312 | -------------------------------------------------------------------------------- /dotfiles/.bash_aliases: -------------------------------------------------------------------------------- 1 | alias vpn='openconnect -u sag47 -s /etc/vpnc/vpnc-script https://vpn.drexel.edu/' 2 | alias vpntest='openconnect -u sag47 -s /etc/vpnc/vpnc-script -g IRT-Private https://vpntest.drexel.edu/' 3 | 4 | #more aliases 5 | #alias ls='ls -lah --color=auto' 6 | alias ls='ls --color=auto' 7 | alias ssh='ssh -C' 8 | alias df='df -h' 9 | alias du='du -shc' 10 | alias amarokbackupdb='mysqldump --add-drop-table -u amarokuser -pamarok amarokdb > ~/Documents/amarok-backup.sql' 11 | alias firefoxvacuum='echo "sqlite3 VACUUM and REINDEX on firefox";for x in `find ~ -type f -name *.sqlite* | grep firefox`;do echo "$x";sqlite3 $x VACUUM;sqlite3 $x REINDEX;done' 12 | alias tux='ssh -C sag47@tux.cs.drexel.edu' 13 | alias x='exit' 14 | #alias cp='rsync -ruptv' 15 | alias irc_rizon='ssh -p23 -f sag47@home.gleske.net -L 16 | 1025:irc.rizon.net:6667 -N' 17 | alias irc_freenode='ssh -p23 -f sag47@home.gleske.net -L 18 | 1024:irc.freenode.net:6667 -N' 19 | alias vnc_tunnel='ssh -p23 -f sag47@home.gleske.net -L 2000:localhost:5902 -N' 20 | alias vnc_connect='ssvncviewer -passwd /home/sam/.vnc/passwd localhost:2000' 21 | alias vnc_kill_tunnel='/home/sam/.vnc/killtunnel.sh' 22 | alias vnc_tunnel_internal='ssh -p23 -f sag47@home.gleske.net -L 2001:hda.home:5902 -N' 23 | alias bzflag_connect='bzflag -window -geometry 1024x768' 24 | 
alias wychcraft='xfreerdp -u sag47 -d drexel -g 1260x965 wychcraft.irt.drexel.edu' 25 | alias rdp_tunnel='ssh -f sag47@home.gleske.net -L 2003:etherbeast.home:3389 -N && echo $!' 26 | alias rdp_connect='xfreerdp -t 2003 -u sam -g 1024x768 localhost' 27 | #alias servercount="echo $((`sed '1d' /etc/clusters | grep -v '^$' | wc -w`-`sed '1d' /etc/clusters | grep -v '^$' | wc -l`))" 28 | -------------------------------------------------------------------------------- /dotfiles/.bashrc_custom: -------------------------------------------------------------------------------- 1 | #set up some common aliases 2 | if ! which vim &> /dev/null;then 3 | alias vim="vi" 4 | fi 5 | alias x='exit' 6 | alias l.='ls -d .* --color=auto' 7 | alias ll='ls -l --color=auto' 8 | alias ls='ls --color=auto' 9 | 10 | #other terminal fun stuff 11 | if echo "${PATH}" | grep -v "~/bin" &> /dev/null;then 12 | export PATH="${PATH}:~/bin" 13 | fi 14 | export PS1="\`if [ \$? = 0 ]; then echo \[\e[33m\]^_^\[\e[0m\]; else echo \[\e[31m\]O_O\[\e[0m\]; fi\`[\u@\h:\w]\\$ " 15 | export EDITOR="vim" 16 | if [ "$(id -u)" -eq "0" ];then 17 | export HOME="/root" 18 | export HISTFILE="$HOME/.bash_history" 19 | export MAIL="/var/spool/mail/root" 20 | fi 21 | -------------------------------------------------------------------------------- /dotfiles/.gitconfig: -------------------------------------------------------------------------------- 1 | [alias] 2 | tree = log --graph --all --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(bold white). %an%C(reset)%C(bold yellow)%d%C(reset)' --abbrev-commit --date=relative 3 | tree2 = log --graph --all --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n'' %C(white)%s%C(reset) %C(bold white). %an%C(reset)' --abbrev-commit 4 | -------------------------------------------------------------------------------- /dotfiles/.vimperatorrc: -------------------------------------------------------------------------------- 1 | "3.2 (created: 2011/06/03 11:51:34) 2 | 3 | set titlestring=' Mozilla Firefox' 4 | nmap K gt 5 | nmap J gT 6 | set hintchars=hjklasdfgyuiopqwertnmzxcvb 7 | set "hinttags=//*[@onclick or @onmouseover or @onmousedown or @onmouseup or @oncommand or @class='lk' or @role='link' or @role='button'] | //input[not(@type='hidden')] | //a | //area | //iframe | //textarea | //button | //select | //xhtml:input[not(@type='hidden')] | //xhtml:a | //xhtml:area | //xhtml:iframe | //xhtml:textarea | //xhtml:button | //xhtml:select | //div[contains(@class,'J-K-I J-J5-Ji')]" 8 | source! 
/home/sam/.vimperatorrc.local 9 | 10 | " vim: set ft=vimperator: 11 | -------------------------------------------------------------------------------- /dotfiles/.vimrc: -------------------------------------------------------------------------------- 1 | "this is a comment 2 | "type :help command to see the vim help docs for that command 3 | :filetype on 4 | :au FileType c,cpp,java set cindent 5 | "will display the trailing space 6 | :highlight ExtraWhitespace ctermfg=Grey ctermbg=LightGrey 7 | :autocmd ColorScheme * highlight ExtraWhitespace ctermfg=Grey ctermbg=LightGrey 8 | :au BufWinEnter *.py let w:m2=matchadd('ExtraWhitespace', '\s\+\%#\@79v.\+', -1) 11 | 12 | set nocompatible 13 | set shiftwidth=2 14 | "showmode indicates input or replace mode at botto 15 | set showmode 16 | set showmatch 17 | "shortcut for toggling paste while in insert mode, press F2 key 18 | set pastetoggle= 19 | set backspace=2 20 | "hlsearch for when there is a previous search pattern, highlight all its matches. 21 | set hlsearch 22 | "ruler shows line and char number in bottom right of vim 23 | set ruler 24 | "each line has line number prepended 25 | set number 26 | "expandtab means tabs create spaces in insert mode, softtabstop is the number of spaces created 27 | "tabstop affects visual representation of tabs only 28 | set tabstop=4 29 | set expandtab 30 | set softtabstop=2 31 | 32 | "always show status and tabs 33 | set laststatus=2 34 | "set showtabline=2 35 | 36 | "ignore case 37 | set ignorecase 38 | 39 | "set background=light 40 | set background=dark 41 | set autoindent 42 | if &t_Co > 1 43 | syntax enable 44 | endif 45 | 46 | ":w!! will ask for password when trying to write to system files 47 | cmap w!! %!sudo tee > /dev/null % 48 | 49 | set incsearch 50 | 51 | "This executes a command and puts output into a throw away scratch pad 52 | "source: http://vim.wikia.com/wiki/Display_output_of_shell_commands_in_new_window 53 | function! s:ExecuteInShell(command, bang) 54 | let _ = a:bang != '' ? s:_ : a:command == '' ? '' : join(map(split(a:command), 'expand(v:val)')) 55 | if (_ != '') 56 | let s:_ = _ 57 | let bufnr = bufnr('%') 58 | let winnr = bufwinnr('^' . _ . '$') 59 | silent! execute winnr < 0 ? 'belowright new ' . fnameescape(_) : winnr . 'wincmd w' 60 | setlocal buftype=nowrite bufhidden=wipe nobuflisted noswapfile wrap number 61 | silent! :%d 62 | let message = 'Execute ' . _ . '...' 63 | call append(0, message) 64 | echo message 65 | silent! 2d | resize 1 | redraw 66 | silent! execute 'silent! %!'. _ 67 | silent! execute 'resize ' . line('$') 68 | silent! execute 'syntax on' 69 | silent! execute 'autocmd BufUnload execute bufwinnr(' . bufnr . ') . ''wincmd w''' 70 | silent! execute 'autocmd BufEnter execute ''resize '' . line(''$'')' 71 | silent! execute 'nnoremap :call ExecuteInShell(''' . _ . ''', '''')' 72 | silent! execute 'nnoremap r :call ExecuteInShell(''' . _ . ''', '''')' 73 | silent! execute 'nnoremap g :execute bufwinnr(' . bufnr . ') . ''wincmd w''' 74 | nnoremap _ :execute 'resize ' . line('$') 75 | silent! syntax on 76 | endif 77 | endfunction 78 | command! -complete=shellcmd -nargs=* -bang Scratchpad call s:ExecuteInShell(, '') 79 | command! 
-complete=shellcmd -nargs=* -bang Scp call s:ExecuteInShell(, '') 80 | -------------------------------------------------------------------------------- /dotfiles/gpg.conf: -------------------------------------------------------------------------------- 1 | # GnuPG config file created by KGpg 2 | 3 | default-key 7257E65F 4 | use-agent 5 | keyserver-options import-clean 6 | keyserver pgp.mit.edu 7 | -------------------------------------------------------------------------------- /icinga/README.md: -------------------------------------------------------------------------------- 1 | These are customizations I've made to icinga with short descriptions. Any lacking details will eventually be updated. 2 | 3 | http://www.icinga.org/ 4 | -------------------------------------------------------------------------------- /icinga/plugins/check_db_connections: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # Author: Sam Gleske (sag47@drexel.edu) 4 | # Date: 30 Sep 2011 5 | # Description: Reports the number of database network connections open on port 2337 6 | # check_db_connections 7 | # tested on Python 2.4-2.6 8 | 9 | from sys import exit 10 | from optparse import OptionParser 11 | from commands import getstatusoutput 12 | 13 | #exit status 14 | UNKNOWN = 3 15 | OK = 0 16 | WARNING = 1 17 | CRITICAL = 2 18 | 19 | def main(): 20 | #parsing the arguments for -w and -c 21 | #see docs http://docs.python.org/library/optparse.html 22 | parser = OptionParser( 23 | #usage = "usage: %prog -w limit -c limit", 24 | version = "%prog v0.1 created by Sam Gleske", 25 | description="This nagios plugin is designed to give the number of database network connections open on port 2337." 26 | ) 27 | parser.add_option("-w",action="store",type="int",dest="w",default=False,help="Connection limit for the warning threshold.",metavar="limit") 28 | parser.add_option("-c",action="store",type="int",dest="c",default=False,help="Connection limit for the critical threshold.",metavar="limit") 29 | (options, args) = parser.parse_args() 30 | 31 | #Check to make sure the user even provided an argument for -c or -w and if not exit out 32 | if not bool(options.c) or not bool(options.w): 33 | print "Syntax error: no -w or -c specified, try -h for help" 34 | exit(UNKNOWN) 35 | 36 | 37 | # Execute a command which gives the number of network connections on port 2337. 38 | # docs http://docs.python.org/library/commands.html 39 | # docs http://docs.python.org/tutorial/introduction.html#lists 40 | (returnCode, response) = getstatusoutput('netstat -nt | grep 2337 | wc -l') 41 | 42 | if returnCode != 0: 43 | print "UNKNOWN - %s" % (response) 44 | exit(UNKNOWN) 45 | 46 | connections = int(response) 47 | 48 | #set the thresholds for warning and critical 49 | (wThresh,cThresh) = (options.w,options.c) 50 | #I know this seems excessive but it's required since -w and -c could either be an int or a string (because of the % symbol) 51 | if cThresh <= wThresh: 52 | print "UNKNOWN - warning (-w %s) must be less than critical (-c %s)" % (options.w, options.c) 53 | exit(UNKNOWN) 54 | 55 | #The output of the following is the python version of printf, see documentation. 
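#As an illustration (hypothetical values, not from a real run): with -w 100
#-c 150 and 120 open connections, the plugin would print
#  DB_CONNECTIONS WARNING - 120 connections |db_connections=120;100;150;0
#and exit WARNING (1).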
56 | #http://docs.python.org/library/stdtypes.html#string-formatting 57 | #performance data goes like so - |graph_title=current_value;warning_value;critical_value;graph_min_value;graph_max_value (optional) 58 | elif connections >= cThresh: 59 | print "DB_CONNECTIONS CRITICAL - %d connections |db_connections=%d;%d;%d;0" % (connections,connections,wThresh,cThresh) 60 | exit(CRITICAL) 61 | if connections >= wThresh: 62 | print "DB_CONNECTIONS WARNING - %d connections |db_connections=%d;%d;%d;0" % (connections,connections,wThresh,cThresh) 63 | exit(WARNING) 64 | else: 65 | print "DB_CONNECTIONS OK - %d connections |db_connections=%d;%d;%d;0" % (connections,connections,wThresh,cThresh) 66 | exit(OK) 67 | 68 | 69 | 70 | if __name__ == "__main__": 71 | main() 72 | -------------------------------------------------------------------------------- /icinga/plugins/check_last_fsck: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Wed Jul 3 11:06:59 EDT 2013 4 | #Checks the date of the last successful fsck of a filesystem. 5 | 6 | while test $# -gt 0;do 7 | case "$1" in 8 | -h|--help) 9 | cat < /dev/null | grep 'Last checked:' | sed 's/^Last checked:\s\+//')" +%s)" 114 | days_since_fsck="$(( ( ${current_time} - ${last_fsck} ) / 3600 / 24 ))" 115 | filesystem_state="$(${dumpe2fs_bin} -h ${dev_filesystem} 2> /dev/null | grep '^Filesystem state:' | sed 's/^Filesystem state:\s\+//')" 116 | 117 | #set the status to OK before processing 118 | STATUS="${OK}" 119 | if [ "${days_since_fsck}" -ge "${crit_thresh}" ] || [ "${filesystem_state}" != "clean" ];then 120 | STATUS="${CRITICAL}" 121 | elif [ "${days_since_fsck}" -ge "${warn_thresh}" ];then 122 | STATUS="${WARNING}" 123 | fi 124 | MESSAGE="FS_State=${filesystem_state}; Last_fsck=${days_since_fsck} days; FS=${dev_filesystem}" 125 | echo "${MESSAGE}" 126 | exit ${STATUS} 127 | -------------------------------------------------------------------------------- /icinga/plugins/check_md_raid: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Created by Sebastian Grewe, Jammicron Technology 4 | # 5 | # Modified by Sam Gleske (http://www.gleske.net/) 6 | #Source: http://exchange.nagios.org/directory/Plugins/Operating-Systems/Linux/check_md_raid/details 7 | 8 | #enable troubleshooting emails (yes=1,no=0) 9 | email="0" 10 | emailaddress="your@email.here" 11 | 12 | # Get count of raid arrays 13 | RAID_DEVICES=`grep ^md -c /proc/mdstat` 14 | 15 | # Get count of degraded arrays 16 | RAID_STATUS=`grep "\[.*_.*\]" /proc/mdstat -c` 17 | 18 | # Is an array currently recovering, get percentage of recovery 19 | RAID_RECOVER=`grep recovery /proc/mdstat | awk '{print $4}'` 20 | 21 | # Check raid status 22 | # RAID recovers --> Warning 23 | if [[ $RAID_RECOVER ]]; then 24 | STATUS="WARNING - Checked $RAID_DEVICES arrays, recovering : $RAID_RECOVER" 25 | EXIT=1 26 | # RAID ok 27 | elif [[ $RAID_STATUS == "0" ]]; then 28 | STATUS="OK - Checked $RAID_DEVICES arrays." 
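# For reference: /proc/mdstat marks a failed member with an underscore in the
# status brackets, so a degraded two-disk mirror reads e.g. [U_] (hypothetical
# line) and is what the RAID_STATUS grep above counts.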
29 | EXIT=0 30 | # All else critical, better save than sorry 31 | else 32 | STATUS="CRITICAL - Checked $RAID_DEVICES arrays, $RAID_STATUS have FAILED" 33 | EXIT=2 34 | if [ $email -eq 1 ];then 35 | sendmail "$emailaddress" <= wThresh: 67 | print "UNKNOWN - warning (-w %s) must be greater than critical (-c %s)" % (options.w, options.c) 68 | exit(UNKNOWN) 69 | 70 | #Calculate percentage of free memory (100% is completely free and 0% is full memory) 71 | percMemFree = int(float(memFree)/float(memTotal)*100) 72 | 73 | #The output of the following is the python version of printf, see documentation. 74 | #http://docs.python.org/library/stdtypes.html#string-formatting 75 | if memFree < 0: 76 | print "MEMORY UNKNOWN - free memory less than zero?|memory=0MB;%d;%d;0;%d" % (wThresh,cThresh,memTotal) 77 | exit(UNKNOWN) 78 | elif memFree <= cThresh: 79 | print "MEMORY CRITICAL - %d%% free (%d MB out of %d MB) |memory=%dMB;%d;%d;0;%d" % (percMemFree,memFree,memTotal,memFree,wThresh,cThresh,memTotal) 80 | exit(CRITICAL) 81 | elif memFree <= wThresh: 82 | print "MEMORY WARNING - %d%% free (%d MB out of %d MB) |memory=%dMB;%d;%d;0;%d" % (percMemFree,memFree,memTotal,memFree,wThresh,cThresh,memTotal) 83 | exit(WARNING) 84 | else: 85 | print "MEMORY OK - %d%% free (%d MB out of %d MB) |memory=%dMB;%d;%d;0;%d" % (percMemFree,memFree,memTotal,memFree,wThresh,cThresh,memTotal) 86 | exit(OK) 87 | 88 | 89 | 90 | if __name__ == "__main__": 91 | main() 92 | -------------------------------------------------------------------------------- /icinga/plugins/gitlab_status: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Sam Gleske 3 | #Wed Nov 13 10:28:55 EST 2013 4 | #Linux 2.6.32-358.18.1.el6.x86_64 x86_64 5 | #GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) 6 | 7 | #settings (critical must be less than warn) 8 | worker_count_warn="1" 9 | worker_count_crit="0" 10 | 11 | #exit status 12 | UNKNOWN=3 13 | OK=0 14 | WARNING=1 15 | CRITICAL=2 16 | 17 | STATUS=0 18 | echo_string="" 19 | 20 | #this simple function will compare the integer argument with the current $STATUS and returns the greater of the two 21 | function set_status(){ 22 | if [ "$1" -gt "${STATUS}" ];then 23 | return $1 24 | else 25 | return ${STATUS} 26 | fi 27 | } 28 | 29 | worker_count="$(ps aux | grep '^gitlab' | grep 'unicorn_rails worker' | wc -l)" 30 | master_count="$(ps aux | grep '^gitlab' | grep 'unicorn_rails master' | wc -l)" 31 | sidekiq_count="$(ps aux | grep '^gitlab' | grep sidekiq | wc -l)" 32 | sidekiq_queue="$(ps aux 2>&1 | grep '^gitlab' | grep -v '^$' | grep sidekiq | sed 's#.*\[\(.*\)\]#\1#')" 33 | 34 | #check for the unicorn master 35 | if [ "${master_count}" -lt "1" ];then 36 | echo_string="Unicorn Master: not running CRITICAL" 37 | set_status "${CRITICAL}" 38 | STATUS="$?" 39 | else 40 | echo_string="Unicorn Master: ${master_count} OK" 41 | fi 42 | 43 | #check for the unicorn workers 44 | if [ "${worker_count}" -le "${worker_count_crit}" ];then 45 | echo_string="${echo_string}; Unicorn Workers: ${worker_count} CRITICAL" 46 | set_status "${CRITICAL}" 47 | STATUS="$?" 48 | elif [ "${worker_count}" -le "${worker_count_warn}" ];then 49 | echo_string="${echo_string}; Unicorn Workers: ${worker_count} WARNING" 50 | set_status "${WARNING}" 51 | STATUS="$?" 
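# Note: set_status returns the more severe of its argument and the current
# $STATUS via its exit code, so each STATUS="$?" line ratchets the overall
# status upward (OK=0 < WARNING=1 < CRITICAL=2) and never downgrades it.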
52 | else 53 | echo_string="${echo_string}; Unicorn Workers: ${worker_count} OK" 54 | fi 55 | 56 | #check for sidekiq 57 | if [ "${sidekiq_count}" -lt "1" ];then 58 | echo_string="${echo_string}; Sidekiq Queue: not running CRITICAL" 59 | set_status "${CRITICAL}" 60 | STATUS="$?" 61 | else 62 | echo_string="${echo_string}; Sidekiq Queue: ${sidekiq_queue}" 63 | fi 64 | 65 | echo "${echo_string}" 66 | exit ${STATUS} 67 | -------------------------------------------------------------------------------- /icinga/plugins/jvm_health/README.md: -------------------------------------------------------------------------------- 1 | # JVM GC Performance Tuning and Alerts 2 | 3 | The garbage collector's young object memory space is reported so that memory leaks can be detected in developing applications. To learn more about performance tuning and garbage collection in Java 1.6, check out the following articles. 4 | 5 | * http://www.petefreitag.com/articles/gctuning/ 6 | * http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html 7 | 8 | --- 9 | ## How it works 10 | 11 | ### Prerequisites 12 | 13 | Be sure to place `jvm_health.py` and `parsegarbagelogs.py` in `/usr/local/sbin/`. 14 | 15 | You must enable garbage collection logging in the JVM options. 16 | 17 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails -Xloggc:/path/to/log/garbage.log" 18 | 19 | Create an sqlite database which will be used by `parsegarbagelogs.py`. 20 | 21 | 22 | sqlite3 /var/db/sqldb 23 | CREATE TABLE `datapoints` (`timestamp` varchar(40),`young_space` varchar(20), `old_space` varchar(20), `perm` varchar(20), `young_gc` varchar(25),`young_gc_collection_time` varchar(25), `full_gc` varchar(25), `full_gc_collection_time` varchar(25),`total_gc_time` varchar(25)); 24 | .quit 25 | 26 | There should be a cron job which runs parsegarbagelogs.py every 15 minutes. 27 | 28 | 0,15,30,45 * * * * /usr/local/sbin/parsegarbagelogs.py -f /path/to/log/garbage.log -s /var/db/sqldb > /dev/null 2>&1 29 | 30 | 31 | `parsegarbagelogs.py` parses the garbage collector logs and calculates the percentage of memory used by the young object memory space against the total space. It stores that calculated value in an sqlite database located at /var/db/sqldb. parsegarbagelogs.py should have owner `root:nsca` with `755` permissions if you're using passive checks in Icinga. The cron job for `parsegarbagelogs.py` is run by root, and `crontab -l` shows the cron job listing. 32 | 33 | ### Monitoring JVM-HEALTH 34 | 35 | `jvm_health.py` is an Icinga plugin which reads the calculated percentage from the sqlite database and reports a status to Icinga. If the young object memory percentage is less than 50%, the status is OK. If it is greater than 50%, the status is warning. If it is greater than 90%, the status is critical and a crash is imminent. Here is the `cmds` variable in the [report-status.py](https://github.com/sag47/drexel-university/blob/master/icinga/scripts/report-status.py) script for [passive Icinga checks](http://docs.icinga.org/latest/en/passivechecks.html). 36 | 37 | cmds = { 38 | "LOAD": "check_load -w 5,5,5 -c 10,10,10", 39 | "DISK": "check_disk -w 5% -c 1%", 40 | "PROCS-SENDMAIL": "check_procs -u root -C sendmail -w 1: -c 1:", 41 | "PROCS-NTPD": "check_procs -u ntp -C ntpd -w 1: -c 1:", 42 | "JVM-HEALTH": "/usr/local/sbin/jvm_health.py" 43 | } 44 | 45 | ### Resolving JVM-HEALTH error states 46 | 47 | First check garbage.log to be sure that there is an actual problem with the free memory for young objects. See the articles previously mentioned for how to read garbage.log.
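For reference, a routine young-generation collection written by `-XX:+PrintGCDetails` looks roughly like the following (hypothetical numbers; the exact layout varies by collector and JVM version):

    [GC [PSYoungGen: 262208K->10748K(305856K)] 262208K->18136K(1004928K), 0.0234567 secs]

In a leak, the heap occupancy left over after each `[Full GC ...]` line climbs steadily instead of returning to a stable baseline.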
If garbage.log is truly reporting a problem, the next step is to look at munin for your JVM server under the "JVM Garbage Collection Time Spent" graph. It should look like a saw tooth under the weekly, monthly, and yearly views. If the graph is a permanently inclining step graph, there is possibly a memory leak in one of the test client apps on your JVM, so work with your developer to figure out the root cause. At this point, assuming you're not mid-crisis in production (the service should be highly available, or this should be a test system), you may go ahead and enable a remote JVM console and hook up Java VisualVM (`jvisualvm`). See what you can figure out from thread dumps, heap dumps, and so on. If you find your system is pegged at 100% CPU usage, it could be caused by a race condition across unsynchronised threads; verify that by profiling the runtime with `jvisualvm` and looking to see whether multiple threads are stuck in the same method. Once you're done diagnosing, kill the JVM app server and restart it. 48 | 49 | In Icinga, to resolve the error state you must execute `parsegarbagelogs.py` (it updates the sqldb), `jvm_health.py` (to verify the check passes), and `report-status.py` to ensure an update is immediately submitted to Icinga. 50 | -------------------------------------------------------------------------------- /icinga/plugins/jvm_health/jvm_health.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | __author__="Chris Holcombe" 3 | __date__ ="$May 13, 2010 1:08:49 PM$" 4 | 5 | ''' 6 | The purpose of this script is to tell nagios when the java process 7 | being monitored is having a problem. I am defining a 'warning' state to be 8 | when the java young memory space is about 50% usage. I will also make the 9 | 'critical' state be when the young memory space usage is about 90% and the JVM 10 | is probably about to crash. 11 | 12 | This script will connect to the sqlite database defined, query the last known 13 | state of the JVM and then decide what exit code/return status to return to 14 | nagios. 15 | 16 | Nagios return status: 17 | 18 | 0 OK The plugin was able to check the service and it appeared to be functioning properly 19 | 1 Warning The plugin was able to check the service, but it appeared to be above some "warning" threshold. 20 | 2 Critical The plugin detected that either the service was not running or it was above some "critical" threshold 21 | 22 | ''' 23 | sqlite_3 = 1; 24 | 25 | import sys 26 | import os 27 | 28 | try: 29 | import sqlite3 30 | sql = sqlite3; 31 | except ImportError: 32 | import sqlite 33 | sql = sqlite; 34 | sqlite_3 = 0; 35 | 36 | 37 | sqlitedb='/var/db/sqldb' #SQLite database to store the information we're collecting 38 | 39 | def get_current_data(): 40 | if(sqlitedb is None): 41 | return None 42 | conn = sql.connect(sqlitedb); 43 | if(conn is None): 44 | return None 45 | cursor = conn.cursor(); 46 | cursor.execute("select young_space,ROWID from datapoints ORDER BY ROWID DESC Limit 1"); 47 | row = cursor.fetchone(); 48 | conn.close(); 49 | if (row is None): 50 | conn.close(); 51 | return None; 52 | else: 53 | #this query will get the latest young_space information.
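        #To spot-check the same value by hand (assuming the sqlite3 CLI is installed):
        #  sqlite3 /var/db/sqldb "select young_space from datapoints ORDER BY ROWID DESC Limit 1"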
54 | return row[0] 55 | 56 | def main(): 57 | global sqlitedb 58 | 59 | # Check to see if the user supplied good file information 60 | if (os.path.isfile(sqlitedb) is False): 61 | print "Can not find Sqlitedb: " + sqlitedb; 62 | sys.exit(2); 63 | 64 | # Get the current usage data 65 | young_space = int(get_current_data()) 66 | if young_space is None: 67 | print "Error getting young space information" 68 | sys.exit(2) 69 | elif young_space < 50: 70 | print "Young space usage is good" 71 | sys.exit(0) 72 | elif young_space >= 50 and young_space < 90: 73 | print "Young space usage is > 50%" 74 | sys.exit(1) 75 | else: 76 | print "Young space usage is > 90%. Crash is highly likely." 77 | sys.exit(2) 78 | 79 | if __name__ == "__main__": 80 | print main() 81 | -------------------------------------------------------------------------------- /icinga/plugins/ssl_check: -------------------------------------------------------------------------------- 1 | #!/usr/bin/perl -w 2 | # code to check ssl expiration status 3 | # takes three arguments, expireation threshold, host, and port, separated by a space 4 | # author: kyle halpin/drexel university 5 | # date: 2/26/2009 6 | 7 | 8 | use strict; 9 | use warnings; 10 | use Getopt::Std; 11 | use Date::Manip; 12 | 13 | my %ERRORS=('OK'=>0,'WARNING'=>1,'CRITICAL'=>2,'UNKNOWN'=>3,'DEPENDENT'=>4); 14 | 15 | # parse args 16 | my %opts; 17 | getopts('h:p:t:f:',\%opts); 18 | my $host = $opts{'h'}; 19 | my $port = $opts{'p'}; 20 | my $file = $opts{'f'}; 21 | my $expiration_threshold = $opts{'t'}; 22 | if(!defined($expiration_threshold)){ 23 | $expiration_threshold = 14; 24 | } 25 | 26 | my $max_differential_seconds = $expiration_threshold * 86400; 27 | 28 | 29 | # add new chunk to work with a file based cert, not network based gets 30 | my $cmd = ""; 31 | 32 | if(defined($opts{'f'})){ 33 | chomp($file); 34 | $cmd = "cat $file | openssl x509 -text | grep 'Not\ After'"; 35 | }else{ 36 | $cmd = "echo -n '' | openssl s_client -connect $host:$port 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' |openssl x509 -text | grep 'Not\ After'"; 37 | } 38 | 39 | my @cmd_output = `$cmd`; 40 | $cmd_output[0] =~ m/\s+(.*)$/; 41 | 42 | # prepare the date string 43 | my $date = $1; 44 | $date =~ s/Not\ After\ \:\ //; 45 | $date =~ s/\ GMT//; 46 | 47 | chomp($date); 48 | 49 | print("$date\n"); 50 | # get unix epoc seconds 51 | my $unix_expiration_date=&UnixDate($date,'%s'); 52 | my $unix_current_date=&UnixDate("today",'%s'); 53 | 54 | my $time_difference = $unix_expiration_date - $unix_current_date; 55 | 56 | print("Date: $unix_expiration_date\n"); 57 | print("UDate: $unix_current_date\n"); 58 | print("EDate: $time_difference\n"); 59 | print("TDate: $max_differential_seconds\n"); 60 | 61 | if($time_difference > $max_differential_seconds){ 62 | print("good"); 63 | exit($ERRORS{'OK'}); 64 | }else{ 65 | print("going bad"); 66 | exit($ERRORS{'CRITICAL'}); 67 | } 68 | -------------------------------------------------------------------------------- /icinga/plugins/ssl_check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Date Created: Thu Nov 8 16:19:10 EST 2012 4 | #The MIT License 5 | #Copyright (c) 2012 Samuel Gleske, Drexel University 6 | # 7 | #Description: 8 | # This simple script checks the expiration of an SSL Certificate. 9 | # If the cert is within 30 days of expiration there will be an Icinga warning. 
10 | # If the cert is within 14 days of expiration there will be an Icinga critical. 11 | # 12 | #Usage: 13 | # ssl_check.sh server:port 14 | # ssl_check.sh -f /path/to/cert.crt 15 | # 16 | #Required Packages: bash, coreutils, grep, sed, openssl 17 | 18 | #values are in days 19 | expire_warning=30 20 | expire_critical=14 21 | 22 | #exit status 23 | UNKNOWN=3 24 | OK=0 25 | WARNING=1 26 | CRITICAL=2 27 | 28 | if [ -z "$1" ] || [ "$1" = "-h" ] || [ "$1" = "--help" ];then 29 | cat </dev/null 2>&1 45 | if [ ! "$?" = "0" ];then 46 | echo "UNKNOWN - $(openssl x509 -text -in $2 2>&1 1>/dev/null | head -n1)" 47 | exit $UNKNOWN 48 | fi 49 | else #run a timeout of 3 seconds for the openssl command 50 | ssl_exp_date="$(timeout 3 openssl s_client -connect $1 2>/dev/null < /dev/null | openssl x509 -text 2>/dev/null | grep 'Not After' | sed 's/^ *Not After *: *//')" 51 | #test for successful certificate 52 | timeout 3 openssl s_client -connect $1 /dev/null | openssl x509 -text 1>/dev/null 2>&1 53 | if [ ! "$?" = "0" ];then 54 | echo "UNKNOWN - $(timeout 3 openssl s_client -connect $1 /dev/null | openssl x509 -text 2>&1 1>/dev/null | head -n1)" 55 | exit $UNKNOWN 56 | fi 57 | fi 58 | time_left_in_seconds=$(( $(date -d "$ssl_exp_date" +%s) - $(date +%s) )) 59 | warn_val=$(( $expire_warning*24*3600 )) 60 | crit_val=$(( $expire_critical*24*3600 )) 61 | 62 | #logic 63 | if [ "$time_left_in_seconds" -lt "0" ];then 64 | echo "CRITICAL - Cert Expired $(date -d "$ssl_exp_date")" 65 | exit $CRITICAL 66 | elif [ "$time_left_in_seconds" -lt "$crit_val" ];then 67 | echo "CRITICAL - Cert Expires $(date -d "$ssl_exp_date")" 68 | exit $CRITICAL 69 | elif [ "$time_left_in_seconds" -lt "$warn_val" ];then 70 | echo "WARNING - Cert Expires $(date -d "$ssl_exp_date")" 71 | exit $WARNING 72 | else 73 | echo "OK - Cert Expires $(date -d "$ssl_exp_date")" 74 | exit $OK 75 | fi 76 | 77 | #Tested Environment: 78 | # Ubuntu 12.04.1 LTS Linux 3.2.0-32-generic x86_64 GNU/Linux 79 | # GNU bash, version 4.2.24(1)-release (x86_64-pc-linux-gnu) 80 | # GNU coreutils 8.13-3ubuntu3.1 81 | # GNU sed version 4.2.1 82 | # grep (GNU grep) 2.10 83 | # OpenSSL 1.0.1 14 Mar 2012 84 | -------------------------------------------------------------------------------- /icinga/plugins/test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Created on Oct. 25, 2011 4 | #Icinga plugin, put it in /usr/local/icinga/libexec/ 5 | #This was written to test contacts 6 | 7 | #Add the following to commands.cfg in icinga config. 
8 | #define command{ 9 | # command_name test 10 | # command_line $USER1$/test.sh 11 | #} 12 | 13 | #End of doc 14 | 15 | #Exit status key 16 | # OK = 0 17 | # WARNING = 1 18 | # CRITICAL = 2 19 | # UNKNOWN = 3 20 | 21 | #define your test exit status here referencing the key above 22 | status=0 23 | 24 | echo -n "Test status: " 25 | 26 | case $status in 27 | 0) 28 | echo "OK" 29 | ;; 30 | 1) 31 | echo "WARNING" 32 | ;; 33 | 2) 34 | echo "CRITICAL" 35 | ;; 36 | *) 37 | echo "UNKNOWN" 38 | ;; 39 | esac 40 | 41 | exit $status 42 | -------------------------------------------------------------------------------- /icinga/sbin/munin-cgi.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # Author: Sam Gleske (sag47@drexel.edu) 4 | # Date: 04 Oct 2011 5 | # Description: CGI Script written in Python to integrate munin into Icinga/Nagios using a template 6 | # To install the script in Icinga/Nagios simply copy it into the following folder 7 | # /usr/local/icinga/sbin 8 | # 9 | # https://projects.irt.drexel.edu/systems/wiki/Monitoring#MuninIntegrationWithNagios 10 | # munin-cgi.py 11 | # tested on Python 2.4 12 | 13 | #docs http://docs.python.org/library/cgi.html 14 | import cgitb,cgi 15 | from sys import exit 16 | 17 | # toggle the cgitb.enable comments for debugging 18 | #cgitb.enable() 19 | cgitb.enable(display=0, logdir="/tmp") 20 | 21 | form=cgi.FieldStorage() 22 | 23 | 24 | #test to make sure that the url for the cgi contains a host query argument 25 | #example is server.com/cgi-bin/munin.py?host=somehost 26 | #?host= must have a value or exit in error 27 | if "host" not in form or len(form.getlist("host")[0]) < 1 or form.getlist("host")[0] == None: 28 | print "Content-Type: text/html" # HTML is following 29 | print # blank line, end of headers 30 | print "Error: no host name specified
should include ?host=somehost at the end of the url" 31 | exit(1) 32 | 33 | 34 | # the goal is if the host is nagios.irt.drexel.edu then we want to redirect to /munin/irt.drexel.edu/nagios.irt.drexel.edu 35 | 36 | hostname = form.getlist("host")[0] 37 | if len(hostname.split('.',1)) > 1: 38 | domain = hostname.split('.',1)[1] 39 | else: 40 | domain = None 41 | 42 | if domain == None: 43 | print "Location: /munin/%s" % (hostname) 44 | print 45 | exit(0) 46 | else: 47 | print "Location: /munin/%s/%s" % (domain,hostname) 48 | print 49 | exit(0) 50 | -------------------------------------------------------------------------------- /icinga/scripts/check-icinga-address-against-hostname.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #Created by Sam Gleske 3 | #Created Jul 26 13:12 2012 4 | #make sure that addresses match the hostname in icinga host cfg files 5 | #if a linux server in icinga is not in munin then it will show output 6 | 7 | 8 | 9 | function notfiltered() { 10 | case "$1" in 11 | quail) 12 | return 1 13 | ;; 14 | quail-z1) 15 | return 1 16 | ;; 17 | sparrow) 18 | return 1 19 | ;; 20 | esac 21 | } 22 | 23 | 24 | #muninserver=http:// 25 | icingahostconfigsdir=/usr/local/icinga/etc/hosts 26 | 27 | find $icingahostconfigsdir -type f -name '*.cfg' | while read file;do 28 | #if [ ! -z "`grep 'linux-host' $file`" ];then 29 | hostname1=`grep -v '^#' $file | grep 'host_name' | awk '{print $2}' | uniq` 30 | address=`grep -v '^#' $file | grep 'address' | awk '{print $2}'` 31 | hostname2=`nslookup $address | grep 'name = ' | awk '{print $4}'` 32 | hostname2=${hostname2%.} 33 | #echo $hostname2 34 | #echo $hostname 35 | #notfound=`curl $muninserver/munin/${hostname#*.}/$hostname/ 2>/dev/null | grep "not found on this server"` 36 | if [ "$hostname1" != "$hostname2" ];then 37 | echo "$hostname1 address pointing to $hostname2 in $file" 38 | fi 39 | #fi 40 | done 41 | 42 | #for x in `grep -r host_name * | awk '{print $3}'| sort | uniq`; do 43 | # grep $x /etc/munin/munin.conf 44 | # echo $x - $? 45 | #done 46 | -------------------------------------------------------------------------------- /icinga/scripts/report-status.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #Created by Sam Gleske (sag47@drexel.edu) 3 | #Created: Mon Nov 12 18:34:30 EST 2012 4 | #Red Hat Enterprise Linux Server release 6.3 (Santiago) 5 | #Linux 2.6.32-71.el6.x86_64 6 | #Python 2.6.6 7 | 8 | #Description: 9 | # A quick script to send_nsca updates back to Icinga 10 | # This is to simplify passive checks in Icinga. 11 | 12 | #Usage: 13 | # report-status.py 14 | 15 | #Requires: 16 | # Requires nsca configured on icinga server and nsca-client 17 | # installed on the host. 18 | #Recommended: 19 | # Nagios Plugins package (you will need to possibly update $PATH 20 | # environment variable). 
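#Message format:
# Each passive result is one tab-separated line on send_nsca's stdin, as
# assembled at the bottom of this script:
#   <host>\t<service description>\t<return code>\t<plugin output>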
21 | 22 | #Setup: 23 | # Create a user nsca on the system and move the script to 24 | # /usr/local/sbin/ 25 | # Take ownership of report-status.py with 26 | # chown root:nsca report-status.py 27 | # chmod 750 report-status.py 28 | # Sample crontab for submitting once every 5 minutes: 29 | # crontab -u nsca -e 30 | # */5 * * * * /usr/local/sbin/report-status.py > /dev/null 2>&1 31 | # Edit /etc/nagios/send_nsca.cfg and set the contents to 32 | # encryption_method=3 33 | # password=somethingprivate 34 | # Set ownership of send_nsca.cfg: 35 | # chown root:nsca /etc/nagios/send_nsca.cfg 36 | # chmod 220 /etc/nagios/send_nsca.cfg 37 | 38 | import os,commands 39 | 40 | #User configurable variables 41 | host=os.getenv("HOSTNAME") 42 | icinga_host = "your.icingahost.com" 43 | send_nsca_cfg = "/etc/nagios/send_nsca.cfg" 44 | send_cmd = "/usr/sbin/send_nsca" 45 | 46 | # A list of descriptions and plugins we wish to run 47 | # The descriptions need to match *exactly* the service name on the Icinga host 48 | cmds = { 49 | "LOAD": "check_load -w 5,5,5 -c 10,10,10", 50 | "DISK": "check_disk -w 5% -c 1%", 51 | "PROCS-SENDMAIL": "check_procs -u root -C sendmail -w 1: -c 1:", 52 | "PROCS-NTPD": "check_procs -u ntp -C ntpd -w 1: -c 1:" 53 | } 54 | 55 | #NO NEED TO EDIT BEYOND THIS POINT 56 | #Set up environment variables 57 | for env_var in ("IFS","PATH","CDPATH","ENV","BASH_ENV"): 58 | os.unsetenv(env_var) 59 | os.putenv("PATH","/sbin:/usr/sbin:/bin:/usr/bin:/usr/lib64/nagios/plugins:/usr/local/sbin:/usr/lib64/nagios/plugins/contrib") 60 | 61 | #Submit the host checks to the icinga server 62 | for cmd in cmds: 63 | result,output=commands.getstatusoutput(cmds[cmd]) 64 | output=output.strip() 65 | print "SYS: %s RETURN: %s" % (result,output) 66 | rcode,nag=commands.getstatusoutput('echo -e "%s\t%s\t%s\t%s" | %s -H %s -c %s' % (host,cmd,result,output,send_cmd,icinga_host,send_nsca_cfg)) 67 | print "%s %s" % (rcode,nag) 68 | -------------------------------------------------------------------------------- /init.d/jboss: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | #Created by Sam Gleske (sag47@drexel.edu) 3 | #Created Tue Dec 6 10:24:22 EST 2011 4 | #Description: 5 | # Service script for RedHat JBoss 6 | # chkconfig compatible 7 | #Usage: 8 | # /etc/init.d/jboss help 9 | #Dependencies: 10 | # bash jboss coreutils awk grep procps util-linux-ng 11 | 12 | ##### User defined environment variable values (should be the only place needing modification) 13 | JBOSS_USER="jboss" 14 | JBOSS_CMD_START="/app/jboss/bin/startJboss.sh" 15 | 16 | ##### Default environment variable values 17 | #define where tomcat is - this is the directory containing directories log, bin, conf etc 18 | JBOSS_HOME=${JBOSS_HOME:-"/app/jboss"} 19 | #define the user under which tomcat will run, or use 'RUNASIS' to run as the current user (normally root) 20 | JBOSS_USER=${JBOSS_USER:-"RUNASIS"} 21 | #define what will be done with the console log 22 | JBOSS_CONSOLE=${JBOSS_CONSOLE:-"/dev/null"} 23 | #Command to start tomcat 24 | JBOSS_CMD_START=${JBOSS_CMD_START:-"$JBOSS_HOME/bin/run.sh -b 0.0.0.0"} 25 | #make sure java is installed 26 | JAVA_HOME=${JAVA_HOME:-"/app/java"} 27 | #make sure java is in your path 28 | JAVAPTH=${JAVAPTH:-"$JAVA_HOME/bin"} 29 | 30 | ##### Run environment variable pre-flight tests 31 | if [ "$JBOSS_USER" = "RUNASIS" ]; then 32 | SUBIT="" 33 | else 34 | if [ -n "`grep "$JBOSS_USER" /etc/passwd`" ];then 35 | SUBIT="su - $JBOSS_USER -c " 36 | else 37 | echo "SERVICE CFG 
ERR: JBOSS_USER = $JBOSS_USER does not exist in /etc/passwd." 38 | exit 1 39 | fi 40 | fi 41 | 42 | if [ -n "$JBOSS_CONSOLE" -a ! -d "$JBOSS_CONSOLE" ]; then 43 | # ensure the file exists 44 | touch $JBOSS_CONSOLE 45 | if [ ! -z "$SUBIT" ]; then 46 | chown $JBOSS_USER $JBOSS_CONSOLE 47 | fi 48 | else 49 | echo "SERVICE CONFIG ERR: JBOSS_CONSOLE = $JBOSS_CONSOLE is already existing as a directory." 50 | exit 1 51 | fi 52 | 53 | if [ -z "`echo $PATH | grep $JAVAPTH`" ]; then 54 | PATH="$PATH:$JAVAPTH" 55 | fi 56 | 57 | if [ ! -d "$JBOSS_HOME" ]; then 58 | echo "SERVICE CFG ERR: JBOSS_HOME = $JBOSS_HOME does not exist as a valid directory." 59 | exit 1 60 | fi 61 | 62 | export PATH JBOSS_HOME JAVA_HOME 63 | 64 | ##### Daemon service bits 65 | # chkconfig: 2345 80 80 66 | # description: starts jboss 67 | 68 | ### BEGIN INIT INFO 69 | # Provides: jboss 70 | # Required-Start: $network 71 | # Defalt-Start: 2 3 4 5 72 | # Default-Stop: 0 1 6 73 | # Description: starts jboss 74 | ### END INIT INFO 75 | 76 | # source function library 77 | if [ -f /lib/lsb/init-functions ]; then 78 | #Ubuntu 79 | . /lib/lsb/init-functions 80 | fi 81 | if [ -f /etc/rc.d/init.d/functions ]; then 82 | #RedHat 83 | . /etc/rc.d/init.d/functions 84 | fi 85 | 86 | start() { 87 | cd $JBOSS_HOME/bin 88 | echo "JBOSS_CMD_START = $JBOSS_CMD_START" 89 | if [ -z "$SUBIT" ]; then 90 | echo "$JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1 &" 91 | eval $JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1 & 92 | else 93 | echo "$SUBIT \"$JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1\" &" 94 | $SUBIT "$JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1" & 95 | fi 96 | } 97 | 98 | wait() { 99 | while true;do 100 | pid=`ps aux | grep "^$JBOSS_USER" | grep java | grep "$JBOSS_HOME" | awk '{print $2}'` 101 | if [ -z "$pid" ];then 102 | return 0 103 | else 104 | sleep 1 105 | echo -n "." 106 | fi 107 | done 108 | } 109 | 110 | stop() { 111 | #Sam's custom kill command for tomcat 112 | kill -s 15 `ps aux | grep "^$JBOSS_USER" | grep java | grep "$JBOSS_HOME" | awk '{print $2}'` 2> /dev/null 113 | if [ "$?" -eq "0" ];then 114 | echo -n "JBoss is stopping." 115 | wait && success 116 | echo "" 117 | return 0 118 | else 119 | failure 120 | echo "JBoss not running..." 121 | return 1 122 | fi 123 | 124 | } 125 | 126 | status() { 127 | #overwrite the default status function 128 | pid=`ps aux | grep "^$JBOSS_USER" | grep java | grep "$JBOSS_HOME" | awk '{print $2}'` 129 | if [ -n "$pid" ];then 130 | base="`awk 'BEGIN { FS = "\0" }; {print $1}' /proc/$pid/cmdline`" 131 | echo "JBoss $base (pid = $pid) is running..." 132 | return 0 133 | else 134 | echo "JBoss is stopped." 
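    # (An LSB-conformant status would exit 3 for a stopped service; exiting 1
    # here is kept to match this script's historical behavior.)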
135 | return 1 136 | fi 137 | } 138 | 139 | restart() { 140 | stop 141 | start 142 | } 143 | 144 | case "$1" in 145 | start) 146 | start 147 | ;; 148 | stop) 149 | stop 150 | ;; 151 | restart) 152 | restart 153 | ;; 154 | status) 155 | status 156 | ;; 157 | *) 158 | echo "usage: $0 (start|stop|restart|status|help)" 159 | esac 160 | 161 | #Test environment 162 | #Red Hat Enterprise Linux Server release 6.2 (Santiago) 163 | #GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) 164 | #coreutils.x86_64 8.4-19.el6 165 | #apache-tomcat-6.0.35 166 | #GNU Awk 3.1.7 167 | #GNU grep 2.6.3 168 | #procps version 3.2.8 169 | #util-linux-ng.x86_64 2.17.2-12.7.el6_3 170 | -------------------------------------------------------------------------------- /init.d/tomcat: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | #Created by Sam Gleske (sag47@drexel.edu) 3 | #Created Tue Dec 6 10:24:22 EST 2011 4 | #Description: 5 | # Service script for Apache Tomcat 6 | # chkconfig compatible 7 | #Usage: 8 | # /etc/init.d/tomcat help 9 | #Dependencies: 10 | # bash tomcat coreutils awk grep procps util-linux-ng 11 | 12 | ##### User defined environment variable values (should be the only place needing modification) 13 | org="du" 14 | TOMCAT_HOME="/app/tomcat-$org" 15 | TOMCAT_USER="tomcat" 16 | 17 | ##### Default environment variable values 18 | #define where tomcat is - this is the directory containing directories log, bin, conf etc 19 | TOMCAT_HOME=${TOMCAT_HOME:-"/app/tomcat"} 20 | #define the user under which tomcat will run, or use 'RUNASIS' to run as the current user (normally root) 21 | TOMCAT_USER=${TOMCAT_USER:-"RUNASIS"} 22 | #define what will be done with the console log 23 | TOMCAT_CONSOLE=${TOMCAT_CONSOLE:-"/dev/null"} 24 | #Command to start tomcat 25 | TOMCAT_CMD_START=${TOMCAT_CMD_START:-"$TOMCAT_HOME/bin/startup.sh"} 26 | #make sure java is installed 27 | JAVA_HOME=${JAVA_HOME:-"/app/java"} 28 | #make sure java is in your path 29 | JAVAPTH=${JAVAPTH:-"/app/java/bin"} 30 | 31 | ##### Run environment variable pre-flight tests 32 | if [ "$TOMCAT_USER" = "RUNASIS" ]; then 33 | SUBIT="" 34 | else 35 | if [ -n "`grep "$TOMCAT_USER" /etc/passwd`" ];then 36 | SUBIT="su - $TOMCAT_USER -c " 37 | else 38 | echo "SERVICE CFG ERR: TOMCAT_USER = $TOMCAT_USER does not exist in /etc/passwd." 39 | exit 1 40 | fi 41 | fi 42 | 43 | if [ -n "$TOMCAT_CONSOLE" -a ! -d "$TOMCAT_CONSOLE" ]; then 44 | # ensure the file exists 45 | touch $TOMCAT_CONSOLE 46 | if [ ! -z "$SUBIT" ] && [ ! "$TOMCAT_CONSOLE" = "/dev/null" ]; then 47 | chown $TOMCAT_USER $TOMCAT_CONSOLE 48 | fi 49 | else 50 | echo "SERVICE CONFIG ERR: TOMCAT_CONSOLE = $TOMCAT_CONSOLE is already existing as a directory." 51 | exit 1 52 | fi 53 | 54 | if [ -z "`echo $PATH | grep $JAVAPTH`" ]; then 55 | PATH="$PATH:$JAVAPTH" 56 | fi 57 | 58 | if [ ! -d "$TOMCAT_HOME" ]; then 59 | echo "SERVICE CFG ERR: TOMCAT_HOME = $TOMCAT_HOME does not exist as a valid directory." 60 | exit 1 61 | fi 62 | 63 | export PATH TOMCAT_HOME JAVA_HOME 64 | 65 | ##### Daemon service bits 66 | # chkconfig: 2345 80 80 67 | # description: starts tomcat 68 | 69 | ### BEGIN INIT INFO 70 | # Provides: tomcat 71 | # Required-Start: $network 72 | # Defalt-Start: 2 3 4 5 73 | # Default-Stop: 0 1 6 74 | # Description: starts tomcat 75 | ### END INIT INFO 76 | 77 | # source function library 78 | if [ -f /lib/lsb/init-functions ]; then 79 | #Ubuntu 80 | . 
/lib/lsb/init-functions 81 | fi 82 | if [ -f /etc/rc.d/init.d/functions ]; then 83 | #RedHat 84 | . /etc/rc.d/init.d/functions 85 | fi 86 | 87 | start() { 88 | cd $TOMCAT_HOME/bin 89 | echo "TOMCAT_CMD_START = $TOMCAT_CMD_START" 90 | if [ -z "$SUBIT" ]; then 91 | echo "$TOMCAT_CMD_START >${TOMCAT_CONSOLE} 2>&1 &" 92 | eval $TOMCAT_CMD_START >${TOMCAT_CONSOLE} 2>&1 & 93 | else 94 | echo "$SUBIT \"$TOMCAT_CMD_START >${TOMCAT_CONSOLE} 2>&1\" &" 95 | $SUBIT "$TOMCAT_CMD_START >${TOMCAT_CONSOLE} 2>&1" & 96 | fi 97 | } 98 | 99 | wait() { 100 | while true;do 101 | pid=`ps aux | grep "^$TOMCAT_USER" | grep java | grep " -Dcatalina.base=$TOMCAT_HOME " | awk '{print $2}'` 102 | if [ -z "$pid" ];then 103 | return 0 104 | else 105 | sleep 1 106 | echo -n "." 107 | fi 108 | done 109 | } 110 | 111 | stop() { 112 | #Sam's custom kill command for tomcat 113 | kill -s 15 `ps aux | grep "^$TOMCAT_USER" | grep java | grep " -Dcatalina.base=$TOMCAT_HOME " | awk '{print $2}'` 2> /dev/null 114 | if [ "$?" -eq "0" ];then 115 | echo -n "Tomcat is stopping." 116 | wait && success 117 | echo "" 118 | return 0 119 | else 120 | failure 121 | echo "Tomcat not running..." 122 | return 1 123 | fi 124 | 125 | } 126 | 127 | status() { 128 | #overwrite the default status function 129 | pid=`ps aux | grep "^$TOMCAT_USER" | grep java | grep " -Dcatalina.base=$TOMCAT_HOME " | awk '{print $2}'` 130 | if [ -n "$pid" ];then 131 | base="`awk 'BEGIN { FS = "\0" }; {print $1}' /proc/$pid/cmdline`" 132 | echo "Tomcat $base (pid = $pid) is running..." 133 | return 0 134 | else 135 | echo "Tomcat is stopped" 136 | return 1 137 | fi 138 | } 139 | 140 | restart() { 141 | stop 142 | start 143 | } 144 | 145 | case "$1" in 146 | start) 147 | start 148 | ;; 149 | stop) 150 | stop 151 | ;; 152 | restart) 153 | restart 154 | ;; 155 | status) 156 | status 157 | ;; 158 | *) 159 | echo "usage: $0 (start|stop|restart|status|help)" 160 | esac 161 | 162 | #Test environment 163 | #Red Hat Enterprise Linux Server release 6.2 (Santiago) 164 | #GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) 165 | #coreutils.x86_64 8.4-19.el6 166 | #apache-tomcat-6.0.35 167 | #GNU Awk 3.1.7 168 | #GNU grep 2.6.3 169 | #procps version 3.2.8 170 | #util-linux-ng.x86_64 2.17.2-12.7.el6_3 171 | -------------------------------------------------------------------------------- /live_trends/README: -------------------------------------------------------------------------------- 1 | Scripts are designed to record system performance data and output them to a plain text file. 2 | 3 | The following five sections outline how to run and manage the live_trend programs listed in the trendprograms/ folder. 4 | See the comment block at the head of each program in the trendprograms/ folder for a description of the program. 5 | 6 | == About run_tests == 7 | DO NOT chmod +x run_tests 8 | This file is meant to be run as an include to the current shell so that you can have process control over the jobs. 9 | This file is the core to starting all jobs. 10 | TESTS 11 | The run_tests files starts all programs in the trendprograms/ folder which end in "_live_trend". 12 | To disable a trend program from running simply rename the end of the file to something other than "_live_trend". 13 | RESULTS 14 | Each *_live_trend program will have a corresponding result named *_stats.txt in the results/ folder. 15 | For example ./trendprograms/jvm_memory_live_trend has its result stored in ./results/jvm_memory_stats.txt. 
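A sample line from a result file such as ./results/jvm_memory_stats.txt looks like this (hypothetical values; the field layout is what the regexes in generate_aligned_csv_file.py expect -- a reading, a human-readable timestamp, and a Unix timestamp):
 JVM Memory = 512.3 / 1024 MB (50.0 %); 2012-04-11 10:15:32 AM; 1334153732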
16 | BEHAVIOR 17 | If there are *_live_trend jobs already running, they will be killed using the pids file and started again. 18 | If the result file for a *_live_trend program already exists, run_tests will move it to a backup name and create a new result file. 19 | EXAMPLE USAGE 20 | $ . run_tests 21 | 22 | == About run_tests_manually == 23 | DO NOT chmod +x run_tests_manually 24 | Similar to run_tests, but instead of running everything automatically you must manually edit the run_tests_manually file to include live_trend programs. 25 | This file will automatically back up result files like run_tests does. 26 | Instead of using the killall_jobs program to control the jobs, you must use the jobs, fg, and bg commands to manage them. 27 | EXAMPLE USAGE 28 | $ . run_tests_manually 29 | 30 | == About pids file == 31 | You can largely ignore this file. It is only used by the killall_jobs program for killing all jobs at once when testing is done. 32 | If no jobs show up when running the jobs command then you can simply clear this file. 33 | By default the pids file is located at /tmp/live_trend_pids 34 | 35 | == About killall_jobs == 36 | This program uses the pids file to kill all running jobs at once when testing is done. 37 | This program will only kill jobs whose PID is listed in the pids file. 38 | This program will delete the pids file after killing all jobs. 39 | Example Usage: 40 | $ killall_jobs 41 | 42 | == About view_result_file == 43 | This program cats the file of your choice, showing the whole file and then following future output updates (like tail -f). 44 | If you do not need this behavior then you can simply run tail -f jvm_memory_stats.txt or any other result file. 45 | Example Usage: 46 | $ view_result_file ./results/jvm_memory_stats.txt 47 | 48 | == About generate_aligned_csv_file.py == 49 | This program combines the results of all files given to it as arguments and aligns the timestamps, producing a new list with the combined results. 50 | When the results are combined the timestamps will be aligned. 51 | The purpose of this is so that all data can be represented on the same graph easily. 52 | In order to have the correct time show up in LibreOffice graphs you must do the following: 53 | Go to Tools > Options, select LibreOffice Calc > Calculate 54 | Ensure the Date is set to 12/30/1899. 55 | Then to calculate from the Unix timestamp to a local time in LibreOffice you must apply the following Cell equation where A2 is the Timestamp: 56 | =(A2-3600*4)/3600/24+25569 57 | Example Usage: 58 | $ generate_aligned_csv_file.py jvm_memory_stats.txt jvm_oracleconnections_stats.txt open_file_descriptors_stats.txt 59 | $ generate_aligned_csv_file.py ./results/{1,5,15}minute_load_stats.txt 60 | -------------------------------------------------------------------------------- /live_trends/generate_aligned_csv_file.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # Sam Gleske (sag47) 3 | # Created 2012/04/11 4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux 5 | # Python 2.4.3 6 | # 7 | # This program combines the results of the result files given to it as arguments (for example jvm_memory_stats.txt, jvm_oracleconnections_stats.txt, and open_file_descriptors_stats.txt). 8 | # When the results are combined the timestamps will be aligned. 9 | # The purpose of this is so that all data can be represented on the same graph easily. 
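# A sketch of the aligned csv output (hypothetical values; assuming the first
# column is the shared Unix timestamp referenced as A2 below, followed by one
# column per datasource given on the command line):
#   1334153732,512.3,42,1873
# (In the LibreOffice cell equation shown above, 25569 is 1970-01-01 expressed
# as a day serial with the 12/30/1899 epoch, and 3600*4 is the US Eastern
# daylight-time UTC offset.)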
--------------------------------------------------------------------------------
/live_trends/generate_aligned_csv_file.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # Sam Gleske (sag47)
3 | # Created 2012/04/11
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # Python 2.4.3
6 | #
7 | # This program combines the results of any registered log files given to it as arguments, e.g. jvm_memory_stats.txt, jvm_oracleconnections_stats.txt, and open_file_descriptors_stats.txt.
8 | # When the results are combined the timestamps will be aligned.
9 | # The purpose of this is so that all data can be represented on the same graph easily.
10 | # In order to have the correct time show up in LibreOffice graphs you must do the following:
11 | # Go to Tools > Options, select LibreOffice Calc > Calculate
12 | # Ensure the Date is set to 12/30/1899.
13 | # Then to calculate from the Unix timestamp to a local time in LibreOffice you must apply the following Cell equation where A2 is the Timestamp:
14 | # =(A2-3600*4)/3600/24+25569
15 | # Example Usage:
16 | # $ generate_aligned_csv_file.py result.txt [...result.txt]
17 | 
18 | import sys,re,os.path,linecache
19 | from sys import exit
20 | from os.path import basename
21 | from optparse import OptionParser
22 | from sys import argv
23 | 
24 | #Show more verbose error output and exit on all errors
25 | debug = True
26 | 
27 | class DataObject:
28 |     oldest_sample = 0
29 |     newest_sample = 0
30 |     datalen = 0
31 |     data = [] #array of tuples
32 |     currentindex = 0
33 |     def __init__(self,filename,regex):
34 |         f = open(filename,'r')
35 |         filecontents = f.read()
36 |         f.close()
37 |         searchregex = re.compile(regex, re.MULTILINE|re.DOTALL)
38 |         self.data = re.findall(searchregex,filecontents)
39 |         self.datalen = len(self.data)
40 |         self.newest_sample = int(self.data[self.datalen-1][1]) #highest number of time since 1970
41 |         self.oldest_sample = int(self.data[0][1]) #smallest number of time since 1970
42 |     def incrementIndex(self,timestamp):
43 |         if self.currentindex < self.datalen-1: #do not allow the index to be incremented past the data array length
44 |             if int(self.data[self.currentindex+1][1]) <= timestamp+1:
45 |                 self.currentindex += 1
46 |     def getCurrenttime(self):
47 |         return int(self.data[self.currentindex][1])
48 |     def getCurrentdata(self):
49 |         return self.data[self.currentindex][0]
50 | 
51 | def get_newest(fromlist):
52 |     """
53 |     get_newest(fromlist) where fromlist is a list of DataObjects
54 |     Get the newest timestamp out of all the timestamps in the DataObject list.
55 |     """
56 |     newest_timestamp = 0
57 |     for obj in fromlist:
58 |         if obj.newest_sample > newest_timestamp:
59 |             newest_timestamp = obj.newest_sample
60 |     return newest_timestamp
61 | 
62 | def get_oldest(fromlist):
63 |     """
64 |     get_oldest(fromlist) where fromlist is a list of DataObjects
65 |     Get the oldest timestamp out of all the timestamps in the DataObject list.
66 |     """
67 |     oldest_timestamp = fromlist[0].oldest_sample #take the first timestamp from the first DataObject in the fromlist list
68 |     for obj in fromlist:
69 |         if obj.oldest_sample < oldest_timestamp:
70 |             oldest_timestamp = obj.oldest_sample
71 |     return oldest_timestamp
72 | 
73 | def getfiletype(line_from_file):
74 |     """
75 |     getfiletype(line_from_file)
76 | 
77 |     This function has a list of registered datasources. line_from_file will be tested against each datasource.
78 |     If there are any datasources which match the provided string then the datasource will be returned. If no
79 |     matching datasource was detected then None will be returned.
80 | 
81 |     Example:
82 |     The best way to get the first line from a file is to use the linecache.getline(filename,lineno) function.
83 |     import linecache
84 |     getfiletype(linecache.getline(filename,1))
85 |     """
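    # To support a new trend program, register it in the datasource_types list
    # below. A hypothetical entry for thread_dump_count_live_trend output would
    # look like this (the fieldname and regex here are illustrative only; two
    # capture groups are required, the value first and the Unix timestamp second):
    #   {'file' : arg,'fieldname' : "Threads (#)",'regex' : r'Thread breakdown = (\d+)[ \d]*; [-\d.]+ [\d.:]+ [AP]M; (\d+)'}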
86 | 
87 |     # This is a registry of known data sources for the stats results of the live_trend programs ('file' : arg relies on the global arg set by the argument loop in __main__)
88 |     datasource_types = [
89 |         {'file' : arg,'fieldname' : "Memory Usage (MB)",'regex' : r'JVM Memory = ([\d.]+) \/ \d+ MB \([\d.]+ %\); [-\d.]+ [\d.:]+ [AP]M; (\d+)'},
90 |         {'file' : arg,'fieldname' : "Oracle Connections (#)",'regex' : r'Number of OracleDB connections: ([\d]+); [\d\.-]+ [\d\.:]+ [AP]M; ([\d]+)'},
91 |         {'file' : arg,'fieldname' : "Open File Descriptors (#)",'regex' : r'[-a-zA-Z\d]+ user open file descriptors = (\d+); [-\d.]+ [\d.:]+ [AP]M; ([\d]+)'},
92 |         {'file' : arg,'fieldname' : "1min load",'regex' : r'1 minute load = ([\d\.]+); [\d\.-]+ [\d\.:]+ [AP]M; ([\d]+)'},
93 |         {'file' : arg,'fieldname' : "5min load",'regex' : r'5 minute load = ([\d\.]+); [\d\.-]+ [\d\.:]+ [AP]M; ([\d]+)'},
94 |         {'file' : arg,'fieldname' : "15min load",'regex' : r'15 minute load = ([\d\.]+); [\d\.-]+ [\d\.:]+ [AP]M; ([\d]+)'}
95 |     ]
96 |     for dstype in datasource_types:
97 |         if re.match(dstype['regex'],line_from_file):
98 |             return dstype
99 |     return None
100 | 
101 | 
102 | #do not execute if included as a library
103 | if __name__ == "__main__":
104 |     #build a list of datasources in which to align
105 |     datasources=[]
106 |     data=[]
107 |     delim=""
108 | 
109 |     usage = "usage: %prog [options] result.txt [...result.txt]"
110 |     version = "%prog 0.4"
111 |     description = "Takes in live_trend program results and aligns the Unix timestamps so that it can be graphed. Multiple trend sources mean multiple dataplots of trends on the same graph."
112 |     parser = OptionParser(usage=usage,version=version,description=description)
113 |     parser.add_option("-f","--format",action="store",type="string",dest="format",default="csv",help="Select the data type for outputting the file. Valid options are csv and gnuplot. Default is csv.")
114 |     parser.add_option("-d","--delimiter",action="store",type="string",dest="delim",default=",",help="Select the delimiter to separate the data. Default is a comma \",\".")
115 |     parser.add_option("-t","--libre-office-timestamp",action="store_true",dest="libretime",default=False,help="Use a LibreOffice compatible timestamp since 12/30/1899 rather than the Unix timestamp.")
116 |     (options,args) = parser.parse_args()
117 | 
118 |     #choose the correct data delimiter based on type and other options.
119 |     if options.format == "gnuplot":
120 |         delim = "\t"
121 |     elif options.format == "csv":
122 |         delim = options.delim
123 |     else:
124 |         err="%s is not a valid output data type. Expecting csv or gnuplot.\n" % options.format
125 |         sys.stderr.write(err)
126 |         exit(1)
127 | 
128 |     # This is necessary because each data source requires a unique regex to parse it. Unregistered data sources are skipped (or abort the program when debug is True).
129 |     # To see a list of registered data sources see the getfiletype() function
130 |     for arg in args:
131 |         line_to_test = linecache.getline(arg,1)
132 |         line_to_test = line_to_test.strip()
133 |         if getfiletype(line_to_test) is not None:
134 |             datasources.append(getfiletype(line_to_test))
135 |         else:
136 |             err="%s is not a registered datasource.\n" % arg
137 |             sys.stderr.write(err)
138 |             if debug:
139 |                 print "Tested line:\n %s" % line_to_test
140 |                 exit(1)
141 |             else:
142 |                 continue
143 | 
144 |     if len(datasources) <= 0:
145 |         print "No registered datasources detected. Please provide a proper datasource in the arguments. See the help docs:\n%s -h\n%s --help" % (argv[0],argv[0])
146 |         exit(1)
147 | 
148 |     # Create a list of DataObjects from the datasources list and store them in the data list
149 |     for source in datasources:
150 |         data.append(DataObject(source['file'],source['regex']))
151 | 
152 |     newest=get_newest(data)
153 |     oldest=get_oldest(data)
154 |     datasources_len=len(datasources)
155 | 
156 |     #Write the header
157 |     if options.format == 'gnuplot':
158 |         headerstr="#Timestamp"
159 |     else:
160 |         headerstr="Timestamp"
161 |     for source in datasources:
162 |         headerstr += delim + source['fieldname']
163 |     #if options.libretime:
164 |     #    headerstr += delim + "Time"
165 |     sys.stdout.write(headerstr+"\n")
166 | 
167 |     #Write out the synchronized timestamp data in CSV format for all datasources
168 |     last=""
169 |     current=""
170 |     for i in range(oldest,newest+1):
171 |         #do we use the libreoffice compatible timestamp or keep the unix timestamp?
172 |         if options.libretime:
173 |             current = str((i-3600.0*4)/3600/24+25569)
174 |         else:
175 |             current = str(i)
176 |         #build the output string by iterating through all the data list DataObjects
177 |         for dataobj in data:
178 |             current += delim + dataobj.getCurrentdata()
179 |             dataobj.incrementIndex(i)
180 |         #if there's a duplicate data entry (not including the timestamp) then don't print it
181 |         if not (current.split(delim,1)[1] == last):
182 |             sys.stdout.write(current+"\n")
183 |         #set the last so that duplicates do not get printed
184 |         last = current.split(delim,1)[1]
185 | 
186 | 
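# Worked example of the alignment loop above, with hypothetical samples:
# source A has (5.0 @ t=100) and (7.0 @ t=103); source B has (2.0 @ t=101).
# Every second from the oldest to the newest timestamp emits each source's
# current sample (a source's index only advances once its next sample's
# timestamp is reached), and rows whose data columns did not change are
# suppressed, so the CSV output would be:
#   Timestamp,A,B
#   100,5.0,2.0
#   103,7.0,2.0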
-d "$jvm_testing_pwd/results/old" ];then 29 | mkdir "$jvm_testing_pwd/results/old" 30 | fi 31 | x=${x##*/} 32 | if [ -e "$jvm_testing_pwd/results/${x%_live_trend}_stats.txt" ];then 33 | echo "moving $jvm_testing_pwd/results/${x%_live_trend}_stats.txt to $jvm_testing_pwd/results/old/${x%_live_trend}_stats_$dates.txt" 34 | mv $jvm_testing_pwd/results/${x%_live_trend}_stats.txt $jvm_testing_pwd/results/old/${x%_live_trend}_stats_$dates.txt 35 | fi 36 | done 37 | 38 | #start all enabled live_trend programs in the trendprograms/ folder 39 | # i.e. any program which ends with "_live_trend" 40 | for x in $jvm_testing_pwd/trendprograms/*_live_trend;do 41 | y=${x##*/} 42 | y=$jvm_testing_pwd/results/${y%_live_trend}_stats.txt 43 | echo "starting $x" 44 | $x >> $y & 45 | echo $! >> $jobpids_file 46 | done 47 | 48 | -------------------------------------------------------------------------------- /live_trends/run_tests_manually: -------------------------------------------------------------------------------- 1 | #see README file for a description of this file 2 | # By Sam Gleske (sag47) 3 | # Created 2012/04/10 4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux 5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) 6 | # 7 | # Spawns off jobs for keeping track of system statics 8 | # This is meant to be an include of your current shell for job control 9 | # see jobs, fg, bg, kill commands 10 | if [ ! -d "results" ];then 11 | mkdir results 12 | fi 13 | #keep around old trend stats 14 | for x in ./trendprograms/*_live_trend;do 15 | x=${x##*/} 16 | if [ -e "./results/${x%_live_trend}_stats.txt" ];then 17 | echo "moving ./results/${x%_live_trend}_stats.txt to ./results/${x%_live_trend}_stats_$dates.txt" 18 | mv ./results/${x%_live_trend}_stats.txt ./results/${x%_live_trend}_stats_$dates.txt 19 | fi 20 | done 21 | ./trendprograms/jvm_memory_live_trend >> ./results/jvm_memory_stats.txt & 22 | ./trendprograms/jvm_oracleconnections_live_trend >> ./results/jvm_oracleconnections_stats.txt & 23 | ./trendprograms/open_file_descriptors_live_trend >> ./results/open_file_descriptors_stats.txt & 24 | -------------------------------------------------------------------------------- /live_trends/trendprograms/15minute_load_live_trend: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # By Sam Gleske (sag47) 3 | # Created Thu Apr 12 14:50:49 EDT 2012 4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux 5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu) 6 | # GNU Awk 3.1.5 7 | # 8 | # live_trend info 9 | # This live_trend program samples every second. 10 | # This program will not output if the current second sample is the same as the last second sample. 11 | # That way only unique entries with associated timestamps will be output. 12 | # All proceeding timestamp entries not output are assumed to be the same value as the current timestamp value. 13 | # 14 | # 15minute_load_live_trend info 15 | # This trending program shows the 15 minute system load. 16 | # You configure some of the variables for the specific user you wish to analyze. 
19 | 
20 | # CONFIGURE VARIABLES
21 | #no variables to configure
22 | 
23 | # END CONFIGURE VARIABLES
24 | while true;do
25 |   current="15 minute load = $(uptime | cut -d, -f6 | awk '{print $1}')" #field position assumes the host has been up for more than a day
26 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
27 |   dates=$(date '+%s')
28 |   if [ "$current" != "$last" ];then
29 |     echo "$current; $date; $dates";
30 |   fi
31 |   last=$current
32 |   sleep 1
33 | done
34 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/1minute_load_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created Thu Apr 12 14:50:49 EDT 2012
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every second.
10 | # This program will not output if the current second sample is the same as the last second sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # 1minute_load_live_trend info
15 | # This trending program shows the 1 minute system load.
16 | # You configure some of the variables for the specific user you wish to analyze. See # CONFIGURE VARIABLES section
17 | # Output format
18 | # 1 minute load = ####; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
19 | 
20 | # CONFIGURE VARIABLES
21 | #no variables to configure
22 | 
23 | # END CONFIGURE VARIABLES
24 | while true;do
25 |   current="1 minute load = $(uptime | cut -d, -f4 | awk '{print $3}')" #field position assumes the host has been up for more than a day
26 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
27 |   dates=$(date '+%s')
28 |   if [ "$current" != "$last" ];then
29 |     echo "$current; $date; $dates";
30 |   fi
31 |   last=$current
32 |   sleep 1
33 | done
34 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/5minute_load_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created Thu Apr 12 14:50:49 EDT 2012
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every second.
10 | # This program will not output if the current second sample is the same as the last second sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # 5minute_load_live_trend info
15 | # This trending program shows the 5 minute system load.
16 | # You configure some of the variables for the specific user you wish to analyze. See # CONFIGURE VARIABLES section
17 | # Output format
18 | # 5 minute load = ####; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
19 | 
20 | # CONFIGURE VARIABLES
21 | #no variables to configure
22 | 
23 | # END CONFIGURE VARIABLES
24 | while true;do
25 |   current="5 minute load = $(uptime | cut -d, -f5 | awk '{print $1}')" #field position assumes the host has been up for more than a day
26 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
27 |   dates=$(date '+%s')
28 |   if [ "$current" != "$last" ];then
29 |     echo "$current; $date; $dates";
30 |   fi
31 |   last=$current
32 |   sleep 1
33 | done
34 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/httpd_memory_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created 2012/04/30
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every second.
10 | # This program will not output if the current second sample is the same as the last second sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # httpd_memory_live_trend info
15 | # This trending program shows the httpd memory usage over time.
16 | # Output format
17 | # httpd Memory Stats Info; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
18 | 
19 | # CONFIGURE VARIABLES
20 | user="apache"
21 | proc="httpd"
22 | #END CONFIGURE VARIABLES
23 | 
24 | sysmem="$(awk '$1 == "MemTotal:" {print $2/1024}' /proc/meminfo)"
25 | while true;do
26 |   # perc="$(ps axo user,pid,ppid,%mem,comm | awk '($5 == "'$proc'") && ($1 == "'$user'") && (last != $4) {last=$4;print $4}')"
27 |   perc="$(ps axo user,pid,ppid,%mem,comm | awk 'BEGIN{memusage=0.0}; ($5 == "'$proc'") && ($1 == "'$user'"){memusage=memusage+$4}; END{print memusage}')"
28 |   if [ -z "$perc" ];then
29 |     mem="0.0"
30 |   else
31 |     mem="$(echo "$perc" | awk '{print $1*'$sysmem'/100}')"
32 |   fi
33 |   mem_perc="$(echo "$mem" | awk '{print $1/'$sysmem'*100}')"
34 |   current="$proc Memory = $mem / $sysmem MB ($mem_perc %)"
35 |   date="$(date '+%Y-%m-%d %I:%M:%S %p')"
36 |   dates="$(date '+%s')"
37 |   if [ "$current" != "$last" ];then
38 |     echo "$current; $date; $dates";
39 |   fi
40 |   last="$current"
41 |   sleep 1;
42 | done
43 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/jvm_memory_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created 2012/04/10
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every second.
10 | # This program will not output if the current second sample is the same as the last second sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # jvm_memory_live_trend info
15 | # This trending program shows the JVM memory usage over time.
16 | # You configure some of the variables for the specific JVM you wish to analyze. See # CONFIGURE VARIABLES section
17 | # Output format
18 | # JVM Memory Stats Info; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
19 | 
20 | # CONFIGURE VARIABLES
21 | jvmuser="tomcat" #this is the user of the spawned subshell running the jvm
22 | pathgrep="tomcat-du" #this is a value in the full pathname of the jvm server which is unique compared to other jvm servers under the same jvmuser.
23 | max_jvm_memory=4096 #this value should be the same as what is set from $JAVA_OPTS = -Xmx4096m
24 | 
25 | #END CONFIGURE VARIABLES
26 | #get the system memory in MB
27 | #sysmem=$(grep 'MemTotal:' /proc/meminfo | awk '{print $2}' | sed 's/\(.*\)/\1\/1024/' | bc)
28 | sysmem=$(awk '$1 == "MemTotal:" {print $2/1024}' /proc/meminfo)
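# e.g. with hypothetical numbers, if sysmem=7872 and ps reports %mem=13.0 then
# the loop below computes mem = 13.0*7872/100 ~= 1023.4 MB and
# mem_perc = 1023.4/4096*100 ~= 25 % of max_jvm_memory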
29 | 
30 | while true;do
31 |   perc=$(ps u --user $jvmuser | awk '($0 !~ /openoffice/) && ($0 ~ /'$pathgrep'/) { print $4}')
32 |   #perc=$(ps u --user $jvmuser | grep $pathgrep | grep -v 'openoffice' | awk '{print $4}')
33 |   if [ -z "$perc" ];then #if the jvm shuts down then assume memory usage is zero
34 |     #echo "No $pathgrep instance found, exiting jvm_memory_live_trend" > /dev/stderr
35 |     mem="0.0"
36 |   else
37 |     mem=$(echo "$perc" | awk '{print $1*'$sysmem'/100}')
38 |   fi
39 |   mem_perc=$(echo "$mem" | awk '{print $1/'$max_jvm_memory'*100}')
40 |   #mem_perc=$(echo "$mem/$max_jvm_memory*100" | bc -l)
41 |   current="JVM Memory = $mem / $max_jvm_memory MB ($mem_perc %)"
42 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
43 |   dates=$(date '+%s')
44 |   if [ "$current" != "$last" ];then
45 |     echo "$current; $date; $dates";
46 |   fi
47 |   last=$current
48 |   sleep 1;
49 | done
50 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/jvm_oracleconnections_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created 2012/04/10
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every second.
10 | # This program will not output if the current second sample is the same as the last second sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # jvm_oracleconnections_live_trend info
15 | # This trending program shows the number of connections a JVM makes to an Oracle Database.
16 | # You configure some of the variables for the specific JVM you wish to analyze. See # CONFIGURE VARIABLES section
17 | # Output format
18 | # Number of OracleDB connections: ####; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
19 | 
20 | # CONFIGURE VARIABLES
21 | jvmuser="tomcat" #this is the user of the spawned subshell running the jvm
22 | pathgrep="tomcat-du" #this is a value in the full pathname of the jvm server which is unique compared to other jvm servers under the same jvmuser.
23 | oracleport="2337" #the port that connections to the Oracle database use.
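# e.g. to double check the port, list the JVM's established connections
# (hypothetical invocation; the -p owner column requires root):
# netstat -anp | grep java | grep ESTABLISHED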
24 | 
25 | # END CONFIGURE VARIABLES
26 | while true;do
27 |   #pid=$(ps u --user $jvmuser | grep $pathgrep | awk '{print $2}')
28 |   pid=$(ps u --user $jvmuser | awk '($0 !~ /openoffice/) && ($0 ~ /'$pathgrep'/) {print $2}')
29 |   current="Number of OracleDB connections: $(netstat -anp | grep "$oracleport" | grep "$pid/java" | wc -l)"
30 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
31 |   dates=$(date '+%s')
32 |   if [ "$current" != "$last" ];then
33 |     echo "$current; $date; $dates";
34 |   fi
35 |   last=$current
36 |   sleep 1
37 | done
38 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/open_file_descriptors_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created 2012/04/10
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | #
7 | # live_trend info
8 | # This live_trend program samples every second.
9 | # This program will not output if the current second sample is the same as the last second sample.
10 | # That way only unique entries with associated timestamps will be output.
11 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
12 | #
13 | # open_file_descriptors_live_trend info
14 | # This trending program shows the number of open file descriptors over time. This is useful because users are limited to 1024 (see ulimit -n).
15 | # You configure some of the variables for the specific user you wish to analyze. See # CONFIGURE VARIABLES section
16 | # Output format
17 | # Number of open file descriptors = ####; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
18 | 
19 | # CONFIGURE VARIABLES
20 | user="jboss"
21 | 
22 | # END CONFIGURE VARIABLES
23 | while true;do
24 |   current="$user user open file descriptors = $(lsof | awk 'BEGIN{count=0}; ($3 == "'$user'") {count++}; END{print count}')"
25 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
26 |   dates=$(date '+%s')
27 |   if [ "$current" != "$last" ];then
28 |     echo "$current; $date; $dates";
29 |   fi
30 |   last=$current
31 |   sleep 1
32 | done
33 | 
--------------------------------------------------------------------------------
/live_trends/trendprograms/thread_dump_count_live_trend:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created Tue Feb 5 10:20:31 EST 2013
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | # GNU Awk 3.1.5
7 | #
8 | # live_trend info
9 | # This live_trend program samples every 5 seconds.
10 | # This program will not output if the current sample is the same as the last sample.
11 | # That way only unique entries with associated timestamps will be output.
12 | # All subsequent timestamp entries not output are assumed to have the same value as the last output entry.
13 | #
14 | # thread_dump_count_live_trend info
15 | # This trending program counts up the threads on a jvm and displays a sum.
16 | # You configure some of the variables for the specific user you wish to analyze. See # CONFIGURE VARIABLES section
17 | # Documentation on thread states:
18 | # http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Thread.State.html
19 | # Output format
20 | # Thread breakdown = ## ## ## ## ## ## ##; local date and time in human readable format; date in seconds since 1970-01-01 00:00:00 UTC
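# e.g. one line of output with hypothetical counts, in the column order
# total new runnable blocked waiting timed_waiting terminated:
# Thread breakdown = 120 0 45 2 60 13 0; 2013-02-05 10:20:31 AM; 1360077631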
21 | 
22 | # CONFIGURE VARIABLES
23 | org="du"
24 | TOMCAT_HOME="/app/tomcat-$org"
25 | TOMCAT_USER="tomcat"
26 | JAVA_HOME="/app/java"
27 | 
28 | # END CONFIGURE VARIABLES
29 | pid=$(ps aux | grep "^$TOMCAT_USER" | grep java | grep "$TOMCAT_HOME" | awk '{print $2}')
30 | 
31 | while true;do
32 | 
33 |   #GATHER DATA USING AWK SCRIPT
34 |   data="$(su - $TOMCAT_USER -c "$JAVA_HOME/bin/jstack $pid" | awk '
35 |   BEGIN {
36 |     total=0;
37 |     new=0;
38 |     runnable=0;
39 |     blocked=0;
40 |     waiting=0;
41 |     timed_waiting=0;
42 |     terminated=0;
43 |   };
44 |   $0 ~ /java.lang.Thread.State/ {
45 |     total=total+1;
46 |     if($2 == "NEW") {
47 |       new=new+1;
48 |     }
49 |     if($2 == "RUNNABLE") {
50 |       runnable=runnable+1;
51 |     }
52 |     if($2 == "BLOCKED") {
53 |       blocked=blocked+1;
54 |     }
55 |     if($2 == "WAITING") {
56 |       waiting=waiting+1;
57 |     }
58 |     if($2 == "TIMED_WAITING") {
59 |       timed_waiting=timed_waiting+1;
60 |     }
61 |     if($2 == "TERMINATED") {
62 |       terminated=terminated+1;
63 |     }
64 |   };
65 |   END{print total,new,runnable,blocked,waiting,timed_waiting,terminated}
66 |   ')"
67 |   #END OF AWK SCRIPT
68 | 
69 |   current="Thread breakdown = $data"
70 |   date=$(date '+%Y-%m-%d %I:%M:%S %p')
71 |   dates=$(date '+%s')
72 |   if [ "$current" != "$last" ];then
73 |     echo "$current; $date; $dates";
74 |   fi
75 |   last=$current
76 |   sleep 5
77 | done
78 | 
--------------------------------------------------------------------------------
/live_trends/view_result_file:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # By Sam Gleske (sag47)
3 | # Created 2012/04/10
4 | # Linux 2.6.18-194.11.4.el5 x86_64 GNU/Linux
5 | # GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
6 | #
7 | # This program cats the file of your choice by showing the whole file and then following future output updates (tail -f).
8 | # If you do not need this behavior then you can simply run tail -f jvm_memory_stats.txt or any other result file.
9 | # Example Usage:
10 | # $ view_result_file ./results/jvm_memory_stats.txt
11 | #
12 | #this file is a good way to cat the result files generated by
13 | # jvm_memory_live_trend
14 | # jvm_oracleconnections_live_trend
15 | # open_file_descriptors_live_trend
16 | tail -f -n $(wc -l $1 | awk '{print $1}') $1
17 | 
--------------------------------------------------------------------------------
/munin/plugins/README.md:
--------------------------------------------------------------------------------
1 | # Munin Plugins
2 | 
3 | This is a set of munin plugins which I wrote or coauthored.
4 | 
5 | ---
6 | ## java\_vm\_time
7 | 
8 | This script uses [parsegarbagelogs.py](https://github.com/sag47/drexel-university/blob/master/icinga/plugins/jvm_health/). To set this plugin up you must first get `parsegarbagelogs.py` working. From there you must use symlinks to execute the different plugin types for monitoring Java with munin.
9 | 
10 |     #e.g. let's say we place it at /usr/share/munin/plugins/java_vm_time
11 |     source="/usr/share/munin/plugins/java_vm_time"
12 |     ln -s $source /etc/munin/plugins/java_graph
13 |     ln -s $source /etc/munin/plugins/java_vm_threads
14 |     ln -s $source /etc/munin/plugins/java_vm_time
15 |     ln -s $source /etc/munin/plugins/java_vm_uptime
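Once the symlinks are in place each plugin can be sanity checked by hand before restarting munin-node (assuming a standard munin install, whose munin-run utility executes a plugin the way munin-node would):

    munin-run java_graph config
    munin-run java_graph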
--------------------------------------------------------------------------------
/munin/plugins/java_vm_time:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | #A plugin to send jvm statistics to munin for graphing
4 | 
5 | 
6 | SQLFILE=/var/db/sqldb #SQLite database to store the information we're collecting
7 | PYTHON_FILE=/usr/local/sbin/parsegarbagelogs.py
8 | MEM_GRAPH="java_graph"
9 | TIME_GRAPH="java_vm_time"
10 | UPTIME_GRAPH="java_vm_uptime"
11 | THREAD_GRAPH="java_vm_threads"
12 | BASENAME=`basename $0`
13 | 
14 | if [ "$BASENAME" = "$MEM_GRAPH" ]; then
15 |   if [ "$1" = "config" ]; then
16 |     echo "graph_category JVM statistics"
17 |     echo "graph_title JVM Memory usage"
18 |     echo "graph_order S O P"
19 |     echo "graph_vlabel usage"
20 |     echo "usage.label usage"
21 |     echo "graph_info This graph shows JVM memory usage."
22 |     echo "graph_scale no"
23 |     echo "S.label Survivor1_Space"
24 |     echo "O.label Old_Space"
25 |     echo "P.label Perm_Space"
26 |     exit 0
27 |   fi
28 |   if [ -e $SQLFILE ]; then
29 |     #DATA=$(sqlite3 $SQLFILE "select *,ROWID from datapoints ORDER BY ROWID DESC Limit 1")
30 |     DATA=$($PYTHON_FILE -g -s $SQLFILE)
31 |     echo "S.value `echo $DATA | cut -d ',' -f 2`"
32 |     echo "O.value `echo $DATA | cut -d ',' -f 3`"
33 |     echo "P.value `echo $DATA | cut -d ',' -f 4`"
34 |   else
35 |     echo "Couldn't find sql db file"
36 |     exit 1
37 |   fi
38 | elif [ "$BASENAME" = "$TIME_GRAPH" ]; then
39 |   if [ "$1" = "config" ]; then
40 |     echo "graph_category JVM statistics"
41 |     echo "graph_title JVM Garbage Collection Time Spent"
42 |     echo "graph_order YGC_Time FGC_Time TGC_Time Uptime"
43 |     echo "graph_vlabel time"
44 |     echo "time.label Time"
45 |     echo "graph_info This graph shows how long the JVM spends garbage collecting."
46 |     echo "graph_scale no"
47 |     echo "YGC_Time.label Young_GC_Time"
48 |     echo "FGC_Time.label Full_GC_Time"
49 |     echo "TGC_Time.label Total_GC_Time"
50 |     exit 0
51 |   fi
52 |   if [ -e $SQLFILE ]; then
53 |     #DATA=$(sqlite3 $SQLFILE "select *,ROWID from datapoints ORDER BY ROWID DESC Limit 1")
54 |     DATA=$($PYTHON_FILE -g -s $SQLFILE)
55 |     echo "YGC_Time.value `echo $DATA | cut -d ',' -f 6`"
56 |     echo "FGC_Time.value `echo $DATA | cut -d ',' -f 8`"
57 |     echo "TGC_Time.value `echo $DATA | cut -d ',' -f 9`"
58 |   else
59 |     echo "Couldn't find sql db file"
60 |     exit 1
61 |   fi
62 | elif [ "$BASENAME" = "$UPTIME_GRAPH" ]; then
63 |   if [ "$1" = "config" ]; then
64 |     echo "graph_category JVM statistics"
65 |     echo "graph_title JVM Uptime"
66 |     echo "graph_order Uptime"
67 |     echo "graph_vlabel time"
68 |     echo "time.label Time"
69 |     echo "graph_info This graph shows how long the JVM has been up."
70 |     echo "graph_scale no"
71 |     echo "Uptime.label Uptime"
72 |     exit 0
73 |   fi
74 |   if [ -e $SQLFILE ]; then
75 |     #DATA=$(sqlite3 $SQLFILE "select *,ROWID from datapoints ORDER BY ROWID DESC Limit 1")
76 |     DATA=$($PYTHON_FILE -g -s $SQLFILE)
77 |     echo "Uptime.value `echo $DATA | cut -d ',' -f 1`"
78 |   else
79 |     echo "Couldn't find sql db file"
80 |     exit 1
81 |   fi
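# NOTE: the thread state graph below shells out with su - $TOMCAT_USER, so this
# plugin has to run as root. With munin that is typically granted in
# /etc/munin/plugin-conf.d/munin-node with a stanza like the following (assumed
# configuration; adjust the plugin names to match your symlinks):
#   [java_vm_*]
#   user root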
82 | elif [ "$BASENAME" = "$THREAD_GRAPH" ]; then
83 |   if [ "$1" = "config" ]; then
84 |     echo "graph_category JVM statistics"
85 |     echo "graph_title JVM Thread States (tomcat-du)"
86 |     echo "graph_order TD_Total TD_Waiting TD_TWaiting TD_Runnable TD_New TD_Blocked TD_Terminated"
87 |     echo "graph_vlabel threads"
88 |     echo "time.label Time"
89 |     echo "graph_info This graph shows thread states statistics from a jstack thread dump for the tomcat-du JVM."
90 |     echo "graph_scale no"
91 |     echo "TD_Total.label TOTAL"
92 |     echo "TD_New.label NEW"
93 |     echo "TD_Runnable.label RUNNABLE"
94 |     echo "TD_Blocked.label BLOCKED"
95 |     echo "TD_Waiting.label WAITING"
96 |     echo "TD_TWaiting.label TIMED_WAITING"
97 |     echo "TD_Terminated.label TERMINATED"
98 |     exit 0
99 |   fi
100 |   #GATHER DATA USING AWK SCRIPT
101 |   org="du"
102 |   TOMCAT_HOME="/app/tomcat-$org"
103 |   TOMCAT_USER="tomcat"
104 |   JAVA_HOME="/app/java"
105 |   pid=$(ps aux | grep "^$TOMCAT_USER" | grep java | grep "$TOMCAT_HOME" | awk '{print $2}')
106 |   #NOTE! Due to a delay with dumping threads one might need to dump the threads
107 |   #to a temporary file and then use the cat command to get the output for the awk script.
108 |   #Set the thread dump job in cron and just parse the /tmp file.
109 |   DATA="$(su - $TOMCAT_USER -c "$JAVA_HOME/bin/jstack $pid" | awk '
110 |   BEGIN {
111 |     total=0;
112 |     new=0;
113 |     runnable=0;
114 |     blocked=0;
115 |     waiting=0;
116 |     timed_waiting=0;
117 |     terminated=0;
118 |   };
119 |   $0 ~ /java.lang.Thread.State/ {
120 |     total=total+1;
121 |     if($2 == "NEW") {
122 |       new=new+1;
123 |     }
124 |     if($2 == "RUNNABLE") {
125 |       runnable=runnable+1;
126 |     }
127 |     if($2 == "BLOCKED") {
128 |       blocked=blocked+1;
129 |     }
130 |     if($2 == "WAITING") {
131 |       waiting=waiting+1;
132 |     }
133 |     if($2 == "TIMED_WAITING") {
134 |       timed_waiting=timed_waiting+1;
135 |     }
136 |     if($2 == "TERMINATED") {
137 |       terminated=terminated+1;
138 |     }
139 |   };
140 |   END{print total,new,runnable,blocked,waiting,timed_waiting,terminated}
141 |   ')"
142 |   #END OF AWK SCRIPT DATA GATHERING
143 |   echo "TD_Total.value `echo "$DATA" | cut -d ' ' -f 1`"
144 |   echo "TD_New.value `echo "$DATA" | cut -d ' ' -f 2`"
145 |   echo "TD_Runnable.value `echo "$DATA" | cut -d ' ' -f 3`"
146 |   echo "TD_Blocked.value `echo "$DATA" | cut -d ' ' -f 4`"
147 |   echo "TD_Waiting.value `echo "$DATA" | cut -d ' ' -f 5`"
148 |   echo "TD_TWaiting.value `echo "$DATA" | cut -d ' ' -f 6`"
149 |   echo "TD_Terminated.value `echo "$DATA" | cut -d ' ' -f 7`"
150 | fi
151 | 
--------------------------------------------------------------------------------