├── .gitignore
├── DOC
├── P0f.INSTALL.md
├── README.md
├── conpot.INSTALL.md
├── dionaea.INSTALL.md
├── glastopf.INSTALL.md
└── kippo.INSTALL.md
├── README.md
├── TODO.md
├── conpot
├── .gitignore
└── conpot.cfg
├── dionaea
├── DionaeaFR
│ ├── .gitignore
│ ├── dionaeafr_init.sh
│ └── settings.py
├── README.md
├── dionaea.conf
├── dionaea.logrotate
├── logsql.py
└── modules_python_util
│ ├── Makefile.am
│ ├── csv2sqlite.py
│ ├── gnuplotsql.py
│ ├── gnuplotsql
│ ├── gnuplot.example
│ └── gnuplot.svg.example
│ ├── logsql2postgres.py
│ ├── readlogsqltree.py
│ ├── retry.py
│ ├── updateccs.py
│ └── xmpp
│ ├── pg_backend.py
│ └── pg_schema.sql
├── dns
├── README.md
├── db.root.honeypot
├── named.conf
└── named.conf.options
├── elk
├── README.md
├── TODO.md
├── _grokparsefailure.png
├── conpot.singlelogline.py
├── cudeso-dionaea.json
├── dionaea-singlelogline.py
├── elk-import.logrotate
├── glastopf-singlelogline.py
├── hp1.png
├── hp2.png
├── hp3.png
├── hp4.png
├── hp5.png
├── hp6.png
├── inspect-to-csv
│ ├── README.md
│ ├── inspect-to-csv.py
│ └── request.inspect
├── logstash.conf
└── query_ELK.py
├── glastopf
└── .gitignore
├── kippo
├── kippo.cfg
├── kippo.logrotate
└── kippo
│ └── start.sh
└── p0f
└── p0f_init.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | .DS_Store
3 | */.DS_Store
4 |
5 | kippo/kippo-graph/
6 |
7 | dionaea/DionaeaFR/dionaeafr_init.sh
8 |
--------------------------------------------------------------------------------
/DOC/P0f.INSTALL.md:
--------------------------------------------------------------------------------
1 | # Passive traffic fingerprinting
2 |
3 | ## Set p0f to start at boot
4 |
5 | ```
6 | update-rc.d p0f_init.sh defaults
7 | ```
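
The init script referenced above ships in this repository as **p0f/p0f_init.sh**; it has to be placed in /etc/init.d before update-rc.d can register it. A sketch (the clone location of the repository is an assumption):

```
sudo cp cudeso-honeypot/p0f/p0f_init.sh /etc/init.d/p0f_init.sh
sudo chmod +x /etc/init.d/p0f_init.sh
sudo update-rc.d p0f_init.sh defaults
```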
8 |
9 |
--------------------------------------------------------------------------------
/DOC/README.md:
--------------------------------------------------------------------------------
1 | # System setup
2 |
3 | ## Set the time zone
4 |
5 | It's important to have the correct timezone set on your machine. This helps with correlating events with other sources / machines.
6 |
7 | ```
8 | dpkg-reconfigure tzdata
9 | ```
10 |
11 | ## Have GIT installed
12 |
13 | A lot of the configuration and tools used in this setup are hosted on GitHub, so you'll need the git command.
14 |
15 | ```
16 | apt-get install git
17 | ```
18 |
19 | ## Clone this repository
20 |
21 | ```
22 | git clone https://github.com/cudeso/cudeso-honeypot.git
23 | ```
24 |
25 |
26 |
27 | # Honeypot installs
28 |
29 | ## Dionaea
30 |
31 | Proceed with the documentation for dionaea.
32 |
33 |
--------------------------------------------------------------------------------
/DOC/conpot.INSTALL.md:
--------------------------------------------------------------------------------
1 | # Honeypot
2 | Fairly simple low-interaction honeypot setups
3 | Koen Van Impe
4 |
5 | # Conpot
6 |
7 | Conpot is an ICS honeypot
8 |
9 | ------------------------------------------------------------------------------------------
10 |
11 | # Install
12 |
13 | From http://glastopf.github.io/conpot/installation/ubuntu.html
14 |
15 | ```
16 | sudo apt-get install libsmi2ldbl snmp-mibs-downloader python-dev libevent-dev libxslt1-dev libxml2-dev
17 | ```
18 |
19 | If you get an error **E: Package 'snmp-mibs-downloader' has no installation candidate** then you will have to enable the multiverse repository. Edit **/etc/apt/sources.list** accordingly and run **sudo apt-get update**.
20 |
21 | ```
22 | cd /opt
23 | git clone https://github.com/glastopf/conpot.git
24 | cd conpot
25 | python setup.py install
26 | ```
27 |
28 | This will install the necessary dependencies and the conpot python package. The python package ends up in a location similar to **/usr/local/lib/python2.7/dist-packages/Conpot-0.3.1-py2.7.egg/**.
29 |
30 | # Starting conpot
31 |
32 | Conpot needs root privileges (because some services bind to ports below 1024). It drops privileges to nobody/nogroup once started.
33 | You can start the honeypot with
34 |
35 | ```
36 | sudo conpot
37 | ```
38 |
39 | You'll get a list of available templates if you start it with no options:
40 |
41 | * --template kamstrup_382
42 |     * Kamstrup 382 smart meter
43 |     * Services
44 |         * Kamstrup (tcp/1025)
45 |         * Kamstrup (tcp/50100)
46 | * --template proxy
47 |     * Demonstrating the proxy feature
48 |     * Services
49 |         * Kamstrup Channel A proxy server (tcp/1025)
50 |         * Kamstrup Channel B proxy server (tcp/1026)
51 |         * SSL proxy (tcp/1234)
52 |         * Kamstrup telnet proxy server (tcp/50100)
53 | * --template default
54 |     * Siemens S7-200 CPU with 2 slaves
55 |     * Services
56 |         * Modbus (tcp/502)
57 |         * S7Comm (tcp/102)
58 |         * HTTP (tcp/80)
59 |         * SNMP (udp/161)
60 |
61 | If you start conpot with the **-h** option you get a list of configuration options. The three most useful are:
62 |
63 | * --template : the template to use
64 | * --config : the location of the config file
65 | * --logfile : where to write the logs
66 |
67 | The default logging is to a file **conpot.log** in the current directory.
68 |
69 | I usually start it with
70 |
71 | ```
72 | conpot --config /etc/conpot/conpot.cfg --logfile /var/log/conpot/conpot.log --template default
73 | ```
74 |
75 | # Configuration
76 |
77 | The configuration is in the file **conpot.cfg**.
78 |
79 | ## Services configured for proxy template
80 |
81 | By default the proxy template has no http, snmp, etc. services configured; starting it reports:
82 |
83 | ```
84 | No modbus template found. Service will remain unconfigured/stopped.
85 | No s7comm template found. Service will remain unconfigured/stopped.
86 | No kamstrup_meter template found. Service will remain unconfigured/stopped.
87 | No kamstrup_management template found. Service will remain unconfigured/stopped.
88 | No http template found. Service will remain unconfigured/stopped.
89 | No snmp template found. Service will remain unconfigured/stopped.
90 | ```
91 |
92 | ## Adding a template
93 |
94 | The easiest way to add a service template is to copy it from an existing one. For example, to add the http service template to the proxy template you can simply copy it from the 'default' template.
95 |
96 | If you're running conpot from the installed package then you'll have to reinstall it afterwards (sudo python setup.py install).
97 |
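
The copy can be sketched as follows; the template layout under conpot/templates/ varies between conpot versions, so check your checkout first (paths assume the /opt/conpot clone from the install steps):

```
sudo cp -r /opt/conpot/conpot/templates/default/http /opt/conpot/conpot/templates/proxy/
cd /opt/conpot && sudo python setup.py install
```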
98 | ## Fetching public IP
99 |
100 | Sometimes you'll notice outgoing tcp/80 connections when starting conpot. This is because it tries to obtain its public IP. By default the service at telize.com is used. You can change this by altering the configuration setting:
101 |
102 | ```
103 | [fetch_public_ip]
104 | enabled = True
105 | urls = ["http://www.telize.com/ip", "http://queryip.net/ip/", "http://ifconfig.me/ip"]
106 | ```
107 |
108 | ## Database configuration
109 |
110 | ### mysql
111 | Out of the box conpot will log to a flat file. If you prefer mysql then first create a database, set proper permissions and change the settings in the config file.
112 |
113 | ```
114 | mysql> create database conpot;
115 | mysql> create user 'conpot'@'localhost' identified by 'conpot';
116 | mysql> grant all privileges on conpot.* to 'conpot'@'localhost';
117 | mysql> flush privileges;
118 | ```
119 |
120 | Do not worry that the database is empty, without tables; conpot will create the necessary tables when it starts.
121 | In conpot.cfg change this:
122 |
123 | ```
124 | [mysql]
125 | enabled = True
126 | device = /tmp/mysql.sock
127 | host = localhost
128 | port = 3306
129 | db = conpot
130 | username = conpot
131 | passphrase = conpot
132 | socket = tcp ; tcp (sends to host:port), dev (sends to mysql device/socket file)
133 | ```
134 |
135 | Do not leave out any of the settings. If you are not using sockets you might be tempted to leave out 'device'. This will prevent conpot from starting.
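
Forgetting a key is easy to do, so a small sanity check before starting conpot can help. This is a helper of my own (not part of conpot); the required-key list mirrors the sample section above:

```python
# Sketch: warn about missing keys in the [mysql] section of a conpot-style
# config before starting the honeypot. The required-key list mirrors the
# sample configuration above; this is a custom helper, not part of conpot.
from configparser import ConfigParser

REQUIRED_MYSQL_KEYS = {"device", "host", "port", "db",
                       "username", "passphrase", "socket"}

def missing_mysql_keys(cfg_text):
    """Return the [mysql] keys that are absent from the config text."""
    parser = ConfigParser(inline_comment_prefixes=(";",))
    parser.read_string(cfg_text)
    present = set(parser["mysql"]) if parser.has_section("mysql") else set()
    return sorted(REQUIRED_MYSQL_KEYS - present)

sample = """
[mysql]
enabled = True
host = localhost
port = 3306
db = conpot
username = conpot
passphrase = conpot
socket = tcp
"""
print(missing_mysql_keys(sample))  # ['device'] -- the sample forgot 'device'
```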
136 |
137 | ### sqlite
138 |
139 | Similarly to mysql, you can also configure sqlite in the configuration file.
140 | Conpot will use the path **logs/conpot.db** for storing the sqlite database (see conpot/core/loggers/sqlite_log.py)
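
A minimal sketch of enabling it, matching the [sqlite] section present in the conpot.cfg shipped with this repository:

```
[sqlite]
enabled = True
```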
141 |
142 | ## Other logging features
143 |
144 | Conpot can also log / report to syslog and HPFeeds; both are disabled by default.
145 |
146 | # Usage
147 |
148 | http://glastopf.github.io/conpot/index.html
149 |
150 |
--------------------------------------------------------------------------------
/DOC/dionaea.INSTALL.md:
--------------------------------------------------------------------------------
1 | # Honeypot
2 | Fairly simple low-interaction honeypot setups
3 | Koen Van Impe
4 |
5 | # Dionaea
6 |
7 | Dionaea is a low-interaction honeypot that captures attack payloads and malware.
8 | p0f is a versatile passive OS fingerprinting tool.
9 |
10 | ------------------------------------------------------------------------------------------
11 |
12 |
13 | # Layout
14 |
15 | Dionaea and p0f are installed from packages, but the front-end DionaeaFR is installed from source (git) in /opt/DionaeaFR/.
16 |
17 | # Install dionaea
18 |
19 | The install info is partly from http://www.cyberbrian.net/2014/09/install-dionaea-ubuntu-14-04/
20 |
21 | ## Installation
22 |
23 | ```
24 | sudo apt-get update
25 | sudo apt-get upgrade
26 | sudo apt-get install software-properties-common python-software-properties
27 | sudo add-apt-repository ppa:honeynet/nightly
28 | sudo apt-get update
29 | sudo apt-get install p0f
30 | sudo apt-get install dionaea-phibo
31 | ```
32 |
33 | ## Start p0f
34 |
35 | P0f can be started from the command line with
36 | ```
37 | sudo p0f -i any -u root -Q /var/run/p0f.sock -q -l
38 | ```
39 |
40 | Make sure that the socket (-Q) is also accessible by dionaea. Alternatively you can use the init-script in p0f/p0f_init.sh
41 |
42 | ```
43 | chgrp dionaea /var/run/p0f.sock
44 | ```
45 |
46 | ## Start dionaea
47 |
48 | ```
49 | sudo service dionaea-phibo start
50 | ```
51 |
52 | ## Statistics, optionally use gnuplotsql
53 |
54 | The gnuplotsql utility is not included in the Ubuntu package but you can get it from the dionaea source (you might first have to clone it from dionaea.carnivore.it). The useful modules are in dionaea/modules_python_util.
55 |
56 | ```
57 | sudo apt-get install gnuplot
58 | ./gnuplotsql.py -d /var/lib/dionaea/logsql.sqlite -p smbd -p epmapper -p mssqld -p httpd -p ftpd -D /var/www/html/dionaea-gnuplot/
59 | ```
60 |
61 | # Configuration
62 |
63 | The configuration files are in /etc/dionaea/ and the data files are in /var/lib/dionaea/.
64 | Use the config file in this repository **dionaea/dionaea.conf**.
65 |
66 | ## Enable P0f
67 |
68 | Enable P0f by uncommenting it in the list of **ihandlers** and set the proper path for the socket in
69 | ```
70 | p0f = {
71 | path = "un:///var/run/p0f.sock"
72 | }
73 | ```
74 |
75 | ## Logging
76 |
77 | Enable proper logging in the logging = {} section.
78 |
79 | ## Logrotating
80 |
81 | Make sure that you rotate your logs. You can use the **dionaea.logrotate** script for this (make sure you define the correct path).
82 |
83 | ## SQLITE database scheme
84 |
85 | By default dionaea logs to a sqlite file /var/lib/dionaea/logsql.sqlite.
86 |
87 | The sqlite logstash module needs an ID-column to keep track of the data.
88 | The patch **logsql.py** adds an ID field and keeps it updated with every dionaea connection.
89 |
90 | - Remove the SQLITE database /var/lib/dionaea/logsql.sqlite.
91 | - Apply the patch
92 | - Restart dionaea
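
The point of the ID column can be illustrated with a small standalone sqlite sketch; the table and column names below are illustrative, not dionaea's actual schema:

```python
# Minimal illustration of what an auto-incrementing ID column provides: a
# log shipper can remember the last row it read and only fetch newer rows.
# Table and column names are illustrative, not dionaea's actual schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE connections (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   remote_host TEXT)""")
con.executemany("INSERT INTO connections (remote_host) VALUES (?)",
                [("198.51.100.7",), ("203.0.113.9",)])

# A shipper that stored last_id=1 only picks up rows it has not seen yet.
last_id = 1
rows = con.execute("SELECT id, remote_host FROM connections WHERE id > ?",
                   (last_id,)).fetchall()
print(rows)  # [(2, '203.0.113.9')]
```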
93 |
94 | ## Set dionaea to start at boot
95 |
96 | ```
97 | update-rc.d dionaea-phibo defaults
98 | ```
99 |
100 | # Install dionaeaFR
101 |
102 | See http://www.vanimpe.eu/2014/07/04/install-dionaeafr-web-frontend-dionaea-ubuntu/
103 |
104 | # Start dionaeaFR
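
This repository ships an init script for DionaeaFR (**dionaea/DionaeaFR/dionaeafr_init.sh**) that runs the Django development server on port 8000. A sketch of installing and using it (the /etc/init.d location is an assumption):

```
sudo cp dionaea/DionaeaFR/dionaeafr_init.sh /etc/init.d/dionaeafr
sudo chmod +x /etc/init.d/dionaeafr
sudo /etc/init.d/dionaeafr start
```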
105 |
106 |
107 |
108 | # Finishing up
109 |
110 | * Create a cronjob for gnuplotsql
111 | * Set dionaeaFR to start at boot
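
For the gnuplotsql cronjob, an example root crontab entry (the script location is an assumption based on where you placed the dionaea source; the other paths follow the gnuplotsql command shown earlier):

```
0 * * * * /opt/dionaea/modules_python_util/gnuplotsql.py -d /var/lib/dionaea/logsql.sqlite -p smbd -p epmapper -p mssqld -p httpd -p ftpd -D /var/www/html/dionaea-gnuplot/ >/dev/null 2>&1
```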
112 |
113 |
114 |
--------------------------------------------------------------------------------
/DOC/glastopf.INSTALL.md:
--------------------------------------------------------------------------------
1 | # Honeypot
2 | Fairly simple low-interaction honeypot setups
3 | Koen Van Impe
4 |
5 | # Glastopf
6 |
7 | Glastopf is a Python web application honeypot
8 |
9 | ------------------------------------------------------------------------------------------
10 |
11 | # Install
12 |
13 | Install instructions are in the git repository.
14 | https://github.com/glastopf/glastopf/blob/master/docs/source/installation/installation_ubuntu.rst
15 |
16 | ```
17 | sudo apt-get install python2.7 python-openssl python-gevent libevent-dev python2.7-dev build-essential make
18 | sudo apt-get install python-chardet python-requests python-sqlalchemy python-lxml
19 | sudo apt-get install python-beautifulsoup mongodb python-pip python-dev python-setuptools
20 | sudo apt-get install g++ git php5 php5-dev liblapack-dev gfortran libmysqlclient-dev
21 | sudo apt-get install libxml2-dev libxslt-dev
22 | sudo pip install --upgrade distribute
23 | ```
24 |
25 | ## Install PHP sandbox
26 |
27 | ```
28 | cd /opt
29 | sudo git clone git://github.com/glastopf/BFR.git
30 | cd BFR
31 | sudo phpize
32 | sudo ./configure --enable-bfr
33 | sudo make && sudo make install
34 | ```
35 |
36 | Open the file php.ini (**/etc/php5/apache2/php.ini**) and add
37 | ```
38 | zend_extension = /usr/lib/php5/20090626+lfs/bfr.so
39 | ```
40 |
41 | ## Install glastopf
42 |
43 | ```
44 | cd /opt
45 | sudo git clone https://github.com/glastopf/glastopf.git
46 | cd glastopf
47 | sudo python setup.py install
48 | ```
49 |
50 | ## Error while installing glastopf
51 |
52 | If you are doing the install on **Ubuntu 14** and you get an error similar to
53 |
54 | ```
55 | NameError: name 'sys_platform' is not defined
56 |
57 | File "/opt/glastopf/distribute_setup.py", line 123, in _build_egg
58 | raise IOError('Could not build the egg.')
59 | IOError: Could not build the egg.
60 | ```
61 |
62 | (see https://github.com/glastopf/glastopf/issues/200#issuecomment-59065414 for the full error message) then you have to remove distribute and reinstall it manually.
63 |
64 | ```
65 | rm -rf /usr/local/lib/python2.7/dist-packages/distribute-0.7.3-py2.7.egg-info/
66 | rm -rf /usr/local/lib/python2.7/dist-packages/setuptools*
67 | cd /opt
68 | wget https://pypi.python.org/packages/source/d/distribute/distribute-0.6.35.tar.gz
69 | tar -xzvf distribute-0.6.35.tar.gz
70 | cd distribute-0.6.35
71 | sudo python setup.py install
72 | ```
73 |
74 | # Basic configuration
75 |
76 | ```
77 | cd /opt
78 | sudo mkdir myhoneypot
79 | cd myhoneypot
80 | sudo glastopf-runner
81 | ```
82 |
83 | This will create a config file **glastopf.cfg**.
84 |
85 | ## HP-Feeds
86 |
87 | By default glastopf has hpfeeds enabled. You can disable it in the [hpfeed] section of glastopf.cfg
88 |
89 | ## Socket error
90 |
91 | If glastopf fails to start
92 |
93 | ```
94 | socket.error: [Errno 98] Address already in use: ('0.0.0.0', 80)
95 | ```
96 |
97 | then most likely another service (such as Apache) is already listening on port 80. Stop Apache or move it to a different port.
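
To check which process is holding port 80 before starting glastopf (output format varies by distribution):

```
sudo netstat -tlnp | grep ':80 '
sudo service apache2 stop
```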
98 |
99 | ## Database configuration
100 |
101 | Out of the box glastopf will log to a sqlite database. If you prefer mysql then first create a database, set proper permissions and change the setting in the config file.
102 |
103 | ```
104 | mysql> create database glastopf;
105 | mysql> create user 'glastopf'@'localhost' identified by 'glastopf';
106 | mysql> grant all privileges on glastopf.* to 'glastopf'@'localhost';
107 | mysql> flush privileges;
108 | ```
109 |
110 | Do not worry that the database is empty, without tables; Glastopf will create the necessary tables when it starts.
111 | In glastopf.cfg change this:
112 |
113 | ```
114 | [main-database]
115 | enabled = True
116 | connection_string = mysql://glastopf:glastopf@localhost/glastopf
117 | ```
--------------------------------------------------------------------------------
/DOC/kippo.INSTALL.md:
--------------------------------------------------------------------------------
1 | # Honeypot
2 | Fairly simple low-interaction honeypot setups
3 | Koen Van Impe
4 |
5 | # Kippo
6 |
7 | Kippo is a medium-interaction SSH honeypot designed to log brute force attacks.
8 |
9 | ------------------------------------------------------------------------------------------
10 |
11 | # Install kippo
12 |
13 | Kippo uses a couple of Python libraries.
14 |
15 | ```
16 | sudo apt-get install python-openssl python-pyasn1 python-twisted python-mysqldb
17 | ```
18 |
19 | You can download the latest source from Github.
20 |
21 | ```
22 | git clone https://github.com/desaster/kippo.git
23 | ```
24 |
25 | ## mysql
26 |
27 | Kippo can store the connection attempts in a mysql database.
28 |
29 | Create a mysql database and user for kippo. Generate the tables from **doc/sql/mysql.sql**.
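
A sketch of the database setup; the database name, username and password 'kippo' are examples, and the schema path is relative to the kippo checkout:

```
mysql> create database kippo;
mysql> create user 'kippo'@'localhost' identified by 'kippo';
mysql> grant all privileges on kippo.* to 'kippo'@'localhost';
mysql> flush privileges;
```

Then load the schema:

```
mysql -u kippo -p kippo < doc/sql/mysql.sql
```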
30 |
31 | # Configuration
32 |
33 | The kippo configuration is stored in kippo.cfg. You can copy the config file from **kippo/kippo.cfg**
34 |
35 | ## Mysql
36 |
37 | Enable the mysql configuration by changing the section [database_mysql]. Set the database, hostname, username and password. I don't use mysql in this setup.
38 |
39 | ## Kippo hostname
40 |
41 | The default hostname returned by kippo is svr03. Make sure you change the setting **hostname**.
42 |
43 | ## SSH Banner
44 | You can define the SSH banner returned by kippo. It's advisable to change this to make it more difficult for intruders to guess that they are in a honeypot. Do this with the **ssh_version_string** setting.
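
For example (the banner value below is only an illustration; pick a string that matches a real SSH server you'd plausibly run):

```
ssh_version_string = SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
```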
45 |
46 | ## Listen on tcp/22
47 |
48 | Kippo listens on tcp/2222 and should not be run as root. Non-root users cannot bind to tcp/22, so in order to get incoming SSH connections into kippo you have to add an iptables rule. 192.168.218.141 below is the IP of the interface to which kippo is bound.
49 | ```
50 | iptables -t nat -A PREROUTING -p tcp --dport 22 -d 192.168.218.141 -j REDIRECT --to-port 2222
51 | ```
52 |
53 | # Start kippo
54 |
55 | The kippo startup script is **start.sh**. Check the logs in log/kippo.log.
56 |
57 | ## Logging
58 |
59 | The startup script sets logging to log/kippo.log. It's better to change this to /var/log/kippo/kippo.log; make sure that your kippo user has write access.
60 |
61 | ## Log rotate
62 |
63 | Rotate the kippo logs (with **kippo.logrotate**); make sure you substitute the username 'kippo' in the logrotate script with your own username.
64 |
65 | # Stop kippo
66 |
67 | ```
68 | kill `cat kippo.pid`
69 | ```
70 |
71 | or use the stop.sh script.
72 |
73 | # Kippo graphs
74 |
75 | Graphs help make sense of the data stored in the database. You can use kippo-graph for this.
76 |
77 | ```
78 | sudo apt-get install php5-gd php5-curl
79 | ```
80 |
81 | ```
82 | git clone https://github.com/cudeso/kippo-graph.git
83 | ```
84 |
85 |
86 | ## Configuration
87 |
88 | The configuration of kippo-graph is in **config.php**. Change the mysql settings to make sure kippo-graph can read its data.
89 |
90 | Do not forget to make sure that the webserver can write to the directory **generated-graphs**.
91 |
92 |
93 | # Finishing up
94 |
95 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | cudeso-honeypot
2 | ===============
3 |
4 | ## Low interaction honeypot
5 |
6 | Setup documentation for a number of low interaction honeypots
7 |
8 | * dionaea
9 | * kippo
10 | * glastopf
11 | * conpot
12 |
13 | ## SSHD Configuration
14 |
15 | The kippo honeypot takes connections on tcp/22 (and tcp/2222). Change the listening port of the SSH daemon.
16 | In
17 | ```
18 | /etc/ssh/sshd_config
19 | ```
20 | Change the Port setting to
21 | ```
22 | Port 8822
23 | ```
24 | Restart the SSH daemon.
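
On Ubuntu this can be done with (assuming the standard service name 'ssh'):

```
sudo service ssh restart
```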
--------------------------------------------------------------------------------
/TODO.md:
--------------------------------------------------------------------------------
1 | * Compare kippo results with dionaea
2 | * Integrate other honeydrive tools http://bruteforce.gr/honeydrive
3 |
4 |
--------------------------------------------------------------------------------
/conpot/.gitignore:
--------------------------------------------------------------------------------
1 | conpot/*
--------------------------------------------------------------------------------
/conpot/conpot.cfg:
--------------------------------------------------------------------------------
1 | [common]
2 | sensorid = default
3 |
4 | [session]
5 | timeout = 30
6 |
7 | [sqlite]
8 | enabled = False
9 |
10 | [mysql]
11 | enabled = True
12 | device = /tmp/mysql.sock
13 | host = localhost
14 | port = 3306
15 | db = conpot
16 | username = conpot
17 | passphrase = conpot
18 | socket = tcp ; tcp (sends to host:port), dev (sends to mysql device/socket file)
19 |
20 | [syslog]
21 | enabled = False
22 | device = /dev/log
23 | host = localhost
24 | port = 514
25 | facility = local0
26 | socket = dev ; udp (sends to host:port), dev (sends to device)
27 |
28 | [hpfriends]
29 | enabled = False
30 | host = hpfriends.honeycloud.net
31 | port = 20000
32 | ident = 3Ykf9Znv
33 | secret = 4nFRhpm44QkG9cvD
34 | channels = ["conpot.events", ]
35 |
36 | [taxii]
37 | enabled = False
38 | host = taxiitest.mitre.org
39 | port = 80
40 | inbox_path = /services/inbox/default/
41 | use_https = False
42 | contact_name = conpot
43 | contact_domain = http://conpot.org/stix-1
44 |
45 | [fetch_public_ip]
46 | enabled = True
47 | urls = ["http://www.telize.com/ip", "http://queryip.net/ip/", "http://ifconfig.me/ip"]
48 |
49 |
--------------------------------------------------------------------------------
/dionaea/DionaeaFR/.gitignore:
--------------------------------------------------------------------------------
1 | DionaeaFR/*
--------------------------------------------------------------------------------
/dionaea/DionaeaFR/dionaeafr_init.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # DionaeaFR Startup script
4 | # Koen Van Impe
5 |
6 | . /lib/lsb/init-functions
7 |
8 | #BASEDIR="/mnt/hgfs/www/camelot.cudeso.be/cudeso-honeypot/dionaea/DionaeaFR/DionaeaFR"
9 | BASEDIR="/opt/DionaeaFR/DionaeaFR"
10 | PIDFILE="/var/run/dionaeafr/dionaeafr.pid"
11 | LOGFILE="/var/log/dionaeafr/dionaeafr.log"
12 | NAME="dionaeafr"
13 | DAEMON="dionaeafr"
14 | PORT=8000
15 |
16 | case $1 in
17 | start)
18 | cd $BASEDIR
19 | python manage.py runserver 0.0.0.0:$PORT > $LOGFILE 2>> $LOGFILE &
20 | log_daemon_msg "$DAEMON started ..."
21 | log_end_msg 0
22 | ;;
23 | stop)
24 | if [ -e $PIDFILE ]; then
25 | kill `cat $PIDFILE`
26 | rm $PIDFILE
27 | log_daemon_msg "$DAEMON stopped ..."
28 | log_end_msg 0
29 | else
30 | log_daemon_msg "$DAEMON is *NOT* running"
31 | log_end_msg 1
32 | fi
33 | ;;
34 | collectstatic)
35 | cd $BASEDIR
36 | python manage.py collectstatic
37 | ;;
38 | logs)
39 | cat $LOGFILE
40 | ;;
41 | status)
42 | if [ -e $PIDFILE ]; then
43 | status_of_proc -p $PIDFILE $DAEMON "$NAME process" && exit 0 || exit $?
44 | else
45 | log_daemon_msg "$DAEMON is not running ..."
46 | log_end_msg 0
47 | fi
48 | ;;
49 | *)
50 | # For invalid arguments, print the usage message.
51 | echo "Usage: $0 {start|stop|collectstatic|logs|status}"
52 | exit 2
53 | ;;
54 | esac
55 |
56 |
--------------------------------------------------------------------------------
/dionaea/DionaeaFR/settings.py:
--------------------------------------------------------------------------------
1 | # Django settings for DionaeaFR project.
2 | import os
3 |
4 | CURRENT_PATH = os.path.abspath(os.path.dirname(__file__))
5 |
6 | DEBUG = True
7 | TEMPLATE_DEBUG = DEBUG
8 |
9 | ADMINS = (
10 | # ('Your Name', 'your_email@example.com'),
11 | )
12 |
13 | MANAGERS = ADMINS
14 |
15 | DATABASES = {
16 | 'default': {
17 | 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
18 | 'NAME': '/var/lib/dionaea/logsql.sqlite',
19 | 'USER': '', # Not used with sqlite3.
20 | 'PASSWORD': '', # Not used with sqlite3.
21 | 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
22 | 'PORT': '', # Set to empty string for default. Not used with sqlite3.
23 |         'OPTIONS': {
24 |             'timeout': 60,
25 |         },
26 |     },
27 | }
28 |
29 | # How many days (going backwards) worth of results to show
30 | RESULTS_DAYS = 14
31 |
32 | # Local time zone for this installation. Choices can be found here:
33 | # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
34 | # although not all choices may be available on all operating systems.
35 | # On Unix systems, a value of None will cause Django to use the same
36 | # timezone as the operating system.
37 | # If running in a Windows environment this must be set to the same as your
38 | # system time zone.
39 | DATETIME_FORMAT = 'd-m-Y H:i:s'
40 | TIME_ZONE = 'Etc/UTC'
41 |
42 | # Language code for this installation. All choices can be found here:
43 | # http://www.i18nguy.com/unicode/language-identifiers.html
44 | LANGUAGE_CODE = 'en-us'
45 |
46 | SITE_ID = 1
47 |
48 | # If you set this to False, Django will make some optimizations so as not
49 | # to load the internationalization machinery.
50 | USE_I18N = True
51 |
52 | # If you set this to False, Django will not format dates, numbers and
53 | # calendars according to the current locale.
54 | USE_L10N = True
55 |
56 | # If you set this to False, Django will not use timezone-aware datetimes.
57 | USE_TZ = True
58 |
59 | # Absolute filesystem path to the directory that will hold user-uploaded files.
60 | # Example: "/home/media/media.lawrence.com/media/"
61 | MEDIA_ROOT = os.path.join(CURRENT_PATH, 'media')
62 |
63 | # URL that handles the media served from MEDIA_ROOT. Make sure to use a
64 | # trailing slash.
65 | # Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
66 | MEDIA_URL = '/media/'
67 |
68 | # Absolute path to the directory static files should be collected to.
69 | # Don't put anything in this directory yourself; store your static files
70 | # in apps' "static/" subdirectories and in STATICFILES_DIRS.
71 | # Example: "/home/media/media.lawrence.com/static/"
72 | STATIC_ROOT = os.path.abspath(os.path.join(os.path.join(CURRENT_PATH, os.pardir), 'static'))
73 |
74 | # URL prefix for static files.
75 | # Example: "http://media.lawrence.com/static/"
76 | STATIC_URL = '/static/'
77 |
78 | # Additional locations of static files
79 | STATICFILES_DIRS = (
80 | os.path.join(CURRENT_PATH, 'static'),
81 | )
82 |
83 | # List of finder classes that know how to find static files in
84 | # various locations.
85 | STATICFILES_FINDERS = (
86 | 'django.contrib.staticfiles.finders.FileSystemFinder',
87 | 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
88 | 'compressor.finders.CompressorFinder',
89 | )
90 |
91 | # Make this unique, and don't share it with anybody.
92 | SECRET_KEY = 'o*w2sbrhS_LKSQwx_dmlslsdmcx,c,k!(el0^qx5dljbv6eyjwx)(76^wppdjrrxnj%wcny3h2r'
93 |
94 | # List of callables that know how to import templates from various sources.
95 | TEMPLATE_LOADERS = (
96 | 'django.template.loaders.filesystem.Loader',
97 | 'django.template.loaders.app_directories.Loader',
98 | # 'django.template.loaders.eggs.Loader',
99 | )
100 |
101 | TEMPLATE_CONTEXT_PROCESSORS = (
102 | 'django.core.context_processors.request',
103 | 'django.core.context_processors.static',
104 | 'django.contrib.messages.context_processors.messages',
105 | 'django.contrib.auth.context_processors.auth',
106 | 'Web.context_processors.expose_extra_settings_keys',
107 | )
108 |
109 | MIDDLEWARE_CLASSES = (
110 | 'django.middleware.gzip.GZipMiddleware',
111 | 'htmlmin.middleware.HtmlMinifyMiddleware',
112 | 'django.middleware.common.CommonMiddleware',
113 | 'django.contrib.sessions.middleware.SessionMiddleware',
114 | 'django.middleware.csrf.CsrfViewMiddleware',
115 | 'django.contrib.auth.middleware.AuthenticationMiddleware',
116 | 'django.contrib.messages.middleware.MessageMiddleware',
117 | 'django.middleware.clickjacking.XFrameOptionsMiddleware',
118 | 'pagination.middleware.PaginationMiddleware',
119 | )
120 |
121 | ANTIVIRUS_VIRUSTOTAL = 'Sophos'
122 |
123 | RESERVED_IP = (
124 | '0.0.0.0/8',
125 | '10.0.0.0/8',
126 | '100.64.0.0/10',
127 | '127.0.0.0/8',
128 | '169.254.0.0/16',
129 | '172.16.0.0/12',
130 | '192.0.0.0/24',
131 | '192.0.2.0/24',
132 | '192.88.99.0/24',
133 | '192.168.0.0/16',
134 |     '198.18.0.0/15',
135 | '198.51.100.0/24',
136 | '203.0.113.0/24',
137 | '224.0.0.0/4',
138 | '240.0.0.0/4',
139 | '255.255.255.255/32'
140 | )
141 |
142 | HTML_MINIFY = False
143 |
144 | ROOT_URLCONF = 'DionaeaFR.urls'
145 |
146 | # Python dotted path to the WSGI application used by Django's runserver.
147 | WSGI_APPLICATION = 'DionaeaFR.wsgi.application'
148 |
149 | TEMPLATE_DIRS = (
150 | os.path.join(CURRENT_PATH, 'Templates'),
151 | )
152 |
153 | COMPRESS_PRECOMPILERS = (
154 | ('text/less', 'lessc {infile} {outfile}'),
155 | )
156 |
157 | INTERNAL_IPS = ('127.0.0.1',)
158 |
159 | INSTALLED_APPS = (
160 | 'django.contrib.auth',
161 | 'django.contrib.contenttypes',
162 | 'django.contrib.sessions',
163 | 'django.contrib.sites',
164 | 'django.contrib.messages',
165 | 'django.contrib.staticfiles',
166 | 'compressor',
167 | 'django_tables2',
168 | 'django_tables2_simplefilter',
169 | 'pagination',
170 | 'django.contrib.humanize',
171 | 'Web',
172 | )
173 |
174 | # A sample logging configuration. The only tangible logging
175 | # performed by this configuration is to send an email to
176 | # the site admins on every HTTP 500 error when DEBUG=False.
177 | # See http://docs.djangoproject.com/en/dev/topics/logging for
178 | # more details on how to customize your logging configuration.
179 | LOGGING = {
180 | 'version': 1,
181 | 'disable_existing_loggers': False,
182 | 'filters': {
183 | 'require_debug_false': {
184 | '()': 'django.utils.log.RequireDebugFalse'
185 | }
186 | },
187 | 'handlers': {
188 | 'mail_admins': {
189 | 'level': 'ERROR',
190 | 'filters': ['require_debug_false'],
191 | 'class': 'django.utils.log.AdminEmailHandler'
192 | }
193 | },
194 | 'loggers': {
195 | 'django.request': {
196 | 'handlers': ['mail_admins'],
197 | 'level': 'ERROR',
198 | 'propagate': True,
199 | },
200 | }
201 | }
202 |
--------------------------------------------------------------------------------
/dionaea/README.md:
--------------------------------------------------------------------------------
1 | By default dionaea logs to a sqlite file /var/lib/dionaea/logsql.sqlite.
2 |
3 | The sqlite logstash module needs an ID-column to keep track of the data.
4 |
5 | The patch logsql.py adds an ID field and keeps it updated with every dionaea connection.
6 |
--------------------------------------------------------------------------------
/dionaea/dionaea.conf:
--------------------------------------------------------------------------------
1 | logging = {
2 | default = {
3 | // file not starting with / is taken relative to LOCALESTATEDIR (e.g. /opt/dionaea/var)
4 | file = "/var/log/dionaea/dionaea.log"
5 | levels = "all"
6 | domains = "*"
7 | }
8 |
9 | errors = {
10 | // file not starting with / is taken relative to LOCALESTATEDIR (e.g. /opt/dionaea/var)
11 | file = "/var/log/dionaea/dionaea-errors.log"
12 | levels = "warning,error"
13 | domains = "*"
14 | }
15 | }
16 |
17 | processors =
18 | {
19 | filter-emu =
20 | {
21 | config = {
22 | allow = [{ protocol = ["smbd","epmapper","nfqmirrord","mssqld"] }]
23 | }
24 | next = {
25 | emu =
26 | {
27 | config = {
28 | emulation = {
29 | limits = {
30 | files = "3"
31 | filesize = "524288" // 512 * 1024
32 | sockets = "3"
33 | sustain = "120"
34 | idle = "30"
35 | listen = "30"
36 | cpu = "120"
37 | steps = "1073741824" // 1024 * 1024 * 1024
38 | }
39 |
40 | /**
41 | * api default arguments for development
42 | * disabled by default
43 | * not working yet
44 | */
45 | api = {
46 | connect = {
47 | host = "127.0.0.1"
48 | port = "4444"
49 | }
50 | }
51 | }
52 | }
53 | }
54 | }
55 | }
56 |
57 | filter-streamdumper =
58 | {
59 | config = {
60 | allow = [
61 | { type = ["accept"] }
62 | { type = ["connect"] protocol=["ftpctrl"] }
63 | ]
64 | deny = [
65 | { protocol = ["ftpdata", "ftpdatacon","xmppclient"] }
66 | ]
67 | }
68 | next = {
69 | streamdumper = {
70 | config = {
71 | path = "/var/lib/dionaea/bistreams/%Y-%m-%d/"
72 | }
73 | }
74 | }
75 | }
76 |
77 | /* filter-sessions =
78 | {
79 | config = {
80 | allow = [ { protocol = ["ftpctrl","remoteshell"] } ]
81 | }
82 | next = {
83 | python = {
84 | incident = "true"
85 | }
86 | }
87 | }
88 | */
89 | }
90 |
91 | downloads =
92 | {
93 | dir = "/var/lib/dionaea/binaries"
94 | tmp-suffix = ".tmp"
95 | }
96 |
97 | bistreams =
98 | {
99 | python =
100 | {
101 | dir = "/var/lib/dionaea/bistreams"
102 | }
103 | }
104 |
105 | submit =
106 | {
107 | defaults = {
108 | urls = ["http://anubis.iseclab.org/nepenthes_action.php",
109 | "http://onlineanalyzer.norman.com/nepenthes_upload.php",
110 | "http://luigi.informatik.uni-mannheim.de/submit.php?action=verify"]
111 | email = "nepenthesdev@gmail.com"
112 | file_fieldname = "upfile"
113 | MAX_FILE_SIZE = "1500000"
114 | submit = "Submit for analysis"
115 | }
116 |
117 | /**
118 |  * joebox is special due to the TOS, which you can look up here:
119 | * http://www.joebox.org/resources/service%20terms.txt
120 | * therefore untested and disabled by default
121 | */
122 | /*
123 | joebox = {
124 | urls = ["http://analysis.joebox.org/submit"]
125 | email = "nepenthesdev@gmail.com"
126 | file_fieldname = "upfile"
127 | MAX_FILE_SIZE = "1500000"
128 | submit = "Submit for analysis"
129 | service = "agree"
130 | xp = "1"
131 | vista = "1"
132 | w7 = "1"
133 | pcap = "1"
134 | }
135 | */
136 |
137 | /*
138 | yoursection =
139 | {
140 | urls = ["http://127.0.0.1/submit"]
141 | email = "yourmail"
142 | user = "yourusername"
143 | pass = "yourpassword"
144 | }
145 | */
146 | }
147 |
148 | listen =
149 | {
150 | /* basically we have 3 modes
151 | - getifaddrs - auto
152 | will get a list of all ips and bind a service to each ip
153 | - manual - your decision
154 | addrs has to be provided, and should look like this
155 | addrs = { eth0 = ["1.1.1.1", "1.1.1.2"], eth1 = ["2.1.1.1", "2.1.1.2"] }
156 | you get the idea ...
157 | for most cases with more than one address
158 | addrs = { eth0 = ["0.0.0.0"] }
159 | will do the trick
160 | if you want to throw in ipv6 support as well ...
161 | addrs = { eth0 = ["::"] }
162 | note: ipv6 does not work with surfids yet,
163 | as ipv6 addresses are mapped to ipv4 and surfids fails to retrieve the sensor id for ::ffff:1.2.3.4
164 | - nl, will require a list of interfaces
165 | fnmatch is possible like
166 | interfaces = ["ppp*","tun*"]
167 | and loading the nl module AFTER the python module in the modules section below
168 | nl will use the kernel netlink interface to figure out which addresses exist
169 | at runtime, and start/stop services dynamically per address per interface
170 | */
171 |
172 | mode = "getifaddrs"
173 | addrs = { eth0 = ["::"] }
174 | }
175 |
176 | modules = {
177 |
178 | curl =
179 | {
180 | protocol = "http"
181 | }
182 |
183 | emu = {
184 | detect = "1"
185 | profile = "1"
186 | }
187 |
188 | // pcap =
189 | // {
190 | /**
191 | * libpcap 1.0.0
192 | *
193 | * "Arithmetic expression against transport layer headers, like
194 | * tcp[0], does not work against IPv6 packets. It only looks
195 | * at IPv4 packets."
196 | *
197 | * As a consequence, the default filter can not match
198 | * ipv6 tcp rst packets.
199 | *
200 | * If you want to go for rejected ipv6, remove the tcp matching part of the filter
201 |  * The code is capable of checking the tcp-rst flag and seq number itself, but
202 | * matching every packet in userspace is expensive.
203 | * Therefore you'll have to hack the code if you want to track ipv6 rejected connections
204 | *
205 | * Format is IFACE = { addrs = MODE }
206 | * currently mode is ignored
207 | */
208 |
209 | // any = {
210 | // addrs = "auto"
211 | // }
212 | // }
213 |
214 | nfq =
215 | {
216 | /**
217 | * queue has to be the nfqueue num
218 | * refer to http://dionaea.carnivore.it/#nfq_python
219 | * if you do not specify a queue-num with iptables, 0 is the default
220 | */
221 | queue = "0"
222 | }
223 |
224 | python = {
225 | // default expands to PREFIX/lib/dionaea/python/
226 | // ordering is guaranteed
227 | // useful for development
228 | // simply add your devel directory to the list, avoids a make install for new python code
229 | sys_path = ["default"]
230 |
231 | // python imports
232 | imports = [ "log",
233 | "services",
234 | "ihandlers"]
235 | ftp = {
236 | root = "/var/lib/dionaea/wwwroot"
237 |
238 | /* ftp client section
239 | */
240 |
241 | /* ports for active ftp
242 | * string indicating a range
243 | */
244 | active-ports = "63001-64000"
245 |
246 | /* host for active ftp via NAT
247 | * 0.0.0.0 - the initiating connection ip is used for active ftp
248 | * not 0.0.0.0 - gets resolved as hostname and used
249 | */
250 | active-host = "0.0.0.0"
251 | }
252 | tftp = {
253 | root = "/var/lib/dionaea/wwwroot"
254 | }
255 | http = {
256 | root = "/var/lib/dionaea/wwwroot"
257 | max-request-size = "32768" // maximum size in kbytes of the request (32MB)
258 | }
259 | sip = {
260 | udp = {
261 | port = "5060"
262 | }
263 | tcp = {
264 | port = "5060"
265 | }
266 | tls = {
267 | port = "5061"
268 | }
269 | users = "/var/lib/dionaea/sipaccounts.sqlite"
270 | rtp = {
271 | enable = "yes"
272 | /* how to dump the rtp stream
273 | bistream = dump as bistream
274 | */
275 | mode = ["bistream", "pcap"]
276 |
277 | pcap = {
278 | path = "/var/lib/dionaea/rtp/{personality}/%Y-%m-%d/"
279 | filename = "%H:%M:%S_{remote_host}_{remote_port}_in.pcap"
280 | }
281 | }
282 | personalities = {
283 | default = {
284 | domain = "localhost"
285 | name = "softphone"
286 | personality = "generic"
287 | }
288 | /*
289 | next-server = {
290 | domain = "my-domain"
291 | name = "my server"
292 | personality = "generic"
293 | serve = ["10.0.0.1"]
294 | default_sdp = "default"
295 | handle = ["REGISTER", "INVITE", "BYE", "CANCEL", "ACK"]
296 | }
297 |
298 | */
299 | }
300 | actions = {
301 | bank-redirect = {
302 | do = "redirect"
303 | params = {
304 | }
305 | }
306 | play-hello = {
307 | do = "play"
308 | params = {
309 | file = "/var/lib/dionaea/.../file.ext"
310 | }
311 | }
312 | }
313 | }
314 | surfids = {
315 | sslmode = "require"
316 | host = "surfids.example.com" // change this
317 | port = "5432" // maybe this
318 | username = "surfids" // this
319 | password = "secret" // and this
320 | dbname = "idsserver"
321 | }
322 | virustotal = {
323 | apikey = "........." // grab it from your virustotal account at My account -> Inbox -> Public API
324 | file = "/var/lib/dionaea/vtcache.sqlite"
325 | }
326 | mwserv = { // ask your mwserv backend provider for needed values
327 | url = "" // the url to send the submission requests to
328 | maintainer = "" // username of the maintainer of this sensor
329 | guid = "" // guid of this sensor, as generated serverside; typically 8 chars
330 | secret = "" // shared secret used for authentication aka password; typically 48 chars
331 | }
332 | mysql = {
333 | databases = {
334 | information_schema = {
335 | path = ":memory:"
336 | }
337 |
338 | // example how to extend this
339 | // just provide a databasename and path to the database
340 | // the database can be altered by attackers, so ... better use a copy
341 | // psn = {
342 | // path = "/path/to/cc_info.sqlite"
343 | // }
344 |
345 | }
346 | }
347 | submit_http = { // ask your submit_http backend provider for needed values
348 | url = "" // the url to send the submission requests to
349 | email = "" // optional
350 | user = "" // username (optional)
351 | pass = "" // password (optional)
352 | }
353 | logsql = {
354 | mode = "sqlite" // so far there is only sqlite
355 | sqlite = {
356 | file = "/var/lib/dionaea/logsql.sqlite"
357 | }
358 | }
359 | logxmpp = {
360 | /**
361 | * this section defines a single xmpp logging target
362 | * you can have multiple
363 | */
364 | carnivore = {
365 | server = "sensors.carnivore.it"
366 |
367 | /**
368 |  * as dionaea does not support starttls (xmpp on port 5222),
369 |  * we rely on 'legacy ssl' for the xmpp connection (port 5223)
370 | */
371 | port = "5223"
372 | muc = "dionaea.sensors.carnivore.it"
373 |
374 | /**
375 | * if the server exists, this is a valid account
376 | */
377 | username = "anonymous@sensors.carnivore.it"
378 | password = "anonymous"
379 |
380 | /**
381 | * setting a resource is possible, but you should not do it
382 | * the default resource is a random string of 8 chars
383 | */
384 | // resource = "theresource"
385 | config =
386 | {
387 | /**
388 | * this defines a muc channel
389 | */
390 | anon-events =
391 | {
392 | /**
393 | * incidents matching these events will get relayed to the channel
394 | */
395 | events = ["^dionaea\x5c.connection\x5c..*",
396 | "^dionaea\x5c.modules\x5c.python\x5c.smb.dcerpc\x5c.*",
397 | "^dionaea\x5c.download\x5c.offer$",
398 | "^dionaea\x5c.download\x5c.complete\x5c.hash$",
399 | "^dionaea\x5c.module\x5c.emu\x5c.profile$",
400 | "^dionaea\x5c.modules\x5c.python\x5c.mysql\x5c.*",
401 | "^dionaea\x5c.modules\x5c.python\x5c.sip\x5c.*",
402 | "^dionaea\x5c.modules\x5c.python\x5c.p0f\x5c.*",
403 | "^dionaea\x5c.modules\x5c.python\x5c.virustotal\x5c.report",
404 | ]
405 |
406 | /**
407 | * anonymous removes the local host information from all connection messages
408 | * so you can report without getting identified
409 | */
410 | anonymous = "yes"
411 | }
412 |
413 | anon-files =
414 | {
415 | events = ["^dionaea\x5c.download\x5c.complete\x5c.unique"]
416 | }
417 | }
418 | }
419 | }
420 | nfq = {
421 | /**
422 | * nfq can intercept incoming tcp connections during the tcp handshake
423 |  * giving your honeypot the ability to provide services on
424 |  * ports which are not served by default.
425 | * refer to the documentation (http://dionaea.carnivore.it/#nfq_python)
426 | * BEFORE using this
427 | */
428 |
429 | nfaction = "0" // DROP
430 |
431 | throttle = {
432 | window = "30"
433 | limits = {
434 | total = "30"
435 | slot = "30"
436 | }
437 | }
438 |
439 | timeouts = {
440 | server = {
441 | listen = "5"
442 | }
443 | client = {
444 | idle = "10"
445 | sustain = "240"
446 | }
447 | }
448 | }
449 | p0f = {
450 | /**
451 | * start p0f with
452 | * sudo p0f -i any -u root -Q /tmp/p0f.sock -q -l
453 | */
454 | path = "un:///var/run/p0f.sock"
455 | }
456 |
457 | fail2ban = {
458 | downloads = "/var/lib/dionaea/downloads.f2b"
459 | offers = "/var/lib/dionaea/offers.f2b"
460 | }
461 |
462 | ihandlers = {
463 | handlers = ["ftpdownload", "tftpdownload", "emuprofile", "cmdshell", "store", "uniquedownload",
464 | "logsql",
465 | // "virustotal",
466 | // "mwserv",
467 | // "submit_http",
468 | // "logxmpp",
469 | // "nfq",
470 | "p0f",
471 | // "surfids",
472 | // "fail2ban"
473 | ]
474 | }
475 |
476 | services = {
477 | serve = [ "https", "tftp", "ftp", "mirror", "smb", "epmap", "sip","mssql", "mysql"]
478 | //serve = ["http", "https", "tftp", "ftp", "mirror", "smb", "epmap", "sip","mssql", "mysql"]
479 | }
480 |
481 | }
482 |
483 | nl =
484 | {
485 | lookup_ethernet_addr = "no" // set to yes in case you are interested in the mac address of the remote (only works for lan)
486 |
487 | }
488 |
489 |
490 | /* nc is a test module */
491 | /* nc =
492 | {
493 | services = [
494 | {
495 | proto = "redir"
496 | type = "tcp"
497 | host = "::"
498 | port = "4711"
499 | },
500 | {
501 | proto = "redir"
502 | type = "tcp"
503 | host = "::"
504 | port = "12344"
505 | },
506 | {
507 | proto = "sink"
508 | type = "tcp"
509 | host = "::"
510 | port = "12345"
511 | throttle = {
512 | in = "8192"
513 | }
514 | timeout = {
515 | listen = "15"
516 | connect = "15"
517 | }
518 | },
519 | {
520 | proto = "source"
521 | type = "tcp"
522 | host = "::"
523 | port = "12346"
524 | throttle = {
525 | out = "8192"
526 | }
527 | timeout = {
528 | listen = "15"
529 | connect = "15"
530 | }
531 | },
532 | {
533 | proto = "redir"
534 | type = "tcp"
535 | host = "::"
536 | port = "12347"
537 | throttle = {
538 | in = "8192"
539 | out = "8192"
540 | }
541 | timeout = {
542 | listen = "15"
543 | connect = "15"
544 | }
545 | },
546 | {
547 | proto = "redir"
548 | type = "tls"
549 | host = "::"
550 | port = "12444"
551 | timeout = {
552 | listen = "15"
553 | connect = "15"
554 | }
555 | },
556 |
557 | {
558 | proto = "sink"
559 | type = "tls"
560 | host = "::"
561 | port = "12445"
562 | throttle = {
563 | in = "8192"
564 | }
565 | timeout = {
566 | listen = "15"
567 | connect = "5"
568 | }
569 | },
570 | {
571 | proto = "source"
572 | type = "tls"
573 | host = "::"
574 | port = "12446"
575 | throttle = {
576 | out = "8192"
577 | }
578 | timeout = {
579 | listen = "15"
580 | connect = "15"
581 | }
582 | },
583 | {
584 | proto = "redir"
585 | type = "tls"
586 | host = "::"
587 | port = "12447"
588 | throttle = {
589 | in = "8192"
590 | out = "8192"
591 | }
592 | timeout = {
593 | listen = "15"
594 | connect = "15"
595 | }
596 | },
597 | {
598 | proto = "source"
599 | type = "udp"
600 | host = "::"
601 | port = "12544"
602 | timeout = {
603 | connect = "15"
604 | }
605 | },
606 | {
607 | proto = "sink"
608 | type = "udp"
609 | host = "::"
610 | port = "12545"
611 | timeout = {
612 | connect = "15"
613 | }
614 | },
615 | {
616 | proto = "redir"
617 | type = "udp"
618 | host = "::"
619 | port = "12546"
620 | timeout = {
621 | connect = "15"
622 | }
623 | }
624 | ]
625 |
626 | clients = [
627 | {
628 | proto = "source"
629 | type = "tcp"
630 | host = "127.0.0.1"
631 | port = "13344"
632 | timeout = {
633 | connecting = "5"
634 | connect = "15"
635 | reconnect = "5"
636 | }
637 | },
638 | {
639 | proto = "redir"
640 | type = "tcp"
641 | host = "ip6-localhost"
642 | port = "13345"
643 | timeout = {
644 | connecting = "5"
645 | connect = "15"
646 | reconnect = "5"
647 | }
648 | },
649 | {
650 | proto = "redir"
651 | type = "tls"
652 | host = "localhost"
653 | port = "13346"
654 | timeout = {
655 | connecting = "5"
656 | connect = "15"
657 | reconnect = "5"
658 | }
659 | },
660 | {
661 | proto = "source"
662 | type = "tls"
663 | host = "ip6-localhost"
664 | port = "12445"
665 | timeout = {
666 | reconnect = "1"
667 | connect = "1"
668 | }
669 | }
670 | ]
671 | }
672 | */
673 | }
674 |
675 |
676 |
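The logsql section above sends events to /var/lib/dionaea/logsql.sqlite. As a quick triage aid, recent connections can be pulled out with a few lines of `sqlite3` (self-contained sketch with made-up sample data; the column names are a reduced subset of the connections table that logsql.py creates):

```python
import sqlite3

# An in-memory copy keeps the example self-contained; against a live
# sensor you would open /var/lib/dionaea/logsql.sqlite instead.
db = sqlite3.connect(":memory:")
cur = db.cursor()

# Minimal subset of the connections table logsql.py creates.
cur.execute("""CREATE TABLE connections (
                   connection INTEGER PRIMARY KEY,
                   connection_protocol TEXT,
                   connection_timestamp INTEGER,
                   remote_host TEXT,
                   local_port INTEGER
               )""")
cur.executemany(
    """INSERT INTO connections
       (connection_protocol, connection_timestamp, remote_host, local_port)
       VALUES (?,?,?,?)""",
    [("smbd",   1400000000, "203.0.113.5",  445),
     ("mssqld", 1400000100, "198.51.100.7", 1433)])

# Most recent connections first -- a typical "what hit me lately" query.
for row in cur.execute("""SELECT connection_protocol, remote_host, local_port
                          FROM connections
                          ORDER BY connection_timestamp DESC"""):
    print(row)
```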
--------------------------------------------------------------------------------
/dionaea/dionaea.logrotate:
--------------------------------------------------------------------------------
1 | /var/log/dionaea/*.log {
2 | notifempty
3 | missingok
4 | rotate 28
5 | daily
6 | delaycompress
7 | compress
8 | create 660 root root
9 | dateext
10 | postrotate
11 | /etc/init.d/dionaea-phibo restart
12 | endscript
13 | }
14 |
15 |
--------------------------------------------------------------------------------
/dionaea/logsql.py:
--------------------------------------------------------------------------------
1 | #********************************************************************************
2 | #* Dionaea
3 | #* - catches bugs -
4 | #*
5 | #*
6 | #*
7 | #* Copyright (C) 2009 Paul Baecher & Markus Koetter
8 | #*
9 | #* This program is free software; you can redistribute it and/or
10 | #* modify it under the terms of the GNU General Public License
11 | #* as published by the Free Software Foundation; either version 2
12 | #* of the License, or (at your option) any later version.
13 | #*
14 | #* This program is distributed in the hope that it will be useful,
15 | #* but WITHOUT ANY WARRANTY; without even the implied warranty of
16 | #* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 | #* GNU General Public License for more details.
18 | #*
19 | #* You should have received a copy of the GNU General Public License
20 | #* along with this program; if not, write to the Free Software
21 | #* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
22 | #*
23 | #*
24 | #* contact nepenthesdev@gmail.com
25 | #*
26 | #*******************************************************************************/
27 |
28 |
29 | from dionaea.core import ihandler, incident, g_dionaea
30 |
31 | import os
32 | import logging
33 | import random
34 | import json
35 | import sqlite3
36 | import time
37 |
38 | logger = logging.getLogger('logsql')
39 | logger.setLevel(logging.DEBUG)
40 |
41 | class logsqlhandler(ihandler):
42 | def __init__(self, path):
43 | logger.debug("%s ready!" % (self.__class__.__name__))
44 | self.path = path
45 |
46 | def start(self):
47 | ihandler.__init__(self, self.path)
48 | # mapping socket -> attackid
49 | self.attacks = {}
50 |
51 | self.pending = {}
52 |
53 | # self.dbh = sqlite3.connect(user = g_dionaea.config()['modules']['python']['logsql']['file'])
54 | file = g_dionaea.config()['modules']['python']['logsql']['sqlite']['file']
55 | self.dbh = sqlite3.connect(file)
56 | self.cursor = self.dbh.cursor()
57 | update = False
58 |
59 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
60 | connections (
61 | connection INTEGER PRIMARY KEY,
62 | id INTEGER,
63 | connection_type TEXT,
64 | connection_transport TEXT,
65 | connection_protocol TEXT,
66 | connection_timestamp INTEGER,
67 | connection_root INTEGER,
68 | connection_parent INTEGER,
69 | local_host TEXT,
70 | local_port INTEGER,
71 | remote_host TEXT,
72 | remote_hostname TEXT,
73 | remote_port INTEGER
74 | )""")
75 |
76 | self.cursor.execute("""CREATE TRIGGER IF NOT EXISTS connections_INSERT_update_connection_root_trg
77 | AFTER INSERT ON connections
78 | FOR EACH ROW
79 | WHEN
80 | new.connection_root IS NULL
81 | BEGIN
82 | UPDATE connections SET connection_root = connection WHERE connection = new.connection AND new.connection_root IS NULL;
83 | END""")
84 |
85 | for idx in ["type","timestamp","root","parent"]:
86 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS connections_%s_idx
87 | ON connections (connection_%s)""" % (idx, idx))
88 |
89 | for idx in ["local_host","local_port","remote_host"]:
90 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS connections_%s_idx
91 | ON connections (%s)""" % (idx, idx))
92 |
93 |
94 | # self.cursor.execute("""CREATE TABLE IF NOT EXISTS
95 | # bistreams (
96 | # bistream INTEGER PRIMARY KEY,
97 | # connection INTEGER,
98 | # bistream_data TEXT
99 | # )""")
100 | #
101 | # self.cursor.execute("""CREATE TABLE IF NOT EXISTS
102 | # smbs (
103 | # smb INTEGER PRIMARY KEY,
104 | # connection INTEGER,
105 | # smb_direction TEXT,
106 | # smb_action TEXT,
107 | # CONSTRAINT smb_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
108 | # )""")
109 |
110 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
111 | dcerpcbinds (
112 | dcerpcbind INTEGER PRIMARY KEY,
113 | connection INTEGER,
114 | dcerpcbind_uuid TEXT,
115 | dcerpcbind_transfersyntax TEXT
116 | -- CONSTRAINT dcerpcs_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
117 | )""")
118 |
119 | for idx in ["uuid","transfersyntax"]:
120 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS dcerpcbinds_%s_idx
121 | ON dcerpcbinds (dcerpcbind_%s)""" % (idx, idx))
122 |
123 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
124 | dcerpcrequests (
125 | dcerpcrequest INTEGER PRIMARY KEY,
126 | connection INTEGER,
127 | dcerpcrequest_uuid TEXT,
128 | dcerpcrequest_opnum INTEGER
129 | -- CONSTRAINT dcerpcs_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
130 | )""")
131 |
132 | for idx in ["uuid","opnum"]:
133 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS dcerpcrequests_%s_idx
134 | ON dcerpcrequests (dcerpcrequest_%s)""" % (idx, idx))
135 |
136 |
137 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
138 | dcerpcservices (
139 | dcerpcservice INTEGER PRIMARY KEY,
140 | dcerpcservice_uuid TEXT,
141 | dcerpcservice_name TEXT,
142 | CONSTRAINT dcerpcservice_uuid_uniq UNIQUE (dcerpcservice_uuid)
143 | )""")
144 |
145 | from uuid import UUID
146 | from dionaea.smb import rpcservices
147 | import inspect
148 | services = inspect.getmembers(rpcservices, inspect.isclass)
149 | for name, servicecls in services:
150 | if not name == 'RPCService' and issubclass(servicecls, rpcservices.RPCService):
151 | try:
152 | self.cursor.execute("INSERT INTO dcerpcservices (dcerpcservice_name, dcerpcservice_uuid) VALUES (?,?)",
153 | (name, str(UUID(hex=servicecls.uuid))) )
154 | except Exception as e:
155 | # print("dcerpcservice %s existed %s " % (servicecls.uuid, e) )
156 | pass
157 |
158 |
159 | logger.info("Getting RPC Services")
160 | r = self.cursor.execute("SELECT * FROM dcerpcservices")
161 | # print(r)
162 | names = [r.description[x][0] for x in range(len(r.description))]
163 | r = [ dict(zip(names, i)) for i in r]
164 | # print(r)
165 | r = dict([(UUID(i['dcerpcservice_uuid']).hex,i['dcerpcservice']) for i in r])
166 | # print(r)
167 |
168 |
169 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
170 | dcerpcserviceops (
171 | dcerpcserviceop INTEGER PRIMARY KEY,
172 | dcerpcservice INTEGER,
173 | dcerpcserviceop_opnum INTEGER,
174 | dcerpcserviceop_name TEXT,
175 | dcerpcserviceop_vuln TEXT,
176 | CONSTRAINT dcerpcop_service_opnum_uniq UNIQUE (dcerpcservice, dcerpcserviceop_opnum)
177 | )""")
178 |
179 | logger.info("Setting RPC ServiceOps")
180 | for name, servicecls in services:
181 | if not name == 'RPCService' and issubclass(servicecls, rpcservices.RPCService):
182 | for opnum in servicecls.ops:
183 | op = servicecls.ops[opnum]
184 | uuid = servicecls.uuid
185 | vuln = ''
186 | dcerpcservice = r[uuid]
187 | if opnum in servicecls.vulns:
188 | vuln = servicecls.vulns[opnum]
189 | try:
190 | self.cursor.execute("INSERT INTO dcerpcserviceops (dcerpcservice, dcerpcserviceop_opnum, dcerpcserviceop_name, dcerpcserviceop_vuln) VALUES (?,?,?,?)",
191 | (dcerpcservice, opnum, op, vuln))
192 | except:
193 | # print("%s %s %s %s %s existed" % (dcerpcservice, uuid, name, op, vuln))
194 | pass
195 |
196 | # NetPathCompare was called NetCompare in dcerpcserviceops
197 | try:
198 | logger.debug("Trying to update table: dcerpcserviceops")
199 | x = self.cursor.execute("""SELECT * FROM dcerpcserviceops WHERE dcerpcserviceop_name = 'NetCompare'""").fetchall()
200 | if len(x) > 0:
201 | self.cursor.execute("""UPDATE dcerpcserviceops SET dcerpcserviceop_name = 'NetPathCompare' WHERE dcerpcserviceop_name = 'NetCompare'""")
202 | logger.debug("... done")
203 | else:
204 | logger.info("... not required")
205 | except Exception as e:
206 | print(e)
207 | logger.info("... not required")
208 |
209 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
210 | emu_profiles (
211 | emu_profile INTEGER PRIMARY KEY,
212 | connection INTEGER,
213 | emu_profile_json TEXT
214 | -- CONSTRAINT emu_profiles_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
215 | )""")
216 |
217 |
218 | # fix a typo on emu_services table definition
219 | # emu_services.emu_serive is wrong, should be emu_services.emu_service
220 | # 1) rename table, create the proper table
221 | try:
222 | logger.debug("Trying to update table: emu_services")
223 | self.cursor.execute("""SELECT emu_serivce FROM emu_services LIMIT 1""")
224 | self.cursor.execute("""ALTER TABLE emu_services RENAME TO emu_services_old""")
225 | update = True
226 | except Exception as e:
227 | logger.debug("... not required")
228 | update = False
229 |
230 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
231 | emu_services (
232 | emu_service INTEGER PRIMARY KEY,
233 | connection INTEGER,
234 | emu_service_url TEXT
235 | -- CONSTRAINT emu_services_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
236 | )""")
237 |
238 | # 2) copy all values to proper table, drop old table
239 | try:
240 | if update == True:
241 | self.cursor.execute("""
242 | INSERT INTO
243 | emu_services (emu_service, connection, emu_service_url)
244 | SELECT
245 | emu_serivce, connection, emu_service_url
246 | FROM emu_services_old""")
247 | self.cursor.execute("""DROP TABLE emu_services_old""")
248 | logger.debug("... done")
249 | except Exception as e:
250 | logger.debug("Updating emu_services failed, copying old table failed (%s)" % e)
251 |
252 |
253 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
254 | offers (
255 | offer INTEGER PRIMARY KEY,
256 | connection INTEGER,
257 | offer_url TEXT
258 | -- CONSTRAINT offers_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
259 | )""")
260 |
261 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS offers_url_idx ON offers (offer_url)""")
262 |
263 | # fix a typo on downloads table definition
264 | # downloads.downloads is wrong, should be downloads.download
265 | # 1) rename table, create the proper table
266 | try:
267 | logger.debug("Trying to update table: downloads")
268 | self.cursor.execute("""SELECT downloads FROM downloads LIMIT 1""")
269 | self.cursor.execute("""ALTER TABLE downloads RENAME TO downloads_old""")
270 | update = True
271 | except Exception as e:
272 | # print(e)
273 | logger.debug("... not required")
274 | update = False
275 |
276 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
277 | downloads (
278 | download INTEGER PRIMARY KEY,
279 | connection INTEGER,
280 | download_url TEXT,
281 | download_md5_hash TEXT
282 | -- CONSTRAINT downloads_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
283 | )""")
284 |
285 | # 2) copy all values to proper table, drop old table
286 | try:
287 | if update == True:
288 | self.cursor.execute("""
289 | INSERT INTO
290 | downloads (download, connection, download_url, download_md5_hash)
291 | SELECT
292 | downloads, connection, download_url, download_md5_hash
293 | FROM downloads_old""")
294 | self.cursor.execute("""DROP TABLE downloads_old""")
295 | logger.debug("... done")
296 | except Exception as e:
297 | logger.debug("Updating downloads failed, copying old table failed (%s)" % e)
298 |
299 | for idx in ["url", "md5_hash"]:
300 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS downloads_%s_idx
301 | ON downloads (download_%s)""" % (idx, idx))
302 |
303 |
304 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
305 | resolves (
306 | resolve INTEGER PRIMARY KEY,
307 | connection INTEGER,
308 | resolve_hostname TEXT,
309 | resolve_type TEXT,
310 | resolve_result TEXT
311 | )""")
312 |
313 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
314 | p0fs (
315 | p0f INTEGER PRIMARY KEY,
316 | connection INTEGER,
317 | p0f_genre TEXT,
318 | p0f_link TEXT,
319 | p0f_detail TEXT,
320 | p0f_uptime INTEGER,
321 | p0f_tos TEXT,
322 | p0f_dist INTEGER,
323 | p0f_nat INTEGER,
324 | p0f_fw INTEGER
325 | -- CONSTRAINT p0fs_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
326 | )""")
327 |
328 | for idx in ["genre","detail","uptime"]:
329 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS p0fs_%s_idx
330 | ON p0fs (p0f_%s)""" % (idx, idx))
331 |
332 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
333 | logins (
334 | login INTEGER PRIMARY KEY,
335 | connection INTEGER,
336 | login_username TEXT,
337 | login_password TEXT
338 | -- CONSTRAINT logins_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
339 | )""")
340 |
341 | for idx in ["username","password"]:
342 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS logins_%s_idx
343 | ON logins (login_%s)""" % (idx, idx))
344 |
345 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
346 | mssql_fingerprints (
347 | mssql_fingerprint INTEGER PRIMARY KEY,
348 | connection INTEGER,
349 | mssql_fingerprint_hostname TEXT,
350 | mssql_fingerprint_appname TEXT,
351 | mssql_fingerprint_cltintname TEXT
352 | -- CONSTRAINT mssql_fingerprints_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
353 | )""")
354 |
355 | for idx in ["hostname","appname","cltintname"]:
356 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS mssql_fingerprints_%s_idx
357 | ON mssql_fingerprints (mssql_fingerprint_%s)""" % (idx, idx))
358 |
359 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
360 | mssql_commands (
361 | mssql_command INTEGER PRIMARY KEY,
362 | connection INTEGER,
363 | mssql_command_status TEXT,
364 | mssql_command_cmd TEXT
365 | -- CONSTRAINT mssql_commands_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
366 | )""")
367 |
368 | for idx in ["status"]:
369 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS mssql_commands_%s_idx
370 | ON mssql_commands (mssql_command_%s)""" % (idx, idx))
371 |
372 |
373 |
374 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS virustotals (
375 | virustotal INTEGER PRIMARY KEY,
376 | virustotal_md5_hash TEXT NOT NULL,
377 | virustotal_timestamp INTEGER NOT NULL,
378 | virustotal_permalink TEXT NOT NULL
379 | )""")
380 |
381 | for idx in ["md5_hash"]:
382 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS virustotals_%s_idx
383 | ON virustotals (virustotal_%s)""" % (idx, idx))
384 |
385 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS virustotalscans (
386 | virustotalscan INTEGER PRIMARY KEY,
387 | virustotal INTEGER NOT NULL,
388 | virustotalscan_scanner TEXT NOT NULL,
389 | virustotalscan_result TEXT
390 | )""")
391 |
392 |
393 | for idx in ["scanner","result"]:
394 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS virustotalscans_%s_idx
395 | ON virustotalscans (virustotalscan_%s)""" % (idx, idx))
396 |
397 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS virustotalscans_virustotal_idx
398 | ON virustotalscans (virustotal)""")
399 |
400 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
401 | mysql_commands (
402 | mysql_command INTEGER PRIMARY KEY,
403 | connection INTEGER,
404 | mysql_command_cmd NUMBER NOT NULL
405 | -- CONSTRAINT mysql_commands_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
406 | )""")
407 |
408 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
409 | mysql_command_args (
410 | mysql_command_arg INTEGER PRIMARY KEY,
411 | mysql_command INTEGER,
412 | mysql_command_arg_index NUMBER NOT NULL,
413 | mysql_command_arg_data TEXT NOT NULL
414 | -- CONSTRAINT mysql_commands_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
415 | )""")
416 |
417 | for idx in ["command"]:
418 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS mysql_command_args_%s_idx
419 | ON mysql_command_args (mysql_%s)""" % (idx, idx))
420 |
421 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
422 | mysql_command_ops (
423 | mysql_command_op INTEGER PRIMARY KEY,
424 | mysql_command_cmd INTEGER NOT NULL,
425 | mysql_command_op_name TEXT NOT NULL,
426 | CONSTRAINT mysql_command_cmd_uniq UNIQUE (mysql_command_cmd)
427 | )""")
428 |
429 | from dionaea.mysql.include.packets import MySQL_Commands
430 | logger.info("Setting MySQL Command Ops")
431 | for num,name in MySQL_Commands.items():
432 | try:
433 | self.cursor.execute("INSERT INTO mysql_command_ops (mysql_command_cmd, mysql_command_op_name) VALUES (?,?)",
434 | (num, name))
435 | except:
436 | pass
437 |
438 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
439 | sip_commands (
440 | sip_command INTEGER PRIMARY KEY,
441 | connection INTEGER,
442 | sip_command_method ,
443 | sip_command_call_id ,
444 | sip_command_user_agent ,
445 | sip_command_allow INTEGER
446 | -- CONSTRAINT sip_commands_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
447 | )""")
448 |
449 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
450 | sip_addrs (
451 | sip_addr INTEGER PRIMARY KEY,
452 | sip_command INTEGER,
453 | sip_addr_type ,
454 | sip_addr_display_name,
455 | sip_addr_uri_scheme,
456 | sip_addr_uri_user,
457 | sip_addr_uri_password,
458 | sip_addr_uri_host,
459 | sip_addr_uri_port
460 | -- CONSTRAINT sip_addrs_command_fkey FOREIGN KEY (sip_command) REFERENCES sip_commands (sip_command)
461 | )""")
462 |
463 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
464 | sip_vias (
465 | sip_via INTEGER PRIMARY KEY,
466 | sip_command INTEGER,
467 | sip_via_protocol,
468 | sip_via_address,
469 | sip_via_port
470 | -- CONSTRAINT sip_vias_command_fkey FOREIGN KEY (sip_command) REFERENCES sip_commands (sip_command)
471 | )""")
472 |
473 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
474 | sip_sdp_origins (
475 | sip_sdp_origin INTEGER PRIMARY KEY,
476 | sip_command INTEGER,
477 | sip_sdp_origin_username,
478 | sip_sdp_origin_sess_id,
479 | sip_sdp_origin_sess_version,
480 | sip_sdp_origin_nettype,
481 | sip_sdp_origin_addrtype,
482 | sip_sdp_origin_unicast_address
483 | -- CONSTRAINT sip_sdp_origins_fkey FOREIGN KEY (sip_command) REFERENCES sip_commands (sip_command)
484 | )""")
485 |
486 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
487 | sip_sdp_connectiondatas (
488 | sip_sdp_connectiondata INTEGER PRIMARY KEY,
489 | sip_command INTEGER,
490 | sip_sdp_connectiondata_nettype,
491 | sip_sdp_connectiondata_addrtype,
492 | sip_sdp_connectiondata_connection_address,
493 | sip_sdp_connectiondata_ttl,
494 | sip_sdp_connectiondata_number_of_addresses
495 | -- CONSTRAINT sip_sdp_connectiondatas_fkey FOREIGN KEY (sip_command) REFERENCES sip_commands (sip_command)
496 | )""")
497 |
498 | self.cursor.execute("""CREATE TABLE IF NOT EXISTS
499 | sip_sdp_medias (
500 | sip_sdp_media INTEGER PRIMARY KEY,
501 | sip_command INTEGER,
502 | sip_sdp_media_media,
503 | sip_sdp_media_port,
504 | sip_sdp_media_number_of_ports,
505 | sip_sdp_media_proto
506 | -- sip_sdp_media_fmt,
507 | -- sip_sdp_media_attributes
508 | -- CONSTRAINT sip_sdp_medias_fkey FOREIGN KEY (sip_command) REFERENCES sip_commands (sip_command)
509 | )""")
510 |
511 | # self.cursor.execute("""CREATE TABLE IF NOT EXISTS
512 | # httpheaders (
513 | # httpheader INTEGER PRIMARY KEY,
514 | # connection INTEGER,
515 | # http_headerkey TEXT,
516 | # http_headervalue TEXT,
517 | # -- CONSTRAINT httpheaders_connection_fkey FOREIGN KEY (connection) REFERENCES connections (connection)
518 | # )""")
519 | #
520 | # for idx in ["headerkey","headervalue"]:
521 | # self.cursor.execute("""CREATE INDEX IF NOT EXISTS httpheaders_%s_idx
522 | # ON httpheaders (httpheader_%s)""" % (idx, idx))
523 |
524 |
525 | # connection index for all
526 | for idx in ["dcerpcbinds", "dcerpcrequests", "emu_profiles", "emu_services", "offers", "downloads", "p0fs", "logins", "mssql_fingerprints", "mssql_commands","mysql_commands","sip_commands"]:
527 | self.cursor.execute("""CREATE INDEX IF NOT EXISTS %s_connection_idx ON %s (connection)""" % (idx, idx))
528 |
529 |
530 | self.dbh.commit()
531 |
532 |
533 | # updates, database schema corrections for old versions
534 |
535 | # svn rev 2143 removed the table dcerpcs
536 | # and created the table dcerpcrequests
537 | #
538 | # copy the data to the new table dcerpcrequests
539 | # drop the old table
540 | try:
541 | logger.debug("Updating Table dcerpcs")
542 | self.cursor.execute("""INSERT INTO
543 | dcerpcrequests (connection, dcerpcrequest_uuid, dcerpcrequest_opnum)
544 | SELECT
545 | connection, dcerpc_uuid, dcerpc_opnum
546 | FROM
547 | dcerpcs""")
548 | self.cursor.execute("""DROP TABLE dcerpcs""")
549 | logger.debug("... done")
550 |         except sqlite3.OperationalError:
551 |             # table dcerpcs does not exist, so the migration is not required
552 |             logger.debug("... not required")
553 |
554 |
555 | def __del__(self):
556 | logger.info("Closing sqlite handle")
557 | self.cursor.close()
558 | self.cursor = None
559 | self.dbh.close()
560 | self.dbh = None
561 |
562 | def handle_incident(self, icd):
563 | # print("unknown")
564 | pass
565 |
566 | def connection_insert(self, icd, connection_type):
567 | con=icd.con
568 | r = self.cursor.execute("INSERT INTO connections (connection_timestamp, connection_type, connection_transport, connection_protocol, local_host, local_port, remote_host, remote_hostname, remote_port) VALUES (?,?,?,?,?,?,?,?,?)",
569 | (time.time(), connection_type, con.transport, con.protocol, con.local.host, con.local.port, con.remote.host, con.remote.hostname, con.remote.port) )
570 | attackid = self.cursor.lastrowid
571 | self.attacks[con] = (attackid, attackid)
572 | self.dbh.commit()
573 |
574 |         # maybe this was an early connection?
575 |         if con in self.pending:
576 |             # the connection was linked before we knew its id,
577 |             # so we have to:
578 |             # - update connection_root and connection_parent for all connections that were linked to this pending connection
579 |             # - update connection_root for all connections which had the pending id as connection_root
580 | for i in self.pending[con]:
581 |                 logger.debug("%s %s %s" % (attackid, attackid, i))
582 | self.cursor.execute("UPDATE connections SET connection_root = ?, connection_parent = ? WHERE connection = ?",
583 | (attackid, attackid, i ) )
584 | self.cursor.execute("UPDATE connections SET connection_root = ? WHERE connection_root = ?",
585 | (attackid, i ) )
586 | self.dbh.commit()
587 |
588 |         # mirror the row id into the id column so Logstash can use it as a document key
589 | self.cursor.execute("UPDATE connections SET id = ? WHERE connection = ?", (attackid, attackid) )
590 |
591 | return attackid
592 |
593 |
594 | def handle_incident_dionaea_connection_tcp_listen(self, icd):
595 | attackid = self.connection_insert( icd, 'listen')
596 | con=icd.con
597 | logger.info("listen connection on %s:%i (id=%i)" %
598 | (con.remote.host, con.remote.port, attackid))
599 |
600 | def handle_incident_dionaea_connection_tls_listen(self, icd):
601 | attackid = self.connection_insert( icd, 'listen')
602 | con=icd.con
603 | logger.info("listen connection on %s:%i (id=%i)" %
604 | (con.remote.host, con.remote.port, attackid))
605 |
606 | def handle_incident_dionaea_connection_tcp_connect(self, icd):
607 | attackid = self.connection_insert( icd, 'connect')
608 | con=icd.con
609 | logger.info("connect connection to %s/%s:%i from %s:%i (id=%i)" %
610 | (con.remote.host, con.remote.hostname, con.remote.port, con.local.host, con.local.port, attackid))
611 |
612 | def handle_incident_dionaea_connection_tls_connect(self, icd):
613 | attackid = self.connection_insert( icd, 'connect')
614 | con=icd.con
615 | logger.info("connect connection to %s/%s:%i from %s:%i (id=%i)" %
616 | (con.remote.host, con.remote.hostname, con.remote.port, con.local.host, con.local.port, attackid))
617 |
618 | def handle_incident_dionaea_connection_udp_connect(self, icd):
619 | attackid = self.connection_insert( icd, 'connect')
620 | con=icd.con
621 | logger.info("connect connection to %s/%s:%i from %s:%i (id=%i)" %
622 | (con.remote.host, con.remote.hostname, con.remote.port, con.local.host, con.local.port, attackid))
623 |
624 | def handle_incident_dionaea_connection_tcp_accept(self, icd):
625 | attackid = self.connection_insert( icd, 'accept')
626 | con=icd.con
627 | logger.info("accepted connection from %s:%i to %s:%i (id=%i)" %
628 | (con.remote.host, con.remote.port, con.local.host, con.local.port, attackid))
629 |
630 | def handle_incident_dionaea_connection_tls_accept(self, icd):
631 | attackid = self.connection_insert( icd, 'accept')
632 | con=icd.con
633 | logger.info("accepted connection from %s:%i to %s:%i (id=%i)" %
634 | (con.remote.host, con.remote.port, con.local.host, con.local.port, attackid))
635 |
636 |
637 | def handle_incident_dionaea_connection_tcp_reject(self, icd):
638 | attackid = self.connection_insert(icd, 'reject')
639 | con=icd.con
640 | logger.info("reject connection from %s:%i to %s:%i (id=%i)" %
641 | (con.remote.host, con.remote.port, con.local.host, con.local.port, attackid))
642 |
643 | def handle_incident_dionaea_connection_tcp_pending(self, icd):
644 | attackid = self.connection_insert(icd, 'pending')
645 | con=icd.con
646 | logger.info("pending connection from %s:%i to %s:%i (id=%i)" %
647 | (con.remote.host, con.remote.port, con.local.host, con.local.port, attackid))
648 |
649 | def handle_incident_dionaea_connection_link_early(self, icd):
650 | # if we have to link a connection with a connection we do not know yet,
651 |         # we store the unknown connection in self.pending and associate the child's id with it
652 | if icd.parent not in self.attacks:
653 | if icd.parent not in self.pending:
654 | self.pending[icd.parent] = {self.attacks[icd.child][1]: True}
655 | else:
656 | if icd.child not in self.pending[icd.parent]:
657 | self.pending[icd.parent][self.attacks[icd.child][1]] = True
658 |
659 | def handle_incident_dionaea_connection_link(self, icd):
660 | if icd.parent in self.attacks:
661 | logger.info("parent ids %s" % str(self.attacks[icd.parent]))
662 | parentroot, parentid = self.attacks[icd.parent]
663 | if icd.child in self.attacks:
664 | logger.info("child had ids %s" % str(self.attacks[icd.child]))
665 | childroot, childid = self.attacks[icd.child]
666 | else:
667 | childid = parentid
668 | self.attacks[icd.child] = (parentroot, childid)
669 | logger.info("child has ids %s" % str(self.attacks[icd.child]))
670 | logger.info("child %i parent %i root %i" % (childid, parentid, parentroot) )
671 | r = self.cursor.execute("UPDATE connections SET connection_root = ?, connection_parent = ? WHERE connection = ?",
672 | (parentroot, parentid, childid) )
673 | self.dbh.commit()
674 |
675 | if icd.child in self.pending:
676 | # if the new accepted connection was pending
677 | # assign the connection_root to all connections which have been waiting for this connection
678 | parentroot, parentid = self.attacks[icd.parent]
679 | if icd.child in self.attacks:
680 | childroot, childid = self.attacks[icd.child]
681 | else:
682 | childid = parentid
683 |
684 | self.cursor.execute("UPDATE connections SET connection_root = ? WHERE connection_root = ?",
685 | (parentroot, childid) )
686 | self.dbh.commit()
687 |
688 | def handle_incident_dionaea_connection_free(self, icd):
689 | con=icd.con
690 | if con in self.attacks:
691 | attackid = self.attacks[con][1]
692 | del self.attacks[con]
693 | logger.info("attackid %i is done" % attackid)
694 | else:
695 |             logger.warning("no attackid for %s:%s" % (con.local.host, con.local.port))
696 | if con in self.pending:
697 | del self.pending[con]
698 |
699 |
700 | def handle_incident_dionaea_module_emu_profile(self, icd):
701 | con = icd.con
702 | attackid = self.attacks[con][1]
703 | logger.info("emu profile for attackid %i" % attackid)
704 | self.cursor.execute("INSERT INTO emu_profiles (connection, emu_profile_json) VALUES (?,?)",
705 | (attackid, icd.profile) )
706 | self.dbh.commit()
707 |
708 |
709 | def handle_incident_dionaea_download_offer(self, icd):
710 | con=icd.con
711 | attackid = self.attacks[con][1]
712 | logger.info("offer for attackid %i" % attackid)
713 | self.cursor.execute("INSERT INTO offers (connection, offer_url) VALUES (?,?)",
714 | (attackid, icd.url) )
715 | self.dbh.commit()
716 |
717 | def handle_incident_dionaea_download_complete_hash(self, icd):
718 | con=icd.con
719 | attackid = self.attacks[con][1]
720 | logger.info("complete for attackid %i" % attackid)
721 | self.cursor.execute("INSERT INTO downloads (connection, download_url, download_md5_hash) VALUES (?,?,?)",
722 | (attackid, icd.url, icd.md5hash) )
723 | self.dbh.commit()
724 |
725 |
726 | def handle_incident_dionaea_service_shell_listen(self, icd):
727 | con=icd.con
728 | attackid = self.attacks[con][1]
729 | logger.info("listen shell for attackid %i" % attackid)
730 | self.cursor.execute("INSERT INTO emu_services (connection, emu_service_url) VALUES (?,?)",
731 | (attackid, "bindshell://"+str(icd.port)) )
732 | self.dbh.commit()
733 |
734 | def handle_incident_dionaea_service_shell_connect(self, icd):
735 | con=icd.con
736 | attackid = self.attacks[con][1]
737 | logger.info("connect shell for attackid %i" % attackid)
738 | self.cursor.execute("INSERT INTO emu_services (connection, emu_service_url) VALUES (?,?)",
739 | (attackid, "connectbackshell://"+str(icd.host)+":"+str(icd.port)) )
740 | self.dbh.commit()
741 |
742 | def handle_incident_dionaea_detect_attack(self, icd):
743 | con=icd.con
744 |         attackid = self.attacks[con][1]  # currently unused; this incident is not persisted
745 |
746 |
747 | def handle_incident_dionaea_modules_python_p0f(self, icd):
748 | con=icd.con
749 | if con in self.attacks:
750 | attackid = self.attacks[con][1]
751 | self.cursor.execute("INSERT INTO p0fs (connection, p0f_genre, p0f_link, p0f_detail, p0f_uptime, p0f_tos, p0f_dist, p0f_nat, p0f_fw) VALUES (?,?,?,?,?,?,?,?,?)",
752 | ( attackid, icd.genre, icd.link, icd.detail, icd.uptime, icd.tos, icd.dist, icd.nat, icd.fw))
753 | self.dbh.commit()
754 |
755 | def handle_incident_dionaea_modules_python_smb_dcerpc_request(self, icd):
756 | con=icd.con
757 | if con in self.attacks:
758 | attackid = self.attacks[con][1]
759 | self.cursor.execute("INSERT INTO dcerpcrequests (connection, dcerpcrequest_uuid, dcerpcrequest_opnum) VALUES (?,?,?)",
760 | (attackid, icd.uuid, icd.opnum))
761 | self.dbh.commit()
762 |
763 | def handle_incident_dionaea_modules_python_smb_dcerpc_bind(self, icd):
764 | con=icd.con
765 | if con in self.attacks:
766 | attackid = self.attacks[con][1]
767 | self.cursor.execute("INSERT INTO dcerpcbinds (connection, dcerpcbind_uuid, dcerpcbind_transfersyntax) VALUES (?,?,?)",
768 | (attackid, icd.uuid, icd.transfersyntax))
769 | self.dbh.commit()
770 |
771 | def handle_incident_dionaea_modules_python_mssql_login(self, icd):
772 | con = icd.con
773 | if con in self.attacks:
774 | attackid = self.attacks[con][1]
775 | self.cursor.execute("INSERT INTO logins (connection, login_username, login_password) VALUES (?,?,?)",
776 | (attackid, icd.username, icd.password))
777 | self.cursor.execute("INSERT INTO mssql_fingerprints (connection, mssql_fingerprint_hostname, mssql_fingerprint_appname, mssql_fingerprint_cltintname) VALUES (?,?,?,?)",
778 | (attackid, icd.hostname, icd.appname, icd.cltintname))
779 | self.dbh.commit()
780 |
781 | def handle_incident_dionaea_modules_python_mssql_cmd(self, icd):
782 | con = icd.con
783 | if con in self.attacks:
784 | attackid = self.attacks[con][1]
785 | self.cursor.execute("INSERT INTO mssql_commands (connection, mssql_command_status, mssql_command_cmd) VALUES (?,?,?)",
786 | (attackid, icd.status, icd.cmd))
787 | self.dbh.commit()
788 |
789 | def handle_incident_dionaea_modules_python_virustotal_report(self, icd):
790 | md5 = icd.md5hash
791 |         with open(icd.path, mode='r') as f:
792 |             j = json.load(f)
793 |
794 | if j['result'] == 1: # file was known to virustotal
795 | permalink = j['permalink']
796 | date = j['report'][0]
797 | self.cursor.execute("INSERT INTO virustotals (virustotal_md5_hash, virustotal_permalink, virustotal_timestamp) VALUES (?,?,strftime('%s',?))",
798 | (md5, permalink, date))
799 | self.dbh.commit()
800 |
801 | virustotal = self.cursor.lastrowid
802 |
803 | scans = j['report'][1]
804 | for av in scans:
805 | res = scans[av]
806 | # not detected = '' -> NULL
807 | if res == '':
808 | res = None
809 |
810 | self.cursor.execute("""INSERT INTO virustotalscans (virustotal, virustotalscan_scanner, virustotalscan_result) VALUES (?,?,?)""",
811 | (virustotal, av, res))
812 | # logger.debug("scanner {} result {}".format(av,scans[av]))
813 | self.dbh.commit()
814 |
815 | def handle_incident_dionaea_modules_python_mysql_login(self, icd):
816 | con = icd.con
817 | if con in self.attacks:
818 | attackid = self.attacks[con][1]
819 | self.cursor.execute("INSERT INTO logins (connection, login_username, login_password) VALUES (?,?,?)",
820 | (attackid, icd.username, icd.password))
821 | self.dbh.commit()
822 |
823 |
824 | def handle_incident_dionaea_modules_python_mysql_command(self, icd):
825 | con = icd.con
826 | if con in self.attacks:
827 | attackid = self.attacks[con][1]
828 | self.cursor.execute("INSERT INTO mysql_commands (connection, mysql_command_cmd) VALUES (?,?)",
829 | (attackid, icd.command))
830 | cmdid = self.cursor.lastrowid
831 |
832 | if hasattr(icd, 'args'):
833 | args = icd.args
834 |             # store each argument together with its positional index
835 |             for i, arg in enumerate(args):
836 | self.cursor.execute("INSERT INTO mysql_command_args (mysql_command, mysql_command_arg_index, mysql_command_arg_data) VALUES (?,?,?)",
837 | (cmdid, i, arg))
838 | self.dbh.commit()
839 |
840 | def handle_incident_dionaea_modules_python_sip_command(self, icd):
841 | con = icd.con
842 | if con not in self.attacks:
843 | return
844 |
845 | def calc_allow(a):
846 | b={ b'UNKNOWN' :(1<<0),
847 | 'ACK' :(1<<1),
848 | 'BYE' :(1<<2),
849 | 'CANCEL' :(1<<3),
850 | 'INFO' :(1<<4),
851 | 'INVITE' :(1<<5),
852 | 'MESSAGE' :(1<<6),
853 | 'NOTIFY' :(1<<7),
854 | 'OPTIONS' :(1<<8),
855 | 'PRACK' :(1<<9),
856 | 'PUBLISH' :(1<<10),
857 | 'REFER' :(1<<11),
858 | 'REGISTER' :(1<<12),
859 | 'SUBSCRIBE' :(1<<13),
860 | 'UPDATE' :(1<<14)
861 | }
862 | allow=0
863 | for i in a:
864 | if i in b:
865 | allow |= b[i]
866 | else:
867 | allow |= b[b'UNKNOWN']
868 | return allow
869 |
870 | attackid = self.attacks[con][1]
871 | self.cursor.execute("""INSERT INTO sip_commands
872 | (connection, sip_command_method, sip_command_call_id,
873 | sip_command_user_agent, sip_command_allow) VALUES (?,?,?,?,?)""",
874 | (attackid, icd.method, icd.call_id, icd.user_agent, calc_allow(icd.allow)))
875 | cmdid = self.cursor.lastrowid
876 |
877 | def add_addr(cmd, _type, addr):
878 | self.cursor.execute("""INSERT INTO sip_addrs
879 | (sip_command, sip_addr_type, sip_addr_display_name,
880 | sip_addr_uri_scheme, sip_addr_uri_user, sip_addr_uri_password,
881 | sip_addr_uri_host, sip_addr_uri_port) VALUES (?,?,?,?,?,?,?,?)""",
882 | (
883 | cmd, _type, addr['display_name'],
884 | addr['uri']['scheme'], addr['uri']['user'], addr['uri']['password'],
885 | addr['uri']['host'], addr['uri']['port']
886 | ))
887 | add_addr(cmdid,'addr',icd.get('addr'))
888 | add_addr(cmdid,'to',icd.get('to'))
889 | add_addr(cmdid,'contact',icd.get('contact'))
890 | for i in icd.get('from'):
891 | add_addr(cmdid,'from',i)
892 |
893 | def add_via(cmd, via):
894 | self.cursor.execute("""INSERT INTO sip_vias
895 | (sip_command, sip_via_protocol, sip_via_address, sip_via_port)
896 | VALUES (?,?,?,?)""",
897 | (
898 | cmd, via['protocol'],
899 | via['address'], via['port']
900 |
901 | ))
902 |
903 | for i in icd.get('via'):
904 | add_via(cmdid, i)
905 |
906 | def add_sdp(cmd, sdp):
907 | def add_origin(cmd, o):
908 | self.cursor.execute("""INSERT INTO sip_sdp_origins
909 | (sip_command, sip_sdp_origin_username,
910 | sip_sdp_origin_sess_id, sip_sdp_origin_sess_version,
911 | sip_sdp_origin_nettype, sip_sdp_origin_addrtype,
912 | sip_sdp_origin_unicast_address)
913 | VALUES (?,?,?,?,?,?,?)""",
914 | (
915 | cmd, o['username'],
916 | o['sess_id'], o['sess_version'],
917 | o['nettype'], o['addrtype'],
918 | o['unicast_address']
919 | ))
920 | def add_condata(cmd, c):
921 | self.cursor.execute("""INSERT INTO sip_sdp_connectiondatas
922 | (sip_command, sip_sdp_connectiondata_nettype,
923 | sip_sdp_connectiondata_addrtype, sip_sdp_connectiondata_connection_address,
924 | sip_sdp_connectiondata_ttl, sip_sdp_connectiondata_number_of_addresses)
925 | VALUES (?,?,?,?,?,?)""",
926 | (
927 | cmd, c['nettype'],
928 | c['addrtype'], c['connection_address'],
929 | c['ttl'], c['number_of_addresses']
930 | ))
931 | def add_media(cmd, c):
932 | self.cursor.execute("""INSERT INTO sip_sdp_medias
933 | (sip_command, sip_sdp_media_media,
934 | sip_sdp_media_port, sip_sdp_media_number_of_ports,
935 | sip_sdp_media_proto)
936 | VALUES (?,?,?,?,?)""",
937 | (
938 | cmd, c['media'],
939 | c['port'], c['number_of_ports'],
940 | c['proto']
941 | ))
942 | if 'o' in sdp:
943 | add_origin(cmd, sdp['o'])
944 | if 'c' in sdp:
945 | add_condata(cmd, sdp['c'])
946 | if 'm' in sdp:
947 | for i in sdp['m']:
948 | add_media(cmd, i)
949 |
950 | if hasattr(icd,'sdp') and icd.sdp is not None:
951 | add_sdp(cmdid,icd.sdp)
952 |
953 | self.dbh.commit()
954 |
955 |
956 |
957 |
958 |
959 |
--------------------------------------------------------------------------------
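The `calc_allow` helper in `logsql.py` above folds the SIP `Allow` header into a single integer so it fits one column of the `sip_commands` table. A minimal sketch of the same encode step, plus a hypothetical `decode_allow` helper (not part of dionaea) to read a stored mask back; only a subset of the method table is reproduced here:

```python
# Bit positions copied from the calc_allow() table in logsql.py;
# only a subset of the SIP methods is shown.
SIP_METHOD_BITS = {
    b'UNKNOWN':  1 << 0,   # fallback bit for unrecognised methods
    'ACK':       1 << 1,
    'BYE':       1 << 2,
    'CANCEL':    1 << 3,
    'INVITE':    1 << 5,
    'OPTIONS':   1 << 8,
    'REGISTER':  1 << 12,
}

def encode_allow(methods):
    """Fold a list of SIP method names into one integer bitmask."""
    allow = 0
    for m in methods:
        allow |= SIP_METHOD_BITS.get(m, SIP_METHOD_BITS[b'UNKNOWN'])
    return allow

def decode_allow(mask):
    """Hypothetical inverse: recover the method names set in a stored mask."""
    return sorted(m for m, bit in SIP_METHOD_BITS.items()
                  if isinstance(m, str) and mask & bit)

mask = encode_allow(['INVITE', 'ACK', 'OPTIONS'])   # (1<<5)|(1<<1)|(1<<8) == 290
```

Storing the header this way keeps `sip_commands` flat; a query such as `WHERE sip_command_allow & 32` finds commands whose `Allow` header advertised `INVITE`.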
/dionaea/modules_python_util/Makefile.am:
--------------------------------------------------------------------------------
1 | AUTOMAKE_OPTIONS = foreign
2 |
3 | bin_SCRIPTS = readlogsqltree gnuplotsql
4 | CLEANFILES = $(bin_SCRIPTS)
5 | EXTRA_DIST = readlogsqltree.py gnuplotsql.py
6 |
7 |
8 | do_subst = sed -e 's,[@]PYTHON[@],$(PYTHON),g'
9 |
10 | readlogsqltree: readlogsqltree.py
11 | $(do_subst) < readlogsqltree.py > readlogsqltree
12 | chmod +x readlogsqltree
13 |
14 | gnuplotsql: gnuplotsql.py
15 | $(do_subst) < gnuplotsql.py > gnuplotsql
16 | chmod +x gnuplotsql
17 |
18 | install-exec-hook:
19 | -rm -f $(bin_SCRIPTS)
20 |
--------------------------------------------------------------------------------
/dionaea/modules_python_util/csv2sqlite.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # create a sqlite database from a csv file
4 | # creates table schema and inserts rows
5 | # can handle multiple csv files
6 | #
7 | #   ./csv2sqlite.py --database out.sqlite --primary-key b a.csv bs.csv
8 | #   creates tables "a" and "bs", each with an INTEGER PRIMARY KEY column "b"
9 | #
10 |
11 | import sqlite3
12 | import csv
13 | import sys
14 | import argparse
15 | import codecs
16 |
17 | if __name__ == '__main__':
18 |
19 |     parser = argparse.ArgumentParser(description='Create a sqlite database from csv files')
20 | parser.add_argument('--database', help='the database to create', required=True)
21 | parser.add_argument('--primary-key', help='create a primary key')
22 | parser.add_argument('files', nargs='*', help='csv files to use as input')
23 | args = parser.parse_args()
24 |
25 | dbh = sqlite3.connect(args.database)
26 | cursor = dbh.cursor()
27 |
28 | for f in args.files:
29 | print("Processing File %s" % (f,))
30 | c = csv.reader(codecs.open(f, 'r', encoding="utf-8-sig"), delimiter=',', quotechar='"')
31 | table = f[:-4]
32 |         colnames = next(c)  # header row; next(c) works on Python 2.6+ and 3
33 | print("Using column names %s" % " ".join(colnames))
34 | cols = ','.join(colnames)
35 | if args.primary_key is not None:
36 | cols2 = "%s INTEGER PRIMARY KEY, " % args.primary_key + cols
37 | else:
38 | cols2 = cols
39 | create_table = "CREATE TABLE %s ( %s )" % (table, cols2)
40 | insert_into = "INSERT INTO %s (%s) VALUES (%s) " % (table, cols, ','.join(['?' for i in colnames]))
41 |
42 | try:
43 | dbh.execute(create_table)
44 | except Exception as e:
45 |             print("Could not CREATE table %s (%s)" % (table,e))
46 | continue
47 | for i in c:
48 | try:
49 | cursor.execute(insert_into, i)
50 | except Exception as e:
51 | print("Could not insert %s into table %s (%s)" % (i,table,e))
52 | print(insert_into)
53 |         for i in colnames:
54 |             dbh.execute("CREATE INDEX IF NOT EXISTS %s_%s_idx ON %s (%s)" % (table,i,table,i))
55 |         dbh.commit()
56 |
57 |
58 |
59 |
--------------------------------------------------------------------------------
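The core idea of `csv2sqlite.py` above — derive the table's columns from the CSV header row and parameterise the inserts — can be sketched self-contained against an in-memory database (the sample data and the `hosts` table name here are made up for illustration):

```python
import csv
import io
import sqlite3

# hypothetical sample input standing in for one of the CSV files
csv_text = "host,port\nexample.org,445\nexample.net,3306\n"

reader = csv.reader(io.StringIO(csv_text), delimiter=',', quotechar='"')
colnames = next(reader)                       # header row supplies the column names

dbh = sqlite3.connect(":memory:")
dbh.execute("CREATE TABLE hosts (%s)" % ",".join(colnames))

insert_into = "INSERT INTO hosts (%s) VALUES (%s)" % (
    ",".join(colnames), ",".join("?" for _ in colnames))
dbh.executemany(insert_into, reader)          # remaining rows become table rows
dbh.commit()

rows = dbh.execute("SELECT host, port FROM hosts ORDER BY host").fetchall()
```

As in the original script, the columns are declared without types, so SQLite stores the bound values as text.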
/dionaea/modules_python_util/gnuplotsql.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 |
3 | import sqlite3
4 | import os
5 | import datetime
6 | import calendar
7 | import sys
8 | from optparse import OptionParser
9 |
10 | def resolve_result(resultcursor):
11 | names = [resultcursor.description[x][0] for x in range(len(resultcursor.description))]
12 | resolvedresult = [ dict(zip(names, i)) for i in resultcursor]
13 | return resolvedresult
14 |
15 | def get_ranges_from_db(cursor):
16 | # create list of *all* days
17 | ranges = []
18 | dates = []
19 |
20 | r = cursor.execute("""SELECT
21 | strftime('%Y-%m-%d',MIN(connection_timestamp),'unixepoch','localtime') AS start,
22 | strftime('%Y-%m-%d',MAX(connection_timestamp),'unixepoch','localtime') AS stop
23 | FROM
24 | connections""")
25 |
26 | r = resolve_result(r)
27 | # round start and stop by month
28 | start = datetime.datetime.strptime(r[0]['start'], "%Y-%m-%d")
29 | start = datetime.datetime(start.year,start.month,1)
30 |
31 | stop = datetime.datetime.strptime(r[0]['stop'], "%Y-%m-%d")
32 | stop = datetime.datetime(stop.year,stop.month,1)+datetime.timedelta(days=calendar.monthrange(stop.year,stop.month)[1])-datetime.timedelta(seconds=1)
33 |
34 | # create a list of ranges
35 | # (overview|year|month,start,stop)
36 | ranges.append(("all",start,stop))
37 |
38 | cur = start
39 | while cur < stop:
40 | dates.append(cur.strftime("%Y-%m-%d"))
41 | next = cur + datetime.timedelta(1)
42 | if next.year != cur.year:
43 | ranges.append((
44 | "year",
45 | datetime.datetime(cur.year,1,1),
46 | cur)
47 | )
48 | if next.month != cur.month:
49 | ranges.append((
50 | "month",
51 | datetime.datetime(cur.year,cur.month,1),
52 | cur)
53 | )
54 | cur = next
55 |
56 | ranges.append((
57 | "year",
58 | datetime.datetime(cur.year,1,1),
59 | datetime.datetime(cur.year+1,1,1)-datetime.timedelta(1))
60 | )
61 | ranges.append((
62 | "month",
63 | datetime.datetime(cur.year,cur.month,1),
64 |         datetime.datetime(cur.year,cur.month,1)+datetime.timedelta(days=calendar.monthrange(cur.year,cur.month)[1])-datetime.timedelta(seconds=1))
65 | )
66 | return (ranges,dates)
67 |
68 | def make_directories(ranges, path_destination):
69 | # create directories
70 | for r in ranges:
71 | if r[0] == 'month':
72 | path = os.path.join(path_destination, r[1].strftime("%Y"), r[1].strftime("%m"))
73 | # print(path)
74 | if not os.path.exists(path):
75 | os.makedirs(path)
76 |
77 | paths = [
78 | os.path.join(
79 | path_destination,
80 | "gnuplot"
81 | ),
82 | os.path.join(
83 | path_destination,
84 | "gnuplot",
85 | "data"
86 | )
87 | ]
88 | for path in paths:
89 | if not os.path.exists(path):
90 | os.makedirs(path)
91 |
92 | def write_index(ranges, _protocols, DSTDIR, image_ext):
93 | tpl_html="""
95 |
96 |
97 | Summary for the dionaea honeypot
98 |
99 |
100 | {headline}
101 |
102 | - {menu_all_label}: {menu_all}
103 | - {menu_timerange_label}: {menu_timerange}
104 | - {menu_overview_label}: {menu_overview}
105 | - {menu_data_label}: {menu_data}
106 | - {menu_plot_label}: {menu_plot}
107 |
108 |
109 | Overviews
110 | {images}
111 |
112 |
113 | """
114 |
115 | # create index.html files
116 | for r in ranges:
117 | web_headline = ""
118 | if r[0] == 'all':
119 | web_headline = "All - {} - {}".format(
120 | r[1].strftime("%Y-%m-%d"),
121 | r[2].strftime("%Y-%m-%d")
122 | )
123 |
124 | if r[0] == 'year':
125 | web_headline = "Year - {}".format(
126 | r[1].strftime("%Y")
127 | )
128 |
129 | if r[0] == 'month':
130 |             web_headline = "Month - {}".format(
131 | r[1].strftime("%Y-%m")
132 | )
133 |
134 | web_menu_timerange_label = ""
135 | web_menu_timeranges = []
136 | if r[0] == "all":
137 | web_menu_all_label = "All"
138 | web_menu_all = """All"""
139 |
140 | # Years
141 | web_menu_timerange_label = "Years"
142 | for y in ranges:
143 | if y[0] != 'year':
144 | continue
145 | web_menu_timeranges.append(
146 | """{} """.format(
147 | y[1].strftime("%Y"),
148 | y[1].strftime("%Y")
149 | )
150 | )
151 |
152 | if r[0] == "year":
153 | web_menu_all_label = "All"
154 | web_menu_all = """All"""
155 |
156 | # write months
157 | web_menu_timerange_label = "Months"
158 | for y in ranges:
159 | if y[0] != 'month' or y[1].year != r[1].year:
160 | continue
161 | web_menu_timeranges.append(
162 | """{}-{}""".format(
163 | y[1].strftime("%m"),
164 | y[1].strftime("%Y"),
165 | y[1].strftime("%m")
166 | )
167 | )
168 | if r[0] == "month":
169 | web_menu_all_label = "All"
170 | web_menu_all = """All"""
171 |
172 | web_menu_timerange_label = "Year"
173 | web_menu_timeranges.append(
174 | """{} """.format(
175 | y[1].strftime("%Y")
176 | )
177 | )
178 |
179 | # Overviews
180 | web_menu_overview_label = "Overview"
181 | web_menu_overviews = []
182 | for p in _protocols:
183 | web_menu_overviews.append(
184 | """{}""".format(p,p)
185 | )
186 |
187 | web_menu_data_label = "Data"
188 | web_menu_datas = []
189 | for p in ["overview"] + _protocols:
190 | path_data = ""
191 | if r[0] == 'all':
192 | path_data = "gnuplot/data/" + p + ".data"
193 | if r[0] == "year":
194 | path_data = "../gnuplot/data/" + p + ".data"
195 | if r[0] == "month":
196 | path_data = "../../gnuplot/data/" + p + ".data"
197 |
198 | web_menu_datas.append("""{} """.format(path_data, p))
199 |
200 | rstart = r[1].strftime("%Y-%m-%d")
201 | rstop = r[2].strftime("%Y-%m-%d")
202 | web_menu_plot_label = "Plot"
203 | web_menu_plots = []
204 | for p in ["overview"] + _protocols:
205 | path_data = ""
206 | if r[0] == 'all':
207 | path_data = "gnuplot"
208 | if r[0] == "year":
209 | path_data = "../gnuplot"
210 | if r[0] == "month":
211 | path_data = "../../gnuplot"
212 |
213 | web_menu_plots.append(
214 | """
215 | {protocol}
216 | """.format(
217 | path_data=path_data,
218 | protocol=p,
219 | range=r[0],
220 | start=rstart,
221 | stop=rstop
222 | )
223 | )
224 |
225 | web_images = """
226 | Any
227 |
228 | """.format(
229 | image_ext=image_ext
230 | )
231 |
232 | for p in _protocols:
233 | web_images = web_images + """
234 | Overview {protocol}
235 |
236 | """.format(
237 | protocol=p,
238 | image_ext=image_ext
239 | )
240 |
241 | content = tpl_html.format(
242 | headline=web_headline,
243 | menu_all_label=web_menu_all_label,
244 | menu_all=web_menu_all,
245 | menu_timerange_label=web_menu_timerange_label,
246 | menu_timerange=" ".join(web_menu_timeranges),
247 | menu_overview_label=web_menu_overview_label,
248 | menu_overview=" ".join(web_menu_overviews),
249 | menu_data_label=web_menu_data_label,
250 | menu_data=" ".join(web_menu_datas),
251 | menu_plot_label=web_menu_plot_label,
252 | menu_plot=" ".join(web_menu_plots),
253 | images=web_images
254 | )
255 |
256 |
257 | w = None
258 |
259 | if r[0] == 'all':
260 | w = open(os.path.join(DSTDIR,"index.html"),"wt")
261 | elif r[0] == 'year':
262 | w = open(os.path.join(DSTDIR,r[1].strftime("%Y"),"index.html"),"wt")
263 | elif r[0] == 'month':
264 | w = open(os.path.join(DSTDIR,r[1].strftime("%Y"),r[1].strftime("%m"),"index.html"),"wt")
265 |
266 |         if w is None:
267 |             continue
268 |
269 | w.write(content)
270 | w.close()
271 |
272 |
273 | def get_overview_data(cursor, path_destination, filename_data, protocol):
274 | data = {}
275 | sql = {}
276 | sql["downloads"] = """
277 | SELECT
278 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
279 | count(*) AS num
280 | FROM
281 | connections AS conn
282 | NATURAL JOIN downloads
283 | {where}
284 | GROUP BY
285 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
286 | ORDER BY
287 | conn.connection_timestamp;
288 | """
289 | sql["offers"] = """
290 | SELECT
291 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
292 | count(*) AS num
293 | FROM
294 | connections AS conn
295 | NATURAL JOIN offers
296 | {where}
297 | GROUP BY
298 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
299 | ORDER BY
300 | conn.connection_timestamp;
301 | """
302 | sql["shellcodes"] = """
303 | SELECT
304 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
305 | count(*) AS num
306 | FROM
307 | connections AS conn
308 | NATURAL JOIN emu_profiles
309 | {where}
310 | GROUP BY
311 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
312 | ORDER BY
313 | conn.connection_timestamp;
314 |     """
315 | sql["accepts"] = """
316 | SELECT
317 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
318 | count(*) AS num
319 | FROM
320 | connections AS conn
321 | {where}
322 | GROUP BY
323 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
324 | ORDER BY
325 | conn.connection_timestamp;
326 | """
327 | sql["uniq"] = """
328 | SELECT
329 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
330 | count(DISTINCT downloads.download_md5_hash) as num
331 | FROM
332 | downloads
333 | NATURAL JOIN connections AS conn
334 | NATURAL JOIN offers JOIN connections AS root ON(conn.connection_root = root.connection)
335 | {where}
336 | GROUP BY
337 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
338 | ORDER BY
339 | conn.connection_timestamp;
340 | """
341 | sql["newfiles"] = """
342 | SELECT
343 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
344 | count(down.download_md5_hash) AS num
345 | FROM
346 | downloads AS down
347 | JOIN connections AS conn ON(down.connection = conn.connection)
348 | NATURAL JOIN offers
349 | JOIN connections AS root ON(conn.connection_root = root.connection)
350 | {where}
351 | GROUP BY
352 | down.download_md5_hash
353 | ORDER BY
354 | conn.connection_timestamp;
355 | """
356 | sql["hosts"] = """
357 | SELECT
358 | strftime('%Y-%m-%d',conn.connection_timestamp,'unixepoch','localtime') AS date,
359 | COUNT(DISTINCT conn.remote_host) as num
360 | FROM
361 | connections as conn
362 | {where}
363 | GROUP BY
364 | strftime('{time_format}',conn.connection_timestamp,'unixepoch','localtime')
365 | ORDER BY
366 | conn.connection_timestamp;
367 | """
368 | where = ""
369 | if protocol != "":
370 | where ="""
371 | WHERE
372 | conn.connection_protocol='{protocol}'
373 | """
374 |
375 | where = where.format(
376 | protocol=protocol
377 | )
378 |
379 | for t in list(sql.keys()):
380 | print("Selecting %s ..." % t)
381 | db_query = sql[t].format(
382 | time_format="%Y-%m-%d",
383 | where=where
384 | )
385 | #print(db_query)
386 | db_res = cursor.execute(db_query)
387 | db_data = resolve_result(db_res)
388 |
389 | for db_row in db_data:
390 | date = db_row["date"]
391 | if date not in data:
392 | data[date] = {}
393 | for k in list(sql.keys()):
394 | data[date][k] = 0
395 | data[date][t] = str(db_row["num"])
396 |
397 | # fill with zeros
398 | for date in dates:
399 | if date not in data:
400 | data[date] = {}
401 | for k in list(sql.keys()):
402 | data[date][k] = 0
403 |
404 | # write data file
405 | w = open(filename_data,"wt")
406 | for d in dates:
407 | a = data[d]
408 | w.write("{}|{}|{}|{}|{}|{}|{}|{}\n".format(d,
409 | a['hosts'],
410 | a['accepts'],
411 | a['shellcodes'],
412 | a['offers'],
413 | a['downloads'],
414 | a['uniq'],
415 | a['newfiles']))
416 | w.close()
417 |
418 | def plot_overview_data(ranges, path_destination, filename_data, protocol, filename_tpl, image_ext):
419 | suffix = ""
420 | prefix = "overview"
421 | if protocol != "":
422 | suffix = "-{}".format(protocol)
423 | prefix = protocol
424 |
425 | tpl_gnuplot ="""set terminal png size 600,600 nocrop butt font "/usr/share/fonts/truetype/ttf-liberation/LiberationSans-Regular.ttf" 8
426 | set output "{filename_output}"
427 | set xdata time
428 | set timefmt "%Y-%m-%d"
429 | set xrange ["{range_start}":"{range_stop}"]
430 | set format x "%b %d"
431 | set xlabel "date"
432 | set ylabel "count"
433 | set y2label "count"
434 | set y2tics
435 | set grid
436 |
437 | set size 1.0,0.5
438 |
439 | set style line 1 lt rgb "#00C613" # aqua
440 | set style line 2 lt rgb "#6AFFA0" #
441 | set style line 3 lt rgb "#23FF38"
442 | set style line 4 lt rgb "#75BF0F"
443 | set style line 5 lt rgb "#A1FF00"
444 | set style line 6 lt rgb "red" # "#D6FFBF" # deepskyblue
445 |
446 | unset logscale y
447 | set datafile separator "|"
448 | set multiplot
449 |
450 | set origin 0.0,0.5
451 | plot "{filename_data}" using 1:3 title "accept" with boxes fs solid, \\
452 | "" using 1:4 title "shellcode" with boxes fs solid, \\
453 | "" using 1:5 title "offers" with boxes fs solid, \\
454 | "" using 1:6 title "downloads" with boxes fs solid, \\
455 | "" using 1:7 title "uniq" with boxes fs solid, \\
456 | "" using 1:8 title "new" with boxes fs solid
457 |
458 | set origin 0.0,0.0
459 | plot "{filename_data}" using 1:2 title "hosts" with boxes fs solid
460 |
461 | unset multiplot
462 | """
463 |
464 | if filename_tpl is not None and os.path.isfile(filename_tpl):
465 | fp = open(filename_tpl, "rt")
466 | tpl_gnuplot = fp.read()
467 | fp.close()
468 |
469 | for r in ranges:
470 | path = ""
471 | print(r)
472 | xstart = r[1]
473 | xstop = r[2]
474 | if r[0] == 'all':
475 | rstart = xstart.strftime("%Y-%m-%d")
476 | rstop = xstop.strftime("%Y-%m-%d")
477 | title = 'all {}-{}'.format(rstart,rstop)
478 | elif r[0] == 'year':
479 | rstart = xstart.strftime("%Y-%m-%d")
480 | rstop = xstop.strftime("%Y-%m-%d")
481 | title = 'year {}-{}'.format(rstart,rstop)
482 | path = xstart.strftime("%Y")
483 | elif r[0] == 'month':
484 | rstart = xstart.strftime("%Y-%m-%d")
485 | rstop = xstop.strftime("%Y-%m-%d")
486 | title = 'month {}-{}'.format(rstart,rstop)
487 | path = os.path.join(xstart.strftime("%Y"),xstart.strftime("%m"))
488 |
489 | output = os.path.join(path_destination, path, "dionaea-overview{}.{}".format(suffix, image_ext))
490 | filename_gnuplot = os.path.join(
491 | path_destination,
492 | "gnuplot",
493 | "{prefix}_{range}_{start}_{stop}.cmd".format(
494 | prefix=prefix,
495 | range=r[0],
496 | start=rstart,
497 | stop=rstop
498 | )
499 | )
500 |
501 | w = open(filename_gnuplot, "wt")
502 | w.write(
503 | tpl_gnuplot.format(
504 | filename_output=output,
505 | range_start=xstart,
506 | range_stop=xstop,
507 | filename_data=filename_data
508 | )
509 | )
510 | w.close()
511 |
512 | os.system("gnuplot {}".format(filename_gnuplot))
513 |
514 | if __name__ == "__main__":
515 | parser = OptionParser()
516 | parser.add_option("-d", "--database", action="store", type="string", dest="database", default="/opt/dionaea/var/dionaea/logsql.sqlite")
517 | parser.add_option("-D", "--destination", action="store", type="string", dest="destination", default="/tmp/dionaea-gnuplot")
518 | parser.add_option("-t", "--tempfile", action="store", type="string", dest="tempfile", default="/tmp/dionaea-gnuplotsql.data")
519 | parser.add_option('-p', '--protocol', dest='protocols', help='only graph this connection protocol (may be given multiple times)', type="string", action="append")
520 | parser.add_option('', '--all-protocols', dest='all_protocols', help='graph every protocol found in the database', action="store_true", default=False)
521 | parser.add_option('-g', '--gnuplot-tpl', dest='gnuplot_tpl', help='path to a custom gnuplot template file', type="string", action="store", default=None)
522 | parser.add_option('', '--image-ext', dest='image_ext', help='image file extension, e.g. png or svg', type="string", action="store", default="png")
523 | (options, args) = parser.parse_args()
524 |
525 | dbh = sqlite3.connect(options.database)
526 | cursor = dbh.cursor()
527 |
528 | protocols = options.protocols
529 | if options.all_protocols:
530 | protocols = []
531 | db_res = cursor.execute("SELECT connection_protocol FROM connections GROUP BY connection_protocol")
532 | db_data = resolve_result(db_res)
533 | for db_row in db_data:
534 | protocols.append(db_row["connection_protocol"])
535 |
536 | if not protocols:
537 | print("No protocols specified")
538 | sys.exit(1)
539 |
540 | (ranges,dates) = get_ranges_from_db(cursor)
541 | make_directories(ranges, options.destination)
542 |
543 | write_index(
544 | ranges,
545 | protocols,
546 | options.destination,
547 | options.image_ext
548 | )
549 |
550 | # general overview
551 | print("[+] getting data for general overview")
552 | filename_data = os.path.join(
553 | options.destination,
554 | "gnuplot",
555 | "data",
556 | "overview.data"
557 | )
558 | get_overview_data(cursor, options.destination, filename_data, "")
559 | plot_overview_data(
560 | ranges,
561 | options.destination,
562 | filename_data,
563 | "",
564 | options.gnuplot_tpl,
565 | options.image_ext
566 | )
567 |
568 | # protocols
569 | for protocol in protocols:
570 | filename_data = os.path.join(
571 | options.destination,
572 | "gnuplot",
573 | "data",
574 | protocol + ".data"
575 | )
576 | print("[+] getting data for {} overview".format(protocol))
577 | get_overview_data(
578 | cursor,
579 | options.destination,
580 | filename_data,
581 | protocol
582 | )
583 | plot_overview_data(
584 | ranges,
585 | options.destination,
586 | filename_data,
587 | protocol,
588 | options.gnuplot_tpl,
589 | options.image_ext
590 | )
591 |
592 |
593 |
--------------------------------------------------------------------------------
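Two building blocks of gnuplotsql.py above are easy to miss in the bulk of SQL: `resolve_result()` turns cursor row tuples into dicts keyed by column name, and the per-day counts come from grouping unixepoch timestamps with SQLite's `strftime()`. A minimal, self-contained sketch of both (the table and sample timestamps are made up, and the `'localtime'` modifier is dropped so the output does not depend on the machine's timezone):

```python
# Sketch of the two patterns gnuplotsql.py relies on: resolving cursor
# rows into dicts, and grouping unixepoch timestamps per day.
import sqlite3

def resolve_result(cursor):
    # Map each row tuple to a {column_name: value} dict, as in the script.
    names = [d[0] for d in cursor.description]
    return [dict(zip(names, row)) for row in cursor]

dbh = sqlite3.connect(":memory:")
cur = dbh.cursor()
cur.execute("CREATE TABLE connections (connection_timestamp INTEGER)")
# Two connections on 1970-01-01 and one on 1970-01-02 (UTC).
cur.executemany("INSERT INTO connections VALUES (?)", [(10,), (20,), (90000,)])

res = cur.execute("""
    SELECT strftime('%Y-%m-%d', connection_timestamp, 'unixepoch') AS date,
           count(*) AS num
    FROM connections
    GROUP BY date
    ORDER BY date""")
for row in resolve_result(res):
    print(row["date"], row["num"])
```

The same `GROUP BY strftime(...)` shape, with `{time_format}` and `{where}` substituted in, produces every column of the overview data file.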
/dionaea/modules_python_util/gnuplotsql/gnuplot.example:
--------------------------------------------------------------------------------
1 | set terminal png size 600,600 nocrop butt font "/usr/share/fonts/truetype/ttf-liberation/LiberationSans-Regular.ttf" 8
2 | set output "{filename_output}"
3 | set xdata time
4 | set timefmt "%Y-%m-%d"
5 | set xrange ["{range_start}":"{range_stop}"]
6 | set format x "%b %d"
7 | set xlabel "date"
8 | set ylabel "count"
9 | set y2label "count"
10 | set y2tics
11 | set grid
12 |
13 | set size 1.0,0.5
14 |
15 | set style line 1 lt rgb "#00C613"
16 | set style line 2 lt rgb "#6AFFA0"
17 | set style line 3 lt rgb "#23FF38"
18 | set style line 4 lt rgb "#75BF0F"
19 | set style line 5 lt rgb "#A1FF00"
20 | set style line 6 lt rgb "red"
21 |
22 | unset logscale y
23 | set datafile separator "|"
24 | set multiplot
25 |
26 | set origin 0.0,0.5
27 | plot "{filename_data}" using 1:3 title "accept" with lines, \
28 | "" using 1:4 title "shellcode" with lines, \
29 | "" using 1:5 title "offers" with lines, \
30 | "" using 1:6 title "downloads" with lines, \
31 | "" using 1:7 title "uniq" with lines, \
32 | "" using 1:8 title "new" with lines
33 |
34 | set origin 0.0,0.0
35 | plot "{filename_data}" using 1:2 title "hosts" with lines
36 |
37 | unset multiplot
38 |
--------------------------------------------------------------------------------
/dionaea/modules_python_util/gnuplotsql/gnuplot.svg.example:
--------------------------------------------------------------------------------
1 | set terminal svg enhanced size 600,600 font "arial,8"
2 | set output "{filename_output}"
3 | set xdata time
4 | set timefmt "%Y-%m-%d"
5 | set xrange ["{range_start}":"{range_stop}"]
6 | set format x "%b %d"
7 | set xlabel "date"
8 | set ylabel "count"
9 | set y2label "count"
10 | set y2tics
11 | set grid
12 |
13 | set size 1.0,0.5
14 |
15 | set style line 1 lt rgb "#00C613"
16 | set style line 2 lt rgb "#6AFFA0"
17 | set style line 3 lt rgb "#23FF38"
18 | set style line 4 lt rgb "#75BF0F"
19 | set style line 5 lt rgb "#A1FF00"
20 | set style line 6 lt rgb "red"
21 |
22 | unset logscale y
23 | set datafile separator "|"
24 | set multiplot
25 |
26 | set origin 0.0,0.5
27 | plot "{filename_data}" using 1:3 title "accept" with lines, \
28 | "" using 1:4 title "shellcode" with lines, \
29 | "" using 1:5 title "offers" with lines, \
30 | "" using 1:6 title "downloads" with lines, \
31 | "" using 1:7 title "uniq" with lines, \
32 | "" using 1:8 title "new" with lines
33 |
34 | set origin 0.0,0.0
35 | plot "{filename_data}" using 1:2 title "hosts" with lines
36 |
37 | unset multiplot
38 |
--------------------------------------------------------------------------------
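Both example files above are plain Python `str.format()` templates: `plot_overview_data()` reads one in and substitutes `{filename_output}`, `{range_start}`, `{range_stop}` and `{filename_data}` before handing the result to gnuplot. A small sketch of that substitution step, using a trimmed-down template and made-up paths:

```python
# How plot_overview_data() fills the placeholders in a gnuplot template.
# The template is a shortened stand-in; the paths and dates are made up.
tpl = ('set output "{filename_output}"\n'
       'set xrange ["{range_start}":"{range_stop}"]\n'
       'plot "{filename_data}" using 1:2 title "hosts" with lines\n')

cmd = tpl.format(
    filename_output="/tmp/dionaea-overview.svg",
    range_start="2014-01-01",
    range_stop="2014-01-31",
    filename_data="/tmp/overview.data",
)
print(cmd)
```

Note that a custom template passed via `--gnuplot-tpl` must not contain literal `{` or `}` characters unless they are doubled (`{{`, `}}`), or `str.format()` will raise; the stock gnuplot directives above contain none.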
/dionaea/modules_python_util/logsql2postgres.py:
--------------------------------------------------------------------------------
1 | #!/opt/dionaea/bin/python3
2 |
3 | # sudo su postgres
4 | # createdb --owner=xmpp logsql
5 | # psql -U xmpp logsql < modules/python/util/xmpp/pg_schema.sql
6 |
7 | import sys
8 | import sqlite3
9 | import postgresql
10 | import postgresql.driver as pg_driver
11 | import optparse
12 |
13 | def copy(name, lite, pg, src, dst):
14 | print("[+] {0}".format(name))
15 |
16 | pg.execute("DELETE FROM {0}".format(dst['table']))
17 | offset = 0
18 | limit = 10000
19 | insert = pg.prepare(dst['query'])
20 |
21 | while True:
22 | result = lite.execute(src['query'].format(limit, offset))
23 | r = 0
24 | result = result.fetchall()
25 | r = len(result)
26 | insert.load_rows(result)
27 | # print("{0} {1} {2}".format(offset, limit, r))
28 | if r != limit:
29 | # update the sequence if we inserted rows
30 | if offset + r != 0:
31 | pg.execute("SELECT setval('{0}',{1})".format(dst['seq'], offset + r))
32 | break
33 | offset += limit
34 |
35 |
36 | cando = {
37 | 'connections' : ({
38 | # FIXME postgres does not know connection_type pending
39 | # connection_type is an enum, so this may get messy
40 | 'query' : """SELECT
41 | connection,
42 | connection_type,
43 | connection_transport,
44 | datetime(connection_timestamp, 'unixepoch') || ' UTC' AS connection_timestamp,
45 | connection_parent,
46 | connection_root,
47 | ifnull(nullif(local_host,''),'0.0.0.0'),
48 | local_port,
49 | ifnull(nullif(remote_host,''),'0.0.0.0'),
50 | remote_port,
51 | connection_protocol,
52 | remote_hostname FROM connections WHERE connection_type != 'pending' LIMIT {:d} OFFSET {:d} \n"""
53 | },
54 | {
55 | 'table' : 'dionaea.connections',
56 | 'seq' : "dionaea.connections_connection_seq",
57 | 'query' : """INSERT INTO dionaea.connections
58 | (connection,
59 | connection_type,
60 | connection_transport,
61 | connection_timestamp,
62 | connection_parent,
63 | connection_root,
64 | local_host,
65 | local_port,
66 | remote_host,
67 | remote_port,
68 | connection_protocol,
69 | remote_hostname)
70 | VALUES
71 | ($1,$2,$3,$4::text::timestamp,$5,$6,$7::text::inet,$8,$9::text::inet,$10,$11,$12)""",
72 | }),
73 |
74 | 'dcerpcbinds': ({
75 | 'query' : """SELECT
76 | dcerpcbind,
77 | connection,
78 | dcerpcbind_uuid,
79 | dcerpcbind_transfersyntax FROM dcerpcbinds LIMIT {:d} OFFSET {:d} \n"""
80 | },
81 | {
82 | 'table' : 'dionaea.dcerpcbinds',
83 | 'seq' : "dionaea.dcerpcbinds_dcerpcbind_seq",
84 | 'query' : """INSERT INTO dionaea.dcerpcbinds
85 | (dcerpcbind,
86 | connection,
87 | dcerpcbind_uuid,
88 | dcerpcbind_transfersyntax)
89 | VALUES
90 | ($1,$2,$3,$4)""",
91 | }),
92 |
93 | 'dcerpcrequests' : ({
94 | 'query' : """SELECT
95 | dcerpcrequest,
96 | connection,
97 | dcerpcrequest_uuid,
98 | dcerpcrequest_opnum FROM dcerpcrequests LIMIT {:d} OFFSET {:d}"""
99 | },
100 | { 'table' : 'dionaea.dcerpcrequests',
101 | 'seq' : "dionaea.dcerpcrequests_dcerpcrequest_seq",
102 | 'query' : """INSERT INTO dionaea.dcerpcrequests
103 | (dcerpcrequest,
104 | connection,
105 | dcerpcrequest_uuid,
106 | dcerpcrequest_opnum)
107 | VALUES
108 | ($1,$2,$3,$4)""",
109 | }),
110 |
111 | 'dcerpcservices' : ({
112 | 'query' : """SELECT
113 | dcerpcservice,
114 | dcerpcservice_uuid,
115 | dcerpcservice_name FROM dcerpcservices LIMIT {:d} OFFSET {:d}"""
116 | },
117 | { 'table' : 'dionaea.dcerpcservices',
118 | 'seq' : "dionaea.dcerpcservices_dcerpcservice_seq",
119 | 'query' : """INSERT INTO dionaea.dcerpcservices
120 | (dcerpcservice,
121 | dcerpcservice_uuid,
122 | dcerpcservice_name)
123 | VALUES
124 | ($1,$2,$3)""",
125 | }),
126 |
127 | 'dcerpcserviceops' : ({
128 | 'query' : """SELECT
129 | dcerpcserviceop,
130 | dcerpcservice,
131 | dcerpcserviceop_name,
132 | dcerpcserviceop_opnum,
133 | dcerpcserviceop_vuln
134 | FROM dcerpcserviceops LIMIT {:d} OFFSET {:d}"""
135 | },
136 | { 'table' : 'dionaea.dcerpcserviceops',
137 | 'seq' : "dionaea.dcerpcserviceops_dcerpcserviceop_seq",
138 | 'query' : """INSERT INTO dionaea.dcerpcserviceops
139 | (dcerpcserviceop,
140 | dcerpcservice,
141 | dcerpcserviceop_name,
142 | dcerpcserviceop_opnum,
143 | dcerpcserviceop_vuln)
144 | VALUES
145 | ($1,$2,$3,$4,$5)""",
146 | }),
147 |
148 | 'downloads' : ({
149 | 'query' : """SELECT
150 | download,
151 | connection,
152 | download_md5_hash,
153 | download_url FROM downloads LIMIT {:d} OFFSET {:d}"""
154 | },
155 | { 'table' : 'dionaea.downloads',
156 |         'seq' : "dionaea.downloads_download_seq",
157 | 'query' : """INSERT INTO dionaea.downloads
158 | (download,
159 | connection,
160 | download_md5_hash,
161 | download_url)
162 | VALUES
163 | ($1,$2,$3,$4)""",
164 | }),
165 |
166 | 'emu_profiles' : ({
167 | 'query' : """SELECT
168 | emu_profile,
169 | connection,
170 | emu_profile_json FROM emu_profiles LIMIT {:d} OFFSET {:d}"""
171 | },
172 | { 'table' : 'dionaea.emu_profiles',
173 | 'seq' : "dionaea.emu_profiles_emu_profile_seq",
174 | 'query' : """INSERT INTO dionaea.emu_profiles
175 | (emu_profile,
176 | connection,
177 | emu_profile_json)
178 | VALUES
179 | ($1,$2,$3)""",
180 | }),
181 |
182 | 'emu_services' : ({
183 | 'query' : """SELECT
184 | emu_serivce,
185 | connection,
186 | emu_service_url FROM emu_services LIMIT {:d} OFFSET {:d}"""
187 | },
188 | { 'table' : 'dionaea.emu_services',
189 | 'seq' : "dionaea.emu_services_emu_service_seq",
190 | 'query' : """INSERT INTO dionaea.emu_services
191 | (emu_service,
192 | connection,
193 | emu_service_url)
194 | VALUES
195 | ($1,$2,$3)""",
196 | }),
197 |
198 | 'offers' : ({
199 | 'query' : """SELECT
200 | offer,
201 | connection,
202 | offer_url FROM offers LIMIT {:d} OFFSET {:d}"""
203 | },
204 | { 'table' : 'dionaea.offers',
205 | 'seq' : "dionaea.offers_offer_seq",
206 | 'query' : """INSERT INTO dionaea.offers
207 | (offer,
208 | connection,
209 | offer_url)
210 | VALUES
211 | ($1,$2,$3)""",
212 | }),
213 |
214 | 'p0fs' : (
215 | { 'query' : """SELECT
216 | p0f,
217 | connection,
218 | p0f_genre,
219 | p0f_link,
220 | p0f_detail,
221 | p0f_uptime,
222 | p0f_tos,
223 | p0f_dist,
224 | p0f_nat,
225 | p0f_fw FROM p0fs LIMIT {:d} OFFSET {:d}"""
226 | },
227 | { 'table' : 'dionaea.p0fs',
228 | 'seq' : "dionaea.p0fs_p0f_seq",
229 | 'query' : """INSERT INTO dionaea.p0fs
230 | ( p0f,
231 | connection,
232 | p0f_genre,
233 | p0f_link,
234 | p0f_detail,
235 | p0f_uptime,
236 | p0f_tos,
237 | p0f_dist,
238 | p0f_nat,
239 | p0f_fw)
240 | VALUES
241 | ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)""",
242 | }),
243 |
244 | 'virustotals': (
245 | { 'query' : """SELECT
246 | virustotal,
247 | virustotal_md5_hash,
248 | datetime(virustotal_timestamp, 'unixepoch') || ' UTC' AS virustotal_timestamp,
249 | virustotal_permalink
250 | FROM virustotals LIMIT {:d} OFFSET {:d}"""
251 | },
252 | { 'table' : 'dionaea.virustotals',
253 | 'seq' : "dionaea.virustotals_virustotal_seq",
254 | 'query' : """INSERT INTO dionaea.virustotals
255 | (
256 | virustotal,
257 | virustotal_md5_hash,
258 | virustotal_timestamp,
259 | virustotal_permalink
260 | )
261 | VALUES
262 | ($1,$2,$3::text::timestamptz,$4)""",
263 | }),
264 |
265 | 'virustotalscans': (
266 | { 'query' : """SELECT
267 | virustotalscan,
268 | virustotal,
269 | virustotalscan_scanner,
270 | nullif(virustotalscan_result,'')
271 | FROM virustotalscans LIMIT {:d} OFFSET {:d}"""
272 | },
273 | { 'table' : 'dionaea.virustotalscans',
274 | 'seq' : "dionaea.virustotalscans_virustotalscan_seq",
275 | 'query' : """INSERT INTO dionaea.virustotalscans
276 | (
277 | virustotalscan,
278 | virustotal,
279 | virustotalscan_scanner,
280 | virustotalscan_result
281 | )
282 | VALUES
283 | ($1,$2,$3,$4)""",
284 | }),
285 |
286 | # x
287 | 'mssql_fingerprints': (
288 | { 'query' : """SELECT
289 | mssql_fingerprint,
290 | connection,
291 | mssql_fingerprint_hostname,
292 | mssql_fingerprint_appname,
293 | mssql_fingerprint_cltintname FROM mssql_fingerprints LIMIT {:d} OFFSET {:d}"""
294 | },
295 | { 'table' : 'dionaea.mssql_fingerprints',
296 | 'seq' : "dionaea.mssql_fingerprints_mssql_fingerprint_seq",
297 | 'query' : """INSERT INTO dionaea.mssql_fingerprints
298 | (
299 | mssql_fingerprint,
300 | connection,
301 | mssql_fingerprint_hostname,
302 | mssql_fingerprint_appname,
303 | mssql_fingerprint_cltintname
304 | )
305 | VALUES
306 | ($1,$2,$3,$4,$5)""",
307 | }),
308 |
309 |
310 | 'mssql_commands': (
311 | { 'query' : """SELECT
312 | mssql_command,
313 | connection,
314 | mssql_command_status,
315 | mssql_command_cmd FROM mssql_commands LIMIT {:d} OFFSET {:d}"""
316 | },
317 | { 'table' : 'dionaea.mssql_commands',
318 | 'seq' : "dionaea.mssql_commands_mssql_command_seq",
319 | 'query' : """INSERT INTO dionaea.mssql_commands
320 | (
321 | mssql_command,
322 | connection,
323 | mssql_command_status,
324 | mssql_command_cmd
325 | )
326 | VALUES
327 | ($1,$2,$3,$4)""",
328 | }),
329 |
330 | 'logins': (
331 | { 'query' : """SELECT
332 | login,
333 | connection,
334 | login_username,
335 | login_password FROM logins LIMIT {:d} OFFSET {:d}"""
336 | },
337 | { 'table' : 'dionaea.logins',
338 | 'seq' : "dionaea.logins_login_seq",
339 | 'query' : """INSERT INTO dionaea.logins
340 | (
341 | login,
342 | connection,
343 | login_username,
344 | login_password
345 | )
346 | VALUES
347 | ($1,$2,$3,$4)""",
348 | })
349 | }
350 |
351 | if __name__ == "__main__":
352 | p = optparse.OptionParser()
353 | p.add_option('-s', '--database-host', dest='database_host', help='localhost:5432', type="string", action="store")
354 | p.add_option('-d', '--database', dest='database', help='for example xmpp', type="string", action="store")
355 | p.add_option('-u', '--database-user', dest='database_user', help='for example xmpp', type="string", action="store")
356 | p.add_option('-p', '--database-password', dest='database_password', help='the database users password', type="string", action="store")
357 | p.add_option('-f', '--sqlite-file', dest='sqlite_file', help='path to sqlite db', type="string", action="store")
358 | (options, args) = p.parse_args()
359 |
360 | if len(args) == 0:
361 | print("use {} as args".format(' '.join(cando.keys())))
362 | sys.exit(1)
363 | db = {}
364 | db['sqlite'] = {}
365 | db['sqlite']['dbh'] = sqlite3.connect(options.sqlite_file)
366 | db['sqlite']['cursor'] = db['sqlite']['dbh'].cursor()
367 |
368 | db['pg'] = {}
369 | db['pg']['dbh'] = pg_driver.connect(
370 | user = options.database_user,
371 | password = options.database_password,
372 | database = options.database,
373 | host = options.database_host,
374 | port = 5432)
375 |
376 | for i in args:
377 | if i in cando:
378 | copy(i,
379 | db['sqlite']['cursor'],
380 | db['pg']['dbh'],
381 | cando[i][0],
382 | cando[i][1])
383 | # db['pg']['dbh'].commit()
384 |
385 |
386 |
--------------------------------------------------------------------------------
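The `copy()` function above pages through the SQLite source in `LIMIT`/`OFFSET` batches and stops on the first short page (fewer rows than `limit`), which signals the source is exhausted. A self-contained sketch of that loop, with a second in-memory SQLite database standing in for the PostgreSQL target the real script uses:

```python
# Sketch of the LIMIT/OFFSET paging loop in copy(). A second in-memory
# SQLite database replaces PostgreSQL; the table and row count are made up.
import sqlite3

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE connections (connection INTEGER)")
src.executemany("INSERT INTO connections VALUES (?)", [(i,) for i in range(25)])
dst.execute("CREATE TABLE connections (connection INTEGER)")

limit, offset = 10, 0
while True:
    # str.format into the SQL is safe here only because limit/offset are ints.
    rows = src.execute(
        "SELECT connection FROM connections LIMIT {:d} OFFSET {:d}".format(limit, offset)
    ).fetchall()
    dst.executemany("INSERT INTO connections VALUES (?)", rows)
    if len(rows) != limit:  # short page: source exhausted
        break
    offset += limit

print(dst.execute("SELECT count(*) FROM connections").fetchone()[0])  # 25
```

The real script additionally resets the target's sequence with `setval()` after the last page, so later inserts on the PostgreSQL side continue from the highest copied id.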
/dionaea/modules_python_util/readlogsqltree.py:
--------------------------------------------------------------------------------
1 | #!/opt/dionaea/bin/python3.1
2 |
3 | from optparse import OptionParser
4 | import sqlite3
5 | import json
6 | import sys
7 |
8 | def resolve_result(resultcursor):
9 | names = [resultcursor.description[x][0] for x in range(len(resultcursor.description))]
10 | resolvedresult = [ dict(zip(names, i)) for i in resultcursor]
11 | return resolvedresult
12 |
13 | def print_offers(cursor, connection, indent):
14 | r = cursor.execute("SELECT * from offers WHERE connection = ?", (connection, ))
15 | offers = resolve_result(r)
16 | for offer in offers:
17 | print("{:s} offer: {:s}".format(' ' * indent, offer['offer_url']))
18 |
19 | def print_downloads(cursor, connection, indent):
20 | r = cursor.execute("SELECT * from downloads WHERE connection = ?", (connection, ))
21 | downloads = resolve_result(r)
22 | for download in downloads:
23 | print("{:s} download: {:s} {:s}".format(
24 | ' ' * indent, download['download_md5_hash'],
25 | download['download_url']))
26 | print_virustotals(cursor, download['download_md5_hash'], indent + 2 )
27 |
28 | def print_virustotals(cursor, md5_hash, indent):
29 | r = cursor.execute("""SELECT datetime(virustotal_timestamp, 'unixepoch', 'localtime') as timestamp, virustotal_permalink, COUNT(*) AS scanners,
30 | (
31 | SELECT COUNT(virustotalscan)
32 | FROM virustotals
33 | NATURAL JOIN virustotalscans
34 | WHERE virustotal_md5_hash = ?
35 | AND virustotalscan_result IS NOT NULL ) AS detected
36 | FROM virustotals NATURAL JOIN virustotalscans WHERE virustotal_md5_hash = ?""", (md5_hash, md5_hash))
37 | virustotals = resolve_result(r)
38 | for vt in virustotals:
39 | if vt['timestamp'] is None:
40 | continue
41 | print("{:s} virustotal {} {}/{} ({:.0f}%) {}".format(' ' * indent, vt['timestamp'], vt['detected'], vt['scanners'], vt['detected']/vt['scanners']*100, vt['virustotal_permalink']))
42 |
43 |
44 | r = cursor.execute("SELECT DISTINCT virustotalscan_result from virustotals NATURAL JOIN virustotalscans WHERE virustotal_md5_hash = ? AND virustotalscan_result IS NOT NULL", (md5_hash, ))
45 | virustotals = resolve_result(r)
46 | print("{:s} names ".format(' ' * (indent+2)), end='')
47 | for vt in virustotals:
48 | print("'{}' ".format(vt['virustotalscan_result']), end='')
49 | print("")
50 |
51 | def print_profiles(cursor, connection, indent):
52 | r = cursor.execute("SELECT * from emu_profiles WHERE connection = ?", (connection, ))
53 | profiles = resolve_result(r)
54 | for profile in profiles:
55 | print("{:s} profile: {}".format(
56 | ' ' * indent, json.loads(profile['emu_profile_json'])))
57 |
58 | def print_services(cursor, connection, indent):
59 | r = cursor.execute("SELECT * from emu_services WHERE connection = ?", (connection, ))
60 | services = resolve_result(r)
61 | for service in services:
62 | print("{:s} service: {:s}".format(
63 | ' ' * indent, service['emu_service_url']))
64 |
65 | def print_p0fs(cursor, connection, indent):
66 | r = cursor.execute("SELECT * from p0fs WHERE connection = ?", (connection, ))
67 | p0fs = resolve_result(r)
68 | for p0f in p0fs:
69 | print("{:s} p0f: genre:'{}' detail:'{}' uptime:'{}' tos:'{}' dist:'{}' nat:'{}' fw:'{}'".format(
70 | ' ' * indent, p0f['p0f_genre'], p0f['p0f_detail'],
71 | p0f['p0f_uptime'], p0f['p0f_tos'], p0f['p0f_dist'], p0f['p0f_nat'],
72 | p0f['p0f_fw']))
73 |
74 | def print_dcerpcbinds(cursor, connection, indent):
75 | r = cursor.execute("""
76 | SELECT DISTINCT
77 | dcerpcbind_uuid,
78 | dcerpcservice_name,
79 | dcerpcbind_transfersyntax
80 | FROM
81 | dcerpcbinds
82 | LEFT OUTER JOIN dcerpcservices ON (dcerpcbind_uuid = dcerpcservice_uuid)
83 | WHERE
84 | connection = ?""", (connection, ))
85 | dcerpcbinds = resolve_result(r)
86 | for dcerpcbind in dcerpcbinds:
87 | print("{:s} dcerpc bind: uuid '{:s}' ({:s}) transfersyntax {:s}".format(
88 | ' ' * indent,
89 | dcerpcbind['dcerpcbind_uuid'],
90 | dcerpcbind['dcerpcservice_name'],
91 | dcerpcbind['dcerpcbind_transfersyntax']) )
92 |
93 |
94 | def print_dcerpcrequests(cursor, connection, indent):
95 | r = cursor.execute("""
96 | SELECT
97 | dcerpcrequest_uuid,
98 | dcerpcservice_name,
99 | dcerpcrequest_opnum,
100 | dcerpcserviceop_name,
101 | dcerpcserviceop_vuln
102 | FROM
103 | dcerpcrequests
104 | LEFT OUTER JOIN dcerpcservices ON (dcerpcrequest_uuid = dcerpcservice_uuid)
105 | LEFT OUTER JOIN dcerpcserviceops ON (dcerpcservices.dcerpcservice = dcerpcserviceops.dcerpcservice AND dcerpcrequest_opnum = dcerpcserviceop_opnum)
106 | WHERE
107 | connection = ?""", (connection, ))
108 | dcerpcrequests = resolve_result(r)
109 | for dcerpcrequest in dcerpcrequests:
110 | print("{:s} dcerpc request: uuid '{:s}' ({:s}) opnum {:d} ({:s} ({:s}))".format(
111 | ' ' * indent,
112 | dcerpcrequest['dcerpcrequest_uuid'],
113 | dcerpcrequest['dcerpcservice_name'],
114 | dcerpcrequest['dcerpcrequest_opnum'],
115 | dcerpcrequest['dcerpcserviceop_name'],
116 | dcerpcrequest['dcerpcserviceop_vuln']) )
117 |
118 | def print_sip_commands(cursor, connection, indent):
119 | r = cursor.execute("""
120 | SELECT
121 | sip_command,
122 | sip_command_method,
123 | sip_command_call_id,
124 | sip_command_user_agent,
125 | sip_command_allow
126 | FROM
127 | sip_commands
128 | WHERE
129 | connection = ?""", (connection, ))
130 | sipcommands = resolve_result(r)
131 | for cmd in sipcommands:
132 | print("{:s} Method:{:s}".format(
133 | ' ' * indent,
134 | cmd['sip_command_method']))
135 | print("{:s} Call-ID:{:s}".format(
136 | ' ' * indent,
137 | cmd['sip_command_call_id']))
138 | print("{:s} User-Agent:{:s}".format(
139 | ' ' * indent,
140 | cmd['sip_command_user_agent']))
141 | print_sip_addrs(cursor, cmd['sip_command'], indent+2)
142 | print_sip_vias(cursor, cmd['sip_command'], indent+2)
143 | print_sip_sdp_origins(cursor, cmd['sip_command'], indent+2)
144 | print_sip_sdp_connectiondatas(cursor, cmd['sip_command'], indent+2)
145 | print_sip_sdp_medias(cursor, cmd['sip_command'], indent+2)
146 |
147 | def print_sip_addrs(cursor, sip_command, indent):
148 | r = cursor.execute("""
149 | SELECT
150 | sip_addr_type,
151 | sip_addr_display_name,
152 | sip_addr_uri_scheme,
153 | sip_addr_uri_user,
154 | sip_addr_uri_host,
155 | sip_addr_uri_port
156 | FROM
157 | sip_addrs
158 | WHERE
159 | sip_command = ?""", (sip_command, ))
160 | addrs = resolve_result(r)
161 | for addr in addrs:
162 | print("{:s} {:s}: <{}> '{:s}:{:s}@{:s}:{}'".format(
163 | ' ' * indent,
164 | addr['sip_addr_type'],
165 | addr['sip_addr_display_name'],
166 | addr['sip_addr_uri_scheme'],
167 | addr['sip_addr_uri_user'],
168 | addr['sip_addr_uri_host'],
169 | addr['sip_addr_uri_port']))
170 |
171 | def print_sip_vias(cursor, sip_command, indent):
172 | r = cursor.execute("""
173 | SELECT
174 | sip_via_protocol,
175 | sip_via_address,
176 | sip_via_port
177 | FROM
178 | sip_vias
179 | WHERE
180 | sip_command = ?""", (sip_command, ))
181 | vias = resolve_result(r)
182 | for via in vias:
183 | print("{:s} via:'{:s}/{:s}:{}'".format(
184 | ' ' * indent,
185 | via['sip_via_protocol'],
186 | via['sip_via_address'],
187 | via['sip_via_port']))
188 |
189 | def print_sip_sdp_origins(cursor, sip_command, indent):
190 | r = cursor.execute("""
191 | SELECT
192 | sip_sdp_origin_username,
193 | sip_sdp_origin_sess_id,
194 | sip_sdp_origin_sess_version,
195 | sip_sdp_origin_nettype,
196 | sip_sdp_origin_addrtype,
197 | sip_sdp_origin_unicast_address
198 | FROM
199 | sip_sdp_origins
200 | WHERE
201 | sip_command = ?""", (sip_command, ))
202 | vias = resolve_result(r)
203 | for via in vias:
204 | print("{:s} o:'{} {} {} {} {} {}'".format(
205 | ' ' * indent,
206 | via['sip_sdp_origin_username'],
207 | via['sip_sdp_origin_sess_id'],
208 | via['sip_sdp_origin_sess_version'],
209 | via['sip_sdp_origin_nettype'],
210 | via['sip_sdp_origin_addrtype'],
211 | via['sip_sdp_origin_unicast_address']))
212 |
213 | def print_sip_sdp_connectiondatas(cursor, sip_command, indent):
214 | r = cursor.execute("""
215 | SELECT
216 | sip_sdp_connectiondata_nettype,
217 | sip_sdp_connectiondata_addrtype,
218 | sip_sdp_connectiondata_connection_address,
219 | sip_sdp_connectiondata_ttl,
220 | sip_sdp_connectiondata_number_of_addresses
221 | FROM
222 | sip_sdp_connectiondatas
223 | WHERE
224 | sip_command = ?""", (sip_command, ))
225 | vias = resolve_result(r)
226 | for via in vias:
227 | print("{:s} c:'{} {} {} {} {}'".format(
228 | ' ' * indent,
229 | via['sip_sdp_connectiondata_nettype'],
230 | via['sip_sdp_connectiondata_addrtype'],
231 | via['sip_sdp_connectiondata_connection_address'],
232 | via['sip_sdp_connectiondata_ttl'],
233 | via['sip_sdp_connectiondata_number_of_addresses']))
234 |
235 | def print_sip_sdp_medias(cursor, sip_command, indent):
236 | r = cursor.execute("""
237 | SELECT
238 | sip_sdp_media_media,
239 | sip_sdp_media_port,
240 | sip_sdp_media_number_of_ports,
241 | sip_sdp_media_proto
242 | FROM
243 | sip_sdp_medias
244 | WHERE
245 | sip_command = ?""", (sip_command, ))
246 | vias = resolve_result(r)
247 | for via in vias:
248 | print("{:s} m:'{} {} {} {}'".format(
249 | ' ' * indent,
250 | via['sip_sdp_media_media'],
251 | via['sip_sdp_media_port'],
252 | via['sip_sdp_media_number_of_ports'],
253 | via['sip_sdp_media_proto']))
254 |
255 | def print_logins(cursor, connection, indent):
256 | r = cursor.execute("""
257 | SELECT
258 | login_username,
259 | login_password
260 | FROM
261 | logins
262 | WHERE connection = ?""", (connection, ))
263 | logins = resolve_result(r)
264 | for login in logins:
265 | print("{:s} login - user:'{:s}' password:'{:s}'".format(
266 | ' ' * indent,
267 | login['login_username'],
268 | login['login_password']))
269 |
270 | def print_mssql_fingerprints(cursor, connection, indent):
271 | r = cursor.execute("""
272 | SELECT
273 | mssql_fingerprint_hostname,
274 | mssql_fingerprint_appname,
275 | mssql_fingerprint_cltintname
276 | FROM
277 | mssql_fingerprints
278 | WHERE connection = ?""", (connection, ))
279 | fingerprints = resolve_result(r)
280 | for fingerprint in fingerprints:
281 | print("{:s} mssql fingerprint - hostname:'{:s}' cltintname:'{:s}' appname:'{:s}'".format(
282 | ' ' * indent,
283 | fingerprint['mssql_fingerprint_hostname'],
284 | fingerprint['mssql_fingerprint_appname'],
285 | fingerprint['mssql_fingerprint_cltintname']))
286 |
287 | def print_mssql_commands(cursor, connection, indent):
288 | r = cursor.execute("""
289 | SELECT
290 | mssql_command_status,
291 | mssql_command_cmd
292 | FROM
293 | mssql_commands
294 | WHERE connection = ?""", (connection, ))
295 | commands = resolve_result(r)
296 | for cmd in commands:
297 | print("{:s} mssql command - status:{:s} cmd:'{:s}'".format(
298 | ' ' * indent,
299 | cmd['mssql_command_status'],
300 | cmd['mssql_command_cmd']))
301 |
302 |
303 | def print_mysql_commands(cursor, connection, indent):
304 | r = cursor.execute("""
305 | SELECT
306 | mysql_command,
307 | mysql_command_cmd,
308 | mysql_command_op_name
309 | FROM
310 | mysql_commands
311 | LEFT OUTER JOIN mysql_command_ops USING(mysql_command_cmd)
312 | WHERE
313 | connection = ?""", (connection, ))
314 | commands = resolve_result(r)
315 | for cmd in commands:
316 | print("{:s} mysql command (0x{:02x}) {:s}".format(
317 | ' ' * indent,
318 | cmd['mysql_command_cmd'],
319 | cmd['mysql_command_op_name']
320 | ), end='')
321 | # args
322 | r = cursor.execute("""
323 | SELECT
324 | mysql_command_arg_data
325 | FROM
326 | mysql_command_args
327 | WHERE
328 | mysql_command = ?
329 | ORDER BY
330 | mysql_command_arg_index ASC """, (cmd['mysql_command'], ))
331 | args = resolve_result(r)
332 | print("({:s})".format(",".join([ "'%s'" % arg['mysql_command_arg_data'] for arg in args])))
333 |
334 |
335 | def print_connection(c, indent):
336 | indentStr = ' ' * (indent + 1)
337 |
338 | if c['connection_type'] in ['accept', 'reject', 'pending']:
339 | print(indentStr + 'connection {:d} {:s} {:s} {:s} {:s}:{:d} <- {:s}:{:d}'.format(
340 | c['connection'], c['connection_protocol'], c['connection_transport'],
341 | c['connection_type'], c['local_host'], c['local_port'],
342 | c['remote_host'], c['remote_port']), end='')
343 | elif c['connection_type'] == 'connect':
344 | print(indentStr + 'connection {:d} {:s} {:s} {:s} {:s}:{:d} -> {:s}/{:s}:{:d}'.format(
345 | c['connection'], c['connection_protocol'],
346 | c['connection_transport'], c['connection_type'], c['local_host'],
347 | c['local_port'], c['remote_hostname'], c['remote_host'],
348 | c['remote_port']), end='')
349 | elif c['connection_type'] == 'listen':
350 | print(indentStr + 'connection {:d} {:s} {:s} {:s} {:s}:{:d}'.format(
351 | c['connection'], c['connection_protocol'],
352 | c['connection_transport'], c['connection_type'], c['local_host'],
353 | c['local_port']), end='')
354 |
355 | print(' ({} {})'.format(c['connection_root'], c['connection_parent']))
356 |
357 | def recursive_print(cursor, connection, indent):
358 | result = cursor.execute("SELECT * from connections WHERE connection_parent = ?", (connection, ))
359 | connections = resolve_result(result)
360 | for c in connections:
361 | if c['connection'] == connection:
362 | continue
363 | print_connection(c, indent+1)
364 | print_p0fs(cursor, c['connection'], indent+2)
365 | print_dcerpcbinds(cursor, c['connection'], indent+2)
366 | print_dcerpcrequests(cursor, c['connection'], indent+2)
367 | print_profiles(cursor, c['connection'], indent+2)
368 | print_offers(cursor, c['connection'], indent+2)
369 | print_downloads(cursor, c['connection'], indent+2)
370 | print_services(cursor, c['connection'], indent+2)
371 | print_sip_commands(cursor, c['connection'], indent+2)
372 | recursive_print(cursor, c['connection'], indent+2)
373 |
374 | def print_db(options, args):
375 | dbpath = '/opt/dionaea/var/dionaea/logsql.sqlite'
376 | if len(args) >= 1:
377 | dbpath = args[0]
378 | print("using database located at {0}".format(dbpath))
379 | dbh = sqlite3.connect(dbpath)
380 | cursor = dbh.cursor()
381 |
382 | offset = 0
383 | limit = 1000
384 |
385 | query = """
386 | SELECT DISTINCT
387 | c.connection AS connection,
388 | connection_root,
389 | connection_parent,
390 | connection_type,
391 | connection_protocol,
392 | connection_transport,
393 | datetime(connection_timestamp, 'unixepoch', 'localtime') AS connection_timestamp,
394 | local_host,
395 | local_port,
396 | remote_host,
397 | remote_hostname,
398 | remote_port
399 | FROM
400 | connections AS c
401 | LEFT OUTER JOIN offers ON (offers.connection = c.connection)
402 | LEFT OUTER JOIN downloads ON (downloads.connection = c.connection)
403 | LEFT OUTER JOIN dcerpcbinds ON (dcerpcbinds.connection = c.connection)
404 | LEFT OUTER JOIN dcerpcrequests ON (dcerpcrequests.connection = c.connection)
405 | WHERE
406 | (c.connection_root = c.connection OR c.connection_root IS NULL)
407 | """
408 |
409 | if options.remote_host:
410 | query = query + "\tAND remote_host = '{:s}' \n".format(options.remote_host)
411 |
412 | if options.connection:
413 | query = query + "\tAND c.connection = {:d} \n".format(options.connection)
414 |
415 | if options.in_offer_url:
416 | query = query + "\tAND offer_url LIKE '%{:s}%' \n".format(options.in_offer_url)
417 |
418 | if options.in_download_url:
419 | query = query + "\tAND download_url LIKE '%{:s}%' \n".format(options.in_download_url)
420 |
421 | if options.time_from:
422 | query = query + "\tAND connection_timestamp > {:s} \n".format(options.time_from)
423 |
424 | if options.time_to:
425 | query = query + "\tAND connection_timestamp < {:s} \n".format(options.time_to)
426 |
427 | if options.uuid:
428 | query = query + "\tAND dcerpcbind_uuid = '{:s}' \n".format(options.uuid)
429 |
430 | if options.opnum:
431 | query = query + "\tAND dcerpcrequest_opnum = {:s} \n".format(options.opnum)
432 |
433 | if options.protocol:
434 | query = query + "\tAND connection_protocol = '{:s}' \n".format(options.protocol)
435 |
436 | if options.md5sum:
437 | query = query + "\tAND download_md5_hash = '{:s}' \n".format(options.md5sum)
438 |
439 | if options.type:
440 | query = query + "\tAND connection_type = '{:s}' \n".format(options.type)
441 |
442 | if options.query:
443 | print(query)
444 | return
445 |
446 | while True:
447 | lquery = query + "\t LIMIT {:d} OFFSET {:d} \n".format(limit, offset)
448 | result = cursor.execute(lquery)
449 | connections = resolve_result(result)
450 | # print(connections)
451 | for c in connections:
452 | connection = c['connection']
453 | print("{:s}".format(c['connection_timestamp']))
454 | print_connection(c, 1)
455 | print_p0fs(cursor, c['connection'], 2)
456 | print_dcerpcbinds(cursor, c['connection'], 2)
457 | print_dcerpcrequests(cursor, c['connection'], 2)
458 | print_profiles(cursor, c['connection'], 2)
459 | print_offers(cursor, c['connection'], 2)
460 | print_downloads(cursor, c['connection'], 2)
461 | print_services(cursor, c['connection'], 2)
462 | print_logins(cursor, c['connection'], 2)
463 | print_mssql_fingerprints(cursor, c['connection'], 2)
464 | print_mssql_commands(cursor, c['connection'], 2)
465 | print_mysql_commands(cursor, c['connection'], 2)
466 | print_sip_commands(cursor, c['connection'], 2)
467 | recursive_print(cursor, c['connection'], 2)
468 |
469 | offset += limit
470 | if len(connections) != limit:
471 | break
472 |
473 | if __name__ == "__main__":
474 | parser = OptionParser()
475 | parser.add_option("-r", "--remote-host", action="store", type="string", dest="remote_host")
476 | parser.add_option("-o", "--in-offer-url", action="store", type="string", dest="in_offer_url")
477 | parser.add_option("-d", "--in-download-url", action="store", type="string", dest="in_download_url")
478 | parser.add_option("-c", "--connection", action="store", type="int", dest="connection")
479 | parser.add_option("-q", "--query-only", action="store_true", dest="query", default=False)
480 | parser.add_option("-t", "--time-from", action="store", type="string", dest="time_from")
481 | parser.add_option("-T", "--time-to", action="store", type="string", dest="time_to")
482 | parser.add_option("-u", "--dcerpcbind-uuid", action="store", type="string", dest="uuid")
483 | parser.add_option("-p", "--dcerpcrequest-opnum", action="store", type="string", dest="opnum")
484 | parser.add_option("-P", "--protocol", action="store", type="string", dest="protocol")
485 | parser.add_option("-m", "--downloads-md5sum", action="store", type="string", dest="md5sum")
486 | parser.add_option("-y", "--connection-type", action="store", type="string", dest="type")
487 | (options, args) = parser.parse_args()
488 | print_db(options, args)
489 |
--------------------------------------------------------------------------------
/dionaea/modules_python_util/retry.py:
--------------------------------------------------------------------------------
1 | #!/opt/dionaea/bin/python3.1
2 |
3 | from optparse import OptionParser
4 | import socket
5 | import os
6 | import shutil
7 | import sys
8 | import time
9 |
10 | parser = OptionParser()
11 | parser.add_option("-f", "--file", action="store", type="string", dest="filename")
12 | parser.add_option("-H", "--host", action="store", type="string", dest="host")
13 | parser.add_option("-p", "--port", action="store", type="int", dest="port")
14 | parser.add_option("-s", "--send", action="store_true", dest="send", default=False)
15 | parser.add_option("-r", "--recv", action="store_true", dest="recv", default=False)
16 | parser.add_option("-t", "--tempfile", action="store", type="string", dest="tempfile", default="retrystream")
17 | parser.add_option("-u", "--udp", action="store_true", dest="udp", default=False)
18 | parser.add_option("-v", "--verbose", action="store_true", dest="verbose", default=False)
19 | (options, args) = parser.parse_args()
20 |
21 | if os.path.exists(options.tempfile + ".py"):
22 |     os.unlink(options.tempfile + ".py")
23 | shutil.copy(options.filename, options.tempfile + ".py")
24 |
25 | sys.path.append(".")
26 | import_string = "from " + options.tempfile + " import stream"
27 | exec(import_string)
28 |
29 | print("doing " + options.filename)
30 | if options.send:
31 | if options.udp == False:
32 | s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
33 | else:
34 | s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
35 |
36 | s.connect((options.host, options.port))
37 |
38 | for i in stream:
39 | if i[0] == 'in':
40 | r = 0
41 | if options.send == True:
42 | r = s.send(i[1])
43 | if options.verbose:
44 | print('send %i of %i bytes' % (r, len(i[1])))
45 | if i[0] == 'out':
46 | x = ""
47 | if options.recv == True:
48 | x = s.recv(len(i[1]))
49 | if options.verbose:
50 | print('recv %i of %i bytes' % ( len(x), len(i[1])) )
51 | time.sleep(1)
52 |
53 | time.sleep(1)
54 |
--------------------------------------------------------------------------------
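retry.py above replays a recorded exchange: it imports a `stream` object from the file given with `-f` and walks it, sending the `'in'` payloads and reading back the `'out'` ones. A minimal sketch of what such a stream module could look like, inferred from how retry.py indexes `i[0]` and `i[1]` (the request/response bytes here are invented for illustration):

```python
# Hypothetical stream module for retry.py: a top-level `stream` list of
# (direction, payload) tuples. 'in' entries are sent to the target host,
# 'out' entries are lengths to recv() against the target's reply.
stream = [
    ('in',  b'GET / HTTP/1.0\r\n\r\n'),
    ('out', b'HTTP/1.0 200 OK\r\n\r\n'),
]
```

Saved as e.g. `sample.py`, this would be replayed with `retry.py -f sample.py -H <host> -p <port> -s -r`.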
/dionaea/modules_python_util/updateccs.py:
--------------------------------------------------------------------------------
1 | #!/opt/dionaea/bin/python3
2 | #
3 | #
4 | # Based on:
5 | # gencc: A simple program to generate credit card numbers that pass the MOD 10 check
6 | # (Luhn formula).
7 | # Useful for testing e-commerce sites during development.
8 | #
9 | # Copyright 2003 Graham King
10 | #
11 | # This program is free software; you can redistribute it and/or modify
12 | # it under the terms of the GNU General Public License as published by
13 | # the Free Software Foundation; either version 2 of the License, or
14 | # (at your option) any later version.
15 | #
16 | # This program is distributed in the hope that it will be useful,
17 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
18 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 | # GNU General Public License for more details.
20 | #
21 | # You should have received a copy of the GNU General Public License
22 | # along with this program; if not, write to the Free Software
23 | # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 | #
25 | # http://www.darkcoding.net/credit-card-generator/
26 | #
27 |
28 | from random import Random
29 | import sys
30 | import copy
31 | import sqlite3
32 | import argparse
33 |
34 | visaPrefixList = [ ['4', '5', '3', '9'],
35 | ['4', '5', '5', '6'],
36 | ['4', '9', '1', '6'],
37 | ['4', '5', '3', '2'],
38 | ['4', '9', '2', '9'],
39 | ['4', '0', '2', '4', '0', '0', '7', '1'],
40 | ['4', '4', '8', '6'],
41 | ['4', '7', '1', '6'],
42 | ['4'] ]
43 |
44 | mastercardPrefixList = [ ['5', '1'],
45 | ['5', '2'],
46 | ['5', '3'],
47 | ['5', '4'],
48 | ['5', '5'] ]
49 |
50 | amexPrefixList = [ ['3', '4'],
51 | ['3', '7'] ]
52 |
53 | discoverPrefixList = [ ['6', '0', '1', '1'] ]
54 |
55 | dinersPrefixList = [ ['3', '0', '0'],
56 | ['3', '0', '1'],
57 | ['3', '0', '2'],
58 | ['3', '0', '3'],
59 | ['3', '6'],
60 | ['3', '8'] ]
61 |
62 | enRoutePrefixList = [ ['2', '0', '1', '4'],
63 | ['2', '1', '4', '9'] ]
64 |
65 | jcbPrefixList16 = [ ['3', '0', '8', '8'],
66 | ['3', '0', '9', '6'],
67 | ['3', '1', '1', '2'],
68 | ['3', '1', '5', '8'],
69 | ['3', '3', '3', '7'],
70 | ['3', '5', '2', '8'] ]
71 |
72 | jcbPrefixList15 = [ ['2', '1', '0', '0'],
73 | ['1', '8', '0', '0'] ]
74 |
75 | voyagerPrefixList = [ ['8', '6', '9', '9'] ]
76 |
77 |
78 | """
79 | 'prefix' is the start of the CC number as a string, any number of digits.
80 | 'length' is the length of the CC number to generate. Typically 13 or 16
81 | """
82 | def completed_number(prefix, length):
83 | ccnumber = prefix
84 |
85 | # generate digits
86 | while len(ccnumber) < (length - 1):
87 | digit = generator.choice(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'])
88 | ccnumber.append(digit)
89 |
90 | # Calculate sum
91 | sum = 0
92 | pos = 0
93 | reversedCCnumber = []
94 | reversedCCnumber.extend(ccnumber)
95 | reversedCCnumber.reverse()
96 |
97 | while pos < length - 1:
98 | odd = int( reversedCCnumber[pos] ) * 2
99 | if odd > 9:
100 | odd -= 9
101 | sum += odd
102 | if pos != (length - 2):
103 | sum += int( reversedCCnumber[pos+1] )
104 | pos += 2
105 | # Calculate check digit
106 |     checkdigit = ((sum // 10 + 1) * 10 - sum) % 10  # floor division: with /, Python 3 would yield a float check digit
107 | ccnumber.append( str(checkdigit) )
108 | return ''.join(ccnumber)
109 |
110 |
111 | def credit_card_number(generator, prefixList, length):
112 | if type(length) is list:
113 | length = generator.choice(length)
114 | ccnumber = copy.copy( generator.choice(prefixList) )
115 | return completed_number(ccnumber, length)
116 |
117 | generator = None
118 |
119 | def gencc(card):
120 | global generator
121 | cards = { "MasterCard": { "prefix" : mastercardPrefixList, "length": 16 },
122 | "Visa":{ "prefix" : visaPrefixList, "length": [13,16] },
123 | "AmericanExpress":{ "prefix" : amexPrefixList, "length": 15 },
124 | }
125 | if generator is None:
126 | generator = Random()
127 | generator.seed() # Seed from current time
128 |
129 | if card in cards:
130 | return credit_card_number(generator, cards[card]['prefix'], cards[card]['length'])
131 |     raise ValueError("card %s is unknown" % card)
132 |
133 | if __name__ == '__main__':
134 |
135 | parser = argparse.ArgumentParser(description='Update a sqlite Database with random but correct cc numbers')
136 | parser.add_argument('database', help='the database to use')
137 | parser.add_argument('--table', help='the table to update', required=True)
138 | parser.add_argument('--type-col', help='the column containing the cc type', required=True)
139 | parser.add_argument('--num-col', help='the column containing the cc number', required=True)
140 | args = parser.parse_args()
141 |
142 | dbh = sqlite3.connect(args.database)
143 | dbh.create_function("gencc",1,gencc)
144 |
145 | cursor = dbh.cursor()
146 | query = "UPDATE {:s} SET {:s}=CAST(gencc({:s}) AS INTEGER)".format(args.table,args.num_col,args.type_col)
147 | print(query)
148 | cursor.execute(query)
149 | dbh.commit()
150 | print("updated the ccs for %i rows" % cursor.rowcount)
151 |
152 |
--------------------------------------------------------------------------------
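The check digit computed in `completed_number()` above is chosen so the finished number passes the Luhn (MOD 10) test. A small standalone sketch of the validation side (`luhn_ok` is a hypothetical helper, not part of the repository):

```python
def luhn_ok(number):
    """Return True if the digit string passes the Luhn (MOD 10) check."""
    digits = [int(d) for d in reversed(number)]
    # Digits in odd positions (from the right) are doubled; sum(divmod(x, 10))
    # adds the digits of the product, e.g. 14 -> 1 + 4.
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

print(luhn_ok("4539148803436467"))  # a well-formed test number -> True
```

Every number emitted by `gencc()` should satisfy this check, which is what makes the updated columns look plausible to tools that validate card numbers.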
/dionaea/modules_python_util/xmpp/pg_backend.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python -u
2 | #
3 | # aptitude install python-pyxmpp python-pgsql
4 | #
5 | # with db
6 | # ./pg_backend.py -U USER@sensors.carnivore.it -P XMPPPASS -M dionaea.sensors.carnivore.it -C anon-files -C anon-events -s DBHOST -u DBUSER -d xmpp -p DBPASS -f /tmp/
7 | #
8 | # without db
9 | # ./pg_backend.py -U USER@sensors.carnivore.it -P XMPPPASS -M dionaea.sensors.carnivore.it -C anon-files -C anon-events -f /tmp/
10 |
11 | import sys
12 | import logging
13 | import locale
14 | import codecs
15 | import base64
16 | import md5
17 | import optparse
18 | import time
19 | import io
20 | import os
21 | from pyPgSQL import PgSQL
22 |
23 | from pyxmpp.all import JID,Iq,Presence,Message,StreamError
24 | from pyxmpp.jabber.client import JabberClient
25 | from pyxmpp.jabber.muc import MucRoomManager, MucRoomHandler
26 | from pyxmpp.xmlextra import replace_ns, common_doc, common_ns, get_node_ns
27 | from pyxmpp import xmlextra
28 |
29 |
30 | # PyXMPP uses `logging` module for its debug output
31 | # applications should set it up as needed
32 | logger=logging.getLogger()
33 | logger.addHandler(logging.StreamHandler())
34 | logger.setLevel(logging.INFO) # change to DEBUG for higher verbosity
35 |
36 | dionaea_ns = { "default" : "http://pyxmpp.jajcus.net/xmlns/common",
37 | "dionaea" : "http://dionaea.carnivore.it"}
38 |
39 | # libxml2 is cruel
40 | def xpath_eval(xmlnode,expr,namespaces=None):
41 | ctxt = common_doc.xpathNewContext()
42 | ctxt.setContextNode(xmlnode)
43 | if namespaces:
44 | for prefix,uri in namespaces.items():
45 | ctxt.xpathRegisterNs(unicode(prefix),uri)
46 | ret=ctxt.xpathEval(unicode(expr))
47 | ctxt.xpathFreeContext()
48 | return ret
49 |
50 | class RoomHandler(MucRoomHandler):
51 | def __init__(self):
52 | MucRoomHandler.__init__(self)
53 | self.setns = False
54 |
55 | def user_joined(self, user, stanza):
56 | print 'User %s joined room' % user.room_jid.as_unicode()
57 | user.attacks = {}
58 |
59 | def user_left(self, user, stanza):
60 | print 'User %s left room' % user.room_jid.as_unicode()
61 | user.attacks = None
62 | user = None
63 |
64 | def subject_changed(self, user, stanza):
65 | print 'subject: %s' % stanza
66 |
67 | def message_received(self, user, stanza):
68 |
69 | if not hasattr(user, 'attacks'):
70 | print("invalid message, maybe history")
71 | return
72 | # check if we have dionaea entries in the message
73 | # provide a namespace ...
74 | # I love xml namespaces ...
75 |
76 | # dionaea
77 |
78 | r = stanza.xpath_eval("/default:message/default:body/dionaea:dionaea",
79 | namespaces = dionaea_ns)
80 |
81 | for d in r:
82 | # rename the namespace for the dionaea entries
83 | o = d.ns()
84 | n = d.newNs("http://dionaea.carnivore.it", "dionaea")
85 | d.setNs(n)
86 | replace_ns(d,o,n)
87 |
88 | # get the incident
89 | p = d.hasProp('incident')
90 | mname = p.content
91 | mname = mname.replace(".","_")
92 |
93 | # use the incidents name to get the appropriate handler
94 | method = getattr(self, "handle_incident_" + mname, None)
95 | # method = self.handle_incident_debug
96 | if method is not None:
97 | c = d.children
98 | while c is not None:
99 | # print("c: '%s'" % c)
100 | if c.isText():
101 | c = c.next
102 | continue
103 | # call the handler with the object
104 | # print(mname)
105 | method(user, c)
106 | c = c.next
107 | # else:
108 | # print("method %s is not implemented" % mname)
109 | # self.handle_incident_not_implemented(user, stanza)
110 |
111 | # kippo
112 |
113 | r = stanza.xpath_eval("/default:message/default:body/kippo:kippo",
114 | namespaces = { "default" : "http://pyxmpp.jajcus.net/xmlns/common",
115 | "kippo" : "http://code.google.com/p/kippo/"})
116 |
117 | for d in r:
118 | o = d.ns()
119 | n = d.newNs("http://code.google.com/p/kippo/", "kippo")
120 | d.setNs(n)
121 | replace_ns(d,o,n)
122 | # print(d)
123 | p = d.hasProp('type')
124 | mname = p.content
125 | method = getattr(self, "handle_kippo_" + mname, None)
126 | if method is not None:
127 |                 c = d.children
128 |                 while c is not None:  # libxml2 nodes are not iterable; walk siblings as in the dionaea branch above
129 |                     if not c.isText(): method(user, c)
130 |                     c = c.next
131 | else:
132 | print("method %s is not implemented" % mname)
133 | # self.handle_incident_not_implemented(user, stanza)
134 |
135 | def handle_kippo_createsession(self, user, xmlobj):
136 | try:
137 | local_host = xmlobj.hasProp('local_host').content
138 | remote_host = xmlobj.hasProp('remote_host').content
139 | session = xmlobj.hasProp('session').content
140 | except Exception as e:
141 | print(e)
142 | return
143 | if dbh is not None:
144 | r = cursor.execute(
145 | """INSERT INTO
146 | kippo.sessions
147 | (session_start, session_stop, local_host, remote_host)
148 | VALUES (NOW(),NOW(),%s,%s)""" ,
149 | (local_host, remote_host))
150 | r = cursor.execute("""SELECT CURRVAL('kippo.sessions_session_seq')""")
151 | attackid = cursor.fetchall()[0][0]
152 | user.attacks[session] = (attackid,attackid)
153 | print("[%s] createsession: %s %s %s" % (user.room_jid.as_unicode(), local_host, remote_host, session))
154 |
155 |
156 | def handle_kippo_connectionlost(self, user, xmlobj):
157 | try:
158 | session = xmlobj.hasProp('session').content
159 | except Exception as e:
160 | print(e)
161 | return
162 | if dbh is not None:
163 | if session in user.attacks:
164 | attackid = user.attacks[session][0]
165 | r = cursor.execute("""UPDATE kippo.sessions SET session_stop = NOW() WHERE session = %s""" , (attackid, ))
166 | del user.attacks[session]
167 | print("[%s] connectionlost: %s" % (user.room_jid.as_unicode(), session))
168 |
169 | def _handle_kippo_login(self, user, xmlobj, success):
170 | try:
171 | session = xmlobj.hasProp('session').content
172 | username = xmlobj.hasProp('username').content
173 | password = xmlobj.hasProp('password').content
174 | except Exception as e:
175 | print(e)
176 | return
177 | if dbh is not None:
178 | if session in user.attacks:
179 | attackid = user.attacks[session][0]
180 | r = cursor.execute(
181 | """INSERT INTO
182 | kippo.auths
183 | (auth_timestamp, session, auth_username, auth_password, auth_success)
184 | VALUES (NOW(),%s,%s,%s,%s::boolean)""",
185 | (attackid, username, password, success))
186 | print("[%s] : login %s %s %s %s" % (user.room_jid.as_unicode(), success, username, password, session))
187 |
188 |
189 | def handle_kippo_loginfailed(self, user, xmlobj):
190 | self._handle_kippo_login(user, xmlobj, False)
191 |
192 | def handle_kippo_loginsucceeded(self, user, xmlobj):
193 | self._handle_kippo_login(user, xmlobj, True)
194 |
195 | def _handle_kippo_input(self, user, xmlobj, realm, success):
196 | try:
197 | session = xmlobj.hasProp('session').content
198 | command = xmlobj.content
199 | except Exception as e:
200 | print(e)
201 | return
202 | if dbh is not None:
203 | if session in user.attacks:
204 | attackid = user.attacks[session][0]
205 | cursor.execute("""INSERT INTO kippo.inputs
206 | (session, input_timestamp, input_realm, input_success, input_data)
207 | VALUES (%s,NOW(),%s,%s,%s)""",
208 | (attackid, realm, success, command) )
209 | print("[%s] command %s %s" % (user.room_jid.as_unicode(), command, session))
210 |
211 |
212 | def handle_kippo_command(self, user, xmlobj):
213 | try:
214 | if xmlobj.hasProp('command').content == 'known':
215 | success = True
216 | else:
217 | success = False
218 | self._handle_kippo_input(user, xmlobj,"",success)
219 | except Exception as e:
220 | print(e)
221 | return
222 |
223 | def handle_kippo_input(self, user, xmlobj):
224 | try:
225 | realm = xmlobj.hasProp('realm').content
226 | self._handle_kippo_input(user, xmlobj,realm,True)
227 | except Exception as e:
228 | print(e)
229 | return
230 |
231 |
232 | def handle_kippo_clientversion(self, user, xmlobj):
233 | try:
234 | session = xmlobj.hasProp('session').content
235 | ver = xmlobj.hasProp('version').content
236 | except Exception as e:
237 | print(e)
238 | return
239 | if dbh is not None:
240 | if session in user.attacks:
241 | attackid = user.attacks[session][0]
242 | cursor.execute("""INSERT INTO kippo.clients
243 | (session, version)
244 | VALUES (%s,%s)""",
245 | (attackid, ver) )
246 | print("[%s] version %s %s" % (user.room_jid.as_unicode(), ver, session))
247 |
248 |
249 |
250 | # dionaea
251 | def handle_incident_not_implemented(self, user, xmlobj):
252 | print("USER %s xmlobj '%s'" % (user.room_jid.as_unicode(), xmlobj.serialize()))
253 |
254 | def _handle_incident_connection_new(self, user, xmlobj):
255 | try:
256 | ctype = xmlobj.hasProp('type').content
257 | protocol = xmlobj.hasProp('protocol').content
258 | transport = xmlobj.hasProp('transport').content
259 | local_host = xmlobj.hasProp('local_host').content
260 | remote_host = xmlobj.hasProp('remote_host').content
261 | remote_hostname = xmlobj.hasProp('remote_hostname').content
262 | local_port = xmlobj.hasProp('local_port').content
263 | remote_port = xmlobj.hasProp('remote_port').content
264 | ref = xmlobj.hasProp('ref').content
265 | ref = int(ref)
266 | except Exception as e:
267 | print(e)
268 | return
269 | if remote_hostname == "":
270 | remote_hostname = None
271 | if remote_host == "" or remote_host is None:
272 | remote_host = "0.0.0.0"
273 | if dbh is not None:
274 | r = cursor.execute(
275 | """INSERT INTO
276 | dionaea.connections
277 | (connection_timestamp, connection_type, connection_transport, connection_protocol, local_host, local_port, remote_host, remote_hostname, remote_port)
278 | VALUES (NOW(),%s,%s,%s,%s,
279 | %s,%s,%s,%s)""" ,
280 | (ctype, transport, protocol, local_host,
281 | local_port, remote_host, remote_hostname, remote_port))
282 | r = cursor.execute("""SELECT CURRVAL('dionaea.connections_connection_seq')""")
283 | attackid = cursor.fetchall()[0][0]
284 | user.attacks[ref] = (attackid,attackid)
285 | print("[%s] %s %s %s %s:%s %s/%s:%s %s" % (user.room_jid.as_unicode(), ctype, protocol, transport, local_host, local_port, remote_hostname, remote_host, remote_port, ref))
286 |
287 |
288 | def handle_incident_dionaea_connection_tcp_listen(self, user, xmlobj):
289 | self._handle_incident_connection_new(user,xmlobj)
290 |
291 | def handle_incident_dionaea_connection_tls_listen(self, user, xmlobj):
292 | self._handle_incident_connection_new(user,xmlobj)
293 |
294 | def handle_incident_dionaea_connection_tcp_connect(self, user, xmlobj):
295 | self._handle_incident_connection_new(user,xmlobj)
296 |
297 | def handle_incident_dionaea_connection_tls_connect(self, user, xmlobj):
298 | self._handle_incident_connection_new(user,xmlobj)
299 |
300 | def handle_incident_dionaea_connection_udp_connect(self, user, xmlobj):
301 | self._handle_incident_connection_new(user,xmlobj)
302 |
303 | def handle_incident_dionaea_connection_tcp_accept(self, user, xmlobj):
304 | self._handle_incident_connection_new(user,xmlobj)
305 |
306 | def handle_incident_dionaea_connection_tls_accept(self, user, xmlobj):
307 | self._handle_incident_connection_new(user,xmlobj)
308 |
309 | def handle_incident_dionaea_connection_tcp_reject(self, user, xmlobj):
310 | self._handle_incident_connection_new(user,xmlobj)
311 |
312 | def handle_incident_dionaea_connection_link(self, user, xmlobj):
313 | try:
314 | parent = int(xmlobj.hasProp('parent').content)
315 | child = int(xmlobj.hasProp('child').content)
316 | except Exception as e:
317 | print(e)
318 | return
319 | if dbh is not None and parent in user.attacks:
320 | parentroot, parentid = user.attacks[parent]
321 | if child in user.attacks:
322 | childroot, childid = user.attacks[child]
323 | else:
324 | childid = parentid
325 | user.attacks[child] = (parentroot, childid)
326 | cursor.execute("UPDATE dionaea.connections SET connection_root = %s, connection_parent = %s WHERE connection = %s",
327 | (parentroot, parentid, childid) )
328 | print("[%s] link %s %s" % (user.room_jid.as_unicode(), parent, child))
329 |
330 | def handle_incident_dionaea_connection_free(self, user, xmlobj):
331 | try:
332 | ref = xmlobj.hasProp('ref').content
333 | ref = int(ref)
334 | except Exception as e:
335 | print(e)
336 | return
337 |
338 | if dbh is not None and ref in user.attacks:
339 | del user.attacks[ref]
340 | print("[%s] free %i" % (user.room_jid.as_unicode(), ref))
341 |
342 |
343 | def handle_incident_dionaea_module_emu_profile(self, user, xmlobj):
344 | try:
345 | ref = xmlobj.hasProp('ref').content
346 | profile = xmlobj.content
347 | ref = int(ref)
348 | except Exception as e:
349 | print(e)
350 | return
351 | if dbh is not None and ref in user.attacks:
352 | attackid = user.attacks[ref][1]
353 | cursor.execute("INSERT INTO dionaea.emu_profiles (connection, emu_profile_json) VALUES (%s,%s)",
354 | (attackid, profile) )
355 | print("[%s] profile ref %s: %s" % (user.room_jid.as_unicode(), profile, ref))
356 |
357 | def handle_incident_dionaea_download_offer(self, user, xmlobj):
358 | try:
359 | ref = xmlobj.hasProp('ref').content
360 | url = xmlobj.hasProp('url').content
361 | ref = int(ref)
362 | except Exception as e:
363 | print(e)
364 | return
365 | if dbh is not None and ref in user.attacks:
366 | attackid = user.attacks[ref][1]
367 | cursor.execute("INSERT INTO dionaea.offers (connection, offer_url) VALUES (%s,%s)",
368 | (attackid, url) )
369 | print("[%s] offer ref %i: %s" % (user.room_jid.as_unicode(), ref, url))
370 |
371 | def handle_incident_dionaea_download_complete_hash(self, user, xmlobj):
372 | try:
373 | ref = xmlobj.hasProp('ref').content
374 | md5_hash = xmlobj.hasProp('md5_hash').content
375 | url = xmlobj.hasProp('url').content
376 | ref = int(ref)
377 | except Exception as e:
378 | print(e)
379 | return
380 | if dbh is not None and ref in user.attacks:
381 | attackid = user.attacks[ref][1]
382 | cursor.execute("INSERT INTO dionaea.downloads (connection, download_url, download_md5_hash) VALUES (%s,%s,%s)",
383 | (attackid, url, md5_hash) )
384 | print("[%s] complete ref %s: %s %s" % (user.room_jid.as_unicode(), ref, url, md5_hash))
385 |
386 | def handle_incident_dionaea_download_complete_unique(self, user, xmlobj):
387 | try:
388 | md5_hash = xmlobj.hasProp('md5_hash').content
389 | f = base64.b64decode(xmlobj.content)
390 | my_hash = md5.new(f).hexdigest()
391 | except Exception as e:
392 | print(e)
393 | return
394 | if options.files is not None:
395 | p = os.path.join(options.files, my_hash)
396 | h = io.open(p, "wb+")
397 | h.write(f)
398 | h.close()
399 | print("[%s] file %s <-> %s" % (user.room_jid.as_unicode(), md5_hash, my_hash))
400 |
401 | def handle_incident_dionaea_service_shell_listen(self, user, xmlobj):
402 | pass
403 |
404 | def handle_incident_dionaea_service_shell_connect(self, user, xmlobj):
405 | pass
406 |
407 | def handle_incident_dionaea_modules_python_p0f(self, user, xmlobj):
408 | try:
409 | genre = xmlobj.hasProp('genre').content
410 | link = xmlobj.hasProp('link').content
411 | detail = xmlobj.hasProp('detail').content
412 | uptime = xmlobj.hasProp('uptime').content
413 | tos = xmlobj.hasProp('tos').content
414 | dist = xmlobj.hasProp('dist').content
415 | nat = xmlobj.hasProp('nat').content
416 | fw = xmlobj.hasProp('fw').content
417 | ref = xmlobj.hasProp('ref').content
418 | ref = int(ref)
419 | except Exception as e:
420 | print(e)
421 | return
422 |
423 |         if dbh is not None and ref in user.attacks:
424 | attackid = user.attacks[ref][1]
425 | cursor.execute("INSERT INTO dionaea.p0fs (connection, p0f_genre, p0f_link, p0f_detail, p0f_uptime, p0f_tos, p0f_dist, p0f_nat, p0f_fw) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)",
426 | (attackid, genre, link, detail, uptime, tos, dist, nat, fw))
427 | print("[%s] p0f ref %i: %s" % (user.room_jid.as_unicode(), ref, genre))
428 |
429 | def handle_incident_dionaea_modules_python_virustotal_report(self, user, xmlobj):
430 | try:
431 | md5_hash = xmlobj.hasProp('md5_hash').content
432 | permalink = xmlobj.hasProp('permalink').content
433 | date = xmlobj.hasProp('date').content
434 | date = int(date)
435 | except Exception as e:
436 | print(e)
437 | return
438 | try:
439 | cursor.execute("INSERT INTO dionaea.virustotals (virustotal_md5_hash, virustotal_timestamp, virustotal_permalink) VALUES (%s,to_timestamp(%s),%s)",(md5_hash, date, permalink))
440 | cursor.execute("""SELECT CURRVAL('dionaea.virustotals_virustotal_seq')""")
441 | print("[%s] virustotal %s" % (user.room_jid.as_unicode(), md5_hash))
442 | except Exception as e:
443 | print(e)
444 | return
445 | r = cursor.fetchall()[0][0]
446 | c = xmlobj.children
447 | while c is not None:
448 | if c.name != 'scan':
449 | c = c.next
450 | continue
451 | try:
452 | scanner = c.hasProp('scanner').content
453 | result = c.hasProp('result').content
454 | except Exception as e:
455 | print(e)
456 | else:
457 | cursor.execute("INSERT INTO dionaea.virustotalscans (virustotal, virustotalscan_scanner, virustotalscan_result) VALUES (%s, %s,%s)",(r, scanner, result))
458 | print("[%s]\t %s %s" % (user.room_jid.as_unicode(), scanner, result))
459 | c = c.next
460 |
461 | def handle_incident_dionaea_modules_python_smb_dcerpc_request(self, user, xmlobj):
462 | try:
463 | uuid = xmlobj.hasProp('uuid').content
464 | opnum = xmlobj.hasProp('opnum').content
465 | ref = xmlobj.hasProp('ref').content
466 | ref = int(ref)
467 | except Exception as e:
468 | print(e)
469 | return
470 |         if dbh is not None and ref in user.attacks:
471 | attackid = user.attacks[ref][1]
472 | cursor.execute("INSERT INTO dionaea.dcerpcrequests (connection, dcerpcrequest_uuid, dcerpcrequest_opnum) VALUES (%s,%s,%s)",
473 | (attackid, uuid, opnum))
474 | print("[%s] dcerpcrequest ref %i: %s %s" % (user.room_jid.as_unicode(), ref, uuid, opnum))
475 |
476 | def handle_incident_dionaea_modules_python_smb_dcerpc_bind(self, user, xmlobj):
477 | try:
478 | uuid = xmlobj.hasProp('uuid').content
479 | ref = xmlobj.hasProp('ref').content
480 | transfersyntax = xmlobj.hasProp('transfersyntax').content
481 | ref = int(ref)
482 | except Exception as e:
483 | print(e)
484 | return
485 | if dbh is not None and ref in user.attacks:
486 | attackid = user.attacks[ref][1]
487 | cursor.execute("INSERT INTO dionaea.dcerpcbinds (connection, dcerpcbind_uuid, dcerpcbind_transfersyntax) VALUES (%s,%s,%s)",
488 | (attackid, uuid, transfersyntax))
489 | print("[%s] dcerpcbind ref %i: %s %s" % (user.room_jid.as_unicode(), ref, uuid, transfersyntax))
490 |
491 | def handle_incident_dionaea_modules_python_mysql_login(self, user, xmlobj):
492 | try:
493 | ref = xmlobj.hasProp('ref').content
494 | ref = int(ref)
495 | username = xmlobj.hasProp('username').content
496 | password = xmlobj.hasProp('password').content
497 | except Exception as e:
498 | print(e)
499 | return
500 | if dbh is not None and ref in user.attacks:
501 | attackid = user.attacks[ref][1]
502 | cursor.execute("INSERT INTO dionaea.logins (connection, login_username, login_password) VALUES (%s,%s,%s)",
503 | (attackid, username, password))
504 | print("[%s] mysqllogin ref %i: %s %s" % (user.room_jid.as_unicode(), ref, username, password))
505 |
506 | def handle_incident_dionaea_modules_python_mysql_command(self, user, xmlobj):
507 | try:
508 | ref = xmlobj.hasProp('ref').content
509 | ref = int(ref)
510 | cmd = int(xmlobj.hasProp('cmd').content)
511 | args = []
512 | child = xmlobj.children
513 | r = xpath_eval(xmlobj, './dionaea:args/dionaea:arg', namespaces=dionaea_ns)
514 | for i in r:
515 | args.append((i.hasProp('index').content, i.content))
516 | except Exception as e:
517 | print(e)
518 | return
519 | if dbh is not None and ref in user.attacks:
520 | attackid = user.attacks[ref][1]
521 | cursor.execute("INSERT INTO dionaea.mysql_commands (connection, mysql_command_cmd) VALUES (%s,%s)",
522 | (attackid, cmd))
523 | r = cursor.execute("""SELECT CURRVAL('dionaea.mysql_commands_mysql_command_seq')""")
524 | command = cursor.fetchall()[0][0]
525 |
526 | for i in args:
527 | cursor.execute("INSERT INTO dionaea.mysql_command_args (mysql_command, mysql_command_arg_data, mysql_command_arg_index) VALUES (%s,%s,%s)",
528 | (command, i[1], i[0]))
529 |
530 | print("[%s] mysqlcommand ref %i: %i %s" % (user.room_jid.as_unicode(), ref, cmd, args))
531 |
532 | def handle_incident_dionaea_modules_python_sip_command(self, user, xmlobj):
533 | def address_from_xml(e):
534 | address = {}
535 | display_name = e.hasProp('display_name')
536 | if display_name is not None:
537 | display_name = display_name.content
538 | address['display_name'] = display_name
539 | c = e.children
540 | while c is not None:
541 | if c.name == 'uri':
542 | address['uri'] = uri_from_xml(c)
543 | c = c.next
544 | return address
545 | def uri_from_xml(e):
546 | d={}
547 | for u in ['scheme','user','password','port','host']:
548 | p = e.hasProp(u)
549 | if p is not None:
550 | p = p.content
551 | d[u] = p
552 | return d
553 | def via_from_xml(e):
554 | via={}
555 | for u in ['address','port','protocol','host']:
556 | p = e.hasProp(u)
557 | if p is not None:
558 | p = p.content
559 | via[u] = p
560 | return via
561 |
562 | def allow_from_xml(e):
563 | return e.content
564 |
565 | def sdp_from_xml(e):
566 | def media_from_xml(e):
567 | d = {}
568 | for u in ['proto','port','media','number_of_ports']:
569 | p = e.hasProp(u)
570 | if p is not None:
571 | p = p.content
572 | d[u] = p
573 | return d
574 | def connectiondata_from_xml(e):
575 | d={}
576 | for u in ['connection_address','number_of_addresses','addrtype','nettype','ttl']:
577 | p = e.hasProp(u)
578 | if p is not None:
579 | p = p.content
580 | d[u] = p
581 | return d
582 | def origin_from_xml(e):
583 | d={}
584 | for u in ['username','unicast_address','nettype','addrtype','sess_id','sess_version']:
585 | p = e.hasProp(u)
586 | if p is not None:
587 | p = p.content
588 | d[u] = p
589 | return d
590 |
591 | sdp = {}
592 | r = xpath_eval(xmlobj, './dionaea:sdp/dionaea:medialist/dionaea:media', namespaces=dionaea_ns)
593 | if len(r) > 0:
594 | medias = []
595 | for i in r:
596 | medias.append(media_from_xml(i))
597 | sdp['m'] = medias
598 |
599 | r = xpath_eval(xmlobj, './dionaea:sdp/dionaea:origin', namespaces=dionaea_ns)
600 | if len(r) > 0:
601 | sdp['o'] = origin_from_xml(r[0])
602 |
603 | r = xpath_eval(xmlobj, './dionaea:sdp/dionaea:connectiondata', namespaces=dionaea_ns)
604 | if len(r) > 0:
605 | sdp['c'] = connectiondata_from_xml(r[0])
606 | return sdp
607 |
608 |
609 | try:
610 | ref = int(xmlobj.hasProp('ref').content)
611 | method = xmlobj.hasProp('method').content
612 | call_id = user_agent = addr = _from = to = contact = via = allow = sdp = None
613 | call_id = xmlobj.hasProp('call_id')
614 | if call_id is not None:
615 | call_id = call_id.content
616 | user_agent = xmlobj.hasProp('user_agent')
617 | if user_agent is not None:
618 | user_agent = user_agent.content
619 |
620 | r = xpath_eval(xmlobj, './dionaea:to/dionaea:addr', namespaces=dionaea_ns)
621 | if len(r) > 0:
622 | addr = address_from_xml(r[0])
623 | # print(addr)
624 | r = xpath_eval(xmlobj, './dionaea:from/dionaea:addr', namespaces=dionaea_ns)
625 | if len(r) > 0:
626 | _from = []
627 | for i in r:
628 | _from.append(address_from_xml(i))
629 | # print(_from)
630 |
631 | r = xpath_eval(xmlobj, './dionaea:to/dionaea:addr', namespaces=dionaea_ns)
632 | if len(r) > 0:
633 | to = address_from_xml(r[0])
634 | # print(to)
635 |
636 | r = xpath_eval(xmlobj, './dionaea:vias/dionaea:via', namespaces=dionaea_ns)
637 | if len(r) > 0:
638 | via = []
639 | for i in r:
640 | via.append(via_from_xml(i))
641 | # print(via)
642 |
643 | r = xpath_eval(xmlobj, './dionaea:allowlist/dionaea:allow', namespaces=dionaea_ns)
644 | if len(r) > 0:
645 | allow = []
646 | for i in r:
647 | allow.append(allow_from_xml(i))
648 | print(allow)
649 |
650 | r = xpath_eval(xmlobj, './dionaea:sdp', namespaces=dionaea_ns)
651 | if len(r) > 0:
652 | sdp = sdp_from_xml(r[0])
653 |
654 | # print(sdp)
655 |
656 | except Exception as e:
657 | import traceback
658 | traceback.print_exc()
659 | return
660 | if dbh is not None and ref in user.attacks:
661 | def calc_allow(a):
662 | if a is None:
663 | return 0
664 | b={ 'UNKNOWN' :(1<<0),
665 | 'ACK' :(1<<1),
666 | 'BYE' :(1<<2),
667 | 'CANCEL' :(1<<3),
668 | 'INFO' :(1<<4),
669 | 'INVITE' :(1<<5),
670 | 'MESSAGE' :(1<<6),
671 | 'NOTIFY' :(1<<7),
672 | 'OPTIONS' :(1<<8),
673 | 'PRACK' :(1<<9),
674 | 'PUBLISH' :(1<<10),
675 | 'REFER' :(1<<11),
676 | 'REGISTER' :(1<<12),
677 | 'SUBSCRIBE' :(1<<13),
678 | 'UPDATE' :(1<<14)
679 | }
680 | allow=0
681 | for i in a:
682 | if i in b:
683 | allow |= b[i]
684 | else:
685 | allow |= b['UNKNOWN']
686 | return allow
687 |
688 | def add_addr(cmd, _type, addr):
689 | if addr is None:
690 | return
691 | host = None
692 | from socket import inet_pton,AF_INET,AF_INET6
693 | try:
694 | inet_pton(AF_INET, addr['uri']['host'])
695 | host = addr['uri']['host']
696 | except:
697 | pass
698 | try:
699 | inet_pton(AF_INET6, addr['uri']['host'])
700 | host = addr['uri']['host']
701 | except:
702 | pass
703 |
704 | cursor.execute("""INSERT INTO dionaea.sip_addrs
705 | (sip_command, sip_addr_type, sip_addr_display_name,
706 | sip_addr_uri_scheme, sip_addr_uri_user, sip_addr_uri_password,
707 | sip_addr_uri_hostname, sip_addr_uri_host, sip_addr_uri_port) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""",
708 | (
709 | cmd, _type, addr['display_name'],
710 | addr['uri']['scheme'], addr['uri']['user'], addr['uri']['password'],
711 | addr['uri']['host'], host, addr['uri']['port']
712 | ))
713 |
714 | def add_via(cmd, via):
715 | cursor.execute("""INSERT INTO dionaea.sip_vias
716 | (sip_command, sip_via_protocol, sip_via_address, sip_via_port)
717 | VALUES (%s,%s,%s,%s)""",
718 | (
719 | cmd, via['protocol'],
720 | via['address'], via['port']
721 |
722 | ))
723 |
724 | def add_sdp(cmd, sdp):
725 | def add_origin(cmd, o):
726 | cursor.execute("""INSERT INTO dionaea.sip_sdp_origins
727 | (sip_command, sip_sdp_origin_username,
728 | sip_sdp_origin_sess_id, sip_sdp_origin_sess_version,
729 | sip_sdp_origin_nettype, sip_sdp_origin_addrtype,
730 | sip_sdp_origin_unicast_address)
731 | VALUES (%s,%s,%s,%s,%s,%s,%s)""",
732 | (
733 | cmd, o['username'],
734 | o['sess_id'], o['sess_version'],
735 | o['nettype'], o['addrtype'],
736 | o['unicast_address']
737 | ))
738 |
739 | def add_condata(cmd, c):
740 | cursor.execute("""INSERT INTO dionaea.sip_sdp_connectiondatas
741 | (sip_command, sip_sdp_connectiondata_nettype,
742 | sip_sdp_connectiondata_addrtype, sip_sdp_connectiondata_connection_address,
743 | sip_sdp_connectiondata_ttl, sip_sdp_connectiondata_number_of_addresses)
744 | VALUES (%s,%s,%s,%s,%s,%s)""",
745 | (
746 | cmd, c['nettype'],
747 | c['addrtype'], c['connection_address'],
748 | c['ttl'], c['number_of_addresses']
749 | ))
750 |
751 | def add_media(cmd, c):
752 | cursor.execute("""INSERT INTO dionaea.sip_sdp_medias
753 | (sip_command, sip_sdp_media_media,
754 | sip_sdp_media_port, sip_sdp_media_number_of_ports,
755 | sip_sdp_media_proto)
756 | VALUES (%s,%s,%s,%s,%s)""",
757 | (
758 | cmd, c['media'],
759 | c['port'], c['number_of_ports'],
760 | c['proto']
761 | ))
762 | if 'o' in sdp:
763 | add_origin(cmd, sdp['o'])
764 | if 'c' in sdp:
765 | add_condata(cmd, sdp['c'])
766 | if 'm' in sdp:
767 | for i in sdp['m']:
768 | add_media(cmd, i)
769 |
770 | attackid = user.attacks[ref][1]
771 | cursor.execute("""INSERT INTO dionaea.sip_commands
772 | (connection, sip_command_method, sip_command_call_id,
773 | sip_command_user_agent, sip_command_allow) VALUES (%s,%s,%s,%s,%s)""",
774 | (attackid, method, call_id, user_agent, calc_allow(allow)))
775 |
776 | r = cursor.execute("""SELECT CURRVAL('dionaea.sip_commands_sip_command_seq')""")
777 | cmdid = cursor.fetchall()[0][0]
778 |
779 |
780 | add_addr(cmdid,'addr',addr)
781 | add_addr(cmdid,'to',to)
782 | add_addr(cmdid,'contact',contact)
783 | for i in _from or []:
784 | add_addr(cmdid,'from',i)
785 | 
786 | for i in via or []:
787 | add_via(cmdid, i)
788 |
789 | if sdp is not None:
790 | add_sdp(cmdid,sdp)
791 |
792 | print("[%s] sipcommand ref %i: %s %s" % (user.room_jid.as_unicode(), ref, method, addr))
793 |
794 |
795 |
796 | class Client(JabberClient):
797 | """Simple bot (client) example. Uses `pyxmpp.jabber.client.JabberClient`
798 | class as base. That class provides basic stream setup (including
799 | authentication) and Service Discovery server. It also does server address
800 | and port discovery based on the JID provided."""
801 |
802 | def __init__(self, jid, password):
803 |
804 | # if bare JID is provided add a resource -- it is required
805 | if not jid.resource:
806 | print(jid.resource)
807 | jid=JID(jid.node, jid.domain, "Echobot")
808 |
809 | # setup client with provided connection information
810 | # and identity data
811 | JabberClient.__init__(self, jid, password,
812 | disco_name="PyXMPP example: echo bot", disco_type="bot", keepalive=10)
813 |
814 | # register features to be announced via Service Discovery
815 | self.disco_info.add_feature("jabber:iq:version")
816 | self.muc = []
817 |
818 | def stream_state_changed(self,state,arg):
819 | """This one is called when the state of the stream connecting the client
820 | to a server changes. This will usually be used to let the user
821 | know what is going on."""
822 | print "*** State changed: %s %r ***" % (state,arg)
823 |
824 | def session_started(self):
825 | """This is called when the IM session is successfully started
826 | (after all the necessary negotiations, authentication and
827 | authorization).
828 | That is the best place to setup various handlers for the stream.
829 | Do not forget about calling the session_started() method of the base
830 | class!"""
831 | JabberClient.session_started(self)
832 |
833 | # set up handlers for supported queries
834 | self.stream.set_iq_get_handler("query","jabber:iq:version",self.get_version)
835 |
836 | # set up handlers for stanzas
837 | self.stream.set_presence_handler("available",self.presence)
838 | self.stream.set_presence_handler("subscribe",self.presence_control)
839 | self.stream.set_presence_handler("subscribed",self.presence_control)
840 | self.stream.set_presence_handler("unsubscribe",self.presence_control)
841 | self.stream.set_presence_handler("unsubscribed",self.presence_control)
842 |
843 | # set up handler for
844 | self.stream.set_message_handler("normal",self.message)
845 | print(self.stream)
846 |
847 | print u"joining..."
848 | self.roommgr = MucRoomManager(self.stream)
849 | self.roommgr.set_handlers()
850 | nick = self.jid.node + '-' + self.jid.resource
851 | for loc in options.channels: #['anon-events@dionaea.sensors.carnivore.it','anon-files@dionaea.sensors.carnivore.it']:
852 | roomjid = JID(loc, options.muc)
853 | print("\t %s" % roomjid.as_unicode())
854 | h = RoomHandler()
855 | self.muc.append(h)
856 | mucstate = self.roommgr.join(roomjid, nick, h)
857 | h.assign_state(mucstate)
858 |
859 |
860 | def get_version(self,iq):
861 | """Handler for jabber:iq:version queries.
862 |
863 | jabber:iq:version queries are not supported directly by PyXMPP, so the
864 | XML node is accessed directly through the libxml2 API. This should be
865 | used very carefully!"""
866 | iq=iq.make_result_response()
867 | q=iq.new_query("jabber:iq:version")
868 | q.newTextChild(q.ns(),"name","Echo component")
869 | q.newTextChild(q.ns(),"version","1.0")
870 | self.stream.send(iq)
871 | return True
872 |
873 | def message(self,stanza):
874 | """Message handler for the client.
875 | 
876 | Prints a short notice about each incoming message. Note
877 | that all message types but 'error' will be passed to the
878 | handler for 'normal' messages unless some dedicated handler
879 | processes them.
880 |
881 | :returns: `True` to indicate, that the stanza should not be processed
882 | any further."""
883 | subject=stanza.get_subject()
884 | body=stanza.get_body()
885 | t=stanza.get_type()
886 | print u'Message from %s received.' % unicode(stanza.get_from())
887 | return True
888 |
889 | def presence(self,stanza):
890 | """Handle 'available' (without 'type') and 'unavailable' presence."""
891 | msg=u"%s has become " % (stanza.get_from())
892 | t=stanza.get_type()
893 | if t=="unavailable":
894 | msg+=u"unavailable"
895 | else:
896 | msg+=u"available"
897 |
898 | show=stanza.get_show()
899 | if show:
900 | msg+=u"(%s)" % (show,)
901 |
902 | status=stanza.get_status()
903 | if status:
904 | msg+=u": "+status
905 | print msg
906 |
907 | def presence_control(self,stanza):
908 | """Handle subscription control stanzas -- acknowledge
909 | them."""
910 | msg=unicode(stanza.get_from())
911 | t=stanza.get_type()
912 | if t=="subscribe":
913 | msg+=u" has requested presence subscription."
914 | elif t=="subscribed":
915 | msg+=u" has accepted our presence subscription request."
916 | elif t=="unsubscribe":
917 | msg+=u" has canceled his subscription of our presence."
918 | elif t=="unsubscribed":
919 | msg+=u" has canceled our subscription of his presence."
920 |
921 | print msg
922 | p=stanza.make_accept_response()
923 | self.stream.send(p)
924 | return True
925 |
926 | def print_roster_item(self,item):
927 | if item.name:
928 | name=item.name
929 | else:
930 | name=u""
931 | print (u'%s "%s" subscription=%s groups=%s'
932 | % (unicode(item.jid), name, item.subscription,
933 | u",".join(item.groups)) )
934 |
935 | def roster_updated(self,item=None):
936 | if not item:
937 | print u"My roster:"
938 | for item in self.roster.get_items():
939 | self.print_roster_item(item)
940 | return
941 | print u"Roster item updated:"
942 | self.print_roster_item(item)
943 |
944 | # XMPP protocol is Unicode-based to properly display data received
945 | # _must_ convert it to local encoding or UnicodeException may be raised
946 | locale.setlocale(locale.LC_CTYPE,"")
947 | encoding=locale.getlocale()[1]
948 | if not encoding:
949 | encoding="us-ascii"
950 | sys.stdout=codecs.getwriter(encoding)(sys.stdout,errors="replace")
951 | sys.stderr=codecs.getwriter(encoding)(sys.stderr,errors="replace")
952 |
953 | p = optparse.OptionParser()
954 | p.add_option('-U', '--username', dest='username', help='user e.g. user@example.com', type="string", action="store")
955 | p.add_option('-R', '--resource', dest='resource', default="backend", help='e.g. backend', type="string", action="store")
956 | p.add_option('-P', '--password', dest='password', help='e.g. secret', type="string", action="store")
957 | p.add_option('-M', '--muc', dest='muc', help='conference.example.com', type="string", action="store")
958 | p.add_option('-C', '--channel', dest='channels', help='conference.example.com', type="string", action="append")
959 | p.add_option('-s', '--database-host', dest='database_host', help='localhost:5432', type="string", action="store")
960 | p.add_option('-d', '--database', dest='database', help='for example xmpp', type="string", action="store")
961 | p.add_option('-u', '--database-user', dest='database_user', help='for example xmpp', type="string", action="store")
962 | p.add_option('-p', '--database-password', dest='database_password', help='the database users password', type="string", action="store")
963 | p.add_option('-f', '--files-destination', dest='files', help='where to store new files', type="string", action="store")
964 | (options, args) = p.parse_args()
965 |
966 | if not options.username or not options.resource or not options.password:
967 | print("Missing credentials")
968 |
969 | if options.database_host and options.database and options.database_user and options.database_password:
970 | print("Connecting to the database")
971 | dbh = PgSQL.connect(host=options.database_host, database=options.database, user=options.database_user, password=options.database_password)
972 | dbh.autocommit = 1
973 | cursor = dbh.cursor()
974 | else:
975 | print("Not connecting to the database, are you sure?")
976 | dbh = None
977 |
978 | if not options.files:
979 | print("Not storing files, are you sure?")
980 |
981 | while True:
982 | print u"creating client... %s" % options.resource
983 | c=Client(JID(options.username + '/' + options.resource),options.password)
984 |
985 | print u"connecting..."
986 | c.connect()
987 |
988 | print u"looping..."
989 | try:
990 | # The Client class provides a basic "main loop" for the application.
991 | # Though, most applications would need to have their own loop and call
992 | # component.stream.loop_iter() from it whenever an event on
993 | # component.stream.fileno() occurs.
994 | c.loop(1)
995 | c.idle()
996 | except KeyboardInterrupt:
997 | print u"disconnecting..."
998 | c.disconnect()
999 | print u"exiting..."
1000 | break
1001 | except Exception,e:
1002 | import traceback
1003 | traceback.print_exc()
1004 | continue
1005 |
1006 |
1007 | # vi: sts=4 et sw=4
1008 |
--------------------------------------------------------------------------------
/dns/README.md:
--------------------------------------------------------------------------------
1 | # Bind configuration files for a Sinkhole DNS server
2 |
3 | See also http://X
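
The sinkhole works because the zone file `db.root.honeypot` (below) carries a wildcard A record that answers every name with 127.0.0.1. A quick sanity check of that record can be scripted; this is a sketch that parses the zone file text directly (the path in the usage comment is the one referenced in named.conf):

```python
def wildcard_a_records(zone_text):
    """Return the addresses of all wildcard ('*') A records in a zone file."""
    records = []
    for line in zone_text.splitlines():
        line = line.split(';', 1)[0].strip()  # strip zone-file comments
        fields = line.split()
        # matches records of the form: * IN A 127.0.0.1
        if len(fields) >= 4 and fields[0] == '*' and fields[2] == 'A':
            records.append(fields[3])
    return records

# Usage:
# with open('/etc/bind/db.root.honeypot') as f:
#     print(wildcard_a_records(f.read()))
```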
--------------------------------------------------------------------------------
/dns/db.root.honeypot:
--------------------------------------------------------------------------------
1 | ;
2 | ; Bind configuration file for sinkhole
3 | ;
4 | $TTL 10
5 | @ IN SOA localhost. root.localhost. (
6 | 1 ; Serial
7 | 10 ; Refresh
8 | 10 ; Retry
9 | 10 ; Expire
10 | 10 ) ; Negative Cache TTL
11 | ;
12 |
13 | IN NS localhost
14 | * IN A 127.0.0.1
15 |
--------------------------------------------------------------------------------
/dns/named.conf:
--------------------------------------------------------------------------------
1 | // This is the primary configuration file for the BIND DNS server named.
2 | //
3 | // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
4 | // structure of BIND configuration files in Debian, *BEFORE* you customize
5 | // this configuration file.
6 | //
7 | // If you are just adding zones, please do that in /etc/bind/named.conf.local
8 |
9 | include "/etc/bind/named.conf.options";
10 | include "/etc/bind/named.conf.local";
11 | #include "/etc/bind/named.conf.default-zones";
12 |
13 | zone "." {
14 | type master;
15 | file "/etc/bind/db.root.honeypot";
16 | };
17 |
18 |
19 |
--------------------------------------------------------------------------------
/dns/named.conf.options:
--------------------------------------------------------------------------------
1 | options {
2 | directory "/var/cache/bind";
3 |
4 | // forwarders {
5 | // 8.8.8.8;
6 | // };
7 |
8 | // dnssec-validation auto;
9 | recursion no;
10 | allow-transfer { none; };
11 |
12 | auth-nxdomain no; # conform to RFC1035
13 | // listen-on-v6 { any; };
14 | statistics-file "/var/log/named/named_stats.txt";
15 | memstatistics-file "/var/log/named/named_mem_stats.txt";
16 | version "9.9.1-P2";
17 | };
18 |
19 | logging{
20 |
21 | channel query_log {
22 | file "/var/log/named/query.log";
23 | severity info;
24 | print-time yes;
25 | print-severity yes;
26 | print-category yes;
27 | };
28 |
29 | category queries {
30 | query_log;
31 | };
32 | };
33 |
34 |
--------------------------------------------------------------------------------
/elk/README.md:
--------------------------------------------------------------------------------
1 | # ELK
2 |
3 | The Elasticsearch ELK stack (Elasticsearch, Logstash and Kibana) is an ideal solution for searching and analysing honeypot data.
4 |
5 | See http://www.vanimpe.eu/2014/12/13/using-elk-dashboard-honeypots/ for a detailed overview.
6 |
7 | 
8 | 
9 | 
10 | 
11 | 
12 | 
13 |
14 | # Dionaea
15 |
16 | Use the patch from dionaea/logsql.py to keep track of changes in the SQLite database.
17 | Make sure you first alter the SQLite database:
18 |
19 | ```
20 | sqlite> alter table connections add column id integer;
21 | ```
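
The same ALTER can be scripted so it is safe to run repeatedly; a minimal sqlite3 sketch (the database path in the usage comment is the dionaea default used elsewhere in this repo):

```python
import sqlite3

def add_id_column(db_path):
    """Add an 'id' integer column to the connections table if it is missing."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    # PRAGMA table_info returns one row per column; column name is field 1
    columns = [row[1] for row in cur.execute("PRAGMA table_info(connections)")]
    if 'id' not in columns:
        cur.execute("ALTER TABLE connections ADD COLUMN id integer")
        conn.commit()
    conn.close()

# Usage:
# add_id_column("/var/lib/dionaea/logsql.sqlite")
```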
22 |
23 | # Tips
24 |
25 | * Use "geoip.full.raw" to prevent the geoip string from being split into separate terms
26 | * Remove all existing logstash indexes: curl -XDELETE 'http://localhost:9200/logstash-*'
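
The index delete from the tip above can also be issued from Python; a sketch using only the standard library (Python 3 `urllib.request`). It only builds the request, so nothing is removed by accident:

```python
import urllib.request

def build_delete(host, index_pattern):
    """Build (but do not send) a DELETE request for an Elasticsearch index pattern."""
    url = "http://%s/%s" % (host, index_pattern)
    return urllib.request.Request(url, method="DELETE")

# Usage -- sending the request actually deletes the indexes, use with care:
# urllib.request.urlopen(build_delete("localhost:9200", "logstash-*"))
```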
27 |
28 | # ELK basic Setup
29 |
30 | ```
31 | mkdir /data
32 | cd /data
33 | wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
34 | tar zxvf elasticsearch-1.7.1.tar.gz
35 | ln -s elasticsearch-1.7.1 elasticsearch
36 | wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.tar.gz
37 | tar zxvf logstash-1.5.3.tar.gz
38 | ln -s logstash-1.5.3 logstash
39 | ```
40 | 
--------------------------------------------------------------------------------
/elk/TODO.md:
--------------------------------------------------------------------------------
1 | * Query Elasticsearch from CLI to get a list of IPs
2 | * Query Elasticsearch from CLI with a list of IPs to get hits (limit with timestamp and ports)
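
The first item can be sketched as a terms aggregation over the logstash indexes. The field name `src_ip` is an assumption and depends on the grok patterns in logstash.conf:

```python
import json

def unique_ip_query(field="src_ip", size=1000):
    """Build an Elasticsearch terms-aggregation body that returns unique IPs."""
    return {
        "size": 0,  # no hits, only the aggregation buckets
        "aggs": {
            "ips": {"terms": {"field": field, "size": size}}
        }
    }

# POST this body to http://localhost:9200/logstash-*/_search and read
# the IPs from response["aggregations"]["ips"]["buckets"].
print(json.dumps(unique_ip_query()))
```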
--------------------------------------------------------------------------------
/elk/_grokparsefailure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/_grokparsefailure.png
--------------------------------------------------------------------------------
/elk/conpot.singlelogline.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # Get conpot events from a sqlite or mysql database and print them to a logfile
4 | # Uses a temp file to keep track of last printed id
5 | #
6 | # Configuration:
7 | # Change LAST_CONNECTION_FILE, SQLITE_DB and database connection settings
8 | # Leave SQLITE_DB empty for mysql-db
9 | # Change LOGFILE or leave empty for output to screen
10 | # Change honeypot-network definitions
11 | #
12 | # Koen Van Impe
13 | # koen.vanimpe@cudeso.be @cudeso http://www.vanimpe.eu
14 | # 20141210
15 | #
16 |
17 | import os
18 | import sys
19 | import datetime
20 | import sqlite3
21 | import MySQLdb
22 |
23 | DSTIP="192.168.218.140"
24 |
25 | LAST_CONNECTION_FILE = "/tmp/conpot-singlelogline.id"
26 | LOGFILE="/var/log/elk-import/conpot-single.log"
27 |
28 | SQLITE_DB="/opt/myhoneypot/logs/conpot.db"
29 |
30 | DB_USER="conpot"
31 | DB_PASS="conpot"
32 | DB_DB="conpot"
33 | DB_HOST="localhost"
34 |
35 | connection_start = 0
36 | connection_id = 0
37 |
38 | if __name__ == "__main__":
39 |
40 | if os.path.isfile(LAST_CONNECTION_FILE):
41 | f = open(LAST_CONNECTION_FILE, 'r')
42 | f_content = f.read()
43 | f.close()
44 | if f_content and int(f_content) > 0:
45 | connection_start = int(f_content)
46 |
47 | if SQLITE_DB:
48 | db = sqlite3.connect(SQLITE_DB)
49 | else:
50 | db = MySQLdb.connect(host=DB_HOST, user=DB_USER, passwd=DB_PASS, db=DB_DB)
51 | cur = db.cursor()
52 |
53 | if LOGFILE:
54 | f_log = open(LOGFILE, 'a')
55 | cur.execute("SELECT * FROM events WHERE id > %s ORDER BY id ASC" % connection_start)
56 | for row in cur.fetchall() :
57 | if len(row) == 8:
58 | skip = 0
59 | else:
60 | skip = -1
61 | connection_id = row[0]
62 | sensor_id = row[1]
63 | session_id = row[2 + skip]
64 | timestamp = row[3 + skip]
65 | source = row[4 + skip].split(',')
66 | srcip = source[0][2:-1]
67 | srcport = source[1][:-1]
68 | protocol = row[5 + skip]
69 | request_raw = row[6 + skip].replace('\n', '|').replace('\r', '')
70 | response = row[7 + skip]
71 | if LOGFILE:
72 | f_log.write("%s : %s \t %s \t %s \t %s \t %s \t %s \t '%s' \n" % (timestamp, srcip, srcport, DSTIP, protocol, response, sensor_id, request_raw))
73 | else:
74 | print "%s : %s \t %s \t %s \t %s \t %s \t %s \t '%s' \n" % (timestamp, srcip, srcport, DSTIP, protocol, response, sensor_id, request_raw)
75 | db.close()
76 | if LOGFILE:
77 | f_log.close()
78 |
79 | if not(connection_id and connection_id > 0):
80 | connection_id = connection_start
81 | f = open(LAST_CONNECTION_FILE, 'w')
82 | f.write(str(connection_id))
83 | f.close()
84 |
--------------------------------------------------------------------------------
/elk/dionaea-singlelogline.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # Get dionaea events from sqlite database and print to logfile
4 | # Uses a temp file to keep track of last printed id
5 | #
6 | # Configuration:
7 | # Change SQLITE_DB and LAST_CONNECTION_FILE
8 | # Change LOGFILE or leave empty for output to screen
9 | #
10 | # Koen Van Impe
11 | # koen.vanimpe@cudeso.be @cudeso http://www.vanimpe.eu
12 | # 20141206
13 | #
14 |
15 | import os
16 | import sys
17 | import datetime
18 | import sqlite3
19 |
20 | SQLITE_DB = "/var/lib/dionaea/logsql.sqlite"
21 | LAST_CONNECTION_FILE = "/tmp/dionaea-singlelogline.id"
22 | LOGFILE="/var/log/elk-import/dionaea-single.log"
23 | IGNORE_SRC=[ "127.0.0.1" ]
24 |
25 | connection_start = 0
26 | connection_id = 0
27 |
28 | if __name__ == "__main__":
29 |
30 | if os.path.isfile(SQLITE_DB):
31 |
32 | if os.path.isfile(LAST_CONNECTION_FILE):
33 | f = open(LAST_CONNECTION_FILE, 'r')
34 | f_content = f.read()
35 | f.close()
36 | if f_content and int(f_content) > 0:
37 | connection_start = int(f_content)
38 |
39 | conn = sqlite3.connect(SQLITE_DB)
40 | c = conn.cursor()
41 |
42 | if LOGFILE:
43 | f_log = open(LOGFILE, 'a')
44 | for row in c.execute("SELECT * FROM connections WHERE connection > %s ORDER BY connection ASC" % connection_start):
45 | timestamp = datetime.datetime.fromtimestamp(row[4]).strftime('%Y-%m-%d %H:%M:%S')
46 | connection_type = row[1]
47 | protocol = row[2]
48 | connection_protocol = row[3]
49 | dst_ip = row[7]
50 | dst_port = row[8]
51 | src_ip = row[9]
52 | src_port = row[11]
53 | hostname = row[10]
54 | connection_id = row[0]
55 | if src_ip in IGNORE_SRC:
56 | continue
57 | if connection_protocol == "p0fconnection":
58 | continue
59 | if LOGFILE:
60 | f_log.write("%s : %-10s \t %-10s \t %s \t %s \t %s \t %s \t %s \t %s\n" % (timestamp, connection_type, connection_protocol, protocol, src_ip, src_port, dst_ip, dst_port, hostname))
61 | else:
62 | print "%s : %-10s \t %-10s \t %s \t %s \t %s \t %s \t %s \t %s " % (timestamp, connection_type, connection_protocol, protocol, src_ip, src_port, dst_ip, dst_port, hostname)
63 | conn.close()
64 | if LOGFILE:
65 | f_log.close()
66 |
67 | if not(connection_id and connection_id > 0):
68 | connection_id = connection_start
69 | f = open(LAST_CONNECTION_FILE, 'w')
70 | f.write(str(connection_id))
71 | f.close()
72 | else:
73 | print "Sqlite DB not found : %s " % SQLITE_DB
74 |
--------------------------------------------------------------------------------
/elk/elk-import.logrotate:
--------------------------------------------------------------------------------
1 | # Logrotate script for ELK imports
2 | #
3 | # run with "logrotate /elk/elk-import.logrotate"
4 | su root syslog
5 |
6 | /var/log/elk-import/conpot-single.log /var/log/elk-import/glastopf-single.log /var/log/elk-import/dionaea-single.log {
7 | daily
8 | notifempty
9 | rotate 3
10 | missingok
11 | compress
12 | delaycompress
13 | create 660 root root
14 | dateext
15 | }
16 |
--------------------------------------------------------------------------------
/elk/glastopf-singlelogline.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # Get glastopf events from a sqlite or mysql database and print them to a logfile
4 | # Uses a temp file to keep track of last printed id
5 | #
6 | # Configuration:
7 | # Change LAST_CONNECTION_FILE, SQLITE_DB and database connection settings
8 | # Leave SQLITE_DB empty for mysql-db
9 | # Change LOGFILE or leave empty for output to screen
10 | # Change honeypot-network definitions (DSTP, DSTPORT, PROTOCOL)
11 | #
12 | # Koen Van Impe
13 | # koen.vanimpe@cudeso.be @cudeso http://www.vanimpe.eu
14 | # 20141210
15 | #
16 |
17 | import os
18 | import sys
19 | import datetime
20 | import sqlite3
21 | import MySQLdb
22 |
23 | DSTIP="192.168.218.140"
24 | DSTPORT="80"
25 | PROTOCOL="tcp"
26 |
27 | LAST_CONNECTION_FILE = "/tmp/glastopf-singlelogline.id"
28 | LOGFILE="/var/log/elk-import/glastopf-single.log"
29 |
30 | SQLITE_DB="/opt/myhoneypot/db/glastopf.db"
31 |
32 | DB_USER="glastopf"
33 | DB_PASS="glastopf"
34 | DB_DB="glastopf"
35 | DB_HOST="localhost"
36 |
37 | connection_start = 0
38 | connection_id = 0
39 |
40 | if __name__ == "__main__":
41 |
42 | if os.path.isfile(LAST_CONNECTION_FILE):
43 | f = open(LAST_CONNECTION_FILE, 'r')
44 | f_content = f.read()
45 | f.close()
46 | if f_content and int(f_content) > 0:
47 | connection_start = int(f_content)
48 |
49 | if SQLITE_DB:
50 | db = sqlite3.connect(SQLITE_DB)
51 | else:
52 | db = MySQLdb.connect(host=DB_HOST, user=DB_USER, passwd=DB_PASS, db=DB_DB)
53 | cur = db.cursor()
54 |
55 | if LOGFILE:
56 | f_log = open(LOGFILE, 'a')
57 | cur.execute("SELECT * FROM events WHERE id > %s ORDER BY id ASC" % connection_start)
58 | for row in cur.fetchall() :
59 | connection_id = row[0]
60 | timestamp = row[1]
61 | source = row[2].split(':')
62 | srcip = source[0]
63 | srcport = source[1]
64 | request_url = row[3]
65 | request_raw = row[4].replace('\n', '|').replace('\r', '')
66 | request_type = request_raw[0:4].strip()
67 | if not(request_type == "POST" or request_type == "GET" or request_type == "HEAD"):
68 | request_type = "Unknown"
69 | pattern = row[5]
70 | filename = row[6]
71 | if LOGFILE:
72 | f_log.write("%s : %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t '%s' \n" % (timestamp, srcip, srcport, DSTIP, DSTPORT, PROTOCOL, request_url, pattern, filename, request_type, request_raw))
73 | else:
74 | print "%s : %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t '%s' \n" % (timestamp, srcip, srcport, DSTIP, DSTPORT, PROTOCOL, request_url, pattern, filename, request_type, request_raw)
75 | db.close()
76 | if LOGFILE:
77 | f_log.close()
78 |
79 | if not(connection_id and connection_id > 0):
80 | connection_id = connection_start
81 | f = open(LAST_CONNECTION_FILE, 'w')
82 | f.write(str(connection_id))
83 | f.close()
84 |
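For reference, the single-log-line format emitted above is what the logstash grok filter in elk/logstash.conf later splits back into fields. A minimal round-trip sketch of that format (all sample values below are invented for illustration):

```python
# Sketch of the tab-separated line emitted above, and how to split it back.
# Every sample value here is made up for illustration.
fields = ("2014-12-31 10:00:00", "198.51.100.7", "41234", "192.168.218.140",
          "80", "tcp", "/phpmyadmin", "unknown", "None", "GET",
          "GET /phpmyadmin HTTP/1.1|Host: example")
line = "%s : %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t %s \t '%s' \n" % fields

# Reverse the format: timestamp comes before the first " : ",
# the remaining ten fields are tab-separated.
timestamp, rest = line.split(" : ", 1)
parts = [p.strip() for p in rest.split("\t")]
srcip, srcport, dstip, dstport, protocol = parts[0:5]
request_url, pattern, filename, request_type = parts[5:9]
request_raw = parts[9].strip("'")
```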
--------------------------------------------------------------------------------
/elk/hp1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp1.png
--------------------------------------------------------------------------------
/elk/hp2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp2.png
--------------------------------------------------------------------------------
/elk/hp3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp3.png
--------------------------------------------------------------------------------
/elk/hp4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp4.png
--------------------------------------------------------------------------------
/elk/hp5.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp5.png
--------------------------------------------------------------------------------
/elk/hp6.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cudeso/cudeso-honeypot/c2c9630756ff003e3ee22153c30428f60551a134/elk/hp6.png
--------------------------------------------------------------------------------
/elk/inspect-to-csv/README.md:
--------------------------------------------------------------------------------
1 | # Convert the 'Inspect' window in Kibana to CSV output
2 |
3 | Python script that reads the saved output of the Kibana 'Inspect' window (the `request_file` variable points to it), queries the Elasticsearch server and converts the response to CSV.
4 | Does not work on histogram panels; it extracts the "facet" data.
5 |
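The facet-to-CSV step can be sketched in isolation. The snippet below hand-builds a response in the old Elasticsearch "facets" format this script targets; all terms and counts in the sample are invented:

```python
import json

# Hand-made sample of an Elasticsearch "terms" facet response (values invented).
response_text = json.dumps({
    "facets": {"terms": {"terms": [
        {"term": "glastopf", "count": 42},
        {"term": "kippo", "count": 7},
    ]}}
})

# Same extraction logic as the script: one "count,term" CSV line per facet entry.
response_json = json.loads(response_text)
csv_lines = []
if "facets" in response_json and "terms" in response_json["facets"]:
    for t in response_json["facets"]["terms"]["terms"]:
        csv_lines.append("%s,%s" % (t["count"], t["term"]))
```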
--------------------------------------------------------------------------------
/elk/inspect-to-csv/inspect-to-csv.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | #
3 | # Process the 'Inspect' command from Kibana and convert to CSV
4 | #
5 | # Save the 'inspect' window output in the variable 'request_file'
6 | # It will extract the URL and request and return CSV output
7 | # Does not work on histogram ...
8 | #
9 | # Koen Van Impe on 2014-12-31
10 | # koen dot vanimpe at cudeso dot be
11 | # license New BSD : http://www.vanimpe.eu/license
12 | #
13 | #
14 |
15 | import requests
16 | import json
17 |
18 | request_file = "request.inspect"
19 |
20 | # Read the request
21 | f = open( request_file, "r")
22 | request = f.read()
23 | f.close()
24 |
25 | # Split URL and request
26 | curl_request = request.split("' -d '")
27 | p = curl_request[0].find("-XGET ")
28 | url = curl_request[0][p+7:]       # p+7 skips "-XGET " plus the opening quote
29 | request = curl_request[1][0:-1]   # drop the trailing quote of the -d argument
30 |
31 | # Get the response from the Elasticsearch server
32 | response = requests.post(url, data=request)
33 | response_json = json.loads( response.text )
34 |
35 | # Print out all the master elements
36 | #for el in response_json:
37 | # print el
38 |
39 | if "facets" in response_json:
40 | if "terms" in response_json["facets"]:
41 | terms = response_json["facets"]["terms"]["terms"]
42 | for t in terms:
43 | print "%s,%s" % (t["count"], t["term"])
44 |
45 | if "hits" in response_json:
46 | print "Hits"
47 | if "hits" in response_json["hits"]:
48 | for h in response_json["hits"]["hits"]:
49 | print h
50 |
51 | # Print the full response
52 | #print response.text
--------------------------------------------------------------------------------
/elk/inspect-to-csv/request.inspect:
--------------------------------------------------------------------------------
1 | curl -XGET 'http://127.0.0.1:9200/_all/_search?pretty' -d '{
2 | "facets": {
3 | "terms": {
4 | "terms": {
5 | "field": "type",
6 | "size": 10,
7 | "order": "count",
8 | "exclude": []
9 | },
10 | "facet_filter": {
11 | "fquery": {
12 | "query": {
13 | "filtered": {
14 | "query": {
15 | "bool": {
16 | "should": [
17 | {
18 | "query_string": {
19 | "query": "basetype:\"honeypot\""
20 | }
21 | },
22 | {
23 | "query_string": {
24 | "query": "type:\"glastopf\""
25 | }
26 | },
27 | {
28 | "query_string": {
29 | "query": "type:\"conpot\""
30 | }
31 | },
32 | {
33 | "query_string": {
34 | "query": "type:\"dionaea\""
35 | }
36 | },
37 | {
38 | "query_string": {
39 | "query": "type:\"kippo\""
40 | }
41 | }
42 | ]
43 | }
44 | },
45 | "filter": {
46 | "bool": {
47 | "must": [
48 | {
49 | "range": {
50 | "@timestamp": {
51 | "from": 1417440142530,
52 | "to": 1420032142531
53 | }
54 | }
55 | }
56 | ]
57 | }
58 | }
59 | }
60 | }
61 | }
62 | }
63 | }
64 | },
65 | "size": 0
66 | }'
--------------------------------------------------------------------------------
/elk/logstash.conf:
--------------------------------------------------------------------------------
1 | input {
2 | file {
3 | path => "/var/log/kippo/kippo.log"
4 | type => "kippo"
5 | }
6 | file {
7 | path => "/var/log/elk-import/dionaea-single.log"
8 | type => "dionaea"
9 | }
10 | sqlite {
11 | path => "/var/lib/dionaea/logsql.sqlite"
12 | exclude_tables => [ 'auth_group','logins','auth_group_permissions','mssql_commands','auth_permission','mssql_fingerprints','auth_user','mysql_command_args','auth_user_groups','mysql_command_ops','auth_user_user_permissions','mysql_commands','offers','dcerpcbinds','p0fs','dcerpcrequests','resolves','dcerpcserviceops','since_table','dcerpcservices','sip_addrs','django_content_type','sip_commands','django_migrations','sip_sdp_connectiondatas','django_session','sip_sdp_medias','django_site','sip_sdp_origins','downloads','sip_vias','emu_profiles','virustotals','emu_services','virustotalscans','emu_services_old']
13 | type => "dionaea22222"
14 | }
15 | file {
16 | path => "/var/log/elk-import/glastopf-single.log"
17 | type => "glastopf"
18 | }
19 | file {
20 | path => "/var/log/elk-import/conpot-single.log"
21 | type => "conpot"
22 | }
23 | file {
24 | path => "/var/log/named/query.log"
25 | type => "dnshpot"
26 | }
27 | file {
28 | path => "/var/log/ulog/syslogemu.log"
29 | type => "iptables"
30 | }
31 | }
32 |
33 | filter {
34 |
35 | ############################################################
36 | # Kippo honeypot
37 | #
38 | if [type] == "kippo" {
39 |
40 | if ( [message] =~ "connection lost" or
41 | [message] =~ "\[HoneyPotTransport" or
42 | [message] =~ "failed auth password" or
43 | [message] =~ "unauthorized login" or
44 | [message] =~ "\treason: " or
45 | [message] =~ "\[SSHChannel session" or
46 | [message] =~ "\[SSHService ssh-connection" or
47 | [message] =~ "\] starting service ssh-connection" or
48 | [message] =~ "\[-\] ") {
49 | drop {}
50 | }
51 | else if ( [message] =~ "\[SSHService ssh-userauth on HoneyPotTransport" and [message] =~ " login attempt ") {
52 | grok {
53 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time}%{ISO8601_TIMEZONE} \[SSHService ssh-userauth on HoneyPotTransport,%{DATA:kippo-session},%{IP:srcip}\] login attempt \[%{DATA:kippo-username}/%{DATA:kippo-password}\]" ]
54 | }
55 | mutate {
56 | add_field => [ "kippo-type", "credentials" ]
57 | strip => [ "kippo-session", "srcip" ]
58 | }
59 | }
60 | else if ( [message] =~ "\[SSHService ssh-userauth on HoneyPotTransport" and [message] =~ " trying auth ") {
61 | grok {
62 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time}%{ISO8601_TIMEZONE} \[SSHService ssh-userauth on HoneyPotTransport,%{DATA:kippo-session},%{IP:srcip}\] %{DATA:kippo-username} trying auth %{WORD:kippo-authmethod}" ]
63 | }
64 | mutate {
65 | add_field => [ "kippo-type", "authentication-method" ]
66 | strip => [ "kippo-session", "srcip", "kippo-authmethod" ]
67 | }
68 | }
69 | else if ( [message] =~ "\[kippo.core.ssh.HoneyPotSSHFactory\] New connection:") {
70 | grok {
71 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time}%{ISO8601_TIMEZONE} \[kippo.core.ssh.HoneyPotSSHFactory\] New connection: %{IP:srcip}:%{DATA:srcport} \(%{IP:dstip}:%{DATA:dstport}\) \[session: %{DATA:kippo-session}\]" ]
72 | }
73 | mutate {
74 | add_field => [ "kippo-type", "connection" ]
75 | strip => [ "kippo-session", "srcip", "dstip", "srcport", "dstport" ]
76 | }
77 | }
78 |
79 | mutate {
80 | add_field => [ "timestamp", "%{year}-%{month}-%{day} %{time}" ]
81 | }
82 | date {
83 | match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
84 | }
85 | }
86 |
87 | ############################################################
88 | # Dionaea honeypot
89 | #
90 | if [type] == "dionaea22222" {
91 | mutate {
92 | add_field => [ "dstip", "%{local_host}" ]
93 | add_field => [ "srcip", "%{remote_host}" ]
94 | add_field => [ "dstport", "%{local_port}" ]
95 | add_field => [ "srcport", "%{remote_port}" ]
96 | convert => [ "dstport", "integer" ]
97 | convert => [ "srcport", "integer" ]
98 | }
99 | }
100 |
101 | if [type] == "dionaea" {
102 | grok {
103 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time} : %{DATA:connection_type} \t %{DATA:connection_protocol}\t%{DATA:protocol}\t %{IP:srcip} \t%{DATA:srcport} \t %{IP:dstip} \t %{DATA:dstport} \t %{DATA:hostname}" ]
104 | }
105 | mutate {
106 | strip => [ "connection_protocol", "connection_type", "protocol", "srcport" , "dstport", "hostname" ]
107 | }
108 | mutate {
109 | add_field => [ "timestamp", "%{year}-%{month}-%{day} %{time}" ]
110 | }
111 | date {
112 | match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
113 | }
114 | }
115 |
116 | ############################################################
117 | # Glastopf honeypot
118 | #
119 | if [type] == "glastopf" {
120 | grok {
121 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time} : %{IP:srcip} \t%{DATA:srcport} \t %{IP:dstip} \t %{DATA:dstport} \t %{DATA:protocol} \t %{DATA:request_url} \t %{DATA:pattern} \t %{DATA:filename} \t %{DATA:request_method} \t '%{DATA:request_raw}' " ]
122 | }
123 | mutate {
124 | strip => [ "srcip", "dstip", "protocol", "srcport" , "dstport", "pattern", "filename", "request_url" ]
125 | }
126 | mutate {
127 | add_field => [ "timestamp", "%{year}-%{month}-%{day} %{time}" ]
128 | }
129 | date {
130 | match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
131 | }
132 | }
133 |
134 | ############################################################
135 | # Conpot honeypot
136 | #
137 | if [type] == "conpot" {
138 | grok {
139 | match => [ "message", "%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time} : %{IP:srcip} \t%{DATA:srcport} \t %{IP:dstip} \t %{DATA:request_protocol} \t %{DATA:response_code} \t %{DATA:sensor_id} \t '%{DATA:request_raw}' " ]
140 | }
141 | mutate {
142 | strip => [ "srcip", "dstip", "request_protocol", "srcport" , "response_code", "sensor_id" ]
143 | }
144 | mutate {
145 | add_field => [ "timestamp", "%{year}-%{month}-%{day} %{time}" ]
146 | }
147 | date {
148 | match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
149 | }
150 | }
151 |
152 |
153 | ############################################################
154 | # DNS honeypot
155 | #
156 | if [type] == "dnshpot" {
157 | grok {
158 | match => [ "message", "%{MONTHDAY:day}-%{MONTH:month}-%{YEAR:year} %{TIME:time} queries: info: client %{IP:srcip}#%{DATA:srcport}%{SPACE}\(%{DATA:hostname}\): query: %{DATA:hostname2} %{DATA:querytype3} %{DATA:querytype} %{DATA:querytype2} \(%{IP:dstip}\)" ]
159 | }
160 | mutate {
161 | add_field => [ "dstport", "53" ]
162 | }
163 | mutate {
164 | strip => [ "srcip", "dstip", "hostname", "srcport" , "hostname2", "querytype", "querytype2" ]
165 | }
166 | mutate {
167 | add_field => [ "timestamp", "%{day}-%{month}-%{year} %{time}" ]
168 | }
169 | date {
170 | match => [ "timestamp", "dd-MMM-YYYY HH:mm:ss.SSS" ]
171 | }
172 | }
173 |
174 | ############################################################
175 | # IPtables / network tracking
176 | #
177 | if [type] == "iptables" {
178 | grok {
179 | match => [ "message", "%{MONTH:month} %{MONTHDAY:day} %{TIME:time} %{HOSTNAME:hostname} IN=%{WORD:incoming_interface} OUT= MAC=(?\S+) SRC=%{IP:srcip} DST=%{IP:dstip} LEN=%{DATA:len} TOS=%{DATA:tos} PREC=%{DATA:prec} TTL=%{DATA:ttl} ID=%{DATA:id}(?:\sDF)? PROTO=%{DATA:protocol} SPT=%{DATA:srcport} DPT=%{DATA:dstport} %{DATA:remainder}"]
180 | }
181 | mutate {
182 | strip => [ "srcip", "dstip", "hostname", "srcport" , "dstport" ]
183 | }
184 | mutate {
185 | add_field => [ "timestamp", "%{month}-%{day} %{time}" ]
186 | }
187 | date {
188 | match => [ "timestamp", "MMM-dd HH:mm:ss" ]
189 | }
190 | }
191 |
192 | ############################################################
193 | # Filters to apply on all honeypot data
194 | #
195 | if ( [type] == "glastopf" or [type] == "dionaea22222" or [type] == "dionaea" or [type] == "conpot" or [type] == "kippo" or [type] == "dnshpot" or [type] == "iptables" ) {
196 | mutate {
197 | add_field => [ "basetype", "honeypot" ]
198 | }
199 | geoip {
200 | source => "srcip"
201 | target => "geoip"
202 | database =>"/var/www/db/GeoLiteCity.dat"
203 | add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
204 | add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
205 | }
206 | mutate {
207 | convert => [ "[geoip][coordinates]", "float" ]
208 | }
209 | geoip {
210 | source => "srcip"
211 | target => "geoip"
212 | database =>"/var/www/db/GeoIPASNum.dat"
213 | add_field => [ "[geoip][full]", "%{[geoip][number]} %{[geoip][asn]}" ]
214 | }
215 | }
216 | }
217 |
218 | output {
219 | elasticsearch {
220 | host => localhost
221 | }
222 | stdout { codec => rubydebug }
223 | }
224 |
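The grok patterns above are hard to eyeball. As a sanity check, a rough plain-Python equivalent of the kippo "New connection" pattern, run against a hypothetical log line (all addresses and the session id are invented), looks like:

```python
import re

# Rough Python equivalent of the kippo "New connection" grok pattern above.
# The sample line and every address in it are hypothetical.
pattern = re.compile(
    r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2})\S* "
    r"\[kippo\.core\.ssh\.HoneyPotSSHFactory\] New connection: "
    r"(?P<srcip>\d+\.\d+\.\d+\.\d+):(?P<srcport>\d+) "
    r"\((?P<dstip>\d+\.\d+\.\d+\.\d+):(?P<dstport>\d+)\) "
    r"\[session: (?P<session>\d+)\]"
)

line = ("2014-12-31 10:15:42+0100 [kippo.core.ssh.HoneyPotSSHFactory] "
        "New connection: 203.0.113.9:51234 (192.168.218.140:2222) [session: 17]")
fields = pattern.match(line).groupdict()
```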
--------------------------------------------------------------------------------
/elk/query_ELK.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | import urllib2
4 | import json
5 |
6 | url="http://192.168.218.140:9200/_search?q=geoip.country_code2:be"
7 |
8 | req = urllib2.Request(url)
9 | out = urllib2.urlopen(req)
10 | data = out.read()
11 | print data
12 |
13 |
14 |
15 | {"filtered":{"query":{"bool":{"should":[{"query_string":{"query":"*"}}]}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"from":1418607243883,"to":1419212043883}}}]}}}}
--------------------------------------------------------------------------------
/glastopf/.gitignore:
--------------------------------------------------------------------------------
1 | glastopf/*
2 | BFR/*
3 | data/*
4 |
--------------------------------------------------------------------------------
/kippo/kippo.cfg:
--------------------------------------------------------------------------------
1 | #
2 | # Kippo configuration file (kippo.cfg)
3 | #
4 |
5 | [honeypot]
6 |
7 | # IP addresses to listen for incoming SSH connections.
8 | #
9 | # (default: 0.0.0.0) = any address
10 | #ssh_addr = 0.0.0.0
11 |
12 | # Port to listen for incoming SSH connections.
13 | #
14 | # (default: 2222)
15 | ssh_port = 2222
16 |
17 | # Hostname for the honeypot. Displayed by the shell prompt of the virtual
18 | # environment.
19 | #
20 | # (default: svr03)
21 | hostname = svr03
22 |
23 | # Directory in which to save log files.
24 | #
25 | # (default: log)
26 | log_path = log
27 |
28 | # Directory in which to save downloaded (malware) files.
29 | #
30 | # (default: dl)
31 | download_path = dl
32 |
33 | # Maximum file size (in bytes) for downloaded files to be stored in 'download_path'.
34 | # A value of 0 means no limit. If the file size is known to be too big from the start,
35 | # the file will not be stored on disk at all.
36 | #
37 | # (default: 0)
38 | #download_limit_size = 10485760
39 |
40 | # Directory in which virtual file contents are kept.
41 | #
42 | # This is only used by commands like 'cat' to display the contents of files.
43 | # Adding files here is not enough for them to appear in the honeypot - the
44 | # actual virtual filesystem is kept in filesystem_file (see below)
45 | #
46 | # (default: honeyfs)
47 | contents_path = honeyfs
48 |
49 | # File in the python pickle format containing the virtual filesystem.
50 | #
51 | # This includes the filenames, paths, permissions for the whole filesystem,
52 | # but not the file contents. This is created by the createfs.py utility from
53 | # a real template linux installation.
54 | #
55 | # (default: fs.pickle)
56 | filesystem_file = fs.pickle
57 |
58 | # Directory for miscellaneous data files, such as the password database.
59 | #
60 | # (default: data_path)
61 | data_path = data
62 |
63 | # Directory for creating simple commands that only output text.
64 | #
65 | # The command must be placed under this directory with the proper path, such
66 | # as:
67 | # txtcmds/usr/bin/vi
68 | # The contents of the file will be the output of the command when run inside
69 | # the honeypot.
70 | #
71 | # In addition to this, the file must exist in the virtual
72 | # filesystem {filesystem_file}
73 | #
74 | # (default: txtcmds)
75 | txtcmds_path = txtcmds
76 |
77 | # Public and private SSH key files. If these don't exist, they are created
78 | # automatically.
79 | rsa_public_key = data/ssh_host_rsa_key.pub
80 | rsa_private_key = data/ssh_host_rsa_key
81 | dsa_public_key = data/ssh_host_dsa_key.pub
82 | dsa_private_key = data/ssh_host_dsa_key
83 |
84 | # Enables passing commands using ssh execCommand
85 | # e.g. ssh root@localhost
86 | #
87 | # (default: false)
88 | exec_enabled = true
89 |
90 | # IP address to bind to when opening outgoing connections. Used exclusively by
91 | # the wget command.
92 | #
93 | # (default: not specified)
94 | #out_addr = 0.0.0.0
95 |
96 | # Sensor name used to identify this honeypot instance. Used by the database
97 | # logging modules such as mysql.
98 | #
99 | # If not specified, the logging modules will instead use the IP address of the
100 | # connection as the sensor name.
101 | #
102 | # (default: not specified)
103 | #sensor_name=myhostname
104 |
105 | # Fake address displayed as the address of the incoming connection.
106 | # This doesn't affect logging, and is only used by honeypot commands such as
107 | # 'w' and 'last'
108 | #
109 | # If not specified, the actual IP address is displayed instead (default
110 | # behaviour).
111 | #
112 | # (default: not specified)
113 | #fake_addr = 192.168.66.254
114 |
115 | # SSH Version String
116 | #
117 | # Use this to disguise your honeypot from a simple SSH version scan
118 | # Frequent examples (found experimentally by scanning ISPs):
119 | # SSH-2.0-OpenSSH_5.1p1 Debian-5
120 | # SSH-1.99-OpenSSH_4.3
121 | # SSH-1.99-OpenSSH_4.7
122 | # SSH-1.99-Sun_SSH_1.1
123 | # SSH-2.0-OpenSSH_4.2p1 Debian-7ubuntu3.1
124 | # SSH-2.0-OpenSSH_4.3
125 | # SSH-2.0-OpenSSH_4.6
126 | # SSH-2.0-OpenSSH_5.1p1 Debian-5
127 | # SSH-2.0-OpenSSH_5.1p1 FreeBSD-20080901
128 | # SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu5
129 | # SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu6
130 | # SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7
131 | # SSH-2.0-OpenSSH_5.5p1 Debian-6
132 | # SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze1
133 | # SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2
134 | # SSH-2.0-OpenSSH_5.8p2_hpn13v11 FreeBSD-20110503
135 | # SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
136 | # SSH-2.0-OpenSSH_5.9
137 | #
138 | # (default: "SSH-2.0-OpenSSH_5.1p1 Debian-5")
139 | ssh_version_string = SSH-2.0-OpenSSH_4.2p1 Debian-7ubuntu3.1
140 |
141 | # Banner file to be displayed before the first login attempt.
142 | #
143 | # (default: not specified)
144 | #banner_file =
145 |
146 | # Session management interface.
147 | #
148 | # This is a telnet based service that can be used to interact with active
149 | # sessions. Disabled by default.
150 | #
151 | # (default: false)
152 | interact_enabled = false
153 | # (default: 5123)
154 | interact_port = 5123
155 |
156 | # MySQL logging module
157 | #
158 | # Database structure for this module is supplied in doc/sql/mysql.sql
159 | #
160 | # To enable this module, remove the comments below, including the
161 | # [database_mysql] line.
162 |
163 | #[database_mysql]
164 | #host = localhost
165 | #database = kippo
166 | #username = root
167 | #password =
168 | #port = 3306
169 |
170 | # XMPP Logging
171 | #
172 | # Log to an xmpp server.
173 | # For a detailed explanation on how this works, see:
174 | #
175 | # To enable this module, remove the comments below, including the
176 | # [database_xmpp] line.
177 |
178 | #[database_xmpp]
179 | #server = sensors.carnivore.it
180 | #user = anonymous@sensors.carnivore.it
181 | #password = anonymous
182 | #muc = dionaea.sensors.carnivore.it
183 | #signal_createsession = kippo-events
184 | #signal_connectionlost = kippo-events
185 | #signal_loginfailed = kippo-events
186 | #signal_loginsucceeded = kippo-events
187 | #signal_command = kippo-events
188 | #signal_clientversion = kippo-events
189 | #debug=true
190 |
191 | # Text based logging module
192 | #
193 | # While this is a database logging module, it actually just creates a simple
194 | # text based log. This may not have much purpose, if you're fine with the
195 | # default text based logs generated by kippo in log/
196 | #
197 | # To enable this module, remove the comments below, including the
198 | # [database_textlog] line.
199 |
200 | #[database_textlog]
201 | #logfile = kippo-textlog.log
202 |
--------------------------------------------------------------------------------
/kippo/kippo.logrotate:
--------------------------------------------------------------------------------
1 | /var/log/kippo/*.log {
2 | notifempty
3 | missingok
4 | rotate 28
5 | daily
6 | delaycompress
7 | compress
8 | create 660 kippo kippo
9 | dateext
10 | }
11 |
12 |
--------------------------------------------------------------------------------
/kippo/kippo/start.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 |
3 | set -e
4 |
5 | cd "$(dirname "$0")"
6 |
7 | if [ "$1" != "" ]
8 | then
9 | VENV="$1"
10 |
11 | if [ ! -d "$VENV" ]
12 | then
13 | echo "The specified virtualenv \"$VENV\" was not found!"
14 | exit 1
15 | fi
16 |
17 | if [ ! -f "$VENV/bin/activate" ]
18 | then
19 | echo "The specified virtualenv \"$VENV\" is missing bin/activate!"
20 | exit 2
21 | fi
22 |
23 | echo "Activating virtualenv \"$VENV\""
24 | . $VENV/bin/activate
25 | fi
26 |
27 | twistd --version
28 |
29 | echo "Starting kippo in the background..."
30 | twistd -y kippo.tac -l /var/log/kippo/kippo.log --pidfile kippo.pid
31 | #twistd -y kippo.tac -l log/kippo.log --pidfile kippo.pid
32 |
--------------------------------------------------------------------------------
/p0f/p0f_init.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | ### BEGIN INIT INFO
2 | # Provides: p0f
3 | # Required-Start: $all
4 | # Required-Stop:
5 | # Default-Start: 2 3 4 5
6 | # Default-Stop: 0 1 6
7 | # Short-Description: p0f
8 | # Description: p0f
9 | # Passive OS Fingerprinting
10 | ### END INIT INFO
11 |
12 | # Copied from
13 | # https://raw.githubusercontent.com/zam89/maduu/master/init/p0f
14 |
15 | # Using the lsb functions to perform the operations.
16 | . /lib/lsb/init-functions
17 | # Process name ( For display )
18 | NAME=p0f
19 | DAEMON=/usr/sbin/p0f
20 | PIDFILE=/var/run/p0f.pid
21 | SOCK=/var/run/p0f.sock
22 | CHROOT_USER=dionaea # same user/group
23 | NETWORK_IF=any
24 |
25 | PARAMETERS="-u $CHROOT_USER -i $NETWORK_IF -Q $SOCK -q -l -d -o /var/log/p0f.log"
26 |
27 | # If the daemon is not there, then exit.
28 | test -x $DAEMON || exit 5
29 |
30 | case $1 in
31 | start)
32 | # Check that the PID file exists and check the actual status of the process
33 | if [ -e $PIDFILE ]; then
34 | status_of_proc -p $PIDFILE $DAEMON "$NAME process" && status="0" || status="$?"
35 | # If the status is SUCCESS then we don't need to start again.
36 | if [ $status = "0" ]; then
37 | exit 0 # already running, nothing to do
38 | fi
39 | fi
40 | # Start the daemon.
41 | log_daemon_msg "Starting" "$NAME"
42 | # Start the daemon with the help of start-stop-daemon
43 | # Log the message appropriately
44 | if start-stop-daemon --start --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON -- $PARAMETERS; then
45 | PID=`pidof -s p0f`
46 | if [ $PID ] ; then
47 | echo $PID >$PIDFILE
48 | fi
49 | log_end_msg 0
50 | else
51 | log_end_msg 1
52 | fi
53 | sudo chown $CHROOT_USER:$CHROOT_USER $SOCK
54 | ;;
55 | stop)
56 | # Stop the daemon.
57 | if [ -e $PIDFILE ]; then
58 | status_of_proc -p $PIDFILE $DAEMON "Stopping $NAME" && status="0" || status="$?"
59 | if [ "$status" = 0 ]; then
60 | start-stop-daemon --stop --quiet --oknodo --pidfile $PIDFILE
61 | /bin/rm -f $PIDFILE
62 | /bin/rm -f $SOCK
63 | fi
64 | else
65 | log_daemon_msg "$NAME is not running..."
66 | log_end_msg 0
67 | fi
68 | ;;
69 | restart)
70 | # Restart the daemon.
71 | $0 stop && sleep 2 && $0 start
72 | ;;
73 | status)
74 | # Check the status of the process.
75 | if [ -e $PIDFILE ]; then
76 | status_of_proc -p $PIDFILE $DAEMON "$NAME process" && exit 0 || exit $?
77 | else
78 | log_daemon_msg "$NAME is not running..."
79 | log_end_msg 0
80 | fi
81 | ;;
82 | *)
83 | # For invalid arguments, print the usage message.
84 | echo "Usage: $0 {start|stop|restart|status}"
85 | exit 2
86 | ;;
87 | esac
88 |
--------------------------------------------------------------------------------