├── .gitignore
├── .python-version
├── README.md
├── build-guide
│   ├── README.md
│   └── requirements.txt
├── build
│   └── install.md
├── concourse
│   └── mkdocs_deploy.sh
├── docs
│   ├── about
│   │   ├── backstory.md
│   │   ├── dataflow.md
│   │   └── what_is_it.md
│   ├── configure
│   │   ├── reference.md
│   │   ├── rock-manager.md
│   │   └── setup-tui.md
│   ├── deploy
│   │   ├── initial-access.md
│   │   ├── multi-node.md
│   │   ├── single-node.md
│   │   └── terminology.md
│   ├── img
│   │   ├── admin-user.jpg
│   │   ├── choose-components.png
│   │   ├── choose-enabled.png
│   │   ├── date-time.png
│   │   ├── display-config.png
│   │   ├── docket-getpcap.png
│   │   ├── docket-submit.png
│   │   ├── docket.png
│   │   ├── favicon.ico
│   │   ├── filebeat-pipeline.png
│   │   ├── install-source.png
│   │   ├── install_banner.png
│   │   ├── mgmt-ip-dhcp.png
│   │   ├── network.png
│   │   ├── rock-diagram-new.png
│   │   ├── rock-flow.png
│   │   ├── rock-initialboot.jpg
│   │   ├── rock-user.jpg
│   │   ├── rock.png
│   │   ├── rock_logo.png
│   │   ├── run-installer.png
│   │   ├── select-interfaces.png
│   │   ├── set-hostname.png
│   │   ├── tui-start.png
│   │   └── write-config.png
│   ├── index.md
│   ├── install
│   │   ├── install.md
│   │   ├── media.md
│   │   ├── requirements.md
│   │   └── vm_installation.md
│   ├── quickstart.md
│   ├── reference
│   │   ├── changelog.md
│   │   ├── contribution.md
│   │   ├── latest.md
│   │   ├── license.md
│   │   ├── support.md
│   │   └── tutorials.md
│   ├── services
│   │   ├── docket.md
│   │   ├── elasticsearch.md
│   │   ├── filebeat.md
│   │   ├── fsf.md
│   │   ├── index.md
│   │   ├── kafka.md
│   │   ├── kibana.md
│   │   ├── logstash.md
│   │   ├── stenographer.md
│   │   ├── suricata.md
│   │   └── zeek.md
│   └── usage
│       ├── Tips-and-Tricks.md
│       ├── index.md
│       └── support.md
├── mkdocs.yml
└── operate
    └── index.md
/.gitignore:
--------------------------------------------------------------------------------
1 | # Generated Site #
2 | ##################
3 | site/
4 |
5 | # Compiled source #
6 | ###################
7 | *.com
8 | *.class
9 | *.dll
10 | *.exe
11 | *.o
12 | *.so
13 |
14 | # Packages #
15 | ############
16 | # it's better to unpack these files and commit the raw source
17 | # git has its own built in compression methods
18 | *.7z
19 | *.dmg
20 | *.gz
21 | *.iso
22 | *.jar
23 | *.rar
24 | *.tar
25 | *.zip
26 |
27 | # Logs and databases #
28 | ######################
29 | *.log
30 | *.sql
31 | *.sqlite
32 |
33 | # OS generated files #
34 | ######################
35 | .DS_Store
36 | .DS_Store?
37 | ._*
38 | .Spotlight-V100
39 | .Trashes
40 | ehthumbs.db
41 | Thumbs.db
42 |
43 | # Virtual Environment - exclude local venv dir
44 | myenv/
45 |
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
1 | 3.6.8
2 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | ---
6 |
7 | # Welcome
8 | This repository hosts the full documentation for [RockNSM](https://rocknsm.io), an open-source collections platform that focuses on being **reliable**, **scalable**, and **secure** in order to perform Network Security Monitoring (NSM), network hunting, and incident response (IR) missions.
9 |
10 |
11 | ## Hosted Docs
12 | Enter the full documentation at: [https://docs.rocknsm.io/](https://docs.rocknsm.io/)
13 |
14 |
15 | ## Latest
16 | We are pleased to announce that ROCK 2.5 is here! You can read the full details in the [Releases page](https://rocknsm.github.io/rock-docs/reference/latest/), but here's a quick overview of some of the latest additions:
17 |
18 |
19 | - [x] New: ROCK has moved to the ECS standard
20 | - [x] New: Out of the box support for XFS Disk Quotas
21 | - [x] New: Updated ROCK Dashboards
22 | - [x] Fix: Various visualization issues in ROCK dashboard
23 | - [x] Fix: (x509) Certificate issues resolved
24 | - [x] Update: Elastic Stack components to version 7.6
25 | - [x] Update: Zeek to version 3
26 | - [x] Update: Suricata to version 5
27 |
28 | ## Video Guides
29 | There are several video walkthroughs in the [Tutorials Section](https://rocknsm.github.io/rock-docs/reference/tutorials/).
30 |
31 |
32 | ## Credit
33 | This project is made possible by the efforts of an ever-growing list of amazing people. Take a look around our project to see all our contributors.
34 |
--------------------------------------------------------------------------------
/build-guide/README.md:
--------------------------------------------------------------------------------
1 | # Rock-Docs Contribution Guide
2 |
3 | This document is a contribution how-to: step-by-step instructions for setting
4 | up, developing, and submitting updated documentation for the RockNSM project
5 | using [MkDocs](https://www.mkdocs.org/).
6 | Here is a high level overview of the process:
7 |
8 | - fork the project and clone to your local machine
9 | - setup local environment & install requirements
10 | - start local dev server to write content
11 | - build static web content from your markdown
12 | - deploy your changes
13 |
14 |
15 | ## Install Requirements
16 |
17 | The following tools and packages will be required to develop documentation:
18 | - git
19 | - python3
20 | - pip3
21 | - mkdocs
22 | - mkdocs addons & theme
23 | - [mkdocs-material](https://squidfunk.github.io/mkdocs-material/)
24 | - [mkdocs-awesome-pages-plugin](https://github.com/lukasgeiter/mkdocs-awesome-pages-plugin)
25 | - [pymdown-extensions](https://squidfunk.github.io/mkdocs-material/extensions/pymdown/)
26 | - [markdown-emdash](https://github.com/czue/markdown-emdash)
27 | - [mkdocs-exclude](https://pypi.org/project/mkdocs-exclude/)
28 | - a python virtual environment
29 |
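For context, these pieces come together in the site's `mkdocs.yml`. Here is a minimal sketch of how the theme and plugins above are wired up (the site name and exclude glob are illustrative placeholders, not this repo's actual values):

```yml
site_name: RockNSM Docs          # illustrative placeholder
theme:
  name: material                 # mkdocs-material
plugins:
  - awesome-pages                # mkdocs-awesome-pages-plugin
  - exclude:
      glob:
        - build-guide/*          # illustrative exclude pattern
markdown_extensions:
  - pymdownx.superfences         # from pymdown-extensions
```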
30 | Next up are instructions specific to your OS.
31 |
32 | #### macOS
33 |
34 | Use the [brew](https://brew.sh/) package manager where noted below (and
35 | generally whenever possible).
36 |
37 | 1. Install Git:
38 | $ `brew install git`
39 |
40 | 2. Install Python3: **_already on system_**
41 | - validate with $ `which python3`
42 |
43 | 3. Install pip3:
44 | - download the [get-pip.py](https://pip.pypa.io/en/stable/installing/) script
45 | - install by running: $ `python3 get-pip.py`
46 |
47 | 4. Upgrade pip:
48 | $ `pip3 install --upgrade pip`
49 |
50 | 5. Install MkDocs and additional requirements:
51 | $ `pip3 install -r ./build-guide/requirements.txt`
52 |
53 |
54 | #### Centos / Fedora
55 |
56 | 1. Install Git:
57 | `sudo yum install -y git`
58 |
59 | 2. Ensure Python3 installed:
60 | `which python3`
61 |
62 | 3. Install pip3:
63 | `sudo yum install -y python3-pip`
64 |
65 | 4. Verify pip3 installed
66 | `which pip3`
67 |
68 | 5. Upgrade pip:
69 | $ `pip3 install --upgrade pip`
70 |
71 |
72 | ## Git the Things
73 |
74 | 1. Clone a copy of this repo (or your fork) to your machine:
75 |    $ `git clone https://github.com/rocknsm/rock-docs.git`
76 |
77 | 2. Change directory into the project folder:
78 |    $ `cd rock-docs`
79 |
80 | 3. Create an alternate timeline for later pull request / merging:
81 |    $ `git branch devel-`
82 |
83 | 4. Jump into the new branch:
84 |    $ `git checkout devel-`
85 |
86 |
87 | ## Python Virtual Environment Setup
88 |
89 | This step can be completed using several different methods, but this guide will
90 | stick with the standard virtual environment process using the `venv` module
91 | that is baked into Python3.
92 |
93 | 1. Create a virtual environment in the project (repo) folder:
94 |    $ `python3 -m venv myenv`
95 |
96 | 2. Activate this new venv:
97 | $ `source myenv/bin/activate`
98 |
99 | If successful, this will change your prompt by prepending the venv name. It will
100 | look something like this: `(myenv) user@host$`
101 |
102 | To later deactivate (jump out of) this virtualenv, simply run:
103 | $ `deactivate`
104 |
105 | 3. Install MkDocs and additional plugins inside:
106 | $ `pip3 install -r ./build-guide/requirements.txt`
107 |
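Taken together, the environment setup can be sketched as one short shell session (shown here with Python's stdlib `venv` module; it assumes `git` and `python3` are already installed):

```shell
# Sketch: full docs-environment setup
git clone https://github.com/rocknsm/rock-docs.git
cd rock-docs
python3 -m venv myenv                            # create the virtual environment
source myenv/bin/activate                        # prompt becomes: (myenv) user@host$
pip3 install -r ./build-guide/requirements.txt   # mkdocs + theme + plugins
```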
108 |
109 | ## Local Development
110 |
111 | 1. From the root level of the repo (where `mkdocs.yml` file resides) run the
112 | command to start the local server:
113 |
114 | $ `mkdocs serve`
115 |
116 | If you don't want to lose your current prompt, start the server as a background service:
117 |
118 | $ `mkdocs serve & ` (later, bring back to the foreground with `fg`)
119 |
120 | 2. Browse to http://localhost:8000 in order to preview changes. Content will
121 | refresh automatically as files are edited and **saved**.
122 |
123 |
124 | ## Building Web Content
125 |
126 | After you've got things looking good in your local environment it's time to use
127 | MkDocs to build the static web files (html, css, etc.) that will be placed on
128 | some kind of hosting solution.
129 |
130 | 1. Stop the local server with the `ctrl + c` key combo
131 |
132 | 2. Use the mkdocs build option to generate the static files:
133 | $ `mkdocs build -t mkdocs`
134 |
135 | 3. Use the mkdocs deploy option to publish the previously built static files to GitHub Pages:
136 | $ `mkdocs gh-deploy`
137 |
138 |
139 | ## Submitting Changes
140 |
141 | After the above workflow is complete, it's time to submit your work for
142 | approval:
143 |
144 | > A prerequisite step to be performed by a rock-docs admin:
145 | - duplicate the master branch to be titled "X.X-archive"
146 |
147 | 1. With all your work committed, push your changes up to your project fork:
148 | $ `git push`
149 |
150 | 2. Use the Github graphical interface to submit an official Pull Request to
151 | merge in all your changes.
152 |
--------------------------------------------------------------------------------
/build-guide/requirements.txt:
--------------------------------------------------------------------------------
1 | mkdocs
2 | mkdocs-awesome-pages-plugin
3 | mkdocs-material
4 | pymdown-extensions
5 | markdown-emdash
6 | mkdocs-exclude
7 |
--------------------------------------------------------------------------------
/build/install.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | # Installation Guide
6 |
7 | #### Agenda
8 | - [Overview](#overview)
9 | - [Getting the Bits](#getting-media)
10 | - [Applying the Image](#apply-the-image)
11 | - [Install](#installation)
12 | - [Configure](#configure)
13 | - [Deploy](#deployment)
14 |
15 |
16 | ## Overview
17 |
18 | If there’s one thing that should be carried away from the installation section, it's this:
19 |
20 | > RockNSM has been designed to be used as a distro. It's not a package or a suite of tools. It’s built from the ground up purposefully. THE ONLY SUPPORTED INSTALL IS THE OFFICIAL ISO.
21 |
22 | Yes, one can clone the project and run the Ansible on some bespoke CentOS build, and you may have great success... but you've **voided the warranty**. Providing a clean product that makes it feasible to support submitted issues is important to us. The ISO addresses most use cases.
23 |
24 |
25 | ## Getting Media
26 |
27 | The latest ROCK build is available at [download.rocknsm.io](https://download.rocknsm.io/isos/stable/).
28 |
29 |
30 | ## Apply the Image
31 |
32 | Now it's time to create a bootable USB drive with the fresh ROCK build. Let's look at a few options.
33 |
34 | #### CLI
35 |
36 | If you live in the terminal, use `dd` to apply the image. These instructions are for using a terminal in macOS. If you're in a different environment, google is your friend.
37 |
38 | :warning: Take CAUTION when using these commands by ENSURING you're writing to the correct disk / partition! :warning:
39 |
40 | 1. once you've inserted a USB get the drive ID:
41 | `diskutil list`
42 |
43 | 2. unmount the target drive so you can write to it:
44 | `diskutil unmount /dev/disk#`
45 |
46 | 3. write the image to drive:
47 | `sudo dd bs=8m if=path/to/rockiso of=/dev/disk#`
48 |
49 | #### Via GUI
50 |
51 | If you don't want to apply the image in the terminal, there are plenty of great tools to do this with a graphical interface:
52 |
53 | **Cross-platform**
54 | - [Etcher](http://etcher.io) - our go-to standard
55 | - [YUMI](https://www.pendrivelinux.com/yumi-multiboot-usb-creator/) - create multibooting disk
56 | - [SD Card Formatter](https://www.sdcard.org/downloads/formatter_4/) - works well
57 |
58 | **Windows**
59 | - [Win32 Disk Imager](https://sourceforge.net/projects/win32diskimager/)
60 |
61 |
62 | ## Installation
63 |
64 |
65 |
66 |
67 |
68 |
69 | #### Network Connection
70 |
71 | During install, ROCK will see the network interface with an ip address and default gateway and designate it as the _**management**_ port. So plug into the interface you want to use to remotely manage your sensor.
72 |
73 | ### Install Types
74 |
75 | ROCK works with both legacy BIOS and UEFI booting. Once booted from the USB, you are presented with 2 primary installation paths:
76 |
77 | * Automated
78 | * Custom
79 |
80 | #### Automated
81 |
82 | The "Automated" option is intended to serve as a _**starting point**_ that allows you to get up and running quickly. It utilizes the CentOS Anaconda installer to make some of the harder decisions for users by skipping over many options. It makes a best guess at how to use resources.
83 |
84 |
85 | #### Custom
86 |
87 | The "Custom" option allows more advanced users to customize their configuration. This is especially helpful when you're working with multiple disks and/or a large amount of storage on a single disk. Custom is encouraged for a production environment in order to get more granular in choosing how disk space is allocated.
88 |
89 | If your target machine for a ROCK sensor has multiple disks, it is **highly recommended** to select "Custom install". This is because the default RHEL (and even other linux distributions) partitioning will use the majority of the storage for the `/home` partition.
90 |
91 |
92 | ##### Custom - Disk Allocation
93 |
94 | Configuring disk and storage is a deep topic on its own, but let's talk about a few examples to get started:
95 |
96 | ##### Stenographer
97 |
98 | A common gotcha occurs when you want full packet capture (via [Stenographer](../services/stenographer.md)), but it isn't given a separate partition. Stenographer is great at managing its own disk space (it starts to overwrite the oldest data at 90% capacity), but that doesn't cut it when it's sharing the same mount point as Bro, Suricata, and other tools that generate data in ROCK.
99 |
100 | Best practice is to create a separate `/data/stenographer` partition so that a full packet-capture disk can't degrade the rest of the system. For example, Elasticsearch will (rightfully) lock indexes into a read-only state in order to keep things from crashing hard.
101 |
102 | ##### Separating System Logs
103 |
104 | Another useful partition to create is `/var/log` to separate system log files from the rest of the system.
105 |
106 | ##### Partitioning Example
107 |
108 | Below is a good starting point when partitioning:
109 |
110 | | MOUNT POINT | USAGE | SIZE |
111 | | ---------- | ----------------- | -------- |
112 | | **SYSTEM** | **SYSTEM** | **SYSTEM** |
113 | | / | root filesystem | 15 GiB |
114 | | /boot | legacy boot files | 512 MiB |
115 | | /boot/efi | uefi boot files | 512 MiB |
116 | | swap | memory shortage | ~8 GiB+ |
117 | | **DATA** | **DATA** | **DATA** |
118 | | /var/log | system log files | ~15 GiB |
119 | | /home | user home dirs | ~20 GiB |
120 | | /data | data partition | ~ GiB |
121 | | /data/stenographer | steno partition | ~ GiB |
122 |
123 |
124 | For more information to assist with the partitioning process, you can see the [RHEL guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-disk-partitioning-setup-x86#sect-custom-partitioning-x86). Also, it may be a bit more self-explanatory if you click “automatic partitions” and then modify accordingly.
125 |
126 |
127 | > For the purposes of simplicity this guide will demonstrate an **Automated** install. If you have multiple disks to configure use the _Custom_ option.
128 |
129 |
130 |
131 | ##### DATE & TIME
132 |
133 | `UTC` is generally preferred for logging data as the timestamps from anywhere in the world will have a proper order without calculating offsets and daylight savings. That said, Kibana will present the Bro logs according to your timezone (as set in the browser). The bro logs themselves (i.e. in /data/bro/logs/) log in [epoch time](https://en.wikipedia.org/wiki/Unix_time) and will be written in UTC regardless of the system timezone.
134 |
135 |
136 |
137 |
138 |
139 | Bro includes a utility for parsing these on the command line called `bro-cut`. It can be used to print human-readable timestamps in either the local sensor timezone or UTC. You can also give it a custom format string to specify what you'd like displayed.
140 |
141 | ##### Caveat: Environments without NTP access
142 | If RockNSM does not have access to an NTP server, you must ensure that your system time is set to the current UTC time. If your system clock is set to local time, you will notice an incorrect time offset in your data. To resolve/prevent this, set the clock to the correct UTC time:
143 | ```
144 | sudo timedatectl set-ntp false
145 | sudo timedatectl set-time '2019-03-02 00:35:02'
146 | sudo timedatectl set-ntp true
147 | ```
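Note that `timedatectl set-time` takes the same `YYYY-MM-DD HH:MM:SS` layout that `date` can emit. This prints your system's current notion of UTC in that format, for comparison against a trusted time source before you set the clock:

```shell
# Print the system's current idea of UTC in the format timedatectl expects.
date -u '+%Y-%m-%d %H:%M:%S'
```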
148 |
149 | ##### Network & Hostname
150 |
151 | Before beginning the install process it's best to connect the interface you've selected to be the **management interface**. Here's the order of events:
152 |
153 | - ROCK will initially look for an interface with a default gateway and treat that interface as the MGMT INTERFACE
154 | - All remaining interfaces will be treated as MONITOR INTERFACES
155 |
156 | 1. Ensure that the interface you intend to use for MGMT has been turned on and has an ip address
157 | 2. Set the hostname of the sensor in the bottom left corner
158 | - this hostname will populate the Ansible inventory file in `/etc/rocknsm/hosts.ini`
159 |
160 |
161 |
162 |
163 |
164 | ##### User Creation
165 |
166 | ROCK is configured with the root user disabled. We recommend that you leave it that way. Once you've kicked off the install, click **User Creation** at the next screen (shown above) and complete the required fields to set up a non-root admin user.
167 |
168 |
169 |
170 |
171 |
172 | > If this step is not completed now do not fear, you will be prompted to create this account after first login.
173 |
174 | - click **Finish Installation** and wait for reboot
175 | - accept license agreement: `c` + `ENTER`
176 |
177 |
178 | ## Configure
179 |
180 | The primary configuration file for ROCK is [/etc/rocknsm/config.yml](https://github.com/rocknsm/rock/blob/master/playbooks/templates/rock_config.yml.j2). This file contains key variables like network interface setup, cpu cores assignment, and much more. There are a lot of options to tune here, so take time to familiarize. Let's break down this file into its major sections:
181 |
182 |
183 | ##### Network Interfaces
184 | As mentioned previously, ROCK takes the interface with a default gateway and uses it as MGMT. Beginning at line 8, `config.yml` displays the remaining interfaces that will be used to **MONITOR** traffic.
185 | ```
186 | # The "rock_monifs:" listed below are the interfaces that were not detected
187 | # as having an active IP address. Upon running the deploy script, these
188 | # interfaces will be configured for monitoring (listening) operations.
189 | # NOTE: management interfaces should *not* be listed here:
190 |
191 | rock_monifs:
192 | -
193 | -
194 | ```
195 |
196 | ##### Realworld Example
197 | ```
198 | [admin@rock ~]$ ip a
199 |
200 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
201 | 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
202 | link/ether ...
203 | inet 192.168.1.207/24 brd 192.168.1.255 scope global noprefixroute dynamic enp0s3
204 | ...
205 | 3: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
206 | link/ether ...
207 | ```
208 |
209 |
210 | Let's run through the above basic example to illustrate. The demo box has 2 NICs:
211 | 1. `enp0s3` - is plugged in for install and deployment with an ip address from local dhcp. This will be used to **manage** the sensor
212 | 2. `enp0s4` - will be disconnected (unused) during install and deployment and be listed as a `rock_monif` in the config file
213 |
214 | Lines 7 - 9 of `/etc/rocknsm/config.yml` show that the other interface (`enp0s4`) is listed as a MONITOR interface.
215 | ```yml
216 | # interfaces that should be configured for sensor applications
217 | rock_monifs:
218 |     - enp0s4
219 | ```
220 |
221 |
222 | ##### Sensor Resource Configuration
223 | ```
224 | # Set the hostname of the sensor:
225 | rock_hostname:
226 |
227 | # Set the Fully Qualified Domain Name:
228 | rock_fqdn:
229 |
230 | # Set the number of CPUs assigned to Bro:
231 | bro_cpu:
232 |
233 | # Set the Elasticsearch cluster name:
234 | es_cluster_name:
235 |
236 | # Set the Elasticsearch cluster node name:
237 | es_node_name:
238 |
239 | # Set the value of Elasticsearch memory:
240 | es_mem:
241 | ```
242 |
243 |
244 | ##### Installation Source Configuration
245 | We've taken into consideration that your sensor won't always have internet access. The ISO's default value is set to offline:
246 |
247 | ```yml
248 | 53 # The primary installation variable defines the ROCK installation method:
249 | 54 # ONLINE: used if the system may reach out to the internet
250 | 55 # OFFLINE: used if the system may *NOT* reach out to the internet
251 | 56 # The default value "False" will deploy using OFFLINE (local) repos.
252 | 57 # A value of "True" will perform an install using ONLINE mirrors.
253 | 58
254 | 59 rock_online_install: False
255 | ```
256 |
257 | If your sensor does have access to online repos, just set `rock_online_install: True` and Ansible will configure your system for the yum repositories listed, pulling packages and git repos directly from the URLs shown. You can easily point this to local mirrors if needed.
258 |
259 | If this value is set to `False`, Ansible will look at the cached files in `/srv/rocknsm`.
260 |
261 |
262 |
263 |
264 | #### Data Retention Configuration
265 |
266 | This section controls how long NSM data stay on the sensor:
267 | ```
268 | # Set the interval in which Elasticsearch indexes are closed:
269 | elastic_close_interval:
270 |
271 | # Set the interval in which Elasticsearch indexes are deleted:
272 | elastic_delete_interval:
273 |
274 | # Set value for Kafka retention (in hours):
275 | kafka_retention:
276 |
277 | # Set value for Bro log retention (in days):
278 | bro_log_retention:
279 |
280 | # Set value for Bro statistics log retention (in days):
281 | bro_stats_retention:
282 |
283 | # Set how often logrotate will roll Suricata log (in days):
284 | suricata_retention:
285 |
286 | # Set value for FSF log retention (in days):
287 | fsf_retention:
288 | ```
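As an illustrative sketch only (these values are made up for a small single-node sensor, not recommendations; the exact units are documented in the comments of your generated `config.yml`), a filled-in retention section might look like:

```yml
elastic_close_interval: 15
elastic_delete_interval: 60
kafka_retention: 168        # hours
bro_log_retention: 30       # days
bro_stats_retention: 30     # days
suricata_retention: 30      # days
fsf_retention: 30           # days
```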
289 |
290 |
291 | ##### ROCK Component Options
292 | This is a critical section that provides boolean options to choose what components of ROCK are **_installed_** and **_enabled_** during deployment.
293 |
294 | ```yml
295 | # The following "with_" statements define what components of RockNSM are
296 | # installed when running the deploy script:
297 |
298 | with_stenographer: True
299 | with_docket: True
300 | with_bro: True
301 | with_suricata: True
302 | with_snort: True
303 | with_suricata_update: True
304 | with_logstash: True
305 | with_elasticsearch: True
306 | with_kibana: True
307 | with_zookeeper: True
308 | with_kafka: True
309 | with_lighttpd: True
310 | with_fsf: True
311 |
312 | # The following "enable_" statements define what RockNSM component services
313 | # are enabled (start automatically on system boot):
314 |
315 | enable_stenographer: True
316 | enable_docket: True
317 | enable_bro: True
318 | enable_suricata: True
319 | enable_snort: True
320 | enable_suricata_update: True
321 | enable_logstash: True
322 | enable_elasticsearch: True
323 | enable_kibana: True
324 | enable_zookeeper: True
325 | enable_kafka: True
326 | enable_lighttpd: True
327 | enable_fsf: True
328 | ```
329 |
330 | A good example for changing this section would involve [Stenographer](../services/stenographer.md). Collecting raw PCAP is resource and _**storage intensive**_. Your machine may not be able to handle that, and if you just want to focus on network logs, you would set both options in the config file to **disable** both installing and enabling Steno:
331 |
332 | ```yml
333 | 67 with_stenographer: False
334 | ...
335 | ...
336 | ...
337 | 83 enable_stenographer: False
338 | ```
339 |
340 |
341 | ## Deployment
342 |
343 | Once `config.yml` has been tuned to suit your environment, it's finally time to **deploy this thing**. This is done by running the deployment script, which is in the install user's path (`/usr/sbin/`):
344 |
345 | ```
346 | /usr/sbin/
347 | ├── ...
348 | ├── deploy_rock.sh
349 | ├── ...
350 | ```
351 |
352 | To kick off the deployment script run: `sudo deploy_rock.sh`
353 |
354 | Once the deployment completes with the components you chose, you'll be congratulated with a success banner.
355 |
356 |
357 |
358 |
359 |
360 |
361 | #### Generate Defaults
362 |
363 | > What do I do when I've completely messed things up and need to start over?
364 |
365 | Great question. There's a simple solution for when the base config file needs to be reset back to default settings. There's a script called `generate_defaults.sh` also located in your `$PATH`:
366 |
367 | `sudo generate_defaults.sh`
368 |
369 | Simply execute this to (re)generate a fresh default `config.yml` for you and get you out of jail.
370 |
371 | ## Initial Kibana Access
372 |
373 | We strive to do the little things right, so rather than having Kibana available to everyone in the free world it's sitting behind an Nginx reverse proxy. It's also secured by a [passphrase](https://xkcd.com/936/). The credentials are generated and then stored in the home directory of the user you created during the initial installation e.g. `/home/admin`.
374 |
375 | 1. `cat` and copy the contents of `~/KIBANA_CREDS.README`
376 | 1. browse to your sensor's hostname over `https://`
377 | 1. enter this user / password combo
378 | 1. profit!
379 |
380 | ---
381 |
382 | Continue to the [Usage Guide](../operate/index.md).
383 |
384 |
385 |
--------------------------------------------------------------------------------
/concourse/mkdocs_deploy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # export LC_ALL=C.UTF-8
4 | # export LANG=C.UTF-8
5 | yum update -y
6 | yum install -y python36u-pip  # "u" packages come from the IUS repo, assumed configured
7 | pip3 install 'pymdown-extensions<5.0' 'Markdown<3.0' 'mkdocs<1.0' 'mkdocs-material<3.0'
8 |
9 | cd rock-docs-git
10 | mkdocs gh-deploy
11 |
--------------------------------------------------------------------------------
/docs/about/backstory.md:
--------------------------------------------------------------------------------
1 | # Backstory
2 |
3 | ROCK is a tool that was created to solve a problem that was realized several
4 | years ago while engaged in real-world missions and training exercises. That
5 | problem was that the pervasive network sensor platform at the time had many
6 | architectural and operational issues. It was built on an insecure platform, had
7 | performance problems, and did not follow the [Unix Philosophy](https://en.wikipedia.org/wiki/Unix_philosophy).
8 |
9 | The origin of RockNSM can be traced back to the Fall of 2014 when a couple of
10 | wide-eyed dreamers started working on their own solution while drinking whiskey
11 | in a hotel room. The project developed from an on-mission hasty replacement, to
12 | an all-in-one sensor solution, and now a more full featured analysis stack
13 | capable of multi-node deployments.
14 |
15 |
16 | ## Credit
17 |
18 | This project is made possible by the efforts of an ever-growing list of amazing
19 | people. Take a look around our project to see all our contributors.
20 |
--------------------------------------------------------------------------------
/docs/about/dataflow.md:
--------------------------------------------------------------------------------
1 | # Data Flow
2 |
3 | This is a high level model of how packets flow through the sensor:
4 |
5 |
6 |
7 |
8 |
9 |
10 |
--------------------------------------------------------------------------------
/docs/about/what_is_it.md:
--------------------------------------------------------------------------------
1 | # What is ROCK
2 |
3 |
4 |
5 |
6 |
7 | ### The Mission
8 |
9 | * **Reliable** - we believe the folks at Red Hat do Linux right. ROCK is built on Centos7 and provides an easy path to a supported enterprise OS ([RHEL](https://www.redhat.com/en)).
10 |
11 | * **Secure** - with SELinux, ROCK is highly secure by default. [SELinux](https://selinuxproject.org/page/Main_Page) uses context to define security controls to prevent, for instance, a text editor process from talking to the internet. [#setenforce1](https://twitter.com/search?q=%23setenforce1&src=typd)
12 |
13 | * **Scalable** - Whether you're tapping a SoHo network or a large enterprise, ROCK is designed with scale in mind.
14 |
15 |
16 |
17 |
18 | ### Capability
19 |
20 | * Passive and reliable high-speed data acquisition via AF_PACKET, feeding systems for metadata (Bro), signature detection (Suricata), extracted network file metadata (FSF), and full packet capture (Stenographer).
21 |
22 | * A messaging layer (Kafka and Logstash) that provides flexibility in scaling the platform to meet operational needs, as well as providing some degree of data reliability in transit.
23 |
24 | * Reliable data storage and indexing (Elasticsearch) to support rapid retrieval and analysis (Kibana and Docket) of the data.
25 |
26 | * Pivoting off Kibana data rapidly into full packet capture (Docket and Stenographer).
27 |
28 |
29 | ### Components
30 |
31 | * Full Packet Capture via [Google Stenographer](https://github.com/google/stenographer)
32 |
33 | * Protocol Analysis and Metadata via [Bro](https://www.bro.org/)
34 |
35 | * Signature Based Alerting via [Suricata](https://suricata-ids.org/)
36 |
37 | * Recursive File Scanning via [FSF](https://github.com/EmersonElectricCo/fsf).
38 |
39 | * Output from Suricata and FSF are moved to message queue via [Filebeat](https://www.elastic.co/products/beats/filebeat)
40 |
41 | * Message Queuing and Distribution via [Apache Kafka](http://kafka.apache.org/)
42 |
43 | * Message Transport via [Logstash](https://www.elastic.co/products/logstash)
44 |
45 | * Data Storage, Indexing, and Search via [Elasticsearch](https://www.elastic.co/)
46 |
47 |
48 | ### Analyst Toolkit
49 |
50 | * [Kibana](https://www.elastic.co/products/kibana) provides data UI and visualization
51 |
52 | * [Docket](../services/docket.md) allows for quick and targeted pivoting to PCAP
53 |
--------------------------------------------------------------------------------
/docs/configure/reference.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | The primary configuration file for RockNSM is found at `/etc/rocknsm/config.yml`.
4 |
5 | This file defines key information that drives the Ansible deployment playbook
6 | like network interface setup, cpu cores assignment, and much more. There are a
7 | lot of options to tune here so take time to familiarize.
8 |
9 | > A template of this file in its entirety can be found [[here on github]](https://github.com/rocknsm/rock/blob/master/playbooks/templates/rock_config.yml.j2), but for greater clarity let's break it down into its major sections:
10 |
11 |
12 | ### Network Interface
13 | As mentioned previously, ROCK takes the interface with an ip address / gateway and will use that as the _management_ NIC. `config.yml` displays the remaining interfaces that will be used to **MONITOR** traffic.
14 |
15 | Let's run through a basic example:
16 | ```
17 | [admin@rock ~]$ ip a
18 |
19 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
20 | 2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
21 | link/ether ...
22 | inet 192.168.1.207/24 brd 192.168.1.255 scope global noprefixroute dynamic enp0s3
23 | ...
24 | 3: enp0s4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
25 | link/ether ...
26 | ```
27 |
28 | The demo box above has 2 NICs:
29 | 1. `enp0s3` - is plugged in for install and deployment with an IP address from local DHCP. This will be used to **manage** the sensor
30 | 2. `enp0s4` - will be unused (not connected) during install and deployment and be listed as a `rock_monif` in the config file
31 |
32 | The config file lists the other interface (`enp0s4`) as the MONITOR interface:
33 | ```yml
34 | # interfaces that should be configured for sensor applications
35 | rock_monifs:
36 |   - enp0s4
37 | ```
38 |
39 |
40 | ### Sensor Resource
41 |
42 | ```yml
43 | # Set the hostname of the sensor:
44 | rock_hostname: rocknsm_sensor_1
45 |
46 | # Set the Fully Qualified Domain Name:
47 | rock_fqdn: rocknsm_sensor_1.rocknsm.lan
48 |
49 | # Set the number of CPUs assigned to Bro:
50 | bro_cpu: 2
51 |
52 | # Set the Elasticsearch cluster name:
53 | es_cluster_name: rocknsm
54 |
55 | # Set the Elasticsearch cluster node name:
56 | es_node_name: localhost
57 |
58 | # Set the value of Elasticsearch memory:
59 | es_mem: 5
60 | ```
61 |
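The right `es_mem` value depends on available RAM. A common Elasticsearch rule of thumb (general guidance, not a ROCK-specific requirement) is to give the JVM heap roughly half of system memory, capped near 31 GB so the JVM can keep using compressed object pointers. A quick sketch:

```python
def suggest_es_mem(total_ram_gb: int) -> int:
    """Suggest an es_mem value (in GB) for a host with the given RAM.

    Rule of thumb: half of system RAM, capped at 31 GB so the JVM
    can keep using compressed object pointers.
    """
    return min(total_ram_gb // 2, 31)

# e.g. a 16 GB sensor suggests es_mem: 8; a 128 GB server suggests es_mem: 31
```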
62 | ### Installation Source
63 | We've taken into consideration that your sensor won't always have internet
64 | access. Currently the default value is set to `rock_online_install: True`:
65 |
66 | ```yml
67 | # The primary installation variable defines the ROCK installation method:
68 | # ONLINE: used if the system may reach out to the internet
69 | # OFFLINE: used if the system may *NOT* reach out to the internet
70 | # The default value "False" will deploy using OFFLINE (local) repos.
71 | # A value of "True" will perform an install using ONLINE mirrors.
72 |
73 | rock_online_install: True
74 | ```
75 |
76 | #### Online
77 | Does your sensor have access to [upstream](https://imgs.xkcd.com/comics/the_cloud.png)
78 | online repositories? If so, then make sure that this value is set to
79 | `rock_online_install: True`.
80 |
81 |
82 | #### Offline
83 | If you are in an offline environment, then set it to `rock_online_install: False`.
84 | Ansible will deploy using the locally cached files found in `/srv/rocknsm`.
85 |
86 |
87 | > Note: In our next release the default behavior will be changed to an offline
88 | install (reference [Issue #376](https://github.com/rocknsm/rock/issues/376))
89 |
90 | ### Data Retention
91 | This section controls how long NSM data stays on the sensor:
92 | ```yml
93 | # Set the interval in which Elasticsearch indexes are closed:
94 | elastic_close_interval: 15
95 |
96 | # Set the interval in which Elasticsearch indexes are deleted:
97 | elastic_delete_interval: 60
98 |
99 | # Set value for Kafka retention (in hours):
100 | kafka_retention: 168
101 |
102 | # Set value for Bro log retention (in days):
103 | bro_log_retention: 0
104 |
105 | # Set value for Bro statistics log retention (in days):
106 | bro_stats_retention: 0
107 |
108 | # Set how often logrotate will roll Suricata log (in days):
109 | suricata_retention: 3
110 |
111 | # Set value for FSF log retention (in days):
112 | fsf_retention: 3
113 | ```
114 |
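To make the two Elasticsearch intervals concrete, here's a small sketch of the close-then-delete logic (an illustration only, not the actual curation code ROCK runs) that classifies a daily index by age using the example values above:

```python
def index_state(age_days: int, close_interval: int = 15, delete_interval: int = 60) -> str:
    """Classify a daily Elasticsearch index by age using the retention intervals."""
    if age_days >= delete_interval:
        return "deleted"   # old data is removed entirely
    if age_days >= close_interval:
        return "closed"    # still on disk, but not searchable until reopened
    return "open"          # searchable in Kibana

# With the defaults: day 10 is open, day 30 is closed, day 90 is deleted
```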
115 | ### Component Options
116 | This is a critical section that provides boolean options to choose what components of ROCK are **_installed_** and **_enabled_** during deployment.
117 |
118 | ```yml
119 | rock_services:
120 | - name: bro
121 | quota_weight: 1
122 | installed: True
123 | enabled: True
124 |
125 | - name: stenographer
126 | quota_weight: 8
127 | installed: True
128 | enabled: True
129 |
130 | - name: docket
131 | quota_weight: 0
132 | installed: True
133 | enabled: True
134 |
135 | - name: suricata
136 | quota_weight: 2
137 | installed: True
138 | enabled: True
139 |
140 | - name: elasticsearch
141 | quota_weight: 4
142 | installed: True
143 | enabled: True
144 |
145 | - name: kibana
146 | quota_weight: 0
147 | installed: True
148 | enabled: True
149 |
150 | - name: zookeeper
151 | quota_weight: 0
152 | installed: True
153 | enabled: True
154 |
155 | - name: kafka
156 | quota_weight: 4
157 | installed: True
158 | enabled: True
159 |
160 | - name: lighttpd
161 | quota_weight: 0
162 | installed: True
163 | enabled: True
164 |
165 | - name: fsf
166 | quota_weight: 1
167 | installed: True
168 | enabled: True
169 |
170 | - name: filebeat
171 | quota_weight: 0
172 | installed: True
173 | enabled: True
174 | ```
175 |
176 | A good example for changing this section involves [Stenographer](../services/stenographer.md). Collecting raw PCAP is resource and _**storage intensive**_. Your machine may not be able to handle that, and if you just want to focus on network logs, you would set both options in the config file to **disable** installing and enabling Stenographer.
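For instance, the Stenographer entry from the list above would look like this with both options flipped to `False`:

```yml
rock_services:
  - name: stenographer
    quota_weight: 8
    installed: False     # don't install Stenographer
    enabled: False       # don't enable the service
```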
--------------------------------------------------------------------------------
/docs/configure/rock-manager.md:
--------------------------------------------------------------------------------
1 | One of the new features in version 2.4 is the Rock Manager. This script provides
2 | a one-stop shop for managing all things ROCK.
3 |
4 | To display all available commands and options run: $`rock`
5 |
6 | ```
7 | Usage: /usr/sbin/rock COMMAND [options]
8 | Commands:
9 | setup Launch TUI to configure this host for deployment
10 | tui Alias for setup
11 | ssh-config Configure hosts in inventory to use key-based auth (multinode)
12 | deploy Deploy selected ROCK components
13 | deploy-offline Same as deploy --offline (Default ISO behavior)
14 | deploy-online Same as deploy --online
15 | stop Stop all ROCK services
16 | start Start all ROCK services
17 | restart Restart all ROCK services
18 | status Report status for all ROCK services
19 | genconfig Generate default configuration based on current system
20 | destroy Destroy all ROCK data: indexes, logs, PCAP, i.e. EVERYTHING
21 | NOTE: Will not remove any services, just the data
22 |
23 | Options:
24 | --config, -c Specify full path to configuration overrides
25 | --extra, -e Set additional variables as key=value or YAML/JSON passed to ansible-playbook
26 | --help, -h Show this usage information
27 | --inventory, -i Specify path to Ansible inventory file
28 | --limit Specify host to run plays
29 | --list-hosts Outputs a list of matching hosts; does not execute anything else
30 | --list-tags List all available tags
31 | --list-tasks List all tasks that would be executed
32 | --offline, -o Deploy ROCK using only local repos (Default ISO behavior)
33 | --online, -O Deploy ROCK using online repos
34 | --playbook, -p Specify path to Ansible playbook file
35 | --skip-tags Only run plays and tasks whose tags do not match these values
36 | --tags, -t Only run plays and tasks tagged with these values
37 | --verbose, -v Increase verbosity of ansible-playbook
38 | ```
39 |
40 | As you can see above, the `rock` command has many options available, so here's a
41 | basic breakdown of how you would use this command:
42 |
43 |
44 | ## Initial Deployment
45 | ```shell
46 | sudo rock setup # launches interface to configure settings
47 | sudo rock ssh-config # configures ssh access to rock nodes (multinode)
48 | sudo rock deploy-offline # deploys ROCK using local pre-staged packages
49 | sudo rock deploy-online # deploys ROCK using online repo packages
50 | ```
51 |
52 | > Stand by for a detailed walkthrough of the initial deployment process in the
53 | "Deploy" section coming up.
54 |
55 |
56 | ## Basic Usage
57 | ```shell
58 | sudo rock status # display status of all ROCK services
59 | sudo rock stop # stop all ROCK services
60 | sudo rock start # start all ROCK services
61 | sudo rock restart # restart all ROCK services
62 | ```
63 |
64 | ## Advanced Usage
65 | ```shell
66 | sudo rock genconfig # regenerate a fresh config.yml file
67 | sudo rock destroy # removes ALL sensor data (logs, indices, PCAP)
68 | ```
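The global options shown in the usage output can be combined with these commands. As an illustrative (hypothetical) example, the following would list the available tags and then re-run only the Suricata portions of the playbook against a single inventory host:

```
sudo rock deploy --list-tags                              # see which tags are available
sudo rock deploy --limit rock02.rock.lan --tags suricata  # targeted re-deploy
```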
69 |
70 |
71 |
--------------------------------------------------------------------------------
/docs/configure/setup-tui.md:
--------------------------------------------------------------------------------
1 | ## Overview
2 |
3 | Another new 2.4 feature is the Setup (**T**)ext (**U**)ser (**I**)nterface. This
4 | allows for a unified interface in order to configure ROCK sensor settings. The
5 | new interface can be called by running either:
6 |
7 | * `sudo rock setup`
8 | * `sudo rock tui`
9 |
10 |
11 |
12 |
13 |
14 |
15 | > The Setup TUI currently only supports configuring and deploying a single-node
16 | instance of ROCK.
17 |
18 |
19 | ## Building Manually
20 |
21 | For the ROCK veterans in the house, you can still run a manual build by:
22 |
23 | 1. customizing your config file at: `/etc/rocknsm/config.yml`
24 | 2. deploying with the new Rock Manager using your preferred package source:
25 |
26 | * `sudo rock deploy-offline`
27 | * `sudo rock deploy-online`
28 |
--------------------------------------------------------------------------------
/docs/deploy/initial-access.md:
--------------------------------------------------------------------------------
1 | We strive to do the little things right, so rather than having Kibana available
2 | to everyone in the free world, it's sitting behind a reverse proxy and secured
3 | by an [(XKCD) Passphrase](https://xkcd.com/936/).
4 |
5 | The credentials are written to the home directory of the user that runs the
6 | deploy script. Most of the time, this will be the administrative user
7 | created at installation e.g. `/home/admin`.
8 |
9 | ## Kibana
10 |
11 | To get into the Kibana interface:
12 |
13 | *Note: we are aware of a new change with macOS Catalina and the Chrome browser that requires SSL certificates to be signed by a CA. We're working on the issue and will update the SSL proxy process soon. Until then, Safari and Firefox work. Windows is not affected.*
14 |
15 | 1. Copy the passphrase from `~/KIBANA_CREDS.README`
16 | 2. Point your browser to Kibana:
17 | * `https://`
18 | * `https://` (if you have DNS set up)
19 | 3. Enter the user and password
20 | 4. Profit!
21 |
--------------------------------------------------------------------------------
/docs/deploy/multi-node.md:
--------------------------------------------------------------------------------
1 | Now let's look at how to perform a ROCK deployment across multiple sensors.
2 | This is where we can break out server roles for more complex and distributed
3 | environments.
4 |
5 | ## Assumptions
6 |
7 | This document assumes that you have done a fresh installation of ROCK on **all**
8 | ROCK nodes using the [latest ISO](https://mirror.rocknsm.io/isos/stable/).
9 |
10 | 1. 3 (minimum) sensors
11 | 1. sensors on same network, able to communicate
12 | 1. in this demo there are 3 newly installed ROCK sensors:
13 | * `rock01.rock.lan - 172.16.1.23X`
14 | * `rock02.rock.lan - 172.16.1.23X`
15 | * `rock03.rock.lan - 172.16.1.23X`
16 | 1. `admin` account created at install (same username for all nodes)
17 |
18 |
19 | ## Multi-node Configuration
20 |
21 | ### Designate Deployment Manager
22 |
23 | First and foremost, one sensor will need to be designated as the "Deployment
24 | Manager" (referred to as "DM" for the rest of this guide). This is a notional
25 | designation to keep the process clean and as simple as possible. For our lab we
26 | will use `rock01` as the DM.
27 |
28 | > NOTE: all following commands will be run from your Deployment Manager (DM) box
29 |
30 | Confirm that you can remotely connect to the DM:
31 | `ssh admin@`
32 |
33 | Confirm that you can ping all other rock hosts. For this example:
34 | `ping 172.16.1.236 #localhost`
35 | `ping 172.16.1.237`
36 | `ping 172.16.1.239`
37 |
38 |
39 | ### Edit Inventory File
40 |
41 | A deployment is performed using Ansible playbooks, and target systems are defined
42 | in an Ansible [inventory file](https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html). In ROCK this file is located at `/etc/rocknsm/hosts.ini`.
43 |
44 | We need to add additional ROCK sensors to the following sections of the
45 | inventory file:
46 |
47 | * [rock]
48 | * [web]
49 | * [zookeeper]
50 | * [es_masters]
51 | * [es_data]
52 | * [es_ingest]
53 |
54 |
55 | #### [rock]
56 |
57 | This group defines what nodes will be running ROCK services (Zeek, Suricata, Stenographer, Kafka, Zookeeper)
58 |
59 | example:
60 | ```yml
61 | [rock]
62 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
63 | rock02.rock.lan ansible_host=172.16.1.237
64 | rock03.rock.lan ansible_host=172.16.1.239
65 | ```
66 |
67 |
68 | #### [web]
69 |
70 | This group defines what node will be running web services (Kibana, Lighttpd, and Docket).
71 |
72 | example:
73 | ```yml
74 | [web]
75 | rock03.rock.lan ansible_host=172.16.1.239
76 | ```
77 |
78 |
79 | #### [zookeeper]
80 |
81 | This group defines what node(s) will be running zookeeper (Kafka cluster manager).
82 |
83 | example:
84 |
85 | ```yml
86 | [zookeeper]
87 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
88 | ```
89 |
90 | #### [es_masters]
91 |
92 | This group defines what node(s) will be running Elasticsearch, acting as
93 | master nodes.
94 |
95 | example:
96 |
97 | ```yml
98 | [es_masters]
99 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
100 | rock02.rock.lan ansible_host=172.16.1.237
101 | rock03.rock.lan ansible_host=172.16.1.239
102 | ```
103 |
104 | #### [es_data]
105 |
106 | This group defines what node(s) will be running Elasticsearch, acting as
107 | data nodes.
108 |
109 | example:
110 |
111 | ```yml
112 | [es_data]
113 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
114 | rock02.rock.lan ansible_host=172.16.1.237
115 | rock03.rock.lan ansible_host=172.16.1.239
116 | ```
117 |
118 | #### [es_ingest]
119 |
120 | This group defines what node(s) will be running Elasticsearch, acting as
121 | ingest nodes. For ROCK deployments, generally, everything that is data node eligible is also ingest node eligible.
122 |
123 | example:
124 |
125 | ```yml
126 | [es_ingest]
127 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
128 | rock02.rock.lan ansible_host=172.16.1.237
129 | rock03.rock.lan ansible_host=172.16.1.239
130 | ```
131 |
132 | #### Example Inventory
133 |
134 | Above, we broke out every section. Let's take a look at a cumulative example of
135 | all the above sections in the `hosts.ini` file for this demo:
136 |
137 | Simple example inventory:
138 | ```yml
139 | [rock]
140 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
141 | rock02.rock.lan ansible_host=172.16.1.237
142 | rock03.rock.lan ansible_host=172.16.1.239
143 |
144 | [web]
145 | rock03.rock.lan ansible_host=172.16.1.239
146 |
147 | [lighttpd:children]
148 | web
149 |
150 | [sensors:children]
151 | rock
152 |
153 | [bro:children]
154 | sensors
155 |
156 | [fsf:children]
157 | sensors
158 |
159 | [kafka:children]
160 | sensors
161 |
162 | [stenographer:children]
163 | sensors
164 |
165 | [suricata:children]
166 | sensors
167 |
168 | [filebeat:children]
169 | fsf
170 | suricata
171 |
172 | [zookeeper]
173 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
174 |
175 | [elasticsearch:children]
176 | es_masters
177 | es_data
178 | es_ingest
179 |
180 | [es_masters]
181 | # This group should only ever contain exactly 1 or 3 nodes!
182 | #simplerockbuild.simplerock.lan ansible_host=127.0.0.1 ansible_connection=local
183 | # Multi-node example #
184 | #elasticsearch0[1:3].simplerock.lan
185 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
186 | rock02.rock.lan ansible_host=172.16.1.237
187 | rock03.rock.lan ansible_host=172.16.1.239
188 |
189 | [es_data]
190 | #simplerockbuild.simplerock.lan ansible_host=127.0.0.1 ansible_connection=local
191 | # Multi-node example #
192 | #elasticsearch0[1:4].simplerock.lan
193 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
194 | rock02.rock.lan ansible_host=172.16.1.237
195 | rock03.rock.lan ansible_host=172.16.1.239
196 |
197 | [es_ingest]
198 | #simplerockbuild.simplerock.lan ansible_host=127.0.0.1 ansible_connection=local
199 | # Multi-node example #
200 | #elasticsearch0[1:4].simplerock.lan
201 | rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
202 | rock02.rock.lan ansible_host=172.16.1.237
203 | rock03.rock.lan ansible_host=172.16.1.239
204 |
205 | [elasticsearch:vars]
206 | # Disable all node roles by default
207 | node_master=false
208 | node_data=false
209 | node_ingest=false
210 |
211 | [es_masters:vars]
212 | node_master=true
213 |
214 | [es_data:vars]
215 | node_data=true
216 |
217 | [es_ingest:vars]
218 | node_ingest=true
219 |
220 | [docket:children]
221 | web
222 |
223 | [kibana:children]
224 | web
225 |
226 | [logstash:children]
227 | sensors
228 | ```
229 |
230 | If you have a more complex use case, here's a more extensive example that has:
231 |
232 | * 3 Elasticsearch master nodes
233 | * 4 Elasticsearch data nodes
234 | * 1 Elasticsearch coordinating node
235 | * 1 Logstash node
236 | * 2 ROCK sensors
237 |
238 | We also added a few more sections, `[logstash]` and `[elasticsearch]`. We did this because we're breaking Logstash out into its own server, as well as adding a coordinating node that will be part of Elasticsearch, but not a data, ingest, or master node.
239 |
240 | ```yml
241 | # Define all of our Ansible_hosts up front
242 | es-master-0.rock.lan ansible_host=192.168.1.5
243 | es-master-1.rock.lan ansible_host=192.168.1.6
244 | es-master-2.rock.lan ansible_host=192.168.1.7
245 | es-data-0.rock.lan ansible_host=192.168.1.8
246 | es-data-1.rock.lan ansible_host=192.168.1.9
247 | es-data-2.rock.lan ansible_host=192.168.1.10
248 | es-data-3.rock.lan ansible_host=192.168.1.11
249 | es-coord-0.rock.lan ansible_host=192.168.1.12
250 | ls-pipeline-0.rock.lan ansible_host=192.168.1.13
251 | rock-0.rock.lan ansible_host=192.168.1.14
252 | rock-1.rock.lan ansible_host=192.168.1.15
253 |
254 | [rock]
255 | rock-0.rock.lan
256 | rock-1.rock.lan
257 |
258 | [web]
259 | es-coord-0.rock.lan
260 |
261 | [lighttpd:children]
262 | web
263 |
264 | [sensors:children]
265 | rock
266 |
267 | [zeek:children]
268 | sensors
269 |
270 | [fsf:children]
271 | sensors
272 |
273 | [kafka:children]
274 | sensors
275 |
276 | [stenographer:children]
277 | sensors
278 |
279 | [suricata:children]
280 | sensors
281 |
282 | [filebeat:children]
283 | fsf
284 | suricata
285 |
286 | [zookeeper]
287 | rock-0.rock.lan
288 |
289 | [elasticsearch:children]
290 | es_masters
291 | es_data
292 | es_ingest
293 |
294 | [elasticsearch]
295 | es-coord-0.rock.lan
296 |
297 | [es_masters]
298 | # This group should only ever contain exactly 1 or 3 nodes!
299 | # Multi-node example #
300 | #elasticsearch0[1:3].simplerock.lan
301 | es-master-[0:2].rock.lan
302 |
303 | [es_data]
304 | # Multi-node example #
305 | #elasticsearch0[1:4].simplerock.lan
306 | es-data-[0:3].rock.lan
307 |
308 | [es_ingest]
309 | # Multi-node example #
310 | #elasticsearch0[1:4].simplerock.lan
311 | es-data-[0:3].rock.lan
312 |
313 | [elasticsearch:vars]
314 | # Disable all node roles by default
315 | node_master=false
316 | node_data=false
317 | node_ingest=false
318 |
319 | [es_masters:vars]
320 | node_master=true
321 |
322 | [es_data:vars]
323 | node_data=true
324 |
325 | [es_ingest:vars]
326 | node_ingest=true
327 |
328 | [docket:children]
329 | web
330 |
331 | [kibana:children]
332 | web
333 |
334 | [logstash]
335 | ls-pipeline-0.rock.lan
336 | ```
337 |
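Both examples carry the warning that `[es_masters]` should contain exactly 1 or 3 nodes. Before deploying, a quick sanity check can be scripted against the inventory; this is an illustrative sketch (not part of ROCK) that uses Python's `configparser` to count the hosts in that group:

```python
import configparser

def es_master_count(inventory_text: str) -> int:
    """Count the hosts listed in the [es_masters] inventory group."""
    # Ansible INI host lines have no value, so allow_no_value is required;
    # lines like "host ansible_host=..." parse as a key with everything
    # after the first "=" as the value, which is fine for counting.
    cp = configparser.ConfigParser(allow_no_value=True, delimiters=("=",))
    cp.read_string(inventory_text)
    return len(cp["es_masters"])

inventory = """\
[es_masters]
rock01.rock.lan ansible_host=127.0.0.1 ansible_connection=local
rock02.rock.lan ansible_host=172.16.1.237
rock03.rock.lan ansible_host=172.16.1.239
"""

count = es_master_count(inventory)
assert count in (1, 3), f"[es_masters] should have exactly 1 or 3 nodes, found {count}"
```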
338 | ### SSH Config
339 |
340 | After the inventory file entries are finalized, it's time to configure ssh in
341 | order for Ansible to communicate to all nodes during deployment. The ssh-config
342 | command will perform the following on all other nodes in the inventory:
343 |
344 | * add an authorized keys entry
345 | * add the user created at install to the sudoers file
346 |
347 | To configure ssh run: `sudo rock ssh-config`
348 |
349 | ### Update ROCK Configuration File
350 |
351 | There are a few last steps; both are in `/etc/rocknsm/config.yml`:
352 |
353 | 1. If you are doing an offline installation, you need to change `rock_online_install: True` to `rock_online_install: False`.
354 | 1. If you are going to have 2 different sensor components, you need to manually define your Zookeeper host by adding `kafka_zookeeper_host: <Zookeeper IP>` to the configuration file. Where this line is placed in the file doesn't matter.
355 |
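Using the lab addressing from this guide, and assuming `rock01` (the DM, `172.16.1.236`) is the Zookeeper host, that added line would look something like:

```yml
# Manually define the Zookeeper host for multi-node deployments:
kafka_zookeeper_host: 172.16.1.236
```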
356 | ## Multi-node Deployment
357 |
358 | Now, we're finally ready to deploy!
359 |
360 | ```
361 | sudo rock deploy
362 | ```
363 |
364 | ### Success
365 |
366 | Once the deployment is completed with the components you chose, you'll be
367 | congratulated with a success banner. Congratulations!
368 |
369 |
370 |
371 |
372 |
373 | We strongly recommend giving all of the services a fresh stop/start with `sudo rockctl restart`.
374 |
--------------------------------------------------------------------------------
/docs/deploy/single-node.md:
--------------------------------------------------------------------------------
1 | # Single Node Deployment
2 |
3 | Let's get started and deploy a single ROCK sensor. This is the most
4 | straightforward (and most common) way to deploy.
5 |
6 | The TUI is an interactive user experience that improves the configuration process,
7 | rather than manually editing a `.yml` file. To be clear, though, all of the
8 | selections made in the TUI are ultimately written out to `/etc/rocknsm/config.yml`.
9 |
10 |
11 | ## Requirements
12 |
13 | The following steps assume that you have done a fresh installation of ROCK using
14 | the [latest ISO](https://mirror.rocknsm.io/isos/stable/).
15 |
16 |
17 | ## Deploy using the TUI
18 |
19 | Let's walk through this menu line by line, starting with the first line that's
20 | already highlighted and ready to go. Start things up by running:
21 |
22 | $`sudo rock setup`
23 |
24 |
25 |
26 |
27 |
28 |
29 | ### Select Interfaces
30 |
31 | This is where we define what interfaces are used for what purposes. In this
32 | example there are 2 interfaces to work with:
33 |
34 | 1. `ens33` - currently connected, will use for remote management
35 | 1. `ens34` - not currently connected, intended for monitor interface
36 |
37 | Choose the interfaces you want for your use case, and the settings will be
38 | reviewed before exiting:
39 |
40 |
41 |
42 |
43 |
44 |
45 | ### Management IP
46 |
47 | Let's choose how the management IP address is defined. We can let DHCP decide, or
48 | set a static address:
49 |
50 |
51 |
52 |
53 |
54 | For this example we'll use DHCP.
55 |
56 |
57 | ### Set Hostname
58 |
59 | The next step allows you to set the local hostname. For this example we'll call
60 | this box `rock01.rock.lan`. We have seen issues where naming a system with just numbers and dots causes ID conflicts with Kafka, meaning data cannot get into ROCK. We're not entirely sure what causes that, but hostnames like `2.6.0` seem to trigger it...so, avoid names like that if you can.
61 |
62 |
63 |
64 |
65 |
66 |
67 | ### Offline/Online
68 |
69 | There are 2 options when selecting the source of installation packages:
70 |
71 | 1. `Yes = Online` - connects to upstream repositories for packages
72 | 1. `No = Offline` - uses local packages ( located in `/srv` )
73 |
74 |
75 |
76 |
77 |
78 |
79 |
80 | ### Choose Components
81 |
82 | Next up is selecting what components are installed during the deploy. These
83 | choices are given to provide flexibility in choosing only the services you want
84 | (or have the capability) to run.
85 |
86 | In the screen below you can see we've selected all the things.
87 |
88 |
89 |
90 |
91 |
92 |
93 | ### Choose Enabled Services
94 |
95 | After selecting what components are installed, we get a similar interface for
96 | selecting which of those services are _enabled_ (configured to start automatically
97 | on initial boot).
98 |
99 |
100 |
101 |
102 |
103 |
104 | ### Display Config
105 |
106 | This next selection gives the opportunity to pause and verify the configuration
107 | choices made and review if any changes need to be made.
108 |
109 |
110 |
111 |
112 |
113 |
114 | ### Write Config
115 |
116 | After reviewing the config, we can write out the changes to disk.
117 |
118 |
119 |
120 |
121 |
122 |
123 | ### Run Installer
124 |
125 | Finally, we can kick off our deployment and ROCK things out.
126 |
127 |
128 |
129 |
130 |
131 |
132 | ### Success
133 |
134 | Once the deployment is completed with the components you chose, you'll be
135 | congratulated with a success banner. Congratulations!
136 |
137 |
138 |
139 |
140 |
141 |
142 |
143 |
144 | We strongly recommend giving all of the services a fresh stop/start with `sudo rockctl restart`.
145 |
--------------------------------------------------------------------------------
/docs/deploy/terminology.md:
--------------------------------------------------------------------------------
1 | # Terminology
2 |
3 | ## Deployment Types
4 |
5 | Before getting too deep into the bits let's make sure we're using the same
6 | terminology for things. There are 2 main ways to deploy RockNSM:
7 |
8 | 1. Single Node
9 | 2. Multi Node
10 |
11 |
12 | ### Single Node
13 |
14 | A "Single Node Deployment" is used to deploy one _standalone_ ROCK sensor. This
15 | is the simplest scenario and the most common use case.
16 |
17 |
18 | ### Multi-Node
19 |
20 | A "Muti-Node Deployment" is used to interconnect a collection of ROCK sensors that
21 | have already been installed and can talk to each other on the same network.
22 |
23 | This requires a bit more work up front to deploy, but allows for a lot of
24 | scaling and flexibility to share workload across multiple boxes.
25 |
--------------------------------------------------------------------------------
/docs/img/admin-user.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/admin-user.jpg
--------------------------------------------------------------------------------
/docs/img/choose-components.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/choose-components.png
--------------------------------------------------------------------------------
/docs/img/choose-enabled.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/choose-enabled.png
--------------------------------------------------------------------------------
/docs/img/date-time.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/date-time.png
--------------------------------------------------------------------------------
/docs/img/display-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/display-config.png
--------------------------------------------------------------------------------
/docs/img/docket-getpcap.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/docket-getpcap.png
--------------------------------------------------------------------------------
/docs/img/docket-submit.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/docket-submit.png
--------------------------------------------------------------------------------
/docs/img/docket.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/docket.png
--------------------------------------------------------------------------------
/docs/img/favicon.ico:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/favicon.ico
--------------------------------------------------------------------------------
/docs/img/filebeat-pipeline.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/filebeat-pipeline.png
--------------------------------------------------------------------------------
/docs/img/install-source.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/install-source.png
--------------------------------------------------------------------------------
/docs/img/install_banner.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/install_banner.png
--------------------------------------------------------------------------------
/docs/img/mgmt-ip-dhcp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/mgmt-ip-dhcp.png
--------------------------------------------------------------------------------
/docs/img/network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/network.png
--------------------------------------------------------------------------------
/docs/img/rock-diagram-new.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock-diagram-new.png
--------------------------------------------------------------------------------
/docs/img/rock-flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock-flow.png
--------------------------------------------------------------------------------
/docs/img/rock-initialboot.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock-initialboot.jpg
--------------------------------------------------------------------------------
/docs/img/rock-user.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock-user.jpg
--------------------------------------------------------------------------------
/docs/img/rock.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock.png
--------------------------------------------------------------------------------
/docs/img/rock_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/rock_logo.png
--------------------------------------------------------------------------------
/docs/img/run-installer.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/run-installer.png
--------------------------------------------------------------------------------
/docs/img/select-interfaces.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/select-interfaces.png
--------------------------------------------------------------------------------
/docs/img/set-hostname.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/set-hostname.png
--------------------------------------------------------------------------------
/docs/img/tui-start.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/tui-start.png
--------------------------------------------------------------------------------
/docs/img/write-config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rocknsm/rock-docs/7d8d13430436816c3974ac0e08879dbc20192a41/docs/img/write-config.png
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | # Welcome
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
10 | [RockNSM](https://rocknsm.io) is the premier sensor platform for Network Security
11 | Monitoring (NSM) hunting and incident response (IR) operations. ROCK is the
12 | open-source security distribution that prioritizes being:
13 |
14 | - **Reliable**
15 | - **Scalable**
16 | - **Secure**
17 |
18 | Above all else, ROCK exists to __**aid the analyst**__ in the fight to find the
19 | adversary.
20 |
21 | ---
22 |
23 | ## Quickstart
24 |
25 | If you're already familiar with building sensors you can jump straight into things in the
26 | [Quickstart Guide](./quickstart.md).
27 |
28 | ## Latest
29 |
30 | See the [Releases](reference/latest.md) page for the latest info on ROCK 2.5.
31 |
32 |
33 | ## Contents
34 |
35 |
36 |
37 | [About](about/what_is_it.md) - project overview / background / dataflow
38 |
39 | [Install](install/requirements.md) - requirements / install media / installation
40 |
41 | [Configure](configure/setup-tui.md) - configuring for your use case
42 |
43 | [Deploy](deploy/terminology.md) - deployment via Ansible playbooks
44 |
45 | [Usage](usage/index.md) - basic usage overview and troubleshooting
46 |
47 | [Services](services/index.md) - component directory and management info
48 |
49 | [Reference](reference/tutorials.md) - concept / design, components / dataflow
50 |
--------------------------------------------------------------------------------
/docs/install/install.md:
--------------------------------------------------------------------------------
1 | # Installation Guide
2 |
3 | Let's just hold on for [one hot minute](https://youtu.be/UIyMhzU655o) before
4 | installing, just in case you like to skip around.
5 |
6 | It's dangerous to deploy sensors alone... take these _[minimum requirements](./requirements.md)_:
7 |
8 | - 8GB RAM
9 | - 4 Physical Cores
10 | - 256GB disk
11 | - 2 NICs
12 | - active connection on mgmt port
13 |
14 |
15 | ## Network Connection
16 | This is a critical setup point:
17 |
18 | > Before booting to the ISO, connect the network interface that you intend to
19 | use to remotely manage ROCK.
20 |
21 | Why? During install, ROCK will see the network interface with an IP address and
22 | default gateway and designate it as the _**management**_ port. So plug into that
23 | interface and boot to your USB drive.
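If you're not sure which interface will win that designation, you can check from any Linux shell before booting the installer. A quick sketch (standard `iproute2` commands):

```shell
# Show which interface currently holds the default route -- the one ROCK
# will designate as the management interface.
ip route show default

# List all interfaces and their addresses in brief form.
ip -brief addr show
```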
24 |
25 |
26 | ## Install Types
27 | ROCK works with both legacy BIOS and UEFI booting. Once booted from the USB, you are presented with 2 primary installation paths:
28 |
29 | 1. **Automated**
30 | 1. **Custom**
31 |
32 |
33 |
34 |
35 |
36 |
37 |
38 | ### Automated
39 | The "Automated" option is intended to serve as a _**starting point**_ that gets you up and running quickly. It uses the CentOS Anaconda installer to make some of the harder decisions for you by skipping over many options. It makes a best guess at how to use resources -- most notably how to manage available disks.
40 |
41 | Bottom line: think of this as a product quickstart mode, perfect for installing on a VM or other temporary hardware. It is _**not**_ for production sensor deployment.
42 |
43 |
44 | > **For the rest of this install guide, we'll work through the more detail-oriented "Custom Install of ROCK" option.**
45 |
46 |
47 | ### Custom
48 | The "Custom" option allows for more customization of a ROCK installation. This is especially helpful when you're working with multiple disks or a large amount of storage on a single disk. The Custom option is recommended for production environments in order to get granular control over how disk space is allocated.
49 |
50 |
51 | #### Disk Allocation
52 | Configuring disk and storage is a deep topic on its own, but let's talk through a few examples to get started:
53 |
54 |
55 | ##### Stenographer
56 | A common gotcha occurs when you want full packet capture (via [Stenographer](../services/stenographer.md)) but haven't given it a separate partition. Stenographer is great at managing its own disk space (it starts to overwrite the oldest data at 90% capacity), but that doesn't cut it when it's sharing the same mount point as Bro, Suricata, and the other tools that generate data in ROCK.
57 |
58 | Best practice is to create a separate `/data/stenographer` partition so that a full packet-capture disk doesn't degrade the rest of the stack. For example, Elasticsearch will (rightfully) lock indexes into a read-only state when its disk fills up, in order to keep things from crashing hard.
59 |
60 |
61 | ##### System Logs
62 | Another useful partition to create is `/var/log` to separate system log files from the rest of the system.
63 |
64 |
65 | ##### Example Table
66 | Below is a good starting point when partitioning
67 |
68 | | MOUNT POINT | USAGE | SIZE |
69 | | ---------- | ----------------- | -------- |
70 | | **SYSTEM** | **SYSTEM** | **SYSTEM** |
71 | | / | root filesystem | 15 GiB |
72 | | /boot | legacy boot files | 512 MiB |
73 | | /boot/efi | uefi boot files | 512 MiB |
74 | | swap | memory shortage | ~8 GiB+ |
75 | | **DATA** | **DATA** | **DATA** |
76 | | /var/log | system log files | ~15 GiB |
77 | | /home | user home dirs | ~20 GiB |
78 | | /data | data partition | ~ GiB |
79 | | /data/stenographer | steno partition | ~ GiB |
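As a rough illustration, the table above maps to kickstart-style partition directives like the following. This is an assumption-laden sketch, not the ROCK installer's actual kickstart; sizes, filesystem choices, and the Stenographer allocation are placeholders to adjust for your disks.

```
# Illustrative kickstart-style layout (sizes in MiB) -- adjust to your disks
part /boot              --fstype=xfs --size=512
part /boot/efi          --fstype=efi --size=512
part swap               --size=8192
part /                  --fstype=xfs --size=15360
part /var/log           --fstype=xfs --size=15360
part /home              --fstype=xfs --size=20480
part /data/stenographer --fstype=xfs --size=102400
part /data              --fstype=xfs --grow
```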
80 |
81 |
82 | For more information to assist with the partitioning process, see the [RHEL guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-disk-partitioning-setup-x86#sect-custom-partitioning-x86). It may also be more self-explanatory to click “automatic partitions” first and then modify the result accordingly.
83 |
84 |
85 |
86 | #### Date & Time
87 | `UTC` is generally preferred for logging data, since timestamps from anywhere in the world will then sort correctly without calculating offsets or daylight saving time. That said, Kibana will present the Bro logs according to your timezone (as set in the browser). The Bro logs themselves (i.e. in `/data/bro/logs/`) record [epoch time](https://en.wikipedia.org/wiki/Unix_time) and will be written in UTC regardless of the system timezone.
88 |
89 |
90 |
91 |
92 |
93 | Bro includes a utility for parsing these on the command line called `bro-cut`. It can be used to print human-readable timestamps in either the local sensor timezone or UTC. You can also give it a custom format string to specify what you'd like displayed.
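To get a feel for what an epoch timestamp looks like, you can convert one by hand with GNU `date` (the timestamp below is an arbitrary example; `bro-cut -d` and `bro-cut -u` perform the same conversion across whole log files):

```shell
# Convert an epoch timestamp (as found in Bro logs) to human-readable UTC.
ts=1550000000
date -u -d "@${ts}" +"%Y-%m-%d %H:%M:%S UTC"
# → 2019-02-12 19:33:20 UTC
```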
94 |
95 |
96 | #### Network & Hostname
97 |
98 | Before beginning the install process it's best to connect the interface you've selected to be the **management interface**. Here's the order of events:
99 |
100 | - ROCK will initially look for an interface with a default gateway and treat that interface as the MGMT INTERFACE
101 | - All remaining interfaces will be treated as MONITOR INTERFACES
102 |
103 | 1. Ensure that the interface you intend to use for MGMT has been turned on and has an IP address
104 | 2. Set the hostname of the sensor in the bottom left corner
105 | - this hostname will populate the Ansible inventory file in `/etc/rocknsm/hosts.ini`
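As an illustration, a single-node inventory keyed on the hostname `sensor1` might look roughly like the fragment below. The group name and variables shown are assumptions for illustration; the actual generated `/etc/rocknsm/hosts.ini` will differ.

```
# Illustrative inventory fragment -- not the exact generated file
[rock]
sensor1 ansible_connection=local
```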
106 |
107 |
108 |
109 |
110 |
111 | > Do not use a hostname that contains an underscore, for example `rock_2-5`. This will cause deployment failures!
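A quick shell check can catch this before it bites; the hostname below is just an example of a bad value:

```shell
# Reject hostnames containing underscores before using them for a deploy.
hostname="rock_2-5"   # example of an INVALID hostname
case "$hostname" in
  *_*) echo "invalid: hostname contains an underscore" ;;
  *)   echo "ok" ;;
esac
# → invalid: hostname contains an underscore
```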
112 |
113 |
114 | #### User Creation
115 | ROCK is configured with the root user disabled. We recommend that you leave it that way. Once you've kicked off the install, click **User Creation** at the next screen (shown above) and complete the required fields to set up a non-root admin user.
116 |
117 |
118 |
119 |
120 |
121 | > If this step is not completed during install, do not fear: you will be prompted to create this account after first login.
122 |
123 |
124 | #### Wrapping Up
125 |
126 | Once the install is complete you will be able to click **Finish Installation**
127 | and then reboot.
128 |
129 | You can then accept the license agreement: `c`(ontinue) + `ENTER`
130 |
131 | The `sshd` service is enabled at startup, so if you intend to complete the next
132 | steps remotely, note the management IP address now by running `ip a`. Your newly
133 | minted sensor is ready to be configured for your environment.
134 |
--------------------------------------------------------------------------------
/docs/install/media.md:
--------------------------------------------------------------------------------
1 | # Install Media
2 |
3 | If there’s one thing that should be carried away from the installation section, it's this:
4 |
5 | RockNSM has been designed to be used as a security distribution, not a package or a suite of tools. It’s built from the ground up and the ONLY SUPPORTED INSTALL IS THE OFFICIAL ISO.
6 |
7 | Yes, one can clone the project and run the Ansible playbooks on some bespoke CentOS build, and you may have great success... but you've **voided the warranty**. Providing a clean product that makes supporting submitted issues practical is important to us, and the ISO addresses most use cases.
8 |
9 |
10 | ## Download
11 |
12 | The latest ROCK build is available at [download.rocknsm.io](https://download.rocknsm.io/isos/stable/).
13 |
14 |
15 | ## Applying the ISO
16 |
17 | Now it's time to create a bootable USB drive with the fresh ROCK build. Let's look at a few options.
18 |
19 | ### Linux
20 |
21 | #### CLI
22 |
23 | If you live in the terminal, use `dd` to apply the image. These instructions assume a RHEL-based system; if you're in a different environment, Google is your friend.
24 |
25 | > Use **CAUTION** with these commands by **ENSURING** you're writing to the correct disk / partition!
26 |
27 | 1. once you've inserted a USB get the drive ID:
28 | `lsblk`
29 |
30 | 2. unmount the target drive so you can write to it:
31 | `umount /dev/disk#`
32 |
33 | 3. write the image to drive:
34 | `sudo dd bs=8M if=path/to/rockiso of=/dev/disk#`
35 |
36 | #### GUI
37 |
38 | If you don't want to apply the image in the terminal, there are plenty of great tools to do this with a graphical interface:
39 |
40 | - [Etcher](http://etcher.io) - our go-to choice (cross-platform)
41 | - [SD Card Formatter](https://www.sdcard.org/downloads/formatter_4/) - works well
42 | - [YUMI](https://www.pendrivelinux.com/yumi-multiboot-usb-creator/) - create multibooting disk
43 |
44 | > **Note**: while we do not want to dictate what tool you use, we've received reports of failures when using Rufus, a popular Windows-based tool, to make bootable media. We reported this on Feb 26: https://twitter.com/rocknsm/status/1100517122021748737. The above GUI tools have been proven to work.
45 |
46 | ### macOS
47 |
48 | #### CLI
49 |
50 | For the terminal, we'll once again use `dd`, but with a few differences from the linux instructions above.
51 |
52 | > Use **CAUTION** with these commands by **ENSURING** you're writing to the correct disk / partition!
53 |
54 | 1. once you've inserted a USB get the drive ID:
55 | `diskutil list`
56 |
57 | 2. unmount the target drive so you can write to it:
58 | `diskutil unmount /dev/disk#`
59 |
60 | 3. write the image to drive:
61 | `sudo dd bs=8m if=path/to/rockiso of=/dev/disk#`
62 |
63 | #### GUI
64 |
65 | - [Etcher](http://etcher.io) - our de facto tool
66 |
67 |
68 | ### Windows
69 |
70 | When applying the ISO on a Windows box, our experience is entirely with graphical applications. Here's a list of what works well:
71 |
72 | - [Etcher](http://etcher.io) - our de facto tool
73 | - [Win32 Disk Imager](https://sourceforge.net/projects/win32diskimager/)
74 | - [SD Card Formatter](https://www.sdcard.org/downloads/formatter_4/)
75 | - [Rufus](https://rufus.ie/) - (we recently [encountered an issue](https://twitter.com/rocknsm/status/1100517122021748737) with Rufus
76 | and v2.3 -- 2/26/19)
77 |
--------------------------------------------------------------------------------
/docs/install/requirements.md:
--------------------------------------------------------------------------------
1 | # Requirements
2 |
3 | Installation of ROCK can be broken down into three main steps:
4 |
5 | 1. Install
6 | 1. Configure
7 | 1. Deploy
8 |
9 | Before that, let's cover what you're going to need before starting.
10 |
11 |
12 | ## Sensor Hardware
13 |
14 | The analysis of live network data is a resource intensive task, so the higher
15 | the IOPS the better. Here's the bottom line:
16 |
17 | > **If you throw hardware at ROCK it will use it, and use it well.**
18 |
19 |
20 | ### Minimum Specs
21 |
22 |
23 | | RESOURCE | RECOMMENDATION |
24 | | ----------- | ------------------ |
25 | | CPU | **4+** physical cores |
26 | | Memory | **8GB** RAM minimum, the more the better :) |
27 | | Storage | **256GB**, with 200+ of that dedicated to `/data`, SSD preferred |
28 | | Network | **2 gigabit interfaces**, one for management and one for collection |
29 |
30 |
31 | ## Install Media
32 |
33 | - ROCK install image - download `.iso` **[here](https://download.rocknsm.io/isos/stable/)**
34 | - 8GB+ capacity USB drive - to apply install image
35 | - BIOS settings to allow booting from mounted USB drive
36 |
37 |
38 | ## Network Connection
39 |
40 | ROCK is first and foremost a _**passive**_ network sensor and is designed with
41 | the assumption that there may not be a network connection available during
42 | install. There's some built-in flexibility with deploying ROCK, and this will
43 | be clarified more in the next sections.
44 |
45 |
46 |
47 | > NOTE: Check out the [ROCK@home Video Series](https://www.youtube.com/channel/UCUD0VHMKqPkdnJshsngZq9Q), which goes into detail on deploying ROCK, including hardware choices for both sensor and network equipment.
48 |
--------------------------------------------------------------------------------
/docs/install/vm_installation.md:
--------------------------------------------------------------------------------
1 | # VM Build Guide
2 |
3 | The following walkthrough is based on VMware Fusion, but serves well as a general template to follow.
4 |
5 | > The more resources you give ROCK, the happier it'll be.
6 |
7 | ## New VM
8 |
9 | * in the top left corner click "*Add > New... then Custom Machine*"
10 | * select the "*Linux > RedHat Enterprise 64 template*"
11 | * create new virtual disk
12 | * name your VM, save
13 |
14 | Let's customize some settings; change these based on your hardware.
15 |
16 | * **Processors & Memory**
17 | * Processors - 4 cores
18 | * Memory - 8192MB (8GB)
19 |
20 |
21 | * **Hard Disk**
22 | * increase the disk to 20GB
23 | * customize settings
24 | * save as name
25 |
26 |
27 | * **Network Adapter**
28 | * By default the vm is created with one interface - this will be for management.
29 |   * let's add a second (listening) interface:
30 | * add device (top right), net adapter, add, “private to my mac”
31 |
32 |
33 | * **Boot Device**
34 | * click CD/DVD (IDE)
35 | * check the "Connect CD/DVD Drive" box
36 | * expand advanced options and browse to the latest ROCK iso
37 |
38 |
39 | * **Run Auto Installer**
40 |
41 | Once the above changes are made, it's time to install the OS. Let's run the "Auto Install", but before we do, there are some customizations that can be done for VMs:
42 |
43 | * click the "*Start Up*" button while holding the `esc` key
44 |
45 | * hit `tab` for full config options
46 |
47 | * add the following values, separated by spaces:
48 | * `biosdevname=0`
49 |
50 | * `net.ifnames=0` - this will ensure you get interface names like `eth0`. If you have physical hardware, we _highly_ recommend that you do not use this option
51 |
52 | * `vga=773` - improves video resolution issues
53 |
54 | * **ENTER**, and ROCK install script will install
55 | * create _**admin**_ user acct
56 | * **REBOOT** when install process is complete
57 |
58 | TIP: The `root` account is locked by default and the user account you created has `sudo` access.
59 |
60 | ### Updating
61 |
62 | > NOTE: VMware Fusion will allow local ssh out of the box. If you're using Virtualbox you'll need to set up local [ssh port forwarding](https://nsrc.org/workshops/2014/btnog/raw-attachment/wiki/Track2Agenda/ex-virtualbox-portforward-ssh.htm).
63 |
64 | Log in with the admin credentials used during the install process, and let's get this box current:
65 | ```
66 | sudo yum update -y && reboot
67 | ```
68 |
--------------------------------------------------------------------------------
/docs/quickstart.md:
--------------------------------------------------------------------------------
1 | # ROCK Quickstart
2 |
3 | This is a _**hasty**_ guide to get right into building your very own sensor, intended
4 | for users who are already familiar with building sensors and know what they're doing.
5 |
6 | If you're not an NSM ninja, you can start building from the beginning of the full docs [here](install/requirements.md).
7 |
8 |
9 | ### Getting the Bits
10 | Download the latest ROCK build from [download.rocknsm.io](https://download.rocknsm.io/isos/stable/)
11 | and create a bootable disk using your favorite burning utility.
12 |
13 |
14 | ### Install Rock
15 | 1. If you have a network connection available, plug it into the management port
16 | 1. Select the **Custom Install** at initial boot
17 | 1. Configure disk partitions to suit your needs
18 | 1. Create an administrative user
19 | 1. Reboot and log back in as admin user
20 |
21 |
22 | ### Configure & Deploy ROCK
23 | 1. Familiarize with the updated ROCK commands / options by running $`rock`
24 | 1. Run $`sudo rock setup`
25 | 1. Follow each step of the setup TUI by the numbers
26 | 1. Deploy your sensor by executing the last setup option "Run Installer"
27 |
28 |
29 | ---
30 |
31 | ## Next Up
32 | A good next step would be the [usage](./usage/index.md) section for details on doing basic function checks before plugging into the stream.
33 |
--------------------------------------------------------------------------------
/docs/reference/changelog.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | ## 2.5 -- 2020-02-21
4 |
5 | - New: ROCK has moved to the ECS standard
6 | - New: Out of the box support for XFS Disk Quotas
7 | - New: Updated ROCK Dashboards
8 | - Fix: Various visualization issues in ROCK dashboard
9 | - Fix: (x509) Certificate issues resolved
10 | - Update: Elastic Stack components to version 7.6
11 | - Update: Zeek to version 3
12 | - Update: Suricata to version 5
13 |
14 |
15 | ## 2.4 -- 2019-04-02
16 |
17 | - New: Text User Interface (TUI) for initial host setup
18 | - New: ROCK manager utility
19 | - New: Automated Testing Infrastructure
20 | - Fixes: 95 closed issues
21 | - Upgrade: Elastic 6.6 -> 6.7.1
22 | - Upgrade: Suricata 4.1.1 -> 4.1.3
23 | - Upgrade: Zookeeper 3.4.11 -> 3.4.13
24 |
25 |
26 | ## 2.3 -- 2019-02-25
27 |
28 | - New: Add ability to do multi-host deployment of sensor + data tiers (#339)
29 | - New: Integrate Docket into Kibana by default
30 | - New: Improvements and additional Kibana dashboards
31 | - Fixes: issue with Bro failing when monitor interface is down (#343)
32 | - Fixes: issue with services starting that shouldn’t (#346)
33 | - Fixes: race condition on loading dashboards into Kibana (#356)
34 | - Fixes: configuration for Docket allowing serving from non-root URI (#361)
35 | - Change: bro log retention value to one week rather than forever (#345)
36 | - Change: Greatly improve documentation (#338)
37 | - Change: Reorganize README (#308)
38 | - Change: Move ECS to rock-dashboards repo (#305)
39 | - Change: Move RockNSM install paths to filesystem hierarchy standard locations (#344)
40 |
41 |
42 | ## 2.2 -- 2018-10-26
43 |
44 | - Feature: rockctl command to quickly check or change services
45 | - Feature: Docket, a REST API and web UI to query multiple stenographer instances, now using TCP port 443
46 | - Optimization: Kibana is now running on TCP port 443
47 | - Feature: Added Suricata-Update to manage Suricata signatures
48 | - Feature: GPG signing of packages and repo metadata
49 | - Feature: Added functional tests using testinfra
50 | - Feature: Initial support of Elastic Common Schema
51 | - Feature: Elastic new Features
52 | - Canvas
53 | - Elastic Maps Service
54 | - Feature: Include full Elasticstack (with permission) including features formerly known as X-Pack:
55 | - Graph
56 | - Machine Learning
57 | - Reporting
58 | - Security
59 | - Monitoring
60 | - Alerting
61 | - Elasticsearch SQL
62 | - Optimization: Elastic dashboards, mappings, and Logstash config moved to module-like construct
63 | - Upgrade: CentOS is updated to 7.5 (1804)
64 | - Upgrade: Elastic Stack is updated to 6.4.2
65 | - Upgrade: Suricata is updated to 4.0.5
66 | - Upgrade: Bro is updated to 2.5.4
67 |
68 |
69 | ## 2.1 -- 2018-08-23
70 |
71 |
81 |
--------------------------------------------------------------------------------
/docs/reference/contribution.md:
--------------------------------------------------------------------------------
1 | # How to Pitch In
2 |
3 | ROCK is an open source project and would not be what it is without a community of
4 | users and contributors. There are many ways to contribute, so take a look at how:
5 |
6 |
7 | ## General Support
8 | For quick questions and deployment support, please join the
9 | [RockNSM Community](https://community.rocknsm.io).
10 |
11 |
12 | ## Github Contribution
13 |
14 | ### Issues
15 | Before you submit an issue, please search the issue tracker, as there may already
16 | be an issue for your problem and its discussion might just solve things. We
17 | want to resolve issues as soon as possible, but in order to do so please provide
18 | as much detail about the environment and situation as possible.
19 |
20 |
21 | ### Pull Requests
22 | In order to issue a PR, fork the project, make your changes, and add descriptive
23 | messages to your commits (this is a mandatory requirement for your PR to be
24 | considered). Submit a PR to the main [rock repo](https://github.com/rocknsm/rock).
25 | If changes or additions are suggested, edit your fork and this will automatically
26 | update your PR.
27 |
--------------------------------------------------------------------------------
/docs/reference/latest.md:
--------------------------------------------------------------------------------
1 | # Latest
2 |
3 | ## Release 2.5
4 |
5 | We are pleased to announce that ROCK 2.5 is out! Here's a quick overview of
6 | some of the latest additions:
7 |
8 |
9 | NEW - ROCK has moved to the [ECS](https://github.com/elastic/ecs) standard!
10 |
11 | - legacy pipeline is still available (on ISO install)
12 | - aliases are in place to assist backwards compatibility
13 |
14 | NEW - Out of the box support for XFS Disk Quotas
15 |
16 | - puts quota on `/data` or falls back to `/`
17 | - works for both automated and manual installs
18 | - standalone playbook to set up quotas on installs other than the ISO download (reboot req.)
19 | - the amount of disk given to each service is allocated by weight
20 |
21 | NEW - Updated ROCK Dashboards
22 |
23 | - available in ISO install
24 | - incorporating Vega into dashboards
25 |
26 | FIX - various visualization issues in ROCK dashboard
27 |
28 | FIX - x509 certificate issues resolved
29 |
30 | UPDATE - All Elastic Stack components to [v7.6](https://www.elastic.co/blog/elasticsearch-7-6-0-released)
31 |
32 | NEW - Updated Zeek to version 3
33 |
34 | NEW - Updated Suricata to version 5
--------------------------------------------------------------------------------
/docs/reference/license.md:
--------------------------------------------------------------------------------
1 | # License
2 |
3 | ```
4 |
5 |
6 | Apache License
7 | Version 2.0, January 2004
8 | http://www.apache.org/licenses/
9 |
10 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
11 |
12 | 1. Definitions.
13 |
14 | "License" shall mean the terms and conditions for use, reproduction,
15 | and distribution as defined by Sections 1 through 9 of this document.
16 |
17 | "Licensor" shall mean the copyright owner or entity authorized by
18 | the copyright owner that is granting the License.
19 |
20 | "Legal Entity" shall mean the union of the acting entity and all
21 | other entities that control, are controlled by, or are under common
22 | control with that entity. For the purposes of this definition,
23 | "control" means (i) the power, direct or indirect, to cause the
24 | direction or management of such entity, whether by contract or
25 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
26 | outstanding shares, or (iii) beneficial ownership of such entity.
27 |
28 | "You" (or "Your") shall mean an individual or Legal Entity
29 | exercising permissions granted by this License.
30 |
31 | "Source" form shall mean the preferred form for making modifications,
32 | including but not limited to software source code, documentation
33 | source, and configuration files.
34 |
35 | "Object" form shall mean any form resulting from mechanical
36 | transformation or translation of a Source form, including but
37 | not limited to compiled object code, generated documentation,
38 | and conversions to other media types.
39 |
40 | "Work" shall mean the work of authorship, whether in Source or
41 | Object form, made available under the License, as indicated by a
42 | copyright notice that is included in or attached to the work
43 | (an example is provided in the Appendix below).
44 |
45 | "Derivative Works" shall mean any work, whether in Source or Object
46 | form, that is based on (or derived from) the Work and for which the
47 | editorial revisions, annotations, elaborations, or other modifications
48 | represent, as a whole, an original work of authorship. For the purposes
49 | of this License, Derivative Works shall not include works that remain
50 | separable from, or merely link (or bind by name) to the interfaces of,
51 | the Work and Derivative Works thereof.
52 |
53 | "Contribution" shall mean any work of authorship, including
54 | the original version of the Work and any modifications or additions
55 | to that Work or Derivative Works thereof, that is intentionally
56 | submitted to Licensor for inclusion in the Work by the copyright owner
57 | or by an individual or Legal Entity authorized to submit on behalf of
58 | the copyright owner. For the purposes of this definition, "submitted"
59 | means any form of electronic, verbal, or written communication sent
60 | to the Licensor or its representatives, including but not limited to
61 | communication on electronic mailing lists, source code control systems,
62 | and issue tracking systems that are managed by, or on behalf of, the
63 | Licensor for the purpose of discussing and improving the Work, but
64 | excluding communication that is conspicuously marked or otherwise
65 | designated in writing by the copyright owner as "Not a Contribution."
66 |
67 | "Contributor" shall mean Licensor and any individual or Legal Entity
68 | on behalf of whom a Contribution has been received by Licensor and
69 | subsequently incorporated within the Work.
70 |
71 | 2. Grant of Copyright License. Subject to the terms and conditions of
72 | this License, each Contributor hereby grants to You a perpetual,
73 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
74 | copyright license to reproduce, prepare Derivative Works of,
75 | publicly display, publicly perform, sublicense, and distribute the
76 | Work and such Derivative Works in Source or Object form.
77 |
78 | 3. Grant of Patent License. Subject to the terms and conditions of
79 | this License, each Contributor hereby grants to You a perpetual,
80 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
81 | (except as stated in this section) patent license to make, have made,
82 | use, offer to sell, sell, import, and otherwise transfer the Work,
83 | where such license applies only to those patent claims licensable
84 | by such Contributor that are necessarily infringed by their
85 | Contribution(s) alone or by combination of their Contribution(s)
86 | with the Work to which such Contribution(s) was submitted. If You
87 | institute patent litigation against any entity (including a
88 | cross-claim or counterclaim in a lawsuit) alleging that the Work
89 | or a Contribution incorporated within the Work constitutes direct
90 | or contributory patent infringement, then any patent licenses
91 | granted to You under this License for that Work shall terminate
92 | as of the date such litigation is filed.
93 |
94 | 4. Redistribution. You may reproduce and distribute copies of the
95 | Work or Derivative Works thereof in any medium, with or without
96 | modifications, and in Source or Object form, provided that You
97 | meet the following conditions:
98 |
99 | (a) You must give any other recipients of the Work or
100 | Derivative Works a copy of this License; and
101 |
102 | (b) You must cause any modified files to carry prominent notices
103 | stating that You changed the files; and
104 |
105 | (c) You must retain, in the Source form of any Derivative Works
106 | that You distribute, all copyright, patent, trademark, and
107 | attribution notices from the Source form of the Work,
108 | excluding those notices that do not pertain to any part of
109 | the Derivative Works; and
110 |
111 | (d) If the Work includes a "NOTICE" text file as part of its
112 | distribution, then any Derivative Works that You distribute must
113 | include a readable copy of the attribution notices contained
114 | within such NOTICE file, excluding those notices that do not
115 | pertain to any part of the Derivative Works, in at least one
116 | of the following places: within a NOTICE text file distributed
117 | as part of the Derivative Works; within the Source form or
118 | documentation, if provided along with the Derivative Works; or,
119 | within a display generated by the Derivative Works, if and
120 | wherever such third-party notices normally appear. The contents
121 | of the NOTICE file are for informational purposes only and
122 | do not modify the License. You may add Your own attribution
123 | notices within Derivative Works that You distribute, alongside
124 | or as an addendum to the NOTICE text from the Work, provided
125 | that such additional attribution notices cannot be construed
126 | as modifying the License.
127 |
128 | You may add Your own copyright statement to Your modifications and
129 | may provide additional or different license terms and conditions
130 | for use, reproduction, or distribution of Your modifications, or
131 | for any such Derivative Works as a whole, provided Your use,
132 | reproduction, and distribution of the Work otherwise complies with
133 | the conditions stated in this License.
134 |
135 | 5. Submission of Contributions. Unless You explicitly state otherwise,
136 | any Contribution intentionally submitted for inclusion in the Work
137 | by You to the Licensor shall be under the terms and conditions of
138 | this License, without any additional terms or conditions.
139 | Notwithstanding the above, nothing herein shall supersede or modify
140 | the terms of any separate license agreement you may have executed
141 | with Licensor regarding such Contributions.
142 |
143 | 6. Trademarks. This License does not grant permission to use the trade
144 | names, trademarks, service marks, or product names of the Licensor,
145 | except as required for reasonable and customary use in describing the
146 | origin of the Work and reproducing the content of the NOTICE file.
147 |
148 | 7. Disclaimer of Warranty. Unless required by applicable law or
149 | agreed to in writing, Licensor provides the Work (and each
150 | Contributor provides its Contributions) on an "AS IS" BASIS,
151 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
152 | implied, including, without limitation, any warranties or conditions
153 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
154 | PARTICULAR PURPOSE. You are solely responsible for determining the
155 | appropriateness of using or redistributing the Work and assume any
156 | risks associated with Your exercise of permissions under this License.
157 |
158 | 8. Limitation of Liability. In no event and under no legal theory,
159 | whether in tort (including negligence), contract, or otherwise,
160 | unless required by applicable law (such as deliberate and grossly
161 | negligent acts) or agreed to in writing, shall any Contributor be
162 | liable to You for damages, including any direct, indirect, special,
163 | incidental, or consequential damages of any character arising as a
164 | result of this License or out of the use or inability to use the
165 | Work (including but not limited to damages for loss of goodwill,
166 | work stoppage, computer failure or malfunction, or any and all
167 | other commercial damages or losses), even if such Contributor
168 | has been advised of the possibility of such damages.
169 |
170 | 9. Accepting Warranty or Additional Liability. While redistributing
171 | the Work or Derivative Works thereof, You may choose to offer,
172 | and charge a fee for, acceptance of support, warranty, indemnity,
173 | or other liability obligations and/or rights consistent with this
174 | License. However, in accepting such obligations, You may act only
175 | on Your own behalf and on Your sole responsibility, not on behalf
176 | of any other Contributor, and only if You agree to indemnify,
177 | defend, and hold each Contributor harmless for any liability
178 | incurred by, or claims asserted against, such Contributor by reason
179 | of your accepting any such warranty or additional liability.
180 |
181 | END OF TERMS AND CONDITIONS
182 |
183 | APPENDIX: How to apply the Apache License to your work.
184 |
185 | To apply the Apache License to your work, attach the following
186 | boilerplate notice, with the fields enclosed by brackets "[]"
187 | replaced with your own identifying information. (Don't include
188 | the brackets!) The text should be enclosed in the appropriate
189 | comment syntax for the file format. We also recommend that a
190 | file or class name and description of purpose be included on the
191 | same "printed page" as the copyright notice for easier
192 | identification within third-party archives.
193 |
194 | Copyright [yyyy] [name of copyright owner]
195 |
196 | Licensed under the Apache License, Version 2.0 (the "License");
197 | you may not use this file except in compliance with the License.
198 | You may obtain a copy of the License at
199 |
200 | http://www.apache.org/licenses/LICENSE-2.0
201 |
202 | Unless required by applicable law or agreed to in writing, software
203 | distributed under the License is distributed on an "AS IS" BASIS,
204 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
205 | See the License for the specific language governing permissions and
206 | limitations under the License.
207 | ```
208 |
--------------------------------------------------------------------------------
/docs/reference/support.md:
--------------------------------------------------------------------------------
1 | # Support
2 |
3 | ## Community Support
4 | For community support using ROCK, we encourage you to check out the fine members over at the [RockNSM Community Site](https://community.rocknsm.io/). From there you'll be able to connect with other users of ROCK as well as the developers and maintainers of the project. This is by far the fastest way to connect with others and to start troubleshooting any issues you may be experiencing.
5 |
6 | ## GitHub Issue
7 | In the event that you identify an issue with the project, please feel free to log it using the [GitHub Issue tracker](https://github.com/rocknsm/rock/issues) for the project.
8 |
9 | Please be sure to include the contents of `/etc/rocknsm/rocknsm-buildstamp`.
10 |
--------------------------------------------------------------------------------
/docs/reference/tutorials.md:
--------------------------------------------------------------------------------
1 | # Tutorials
2 |
3 | We've been working hard to create clear and relevant training content:
4 |
5 | ## Video Guides
6 |
7 | ### ROCK Introduction
8 |
9 | [ROCK Introduction](https://youtu.be/tcEpI_vpeWc) - what ROCK is and how
10 | everything works together
11 |
12 | ### ROCK@home Series
13 |
14 | [ROCK@home](https://youtu.be/w8h1ft8QTFk) - a 3-part series on the lowest barrier
15 | to entry: tapping your home network
16 |
17 | ### BSidesKC 2018
18 |
19 | [Threat Hunting with RockNSM](https://www.youtube.com/watch?v=-Mp1pUXvKuw) - this
20 | talk by Bradford Dabbs discusses the benefits of a passive-first approach and
21 | how RockNSM can be used to facilitate it.
22 |
--------------------------------------------------------------------------------
/docs/services/docket.md:
--------------------------------------------------------------------------------
1 | # Docket
2 |
3 | Docket is a web UI that makes it easy for analysts to filter mountains of PCAP down to specific chunks in order to find the [baddies](https://v637g.app.goo.gl/qkGzskQTs5goPdBH6).
4 |
5 |
6 | https://[sensorip]/app/docket/
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 | ## Overview
15 |
16 | PCAP is great, but it doesn't scale well. There's so much detail that it can be overwhelming to sort through. A great alternative to "following the TCP stream" through an ocean of packets is to use a tool like Docket that allows for easy filtering on key points such as:
17 |
18 | - timeframe
19 | - hosts
20 | - networks
21 | - ports
22 | - more ...
23 |
24 | The NSM community has needed a solution like Docket for a while, and we're excited to see how it empowers the analysis process.
25 |
26 |
27 | ## Basic Usage
28 |
29 | To access Docket, point your browser to `https://[sensorip]/app/docket/`. Please note the **trailing slash**. (This is because Kibana is served from the same proxy and is greedy with the URL path.)
30 |
31 |
32 | #### Submit Request
33 |
34 |
35 |
36 |
37 |
38 | Once in the UI, simply add your search criteria and click "Submit".
39 |
40 |
41 | #### Get PCAP
42 |
43 |
44 |
45 |
46 |
47 | Once the job is processed, click "Get PCAP" to download the results to your local machine.
48 |
49 |
50 | ## Management
51 |
52 | #### Services
53 |
54 | Docket requires the following services to function:
55 |
56 | - `lighttpd` - serves the TLS web front end
57 | - `stenographer` - the tool that writes and queries PCAP
58 | - `stenographer@` - a capture process for each monitor interface
59 |
60 | Current status can be checked with the following commands:
61 |
62 | $ `sudo systemctl status lighttpd`
63 |
64 | $ `sudo rockctl status`
65 |
66 |
67 | #### Changing Lighttpd Credentials
68 |
69 | For the diligent (paranoid), the credentials that were initially generated at
70 | installation can be changed with the following steps:
71 |
72 | 1. Create a new shell variable, e.g. `USER_NAME=operator`
73 | 2. Append the new user to the lighttpd config file:
74 |     $ `sudo sh -c "echo -n '$USER_NAME:' >> /etc/lighttpd/rock-htpasswd.user"`
75 | 3. Generate a new password for the new user:
76 |     $ `sudo sh -c "openssl passwd -apr1 >> /etc/lighttpd/rock-htpasswd.user"`
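Putting the pieces together, the entry these steps append has the shape `user:$apr1$...`. Below is a minimal sketch of how such an entry is built, using a hypothetical user name and password rather than touching the real `/etc/lighttpd/rock-htpasswd.user`:

```shell
# Build an htpasswd-style entry like the steps above produce.
# The user name and password here are hypothetical examples.
USER_NAME=operator
HASH=$(openssl passwd -apr1 'changeme')   # apr1 (Apache MD5) password hash
ENTRY="$USER_NAME:$HASH"
echo "$ENTRY"
```

On a real sensor you would append this entry to `/etc/lighttpd/rock-htpasswd.user` with `sudo`, as shown in the steps above.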
77 |
78 |
79 | ## Directories
80 |
81 | Here are some important filesystem paths that will be useful for any necessary
82 | troubleshooting efforts:
83 |
84 | ### PCAP Storage
85 |
86 | User requested PCAP jobs are saved in:
87 |
88 | `/var/spool/docket`
89 |
90 | In a multi-user environment this directory can fill up, depending on how much space has been allocated to the `/var` partition. Periodically clear old jobs out of this path to keep the partition from filling and causing issues.
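If you do need to sweep old jobs, a simple age-based cleanup works well. The sketch below demonstrates the approach in a scratch directory standing in for `/var/spool/docket`, and the 7-day retention window is an assumption, not a ROCK default:

```shell
# Demonstrate an age-based sweep in a scratch directory that stands in
# for /var/spool/docket (uses GNU touch for the "10 days ago" stamp).
SPOOL=$(mktemp -d)
touch "$SPOOL/recent-job.pcap"
touch -d '10 days ago' "$SPOOL/old-job.pcap"
# Delete job files older than 7 days (hypothetical retention window)
find "$SPOOL" -type f -mtime +7 -delete
REMAINING=$(ls "$SPOOL" | wc -l)
echo "files left after sweep: $REMAINING"
rm -rf "$SPOOL"
```

On a sensor, the same `find ... -mtime +7 -delete` run against `/var/spool/docket` (e.g. from cron) keeps the partition in check.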
91 |
92 | ### Python Socket
93 |
94 | `/run/docket/`
95 |
--------------------------------------------------------------------------------
/docs/services/elasticsearch.md:
--------------------------------------------------------------------------------
1 | # Elasticsearch
2 |
3 | ## Overview
4 | Elasticsearch is the data storage and retrieval system in RockNSM. Elasticsearch
5 | is an "indexed JSON document store". Unlike other solutions, (network) events are
6 | indexed **once** on initial ingest, after which you can run queries and
7 | aggregations quickly and efficiently.
8 |
9 | ROCK sends all logs preformatted in JSON, complete with human readable timestamps.
10 | This does two things:
11 |
12 | 1. Elasticsearch compression is effectively increased, since there are not two
13 |    copies of the data (raw and JSON).
14 | 1. The preformatted timestamps and JSON log data greatly increase logging
15 |    throughput and reduce the error rate, improving the reliability of the logging infrastructure.
16 |
17 |
18 | ## Management
19 |
20 | ### Service
21 |
22 | Elasticsearch is deployed as a systemd unit, called `elasticsearch.service`.
23 | Normal systemd procedures apply here:
24 |
25 | ```
26 | sudo systemctl start elasticsearch
27 | sudo systemctl status elasticsearch
28 | sudo systemctl stop elasticsearch
29 | sudo systemctl restart elasticsearch
30 | ```
31 |
32 | ### API Access
33 |
34 | Elasticsearch data can be accessed via a [RESTful API](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs.html)
35 | over port 9200. Kibana is the most common way this is done, but this can also be
36 | accomplished with `curl` commands, such as: `curl [sensorip]:9200/_cat/indices`.
37 |
38 |
39 | ## Directories
40 |
41 | * home:
42 | * `/usr/share/elasticsearch`
43 | * data:
44 | * `/data/elasticsearch/`
45 | * application logs:
46 | * `/var/log/elasticsearch/`
47 |
--------------------------------------------------------------------------------
/docs/services/filebeat.md:
--------------------------------------------------------------------------------
1 | # Filebeat
2 |
3 |
4 |
5 |
6 |
7 |
8 | ## Overview
9 |
10 | [Elastic Beats](https://www.elastic.co/products/beats) are lightweight
11 | "data shippers". Filebeat's role in ROCK is to do just this: ship file data to
12 | the next step in the pipeline.
13 |
14 | The following ROCK components depend on Filebeat to send their log files into
15 | the Kafka message queue:
16 |
17 | 1. **Suricata** - writes alerting data into `eve.json`
18 |
19 | 2. **FSF** - writes static file scan results to `rockout.log`
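Conceptually, each of those files is a Filebeat input feeding a Kafka output. The fragment below is a hypothetical sketch only; the paths, topic name, and options are assumptions, and the authoritative config lives in `/etc/filebeat/filebeat.yml` on your sensor:

```yaml
# Hypothetical sketch -- not the shipped ROCK config
filebeat.inputs:
  - type: log
    paths:
      - /data/suricata/eve.json
    json.keys_under_root: true

output.kafka:
  hosts: ["127.0.0.1:9092"]
  topic: "suricata-raw"
```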
20 |
21 |
22 | ## Management
23 |
24 | #### Service
25 |
26 | Filebeat is deployed as a systemd unit, called `filebeat.service`. This service is
27 | configured and enabled on startup. This can be verified with either:
28 |
29 | `$ sudo rockctl status`
30 |
31 | `$ sudo systemctl status filebeat`
32 |
33 |
34 | ## Directories
35 |
36 | The configuration path for Filebeat is found at:
37 |
38 | `/etc/filebeat/filebeat.yml`
39 |
--------------------------------------------------------------------------------
/docs/services/fsf.md:
--------------------------------------------------------------------------------
1 | # FSF
2 | [FSF](https://github.com/EmersonElectricCo/fsf) is included in RockNSM to
3 | provide _**static**_ file analysis on filetypes of interest.
4 |
5 |
6 | ## Overview
7 | FSF works in conjunction with the file extraction framework provided by [Bro](./zeek.md).
8 | Bro can be configured to watch for specific file (MIME) types, as well as to
9 | establish the maximum file size that will be extracted.
10 |
11 | FSF uses a client-server model and can watch for new extracted files in the
12 | `/data/fsf/` partition.
13 |
14 |
15 | ## Management
16 |
17 | ### Services
18 |
19 | FSF is deployed as a systemd unit, called `fsf.service`. Normal systemd procedures
20 | apply here:
21 |
22 | ```
23 | sudo systemctl start fsf
24 | sudo systemctl status fsf
25 | sudo systemctl stop fsf
26 | sudo systemctl restart fsf
27 | ```
28 |
29 | It can also be managed using the `rockctl` command.
30 |
31 | ## Directories
32 |
33 | ### Server
34 |
35 | `/opt/fsf/fsf-server/conf/config.py` - main config file
36 | `/opt/fsf/fsf-server/main.py` - server script
37 |
38 | ### Client
39 |
40 | `/opt/fsf/fsf-client/conf/config.py` - main config file
41 | `/opt/fsf/fsf-client/fsf_client.py` - client binary
42 |
--------------------------------------------------------------------------------
/docs/services/index.md:
--------------------------------------------------------------------------------
1 | # Summary
2 |
3 | ROCK uses a collection of open-source applications as described in this
4 | "Services" section. This portion of the documentation covers the basic
5 | administration of each of the major components of RockNSM.
6 |
7 |
8 | ## Managing All Services
9 |
10 | If you've used Linux operating systems that run systemd, then you'll be familiar
11 | with the `systemctl` command used to manage services. ROCK provides two separate
12 | scripts that provide similar functionality, depending upon your deployment
13 | scenario.
14 |
15 | 1. $ `rockctl` - service management for a **single-node** instance
16 |
17 | 1. $ `rock` - service management for a **multi-node** instance
18 |
19 |
20 | ### rockctl
21 |
22 | The `rockctl` command is a Bash script that calls systemd to act on all
23 | ROCK services. Use this tool when you're connected to a specific sensor and
24 | need to take action on the sensor services as a group. `rockctl` can be used
25 | to perform the following operations:
26 |
27 | ```shell
28 | sudo rockctl status        # Report status for all ROCK services on this host
29 | sudo rockctl stop          # Stop all ROCK services on this host
30 | sudo rockctl start         # Start all ROCK services on this host
31 | sudo rockctl restart       # Restart all ROCK services on this host
32 | sudo rockctl reset-failed  # Reset failed status for all ROCK services on this host
33 |                            # Useful for services like stenographer that start and exit
34 | ```
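Conceptually, `rockctl` just fans the requested action out to each ROCK unit via systemd. The sketch below is a simplified illustration, not the real script; the service list is illustrative, and `systemctl` is stubbed out so the loop can be demonstrated anywhere:

```shell
# Simplified illustration of rockctl's dispatch loop (NOT the real script).
# systemctl is stubbed with a shell function so this runs without a live sensor.
systemctl() { echo "would run: systemctl $*"; }

ACTION=status   # status | start | stop | restart | reset-failed
SERVICES="bro suricata stenographer kafka logstash elasticsearch kibana"
OUT=$(for svc in $SERVICES; do systemctl "$ACTION" "$svc"; done)
echo "$OUT"
```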
35 |
36 |
37 | ### rock
38 |
39 | The `rock` command is a Bash script that uses Ansible to manage services across
40 | all sensors that are part of a multi-node deployment.
41 |
42 | ```shell
43 | [admin@rock ~]$ rock
44 |
45 | Commands:
46 | setup Launch TUI to configure this host for deployment
47 | tui Alias for setup
48 | ssh-config Configure hosts in inventory to use key-based auth (multinode)
49 | deploy Deploy selected ROCK components
50 | deploy-offline Same as deploy --offline (Default ISO behavior)
51 | deploy-online Same as deploy --online
52 | stop Stop all ROCK services
53 | start Start all ROCK services
54 | restart Restart all ROCK services
55 | status Report status for all ROCK services
56 | genconfig Generate default configuration based on current system
57 | destroy Destroy all ROCK data: indexes, logs, PCAP, i.e. EVERYTHING
58 | NOTE: Will not remove any services, just the data
59 |
60 | Options:
61 | --config, -c Specify full path to configuration overrides
62 | --extra, -e Set additional variables as key=value or YAML/JSON passed to ansible-playbook
63 | --help, -h Show this usage information
64 | --inventory, -i Specify path to Ansible inventory file
65 | --limit Specify host to run plays
66 | --list-hosts Outputs a list of matching hosts; does not execute anything else
67 | --list-tags List all available tags
68 | --list-tasks List all tasks that would be executed
69 | --offline, -o Deploy ROCK using only local repos (Default ISO behavior)
70 | --online, -O Deploy ROCK using online repos
71 | --playbook, -p Specify path to Ansible playbook file
72 | --skip-tags Only run plays and tasks whose tags do not match these values
73 | --tags, -t Only run plays and tasks tagged with these values
74 | --verbose, -v Increase verbosity of ansible-playbook
75 | ```
76 |
77 |
78 | ## Managing Single Services
79 |
80 | When you need to take action on a singular service (a la carte), use `systemctl`
81 | as you would on any other systemd-based system.
82 |
83 |
84 | ## Published URLs and Ports
85 |
86 | ROCK uses the `lighttpd` webserver to perform vhost redirects to its web
87 | interfaces. It's configured to listen for (IPv4-only) connections over
88 | port `443` for the following:
89 |
90 | ### URLs
91 |
92 | - [Kibana](kibana.md) is accessible at: `https://[sensorip]/app/kibana`
93 | - [Docket](docket.md) is accessible at: `https://[sensorip]/app/docket/`
94 |
95 |
96 | ### Ports
97 |
98 | * Elasticsearch: `:9200`
99 | * Kibana: `:5601`
100 |
--------------------------------------------------------------------------------
/docs/services/kafka.md:
--------------------------------------------------------------------------------
1 | # Kafka
2 | [Kafka](https://kafka.apache.org/documentation/) is a wicked fast and reliable
3 | message queue.
4 |
5 |
6 | ## Overview
7 | Kafka solves the problem of having multiple data sources sending into the same
8 | pipeline. It acts as a staging area that allows [Logstash](logstash.md) to keep up
9 | with things.
10 |
11 |
12 | ## Management
13 |
14 | #### Services
15 | Kafka is deployed as a systemd unit, called `kafka.service`. Normal systemd
16 | procedures apply here:
17 |
18 | ```
19 | sudo systemctl start kafka
20 | sudo systemctl status kafka
21 | sudo systemctl stop kafka
22 | sudo systemctl restart kafka
23 | ```
24 |
25 | It can also be managed using the `rockctl` command.
26 |
27 |
28 | ## Directories
29 |
30 | `/etc/kafka/server.properties` - primary config file
31 |
--------------------------------------------------------------------------------
/docs/services/kibana.md:
--------------------------------------------------------------------------------
1 | # Kibana
2 |
3 | ## Overview
4 | Kibana is the web interface used to display data stored inside [Elasticsearch](elasticsearch.md).
5 |
6 |
7 | ## Basic Usage
8 |
9 | Open a web browser and visit the following URL: `https://[sensorip]/app/kibana`.
10 |
11 | On first connection, users will be prompted for a username and password. When
12 | the deploy script runs, a random passphrase is generated in the style of [XKCD](https://xkcd.com/936/).
13 |
14 | These credentials are stored in the `KIBANA_CREDS.README` file located in the home directory of the user
15 | created at install, e.g. `/home/admin/KIBANA_CREDS.README`.
16 |
17 | ## Management
18 |
19 | ### Service
20 |
21 | Kibana is deployed as a systemd unit, called `kibana.service`. Normal
22 | systemd procedures apply here:
23 |
24 | ```
25 | sudo systemctl start kibana
26 | sudo systemctl status kibana
27 | sudo systemctl stop kibana
28 | sudo systemctl restart kibana
29 | ```
30 |
31 | ## Directories
32 |
33 | * Home:
34 | `/usr/share/kibana`
35 | * Data:
36 |   Stored in the `.kibana` index in [Elasticsearch](elasticsearch.md)
37 | * Application Logs:
38 | `journalctl -u kibana`
39 |
--------------------------------------------------------------------------------
/docs/services/logstash.md:
--------------------------------------------------------------------------------
1 | # Logstash
2 |
3 | [Logstash](https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html) is the part of the Elastic Stack that performs log filtering and
4 | enrichment.
5 |
6 |
7 | ## Management
8 |
9 | #### Services
10 |
11 | Logstash is deployed as a systemd unit, called `logstash.service`. Normal systemd
12 | procedures apply here:
13 |
14 | ```
15 | sudo systemctl start logstash
16 | sudo systemctl status logstash
17 | sudo systemctl stop logstash
18 | sudo systemctl restart logstash
19 | ```
20 |
21 | It can also be managed using the `rockctl` command.
22 |
23 | ## Directories
24 |
25 | `/etc/logstash/` - main config path
26 |
27 | `/etc/logstash/conf.d` - ROCK specific config
28 |
29 | `/var/lib/logstash` - data path
30 |
--------------------------------------------------------------------------------
/docs/services/stenographer.md:
--------------------------------------------------------------------------------
1 | # Stenographer
2 |
3 | ROCK uses [Stenographer](https://github.com/google/stenographer) for full packet
4 | capture. Among other features, it provides the following advantages over other
5 | solutions:
6 |
7 | * it's fast
8 | * it manages disk space:
9 |   * it will fill its disk partition to 90%
10 |   * then begin overwriting the oldest data
11 |
12 | ## Management
13 |
14 | ### Systemd
15 | Stenographer is deployed as a systemd unit, called `stenographer.service`. Normal
16 | systemd procedures apply here:
17 |
18 | ```shell
19 | sudo systemctl start stenographer
20 | sudo systemctl status stenographer
21 | sudo systemctl stop stenographer
22 | sudo systemctl restart stenographer
23 | ```
24 |
25 | ### Rockctl
26 | It can also be managed using the `rockctl` command.
27 |
28 |
29 | ### Multiple Interfaces
30 |
31 | It's important to note that Stenographer will have a (main) parent process, and
32 | a child process for every interface that it uses to capture packets, e.g.:
33 | ```shell
34 | STENOGRAPHER:
35 | Active: active (exited) since Mon 2019-01-28 22:51:47 UTC; 1 weeks 0 days ago
36 | STENOGRAPHER@EM1:
37 | Active: active (running) since Mon 2019-01-28 22:51:47 UTC; 1 weeks 0 days ago
38 | ```
39 |
40 | In order to restart all Stenographer processes, a wildcard can be used:
41 | `sudo systemctl restart stenographer*`
42 |
43 |
44 | ## Directories
45 |
46 | Stenographer is great at managing its own disk space, but that doesn't cut it
47 | when it's sharing the same mount point as Bro, Suricata, and other tools that
48 | generate data in ROCK.
49 |
50 | Best practice is to create a dedicated `/data/stenographer` partition so that
51 | packet capture cannot starve the other services of disk space.
52 |
--------------------------------------------------------------------------------
/docs/services/suricata.md:
--------------------------------------------------------------------------------
1 | # Suricata
2 |
3 | Intrusion Detection Systems (IDS) are a great way to quickly alert on known-bad
4 | traffic. Alerts are triggered when a packet matches a defined pattern or
5 | _**signature**_.
6 |
7 | [Suricata](https://suricata-ids.org/) is the IDS / Alerting tool of choice for
8 | RockNSM. It provides a lot of features not available in our previous option.
9 | Most importantly, Suricata offers:
10 |
11 | - multi-threading capability
12 | - an active development community
13 | - frequent feature additions & project momentum
14 |
15 |
16 | ## Service Management
17 |
18 | Suricata is deployed as a systemd unit called `suricata.service`. Normal systemd
19 | procedures apply here. It can also be managed using the `rockctl` command using
20 | the same syntax:
21 |
22 | ```
23 | sudo systemctl start suricata
24 | sudo systemctl status suricata
25 | sudo systemctl stop suricata
26 | sudo systemctl restart suricata
27 | ```
28 |
29 | The default ROCK configuration has the Suricata service enabled on startup.
30 |
31 |
32 | ## Notable Files / Directories
33 |
34 | `/etc/suricata/` - main configuration path
35 | `/var/lib/suricata/` - primary rule path
36 |
37 |
38 | ## Updating Rules
39 |
40 | The newest versions of Suricata come with the `suricata-update` command to
41 | manage and update rulesets. The official documentation is found
42 | [here](https://suricata.readthedocs.io/en/suricata-4.1.2/rule-management/suricata-update.html).
43 |
44 |
45 | ### Enabling Feeds
46 |
47 | Suricata Update is a Python module and is automatically bundled with Suricata
48 | starting with version 4.1. While it does have documentation, it's helpful to
49 | have a practical example. One of the awesome features of Suricata Update is that
50 | it comes with a pre-configured list of signature feeds out of the box, both
51 | free and paid. This makes it very simple to enable paid feeds.
52 |
53 | To view the list of available feeds, login to your RockNSM system and run:
54 |
55 | $ `sudo suricata-update list-sources`
56 |
57 | This will return something similar to the following:
58 |
59 | ```yaml
60 | Name: oisf/trafficid
61 | Vendor: OISF
62 | Summary: Suricata Traffic ID ruleset
63 | License: MIT
64 | Name: et/open
65 | Vendor: Proofpoint
66 | Summary: Emerging Threats Open Ruleset
67 | License: MIT
68 | Name: scwx/security
69 | Vendor: Secureworks
70 | Summary: Secureworks suricata-security ruleset.
71 | License: Commercial
72 | Parameters: secret-code
73 | Subscription: https://www.secureworks.com/contact/ (Please reference CTU Countermeasures)
74 | Name: scwx/malware
75 | Vendor: Secureworks
76 | Summary: Secureworks suricata-malware ruleset.
77 | License: Commercial
78 | Parameters: secret-code
79 | Subscription: https://www.secureworks.com/contact/ (Please reference CTU Countermeasures)
80 | Name: et/pro
81 | Vendor: Proofpoint
82 | Summary: Emerging Threats Pro Ruleset
83 | License: Commercial
84 | Replaces: et/open
85 | Parameters: secret-code
86 | Subscription: https://www.proofpoint.com/us/threat-insight/et-pro-ruleset
87 | Name: ptresearch/attackdetection
88 | Vendor: Positive Technologies
89 | Summary: Positive Technologies Attack Detection Team ruleset
90 | License: Custom
91 | Name: sslbl/ssl-fp-blacklist
92 | Vendor: Abuse.ch
93 | Summary: Abuse.ch SSL Blacklist
94 | License: Non-Commercial
95 | Name: tgreen/hunting
96 | Vendor: tgreen
97 | Summary: Heuristic ruleset for hunting. Focus on anomaly detection and showcasing latest engine features, not performance.
98 | License: GPLv3
99 | Name: etnetera/aggressive
100 | Vendor: Etnetera a.s.
101 | Summary: Etnetera aggressive IP blacklist
102 | License: MIT
103 | ```
104 |
105 | Without any additional configuration, suricata-update will automatically pull in the et/open ruleset. You can disable this ruleset if you desire. Now, if you are a subscriber to et/pro or another included ruleset that requires an access code (sometimes referred to as an “oinkcode” in Snort parlance), you can pass that on the command line or suricata-update will prompt you.
106 |
107 | `suricata-update enable-source et/pro secret-code=xxxxxxxxxxxxxxxx`
108 |
109 | ### Manipulating Individual Rules
110 |
111 | Often we want to turn off specific rules: maybe they're too noisy for our network, or corporate policy isn't concerned with browser extensions on BYOD systems. Again, `suricata-update` makes our life easy on our RockNSM sensors.
112 |
113 | ```shell
114 | # Elevate to a root shell and go to Suricata dir
115 | sudo -s
116 | cd /etc/suricata
117 |
118 | # Generate default suricata-update configs
119 | suricata-update --dump-sample-configs
120 | ```
121 |
122 | This command will generate six default files:
123 |
124 | 1. `update.yaml` - the suricata-update config file
125 | 1. `enable.conf` - config to enable rules that are usually disabled
126 | 1. `disable.conf` - config to disable rules that are usually enabled
127 | 1. `modify.conf` - use regex to modify rules
128 | 1. `drop.conf` - change rules from alert to drop, not used in RockNSM
129 | 1. `threshold.in` - set thresholds to limit too-frequent firing of given alerts
130 |
131 | To disable the noisy rule above, we just need to specify its signature ID (i.e. `alert.signature_id`). Open `disable.conf` and add the following line:
132 |
133 | ```
134 | # Disable invalid timestamp rule (sid: 2210044)
135 | 2210044
136 | ```
137 |
138 | We could alternatively specify the rule using regular expressions:
139 |
140 | ```
141 | # Disable all SURICATA STREAM alert rules
142 | re:^alert.*SURICATA STREAM
143 | ```
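To get a feel for what that `re:` pattern will match before running it for real, you can roughly simulate the match-and-comment-out behavior against a couple of sample rule lines in a scratch file (the rules below are just examples):

```shell
# Rough simulation of disabling rules by regex (sample rules, scratch file).
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
alert tcp any any -> any any (msg:"SURICATA STREAM Packet with invalid timestamp"; sid:2210044;)
alert dns any any -> any any (msg:"Some unrelated rule"; sid:9999999;)
EOF
# Comment out every rule matching the pattern, as suricata-update would
sed -i 's/^alert.*SURICATA STREAM.*/# &/' "$RULES"
N=$(grep -c '^# alert' "$RULES")
echo "rules disabled: $N"
rm -f "$RULES"
```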
144 |
145 | Next, just run `suricata-update`. Note that you want to ensure `suricata-update` runs as the same user as the `suricata` service; on RockNSM, this is the `suricata` system user. This should really only be necessary the first time `suricata-update` is run, to ensure that Suricata can read the rules when it starts. Near the end of the run, you'll see a summary of how many rules were loaded, disabled, enabled, modified, and dropped, along with some other stats.
146 |
147 | ```
148 | sudo -u suricata -g suricata suricata-update
149 | ...
150 | 27/2/2019 -- 00:08:39 - -- Loaded 29733 rules.
151 | 27/2/2019 -- 00:08:40 - -- Disabled 68 rules.
152 | 27/2/2019 -- 00:08:40 - -- Enabled 0 rules.
153 | 27/2/2019 -- 00:08:40 - -- Modified 0 rules.
154 | 27/2/2019 -- 00:08:40 - -- Dropped 0 rules.
155 | 27/2/2019 -- 00:08:41 - -- Enabled 188 rules for flowbit dependencies.
156 | 27/2/2019 -- 00:08:41 - -- Backing up current rules.
157 | 27/2/2019 -- 00:08:45 - -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 29733; enabled: 22272; added: 4; removed 19; modified: 1215
158 | ```
159 |
160 | You can sanity-check that it worked by grepping the output for the signature you disabled. You could also search using the same regex as before! If you want to match the regex pattern, be sure to search for a line starting with a `#` followed by a single space, as this is how the rule is commented out. If the disable configuration worked, you'll see the rule, but it will be commented out.
161 |
162 | ```bash
163 | sudo grep 2210044 /var/lib/suricata/rules/suricata.rules
164 | # alert tcp any any -> any any (msg:"SURICATA STREAM Packet with invalid timestamp"; stream-event:pkt_invalid_timestamp; classtype:protocol-command-decode; sid:2210044; rev:2;)
165 | ```
166 |
167 | To check for the regex, you could do this:
168 |
169 | ```
170 | sudo grep '^# alert.*SURICATA STREAM' /var/lib/suricata/rules/suricata.rules
171 | ...
172 | (a whole bunch of rules match this)
173 | ```
175 |
176 | ### Local Rule Management
177 |
178 | Suricata Update lets you manage local rules using the same process above. In the `update.yaml` it defaults to loading all rules in the `/etc/suricata/rules` directory. You could add some local site-specific directory, as well. Suricata Update will parse each of these rules and apply the same operations that you configured, as detailed above.
179 |
180 | ### Automating It
181 |
182 | RockNSM will automatically run your `suricata-update` process once per day. This is done via crond, using `/etc/cron.d/rocknsm_suricata-update`, every day at noon UTC (which is the default and recommended RockNSM sensor time zone).
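For reference, a crond entry that runs a command daily at noon as the `suricata` user takes roughly this shape (a sketch only; check the installed `/etc/cron.d/rocknsm_suricata-update` for the authoritative line):

```
# minute hour day-of-month month day-of-week user command
0 12 * * * suricata /usr/bin/suricata-update
```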
183 |
--------------------------------------------------------------------------------
/docs/services/zeek.md:
--------------------------------------------------------------------------------
1 | # Zeek
2 |
3 | > Note: While "Zeek" is the new name of the project, directories,
4 | service files, and binaries still (for now) retain the "bro" name.
5 |
6 | ## Overview
7 | Zeek (the artist formerly known as Bro) is used to provide network protocol analysis within ROCK. It is extremely
8 | customizable, and you are encouraged to take advantage of this.
9 |
10 | When deploying custom Zeek scripts, please be sure to store them under a
11 | subdirectory of `/usr/share/bro/site/scripts/`. We can't guarantee that your
12 | customizations won't be overwritten by Ansible if you don't follow this pattern.
13 |
14 | ## Management
15 |
16 | ### Service
17 | Zeek is deployed as a systemd unit, called `bro.service`. Normal systemd procedures
18 | apply here:
19 |
20 | ```
21 | sudo systemctl start bro
22 | sudo systemctl status bro
23 | sudo systemctl stop bro
24 | sudo systemctl restart bro
25 | ```
26 |
27 | The `broctl` command is now an alias. Using this alias prevents dangerous
28 | permission changes caused by running the real broctl binary with sudo. Otherwise,
29 | the only safe way to run `broctl` is to execute it as the `bro` user and `bro`
30 | group, like so:
31 |
32 | `sudo -u bro -g bro /usr/bin/broctl`
33 |
34 | ## Directories
35 |
36 | * Home
37 | * `/usr/share/bro/`
38 | * Data
39 | * `/data/bro/logs/current/{stream_name.log}`
40 | * Application Logs
41 | * `/data/bro/logs/current/{stdout.log, stderr.log}`
42 |
43 |
44 | **Note:** By default, Zeek will write ASCII logs to the data path above AND write
45 | JSON directly to Kafka. In general, you will be accessing the Bro data from
46 | [Elasticsearch](elasticsearch.md) via [Kibana](kibana.md).
47 |
--------------------------------------------------------------------------------
/docs/usage/Tips-and-Tricks.md:
--------------------------------------------------------------------------------
1 | # Tips and Tricks
2 | ### Tip #1
3 | Need to ingest historic PCAP, but want to tag it for a collection location?
4 | Use the following command:
5 | ```
6 | zeek -C -r PCAPFILE.pcap local "ROCK::sensor_id=PCAPCOLLECTIONPOINT"
7 | ```
8 | This will update the `observer.hostname` field in your Kibana instance!
9 |
--------------------------------------------------------------------------------
/docs/usage/index.md:
--------------------------------------------------------------------------------
1 | # Basic Usage
2 |
3 | ## Key Interfaces
4 |
5 | ### Kibana - `https://localhost`
6 |
7 | ---
8 | :warning: We are aware of an issue with macOS Catalina and the current version of the Chrome browser that prevents Chrome from accepting self-signed TLS certificates. We are investigating and will update this page when we have a fix. This does not affect Safari, Firefox, or other operating systems.
9 |
10 | As a workaround, you can [manually add and Always Trust](https://support.apple.com/guide/keychain-access/change-the-trust-settings-of-a-certificate-kyca11871/mac) the RockNSM TLS certificate to your macOS keychain via Keychain Access and restart Chrome.
11 |
12 | ---
13 |
14 | The generated credentials are in the home directory of the user created at install:
15 |
16 | `~/KIBANA_CREDS.README`
17 |
18 | ### Docket - `https://localhost/app/docket/`
19 |
20 | Docket - web interface for pulling PCAP from the sensor (must be enabled in config)
21 |
22 | > localhost **or** IP of the management interface of the box
23 |
24 | ## Update Suricata
25 | Updating the IDS rules is paramount.
26 |
27 | We'll use `suricata-update`, a tool bundled with Suricata that takes multiple rule files and merges them into a single ruleset stored at `/var/lib/suricata/rules/suricata.rules`.
28 |
29 | ### Online Sensor
30 | ```
31 | sudo -u suricata suricata-update
32 | ```
33 |
34 | ### Offline Sensor
35 | Since the sensor is offline, we can't use `suricata-update` to download the rules for us, so we'll download the most recent Emerging Threats rules and update locally.
36 |
37 | From a system that has Internet access:
38 | ```
39 | curl -OL https://rules.emergingthreats.net/open/suricata/emerging.rules.tar.gz
40 | scp emerging.rules.tar.gz user@sensorIP:
41 | ```
42 | Now connect to the sensor and update locally.
43 | ```
44 | ssh user@sensorIP
45 | tar zxf emerging.rules.tar.gz
46 | sudo suricata-update --local rules/
47 | rm -r rules emerging.rules.tar.gz
48 | ```
49 |
50 | ## Function Checks
51 |
52 |
53 | ### Cluster Health
54 | Check to see that the ES cluster says it's green:
55 | ```
56 | curl -s localhost:9200/_cluster/health?pretty
57 | ```
58 |
59 | ### Document Check
60 | See how many documents are in the indexes. The count should be non-zero:
61 | ```
62 | curl -s localhost:9200/_all/_count?pretty
63 | ```
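
If you just want the bare number, the `count` field can be pulled out with `sed`. The response below is a hypothetical sample; on a live sensor, pipe the `curl` output instead of the canned string:

```
# Hypothetical _count response; counts will differ on your sensor
response='{
  "count" : 184730,
  "_shards" : { "total" : 5, "successful" : 5, "skipped" : 0, "failed" : 0 }
}'
echo "$response" | sed -n 's/.*"count" *: *\([0-9][0-9]*\).*/\1/p'
```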
64 |
65 | ### Testing with PCAP
66 | You can fire some traffic across the sensor at this point to see if it's
67 | collecting. This requires that you upload your own test PCAP to the box. PCAP is
68 | typically huge, so if you don't have any just lying around, here's a quick test:
69 |
70 | - Download a small test file from the folks who brought us `tcpreplay`
71 | [here](http://tcpreplay.appneta.com/wiki/captures.html):
72 | ```
73 | curl -LO https://s3.amazonaws.com/tcpreplay-pcap-files/smallFlows.pcap
74 | ```
75 | - Replay the PCAP file across your _monitor interface_:
76 | ```
77 | sudo tcpreplay -i [your-monitor-interface] /path/to/smallFlows.pcap
78 | ```
79 |
80 | - After a few moments, the document count should go up. This can again be
81 | validated with:
82 | ```
83 | curl -s localhost:9200/_all/_count?pretty
84 | ```
85 | - You should have plain-text Zeek logs showing up in `/data/bro/logs/current/`:
86 | ```
87 | ls -ltr /data/bro/logs/current/
88 | ```
89 |
90 |
91 | ## Rockctl
92 |
93 | The basic service management functions are accomplished with:
94 |
95 | `sudo rockctl status` - get the status of ROCK services
96 |
101 | `sudo rockctl start` - start ROCK services
102 |
107 | `sudo rockctl stop` - stop ROCK services
108 |
113 | `sudo rockctl reset-failed` - clear the failed states of services
114 |
--------------------------------------------------------------------------------
/docs/usage/support.md:
--------------------------------------------------------------------------------
1 | # Support Guide
2 |
3 | This section aims to smooth out the most frequent issues new users will run into.
4 |
5 | ### Suricata Service Fails to Start
6 | This can occur after successful installation of the ROCK sensor.
7 |
8 | To identify if this is an issue, run `sudo rockctl status` and you'll see
9 | ```
10 | SURICATA:
11 | Active: failed (Result: exit-code) since Mon 2020-01-06 01:00:57 UTC; 40min ago
12 | ```
13 | To validate, run `sudo journalctl -u suricata` and look for the `MemoryDenyWriteExecute` entry, meaning Suricata is trying to use more RAM than is available. We need to tamp 'er down.
14 | ```
15 | sudo journalctl -u suricata
16 | -- Logs begin at Sun 2020-01-05 22:14:04 UTC, end at Mon 2020-01-06 01:42:58 UTC. --
17 | Jan 05 22:57:54 rock-1.rock.lan systemd[1]: [/usr/lib/systemd/system/suricata.service:17] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
18 | Jan 05 22:57:54 rock-1.rock.lan systemd[1]: [/usr/lib/systemd/system/suricata.service:18] Unknown lvalue 'LockPersonality' in section 'Service'
19 | Jan 05 22:57:54 rock-1.rock.lan systemd[1]: [/usr/lib/systemd/system/suricata.service:19] Unknown lvalue 'ProtectControlGroups' in section 'Service'
20 | Jan 05 22:57:54 rock-1.rock.lan systemd[1]: [/usr/lib/systemd/system/suricata.service:20] Unknown lvalue 'ProtectKernelModules' in section 'Service'
21 | Jan 05 22:58:05 rock-1.rock.lan systemd[1]: Started Suricata Intrusion Detection Service.
22 | ```
23 | To fix this issue, identify the non-management interface that is connected by using `ip link` and looking for the interface that has `BROADCAST` and `state UP`. In this example, `ens192f1` is `UP` (`ens193` is the management interface).
24 | ```
25 | ip link
26 | 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
27 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
28 | 2: ens193: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
29 | link/ether 00:50:56:9f:c1:4e brd ff:ff:ff:ff:ff:ff
30 | 3: ens192f0: mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
31 | link/ether f8:f2:1e:34:0f:40 brd ff:ff:ff:ff:ff:ff
32 | 4: ens192f1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
33 | link/ether f8:f2:1e:34:0f:41 brd ff:ff:ff:ff:ff:ff
34 | 5: ens256f0: mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
35 | link/ether f8:f2:1e:34:0a:80 brd ff:ff:ff:ff:ff:ff
36 | 6: ens256f1: mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
37 | link/ether f8:f2:1e:34:0a:81 brd ff:ff:ff:ff:ff:ff
38 | ```
39 | Edit the overrides file with `sudo vi /etc/suricata/rocknsm-overrides.yaml`: comment out everything under the `af-packet` heading except the connected non-management interface, and change `threads` from `auto` to a fixed value (`12` in this example; pick a number suited to your environment). Example:
40 | ```
41 | af-packet:
42 | # - interface: ens192f0
43 | # #threads: auto
44 | # cluster-id: 99
45 | # cluster-type: cluster_flow
46 | # defrag: true
47 | # use-mmap: true
48 | # mmap-locked: true
49 | # #rollover: true
50 | # tpacket-v3: true
51 | # use-emergency-flush: true
52 | - interface: ens192f1
53 | threads: 12
54 | cluster-id: 98
55 | cluster-type: cluster_flow
56 | defrag: true
57 | use-mmap: true
58 | mmap-locked: true
59 | #rollover: true
60 | tpacket-v3: true
61 | use-emergency-flush: true
62 | # - interface: ens256f1
63 | # #threads: auto
64 | # cluster-id: 97
65 | # cluster-type: cluster_flow
66 | # defrag: true
67 | # use-mmap: true
68 | # mmap-locked: true
69 | # #rollover: true
70 | # tpacket-v3: true
71 | # use-emergency-flush: true
72 | # - interface: ens256f0
73 | # #threads: auto
74 | # cluster-id: 96
75 | # cluster-type: cluster_flow
76 | # defrag: true
77 | # use-mmap: true
78 | # mmap-locked: true
79 | # #rollover: true
80 | # tpacket-v3: true
81 | # use-emergency-flush: true
82 | ```
83 | Afterwards, restart the ROCK services and run `sudo rockctl status` to verify everything is started:
84 | ```
85 | SURICATA:
86 | Active: active (running) since Mon 2020-01-06 01:55:37 UTC; 1min 28s ago
87 | ```
88 |
89 | ## Autodetect Assumptions
90 |
91 | When writing the scripts to generate default values, we had to make some
92 | assumptions. The defaults are generated according to these assumptions and
93 | should generally work if your sensor aligns with them, though they may need
94 | some love for higher performance. If you cannot meet these assumptions, look
95 | at the indicated configuration variables in `/etc/rocknsm/config.yml` for
96 | workaround approaches (with some impact on performance).
98 |
99 | > TIP: We assume that any interface that does not have a default route will be used for collection. Each sensor application will be configured accordingly.
100 |
101 | **WARNING**: This has so far been the number one problem with a fresh install
102 | for beta testers! Check your interface configuration!
103 |
104 | * Two Network Interfaces:
105 | * a management interface with a default route
106 | * an interface without a default route (defined by `rock_monifs`)
107 |
108 | * You have mounted your largest storage volume(s) under `/data/`
109 | (defined by `rock_data_dir`)
110 |
111 | * Your hostname (FQDN) is defined in the `playbooks/inventory/all-in-one.ini` file
112 |
113 | * You allow management via SSH from any network (defined by `rock_mgmt_nets`)
114 |
115 | * You wish to use Zeek, Suricata, Stenographer (disabled by default) and the
116 | whole data pipeline. (See `with_*` options)
117 |
118 | * If installed via ISO, you will perform an offline install, else we assume
119 | online (defined by `rock_online_install`)
120 |
121 | * Zeek will use half of your CPU resources, up to 8 CPUs
122 |
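
The default-route assumption above can be sanity-checked from the shell. This sketch parses a sample `ip route` line (the route and the `ens193` interface name are examples; on a sensor, substitute the real output of `ip route show default`):

```
# Sample line as printed by `ip route show default`
routes='default via 10.0.0.1 dev ens193 proto static metric 100'

# The interface name follows "dev" (field 5)
default_if=$(echo "$routes" | awk '{print $5; exit}')
echo "management interface: $default_if"
# Any other connected interface (except lo) will be treated as a collection interface.
```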
123 | We will continue to add more support information as the userbase grows.
124 |
125 | ## Deployment Script
126 | If you find the deployment is failing, the script can be run with very verbose
127 | output. This example will write the output to a file for review:
128 |
129 | `DEBUG=1 ./deploy_rock.sh | tee /tmp/deploy_rock.log`
130 |
131 |
132 | ## Log Timestamps
133 |
134 | UTC is generally preferred for logging data, as timestamps from anywhere in the world will then have a proper order without calculating offsets. That said, Kibana will present the Zeek logs according to your timezone (as set in the browser). The logs themselves (i.e. in /data/bro/logs/) use epoch time and will be written in UTC regardless of the system timezone.
135 |
136 | Zeek includes a utility for parsing these on the command line called `bro-cut`. It can be used to print human-readable timestamps in either the local sensor timezone or UTC. You can also give it a custom format string to specify what you'd like displayed.
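
If `bro-cut` isn't handy, GNU `date` can render an epoch timestamp directly; the timestamp below is an arbitrary example:

```
# Render an epoch timestamp (as found in /data/bro/logs/) in UTC
ts=1578270057
date -u -d "@${ts}" '+%Y-%m-%d %H:%M:%S UTC'
# prints: 2020-01-06 00:20:57 UTC
```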
137 |
--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------
1 | # Project information
2 | site_name: 'RockNSM'
3 | site_description: 'Official ROCK Documentation'
4 | site_author: 'RockNSM'
5 | #site_url: 'https://rocknsm.github.io/rock-docs/'
6 |
7 | # Repository
8 | repo_name: 'rocknsm/rock-docs'
9 | repo_url: 'https://github.com/rocknsm/rock-docs'
10 |
11 | # Copyright
12 | copyright: 'Copyright © 2019 RockNSM'
13 |
14 | # Configuration
15 | theme:
16 | name: 'material'
17 | language: 'en'
18 | # feature:
19 | # tabs: true
20 | palette:
21 | primary: 'Blue Grey'
22 | accent: 'indigo'
23 | font:
24 | text: 'Roboto'
25 | code: 'Roboto Mono'
26 | logo: 'img/rock.png'
27 | favicon: 'img/favicon.ico'
28 |
29 |
30 | # Customization
31 | extra:
32 | #manifest: 'manifest.webmanifest'
33 | social:
34 | - type: 'github'
35 | link: 'https://github.com/rocknsm'
36 | - type: 'twitter'
37 | link: 'https://twitter.com/rocknsm'
38 |
39 | # Google Analytics
40 | #google_analytics:
41 | # - '#'
42 | # - 'auto'
43 |
44 | # Extensions
45 | markdown_extensions:
46 | - admonition
47 | - codehilite:
48 | guess_lang: false
49 | - toc:
50 | permalink: true
51 |
52 | # Page tree
53 | nav:
54 | - Welcome: index.md
55 | - About:
56 | - about/what_is_it.md
57 | - about/backstory.md
58 | - about/dataflow.md
59 | - Install:
60 | - Requirements: install/requirements.md
61 | - Media: install/media.md
62 | - Installation: install/install.md
63 | #- Example VM Install: vm_install.md
64 | - Configure:
65 | - Rock Manager: configure/rock-manager.md
66 | - Setup TUI: configure/setup-tui.md
67 | - Config Reference: configure/reference.md
68 | - Deploy:
69 | - Terminology: deploy/terminology.md
70 | - Single Node: deploy/single-node.md
71 | - Multi Node: deploy/multi-node.md
72 | - Initial Access: deploy/initial-access.md
73 | - Usage:
74 | - Basic Operation: usage/index.md
75 | - Support: usage/support.md
76 | - Services:
77 | - Overview: services/index.md
78 | - Zeek: services/zeek.md
79 | - Stenographer: services/stenographer.md
80 | - Suricata: services/suricata.md
81 | - FSF: services/fsf.md
82 | - Filebeat: services/filebeat.md
83 | - Kafka: services/kafka.md
84 | - Logstash: services/logstash.md
85 | - Elasticsearch: services/elasticsearch.md
86 | - Kibana: services/kibana.md
87 | - Docket: services/docket.md
88 | - Reference:
89 | - Latest Release: reference/latest.md
90 | - Tutorials and Videos: reference/tutorials.md
91 | - Changelog: reference/changelog.md
92 | - License: reference/license.md
93 | - Contribution: reference/contribution.md
94 | - Commercial Support: reference/support.md
95 |
--------------------------------------------------------------------------------
/operate/index.md:
--------------------------------------------------------------------------------
1 | # ROCK Operation
2 |
3 | ## Function Checks
4 |
5 | To verify that you're collecting on the right interface:
6 | ```
7 | less /etc/suricata/rocknsm-overrides.yaml
8 | ...
9 | af-packet:
10 | - interface:
11 | ...
12 | ```
13 |
14 | After the initial build, the ES cluster will be yellow because the marvel index thinks it's missing a replica. Run the following to fix the issue (this job also runs from cron just after midnight every day):
15 |
16 | - `/usr/local/bin/es_cleanup.sh 2>&1 > /dev/null`
17 |
18 | Check to see that the ES cluster says it's green:
19 |
20 | - `curl -s localhost:9200/_cluster/health | jq '.'`
21 |
22 | See how many documents are in the indexes. The count should be non-zero:
23 |
24 | - `curl -s localhost:9200/_all/_count | jq '.'`
25 |
26 | You can fire some traffic across the sensor at this point to see if it's collecting. NOTE: This requires that you upload your own test PCAP to the box.
27 |
28 | - `sudo tcpreplay -i [your monitor interface] /path/to/a/test.pcap`
29 |
30 | After replaying some traffic, or just waiting a bit, the count should be going up.
31 |
32 | - `curl -s localhost:9200/_all/_count | jq '.'`
33 |
34 | You should have plain text bro logs showing up in /data/bro/logs/current/:
35 |
36 | - `ls -ltr /data/bro/logs/current/`
37 |
38 |
39 | ## Start / Stop / Status
40 |
41 | @todo Modify the `rock_*` tasks to be `rockctl {start|stop|status}`
42 | They're still there, for now, but `rockctl` is the "One True Path":tm:.
43 |
44 |
45 | These functions are accomplished with `rock_stop`, `rock_start`, and `rock_status`.
46 |
47 | > NOTE: these may need to be prefaced with /usr/local/bin/ depending on your _$PATH_.
48 |
49 | * `sudo rock_start`
55 | * `sudo rock_status`
61 | * `sudo rock_stop`
67 |
68 | ### Configuring Bro
69 | `/etc/bro/networks.cfg` is where you will verify the correct networks are listed for Bro. If you are monitoring networks not listed, or would like to carve them up differently, you can do that here as well.
70 | ```
71 | sudo vi /etc/bro/networks.cfg
72 | #LOCAL NETS
73 | 10.0.0.0/8 RFC1918
74 | 172.16.0.0/12 RFC1918
75 | 192.168.0.0/16 RFC1918
76 |
77 | ##########
78 | ## ROCK ##
79 | ##########
80 | # Add networks for the networks you are monitoring into this file if they're not RFC1918
81 | ##########
82 | ```
83 |
84 | ### Configuring Suricata
85 | `/etc/suricata/suricata.yml` is where you will verify the correct networks are listed for Suricata. You'll do this on the `HOME_NET` line.
86 | ```
87 | sudo vi /etc/suricata/suricata.yml
88 | ...
89 | HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
90 | ...
91 | ```
92 | Likely, you're going to want to make sure that Bro and Suricata are looking at the same networks, so if you made a change when configuring Bro (`/etc/bro/networks.cfg`), you'll want to ensure that you mirror those changes here.
93 |
94 | ### Configuring Stenographer
95 | `/etc/stenographer/config` is where you'll configure Stenographer for packet capture. Likely, you'll need to update this with `/etc/stenographer/config.`:
96 | ```
97 | cat /etc/stenographer/config
98 | # if this is not the correct interface, you can simply update it
99 | sudo cp /etc/stenographer/config. /etc/stenographer/config
100 | ```
101 |
102 | ### Key web interfaces
103 |
104 | https://localhost - Kibana web interface - After deploy, the generated credentials are in the home directory of the user created at install, as `KIBANA_CREDS.README`
105 | https://localhost:8443 - Docket - (If enabled) The web interface for pulling PCAP from the sensor
106 |
107 | > localhost = IP of the management interface of the box
108 |
109 | ### Log Timestamps
110 |
111 | UTC is generally preferred for logging data as the timestamps from anywhere in the world will have a proper order without calculating offsets. That said, Kibana will present the bro logs according to your timezone (as set in the browser). The bro logs themselves (i.e. in /data/bro/logs/) log in epoch time and will be written in UTC regardless of the system timezone.
112 |
113 | Bro includes a utility for parsing these on the command line called `bro-cut`. It can be used to print human-readable timestamps in either the local sensor timezone or UTC. You can also give it a custom format string to specify what you'd like displayed.
114 |
--------------------------------------------------------------------------------