├── LICENSE ├── README.md ├── configs ├── auditd.rules ├── filebeat.yml ├── osquery.conf └── packetbeat.yml └── images ├── demoagents.png ├── dssdash.png ├── managerui.png ├── samplearch.png ├── visresult.png ├── visresult1.png ├── wazuhalertindexdefine.png ├── wazuhdone.png └── wazuhsamplesearch.png /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # SIAC 2 | 3 | SIAC is a SIEM In A Can. It's pronounced like "sigh-ack." SIAC can run in the cloud, on bare metal, or a hybrid environment. 4 | 5 | ## Background 6 | 7 | As the name implies, SIAC is a SIEM. The purpose of this project is not to provide an off-the-shelf security monitoring and alerting solution, but rather to demonstrate how organizations and individuals can use free and open-source tools to build out modern information security capabilities. SIAC is capable of scaling to N nodes and handling tens of thousands of events per second (EPS). This work is based on CityBase's security engineering R&D. 8 | 9 | The SIAC project documentation has been released for a few reasons: 10 | 11 | * More and more organizations are eager to build out their own toolchain, but aren't sure where to start. We hope that this documentation can change that. 
12 | 13 | * Security budget is a scarce commodity and defenders are often being asked to implement enterprise solutions without an enterprise budget. 14 | 15 | * Sharing security knowledge is good, and makes our industry better. 16 | 17 | ## Disclaimers 18 | 19 | These are very important and contain information required to operate SIAC securely in production. 20 | 21 | * This project presents a **dramatically** scaled down version of a SIEM and it has not been subjected to any kind of performance testing. 22 | 23 | * This example stack does not implement any encryption for data in transit. Certificate and key management policies can vary greatly between organizations and their environments. When implementing any or all of this stack, it is your responsibility to implement encryption in a way that is congruent with the security policies of your organization. All components have support for network-level encryption. Specific to elasticsearch, please investigate options such as [X-pack](https://www.elastic.co/products/x-pack), [Search Guard](https://github.com/floragunncom/search-guard), and Nginx as a reverse proxy. 24 | 25 | * This example stack does not implement any authentication. The policies and procedures for managing secrets can vary greatly between organizations and their environments. When implementing any or all of this stack, it is your responsibility to implement authentication in a way that is congruent with the security policies of your organization. All components have support for client/server authentication and there are also [plugins](https://github.com/floragunncom/search-guard) that can help, but to keep it simple, we don't implement any of these in the documentation. 26 | 27 | * For the sake of simplicity, all server-side components live on one machine. All documented components support a distributed and clustered architecture. 
When implementing any or all of this stack, it is important to determine how these components are broken out, secured, and scaled for your organization. 28 | 29 | * All configuration files represent the bare minimum requirements for getting services up and running, and client components shipping event data. Please consult the full reference configuration files and documentation, where applicable. 30 | 31 | ## Design 32 | 33 | Before digging into the rest of the documentation and standing up a SIAC, it might be helpful to understand what this project does and what drove certain design choices. 34 | 35 | * We wanted it to have as little custom code as possible and to work with automation tools such as Salt and Terraform. This speeds up deployment, disaster recovery, and provisioning, which are usually bottlenecks in traditional SIEM architecture. 36 | 37 | * It had to support modern Linux operating systems, and run in the cloud. Traditional SIEMs don't do modern or cloud very well. 38 | 39 | * It needed to help us maintain PCI compliance, and provide a good actionable view of data for our auditors which mapped directly to certain controls outlined in the PCI-DSS. This should help any organization cruise through their ROC and evidence collection. 40 | 41 | * Horizontal scalability. Searching and indexing need to be fast. Adding speed and capacity should be as simple as N+1. 42 | 43 | * Modular architecture. There are always new tools in the security space and we wanted to be able to add and remove components without too much complexity. 44 | 45 | * Security and event data correlation should be transparent. Black boxes are old and busted. This should be hot and new. 
46 | 47 | ### Capability Overview 48 | 49 | According to [Wikipedia](https://en.wikipedia.org/wiki/Security_information_and_event_management#Capabilities/Components), there are 7 key capabilities a SIEM should implement: 50 | 51 | * Data aggregation 52 | * Correlation 53 | * Alerting 54 | * Dashboards 55 | * Compliance 56 | * Retention 57 | * Forensic analysis 58 | 59 | SIAC does all of these. 60 | 61 | #### PCI Compliance 62 | 63 | A lot of the dashboarding functionality we'll be looking at is backed by the [Wazuh Kibana](https://github.com/wazuh/wazuh-kibana-app) app. 64 | 65 | As mentioned earlier, one of the core requirements for our stack was functionality that would support us in maintaining our PCI compliance, and communicating this information to our auditors. The fact that Wazuh maps rules/alerts to specific sections of the PCI-DSS, and provides a PCI-specific dashboard has helped immensely. Please refer to the annotated images for additional context. Please see the [Wazuh documentation relating to PCI compliance](https://documentation.wazuh.com/current/pci-dss/index.html) for additional details. 66 | 67 | **PCI Dashboard** 68 | ![PCI Dashboard 1](/images/dssdash.png) 69 | 70 | **PCI Dashboard Continued** 71 | ![PCI Dashboard 2](/images/demoagents.png) 72 | 73 | [Wazuh](https://wazuh.com/) is a fork of the very popular OSSEC software package which provides a lot of additional functionality such as agent management/registration, centralized configuration management, file integrity monitoring, and host-based intrusion detection capabilities. Similar to the PCI dashboards above, the Wazuh Kibana app also provides ready-to-use visualizations for [FIM](https://documentation.wazuh.com/3.x/user-manual/capabilities/file-integrity/index.html), HIDS, [CIS](https://documentation.wazuh.com/3.x/user-manual/capabilities/policy-monitoring/ciscat/ciscat.html) benchmarks, and much more. 
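Under the hood, the FIM capability is driven by the `<syscheck>` block in the Wazuh agent's `ossec.conf`. As a rough illustration of what that configuration looks like (the paths and interval here are generic examples, not this project's shipped settings):

```
<syscheck>
  <!-- full integrity scan every 12 hours -->
  <frequency>43200</frequency>

  <!-- directories to monitor, checking all file attributes -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>

  <!-- noisy files that change constantly -->
  <ignore>/etc/mtab</ignore>
</syscheck>
```

Changes detected by syscheck are raised as alerts on the manager, which is how they surface in the FIM visualizations mentioned above.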
74 | 75 | Another helpful application component is the Wazuh management functionality, which is part of the Kibana app. This component allows for agent grouping, monitoring, error reporting, configuration review, and more. 76 | 77 | **Wazuh Manager UI** 78 | ![Manager](/images/managerui.png) 79 | 80 | Additional screenshots of the Wazuh app can be found in the [official documentation](https://documentation.wazuh.com/current/index.html#example-screenshots). 81 | 82 | #### Visualizations 83 | 84 | One of the most powerful features of building off of ELK is the visualization capabilities. We've included the [kbn_network plugin](https://github.com/dlumbrer/kbn_network) with this stack since we found it so useful for visualizing relationships between indexed field data. In this example, we use data from the packetbeat index to visualize the source/destination relationships of 25 distinct source/destination nodes. 85 | 86 | **kbn_network Plugin Visualization** 87 | ![Node 1](/images/visresult.png) 88 | 89 | While that's interesting to look at, it's a little too broad to be practical. If we add an additional search constraint based on source IP, we can view the unique hosts that the source IP has talked to over an arbitrary time period. 90 | 91 | **kbn_network Plugin Visualization** 92 | ![Node 2](/images/visresult1.png) 93 | 94 | This type of relationship mapping can be applied to any indexed data, such as DNS lookups, host executable activity, and probably a lot of other interesting things we haven't gotten around to just yet. 95 | 96 | #### Raw Search 97 | 98 | Elasticsearch and the Lucene query syntax are extremely powerful for searching very large volumes of indexed data. A detailed tutorial on using ELK to search data is beyond the scope of this documentation, but once SIAC is up and running, you can experiment with searching data in the filebeat, packetbeat, and wazuh-alerts indexes. 
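To get a feel for raw searching, here are a few starter queries for the Kibana search bar. The field names are assumptions based on the default 6.x beats mappings, so verify them against your own indexed documents before relying on them:

```
# In the packetbeat-* index: DNS lookups for a specific domain
type:dns AND dns.question.name:"example.com"

# In the packetbeat-* index: flows from one source IP to SSH
source.ip:"192.168.214.50" AND dest.port:22

# In the filebeat-* index: failed SSH logins from syslog
message:"Failed password"
```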
99 | 100 | #### Flexibility 101 | 102 | Beyond the inherent flexibility that exists when working with open-source software, all of the visual components can be customized to your needs. This means that if there's a saved search, visualization, or dashboard that you want to modify and save, it's very easy to do. 103 | 104 | #### Yes, it's a real SIEM 105 | 106 | At this point it should be clear that while SIAC may be small in this documented build, the sum of its components is more than capable of supporting an enterprise security program, both in terms of scale and functionality. Following the documentation, it should take no more than 30 minutes to have a SIAC instance up and running. 107 | 108 | ## Building it out 109 | 110 | ### Server: Installation and Configuration 111 | 112 | The backend stack uses Elasticsearch as the primary data store, which holds event data generated by client systems. This data is fed to the backend from the clients using [Beats](https://www.elastic.co/products/beats). We make sense of this data using [Kibana](https://www.elastic.co/products/kibana), [Wazuh](https://wazuh.com/), and various custom dashboards. 113 | 114 | The following installation and configuration steps should be considered "quick start" in order to get the system operational, have a rough understanding of how the components work together, and start searching some simple dashboards and event data. 115 | 116 | **Requirements:** 64-bit Ubuntu Desktop 16.04 LTS, 4GB RAM, 1 CPU core. Why desktop? It made copying and pasting easier in VMware. 117 | 118 | The following commands will set up the repositories for Wazuh, Java, Node, and Elastic, install the appropriate packages, generate an SSL certificate for the Wazuh auth daemon, and start the authorization service. 
119 | 120 | ``` 121 | apt-get update 122 | apt-get install curl apt-transport-https lsb-release 123 | curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add - 124 | echo "deb https://packages.wazuh.com/3.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list 125 | apt-get update 126 | apt-get install auditd 127 | apt-get install wazuh-manager=3.2.0-1 128 | openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048 -keyout /var/ossec/etc/sslmanager.key -out /var/ossec/etc/sslmanager.cert 129 | /var/ossec/bin/ossec-authd 130 | curl -sL https://deb.nodesource.com/setup_6.x | bash - 131 | apt-get install nodejs 132 | apt-get install wazuh-api=3.2.0-1 133 | add-apt-repository ppa:webupd8team/java 134 | apt-get update 135 | apt-get install oracle-java8-installer 136 | curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - 137 | echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-6.x.list 138 | apt-get update 139 | apt-get install elasticsearch=6.2.0 140 | apt-get install kibana=6.2.0 141 | ``` 142 | Open `/etc/elasticsearch/elasticsearch.yml`, uncomment `network.host:` and set it to be the IP bound to the primary network interface. 143 | 144 | Open `/etc/kibana/kibana.yml`, uncomment `server.port:` and leave it set to `5601`. Uncomment `server.host:` and set it to be the IP bound to the primary network interface. Uncomment `elasticsearch.url:` and change localhost to be the IP bound to the network interface of your elasticsearch service. 
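These edits can also be scripted, which helps when automating the build with tools like Salt or Terraform. The snippet below demonstrates the idea on a scratch copy of the relevant `kibana.yml` lines (the commented defaults shown match a stock 6.x install, and `192.168.214.134` is just an example IP; on a real server, run the same `sed` expressions against `/etc/kibana/kibana.yml`, and edit `network.host` in `elasticsearch.yml` the same way):

```shell
# Work on a scratch copy of the three kibana.yml lines we care about.
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
#server.port: 5601
#server.host: "localhost"
#elasticsearch.url: "http://localhost:9200"
EOF

# Uncomment server.port, and point server.host / elasticsearch.url
# at the example IP.
sed -i 's|^#server.port: 5601|server.port: 5601|' "$scratch"
sed -i 's|^#server.host: "localhost"|server.host: "192.168.214.134"|' "$scratch"
sed -i 's|^#elasticsearch.url: "http://localhost:9200"|elasticsearch.url: "http://192.168.214.134:9200"|' "$scratch"

cat "$scratch"
```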
145 | 146 | ``` 147 | systemctl enable elasticsearch.service 148 | systemctl restart elasticsearch.service 149 | systemctl enable kibana.service 150 | systemctl restart kibana.service 151 | ``` 152 | Do a quick health check to make sure your elasticsearch service is running, e.g.: 153 | 154 | ``` 155 | curl http://192.168.214.134:9200 156 | { 157 | "name" : "QHmFjRw", 158 | "cluster_name" : "elasticsearch", 159 | "cluster_uuid" : "pphMqGKKR8eTPunEnYyfXg", 160 | "version" : { 161 | "number" : "6.2.0", 162 | "build_hash" : "37cdac1", 163 | "build_date" : "2018-02-01T17:31:12.527918Z", 164 | "build_snapshot" : false, 165 | "lucene_version" : "7.2.1", 166 | "minimum_wire_compatibility_version" : "5.6.0", 167 | "minimum_index_compatibility_version" : "5.0.0" 168 | }, 169 | "tagline" : "You Know, for Search" 170 | } 171 | ``` 172 | Confirm that the Kibana app is online and accessible by pointing your browser at http://YOURKIBANAIP:5601/app/kibana. If all is well, proceed with installing the Wazuh Kibana app and other backend components. Be mindful of the following commands and change `localhost` to be the IP address of your elasticsearch server that is running on port 9200. 
173 | 174 | ``` 175 | curl https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -XPUT 'http://localhost:9200/_template/wazuh' -H 'Content-Type: application/json' -d @- 176 | curl https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-monitoring.json | curl -XPUT 'http://localhost:9200/_template/wazuh-agent' -H 'Content-Type: application/json' -d @- 177 | curl https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/alert_sample.json | curl -XPUT "http://localhost:9200/wazuh-alerts-3.x-"`date +%Y.%m.%d`"/wazuh/sample" -H 'Content-Type: application/json' -d @- 178 | ``` 179 | 180 | What we've just done is load the wazuh-alerts index template, load the wazuh-monitoring index template, and load one sample alert. Open Kibana, click the Management application, enter the index name with a * as pictured, and click "next step." 181 | 182 | **Kibana Management** 183 | ![Index Template](/images/wazuhalertindexdefine.png) 184 | 185 | On the next screen, select time filter field name "@timestamp." Click create index pattern. 186 | 187 | Once that's done, click Discover app in Kibana, and make sure the selected index is wazuh-alerts-. It's important to note that the sample alert inserted into elasticsearch is from 2015, so change your search time frame to be for the last 5 years. If successful, you should see the following. 188 | 189 | **Kibana Search** 190 | ![Sample Alert](/images/wazuhsamplesearch.png) 191 | 192 | We are almost done setting up the server. 193 | 194 | ``` 195 | apt-get install logstash=1:6.2.0-1 196 | curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/logstash/01-wazuh-local.conf 197 | usermod -a -G ossec logstash 198 | ``` 199 | Open up `/etc/logstash/conf.d/01-wazuh.conf` and change `hosts => ["localhost:9200"]` to the IP address of your elasticsearch service. 
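One detail worth unpacking from the sample-alert command above: the target index name embeds the current date via command substitution, so the document lands in a daily index that matches the `wazuh-alerts-3.x-*` pattern Kibana will be pointed at. A minimal sketch of how that URL is assembled (`localhost:9200` assumed, as in the original commands):

```shell
# Build the date-stamped index name, then the full document URL.
INDEX="wazuh-alerts-3.x-$(date +%Y.%m.%d)"
URL="http://localhost:9200/${INDEX}/wazuh/sample"
echo "$URL"
```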
200 | 201 | Next, start the logstash service, install the Wazuh Kibana app, and load the index templates for filebeat and packetbeat. 202 | ``` 203 | systemctl enable logstash.service 204 | systemctl start logstash.service 205 | export NODE_OPTIONS="--max-old-space-size=3072" 206 | /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.2.0_6.2.0.zip 207 | systemctl restart kibana.service 208 | apt-get install packetbeat=6.2.0 209 | apt-get install filebeat=6.2.0 210 | filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["YOURELASTICIP:9200"]' 211 | packetbeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["YOURELASTICIP:9200"]' 212 | ``` 213 | 214 | Reload your Kibana browser to access the Wazuh app on the left. When prompted, complete the API setup as documented [here](https://documentation.wazuh.com/current/installation-guide/installing-elastic-stack/connect_wazuh_app.html). Naturally it doesn't look like much because we haven't connected a client machine to populate it with data. It should be noted that even without an agent connected, the system will collect local event data from the server itself and slowly populate the indexes. 215 | 216 | **Wazuh Application** 217 | ![Wazuh Application](/images/wazuhdone.png) 218 | 219 | At this point, the server installation is done. The [kbn_network](https://github.com/dlumbrer/kbn_network) Kibana plugin is not required, but it is very cool. To install it, follow the steps on the GitHub repo and restart Kibana once done. Note that you will need to edit package.json and set the Kibana version to 6.2.0. To install the filebeat and packetbeat dashboards, please consult the documentation [here](https://www.elastic.co/guide/en/beats/filebeat/current/load-kibana-dashboards.html) and [here](https://www.elastic.co/guide/en/beats/packetbeat/master/load-kibana-dashboards.html) respectively. 
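The installs above pin exact versions on the command line; to also keep routine `apt-get upgrade` runs from silently drifting the stack, the versions can be held with an apt preferences file. The fragment below is an optional sketch, not part of the original instructions; the versions match the packages installed above (logstash gets its own stanza because its version string carries an epoch):

```
# /etc/apt/preferences.d/siac-pin
Package: elasticsearch kibana filebeat packetbeat
Pin: version 6.2.0*
Pin-Priority: 1001

Package: logstash
Pin: version 1:6.2.0-1
Pin-Priority: 1001
```

Alternatively, `apt-mark hold <package>` achieves a similar effect.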
220 | 221 | **Note:** It is strongly recommended that when building to scale, all package versions are pinned. For example, running different beat versions may result in naming inconsistencies in your indices. Running an Elastic cluster with mixed service versions, even minor versions such as 6.2.0 vs 6.2.1, will cause issues with cluster recovery, index rebalancing, and who knows what else. 222 | 223 | ## Client: Installation and Configuration 224 | 225 | The client-side stack is an amalgamation of lightweight software which generates data. This stack currently consists of: 226 | 227 | * [Osquery](https://osquery.io/) (OS instrumentation and querying) 228 | * [Wazuh](https://wazuh.com/) (File integrity monitoring + host-based intrusion detection + auditd analysis/transport + PCI stuff) 229 | * [Filebeat](https://www.elastic.co/products/beats/filebeat) (syslog + osquery + transport) 230 | * [Auditd](https://www.systutorials.com/docs/linux/man/8-auditd/) (Linux auditing system) 231 | * [Packetbeat](https://www.elastic.co/products/beats/packetbeat) (network data + transport) 232 | 233 | Installation of the client stack is very straightforward on Linux. Please note that if you're installing the agent on Windows, you will need to generate an agent key. This process is documented [here](https://documentation.wazuh.com/current/user-manual/registering/registration-process.html). There is an easy [PowerShell script](https://raw.githubusercontent.com/wazuh/wazuh-api/3.2/examples/api-register-agent.ps1) provided that works very well for registering agents using the RESTful API. Be advised that you need to configure the script to point to your installation directory (on the local Windows system) and set "Wazuh-Manager-IP" to the IP address of your server. 234 | 235 | **Requirements:** 64-bit Ubuntu Desktop 16.04 LTS, 2GB RAM, 1 CPU core. 236 | 237 | To make life easy, copy the following into a Bash script and execute it. 
**Before** running the script, change `WAZUHMANAGERIP` on line 23 to be the IP of your Wazuh manager server. 238 | ```bash 239 | #!/bin/bash 240 | #in case curl isn't installed... 241 | yes | apt-get install curl && 242 | 243 | curl -s https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - && 244 | echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-6.x.list && 245 | apt-get update && 246 | 247 | #install auditd on the client 248 | yes | apt-get install auditd && 249 | #download auditd config 250 | curl -so /etc/audit/audit.rules https://raw.githubusercontent.com/citybasebrooks/SIAC/master/configs/auditd.rules && 251 | 252 | #install the Wazuh agent 253 | yes | apt-get install lsb-release && 254 | curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add - && 255 | CODENAME=$(lsb_release -cs) && 256 | echo "deb https://packages.wazuh.com/apt $CODENAME main" \ 257 | | tee /etc/apt/sources.list.d/wazuh.list && 258 | apt-get update && 259 | yes | apt-get install wazuh-agent && 260 | #register the agent 261 | /var/ossec/bin/agent-auth -m WAZUHMANAGERIP && 262 | 263 | #install and configure osquery 264 | apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B && 265 | add-apt-repository "deb [arch=amd64] https://osquery-packages.s3.amazonaws.com/xenial xenial main" && 266 | apt-get update && 267 | yes | apt-get install osquery && 268 | curl -so /etc/osquery/osquery.conf https://raw.githubusercontent.com/citybasebrooks/SIAC/master/configs/osquery.conf && 269 | 270 | #install and configure filebeat 271 | yes | apt-get install filebeat && 272 | curl -so /etc/filebeat/filebeat.yml https://raw.githubusercontent.com/citybasebrooks/SIAC/master/configs/filebeat.yml && 273 | 274 | #install and configure packetbeat 275 | yes | apt-get install packetbeat && 276 | curl -so /etc/packetbeat/packetbeat.yml 
https://raw.githubusercontent.com/citybasebrooks/SIAC/master/configs/packetbeat.yml && 277 | 278 | echo "*****Done. Script completed successfully*****" 279 | ``` 280 | Assuming the script completes without issues, you're almost done. 281 | 282 | Edit `/var/ossec/etc/ossec.conf` and change `MANAGER_IP` to be the IP of your Wazuh manager server. 283 | 284 | Edit `/etc/filebeat/filebeat.yml` and change `hosts: ["YOURELASTICIP:9200"]` to be the IP of your Elasticsearch server. 285 | 286 | Edit `/etc/packetbeat/packetbeat.yml` and change `hosts: ["YOURELASTICIP:9200"]` to be the IP of your Elasticsearch server. 287 | 288 | Enable and restart the client stack services: 289 | ``` 290 | root@client# systemctl enable filebeat && systemctl restart filebeat 291 | root@client# systemctl enable packetbeat && systemctl restart packetbeat 292 | root@client# systemctl enable wazuh-agent && systemctl restart wazuh-agent 293 | root@client# osqueryctl start 294 | ``` 295 | Refresh Kibana in your browser, then go back to the Management app to create index patterns for `packetbeat-*` and `filebeat-*`. 296 | 297 | Congratulations. Setup is complete. Check out the cool stuff under Dashboards, Visualizations, and Discover in Kibana! 298 | 299 | A few notes about the client services: 300 | 301 | * The current osquery configuration file schedules certain [query packs](https://github.com/facebook/osquery/tree/master/packs) to run, rather than facilitating real-time querying. Osquery supports real-time querying/management, and can be scaled out using options such as [Fleet](https://github.com/kolide/fleet) or [Doorman](https://github.com/mwielgoszewski/doorman). 302 | 303 | * When using the Redis output for any of the Beats, there will be **no compression.** The workaround for this is to architect a pipeline such that Logstash is used as a sort of "decompression proxy" before the data goes into Redis. 
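The "decompression proxy" workaround above can be sketched as a minimal Logstash pipeline. This is illustrative rather than part of this project's configs: the file path, listen port, and Redis key name are assumed values, not anything shipped in this repo.

```
# Hypothetical file: /etc/logstash/conf.d/beats-to-redis.conf
# The Beats ship to Logstash compressed (point each client's output.logstash
# at this port instead of using output.redis); Logstash then writes the
# events into a Redis list for queuing.

input {
  beats {
    port => 5044
  }
}

output {
  redis {
    host => "YOURREDISIP"
    data_type => "list"
    key => "siac-events"
  }
}
```

On the consuming side, a second Logstash pipeline with the corresponding `redis` input would pop events off the `siac-events` list and index them into Elasticsearch.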
304 | 305 | ### Portability 306 | 307 | For ease of documentation, both the client and server systems are running 64-bit Ubuntu 16.04 LTS. All of the client and server components can run on DEB, RPM, and Windows-based operating systems. Porting the software and config files to a different OS should be very straightforward. 308 | 309 | ### Questions 310 | 311 | **How is the performance?** 312 | 313 | Great! For some simple testing and capacity planning, we used a 3-node cluster with 2 data nodes and 1 master/search node. The data nodes ran with 4 cores and 16GB of RAM. There were no special storage requirements. Our testing demonstrated that we were able to index over 100 million events in a 24-hour period (more than 1,000 EPS) without running into any memory, CPU, or disk issues. The data nodes were able to keep up with massive EPS spikes without needing a queuing mechanism, such as Redis. We probably could have more than quadrupled our EPS ingest without having to grow the cluster, aside from storage. In production it is recommended to have something in the pipeline for queuing. 314 | 315 | **How is this different from Security Onion?** 316 | 317 | Architecturally, I'd say it's significantly different, but it's also not designed to be as off-the-shelf as Security Onion. One of the big benefits of SIAC is that at its core, it's nothing more than FOSS packages and configuration files. This means that you can import this data into config management or automation software, such as Salt and Terraform, and stand up your own SIAC in a matter of minutes. 318 | 319 | **Does it handle CloudTrail logs?** 320 | 321 | Yes. See [here](https://documentation.wazuh.com/current/amazon/index.html). 322 | 323 | **How come you didn't use a certain component?** 324 | 325 | There are a lot of things that could be added to the stack or swapped out for something else, but again, the project exists to demonstrate a particular concept. 
The client and server stacks are very modular, so adding or substituting components shouldn't be too difficult. 326 | 327 | **Why Packetbeat as opposed to Bro?** 328 | 329 | Bro is a perfectly acceptable component for this stack. For our R&D purposes, we were dealing with a pure AWS workload, so the concept of a network TAP doesn't exist. We could have worked around that with a Bro cluster, but that would have been a bit more difficult to administer for a PoC. The reason is Bro's cluster architecture: in a cloud-based environment, it requires the implementation of worker, proxy, and manager nodes. The [manager node serves a **very** important role](https://www.bro.org/sphinx/cluster/index.html): 330 | 331 | >It receives log messages and notices from the rest of the nodes in the cluster using the Bro communications protocol (note that if you are using a logger, then the logger receives all logs instead of the manager). The result is a single log instead of many discrete logs that you have to combine in some manner with post-processing. The manager also takes the opportunity to de-duplicate notices, and it has the ability to do so since it’s acting as the choke point for notices and how notices might be processed into actions (e.g., emailing, paging, or blocking). 332 | 333 | Packetbeat has a [built-in facility](https://www.elastic.co/guide/en/beats/packetbeat/current/configuration-interfaces.html#_literal_ignore_outgoing_literal) for avoiding duplicate records with less architectural complexity. 334 | 335 | **What about Windows support?** 336 | 337 | This is all cross-platform. The big thing you'd be missing on your client stack is the Windows equivalent of auditd. 
To ameliorate this, I would recommend looking at [Winlogbeat](https://www.elastic.co/downloads/beats/winlogbeat) and [sysmon](https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon) used in conjunction with SwiftOnSecurity's [sysmon config](https://github.com/SwiftOnSecurity/sysmon-config). Also of interest might be the sysmon configs by [Olaf Hartong](https://github.com/olafhartong/sysmon-modular) and [ion-storm](https://github.com/ion-storm/sysmon-config). 338 | 339 | **What kind of alerting is available?** 340 | 341 | This stack has none, but that's an easy problem to solve. The Wazuh manager can be configured to alert through [PagerDuty, Slack](https://documentation.wazuh.com/current/user-manual/manager/output-options/manual-integration.html), and [email](https://documentation.wazuh.com/current/user-manual/manager/output-options/manual-email-report/index.html). For an Elasticsearch or Kibana plugin, please explore options like [Sentinl](https://github.com/sirensolutions/sentinl), [ElastAlert](https://github.com/Yelp/elastalert), or [411](https://github.com/etsy/411). The latter options allow for more complex alerting. 342 | 343 | **How customizable is it?** 344 | 345 | Extremely. All dashboards, visualizations, index patterns, saved searches, etc. are customizable and can be saved. Since Wazuh is a fork of OSSEC, there is of course support for creating your own custom rules and decoders. 346 | 347 | **How does data retention work?** 348 | 349 | Time/volume for data storage is up to the user. By default, the indexes will roll over every 24 hours, starting new indexes in the format `indexname-YYYY.MM.dd`. Closing, deleting, and managing indexes can be accomplished with [curator](https://github.com/elastic/curator). 350 | 351 | **What about updating?** 352 | 353 | The components outlined here will all work in harmony so long as you keep your Elasticsearch, Kibana, Wazuh, and Beats versions pinned to 6.2.0 (Elastic) and 3.2.0 (Wazuh). 
If you want to update any of these packages, your biggest dependency will be to make sure that the Wazuh Kibana app has been updated to support your target Elasticsearch/Kibana version. And of course, test everything prior to upgrading in production. 354 | 355 | **What might a distributed architecture look like?** 356 | 357 | There are many different ways to build an ELK stack, and your data pipeline may look different depending on what works best in your environment. Based on the components in this project, you may wish to use the below as a starting point. This gives you a 3-node ES cluster with two data nodes and one dedicated search/master node. Kibana is broken out as a separate component and pointed at the ES master node for search. The Wazuh manager node is split off to its own system. Data from the Wazuh manager is pushed to one of your ingest nodes. With regard to the client stack, Filebeat, Packetbeat, and osquery data would be shipped directly to one of your ingest nodes as well. The Wazuh agent would talk directly to the manager node. 358 | 359 | ![Sample Architecture](/images/samplearch.png) 360 | 361 | 362 | #### Notices 363 | Distributed under the [Apache License 2.0](https://github.com/CityBaseInc/SIACWIP/blob/master/LICENSE) 364 | 365 | Developed by Andrew Brooks of CityBase 366 | 367 | Special thanks to CityBase, my peers for helping with documentation review, and the many people and projects that inspired this. 368 | 369 | THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 370 | HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 371 | -------------------------------------------------------------------------------- /configs/auditd.rules: -------------------------------------------------------------------------------- 1 | # First rule - delete all 2 | -D 3 | 4 | # Increase the buffers to survive stress events. 5 | # Make this bigger for busy systems 6 | -b 320 7 | 8 | # Feel free to add below this line. See auditctl man page 9 | -w /home -p w -k audit-wazuh-w 10 | -w /home -p a -k audit-wazuh-a 11 | -w /home -p r -k audit-wazuh-r 12 | -w /home -p x -k audit-wazuh-x 13 | 14 | -a exit,always -F euid=0 -F arch=b64 -S execve -k audit-wazuh-c 15 | -a exit,always -F euid=0 -F arch=b32 -S execve -k audit-wazuh-c -------------------------------------------------------------------------------- /configs/filebeat.yml: -------------------------------------------------------------------------------- 1 | name: filebeat 2 | 3 | filebeat.prospectors: 4 | - type: log 5 | paths: 6 | - /var/log/*.log 7 | - type: log 8 | paths: /var/log/osquery/osqueryd.*.log 9 | json.keys_under_root: true 10 | json.add_error_key: true 11 | 12 | output.elasticsearch: 13 | hosts: ["YOURELASTICIP:9200"] -------------------------------------------------------------------------------- /configs/osquery.conf: -------------------------------------------------------------------------------- 1 | { 2 | "schedule": { 3 | "osquery_status": { 4 | "query": "SELECT * FROM osquery_info;", 5 | "interval": 60, 6 | "description": "Display 
information about the osqueryd daemon" 7 | }, 8 | "logged_in_users": { 9 | "query": "SELECT * FROM logged_in_users;", 10 | "interval": 60 11 | }, 12 | 13 | "kernel_integrity": { 14 | "query": "SELECT * FROM kernel_integrity;", 15 | "interval": 60 16 | } 17 | }, 18 | 19 | "packs": { 20 | "incident-response": "/usr/share/osquery/packs/incident-response.conf", 21 | "it-compliance": "/usr/share/osquery/packs/it-compliance.conf", 22 | "hardware-monitoring": "/usr/share/osquery/packs/hardware-monitoring.conf" 23 | } 24 | } -------------------------------------------------------------------------------- /configs/packetbeat.yml: -------------------------------------------------------------------------------- 1 | #============================== Network device ================================ 2 | name: packetbeat 3 | # Select the network interface to sniff the data. On Linux, you can use the 4 | # "any" keyword to sniff on all connected interfaces. 5 | packetbeat.interfaces.device: any 6 | 7 | #================================== Flows ===================================== 8 | 9 | # Set `enabled: false` or comment out all options to disable flows reporting. 10 | packetbeat.flows: 11 | # Set network flow timeout. Flow is killed if no packet is received before being 12 | # timed out. 13 | timeout: 30s 14 | 15 | # Configure reporting period. If set to -1, only killed flows will be reported 16 | period: 10s 17 | 18 | #========================== Transaction protocols ============================= 19 | 20 | packetbeat.protocols: 21 | - type: icmp 22 | # Enable ICMPv4 and ICMPv6 monitoring. Default: false 23 | enabled: true 24 | 25 | - type: dns 26 | # Configure the ports where to listen for DNS traffic. You can disable 27 | # the DNS protocol by commenting out the list of ports. 28 | ports: [53] 29 | 30 | # include_authorities controls whether or not the dns.authorities field 31 | # (authority resource records) is added to messages. 
32 | include_authorities: true 33 | 34 | # include_additionals controls whether or not the dns.additionals field 35 | # (additional resource records) is added to messages. 36 | include_additionals: true 37 | 38 | - type: http 39 | # Configure the ports where to listen for HTTP traffic. You can disable 40 | # the HTTP protocol by commenting out the list of ports. 41 | ports: [80, 8080, 8000, 5000, 8002] 42 | 43 | output.elasticsearch: 44 | hosts: ["YOURELASTICIP:9200"] 45 | -------------------------------------------------------------------------------- /images/demoagents.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/demoagents.png -------------------------------------------------------------------------------- /images/dssdash.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/dssdash.png -------------------------------------------------------------------------------- /images/managerui.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/managerui.png -------------------------------------------------------------------------------- /images/samplearch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/samplearch.png -------------------------------------------------------------------------------- /images/visresult.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/visresult.png 
-------------------------------------------------------------------------------- /images/visresult1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/visresult1.png -------------------------------------------------------------------------------- /images/wazuhalertindexdefine.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/wazuhalertindexdefine.png -------------------------------------------------------------------------------- /images/wazuhdone.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/wazuhdone.png -------------------------------------------------------------------------------- /images/wazuhsamplesearch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CityBaseInc/SIAC/5f8f49931d462199f2c04cbffce54bf7ba66232d/images/wazuhsamplesearch.png --------------------------------------------------------------------------------