├── .env ├── .github └── ISSUE_TEMPLATE │ └── bug-issue-report.md ├── CONTRIBUTING.md ├── LICENSE ├── PULL_REQUEST_TEMPLATE.md ├── README.md ├── docker-compose.yml ├── etc ├── logstash │ └── config │ │ ├── logstash.yml │ │ └── pipelines.yml └── pfelk │ ├── conf.d │ ├── 01-inputs.pfelk │ ├── 02-firewall.pfelk │ ├── 05-apps.pfelk │ ├── 20-interfaces.pfelk │ ├── 30-geoip.pfelk │ ├── 35-rules-desc.bkg │ ├── 36-ports-desc.bkg │ ├── 37-enhanced_user_agent.pfelk │ ├── 38-enhanced_url.pfelk │ ├── 45-enhanced_private.pfelk │ ├── 49-cleanup.pfelk │ └── 50-outputs.pfelk │ ├── databases │ ├── private-hostnames.csv │ ├── rule-names.csv │ └── service-names-port-numbers.csv │ └── patterns │ ├── openvpn.grok │ └── pfelk.grok └── set-logstash-password.sh /.env: -------------------------------------------------------------------------------- 1 | # User 2 | ELASTIC_USER=elastic 3 | 4 | # Password for the 'elastic' user (at least 6 characters) 5 | ELASTIC_PASSWORD=changeme 6 | 7 | # Password for the 'kibana_system' user (at least 6 characters) 8 | KIBANA_PASSWORD=changeme 9 | 10 | # Password for the 'logstash_system' user (at least 6 characters) 11 | LOGSTASH_PASSWORD=changeme 12 | 13 | # Version of Elastic products 14 | STACK_VERSION=8.9.0 15 | 16 | # Set the cluster name 17 | CLUSTER_NAME=pfelk 18 | 19 | # Set to 'basic' or 'trial' to automatically start the 30-day trial 20 | LICENSE=basic 21 | #LICENSE=trial 22 | 23 | # Port to expose Elasticsearch HTTP API to the host 24 | ES_PORT=9200 25 | #ES_PORT=127.0.0.1:9200 26 | 27 | # Port to expose Kibana to the host 28 | KIBANA_PORT=5601 29 | #KIBANA_PORT=80 30 | 31 | # Increase or decrease based on the available host memory (in bytes) 32 | MEM_LIMIT=1073741824 33 | 34 | # Project namespace (defaults to the current folder name if not set) 35 | #COMPOSE_PROJECT_NAME=myproject 36 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug-issue-report.md: 
-------------------------------------------------------------------------------- 1 | --- 2 | name: Bug/issue report 3 | about: Create a report to help us improve 4 | title: '' 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the bug** 11 | A clear and concise description of what the bug is. 12 | 13 | **To Reproduce** 14 | Steps to reproduce the behavior: 15 | 1. Go to '...' 16 | 2. Click on '....' 17 | 3. Scroll down to '....' 18 | 4. See error 19 | 20 | **Screenshots** 21 | If applicable, add screenshots to help explain your problem. 22 | 23 | **Operating System (please complete the following information):** 24 | - OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`): 25 | - Version of Docker (`docker -v`): 26 | - Version of Docker-Compose (`docker-compose -v`): 27 | 28 | **Elasticsearch, Logstash, Kibana (please complete the following information):** 29 | - Version of ELK (`cat /docker-pfelk/.env`) 30 | 31 | **Service logs** 32 | - `docker-compose logs pfelk01` 33 | - `docker-compose logs pfelk02` 34 | - `docker-compose logs pfelk03` 35 | - `docker-compose logs logstash` 36 | - `docker-compose logs kibana` 37 | 38 | **Additional context** 39 | Add any other context about the problem here. 40 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to docker-pfelk 2 | 3 | --- 4 | 5 | ### Questions :question: 6 | 7 | * Visit our [FAQs](#) <<<-coming soon->>> 8 | * <<>> 9 | 10 | --- 11 | 12 | ### Start Here 13 | First, check out the [pfELK README](README.md) for information on how to configure and run pfELK.
14 | 15 | **Then, forward your firewall logs.** 16 | 17 | * Make sure you followed all instructions 18 | * Configured your firewall(s) (pfSense or OPNsense) 19 | * All required dependencies were installed and/or configured 20 | 21 | > :clock1: _For more complex installations (e.g. multiple firewalls, or sending firewall logs through a VPN), please reach out for assistance._ 22 | --- 23 | 24 | ### How to contribute 25 | 26 | #### Documentation! :page_with_curl: 27 | 28 | Please help improve the documentation by contributing to our READMEs, examples, and FAQs. 29 | 30 | #### Bugs :beetle: 31 | 32 | **Before submitting a bug report:** 33 | * Ensure the bug was not already reported by searching for [existing issues in pfELK](https://github.com/a3ilson/pfelk/issues) 34 | * If an issue is already open, comment on that issue and provide any additional details to assist 35 | * Check the [FAQs](#) for common questions and problems <<<-coming soon->>> 36 | 37 | Bugs and issues are tracked as [GitHub Issues](https://github.com/a3ilson/pfelk/issues). 38 | **Please follow these guidelines when submitting a bug report:** 39 | * Ensure the title captures the subject of the issue 40 | * Describe the exact procedure for replicating the issue(s) 41 | * Explain the issue(s) 42 | * Fill out the [Bug/Error template](https://github.com/a3ilson/pfelk/issues/new/choose) 43 | 44 | #### Feature Requests :sparkles: 45 | 46 | Feature requests include new features and minor improvements to existing functionality. 47 | 48 | Feature requests are tracked as [GitHub Issues](https://github.com/a3ilson/pfelk/issues/new/choose).
49 | **Please follow these guidelines when submitting a feature request:** 50 | * Ensure the title captures the subject of the requested feature 51 | * Describe the feature in as much detail as possible 52 | * Provide examples to help us understand the requested feature(s) 53 | * Follow the directions in the [feature template](https://github.com/a3ilson/pfelk/issues/new/choose) 54 | 55 | #### Pull Requests 56 | 57 | **Collaboration is highly encouraged!** Make it better, improve it and share! 58 | 59 | **Help us with pull requests:** 60 | * Describe the problem and solution 61 | * Reference and include the issue number(s) 62 | * Verify any and all changes were extensively tested 63 | 64 | If this helped, feel free to make a contribution: 65 | 66 | [![Donate](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=KA7KSUM22FW7Q&currency_code=USD&source=url). 67 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity.
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | # Pull Request Template 2 | 3 | ## Description 4 | 5 | Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. 6 | 7 | Fixes # (issue) 8 | 9 | ## Type of change 10 | 11 | Please delete options that are not relevant. 12 | 13 | - [ ] Bug fix (non-breaking change which fixes an issue) 14 | - [ ] New feature (non-breaking change which adds functionality) 15 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) 16 | - [ ] This change requires a documentation update 17 | 18 | ## How Has This Been Tested? 19 | 20 | Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. 
Please also list any relevant details for your test configuration. 21 | 22 | - [ ] Original Configuration 23 | - [ ] Adjusted Configuration 24 | 25 | **Test Configuration**: 26 | * Elastic Stack Version: 27 | * Linux Version/Type: 28 | * Java Version: 29 | * Docker Version: 30 | * Docker-Compose Version: 31 | * Elastic Stack Configuration Files: 32 | 33 | ## Checklist: 34 | 35 | - [ ] Code follows the style guidelines of this project 36 | - [ ] I have performed a self-review of my own code 37 | - [ ] I have commented my code, particularly in hard-to-understand areas 38 | - [ ] I have made corresponding changes to the documentation 39 | - [ ] My changes generate no new warnings 40 | - [ ] I have added tests that prove my fix is effective or that my feature works 41 | - [ ] New and existing unit tests pass locally with my changes 42 | - [ ] Any dependent changes have been merged and published in downstream modules 43 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Merged repository with [pfelk/pfelk](https://github.com/pfelk/pfelk) 2 | * 13 August 2023 3 | 4 | # Elastic Integration 5 | - https://docs.elastic.co/en/integrations/pfsense 6 | 7 | # docker-pfelk 8 | Deploy pfelk with docker-compose [Video Tutorial](https://www.youtube.com/watch?v=xl0v9h8RXBc) 9 | 10 | ![Version badge](https://img.shields.io/badge/ELK-8.9.0-blue.svg) 11 | 12 | [![YouTube](https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/3ilson) 13 | 14 | ### (0) Required Prerequisites 15 | - [X] Docker 16 | - [X] Docker-Compose 17 | - [X] Adequate Memory (e.g.
8GB+) 18 | 19 | #### (1) Docker Install 20 | ``` 21 | sudo apt-get install docker.io 22 | ``` 23 | ``` 24 | sudo apt-get install docker-compose 25 | ``` 26 | 27 | ### (2) Download pfELK Docker 28 | ``` 29 | sudo wget https://github.com/pfelk/docker/archive/refs/heads/main.zip 30 | ``` 31 | #### (2a) Unzip main.zip 32 | ``` 33 | sudo apt-get install unzip 34 | ``` 35 | ``` 36 | sudo unzip main.zip 37 | ``` 38 | ### (3) Memory 39 | #### (3a) Set vm.max_map_count to no less than 262144 (must run each time host is booted) 40 | ``` 41 | sudo sysctl -w vm.max_map_count=262144 42 | ``` 43 | #### (3b) Set vm.max_map_count to no less than 262144 (one-time configuration) 44 | ``` 45 | echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf 46 | ``` 47 | ### (4) Configure Variables (Credentials) 48 | #### (4a) Edit `.env` File 49 | ``` 50 | sudo nano .env 51 | ``` 52 | #### (4b) Amend `.env` File as Desired 53 | ``` 54 | STACK_VERSION=8.9.0 55 | ELASTIC_PASSWORD=changeme 56 | KIBANA_PASSWORD=changeme 57 | LOGSTASH_PASSWORD=changeme 58 | LICENSE=basic 59 | ``` 60 | #### (4c) Update `LOGSTASH_PASSWORD` in configuration files 61 | ``` 62 | sed -i 's/logstash_system_password/LOGSTASH-PASSWORD/' etc/logstash/config/logstash.yml 63 | sed -i 's/elastic_password/ELASTIC-PASSWORD/' etc/pfelk/conf.d/50-outputs.pfelk 64 | ``` 65 | Substitute `LOGSTASH-PASSWORD` and `ELASTIC-PASSWORD` above with the passwords set in `.env`, or use the script: 66 | ``` 67 | ./set-logstash-password.sh 68 | ``` 69 | ### (5) Start Docker 70 | ``` 71 | sudo docker-compose up 72 | ``` 73 | Once fully running, navigate to the host IP on the Kibana port (e.g. 192.168.0.100:5601) 74 | 75 | ### (6) Install Templates 76 | * Templates [here](https://github.com/pfelk/pfelk/blob/main/install/templates.md) 77 | 78 | ### (7) Finish Configuring 79 | * Finish Configuring [here](https://github.com/pfelk/pfelk/blob/main/install/configuration.md) 80 | 81 | ### (8) Finished 82 | -------------------------------------------------------------------------------- /docker-compose.yml:
-------------------------------------------------------------------------------- 1 | version: "2.2" 2 | 3 | services: 4 | setup: 5 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION} 6 | volumes: 7 | - certs:/usr/share/elasticsearch/config/certs 8 | user: "0" 9 | command: > 10 | bash -c ' 11 | if [ x${ELASTIC_PASSWORD} == x ]; then 12 | echo "Set the ELASTIC_PASSWORD environment variable in the .env file"; 13 | exit 1; 14 | elif [ x${KIBANA_PASSWORD} == x ]; then 15 | echo "Set the KIBANA_PASSWORD environment variable in the .env file"; 16 | exit 1; 17 | elif [ x${LOGSTASH_PASSWORD} == x ]; then 18 | echo "Set the LOGSTASH_PASSWORD environment variable in the .env file"; 19 | exit 1; 20 | fi; 21 | if [ ! -f config/certs/ca.zip ]; then 22 | echo "Creating CA"; 23 | bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip; 24 | unzip config/certs/ca.zip -d config/certs; 25 | fi; 26 | if [ ! -f config/certs/certs.zip ]; then 27 | echo "Creating certs"; 28 | echo -ne \ 29 | "instances:\n"\ 30 | " - name: es01\n"\ 31 | " dns:\n"\ 32 | " - es01\n"\ 33 | " - localhost\n"\ 34 | " ip:\n"\ 35 | " - 127.0.0.1\n"\ 36 | " - name: es02\n"\ 37 | " dns:\n"\ 38 | " - es02\n"\ 39 | " - localhost\n"\ 40 | " ip:\n"\ 41 | " - 127.0.0.1\n"\ 42 | " - name: es03\n"\ 43 | " dns:\n"\ 44 | " - es03\n"\ 45 | " - localhost\n"\ 46 | " ip:\n"\ 47 | " - 127.0.0.1\n"\ 48 | > config/certs/instances.yml; 49 | bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key; 50 | unzip config/certs/certs.zip -d config/certs; 51 | fi; 52 | echo "Setting file permissions" 53 | chown -R 1000:1000 config/certs; 54 | find . -type d -exec chmod 750 \{\} \;; 55 | find . 
-type f -exec chmod 640 \{\} \;; 56 | echo "Waiting for Elasticsearch availability"; 57 | until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done; 58 | echo "Setting kibana_system password"; 59 | until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done; 60 | echo "Setting logstash_system password"; 61 | until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/logstash_system/_password -d "{\"password\":\"${LOGSTASH_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done; 62 | echo "All done!"; 63 | ' 64 | healthcheck: 65 | test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"] 66 | interval: 1s 67 | timeout: 5s 68 | retries: 120 69 | 70 | es01: 71 | depends_on: 72 | setup: 73 | condition: service_healthy 74 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION} 75 | volumes: 76 | - certs:/usr/share/elasticsearch/config/certs 77 | - esdata01:/usr/share/elasticsearch/data 78 | ports: 79 | - ${ES_PORT}:9200 80 | environment: 81 | - node.name=es01 82 | - cluster.name=${CLUSTER_NAME} 83 | - cluster.initial_master_nodes=es01,es02,es03 84 | - discovery.seed_hosts=es02,es03 85 | - ELASTIC_PASSWORD=${ELASTIC_PASSWORD} 86 | - bootstrap.memory_lock=true 87 | - xpack.security.enabled=true 88 | - xpack.security.http.ssl.enabled=true 89 | - xpack.security.http.ssl.key=certs/es01/es01.key 90 | - xpack.security.http.ssl.certificate=certs/es01/es01.crt 91 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 92 | - xpack.security.http.ssl.verification_mode=certificate 93 | - xpack.security.transport.ssl.enabled=true 94 | - xpack.security.transport.ssl.key=certs/es01/es01.key 95 | - 
xpack.security.transport.ssl.certificate=certs/es01/es01.crt 96 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 97 | - xpack.security.transport.ssl.verification_mode=certificate 98 | - xpack.license.self_generated.type=${LICENSE} 99 | mem_limit: ${MEM_LIMIT} 100 | ulimits: 101 | memlock: 102 | soft: -1 103 | hard: -1 104 | healthcheck: 105 | test: 106 | [ 107 | "CMD-SHELL", 108 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'", 109 | ] 110 | interval: 10s 111 | timeout: 10s 112 | retries: 120 113 | 114 | es02: 115 | depends_on: 116 | - es01 117 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION} 118 | volumes: 119 | - certs:/usr/share/elasticsearch/config/certs 120 | - esdata02:/usr/share/elasticsearch/data 121 | environment: 122 | - node.name=es02 123 | - cluster.name=${CLUSTER_NAME} 124 | - cluster.initial_master_nodes=es01,es02,es03 125 | - discovery.seed_hosts=es01,es03 126 | - bootstrap.memory_lock=true 127 | - xpack.security.enabled=true 128 | - xpack.security.http.ssl.enabled=true 129 | - xpack.security.http.ssl.key=certs/es02/es02.key 130 | - xpack.security.http.ssl.certificate=certs/es02/es02.crt 131 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 132 | - xpack.security.http.ssl.verification_mode=certificate 133 | - xpack.security.transport.ssl.enabled=true 134 | - xpack.security.transport.ssl.key=certs/es02/es02.key 135 | - xpack.security.transport.ssl.certificate=certs/es02/es02.crt 136 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 137 | - xpack.security.transport.ssl.verification_mode=certificate 138 | - xpack.license.self_generated.type=${LICENSE} 139 | mem_limit: ${MEM_LIMIT} 140 | ulimits: 141 | memlock: 142 | soft: -1 143 | hard: -1 144 | healthcheck: 145 | test: 146 | [ 147 | "CMD-SHELL", 148 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication 
credentials'", 149 | ] 150 | interval: 10s 151 | timeout: 10s 152 | retries: 120 153 | 154 | es03: 155 | depends_on: 156 | - es02 157 | image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION} 158 | volumes: 159 | - certs:/usr/share/elasticsearch/config/certs 160 | - esdata03:/usr/share/elasticsearch/data 161 | environment: 162 | - node.name=es03 163 | - cluster.name=${CLUSTER_NAME} 164 | - cluster.initial_master_nodes=es01,es02,es03 165 | - discovery.seed_hosts=es01,es02 166 | - bootstrap.memory_lock=true 167 | - xpack.security.enabled=true 168 | - xpack.security.http.ssl.enabled=true 169 | - xpack.security.http.ssl.key=certs/es03/es03.key 170 | - xpack.security.http.ssl.certificate=certs/es03/es03.crt 171 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 172 | - xpack.security.http.ssl.verification_mode=certificate 173 | - xpack.security.transport.ssl.enabled=true 174 | - xpack.security.transport.ssl.key=certs/es03/es03.key 175 | - xpack.security.transport.ssl.certificate=certs/es03/es03.crt 176 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 177 | - xpack.security.transport.ssl.verification_mode=certificate 178 | - xpack.license.self_generated.type=${LICENSE} 179 | mem_limit: ${MEM_LIMIT} 180 | ulimits: 181 | memlock: 182 | soft: -1 183 | hard: -1 184 | healthcheck: 185 | test: 186 | [ 187 | "CMD-SHELL", 188 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'", 189 | ] 190 | interval: 10s 191 | timeout: 10s 192 | retries: 120 193 | 194 | kibana: 195 | depends_on: 196 | es01: 197 | condition: service_healthy 198 | es02: 199 | condition: service_healthy 200 | es03: 201 | condition: service_healthy 202 | image: docker.elastic.co/kibana/kibana:${STACK_VERSION} 203 | volumes: 204 | - certs:/usr/share/kibana/config/certs 205 | - kibanadata:/usr/share/kibana/data 206 | ports: 207 | - ${KIBANA_PORT}:5601 208 | environment: 209 | - SERVERNAME=kibana 210 | 
- ELASTICSEARCH_HOSTS=https://es01:9200 211 | - ELASTICSEARCH_USERNAME=kibana_system 212 | - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD} 213 | - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt 214 | mem_limit: ${MEM_LIMIT} 215 | healthcheck: 216 | test: 217 | [ 218 | "CMD-SHELL", 219 | "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'", 220 | ] 221 | interval: 10s 222 | timeout: 10s 223 | retries: 120 224 | 225 | logstash: 226 | depends_on: 227 | es01: 228 | condition: service_healthy 229 | es02: 230 | condition: service_healthy 231 | es03: 232 | condition: service_healthy 233 | image: docker.elastic.co/logstash/logstash:${STACK_VERSION} 234 | volumes: 235 | - certs:/usr/share/logstash/config/certs 236 | - ./etc/logstash/config/:/usr/share/logstash/config 237 | - ./etc/pfelk/conf.d/:/etc/pfelk/conf.d:ro 238 | - ./etc/pfelk/patterns/:/etc/pfelk/patterns:ro 239 | - ./etc/pfelk/databases/:/etc/pfelk/databases:ro 240 | ports: 241 | - 5140:5140/tcp 242 | - 5140:5140/udp 243 | environment: 244 | LS_JAVA_OPTS: -Xmx1G -Xms1G 245 | mem_limit: ${MEM_LIMIT} 246 | restart: unless-stopped 247 | 248 | volumes: 249 | certs: 250 | driver: local 251 | esdata01: 252 | driver: local 253 | esdata02: 254 | driver: local 255 | esdata03: 256 | driver: local 257 | kibanadata: 258 | driver: local 259 | -------------------------------------------------------------------------------- /etc/logstash/config/logstash.yml: -------------------------------------------------------------------------------- 1 | # logstash.yml 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: True (DOCKER ONLY) # 5 | # Description: This is a required file for a Docker installation # 6 | # # 7 | ################################################################################ 8 | # 9 | http.host: "0.0.0.0" 10 | #path.config: etc/pfelk/logstash/pipeline/pipelines.yml 11 | xpack.monitoring.elasticsearch.hosts: [
"https://es01:9200" ] 12 | xpack.monitoring.elasticsearch.username: elastic 13 | xpack.monitoring.elasticsearch.password: changeme 14 | xpack.monitoring.elasticsearch.ssl.certificate_authority: /usr/share/logstash/config/certs/ca/ca.crt 15 | 16 | ## X-Pack security credentials 17 | xpack.monitoring.enabled: true 18 | -------------------------------------------------------------------------------- /etc/logstash/config/pipelines.yml: -------------------------------------------------------------------------------- 1 | # pipelines.yml 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: True (DOCKER ONLY) # 5 | # Description: This is a required file for a pfelk installation # 6 | # This file is where you define your pipelines. You can define multiple. # 7 | # For more information on multiple pipelines, see the documentation: # 8 | # https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html # 9 | # # 10 | ################################################################################ 11 | # 12 | - pipeline.id: pfelk 13 | path.config: "/etc/pfelk/conf.d/*.pfelk" 14 | pipeline.ecs_compatibility: v8 #Disable if not running Elastic v8+ 15 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/01-inputs.pfelk: -------------------------------------------------------------------------------- 1 | # 01-inputs.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: True # 5 | # Description: Sets the type, port to listen, and initial grok pattern. User # 6 | # may amend at their own risk, as desired. 
# 7 | ################################################################################ 8 | # 9 | input { 10 | ### Firewall ### 11 | syslog { 12 | id => "pfelk-firewall-0001" 13 | type => "firewall" 14 | port => 5140 15 | syslog_field => "message" 16 | ecs_compatibility => v1 17 | grok_pattern => "<%{POSINT:[log][syslog][priority]}>%{GREEDYDATA:pfelk}" 18 | #ssl => true 19 | #ssl_certificate_authorities => ["/etc/logstash/ssl/YOURCAHERE.crt"] 20 | #ssl_certificate => "/etc/logstash/ssl/SERVER.crt" 21 | #ssl_key => "/etc/logstash/ssl/SERVER.key" 22 | #ssl_verify_mode => "force_peer" 23 | tags => ["pfelk"] 24 | } 25 | } 26 | # 27 | filter { 28 | grok { 29 | patterns_dir => [ "/etc/pfelk/patterns" ] 30 | match => [ "pfelk", "%{PFELK}" ] 31 | } 32 | #### RFC 5424 Date/Time Format #### 33 | date { 34 | match => [ "[event][created]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ] 35 | target => "[event][created]" 36 | } 37 | } 38 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/02-firewall.pfelk: -------------------------------------------------------------------------------- 1 | # 02-firewall.pfelk 2 | ################################################################################ 3 | # Version: 23.08a # 4 | # Required: True # 5 | # Description: Enriches pf (firewall) logs (OPNsense/pfSense) # 6 | # # 7 | ################################################################################ 8 | # 9 | filter { 10 | ### filterlog ### 11 | if [log][syslog][appname] =~ /^filterlog$/ { 12 | mutate { 13 | add_tag => "firewall" 14 | add_field => { "[event][dataset]" => "pfelk.firewall" } 15 | replace => { "[log][syslog][appname]" => "firewall" } 16 | copy => { "filter_message" => "pf_csv" } 17 | } 18 | mutate { 19 | split => { "pf_csv" => "," } 20 | } 21 | 22 | # [Common Fields] 23 | # rule.id, pf.rule.subid, pf.anchor, rule.uuid, interface.name, event.reason, event.action, network.direction, network.type 24 | # [Not ECS 
compliant fields] pf.rule.subid, 25 | mutate { 26 | add_field => { 27 | "[rule][id]" => "%{[pf_csv][0]}" 28 | "[pf][rule][subid]" => "%{[pf_csv][1]}" 29 | "[pf][anchor]" => "%{[pf_csv][2]}" 30 | "[rule][uuid]" => "%{[pf_csv][3]}" 31 | "[interface][name]" => "%{[pf_csv][4]}" 32 | "[event][reason]" => "%{[pf_csv][5]}" 33 | "[event][action]" => "%{[pf_csv][6]}" 34 | "[network][direction]" => "%{[pf_csv][7]}" 35 | "[network][type]" => "%{[pf_csv][8]}" 36 | } 37 | } 38 | # [IPv4] 39 | # [ECS compliant fields] network.iana_number, network.protocol, source.ip, destination.ip 40 | # [Not ECS compliant fields] pf.tos, pf.ecn, pf.ttl, pf.id, pf.offset, pf.flags, pf.packet.length 41 | if [network][type] == "4" { 42 | mutate { 43 | add_field => { 44 | "[pf][tos]" => "%{[pf_csv][9]}" 45 | "[pf][ecn]" => "%{[pf_csv][10]}" 46 | "[pf][ttl]" => "%{[pf_csv][11]}" 47 | "[pf][id]" => "%{[pf_csv][12]}" 48 | "[pf][offset]" => "%{[pf_csv][13]}" 49 | "[pf][flags]" => "%{[pf_csv][14]}" 50 | "[network][protocol]" => "%{[pf_csv][15]}" 51 | "[network][iana_number]" => "%{[pf_csv][16]}" 52 | "[pf][packet][length]" => "%{[pf_csv][17]}" 53 | "[source][ip]" => "%{[pf_csv][18]}" 54 | "[destination][ip]" => "%{[pf_csv][19]}" 55 | } 56 | } 57 | # [TCP] 58 | # [ECS compliant fields] source.port, destination.port 59 | # [Not ECS compliant fields] pf.data_length, pf.tcp.flags, pf.tcp.sequence_number, pf.tcp.ack, pf.tcp.window, pf.tcp.urg, pf.tcp.options 60 | if [network][protocol] == "tcp" { 61 | mutate { 62 | add_field => { 63 | "[source][port]" => "%{[pf_csv][20]}" 64 | "[destination][port]" => "%{[pf_csv][21]}" 65 | "[pf][data_length]" => "%{[pf_csv][22]}" 66 | "[pf][tcp][flags]" => "%{[pf_csv][23]}" 67 | "[pf][tcp][sequence_number]" => "%{[pf_csv][24]}" 68 | "[pf][tcp][ack]" => "%{[pf_csv][25]}" 69 | "[pf][tcp][window]" => "%{[pf_csv][26]}" 70 | "[pf][tcp][urg]" => "%{[pf_csv][27]}" 71 | "[pf][tcp][options]" => "%{[pf_csv][28]}" 72 | } 73 | } 74 | } 75 | # [UDP] 76 | # [ECS compliant fields]
source.port, destination.port 77 | # [Not ECS compliant fields] pf.data_length 78 | if [network][protocol] == "udp" { 79 | mutate { 80 | add_field => { 81 | "[source][port]" => "%{[pf_csv][20]}" 82 | "[destination][port]" => "%{[pf_csv][21]}" 83 | "[pf][data_length]" => "%{[pf_csv][22]}" 84 | } 85 | } 86 | } 87 | } 88 | # [IPv6] 89 | # [ECS compliant fields] network.iana_number, network.protocol, source.ip, destination.ip 90 | # [Not ECS compliant fields] pf.class, pf.flow, pf.hoplimit, pf.packet.length 91 | if [network][type] == "6" { 92 | mutate { 93 | add_field => { 94 | "[pf][class]" => "%{[pf_csv][9]}" 95 | "[pf][flow]" => "%{[pf_csv][10]}" 96 | "[pf][hoplimit]" => "%{[pf_csv][11]}" 97 | "[network][protocol]" => "%{[pf_csv][12]}" 98 | "[network][iana_number]" => "%{[pf_csv][13]}" 99 | "[pf][packet][length]" => "%{[pf_csv][14]}" 100 | "[source][ip]" => "%{[pf_csv][15]}" 101 | "[destination][ip]" => "%{[pf_csv][16]}" 102 | } 103 | } 104 | # [TCP] 105 | # [ECS compliant fields] source.port, destination.port 106 | # [Not ECS compliant fields] pf.data_length, pf.tcp.flags, pf.tcp.sequence_number, pf.tcp.ack, pf.tcp.window, pf.tcp.urg, pf.tcp.options 107 | if [network][protocol] == "tcp" { 108 | mutate { 109 | add_field => { 110 | "[source][port]" => "%{[pf_csv][17]}" 111 | "[destination][port]" => "%{[pf_csv][18]}" 112 | "[pf][data_length]" => "%{[pf_csv][19]}" 113 | "[pf][tcp][flags]" => "%{[pf_csv][20]}" 114 | "[pf][tcp][sequence_number]" => "%{[pf_csv][21]}" 115 | "[pf][tcp][ack]" => "%{[pf_csv][22]}" 116 | "[pf][tcp][window]" => "%{[pf_csv][23]}" 117 | "[pf][tcp][urg]" => "%{[pf_csv][24]}" 118 | "[pf][tcp][options]" => "%{[pf_csv][25]}" 119 | } 120 | } 121 | } 122 | # [UDP] 123 | # [ECS compliant fields] source.port, destination.port 124 | # [Not ECS compliant fields] pf.data_length 125 | if [network][protocol] == "udp" { 126 | mutate { 127 | add_field => { 128 | "[source][port]" => "%{[pf_csv][17]}" 129 |
"[destination][port]" => "%{[pf_csv][18]}" 130 | "[pf][data_length]" => "%{[pf_csv][19]}" 131 | } 132 | } 133 | } 134 | } 135 | # [ECS] Rename values/fields for ECS compliance 136 | if [network][direction] =~ /^out$/ { 137 | mutate { 138 | rename => { "[pf][data_length]" => "[destination][bytes]" } 139 | rename => { "[pf][packet][length]" => "[destination][packets]" } 140 | } 141 | } 142 | if [network][direction] =~ /^in$/ { 143 | mutate { 144 | rename => { "[pf][data_length]" => "[source][bytes]" } 145 | rename => { "[pf][packet][length]" => "[source][packets]" } 146 | } 147 | } 148 | if [network][type] == "4" { 149 | mutate { 150 | update => { "[network][type]" => "ipv4" } 151 | } 152 | } 153 | if [network][type] == "6" { 154 | mutate { 155 | update => { "[network][type]" => "ipv6" } 156 | } 157 | } 158 | if [network][direction] =~ /^in$/ { 159 | mutate { 160 | update => { "[network][direction]" => "ingress" } 161 | } 162 | } 163 | if [network][direction] =~ /^out$/ { 164 | mutate { 165 | update => { "[network][direction]" => "egress" } 166 | } 167 | } 168 | } 169 | } 170 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/05-apps.pfelk: -------------------------------------------------------------------------------- 1 | # 05-apps.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: True # 5 | # Description: Parses events based on process.name and further enriches events # 6 | # # 7 | ################################################################################ 8 | # 9 | filter { 10 | ### captive portal ### 11 | # Rename pfSense captive portal log from logportalauth to captiveportal 12 | if [log][syslog][appname] =~ /^logportalauth/ { 13 | mutate { 14 | replace => { "[log][syslog][appname]" => "captiveportal" } 15 | } 16 | } 17 | if [log][syslog][appname] =~ /^captiveportal/ { 18 | mutate { 19 | add_tag => "captive" 20 | add_field => {
"[ecs][version]" => "1.7.0" } 21 | add_field => { "[event][dataset]" => "pfelk.captive" } 22 | rename => { "filter_message" => "captiveportalmessage" } 23 | } 24 | grok { 25 | patterns_dir => [ "/etc/pfelk/patterns" ] 26 | match => [ "captiveportalmessage", "%{CAPTIVEPORTAL}" ] 27 | } 28 | } 29 | ### dhcpd ### 30 | if [log][syslog][appname] =~ /^dhcpd$/ { 31 | mutate { 32 | add_tag => [ "dhcp", "dhcpdv4" ] 33 | add_field => { "[event][dataset]" => "pfelk.dhcp" } 34 | replace => { "[log][syslog][appname]" => "dhcp" } 35 | } 36 | grok { 37 | patterns_dir => [ "/etc/pfelk/patterns" ] 38 | match => [ "filter_message", "%{DHCPD}"] 39 | } 40 | } 41 | ### dpinger ### 42 | if [log][syslog][appname] =~ /^dpinger/ { 43 | mutate { 44 | add_tag => "dpinger" 45 | add_field => { "[event][dataset]" => "pfelk.dpinger" } 46 | } 47 | } 48 | ### haproxy ### 49 | if [log][syslog][appname] =~ /^haproxy/ { 50 | mutate { 51 | add_tag => "haproxy" 52 | add_field => { "[event][dataset]" => "pfelk.haproxy" } 53 | } 54 | grok { 55 | patterns_dir => [ "/etc/pfelk/patterns" ] 56 | match => [ "filter_message", "%{HAPROXY}" ] 57 | } 58 | } 59 | ### nginx ### 60 | if [log][syslog][appname] =~ /^nginx/ { 61 | mutate { 62 | add_tag => "nginx" 63 | add_field => { "[event][dataset]" => "pfelk.nginx" } 64 | replace => { "[log][syslog][appname]" => "nginx" } 65 | } 66 | grok { 67 | patterns_dir => [ "/etc/pfelk/patterns" ] 68 | match => { "filter_message" => "%{NGINX}" } 69 | } 70 | } 71 | ### openvpn ### 72 | if [log][syslog][appname] =~ /^openvpn/ { 73 | mutate { 74 | add_tag => "openvpn" 75 | add_field => { "[event][dataset]" => "pfelk.openvpn" } 76 | } 77 | grok { 78 | patterns_dir => [ "/etc/pfelk/patterns" ] 79 | match => [ "filter_message", "%{OPENVPN}" ] 80 | } 81 | if [openvpn_message] { 82 | grok { 83 | patterns_dir => [ "/etc/pfelk/patterns" ] 84 | match => [ "openvpn_message", "%{OPENVPN_RAW}" ] 85 | } 86 | } 87 | } 88 | ### named ### 89 | if [log][syslog][appname] =~ /^named/ { 90 | mutate 
{ 91 | add_tag => "bind9" 92 | add_field => { "[event][dataset]" => "pfelk.bind9" } 93 | } 94 | grok { 95 | #patterns_dir => [ "/etc/pfelk/patterns" ] 96 | match => [ "filter_message", "%{BIND9}" ] 97 | } 98 | } 99 | ### ntpd ### 100 | if [log][syslog][appname] =~ /^ntpd/ { 101 | mutate { 102 | add_tag => "ntpd" 103 | add_field => { "[event][dataset]" => "pfelk.ntpd" } 104 | } 105 | } 106 | ### php-fpm ### 107 | if [log][syslog][appname] =~ /^php-fpm/ { 108 | mutate { 109 | add_tag => "web_portal" 110 | add_field => { "[event][dataset]" => "pfelk.webportal" } 111 | } 112 | grok { 113 | patterns_dir => [ "/etc/pfelk/patterns" ] 114 | match => { "filter_message" => "%{PF_APP} %{PF_APP_DATA}" } 115 | } 116 | mutate { 117 | lowercase => [ "[pf][app][action]" ] 118 | } 119 | } 120 | ### snort ### 121 | if [log][syslog][appname] =~ /^snort/ { 122 | mutate { 123 | add_tag => "snort" 124 | add_field => { "[ecs][version]" => "1.7.0" } 125 | add_field => { "[event][dataset]" => "pfelk.snort" } 126 | add_field => { "[event][category]" => "intrusion_detection" } 127 | add_field => { "[agent][type]" => "snort" } 128 | } 129 | grok { 130 | patterns_dir => [ "/etc/pfelk/patterns" ] 131 | match => [ "filter_message", "%{SNORT}" ] 132 | } 133 | } 134 | ### suricata ### 135 | if [log][syslog][appname] =~ /^suricata$/ { 136 | if [filter_message] =~ /^{.*}$/ { 137 | json { 138 | source => "filter_message" 139 | target => "[suricata][eve]" 140 | add_tag => "suricata_json" 141 | } 142 | } 143 | if [suricata][eve][src_ip] and ![source][ip] { 144 | mutate { 145 | add_field => { "[source][ip]" => "%{[suricata][eve][src_ip]}" } 146 | } 147 | } 148 | if [suricata][eve][dest_ip] and ![destination][ip] { 149 | mutate { 150 | add_field => { "[destination][ip]" => "%{[suricata][eve][dest_ip]}" } 151 | } 152 | } 153 | if [suricata][eve][src_port] and ![source][port] { 154 | mutate { 155 | add_field => { "[source][port]" => "%{[suricata][eve][src_port]}" } 156 | } 157 | } 158 | if 
[suricata][eve][dest_port] and ![destination][port] { 159 | mutate { 160 | add_field => { "[destination][port]" => "%{[suricata][eve][dest_port]}" } 161 | add_field => { "[threatintel][indicator][ip]" => "%{[source][ip]} %{[suricata][eve][http][url]}" } 162 | } 163 | } 164 | if "suricata_json" not in [tags] { 165 | grok { 166 | patterns_dir => [ "/etc/pfelk/patterns" ] 167 | match => [ "filter_message", "%{SURICATA}" ] 168 | } 169 | } 170 | mutate { 171 | remove_tag => "suricata_json" 172 | add_tag => "suricata" 173 | add_field => { "[event][dataset]" => "pfelk.suricata" } 174 | } 175 | } 176 | ### squid ### 177 | if [log][syslog][appname] == "(squid-1)" { 178 | mutate { 179 | replace => [ "[log][syslog][appname]", "squid" ] 180 | add_field => { "[event][dataset]" => "pfelk.squid" } 181 | } 182 | if [filter_message] =~ /^{.*}$/ { 183 | json { 184 | source => "filter_message" 185 | add_tag => "squid_json" 186 | } 187 | } 188 | if "squid_json" not in [tags] { 189 | grok { 190 | patterns_dir => [ "/etc/pfelk/patterns" ] 191 | match => [ "filter_message", "%{SQUID}" ] 192 | } 193 | } 194 | ### squid ECS => Built-in SIEM JSON ### 195 | if "squid_json" in [tags] { 196 | grok { 197 | match => [ "[url][original]", "%{URIPROTO}://%{URIHOST:referer_domain}%{GREEDYDATA:[url][path]}" ] 198 | } 199 | mutate { 200 | rename => { "[http][response][body][status_code]" => "[http][response][status_code]" } 201 | rename => { "referer_domain" => "[url][domain]" } 202 | } 203 | } 204 | mutate { 205 | remove_tag => "squid_json" 206 | add_tag => "squid" 207 | } 208 | } 209 | ### unbound ### 210 | if [log][syslog][appname] =~ /^unbound/ { 211 | mutate { 212 | add_tag => "unbound" 213 | add_field => { "[ecs][version]" => "1.9.0" } 214 | add_field => { "[event][dataset]" => "pfelk.unbound" } 215 | } 216 | grok { 217 | patterns_dir => [ "/etc/pfelk/patterns" ] 218 | match => [ "filter_message", "%{UNBOUND}" ] 219 | } 220 | ### unbound ECS => Built-in SIEM ### 221 | grok { 222 | match => [ 
"[dns][question][name]", "(\.)?(?<[dns][question][registered_domain]>[^.]+\.[^.]+)$" ] 223 | add_tag => "unbound-registered_domain" 224 | } 225 | if "unbound-registered_domain" not in [tags] { 226 | grok { 227 | match => [ "[dns][question][name]", "(?<[dns][question][registered_domain]>[^.]+\.[^.]+)$" ] 228 | } 229 | } 230 | grok { 231 | match => [ "[dns][question][name]", "(\.)?(?<[dns][question][tld]>[^.]+)$" ] 232 | } 233 | 234 | mutate { 235 | remove_tag => "unbound-registered_domain" 236 | } 237 | } 238 | } 239 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/20-interfaces.pfelk: -------------------------------------------------------------------------------- 1 | # 20-interfaces.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: Adds interface.alias and network.name based on interface.name # 6 | # The interface.alias and network.name fields may be amended as desired # 7 | ################################################################################ 8 | # 9 | ### firewall-1 ### 10 | filter { 11 | ### Change first.network.local to pfSense or OPNsense host name ### 12 | if [host][name] == "first.network.local" { 13 | ### Change interface as desired ### 14 | if [interface][name] =~ /^igb0$/ { 15 | mutate { 16 | add_field => { "[interface][alias]" => "WAN" } 17 | add_field => { "[network][name]" => "FiOS" } 18 | } 19 | } 20 | ### Change interface as desired ### 21 | if [interface][name] =~ /^igb1$/ { 22 | mutate { 23 | add_field => { "[interface][alias]" => "LAN" } 24 | add_field => { "[network][name]" => "Home Network" } 25 | } 26 | } 27 | ### Change interface as desired ### 28 | if [interface][name] =~ /^igb2$/ { 29 | mutate { 30 | add_field => { "[interface][alias]" => "DMZ" } 31 | add_field => { "[network][name]" => "Exposed Network" } 32 | } 33 | } 34 | ### Change interface as desired ### 35
| if [interface][name] =~ /^ix0$/ { 36 | mutate { 37 | add_field => { "[interface][alias]" => "LAN" } 38 | add_field => { "[network][name]" => "Home Network" } 39 | } 40 | } 41 | ### Change interface as desired ### 42 | if [interface][name] =~ /^ix2$/ { 43 | mutate { 44 | add_field => { "[interface][alias]" => "DEV" } 45 | add_field => { "[network][name]" => "Test Network" } 46 | } 47 | } 48 | ### Change interface as desired ### 49 | if [interface][name] =~ /^ix0_vlan300$/ { 50 | mutate { 51 | add_field => { "[interface][alias]" => "WiFi" } 52 | add_field => { "[network][name]" => "WiFi Network" } 53 | } 54 | } 55 | ### Change interface as desired ### 56 | if [interface][name] =~ /^ix0_vlan500$/ { 57 | mutate { 58 | add_field => { "[interface][alias]" => "IoT" } 59 | add_field => { "[network][name]" => "IoT Network" } 60 | } 61 | } 62 | ### Change interface as desired ### 63 | if [interface][name] =~ /^ix0_vlan3000$/ { 64 | mutate { 65 | add_field => { "[interface][alias]" => "WiFi" } 66 | add_field => { "[network][name]" => "Guest Network" } 67 | } 68 | } 69 | ### Change interface as desired ### 70 | if [interface][name] =~ /^lo0$/ { 71 | mutate { 72 | add_field => { "[interface][alias]" => "Link-Local" } 73 | update => { "[network][direction]" => "%{[network][direction]}bound" } 74 | update => { "[network][type]" => "ipv%{[network][type]}" } 75 | } 76 | } 77 | ### Fallback interface ### 78 | if ![interface][alias] and [interface][name] { 79 | mutate { 80 | add_field => { "[interface][alias]" => "%{[interface][name]}" } 81 | add_field => { "[network][name]" => "%{[interface][name]}" } 82 | } 83 | } 84 | } 85 | } 86 | ### firewall-2 ### 87 | filter { 88 | ### Change second.network.local to pfSense or OPNsense host name ### 89 | if [host][name] == "second.network.local" { 90 | ### Change interface as desired ### 91 | if [interface][name] =~ /^igb0$/ { 92 | mutate { 93 | add_field => { "[interface][alias]" => "WAN" } 94 | add_field => { "[network][name]" => "FiOS" }
95 | } 96 | } 97 | ### Change interface as desired ### 98 | if [interface][name] =~ /^igb1$/ { 99 | mutate { 100 | add_field => { "[interface][alias]" => "LAN" } 101 | add_field => { "[network][name]" => "Home Network" } 102 | } 103 | } 104 | ### Change interface as desired ### 105 | if [interface][name] =~ /^igb2$/ { 106 | mutate { 107 | add_field => { "[interface][alias]" => "DEV" } 108 | add_field => { "[network][name]" => "Lab" } 109 | } 110 | } 111 | ### Change interface as desired ### 112 | if [interface][name] =~ /^igb3$/ { 113 | mutate { 114 | add_field => { "[interface][alias]" => "DMZ" } 115 | add_field => { "[network][name]" => "Exposed Network" } 116 | } 117 | } 118 | ### Change interface as desired ### 119 | if [interface][name] =~ /^igb1_vlan2000$/ { 120 | mutate { 121 | add_field => { "[interface][alias]" => "VLAN" } 122 | add_field => { "[network][name]" => "Isolated Network" } 123 | } 124 | } 125 | ### Change interface as desired ### 126 | if [interface][name] =~ /^lo0$/ { 127 | mutate { 128 | add_field => { "[interface][alias]" => "Link-Local" } 129 | update => { "[network][direction]" => "%{[network][direction]}bound" } 130 | update => { "[network][type]" => "ipv%{[network][type]}" } 131 | } 132 | } 133 | ### Fallback interface ### 134 | if ![interface][alias] and [interface][name] { 135 | mutate { 136 | add_field => { "[interface][alias]" => "%{[interface][name]}" } 137 | add_field => { "[network][name]" => "%{[interface][name]}" } 138 | } 139 | } 140 | } 141 | } 142 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/30-geoip.pfelk: -------------------------------------------------------------------------------- 1 | # 30-geoip.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: Enriches source.ip and destination.ip fields with GeoIP data # 6 | # For MaxMind, remove all instances of 
"#MMR#" or leave for built-in GeoIP # 7 | ################################################################################ 8 | # 9 | filter { 10 | if "pfelk" in [tags] { 11 | if [source][ip] { 12 | ### Check if source.ip address is private 13 | cidr { 14 | address => [ "%{[source][ip]}" ] 15 | network => [ "0.0.0.0/32", "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/4", "255.255.255.255/32", "fe80::/10", "fc00::/7", "ff00::/8", "::1/128", "::" ] 16 | add_tag => "IP_Private_Source" 17 | } 18 | if "IP_Private_Source" not in [tags] { 19 | geoip { 20 | source => "[source][ip]" 21 | #MMR# database => "/var/lib/GeoIP/GeoLite2-City.mmdb" 22 | } 23 | geoip { 24 | source => "[source][ip]" 25 | default_database_type => 'ASN' 26 | #MMR# database => "/var/lib/GeoIP/GeoLite2-ASN.mmdb" 27 | } 28 | mutate { 29 | add_tag => "GeoIP_Source" 30 | } 31 | } 32 | } 33 | if [destination][ip] { 34 | ### Check if destination.ip address is private 35 | cidr { 36 | address => [ "%{[destination][ip]}" ] 37 | network => [ "0.0.0.0/32", "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/4", "255.255.255.255/32", "fe80::/10", "fc00::/7", "ff00::/8", "::1/128", "::" ] 38 | add_tag => "IP_Private_Destination" 39 | } 40 | if "IP_Private_Destination" not in [tags] { 41 | geoip { 42 | source => "[destination][ip]" 43 | #MMR# database => "/var/lib/GeoIP/GeoLite2-City.mmdb" 44 | #ECSv8# target => "[destination]" 45 | } 46 | geoip { 47 | source => "[destination][ip]" 48 | default_database_type => 'ASN' 49 | #MMR# database => "/var/lib/GeoIP/GeoLite2-ASN.mmdb" 50 | } 51 | mutate { 52 | add_tag => "GeoIP_Destination" 53 | } 54 | } 55 | } 56 | } 57 | ### PROXY ### 58 | if "haproxy" in [tags] or "nginx" in [tags] { 59 | if [client][ip] { 60 | # Check if client.ip address is private 61 | cidr { 62 | address => [ "%{[client][ip]}" ] 63 | network => [ "0.0.0.0/32", "10.0.0.0/8", "127.0.0.0/8", "169.254.0.0/16", 
"172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/4", "255.255.255.255/32", "fe80::/10", "fc00::/7", "ff00::/8", "::1/128", "::" ] 64 | add_tag => "IP_Private_Proxy" 65 | } 66 | if "IP_Private_Proxy" not in [tags] { 67 | geoip { 68 | source => "[client][ip]" 69 | #MMR# database => "/var/lib/GeoIP/GeoLite2-City.mmdb" 70 | } 71 | geoip { 72 | source => "[client][ip]" 73 | default_database_type => 'ASN' 74 | #MMR# database => "/var/lib/GeoIP/GeoLite2-ASN.mmdb" 75 | } 76 | mutate { 77 | add_tag => "GeoIP_Source" 78 | } 79 | } 80 | } 81 | } 82 | } 83 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/35-rules-desc.bkg: -------------------------------------------------------------------------------- 1 | # 35-rules-desc.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: No - Optional # 5 | # Description: Checks for the presence of the rule_number field; if present, # 6 | translates the rule_number into a referenced description.
# 7 | ################################################################################ 8 | # 9 | filter { 10 | if "firewall" in [tags] { 11 | if [rule][id] { 12 | translate { 13 | source => "[rule][id]" 14 | target => "[rule][name]" 15 | dictionary_path => "/etc/pfelk/databases/rule-names.csv" 16 | refresh_interval => 60 17 | refresh_behaviour => replace 18 | fallback => "%{[rule][id]}" 19 | } 20 | mutate { 21 | add_field => { "[rule][description]" => "%{[interface][alias]}: %{[rule][name]}" } 22 | } 23 | } 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/36-ports-desc.bkg: -------------------------------------------------------------------------------- 1 | # 36-ports-desc.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: Checks for the presence of the port field; if present, # 6 | translates the port into a referenced description.
# 7 | ################################################################################ 8 | # 9 | filter { 10 | if "firewall" in [tags] { 11 | if [network][iana_number] { 12 | translate { 13 | source => "[network][iana_number]" 14 | target => "[network][application]" 15 | dictionary_path => "/etc/pfelk/databases/service-names-port-numbers.csv" 16 | refresh_interval => 300 17 | refresh_behaviour => replace 18 | #fallback => "%{[network][iana_number]}" 19 | } 20 | } 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/37-enhanced_user_agent.pfelk: -------------------------------------------------------------------------------- 1 | # 37-enhanced_user_agent.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: enriches user_agent.original field for suricata/squid logs # 6 | # # 7 | ################################################################################ 8 | # 9 | filter { 10 | if [user_agent][original] { 11 | ruby { 12 | code => " 13 | http_useragent = event.get('[user_agent][original]') 14 | event.set('[user_agent][meta][total_length]', http_useragent.length) 15 | " 16 | tag_on_exception => "_rubyexception-all-http_user_agent_parsing" 17 | } 18 | useragent { 19 | source => "[user_agent][original]" 20 | target => "user_agent" 21 | # Add field if successful for checking later 22 | add_field => { "[@metadata][ua_parse]" => "true" } 23 | } 24 | # UA to new ECS 25 | if [@metadata][ua_parse] == "true" { 26 | if [user_agent][os_major] and [user_agent][os_minor] and [user_agent][patch] { 27 | mutate { 28 | convert => { 29 | "[user_agent][os_major]" => "string" 30 | "[user_agent][os_minor]" => "string" 31 | "[user_agent][patch]" => "string" 32 | } 33 | remove_field => ["[user_agent][os][version]"] 34 | } 35 | # mutate { 36 | # add_field => { 37 | # "[user_agent][os][version]" => 
"%{[user_agent][os_major]}.%{[user_agent][os_minor]}-%{[user_agent][patch]}" 38 | # } 39 | # } 40 | } 41 | mutate { 42 | rename => { 43 | "[user_agent][os]" => "[user_agent][os][full]" 44 | "[user_agent][os_name]" => "[user_agent][os][name]" 45 | "[user_agent][device]" => "[user_agent][device][name]" 46 | } 47 | remove_field => ["[user_agent][os][major]"] 48 | remove_field => ["[user_agent][os][minor]"] 49 | remove_field => ["[user_agent][patch]"] 50 | } 51 | } 52 | } 53 | } 54 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/38-enhanced_url.pfelk: -------------------------------------------------------------------------------- 1 | # 38-enhanced_url.pfelk 2 | ################################################################################ 3 | # Version: 21.04 # 4 | # Required: No - Optional # 5 | # Description: URL parsing, normalization, and enrichment for url.original # 6 | # field(s) found in suricata and squid logs. # 7 | ################################################################################ 8 | # 9 | filter { 10 | if [url][original] { 11 | ruby { 12 | code => ' 13 | require "addressable/uri" 14 | url = event.get("[url][original]") 15 | url_total_paths = 0 16 | # Length of url 17 | url_length = url.length 18 | # Total "paths" (ie: "/") 19 | url_total_paths = url.split("/").length - 1 20 | # If url after being cleaned or originally is only "/" then we will get -1 so need to say if -1 set to 0 21 | if url_total_paths == -1 22 | url_total_paths = 1 23 | end 24 | # url Contains non ascii 25 | url_has_non_ascii = !url.ascii_only? 26 | # url Contains non whitespace 27 | url_has_whitespace = url.match?(/\s/) 28 | # URL/HTTP is a best practice and not strict. 
29 | begin 30 | url_parsed = Addressable::URI.parse(url) 31 | url_extension = url_parsed.extname 32 | url_query = url_parsed.query 33 | url_user = url_parsed.user 34 | url_password = url_parsed.password 35 | url_scheme = url_parsed.scheme 36 | url_port = url_parsed.port 37 | url_host = url_parsed.host 38 | # may not exist, but want to grab these incase log/event already (has) set these 39 | previous_url_extension = event.get("[url][extension]") 40 | previous_url_query = event.get("[url][query]") 41 | previous_url_user = event.get("[client][user][name]") 42 | previous_url_password = event.get("[client][user][password]") 43 | previous_url_scheme = event.get("[url][scheme]") 44 | previous_url_port = event.get("[url][port]") 45 | previous_url_host = event.get("[url][domain]") 46 | previous_url_scheme = event.get("[url][scheme]") 47 | if !url_extension.nil? && !url_extension.empty? && !defined?(previous_url_extension).nil? 48 | event.set("[url][extension]", url_extension[1..-1]) 49 | end 50 | if !url_query.nil? && !url_query.empty? && !defined?(previous_url_query).nil? 51 | event.set("[url][query]", url_query) 52 | end 53 | if !url_user.nil? && !url_user.empty? && !defined?(previous_url_user).nil? 54 | event.set("[client][user][name]", url_user) 55 | end 56 | if !url_password.nil? && !url_password.empty? && !defined?(previous_url_password).nil? 57 | event.set("[client][user][password]", url_password) 58 | end 59 | #if !url_scheme.nil? && !url_scheme.empty? && !defined?(previous_url_scheme).nil? 60 | if !url_scheme.nil? && !url_scheme.empty? 61 | event.set("[url][scheme]", url_scheme) 62 | end 63 | if !url_port.nil? && !url_port.to_s.empty? && !defined?(previous_url_port).nil? 
64 | event.set("[url][port]", url_port) 65 | end 66 | rescue Addressable::URI::InvalidURIError 67 | # Add a value so we know there was an erroneous url 68 | current_tagged_log = event.get("tags") 69 | tagged_value = "invalid url" 70 | # Field exists so append 71 | if !defined?(current_tagged_log).nil? 72 | # If multiple values append to existing array 73 | if current_tagged_log.is_a? Enumerable 74 | new_tagged_log = current_tagged_log.push(tagged_value) 75 | # Single value so create an array 76 | else 77 | new_tagged_log = [ current_tagged_log, tagged_value ] 78 | end 79 | event.set("tags", new_tagged_log) 80 | # Field doesn"t exist so safe to create 81 | else 82 | event.set("tags", tagged_value) 83 | end 84 | rescue ArgumentError => e 85 | # Add a value so we know there was a string error 86 | current_tagged_log = event.get("meta_log_tags") 87 | tagged_value = %Q"url error: #{e.message}" 88 | # Field exists so append 89 | if !defined?(current_tagged_log).nil? 90 | # If multiple values append to existing array 91 | if current_tagged_log.is_a? 
Enumerable 92 | new_tagged_log = current_tagged_log.push(tagged_value) 93 | # Single value so create an array 94 | else 95 | new_tagged_log = [ current_tagged_log, tagged_value ] 96 | end 97 | event.set("tags", new_tagged_log) 98 | # Field does not exist so safe to create 99 | else 100 | event.set("tags", tagged_value) 101 | end 102 | end 103 | # Set additional event parameters 104 | event.set("[url][meta][total_length]", url_length) 105 | event.set("[url][meta][total_paths]", url_total_paths) 106 | event.set("[url][meta][has_non_ascii]", url_has_non_ascii) 107 | event.set("[url][meta][has_whitespace]", url_has_whitespace) 108 | ' 109 | tag_on_exception => "_rubyexception-all-url_enrich" 110 | } 111 | } 112 | } 113 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/45-enhanced_private.pfelk: -------------------------------------------------------------------------------- 1 | # 45-enhanced_private.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: Adds customized host.hostname via dictionary lookup with domain # 6 | # and adds customized GeoIP data for local IP addresses # 7 | ################################################################################ 8 | # 9 | filter { 10 | if "dhcp" in [tags] or "unbound" in [tags] or "squid" in [tags] { 11 | if [client][ip] { 12 | translate { 13 | source => "[client][ip]" 14 | target => "[host][hostname]" 15 | ################################################################################ 16 | ### Edit referenced dictionary for local IP/Hostnames ### 17 | ################################################################################ 18 | dictionary_path => "/etc/pfelk/databases/private-hostnames.csv" 19 | refresh_interval => 600 20 | refresh_behaviour => replace 21 | fallback => "%{[client][ip]}" 22 | } 23 | cidr { 24 | address => [ "%{[client][ip]}" ] 25 | 
network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ] 26 | add_tag => "Private_Client_IP" 27 | } 28 | } 29 | if "Private_Client_IP" in [tags] { 30 | mutate { 31 | ################################################################################ 32 | ### Amend pfelk.dev to your own domain name ### 33 | ################################################################################ 34 | replace => { "[host][name]" => "%{[host][hostname]}.pfelk.dev" } 35 | ################################################################################ 36 | ### Amend the values below as desired ### 37 | ################################################################################ 38 | add_field => { 39 | "[client][as][organization][name]" => "pfelk" 40 | "[client][geo][location]" => "32.309, -64.813" 41 | "[client][geo][city_name]" => "City" 42 | "[client][geo][country_name]" => "Country" 43 | "[client][geo][region_name]" => "State" 44 | "[client][geo][country_iso_code]" => "US" 45 | } 46 | ################################################################################ 47 | lowercase => [ "[host][hostname]" ] 48 | lowercase => [ "[host][name]" ] 49 | } 50 | } 51 | mutate { 52 | remove_tag => [ "Private_Client_IP" ] 53 | } 54 | } 55 | } 56 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/49-cleanup.pfelk: -------------------------------------------------------------------------------- 1 | # 49-cleanup.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: False - Optional # 5 | # Description: Removes unwanted logs based on the process.pid field and # 6 | # additional fields.
Additionally, pf.tcp.options is split (multiple values) # 7 | ################################################################################ 8 | # 9 | # Update as needed to remove unwanted logs based on the process.pid field 10 | filter { 11 | # if [process][pid] in ["78", "46", "45", "43"] { 12 | # drop { } 13 | # } 14 | mutate { 15 | remove_field => ["filter_message", "pfelk", "pfelk_csv"] 16 | split => { "[pf][tcp][options]" => ";" } 17 | rename => { "message" => "[event][original]" } 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /etc/pfelk/conf.d/50-outputs.pfelk: -------------------------------------------------------------------------------- 1 | # 50-outputs.pfelk 2 | ################################################################################ 3 | # Version: 23.08 # 4 | # Required: True # 5 | # Description: Sends enriched logs to Elasticsearch. # 6 | # # 7 | ################################################################################ 8 | # 9 | filter { 10 | if [log][syslog][appname] == "captiveportal" { 11 | mutate { add_field => { "[data_stream][namespace]" => "captiveportal" } } 12 | } else if [log][syslog][appname] == "dhcp" { 13 | mutate { add_field => { "[data_stream][namespace]" => "dhcp" } } 14 | } else if [log][syslog][appname] == "firewall" { 15 | mutate { add_field => { "[data_stream][namespace]" => "firewall" } } 16 | } else if "bind9" in [tags] or "dpinger" in [tags] or "ntpd" in [tags] or "web_portal" in [tags] { 17 | mutate { add_field => { "[data_stream][namespace]" => "firewall_processes" } } 18 | } else if [log][syslog][appname] == "haproxy" { 19 | mutate { add_field => { "[data_stream][namespace]" => "haproxy" } } 20 | } else if [log][syslog][appname] == "nginx" { 21 | mutate { add_field => { "[data_stream][namespace]" => "nginx" } } 22 | } else if [log][syslog][appname] == "openvpn" { 23 | mutate { add_field => { "[data_stream][namespace]" => "openvpn" } } 24 | } else if 
[log][syslog][appname] == "unbound" { 25 | mutate { add_field => { "[data_stream][namespace]" => "unbound" } } 26 | } else if [log][syslog][appname] == "suricata" { 27 | mutate { add_field => { "[data_stream][namespace]" => "suricata" } } 28 | } else if [log][syslog][appname] == "snort" { 29 | mutate { add_field => { "[data_stream][namespace]" => "snort" } } 30 | } else if [log][syslog][appname] == "squid" { 31 | mutate { add_field => { "[data_stream][namespace]" => "squid" } } 32 | } else { 33 | mutate { add_field => { "[data_stream][namespace]" => "unknown" } } 34 | } 35 | } 36 | output { 37 | # if [type] == "beats" { 38 | # elasticsearch { 39 | # hosts => ["http://es01:9200"] 40 | # index => "%{[@metadata][beat]}-%{[@metadata][version]}" 41 | # ecs_compatibility => "v1" 42 | # manage_template => false 43 | # ### X-Pack Username and Password ### 44 | # user => USERNAMEHERE 45 | # password => PASSWORDHERE 46 | # } 47 | # } 48 | elasticsearch { 49 | data_stream => "true" 50 | data_stream_type => "logs" 51 | data_stream_dataset => "pfelk" 52 | ### Unsecured Method ### 53 | # hosts => ["http://es01:9200"] 54 | ### X-Pack Security Method ### 55 | hosts => ["https://es01:9200"] 56 | # ssl => true 57 | # [NONDOCKER] cacert => '/etc/logstash/config/certs/http_ca.crt' 58 | cacert => '/usr/share/logstash/config/certs/ca/ca.crt' 59 | user => "elastic" 60 | password => "changeme" 61 | } 62 | } 63 | -------------------------------------------------------------------------------- /etc/pfelk/databases/private-hostnames.csv: -------------------------------------------------------------------------------- 1 | "IP","Hostname" 2 | -------------------------------------------------------------------------------- /etc/pfelk/databases/rule-names.csv: -------------------------------------------------------------------------------- 1 | "Rule","Label" 2 | "0","null" 3 | -------------------------------------------------------------------------------- /etc/pfelk/patterns/openvpn.grok: 
-------------------------------------------------------------------------------- 1 | # openvpn.grok 2 | ################################################################################ 3 | # Version: 23.03-beta # 4 | # # 5 | # OPNsense/pfSense openvpn log grok pattern for pfELK # 6 | # # 7 | ################################################################################ 8 | # 9 | # OPENVPN_RAW 10 | OPENVPN_RAW (%{OPENVPN_USER_AUTH}|%{OPENVPN_USER_ACT}|%{OPENVPN_USER_IP}|%{OPENVPN_OTHER}) 11 | 12 | OPENVPN_USER_AUTH user\s'%{USER:[openvpn][user][name]}'\s%{WORD:[openvpn][event][action]} 13 | OPENVPN_USER_ACT .*'%{HOSTNAME:server}'\suser\s'%{USER:[openvpn][user][name]}'.*'%{IP:[openvpn][client][ip]}:%{INT:[openvpn][client][port]}'\s-\s%{WORD:[openvpn][event][action]} 14 | OPENVPN_USER_IP %{USERNAME:[openvpn][user][name]}/%{IP:[openvpn][client][address]}:%{INT:[openvpn][client][port]}.*=%{IPV4:[openvpn][client][ipv4]},(.*=%{IPV6:[openvpn][client][ipv6]})? 15 | OPENVPN_OTHER %{IP:[openvpn][client][ip]}:%{INT:[openvpn][client][port]}%{GREEDYDATA:[openvpn][message]} 16 | 17 | 18 | ################################################################################ 19 | # OPENVPN_RAW 20 | # OPENVPN_RAW (%{OPENVPN_AUTH}|%{OPENVPN_MGT}|%{OPENVPN_MSG}|%{OPENVPN_IV}|%{OPENVPN_USR}|%{OPENVPN_EST}|%{OPENVPN_INFO}|%{OPENVPN_OTHER}|%{OPENVPN_USER_SENT_CONTROL}) 21 | 22 | # OPENVPN - AUTH 23 | # OPENVPN_AUTH user '%{USERNAME:[openvpn][user][name]}' %{WORD:[openvpn][event][action]} %{TIME:[openvpn][event][starttime]} 24 | 25 | # OPENVPN - EST 26 | # OPENVPN_EST %{GREEDYDATA:[openvpn][event][reason]} \[AF_INET]%{IP:[openvpn][client][ip]}:%{INT:[openvpn][client][port]} %{TIME:[openvpn][event][starttime]} 27 | 28 | # OPENVPN - INFO 29 | # OPENVPN_INFO %{GREEDYDATA:[openvpn][event][reason]}\: %{DATA:[openvpn][event][reference]} %{TIME:[openvpn][event][starttime]} 30 | 31 | # OPENVPN - IP 32 | # OPENVPN_IP %{IP:[openvpn][client][ip]}:%{INT:[openvpn][client][port]} 33 | 34 | # OPENVPN - IV
35 | # OPENVPN_IV %{OPENVPN_IP} (peer info: )|(%{IV_VER}|%{IV_PLAT}|%{IV_PROTO}|%{IV_NCP}|%{IV_CIPHERS}|%{IV_LZ4}|%{IV_LZ4v2}|%{IV_COMP_STUB}|%{IV_COMP_STUBv2}|%{IV_TCPNL}|%{IV_GUI_VER}) 36 | # IV_VER IV_VER=%{GREEDYDATA:[openvpn][iv][version]} %{TIME:[openvpn][event][starttime]} 37 | # IV_PLAT IV_PLAT=%{WORD:[openvpn][iv][platform]} %{TIME:[openvpn][event][starttime]} 38 | # IV_PROTO IV_PROTO=%{INT:[openvpn][iv][protocol]} %{TIME:[openvpn][event][starttime]} 39 | # IV_NCP IV_NCP=%{INT:[openvpn][iv][ncp]} %{TIME:[openvpn][event][starttime]} 40 | # IV_CIPHERS IV_CIPHERS=%{GREEDYDATA:[openvpn][tls][client][supported_ciphers]} %{TIME:[openvpn][event][starttime]} 41 | # IV_LZ4 IV_LZ4=%{INT:[openvpn][iv][lz4]} %{TIME:[openvpn][event][starttime]} 42 | # IV_LZ4v2 IV_LZ4v2=%{INT:[openvpn][iv][lz4v2]} %{TIME:[openvpn][event][starttime]} 43 | # IV_COMP_STUB IV_COMP_STUB=%{INT:[openvpn][iv][comp_stub]} %{TIME:[openvpn][event][starttime]} 44 | # IV_COMP_STUBv2 IV_COMP_STUBv2=%{INT:[openvpn][iv][comp_stubv2]} %{TIME:[openvpn][event][starttime]} 45 | # IV_TCPNL IV_TCPNL=%{INT:[openvpn][iv][tcpnl]} %{TIME:[openvpn][event][starttime]} 46 | # IV_GUI_VER IV_GUI_VER=%{WORD:[openvpn][iv][gui_ver]} %{TIME:[openvpn][event][starttime]} 47 | 48 | # OPENVPN - MANAGEMENT 49 | # OPENVPN_MGT (%{OPENVPN_MGT_CLIENT}|%{OPENVPN_MGT_CMD}) 50 | # OPENVPN_MGT_CLIENT (?<[openvpn][management]>(Client)) (?<[openvpn][management][status]>(connected|disconnected))|(%{OPENVPN_MGT_CLIENT_C}|%{OPENVPN_MGT_CLIENT_D}) 51 | # OPENVPN_MGT_CLIENT_C %{TIME:[openvpn][event][starttime]} 52 | # OPENVPN_MGT_CLIENT_D from %{GREEDYDATA:[openvpn][location]} %{TIME:[openvpn][event][starttime]} 53 | # OPENVPN_MGT_CMD (?<[openvpn][management]>(CMD)) \'%{GREEDYDATA:[openvpn][management][message]}\' %{TIME:[openvpn][event][starttime]} 54 | 55 | # OPENVPN - MSG 56 | # OPENVPN_MSG %{OPENVPN_IP} VERIFY (?<[openvpn][event][kind]>(WARNING|SCRIPT OK|OK)): (depth=)?(%{INT:[openvpn][event][code]},)?
%{GREEDYDATA:[openvpn][tls][issuer]} %{TIME:[openvpn][event][starttime]} 57 | 58 | # OPENVPN - OTHER 59 | # OPENVPN_OTHER %{OPENVPN_IP} .* %{TIME:[openvpn][event][starttime]} IP 60 | 61 | #OPENVPN_USR 62 | # OPENVPN_USR %{USERNAME:[openvpn][user][name]}/%{IP:[openvpn][client][address]}:%{INT:[openvpn][client][port]}|(%{OPENVPN_USR_CIPHER}|%{OPENVPN_USR_OTHER}|%{OPENVPN_USER_SENT_CONTROL}) 63 | # OPENVPN_USR_CIPHER (?<[openvpn][type]>(Data Channel|Outgoing Data Channel|Outgoing Data Channel))?(\:) %{GREEDYDATA:[openvpn][cipher][message]} '%{GREEDYDATA:[openvpn][tls][cipher]}' (initialized with %{INT:[openvpn][cipher][bit_length]} bit key)?(\s)?%{TIME:[openvpn][event][starttime]} 64 | # OPENVPN_USR_OTHER .* %{TIME:[openvpn][event][starttime]} 65 | 66 | #OPENVPN_USER_SENT_CONTROL 67 | # OPENVPN_USER_SENT_CONTROL SENT CONTROL \[%{USERNAME:[openvpn][user][name_test]}\]\: \'PUSH_REPLY,dhcp-option DNS %{IP:[openvpn][dns][ip]},redirect-gateway def1,route-gateway %{IP:[openvpn][gateway][ip]},topology subnet,ping 10,ping-restart 60,ifconfig %{IP:[openvpn][client][nat][ip]} %{IP:[openvpn][client][nat][subnet]},peer-id %{INT:[openvpn][peerid]},cipher %{GREEDYDATA:[openvpn][tls][cipher]} 68 | -------------------------------------------------------------------------------- /etc/pfelk/patterns/pfelk.grok: -------------------------------------------------------------------------------- 1 | # pfelk.grok 2 | ################################################################################ 3 | # Version: 23.08b # 4 | # # 5 | # # 6 | # # 7 | ################################################################################ 8 | # 9 | # 10 | # PFELK 11 | PFELK (%{PFSENSE}|%{OPNSENSE}|%{RFC5424}) 12 | 13 | # pfSense | OPNsense 14 | PFSENSE %{SYSLOGTIMESTAMP:[event][created]}\s(%{SYSLOGHOST:[log][syslog][hostname]}\s)?%{PROG:[log][syslog][appname]}(\[%{POSINT:[log][syslog][procid]}\])?\:\s%{GREEDYDATA:filter_message} 15 | OPNSENSE 
%{SYSLOGTIMESTAMP:[event][created]}\s%{SYSLOGHOST:[log][syslog][hostname]}\s%{PROG:[log][syslog][appname]}\[%{POSINT:[log][syslog][procid]}\]\:\s%{GREEDYDATA:filter_message} 16 | 17 | # OPNsense||pfSense RFC5424 18 | RFC5424 (%{INT:[log][syslog][version]}\s*)%{TIMESTAMP_ISO8601:[event][created]}\s%{SYSLOGHOST:[log][syslog][hostname]}\s%{PROG:[log][syslog][appname]}(\s%{POSINT:[log][syslog][procid]})?(\s\-\s\-\s(\-\s)?)?(\s\-\s\[meta\ssequenceId\=(\\)?\"%{NUMBER:[event][sequence]}(\\)?\"\])?\s%{GREEDYDATA:filter_message} 19 | 20 | # CAPTIVE PORTAL (Optional) 21 | CAPTIVEPORTAL (%{CP_PFSENSE}|%{CP_OPNSENSE}) 22 | CP_OPNSENSE %{WORD:[event][action]}\s%{GREEDYDATA:[client][user][name]}\s\(%{IP:[client][ip]}\)\s%{WORD:[observer][ingress][interface][alias]}\s%{INT:[observer][ingress][zone]} 23 | # ToDo - Clean-up pfSense GROK pattern below 24 | CP_PFSENSE (%{CAPTIVE1}|%{CAPTIVE2}) 25 | CAPTIVE1 %{WORD:[observer][ingress][interface][alias]}:\s%{DATA:[observer][ingress][zone]}\s\-\s%{WORD:[event][action]}\:\s%{GREEDYDATA:[client][user][name]},\s%{MAC:[client][mac]},\s%{IP:[client][ip]}(,\s%{GREEDYDATA:[event][reason]})? 26 | CAPTIVE2 %{WORD:[observer][ingress][interface][alias]}:\s%{DATA:[observer][ingress][zone]}\s\-\s%{GREEDYDATA:[event][action]}\:\s%{GREEDYDATA:[client][user][name]},\s%{MAC:[client][mac]},\s%{IP:[client][ip]}(,\s%{GREEDYDATA:[event][reason]})? 27 | 28 | # DHCPv4 (Optional) 29 | DHCPD DHCP(%{DHCPD_DISCOVER}|%{DHCPD_DUPLICATE}|%{DHCPD_OFFER_ACK}|%{DHCPD_REQUEST}|%{DHCPD_DECLINE}|%{DHCPD_RELEASE}|%{DHCPD_INFORM}|%{DHCPD_LEASE})|%{DHCPD_REUSE}|%{DHCPDv6}|(%{GREEDYDATA:[DHCPD][message]})? 30 | DHCPD_DISCOVER (?<[dhcp][operation]>DISCOVER) from %{MAC:[dhcpv4][client][mac]}( \(%{DATA:[dhcpv4][option][hostname]}\))? %{DHCPD_VIA} 31 | DHCPD_DECLINE (?<[dhcp][operation]>DECLINE) of %{IP:[dhcpv4][client][ip]} from %{MAC:[dhcpv4][client][mac]}( \(%{DATA:[dhcpv4][option][hostname]}\))? 
%{DHCPD_VIA} 32 | DHCPD_DUPLICATE uid %{WORD:[dhcp][operation]} %{IP:[dhcpv4][client][ip]} for client %{MAC:[dhcpv4][client][mac]} is %{WORD:[error][code]} on %{GREEDYDATA:[dhcpv4][client][address]} 33 | DHCPD_INFORM (?<[dhcp][operation]>INFORM) from %{IP:[dhcpv4][client][ip]}? %{DHCPD_VIA} 34 | DHCPD_LEASE (?<[dhcp][operation]>LEASE(QUERY|UNKNOWN|ACTIVE|UNASSIGNED)) (from|to) %{IP:[dhcpv4][client][ip]} for (IP %{IP:[dhcpv4][query][ip]}|client-id %{NOTSPACE:[dhcpv4][query][id]}|MAC address %{MAC:[dhcpv4][query][mac]})( \(%{NUMBER:[dhcpv4][query][associated]} associated IPs\))? 35 | DHCPD_OFFER_ACK (?<[dhcp][operation]>(OFFER|N?ACK)) on %{IP:[dhcpv4][client][ip]} to %{MAC:[dhcpv4][client][mac]}( \(%{DATA:[dhcpv4][option][hostname]}\))? %{DHCPD_VIA} 36 | DHCPD_RELEASE (?<[dhcp][operation]>RELEASE) of %{IP:[dhcpv4][client][ip]} from %{MAC:[dhcpv4][client][mac]}( \(%{DATA:[dhcpv4][option][hostname]}\))? %{DHCPD_VIA} \((?<[dhcpv4][lease]>(not )?found)\) 37 | DHCPD_REQUEST (?<[dhcp][operation]>REQUEST) for %{IP:[dhcpv4][client][ip]}( \(%{DATA:[dhcpv4][server][ip]}\))? from %{MAC:[dhcpv4][client][mac]}( \(%{DATA:[dhcpv4][option][hostname]}\))? %{DHCPD_VIA} 38 | DHCPD_VIA via (%{IP:[dhcpv4][relay][ip]}|(?<[interface][name]>[^: ]+)) 39 | DHCPD_REUSE (?<[dhcpv4][operation]>reuse_lease): lease age %{INT:[dhcpv4][lease][duration]}.* lease for %{IPV4:[dhcpv4][client][ip]} 40 | 41 | # DHCPv6 (Optional - In Development) 42 | DHCPDv6 (%{DHCPv6_REPLY}|%{DHCPv6_ACTION}|%{DHCPv6_REUSE}) 43 | DHCPv6_REPLY (?<[dhcpv6][operation]>Advertise|Reply) NA: address %{IP:[dhcpv6][client][ip]} to client with duid %{GREEDYDATA:[dhcpv6][duid]}\siaid\s\=\s%{INT:[dhcpv6][iaid]} valid for %{INT:[dhcpv6][lease][duration]} seconds 44 | DHCPv6_ACTION (?<[dhcpv6][operation]>(Request|Picking|Sending Reply|Sending Advertise|Confirm|Solicit|Renew))(\s)?(message)?(\s)?(to|from)?(\s)?(pool address)? %{IP:[dhcpv6][client][ip]}(\s)?(port %{INT:[dhcpv6][client][port]})?(, transaction ID %{BASE16FLOAT:[dhcpv6][transaction][id]})?
45 | DHCPv6_REUSE (?<[dhcpv6][operation]>Reusing lease) for: %{IPV6:[dhcpv6][client][ip]}, age %{INT:[dhcpv6][lease][age]}.*preferred: %{INT:[dhcpv6][lease][age][preferred]}, valid %{INT:[dhcpv6][lease][age][valid]} 46 | 47 | # HAPROXY 48 | HA_PROXY (%{HAPROXY}|%{HAPROXY_TCP}) 49 | HAPROXY %{IP:[client][ip]}:%{INT:[client][port]} \[%{HAPROXYDATE:[haproxy][timestamp]}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{INT:[haproxy][time_request]}/%{INT:[haproxy][time_queue]}/%{INT:[haproxy][time_backend_connect]}/%{INT:[haproxy][time_backend_response]}/%{NOTSPACE:[host][uptime]} %{INT:[http][response][status_code]} %{NOTSPACE:[haproxy][bytes_read]} %{DATA:[haproxy][http][request][captured_cookie]} %{DATA:[haproxy][http][response][captured_cookie]} %{NOTSPACE:[haproxy][termination_state]} %{INT:[haproxy][connections][active]}/%{INT:[haproxy][connections][frontend]}/%{INT:[haproxy][connections][backend]}/%{INT:[haproxy][connections][server]}/%{NOTSPACE:[haproxy][connections][retries]} %{INT:[haproxy][server_queue]}/%{INT:[haproxy][backend_queue]} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?"(|(%{WORD:[http][request][method]} (%{URIPROTO:[haproxy][mode]}://)?(?:%{USER:[user][name]}(?::[^@]*)?@)?(?:%{URIHOST:[http][request][referrer]})?(?:%{URIPATHPARAM:[http][mode]})?( HTTP/%{NUMBER:[http][version]})?))?"? 
50 | HAPROXY_TCP %{IP:[haproxy][client][ip]}:%{INT:[haproxy][client][port]} \[%{HAPROXYDATE:haproxy_timestamp}\] %{NOTSPACE:[haproxy][frontend_name]} %{NOTSPACE:[haproxy][backend_name]}/%{NOTSPACE:[haproxy][server_name]} %{INT:[haproxy][time_request]}/%{INT:[haproxy][time_queue]}/%{INT:[haproxy][time_backend_connect]}/%{INT:[haproxy][time_backend_response]}/%{NOTSPACE:[haproxy][time_duration]} %{INT:[haproxy][http_status_code]} %{NOTSPACE:[haproxy][bytes_read]} %{DATA:[haproxy][captured_request_cookie]} %{DATA:[haproxy][captured_response_cookie]} %{NOTSPACE:[haproxy][termination_state]} 51 | 52 | # NGINX 53 | NGINX %{NGINX_META}%{NGINX_LOG}(%{NGINX_EXT})? 54 | NGINX_META %{IPORHOST:[client][ip]}(\s\-\s)(%{USERNAME:[nginx][access][user_name]}|\-)?\s\[%{HTTPDATE:timestamp}\]\s*\" 55 | NGINX_LOG %{WORD:[nginx][access][method]}\s*%{NOTSPACE:[nginx][access][url]}\s*HTTP/%{NUMBER:[nginx][access][http_version]}\"\s%{NUMBER:[nginx][access][response_code]}\s%{NUMBER:[nginx][access][body_sent][bytes]}\s"%{NOTSPACE:[nginx][access][referrer]}"\s"%{DATA:[nginx][access][agent]}" 56 | NGINX_EXT (\s\"\-\"\s*)\"%{IPORHOST:[nginx][access][forwarder]}\"(%{NGINX_EXT_SN}%{NGINX_EXT_RT}%{NGINX_EXT_UA}%{NGINX_EXT_US}%{NGINX_EXT_UT}%{NGINX_EXT_UL})?
57 | NGINX_EXT_SN \s*sn=(\"%{HOSTNAME:[nginx][ingress_controller][upstream][name]}\"|"") 58 | NGINX_EXT_RT \s*rt=(%{NUMBER:[nginx][ingress_controller][http][request][time]}|"") 59 | NGINX_EXT_UA \s*ua=(("%{IP:[nginx][ingress_controller][upstream][ip]}:%{INT:[nginx][ingress_controller][upstream][port]}")|("-")|("%{NOTSPACE:[nginx][ingress_controller][upstream][socket]}"))\s* 60 | NGINX_EXT_US \s*us=(\"%{NUMBER:[nginx][ingress_controller][upstream][response][status_code]}\"|"-") 61 | NGINX_EXT_UT \s*ut=(\"%{NUMBER:[nginx][ingress_controller][upstream][response][time]}\"|"-") 62 | NGINX_EXT_UL \s*ul=(\"%{NUMBER:[nginx][ingress_controller][upstream][response][length]}\"|"-") 63 | 64 | # OPENVPN - Initial Filter 65 | OPENVPN (%{IP:[openvpn][user][ip]}.%{INT:[openvpn][user][port]})?(%{USERNAME:[openvpn][user]})?(.*\(tos\s*%{BASE16NUM:[openvpn][tos]},\s*ttl\s*%{INT:[openvpn][ttl]},\s*id\s*%{POSINT:[openvpn][process][ppid]},\s*offset\s*%{INT:[openvpn][offset]},\s*flags\s*\[%{WORD:[openvpn][flags]}\],\s*proto\s*%{WORD:[openvpn][protocol][type]}\s*\(%{INT:[openvpn][protocol][id]}\),\s*length\s*%{NUMBER:[openvpn][packet][length]}\)\s*%{IP:[openvpn][client][ip]}\.%{INT:[openvpn][client][port]}\s*>\s*%{IP:[openvpn][server][ip]}\.%{INT:[openvpn][server][port]}:\s*\[%{GREEDYDATA:[openvpn][checksum]}\]\s*%{WORD:[openvpn][network][transport]},\s*length\s*%{INT:[openvpn][transport][data_length]})? 
66 | 67 | # PF 68 | PF_CARP_DATA (%{WORD:[pf][carp][type]}),(%{INT:[pf][carp][ttl]}),(%{INT:[pf][carp][vhid]}),(%{INT:[pf][carp][version]}),(%{INT:[pf][carp][advbase]}),(%{INT:[pf][carp][advskew]}) 69 | PF_APP (%{DATA:[pf][app][page]}): 70 | PF_APP_DATA (%{PF_APP_LOGOUT}|%{PF_APP_LOGIN}|%{PF_APP_ERROR}|%{PF_APP_GEN}) 71 | PF_APP_LOGIN (%{DATA:[pf][app][action]}) for user \'(%{DATA:[pf][app][user]})\' from: (%{IP:[pf][remote][ip]}) 72 | PF_APP_LOGOUT User (%{DATA:[pf][app][action]}) for user \'(%{DATA:[pf][app][user]})\' from: (%{IP:[pf][remote][ip]}) 73 | PF_APP_ERROR webConfigurator (%{DATA:[pf][app][action]}) for user \'(%{DATA:[pf][app][user]})\' from (%{IP:[pf][remote][ip]}) 74 | PF_APP_GEN (%{GREEDYDATA:[pf][app][action]}) 75 | 76 | # SURICATA 77 | SURICATA \[%{NUMBER:[suricata][rule][uuid]}:%{NUMBER:[suricata][rule][id]}:%{NUMBER:[suricata][rule][version]}\]%{SPACE}%{GREEDYDATA:[suricata][rule][description]}%{SPACE}\[Classification:%{SPACE}%{GREEDYDATA:[suricata][rule][category]}\]%{SPACE}\[Priority:%{SPACE}%{NUMBER:[suricata][priority]}\]%{SPACE}{%{WORD:[network][transport]}}%{SPACE}%{IP:[source][ip]}:%{NUMBER:[source][port]}%{SPACE}->%{SPACE}%{IP:[destination][ip]}:%{NUMBER:[destination][port]} 78 | 79 | # SNORT 80 | SNORT \[%{INT:[rule][uuid]}\:%{INT:[rule][reference]}\:%{INT:[rule][version]}\].%{GREEDYDATA:[vulnerability][description]}.\[Classification\: %{DATA:[vulnerability][classification]}\].\[Priority\: %{INT:[event][severity]}\].\{%{DATA:[network][transport]}\}.%{IP:[source][ip]}(\:%{INT:[source][port]})?.->.%{IP:[destination][ip]}(\:%{INT:[destination][port]})? 
81 | 82 | # SQUID 83 | SQUID %{IPORHOST:[client][ip]} %{NOTSPACE:[labels][request_status]}/%{NUMBER:[http][response][body][status_code]} %{NUMBER:[http][response][bytes]} %{NOTSPACE:[http][request][method]} (%{URIPROTO:[url][scheme]}://)?(?<[url][domain]>\S+?)(:%{INT:[url][port]})?(/%{NOTSPACE:[url][path]})?\s+%{NOTSPACE:[http][request][referrer]}\s+%{NOTSPACE:[labels][hierarchy_status]}/%{NOTSPACE:[destination][address]}\s+%{NOTSPACE:[http][response][mime_type]} 84 | 85 | # UNBOUND 86 | UNBOUND %{INT:[process][pgid]}:%{INT:[process][thread][id]}] %{LOGLEVEL:[log][level]}: %{IP:[client][ip]} %{GREEDYDATA:[dns][question][name]}\. %{WORD:[dns][question][type]} %{WORD:[dns][question][class]} 87 | 88 | # [DEPRECATED] 89 | # PF 90 | PF_LOG_ENTRY %{PF_LOG_DATA}%{PF_IP_SPECIFIC_DATA}%{PF_IP_DATA}%{PF_PROTOCOL_DATA}? 91 | PF_LOG_DATA %{INT:[rule][ruleset]},%{INT:[rule][id]}?,,%{DATA:[rule][uuid]},%{DATA:[interface][name]},(?<[event][reason]>\b[\w\-]+\b),%{WORD:[event][action]},%{WORD:[network][direction]}, 92 | PF_IP_SPECIFIC_DATA %{PF_IPv4_SPECIFIC_DATA}|%{PF_IPv6_SPECIFIC_DATA} 93 | PF_IPv4_SPECIFIC_DATA (?<[network][type]>(4)),%{BASE16NUM:[pf][ipv4][tos]},%{WORD:[pf][ipv4][ecn]}?,%{INT:[pf][ipv4][ttl]},%{INT:[pf][ipv4][packet][id]},%{INT:[pf][ipv4][offset]},%{WORD:[pf][ipv4][flags]},%{INT:[network][iana_number]},%{WORD:[network][transport]}, 94 | PF_IP_DATA %{INT:[pf][packet][length]},%{IP:[source][ip]},%{IP:[destination][ip]}, 95 | PF_PROTOCOL_DATA %{PF_TCP_DATA}|%{PF_UDP_DATA}|%{PF_ICMP_DATA}|%{PF_IGMP_DATA}|%{PF_IPv6_VAR}|%{PF_IPv6_ICMP} 96 | PF_IPv6_SPECIFIC_DATA (?<[network][type]>(6)),%{BASE16NUM:[pf][ipv6][class]},%{WORD:[pf][ipv6][flow_label]},%{WORD:[pf][ipv6][hop_limit]},%{DATA:[pf][protocol][type]},%{INT:[pf][protocol][id]}, 97 | PF_IPv6_VAR %{WORD:type},%{WORD:option},%{WORD:Flags},%{WORD:Flags} 98 | PF_IPv6_ICMP 99 | 100 | # PF PROTOCOL 101 | PF_TCP_DATA
%{INT:[source][port]},%{INT:[destination][port]},%{INT:[pf][transport][data_length]},(?<[pf][tcp][flags]>(\w*)?),(?<[pf][tcp][sequence_number]>(\d*)?):?\d*,(?<[pf][tcp][ack_number]>(\d*)?),(?<[pf][tcp][window]>(\d*)?),(?<[pf][tcp][urg]>(\w*)?),%{GREEDYDATA:[pf][tcp][options]} 102 | PF_UDP_DATA %{INT:[source][port]},%{INT:[destination][port]},%{INT:[pf][transport][data_length]}$ 103 | PF_IGMP_DATA datalength=%{INT:[network][packets]} 104 | PF_ICMP_DATA %{PF_ICMP_TYPE}%{PF_ICMP_RESPONSE} 105 | PF_ICMP_TYPE (?<[pf][icmp][type]>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)), 106 | PF_ICMP_RESPONSE %{PF_ICMP_ECHO_REQ_REPLY}|%{PF_ICMP_UNREACHPORT}|%{PF_ICMP_UNREACHPROTO}|%{PF_ICMP_UNREACHABLE}|%{PF_ICMP_NEED_FLAG}|%{PF_ICMP_TSTAMP}|%{PF_ICMP_TSTAMP_REPLY} 107 | PF_ICMP_ECHO_REQ_REPLY %{INT:[pf][icmp][echo][id]},%{INT:[pf][icmp][echo][sequence]} 108 | PF_ICMP_UNREACHPORT %{IP:[pf][icmp][unreachport][destination][ip]},%{WORD:[pf][icmp][unreachport][protocol]},%{INT:[pf][icmp][unreachport][port]} 109 | PF_ICMP_UNREACHPROTO %{IP:[pf][icmp][unreach][destination][ip]},%{WORD:[pf][icmp][unreach][network][transport]} 110 | PF_ICMP_UNREACHABLE %{GREEDYDATA:[pf][icmp][unreachable]} 111 | PF_ICMP_NEED_FLAG %{IP:[pf][icmp][need_flag][ip]},%{INT:[pf][icmp][need_flag][mtu]} 112 | PF_ICMP_TSTAMP %{INT:[pf][icmp][tstamp][id]},%{INT:[pf][icmp][tstamp][sequence]} 113 | PF_ICMP_TSTAMP_REPLY %{INT:[pf][icmp][tstamp][reply][id]},%{INT:[pf][icmp][tstamp][reply][sequence]},%{INT:[pf][icmp][tstamp][reply][otime]},%{INT:[pf][icmp][tstamp][reply][rtime]},%{INT:[pf][icmp][tstamp][reply][ttime]} 114 | PF_SPEC \+ 115 | -------------------------------------------------------------------------------- /set-logstash-password.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | elastic_password="$(grep "ELASTIC_PASSWORD" .env | cut -d "=" -f 2)" 4 | logstash_system_password="$(grep
"LOGSTASH_PASSWORD" .env | cut -d "=" -f 2)" 5 | if [ "$logstash_system_password" = "changeme" ]; then 6 | echo "Set the LOGSTASH_PASSWORD environment variable in the .env file"; 7 | exit 1; 8 | fi 9 | if [ "$elastic_password" = "changeme" ]; then 10 | echo "Set the ELASTIC_PASSWORD environment variable in the .env file"; 11 | exit 1; 12 | fi 13 | 14 | sed -i "s/changeme/${elastic_password}/" etc/logstash/config/logstash.yml 15 | sed -i "s/changeme/${elastic_password}/" etc/pfelk/conf.d/50-outputs.pfelk 16 | 17 | echo "Successfully changed passwords in logstash configs." 18 | --------------------------------------------------------------------------------