├── .DS_Store
├── README.md
├── config
│   ├── .DS_Store
│   ├── sigma-dns.properties
│   ├── sigma-http.properties
│   ├── sigma_titles.txt
│   └── splunk-zeek.yml
├── docker-compose.yml
├── images
│   ├── .DS_Store
│   ├── Sigma_RegEx.png
│   ├── siem_optimization.png
│   └── splunk_savings.png
├── palo-alto-networks-add-on-for-splunk_710.tgz
├── scripts
│   ├── sigma-dns.properties
│   ├── sigma-http.properties
│   ├── sigma_titles.txt
│   ├── splunk-zeek.yml
│   ├── startKafkaConnectComponents.sh
│   ├── submit_s2s_source.sh
│   ├── submit_splunk_raw_sink.sh
│   ├── submit_splunk_rich_ssl_sink.sh
│   └── submit_splunk_sink.sh
├── splunk-add-on-for-cisco-asa_410.tgz
├── splunk-eventgen
│   ├── .DS_Store
│   ├── appserver
│   │   └── static
│   │       └── splunk-lab.png
│   ├── default
│   │   ├── app.conf
│   │   ├── data
│   │   │   └── ui
│   │   │       ├── nav
│   │   │       │   └── default.xml
│   │   │       └── views
│   │   │           ├── README
│   │   │           ├── tailreader_check.xml
│   │   │           └── welcome.xml
│   │   └── eventgen.conf
│   ├── metadata
│   │   ├── default.meta
│   │   └── local.meta
│   └── samples
│       ├── cisco_asa.sample
│       ├── external_ips.sample
│       ├── nginx.sample
│       └── synthetic_ips.sample
├── splunk-search
│   └── local
│       └── inputs.conf
├── splunk-uf1
│   ├── .DS_Store
│   └── local
│       ├── inputs.conf
│       └── outputs.conf
└── statements.sql
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/.DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Confluent Splunk-2-Splunk Source Demo
2 |
3 | The purpose of this project is to demonstrate how to optimize your Splunk data ingestion by using Confluent. This guide routes data from a Splunk Universal Forwarder running an event generator to the Confluent Splunk-2-Splunk Source Connector, bypassing the Splunk indexing layer while retaining all of the critical metadata associated with each data source (source, sourcetype, host, event, _meta).
4 |
9 | As the data flows through Confluent, Kafka Streams and ksqlDB applications implement the following use cases, whose output is then consumed by the Confluent-managed Splunk Sink Connector:
10 | - Filter out noisy messages
11 | - Deduplicate and sum like events over a window period
12 | - Remove unnecessary fields and reduce message size
13 | The Confluent-managed Splunk Sink Connector pushes the data to Splunk via the Splunk HTTP Event Collector (HEC) endpoint; a sketch of the equivalent HEC call is shown below. Here is a visual representation of the end-to-end flow.
14 | ![End-to-end flow](images/siem_optimization.png)
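For reference, here is a minimal sketch of the kind of HEC call the sink connector makes, assuming the HEC token used by the demo's connector scripts and the host port mapping (8089 -> 8088) from docker-compose.yml:
```
curl -k https://localhost:8089/services/collector/event \
  -H "Authorization: Splunk 3bca5f4c-1eff-4eee-9113-ea94c284478a" \
  -d '{"event": "hello from HEC", "sourcetype": "httpevent", "index": "main"}'
```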
15 | To handle unstructured data, the Sigma stream processor (https://github.com/confluentinc/kafka-sigma) application is used, which provides a RegEx function through a web UI. This allows you to enter regular expressions whose named capture groups become key/value pairs, matched against a Splunk sourcetype. Here is a screenshot of the web UI.
16 | ![Sigma RegEx UI](images/Sigma_RegEx.png)
21 | For more ksqlDB cyber security or SIEM use cases, including enriching data streams, matching host names against a watchlist, and analysing syslog events, refer to the CP-SIEM demo: https://github.com/berthayes/cp-siem
22 |
23 | To get started, if you are following along on a local development environment (preferably a Linux OS), you will need the git CLI, docker, and docker-compose. Also, if possible, ensure you have reserved a minimum of 8GB of RAM for Docker to run the instances.
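A quick way to confirm the prerequisites are installed (version numbers are illustrative; anything reasonably recent should work):
```
git --version        # e.g. git version 2.x
docker --version     # e.g. Docker version 20.x
docker-compose --version
```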
24 |
25 | This demo uses the cisco:asa sample logs from the Splunk Boss of the SOC (BOTS) Version 3 Dataset (https://github.com/splunk/botsv3), and replays random events with an event generator to a Splunk Universal Forwarder.
26 |
27 |
28 | ## Running on localhost
29 | ```
30 | git clone https://github.com/JohnnyMirza/confluent_splunk_demo.git
31 | cd confluent_splunk_demo
32 | docker-compose up -d
33 | ```
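Once the containers are up, you can check that they are all running (the full list of expected containers is in the Troubleshooting section below):
```
docker-compose ps
```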
34 | Wait about 5 minutes for everything to start up, then point your web browser to http://localhost:9021 for Confluent Control Center and http://localhost:8080 for the Sigma Rule UI.
35 |
36 | ## Running on an external host
37 | To run this environment on a system that is not your laptop/workstation, edit the docker-compose.yml file.
38 |
39 | Look for this line:
40 | ```
41 | CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
42 | ```
43 | And change it to something like this:
44 | ```
45 | CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://yourhost.yourdomain.com:8088"
46 | ```
47 | Then start up docker as above with:
48 | ```
49 | docker-compose up -d
50 | ```
51 | Wait about 5 minutes for everything to start up, then point your web browser to http://yourhost.yourdomain.com:9021 for Confluent Control Center and http://yourhost.yourdomain.com:8080 for the Sigma Rule UI.
52 |
53 | ## Demo Script
54 | ### Let's examine the data streaming in
55 |
56 | - As mentioned above, the cisco:asa logs are used for the demo
57 | - Go to localhost:9021 (or remote host URL)
58 | - Click on Cluster->Topics->splunk-s2s-events
59 | - Observe the messages spooling, then click the pause button and switch to card view
60 | - Look at a specific record by expanding it, then scroll through the fields
61 | - Notice the Splunk metadata fields (source, sourcetype, host, event, _meta)
62 |
63 | ### Let's examine and publish a Sigma RegEx rule
64 | - Go to localhost:8080 for the Sigma RegEx Rule UI and click on the RegEx tab
65 | - Create a new RegEx rule for cisco:asa with the following example (refer to the image above if needed)
66 | - ```
67 | sourcetype = cisco:asa
68 | Regular Expression = ^(?<timestamp>\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2})\s(?<hostname>[^\s]+)\s\%ASA-\d-(?<messageID>[^:]+):\s(?<action>[^\s]+)\s(?<protocol>[^\s]+)\ssrc\sinside:(?<src>[0-9\.]+)\/(?<srcport>[0-9]+)\sdst\soutside:(?<dest>[0-9\.]+)\/(?<destport>[0-9]+)
69 | Output Topic = firewalls
70 | Add the Custom Fields
71 | --location = edge
72 | --sourcetype = cisco:asa
73 | --index = main
74 | ```
75 | - The above RegEx will filter on the sourcetype=cisco:asa value from the splunk-s2s-events topic and then apply the RegEx string to the event field (which is the raw message). The RegEx will create the named capture groups as key/value pairs in the firewalls topic; for example, timestamp, hostname, and messageID become keys, and the text matched by each capture group becomes the value. See the worked example after this list.
76 | - Navigate back to localhost:9021->Cluster->Topics
77 | - You should now notice a new topic called firewalls
78 | - Examine the data in the firewalls topic and you should see the above-mentioned keys and values
79 | - (If there is no firewalls topic, see Troubleshooting below; the Kafka Streams apps may need a restart)
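- For example, a raw cisco:asa event like the one below (hypothetical values) would be written to the firewalls topic as the JSON document that follows it (ports and messageID are shown as numbers to match the stream definitions used later):
- ```
Apr 15 09:12:44 firewall-01 %ASA-4-106023: Deny tcp src inside:10.10.1.12/51234 dst outside:203.0.113.7/443

{"timestamp": "Apr 15 09:12:44", "hostname": "firewall-01", "messageID": 106023, "action": "Deny", "protocol": "tcp", "src": "10.10.1.12", "srcport": 51234, "dest": "203.0.113.7", "destport": 443, "location": "edge", "sourcetype": "cisco:asa", "index": "main"}
```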
80 |
81 | ### Query the data with ksqlDB
82 | - From Control Center, navigate to ksqlDB and open the editor
83 | - Create a new Splunk Stream from the splunk-s2s-events topic
84 | - ```
85 | CREATE STREAM SPLUNK (
86 | `event` VARCHAR,
87 | `time` BIGINT,
88 | `host` VARCHAR,
89 | `source` VARCHAR,
90 | `sourcetype` VARCHAR,
91 | `index` VARCHAR
92 | ) WITH (
93 | KAFKA_TOPIC='splunk-s2s-events', VALUE_FORMAT='JSON');
94 | ```
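- To sanity-check the new stream, run a quick push query (the LIMIT clause stops it after a few rows):
- ```
SELECT * FROM SPLUNK EMIT CHANGES LIMIT 5;
```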
95 | - Let's also filter out all of the Splunk internal logs, and only focus on the cisco:asa sourcetype
96 | - ```
97 | CREATE STREAM CISCO_ASA as SELECT
98 | `event`,
99 | `source`,
100 | `sourcetype`,
101 | `index` FROM SPLUNK
102 | where `sourcetype` = 'cisco:asa'
103 | EMIT CHANGES;
104 | ```
105 | - Navigate to Flow and examine the data in the CISCO_ASA stream. This is all of the raw cisco:asa logs, which could also be consumed by an S3 or Elasticsearch sink connector to redistribute the data. Refer to this link for an example: https://github.com/JohnnyMirza/splunk_forward_to_kafka
106 | - The noisy event we are filtering is messageID %ASA-4-106023; use ksqlDB to filter out the event
107 | - ```
108 | CREATE STREAM CISCO_ASA_FILTER_106023 WITH (KAFKA_TOPIC='CISCO_ASA_FILTER_106023', PARTITIONS=1, REPLICAS=1) AS SELECT
109 | SPLUNK.`event` `event`,
110 | SPLUNK.`source` `source`,
111 | SPLUNK.`sourcetype` `sourcetype`,
112 | SPLUNK.`index` `index`
113 | FROM SPLUNK SPLUNK
114 | WHERE ((SPLUNK.`sourcetype` = 'cisco:asa') AND (NOT (SPLUNK.`event` LIKE '%ASA-4-106023%')))
115 | EMIT CHANGES;
116 | ```
117 | - The new filtered stream 'CISCO_ASA_FILTER_106023' will sink the reduced logs to the Splunk instance using HEC
118 | - Next, create a new stream for the firewalls data (the events that were extracted with the Sigma RegEx application)
119 | - ```
120 | CREATE STREAM FIREWALLS (
121 | `src` VARCHAR,
122 | `messageID` BIGINT,
123 | `index` VARCHAR,
124 | `dest` VARCHAR,
125 | `hostname` VARCHAR,
126 | `protocol` VARCHAR,
127 | `action` VARCHAR,
128 | `srcport` BIGINT,
129 | `sourcetype` VARCHAR,
130 | `destport` BIGINT,
131 | `location` VARCHAR,
132 | `timestamp` VARCHAR
133 | ) WITH (
134 | KAFKA_TOPIC='firewalls', value_format='JSON'
135 | );
136 | ```
137 | ### Finally, create a windowed aggregation table to dedupe events by group
138 | - ```
139 | CREATE TABLE AGGREGATOR WITH (KAFKA_TOPIC='AGGREGATOR', KEY_FORMAT='JSON', PARTITIONS=1, REPLICAS=1) AS SELECT
140 | `hostname`,
141 | `messageID`,
142 | `action`,
143 | `src`,
144 | `dest`,
145 | `destport`,
146 | `sourcetype`,
147 | as_value(`hostname`) as hostname,
148 | as_value(`messageID`) as messageID,
149 | as_value(`action`) as action,
150 | as_value(`src`) as src,
151 | as_value(`dest`) as dest,
152 | as_value(`destport`) as dest_port,
153 | as_value(`sourcetype`) as sourcetype,
154 | TIMESTAMPTOSTRING(WINDOWSTART, 'yyyy-MM-dd HH:mm:ss', 'UTC') TIMESTAMP,
155 | 60 DURATION,
156 | COUNT(*) COUNTS
157 | FROM FIREWALLS FIREWALLS
158 | WINDOW TUMBLING ( SIZE 60 SECONDS )
159 | GROUP BY `sourcetype`, `action`, `hostname`, `messageID`, `src`, `dest`, `destport`
160 | EMIT CHANGES;
161 | ```
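- Each record in the AGGREGATOR topic now summarises one group per 60-second window, roughly like this (illustrative values; the grouped columns land in the message key, while the as_value() copies appear in the value):
- ```
{"HOSTNAME": "firewall-01", "MESSAGEID": 106023, "ACTION": "Deny", "SRC": "10.10.1.12", "DEST": "203.0.113.7", "DEST_PORT": 443, "SOURCETYPE": "cisco:asa", "TIMESTAMP": "2023-04-15 09:12:00", "DURATION": 60, "COUNTS": 42}
```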
162 | ### Visualise the data in Splunk
163 | - Log in to the Splunk instance; if running locally: http://localhost:8000/en-GB/app/search/search (admin/Password1)
164 | - In the search bar run ``` index=* ```
165 | - The events will appear below the search bar
166 | - Click on sourcetype and you should see two values: 'cisco:asa' and 'httpevent'
167 | - The 'cisco:asa' sourcetype is the filtered 'CISCO_ASA_FILTER_106023' stream
168 | - The 'httpevent' sourcetype is the AGGREGATOR topic
169 | - Run the Splunk search query below to compare and visualise the filtered and aggregated events
170 | - ```
171 | index=* sourcetype=httpevent
172 | | bin span=5m _time
173 | | stats sum(COUNTS) as raw_events count(_raw) as filtered_events by _time, SOURCETYPE, HOSTNAME, MESSAGEID, ACTION, SRC, DEST, DEST_PORT, DURATION
174 | | eval savings=round(((raw_events-filtered_events)/raw_events) * 100,2) . "%"
175 | | sort -savings
176 | ```
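- For instance, if a 5-minute window saw sum(COUNTS) = 1200 raw events collapsed into 60 aggregated records, the savings field evaluates to round(((1200-60)/1200)*100, 2) = 95% (illustrative numbers).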
177 | - Here is an example of the data reduction from the AGGREGATOR topic in Splunk. Note this is event-generated data and might not reflect a production environment.
178 | ![Splunk savings](images/splunk_savings.png)
183 | ### Thanks To
184 | - *Phil Wild (https://github.com/pwildconfluentio) for helping put this together.*
185 | - *Michael Peacock (https://github.com/michaelpeacock) who created the Sigma RegEx App*
186 |
187 | ### Troubleshooting
188 | - If the 'firewalls' topic above does not appear after publishing the RegEx rule, try restarting the Sigma RegEx apps
189 | ```
190 | docker restart cyber-sigma-regex-ui
191 | docker restart cyber-sigma-streams
192 | ```
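If the topic still does not appear after a restart, check the container logs for errors (for example, a malformed regular expression):
```
docker logs --tail 50 cyber-sigma-streams
```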
193 | - If there are no events after running the ksqlDB queries, ensure all of the field names are correct and that you have added the custom fields in the Sigma RegEx UI
194 | - The following Docker containers will be created as part of this demo:
195 | ```
196 | Name
197 | ----------------
198 | broker
199 | control-center
200 | cyber-sigma-regex-ui
201 | cyber-sigma-streams
202 | connect
203 | ksqldb-server
204 | ksqldb-cli
205 | ksql-datagen
206 | schema-registry
207 | rest-proxy
208 | splunk_eventgen
209 | splunk_search
210 | splunk_uf1
211 | zookeeper
212 | ```
213 |
214 |
--------------------------------------------------------------------------------
/config/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/config/.DS_Store
--------------------------------------------------------------------------------
/config/sigma-dns.properties:
--------------------------------------------------------------------------------
1 | application.id=zeek-rules-streams-app
2 | bootstrap.server=broker:29092
3 | schema.registry=http://schema-registry:8081
4 | data.topic=dns
5 | output.topic=dns-detection
6 | field.mapping.file=/tmp/config/splunk-zeek.yml
7 | sigma.rules.topic=sigma-rules
8 | sigma.rule.filter.product=zeek
9 | sigma.rule.filter.service=dns
10 | #sigma.rule.filter.title=Domain User Enumeration Network Recon 01
11 | #sigma.rule.filter.list=/Users/mpeacock/Development/KafkaSigma/kafka-sigma-streams/src/config/sigma_titles.txt
--------------------------------------------------------------------------------
/config/sigma-http.properties:
--------------------------------------------------------------------------------
1 | application.id=operation-rules-streams-app
2 | bootstrap.server=127.0.0.1:9092
3 | data.topic=http
4 | output.topic=http-detection
5 | field.mapping.file=config/splunk-zeek.yml
6 | sigma.rules.topic=sigma-rules
7 | sigma.rule.filter.product=zeek
8 | sigma.rule.filter.service=http
9 | sigma.rule.filter.title=Simple Http
10 | #sigma.rule.filter.list=config/sigma_titles.txt
--------------------------------------------------------------------------------
/config/sigma_titles.txt:
--------------------------------------------------------------------------------
1 | Possible Windows Executable Download Without Matching Mime Type
2 | Executable from Webdav
3 | Possible Data Collection related to Office Docs and Email Archives and PDFs
4 | Domain User Enumeration Network Recon 01
--------------------------------------------------------------------------------
/config/splunk-zeek.yml:
--------------------------------------------------------------------------------
1 | title: Splunk Zeek sourcetype mappings
2 | order: 20
3 | backends:
4 | - splunk
5 | - splunkxml
6 | - corelight_splunk
7 | logsources:
8 | zeek-category-accounting:
9 | category: accounting
10 | rewrite:
11 | product: zeek
12 | service: syslog
13 | zeek-category-firewall:
14 | category: firewall
15 | rewrite:
16 | product: zeek
17 | service: conn
18 | zeek-category-dns:
19 | category: dns
20 | rewrite:
21 | product: zeek
22 | service: dns
23 | zeek-category-proxy:
24 | category: proxy
25 | rewrite:
26 | product: zeek
27 | service: http
28 | zeek-category-webserver:
29 | category: webserver
30 | rewrite:
31 | product: zeek
32 | service: http
33 | zeek-conn:
34 | product: zeek
35 | service: conn
36 | rewrite:
37 | product: zeek
38 | service: conn
39 | zeek-conn_long:
40 | product: zeek
41 | service: conn_long
42 | conditions:
43 | sourcetype: 'bro:conn_long:json'
44 | zeek-dce_rpc:
45 | product: zeek
46 | service: dce_rpc
47 | conditions:
48 | sourcetype: 'bro:dce_rpc:json'
49 | zeek-dns:
50 | product: zeek
51 | service: dns
52 | conditions:
53 | sourcetype: 'bro:dns:json'
54 | zeek-dnp3:
55 | product: zeek
56 | service: dnp3
57 | conditions:
58 | sourcetype: 'bro:dnp3:json'
59 | zeek-dpd:
60 | product: zeek
61 | service: dpd
62 | conditions:
63 | sourcetype: 'bro:dpd:json'
64 | zeek-files:
65 | product: zeek
66 | service: files
67 | conditions:
68 | sourcetype: 'bro:files:json'
69 | zeek-ftp:
70 | product: zeek
71 | service: ftp
72 | conditions:
73 | sourcetype: 'bro:ftp:json'
74 | zeek-gquic:
75 | product: zeek
76 | service: gquic
77 | conditions:
78 | sourcetype: 'bro:gquic:json'
79 | zeek-http:
80 | product: zeek
81 | service: http
82 | conditions:
83 | sourcetype: 'bro:http:json'
84 | zeek-http2:
85 | product: zeek
86 | service: http2
87 | conditions:
88 | sourcetype: 'bro:http2:json'
89 | zeek-intel:
90 | product: zeek
91 | service: intel
92 | conditions:
93 | sourcetype: 'bro:intel:json'
94 | zeek-irc:
95 | product: zeek
96 | service: irc
97 | conditions:
98 | sourcetype: 'bro:irc:json'
99 | zeek-kerberos:
100 | product: zeek
101 | service: kerberos
102 | conditions:
103 | sourcetype: 'bro:kerberos:json'
104 | zeek-known_certs:
105 | product: zeek
106 | service: known_certs
107 | conditions:
108 | sourcetype: 'bro:known_certs:json'
109 | zeek-known_hosts:
110 | product: zeek
111 | service: known_hosts
112 | conditions:
113 | sourcetype: 'bro:known_hosts:json'
114 | zeek-known_modbus:
115 | product: zeek
116 | service: known_modbus
117 | conditions:
118 | sourcetype: 'bro:known_modbus:json'
119 | zeek-known_services:
120 | product: zeek
121 | service: known_services
122 | conditions:
123 | sourcetype: 'bro:known_services:json'
124 | zeek-modbus:
125 | product: zeek
126 | service: modbus
127 | conditions:
128 | sourcetype: 'bro:modbus:json'
129 | zeek-modbus_register_change:
130 | product: zeek
131 | service: modbus_register_change
132 | conditions:
133 | sourcetype: 'bro:modbus_register_change:json'
134 | zeek-mqtt_connect:
135 | product: zeek
136 | service: mqtt_connect
137 | conditions:
138 | sourcetype: 'bro:mqtt_connect:json'
139 | zeek-mqtt_publish:
140 | product: zeek
141 | service: mqtt_publish
142 | conditions:
143 | sourcetype: 'bro:mqtt_publish:json'
144 | zeek-mqtt_subscribe:
145 | product: zeek
146 | service: mqtt_subscribe
147 | conditions:
148 | sourcetype: 'bro:mqtt_subscribe:json'
149 | zeek-mysql:
150 | product: zeek
151 | service: mysql
152 | conditions:
153 | sourcetype: 'bro:mysql:json'
154 | zeek-notice:
155 | product: zeek
156 | service: notice
157 | conditions:
158 | sourcetype: 'bro:notice:json'
159 | zeek-ntlm:
160 | product: zeek
161 | service: ntlm
162 | conditions:
163 | sourcetype: 'bro:ntlm:json'
164 | zeek-ntp:
165 | product: zeek
166 | service: ntp
167 | conditions:
168 | sourcetype: 'bro:ntp:json'
169 | zeek-ocsp:
170 | product: zeek
171 | service: ntp
172 | conditions:
173 | sourcetype: 'bro:ocsp:json'
174 | zeek-pe:
175 | product: zeek
176 | service: pe
177 | conditions:
178 | sourcetype: 'bro:pe:json'
179 | zeek-pop3:
180 | product: zeek
181 | service: pop3
182 | conditions:
183 | sourcetype: 'bro:pop3:json'
184 | zeek-radius:
185 | product: zeek
186 | service: radius
187 | conditions:
188 | sourcetype: 'bro:radius:json'
189 | zeek-rdp:
190 | product: zeek
191 | service: rdp
192 | conditions:
193 | sourcetype: 'bro:rdp:json'
194 | zeek-rfb:
195 | product: zeek
196 | service: rfb
197 | conditions:
198 | sourcetype: 'bro:rfb:json'
199 | zeek-sip:
200 | product: zeek
201 | service: sip
202 | conditions:
203 | sourcetype: 'bro:sip:json'
204 | zeek-smb_files:
205 | product: zeek
206 | service: smb_files
207 | conditions:
208 | sourcetype: 'bro:smb_files:json'
209 | zeek-smb_mapping:
210 | product: zeek
211 | service: smb_mapping
212 | conditions:
213 | sourcetype: 'bro:smb_mapping:json'
214 | zeek-smtp:
215 | product: zeek
216 | service: smtp
217 | conditions:
218 | sourcetype: 'bro:smtp:json'
219 | zeek-smtp_links:
220 | product: zeek
221 | service: smtp_links
222 | conditions:
223 | sourcetype: 'bro:smtp_links:json'
224 | zeek-snmp:
225 | product: zeek
226 | service: snmp
227 | conditions:
228 | sourcetype: 'bro:snmp:json'
229 | zeek-socks:
230 | product: zeek
231 | service: socks
232 | conditions:
233 | sourcetype: 'bro:socks:json'
234 | zeek-software:
235 | product: zeek
236 | service: software
237 | conditions:
238 | sourcetype: 'bro:software:json'
239 | zeek-ssh:
240 | product: zeek
241 | service: ssh
242 | conditions:
243 | sourcetype: 'bro:ssh:json'
244 | zeek-ssl:
245 | product: zeek
246 | service: ssl
247 | conditions:
248 | sourcetype: 'bro:ssl:json'
249 | zeek-tls: # In case people call it TLS even though log is called ssl
250 | product: zeek
251 | service: tls
252 | conditions:
253 | sourcetype: 'bro:ssl:json'
254 | zeek-syslog:
255 | product: zeek
256 | service: syslog
257 | conditions:
258 | sourcetype: 'bro:syslog:json'
259 | zeek-tunnel:
260 | product: zeek
261 | service: tunnel
262 | conditions:
263 | sourcetype: 'bro:tunnel:json'
264 | zeek-traceroute:
265 | product: zeek
266 | service: traceroute
267 | conditions:
268 | sourcetype: 'bro:traceroute:json'
269 | zeek-weird:
270 | product: zeek
271 | service: weird
272 | conditions:
273 | sourcetype: 'bro:weird:json'
274 | zeek-x509:
275 | product: zeek
276 | service: x509
277 | conditions:
278 | sourcetype: 'bro:x509:json'
279 | zeek-ip_search:
280 | product: zeek
281 | service: network
282 | conditions:
283 | sourcetype:
284 | - 'bro:conn:json'
285 | - 'bro:conn_long:json'
286 | - 'bro:dce_rpc:json'
287 | - 'bro:dhcp:json'
288 | - 'bro:dnp3:json'
289 | - 'bro:dns:json'
290 | - 'bro:ftp:json'
291 | - 'bro:gquic:json'
292 | - 'bro:http:json'
293 | - 'bro:irc:json'
294 | - 'bro:kerberos:json'
295 | - 'bro:modbus:json'
296 | - 'bro:mqtt_connect:json'
297 | - 'bro:mqtt_publish:json'
298 | - 'bro:mqtt_subscribe:json'
299 | - 'bro:mysql:json'
300 | - 'bro:ntlm:json'
301 | - 'bro:ntp:json'
302 | - 'bro:radius:json'
303 | - 'bro:rfb:json'
304 | - 'bro:sip:json'
305 | - 'bro:smb_files:json'
306 | - 'bro:smb_mapping:json'
307 | - 'bro:smtp:json'
308 | - 'bro:smtp_links:json'
309 | - 'bro:snmp:json'
310 | - 'bro:socks:json'
311 | - 'bro:ssh:json'
312 | - 'bro:ssl:json'
313 | - 'bro:tunnel:json'
314 | - 'bro:weird:json'
315 | fieldmappings:
316 | # All Logs Applied Mapping & Taxonomy
317 | mike: query
318 | dst_ip: id.resp_h
319 | dst_port: id.resp_p
320 | network_protocol: proto
321 | src_ip: id.orig_h
322 | src_port: id.orig_p
323 | # DNS matching Taxonomy & DNS Category
324 | answer: answers
325 | #question_length: # Does not exist in open source version
326 | record_type: qtype_name
327 | #parent_domain: # Does not exist in open source version
328 | # HTTP matching Taxonomy & Web/Proxy Category
329 | cs-bytes: request_body_len
330 | cs-cookie: cookie
331 | r-dns: host
332 | sc-bytes: response_body_len
333 | sc-status: status_code
334 | c-uri: uri
335 | c-uri-extension: uri
336 | c-uri-query: uri
337 | c-uri-stem: uri
338 | c-useragent: user_agent
339 | cs-host: host
340 | cs-method: method
341 | cs-referrer: referrer
342 | cs-version: version
343 | # Few other variations of names from zeek source itself
344 | id_orig_h: id.orig_h
345 | id_orig_p: id.orig_p
346 | id_resp_h: id.resp_h
347 | id_resp_p: id.resp_p
348 | # Temporary one off rule name fields
349 | agent.version: version
350 | c-cookie: cookie
351 | c-ip: id.orig_h
352 | cs-uri: uri
353 | clientip: id.orig_h
354 | clientIP: id.orig_h
355 | dest_domain:
356 | - query
357 | - host
358 | - server_name
359 | dest_ip: id.resp_h
360 | dest_port: id.resp_p
361 | #TODO:WhatShouldThisBe?==dest:
362 | #TODO:WhatShouldThisBe?==destination:
363 | #TODO:WhatShouldThisBe?==Destination:
364 | destination.hostname:
365 | - query
366 | - host
367 | - server_name
368 | DestinationAddress: id.resp_h
369 | DestinationHostname:
370 | - host
371 | - query
372 | - server_name
373 | DestinationIp: id.resp_h
374 | DestinationIP: id.resp_h
375 | DestinationPort: id.resp_p
376 | dst-ip: id.resp_h
377 | dstip: id.resp_h
378 | dstport: id.resp_p
379 | Host:
380 | - host
381 | - query
382 | - server_name
383 | HostVersion: http.version
384 | http_host:
385 | - host
386 | - query
387 | - server_name
388 | http_uri: uri
389 | http_url: uri
390 | http_user_agent: user_agent
391 | http.request.url-query-params: uri
392 | HttpMethod: method
393 | in_url: uri
394 | # parent_domain: # Not in open source zeek
395 | post_url_parameter: uri
396 | Request Url: uri
397 | request_url: uri
398 | request_URL: uri
399 | RequestUrl: uri
400 | #response: status_code
401 | resource.url: uri
402 | resource.URL: uri
403 | sc_status: status_code
404 | sender_domain:
405 | - query
406 | - server_name
407 | service.response_code: status_code
408 | source: id.orig_h
409 | SourceAddr: id.orig_h
410 | SourceAddress: id.orig_h
411 | SourceIP: id.orig_h
412 | SourceIp: id.orig_h
413 | SourceNetworkAddress: id.orig_h
414 | SourcePort: id.orig_p
415 | srcip: id.orig_h
416 | Status: status_code
417 | status: status_code
418 | url: uri
419 | URL: uri
420 | url_query: uri
421 | url.query: uri
422 | uri_path: uri
423 | user_agent: user_agent
424 | user_agent.name: user_agent
425 | user-agent: user_agent
426 | User-Agent: user_agent
427 | useragent: user_agent
428 | UserAgent: user_agent
429 | User Agent: user_agent
430 | web_dest:
431 | - host
432 | - query
433 | - server_name
434 | web.dest:
435 | - host
436 | - query
437 | - server_name
438 | Web.dest:
439 | - host
440 | - query
441 | - server_name
442 | web.host:
443 | - host
444 | - query
445 | - server_name
446 | Web.host:
447 | - host
448 | - query
449 | - server_name
450 | web_method: method
451 | Web_method: method
452 | web.method: method
453 | Web.method: method
454 | web_src: id.orig_h
455 | web_status: status_code
456 | Web_status: status_code
457 | web.status: status_code
458 | Web.status: status_code
459 | web_uri: uri
460 | web_url: uri
461 | # Most are in ECS, but for things not using Elastic - these need renamed
462 | destination.ip: id.resp_h
463 | destination.port: id.resp_p
464 | http.request.body.content: post_body
465 | source.domain:
466 | - host
467 | - query
468 | - server_name
469 | source.ip: id.orig_h
470 | source.port: id.orig_p
471 |
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | ---
2 | version: '3'
3 | services:
4 |
5 | zookeeper:
6 | image: confluentinc/cp-zookeeper:7.3.0
7 | hostname: zookeeper
8 | container_name: zookeeper
9 | ports:
10 | - "2181:2181"
11 | environment:
12 | ZOOKEEPER_CLIENT_PORT: 2181
13 | ZOOKEEPER_TICK_TIME: 2000
14 |
15 | broker:
16 | image: confluentinc/cp-server:7.3.0
17 | hostname: broker
18 | container_name: broker
19 | depends_on:
20 | - zookeeper
21 | ports:
22 | - "9092:9092"
23 | - "9101:9101"
24 | environment:
25 | KAFKA_BROKER_ID: 1
26 | KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
27 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
28 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
29 | KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
30 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
31 | KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
32 | KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
33 | KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
34 | KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
35 | KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
36 | KAFKA_JMX_PORT: 9101
37 | KAFKA_JMX_HOSTNAME: localhost
38 | KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
39 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
40 | CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
41 | CONFLUENT_METRICS_ENABLE: 'true'
42 | CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
43 |
44 | schema-registry:
45 | image: confluentinc/cp-schema-registry:7.3.0
46 | hostname: schema-registry
47 | container_name: schema-registry
48 | depends_on:
49 | - broker
50 | ports:
51 | - "8081:8081"
52 | environment:
53 | SCHEMA_REGISTRY_HOST_NAME: schema-registry
54 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
55 | SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
56 |
57 | connect:
58 | image: cnfldemos/cp-server-connect-datagen:0.6.0-7.3.0
59 | hostname: connect
60 | container_name: connect
61 | depends_on:
62 | - broker
63 | - schema-registry
64 | ports:
65 | - "8083:8083"
66 | - "9997:9997"
67 | environment:
68 | CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
69 | CONNECT_REST_ADVERTISED_HOST_NAME: connect
70 | CONNECT_GROUP_ID: compose-connect-group
71 | CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
72 | CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
73 | CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
74 | CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
75 | CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
76 | CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
77 | CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
78 | CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
79 | CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
80 | CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
81 | # CLASSPATH required due to CC-2422
82 | CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-7.3.0.jar
83 | CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
84 | CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
85 | CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
86 | CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
87 | volumes:
88 | - $PWD/scripts/:/tmp/scripts/
89 | command:
90 | - /tmp/scripts/startKafkaConnectComponents.sh
91 |
92 | control-center:
93 | image: confluentinc/cp-enterprise-control-center:7.3.0
94 | hostname: control-center
95 | container_name: control-center
96 | depends_on:
97 | - broker
98 | - schema-registry
99 | - connect
100 | - ksqldb-server
101 | ports:
102 | - "9021:9021"
103 | environment:
104 | CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
105 | CONTROL_CENTER_CONNECT_CONNECT-DEFAULT_CLUSTER: 'connect:8083'
106 | CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
107 | CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
108 | CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
109 | CONTROL_CENTER_REPLICATION_FACTOR: 1
110 | CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
111 | CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
112 | CONFLUENT_METRICS_TOPIC_REPLICATION: 1
113 | PORT: 9021
114 |
115 | ksqldb-server:
116 | image: confluentinc/cp-ksqldb-server:7.3.0
117 | hostname: ksqldb-server
118 | container_name: ksqldb-server
119 | depends_on:
120 | - broker
121 | - connect
122 | ports:
123 | - "8088:8088"
124 | environment:
125 | KSQL_CONFIG_DIR: "/etc/ksql"
126 | KSQL_BOOTSTRAP_SERVERS: "broker:29092"
127 | KSQL_HOST_NAME: ksqldb-server
128 | KSQL_LISTENERS: "http://0.0.0.0:8088"
129 | KSQL_CACHE_MAX_BYTES_BUFFERING: 0
130 | KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
131 | KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
132 | KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
133 | KSQL_KSQL_CONNECT_URL: "http://connect:8083"
134 | KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
135 | KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
136 | KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
137 |
138 | ksqldb-cli:
139 | image: confluentinc/cp-ksqldb-cli:7.3.0
140 | container_name: ksqldb-cli
141 | depends_on:
142 | - broker
143 | - connect
144 | - ksqldb-server
145 | entrypoint: /bin/sh
146 | tty: true
147 |
148 | ksql-datagen:
149 | image: confluentinc/ksqldb-examples:7.3.0
150 | hostname: ksql-datagen
151 | container_name: ksql-datagen
152 | depends_on:
153 | - ksqldb-server
154 | - broker
155 | - schema-registry
156 | - connect
157 | command: "bash -c 'echo Waiting for Kafka to be ready... && \
158 | cub kafka-ready -b broker:29092 1 40 && \
159 | echo Waiting for Confluent Schema Registry to be ready... && \
160 | cub sr-ready schema-registry 8081 40 && \
161 | echo Waiting a few seconds for topic creation to finish... && \
162 | sleep 11 && \
163 | tail -f /dev/null'"
164 | environment:
165 | KSQL_CONFIG_DIR: "/etc/ksql"
166 | STREAMS_BOOTSTRAP_SERVERS: broker:29092
167 | STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
168 | STREAMS_SCHEMA_REGISTRY_PORT: 8081
169 |
170 | rest-proxy:
171 | image: confluentinc/cp-kafka-rest:7.3.0
172 | depends_on:
173 | - broker
174 | - schema-registry
175 | ports:
176 | - 8082:8082
177 | hostname: rest-proxy
178 | container_name: rest-proxy
179 | environment:
180 | KAFKA_REST_HOST_NAME: rest-proxy
181 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
182 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
183 | KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
184 |
185 | splunk_uf1:
186 | image: splunk/universalforwarder:8.2.1
187 | hostname: splunk_uf1
188 | container_name: splunk_uf1
189 | depends_on:
190 | - connect
191 | environment:
192 | - SPLUNK_START_ARGS=--accept-license
193 | - SPLUNK_PASSWORD=Password1
194 | - SPLUNK_APPS_URL=https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/main/splunk-add-on-for-cisco-asa_410.tgz
195 | volumes:
196 | - $PWD/splunk-uf1/:/opt/splunkforwarder/etc/apps/splunk-uf1/
197 | ports:
198 | - 3333:3333
199 |
200 | splunk_eventgen:
201 | image: guilhemmarchand/splunk-eventgen:latest
202 | container_name: splunk_eventgen
203 | restart: unless-stopped
204 | user: 'root'
205 | volumes:
206 | - $PWD/splunk-eventgen/:/opt/splunk-eventgen
207 | ports:
208 | - 6379:6379
209 | - 9500:9500
210 | depends_on:
211 | - splunk_uf1
212 | command: 'splunk_eventgen -v generate /opt/splunk-eventgen/default/eventgen.conf'
213 |
214 | splunk_search:
215 | image: splunk/splunk:latest
216 | container_name: splunk_search
217 | user: 'root'
218 | volumes:
219 | - $PWD/splunk-search/:/opt/splunk/etc/apps/splunk-search/
220 | depends_on:
221 | - connect
222 | environment:
223 | - SPLUNK_START_ARGS=--accept-license
224 | # - SPLUNK_HEC_TOKEN=3bca5f4c-1eff-4eee-9113-ea94c284478a
225 | - SPLUNK_APPS_URL=https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/main/splunk-add-on-for-cisco-asa_410.tgz
226 | - SPLUNK_PASSWORD=Password1
227 | ports:
228 | - 8089:8088
229 | - 8000:8000
230 |
231 | cyber-sigma-streams:
232 | image: michaelpeacock/confluent-sigma:1.2.1.1
233 | container_name: cyber-sigma-streams
234 | depends_on:
235 | - broker
236 | - connect
237 | - control-center
238 | - ksqldb-server
239 | hostname: cyber-sigma-streams
240 | volumes:
241 | - $PWD/scripts/:/tmp/config
242 | command:
243 | - bash
244 | - -c
245 | - |
246 | echo "Starting Streams app..."
247 | cd /tmp
248 | java -cp sigma-streams-1.2.1-fat.jar io.confluent.sigmarules.SigmaStreamsApp -c /tmp/config/sigma-dns.properties
249 | sleep infinity
250 |
251 | cyber-sigma-regex-ui:
252 | image: michaelpeacock/confluent-sigma-regex-ui:latest
253 | container_name: cyber-sigma-regex-ui
254 | depends_on:
255 | - broker
256 | - connect
257 | - control-center
258 | - ksqldb-server
259 | hostname: cyber-sigma-regex-ui
260 | ports:
261 | - 8080:8080
262 | environment:
263 | kafka_bootstrapAddress: 'broker:29092'
264 | kafka_schemaRegistry: 'http://schema-registry:8081'
265 | kafka_sigma_rules_topic: 'sigma-rules'
266 | confluent_regex_applicationID: 'regex-application'
267 | confluent_regex_inputTopic: 'splunk-s2s-events'
268 | confluent_regex_ruleTopic: 'regex-rules'
269 | confluent_regex_filterField: 'sourcetype'
270 | confluent_regex_regexField: 'event'
271 |
--------------------------------------------------------------------------------
/images/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/images/.DS_Store
--------------------------------------------------------------------------------
/images/Sigma_RegEx.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/images/Sigma_RegEx.png
--------------------------------------------------------------------------------
/images/siem_optimization.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/images/siem_optimization.png
--------------------------------------------------------------------------------
/images/splunk_savings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/images/splunk_savings.png
--------------------------------------------------------------------------------
/palo-alto-networks-add-on-for-splunk_710.tgz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/palo-alto-networks-add-on-for-splunk_710.tgz
--------------------------------------------------------------------------------
/scripts/sigma-dns.properties:
--------------------------------------------------------------------------------
1 | application.id=zeek-rules-streams-app
2 | bootstrap.server=broker:29092
3 | schema.registry=http://schema-registry:8081
4 | data.topic=dns
5 | output.topic=dns-detection
6 | field.mapping.file=/tmp/config/splunk-zeek.yml
7 | sigma.rules.topic=sigma-rules
8 | sigma.rule.filter.product=zeek
9 | sigma.rule.filter.service=dns
10 | #sigma.rule.filter.title=Domain User Enumeration Network Recon 01
11 | #sigma.rule.filter.list=/Users/mpeacock/Development/KafkaSigma/kafka-sigma-streams/src/config/sigma_titles.txt
--------------------------------------------------------------------------------
/scripts/sigma-http.properties:
--------------------------------------------------------------------------------
1 | application.id=operation-rules-streams-app
2 | bootstrap.server=127.0.0.1:9092
3 | data.topic=http
4 | output.topic=http-detection
5 | field.mapping.file=config/splunk-zeek.yml
6 | sigma.rules.topic=sigma-rules
7 | sigma.rule.filter.product=zeek
8 | sigma.rule.filter.service=http
9 | sigma.rule.filter.title=Simple Http
10 | #sigma.rule.filter.list=config/sigma_titles.txt
--------------------------------------------------------------------------------
/scripts/sigma_titles.txt:
--------------------------------------------------------------------------------
1 | Possible Windows Executable Download Without Matching Mime Type
2 | Executable from Webdav
3 | Possible Data Collection related to Office Docs and Email Archives and PDFs
4 | Domain User Enumeration Network Recon 01
--------------------------------------------------------------------------------
/scripts/splunk-zeek.yml:
--------------------------------------------------------------------------------
1 | title: Splunk Zeek sourcetype mappings
2 | order: 20
3 | backends:
4 | - splunk
5 | - splunkxml
6 | - corelight_splunk
7 | logsources:
8 | zeek-category-accounting:
9 | category: accounting
10 | rewrite:
11 | product: zeek
12 | service: syslog
13 | zeek-category-firewall:
14 | category: firewall
15 | rewrite:
16 | product: zeek
17 | service: conn
18 | zeek-category-dns:
19 | category: dns
20 | rewrite:
21 | product: zeek
22 | service: dns
23 | zeek-category-proxy:
24 | category: proxy
25 | rewrite:
26 | product: zeek
27 | service: http
28 | zeek-category-webserver:
29 | category: webserver
30 | rewrite:
31 | product: zeek
32 | service: http
33 | zeek-conn:
34 | product: zeek
35 | service: conn
36 | rewrite:
37 | product: zeek
38 | service: conn
39 | zeek-conn_long:
40 | product: zeek
41 | service: conn_long
42 | conditions:
43 | sourcetype: 'bro:conn_long:json'
44 | zeek-dce_rpc:
45 | product: zeek
46 | service: dce_rpc
47 | conditions:
48 | sourcetype: 'bro:dce_rpc:json'
49 | zeek-dns:
50 | product: zeek
51 | service: dns
52 | conditions:
53 | sourcetype: 'bro:dns:json'
54 | zeek-dnp3:
55 | product: zeek
56 | service: dnp3
57 | conditions:
58 | sourcetype: 'bro:dnp3:json'
59 | zeek-dpd:
60 | product: zeek
61 | service: dpd
62 | conditions:
63 | sourcetype: 'bro:dpd:json'
64 | zeek-files:
65 | product: zeek
66 | service: files
67 | conditions:
68 | sourcetype: 'bro:files:json'
69 | zeek-ftp:
70 | product: zeek
71 | service: ftp
72 | conditions:
73 | sourcetype: 'bro:ftp:json'
74 | zeek-gquic:
75 | product: zeek
76 | service: gquic
77 | conditions:
78 | sourcetype: 'bro:gquic:json'
79 | zeek-http:
80 | product: zeek
81 | service: http
82 | conditions:
83 | sourcetype: 'bro:http:json'
84 | zeek-http2:
85 | product: zeek
86 | service: http2
87 | conditions:
88 | sourcetype: 'bro:http2:json'
89 | zeek-intel:
90 | product: zeek
91 | service: intel
92 | conditions:
93 | sourcetype: 'bro:intel:json'
94 | zeek-irc:
95 | product: zeek
96 | service: irc
97 | conditions:
98 | sourcetype: 'bro:irc:json'
99 | zeek-kerberos:
100 | product: zeek
101 | service: kerberos
102 | conditions:
103 | sourcetype: 'bro:kerberos:json'
104 | zeek-known_certs:
105 | product: zeek
106 | service: known_certs
107 | conditions:
108 | sourcetype: 'bro:known_certs:json'
109 | zeek-known_hosts:
110 | product: zeek
111 | service: known_hosts
112 | conditions:
113 | sourcetype: 'bro:known_hosts:json'
114 | zeek-known_modbus:
115 | product: zeek
116 | service: known_modbus
117 | conditions:
118 | sourcetype: 'bro:known_modbus:json'
119 | zeek-known_services:
120 | product: zeek
121 | service: known_services
122 | conditions:
123 | sourcetype: 'bro:known_services:json'
124 | zeek-modbus:
125 | product: zeek
126 | service: modbus
127 | conditions:
128 | sourcetype: 'bro:modbus:json'
129 | zeek-modbus_register_change:
130 | product: zeek
131 | service: modbus_register_change
132 | conditions:
133 | sourcetype: 'bro:modbus_register_change:json'
134 | zeek-mqtt_connect:
135 | product: zeek
136 | service: mqtt_connect
137 | conditions:
138 | sourcetype: 'bro:mqtt_connect:json'
139 | zeek-mqtt_publish:
140 | product: zeek
141 | service: mqtt_publish
142 | conditions:
143 | sourcetype: 'bro:mqtt_publish:json'
144 | zeek-mqtt_subscribe:
145 | product: zeek
146 | service: mqtt_subscribe
147 | conditions:
148 | sourcetype: 'bro:mqtt_subscribe:json'
149 | zeek-mysql:
150 | product: zeek
151 | service: mysql
152 | conditions:
153 | sourcetype: 'bro:mysql:json'
154 | zeek-notice:
155 | product: zeek
156 | service: notice
157 | conditions:
158 | sourcetype: 'bro:notice:json'
159 | zeek-ntlm:
160 | product: zeek
161 | service: ntlm
162 | conditions:
163 | sourcetype: 'bro:ntlm:json'
164 | zeek-ntp:
165 | product: zeek
166 | service: ntp
167 | conditions:
168 | sourcetype: 'bro:ntp:json'
169 | zeek-ocsp:
170 | product: zeek
171 | service: ntp
172 | conditions:
173 | sourcetype: 'bro:ocsp:json'
174 | zeek-pe:
175 | product: zeek
176 | service: pe
177 | conditions:
178 | sourcetype: 'bro:pe:json'
179 | zeek-pop3:
180 | product: zeek
181 | service: pop3
182 | conditions:
183 | sourcetype: 'bro:pop3:json'
184 | zeek-radius:
185 | product: zeek
186 | service: radius
187 | conditions:
188 | sourcetype: 'bro:radius:json'
189 | zeek-rdp:
190 | product: zeek
191 | service: rdp
192 | conditions:
193 | sourcetype: 'bro:rdp:json'
194 | zeek-rfb:
195 | product: zeek
196 | service: rfb
197 | conditions:
198 | sourcetype: 'bro:rfb:json'
199 | zeek-sip:
200 | product: zeek
201 | service: sip
202 | conditions:
203 | sourcetype: 'bro:sip:json'
204 | zeek-smb_files:
205 | product: zeek
206 | service: smb_files
207 | conditions:
208 | sourcetype: 'bro:smb_files:json'
209 | zeek-smb_mapping:
210 | product: zeek
211 | service: smb_mapping
212 | conditions:
213 | sourcetype: 'bro:smb_mapping:json'
214 | zeek-smtp:
215 | product: zeek
216 | service: smtp
217 | conditions:
218 | sourcetype: 'bro:smtp:json'
219 | zeek-smtp_links:
220 | product: zeek
221 | service: smtp_links
222 | conditions:
223 | sourcetype: 'bro:smtp_links:json'
224 | zeek-snmp:
225 | product: zeek
226 | service: snmp
227 | conditions:
228 | sourcetype: 'bro:snmp:json'
229 | zeek-socks:
230 | product: zeek
231 | service: socks
232 | conditions:
233 | sourcetype: 'bro:socks:json'
234 | zeek-software:
235 | product: zeek
236 | service: software
237 | conditions:
238 | sourcetype: 'bro:software:json'
239 | zeek-ssh:
240 | product: zeek
241 | service: ssh
242 | conditions:
243 | sourcetype: 'bro:ssh:json'
244 | zeek-ssl:
245 | product: zeek
246 | service: ssl
247 | conditions:
248 | sourcetype: 'bro:ssl:json'
249 | zeek-tls: # In case people call it TLS even though log is called ssl
250 | product: zeek
251 | service: tls
252 | conditions:
253 | sourcetype: 'bro:ssl:json'
254 | zeek-syslog:
255 | product: zeek
256 | service: syslog
257 | conditions:
258 | sourcetype: 'bro:syslog:json'
259 | zeek-tunnel:
260 | product: zeek
261 | service: tunnel
262 | conditions:
263 | sourcetype: 'bro:tunnel:json'
264 | zeek-traceroute:
265 | product: zeek
266 | service: traceroute
267 | conditions:
268 | sourcetype: 'bro:traceroute:json'
269 | zeek-weird:
270 | product: zeek
271 | service: weird
272 | conditions:
273 | sourcetype: 'bro:weird:json'
274 | zeek-x509:
275 | product: zeek
276 | service: x509
277 | conditions:
278 | sourcetype: 'bro:x509:json'
279 | zeek-ip_search:
280 | product: zeek
281 | service: network
282 | conditions:
283 | sourcetype:
284 | - 'bro:conn:json'
285 | - 'bro:conn_long:json'
286 | - 'bro:dce_rpc:json'
287 | - 'bro:dhcp:json'
288 | - 'bro:dnp3:json'
289 | - 'bro:dns:json'
290 | - 'bro:ftp:json'
291 | - 'bro:gquic:json'
292 | - 'bro:http:json'
293 | - 'bro:irc:json'
294 | - 'bro:kerberos:json'
295 | - 'bro:modbus:json'
296 | - 'bro:mqtt_connect:json'
297 | - 'bro:mqtt_publish:json'
298 | - 'bro:mqtt_subscribe:json'
299 | - 'bro:mysql:json'
300 | - 'bro:ntlm:json'
301 | - 'bro:ntp:json'
302 | - 'bro:radius:json'
303 | - 'bro:rfb:json'
304 | - 'bro:sip:json'
305 | - 'bro:smb_files:json'
306 | - 'bro:smb_mapping:json'
307 | - 'bro:smtp:json'
308 | - 'bro:smtp_links:json'
309 | - 'bro:snmp:json'
310 | - 'bro:socks:json'
311 | - 'bro:ssh:json'
312 | - 'bro:ssl:json'
313 | - 'bro:tunnel:json'
314 | - 'bro:weird:json'
315 | fieldmappings:
316 | # All Logs Applied Mapping & Taxonomy
317 | mike: query
318 | dst_ip: id.resp_h
319 | dst_port: id.resp_p
320 | network_protocol: proto
321 | src_ip: id.orig_h
322 | src_port: id.orig_p
323 | # DNS matching Taxonomy & DNS Category
324 | answer: answers
325 | #question_length: # Does not exist in open source version
326 | record_type: qtype_name
327 | #parent_domain: # Does not exist in open source version
328 | # HTTP matching Taxonomy & Web/Proxy Category
329 | cs-bytes: request_body_len
330 | cs-cookie: cookie
331 | r-dns: host
332 | sc-bytes: response_body_len
333 | sc-status: status_code
334 | c-uri: uri
335 | c-uri-extension: uri
336 | c-uri-query: uri
337 | c-uri-stem: uri
338 | c-useragent: user_agent
339 | cs-host: host
340 | cs-method: method
341 | cs-referrer: referrer
342 | cs-version: version
343 | # Few other variations of names from zeek source itself
344 | id_orig_h: id.orig_h
345 | id_orig_p: id.orig_p
346 | id_resp_h: id.resp_h
347 | id_resp_p: id.resp_p
348 | # Temporary one off rule name fields
349 | agent.version: version
350 | c-cookie: cookie
351 | c-ip: id.orig_h
352 | cs-uri: uri
353 | clientip: id.orig_h
354 | clientIP: id.orig_h
355 | dest_domain:
356 | - query
357 | - host
358 | - server_name
359 | dest_ip: id.resp_h
360 | dest_port: id.resp_p
361 | #TODO:WhatShouldThisBe?==dest:
362 | #TODO:WhatShouldThisBe?==destination:
363 | #TODO:WhatShouldThisBe?==Destination:
364 | destination.hostname:
365 | - query
366 | - host
367 | - server_name
368 | DestinationAddress: id.resp_h
369 | DestinationHostname:
370 | - host
371 | - query
372 | - server_name
373 | DestinationIp: id.resp_h
374 | DestinationIP: id.resp_h
375 | DestinationPort: id.resp_p
376 | dst-ip: id.resp_h
377 | dstip: id.resp_h
378 | dstport: id.resp_p
379 | Host:
380 | - host
381 | - query
382 | - server_name
383 | HostVersion: http.version
384 | http_host:
385 | - host
386 | - query
387 | - server_name
388 | http_uri: uri
389 | http_url: uri
390 | http_user_agent: user_agent
391 | http.request.url-query-params: uri
392 | HttpMethod: method
393 | in_url: uri
394 | # parent_domain: # Not in open source zeek
395 | post_url_parameter: uri
396 | Request Url: uri
397 | request_url: uri
398 | request_URL: uri
399 | RequestUrl: uri
400 | #response: status_code
401 | resource.url: uri
402 | resource.URL: uri
403 | sc_status: status_code
404 | sender_domain:
405 | - query
406 | - server_name
407 | service.response_code: status_code
408 | source: id.orig_h
409 | SourceAddr: id.orig_h
410 | SourceAddress: id.orig_h
411 | SourceIP: id.orig_h
412 | SourceIp: id.orig_h
413 | SourceNetworkAddress: id.orig_h
414 | SourcePort: id.orig_p
415 | srcip: id.orig_h
416 | Status: status_code
417 | status: status_code
418 | url: uri
419 | URL: uri
420 | url_query: uri
421 | url.query: uri
422 | uri_path: uri
423 | user_agent: user_agent
424 | user_agent.name: user_agent
425 | user-agent: user_agent
426 | User-Agent: user_agent
427 | useragent: user_agent
428 | UserAgent: user_agent
429 | User Agent: user_agent
430 | web_dest:
431 | - host
432 | - query
433 | - server_name
434 | web.dest:
435 | - host
436 | - query
437 | - server_name
438 | Web.dest:
439 | - host
440 | - query
441 | - server_name
442 | web.host:
443 | - host
444 | - query
445 | - server_name
446 | Web.host:
447 | - host
448 | - query
449 | - server_name
450 | web_method: method
451 | Web_method: method
452 | web.method: method
453 | Web.method: method
454 | web_src: id.orig_h
455 | web_status: status_code
456 | Web_status: status_code
457 | web.status: status_code
458 | Web.status: status_code
459 | web_uri: uri
460 | web_url: uri
461 | # Most are in ECS, but for things not using Elastic - these need renamed
462 | destination.ip: id.resp_h
463 | destination.port: id.resp_p
464 | http.request.body.content: post_body
465 | source.domain:
466 | - host
467 | - query
468 | - server_name
469 | source.ip: id.orig_h
470 | source.port: id.orig_p
471 |
--------------------------------------------------------------------------------
/scripts/startKafkaConnectComponents.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | echo "Installing connector plugins"
3 | confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest
4 | confluent-hub install --no-prompt splunk/kafka-connect-splunk:latest
5 | confluent-hub install --no-prompt confluentinc/kafka-connect-splunk-s2s:latest
6 | #
7 | echo "Launching Kafka Connect worker"
8 | /etc/confluent/docker/run &
9 | #
10 | echo "waiting 2 minutes for things to stabilise"
11 | sleep 120
12 | echo "Starting the s2s conector"
13 |
14 |
15 | HEADER="Content-Type: application/json"
16 | DATA=$( cat << EOF
17 | {
18 | "name": "splunk-s2s-source",
19 | "config": {
20 | "connector.class": "io.confluent.connect.splunk.s2s.SplunkS2SSourceConnector",
21 | "topics": "splunk-s2s-events",
22 | "splunk.s2s.port":"9997",
23 | "kafka.topic":"splunk-s2s-events",
24 | "key.converter":"org.apache.kafka.connect.storage.StringConverter",
25 | "value.converter":"org.apache.kafka.connect.json.JsonConverter",
26 | "key.converter.schemas.enable":"false",
27 | "value.converter.schemas.enable":"false",
28 | "confluent.topic.bootstrap.servers":"broker:29092",
29 | "confluent.topic.replication.factor":"1"
30 | }
31 | }
32 | EOF
33 | )
34 |
35 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
36 |
37 | echo "Starting the Splunk Sink connector - HEC Formatted"
38 |
39 | HEADER="Content-Type: application/json"
40 | DATA=$( cat << EOF
41 | {
42 | "name": "SPLUNKSINK_HEC",
43 | "config": {
44 | "confluent.topic.bootstrap.servers": "broker:29092",
45 | "name": "SPLUNKSINK_HEC",
46 | "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
47 | "tasks.max": "1",
48 | "key.converter": "org.apache.kafka.connect.storage.StringConverter",
49 | "value.converter": "org.apache.kafka.connect.storage.StringConverter",
50 | "topics": "CISCO_ASA_FILTER_106023",
51 | "splunk.hec.token": "3bca5f4c-1eff-4eee-9113-ea94c284478a",
52 | "splunk.hec.uri": "https://splunk_search:8088",
53 | "splunk.hec.ssl.validate.certs": "false",
54 | "splunk.hec.json.event.formatted": "true"
55 | }
56 | }
57 | EOF
58 | )
59 |
60 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
61 |
62 | echo "Starting the Splunk Sink connector - Raw"
63 |
64 | HEADER="Content-Type: application/json"
65 | DATA=$( cat << EOF
66 | {
67 | "name": "SPLUNKSINK_RAW",
68 | "config": {
69 | "confluent.topic.bootstrap.servers": "broker:29092",
70 | "name": "SPLUNKSINK_RAW",
71 | "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
72 | "tasks.max": "1",
73 | "key.converter": "org.apache.kafka.connect.storage.StringConverter",
74 | "value.converter": "org.apache.kafka.connect.storage.StringConverter",
75 | "topics": "AGGREGATOR",
76 | "splunk.hec.token": "3bca5f4c-1eff-4eee-9113-ea94c284478b",
77 | "splunk.hec.uri": "https://splunk_search:8088",
78 | "splunk.hec.ssl.validate.certs": "false",
79 | "splunk.hec.json.event.formatted": "false"
80 | }
81 | }
82 | EOF
83 | )
84 |
85 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
86 |
87 | echo "Sleeping forever"
88 | sleep infinity
89 |
90 |
--------------------------------------------------------------------------------
/scripts/submit_s2s_source.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | HEADER="Content-Type: application/json"
4 | DATA=$( cat << EOF
5 | {
6 | "name": "splunk-s2s-source",
7 | "config": {
8 | "connector.class": "io.confluent.connect.splunk.s2s.SplunkS2SSourceConnector",
9 | "topics": "splunk-s2s-events",
10 | "splunk.s2s.port":"9997",
11 | "kafka.topic":"splunk-s2s-events",
12 | "key.converter":"org.apache.kafka.connect.storage.StringConverter",
13 | "value.converter":"org.apache.kafka.connect.json.JsonConverter",
14 | "key.converter.schemas.enable":"false",
15 | "value.converter.schemas.enable":"false",
16 | "confluent.topic.bootstrap.servers":"broker:29092",
17 | "confluent.topic.replication.factor":"1"
18 | }
19 | }
20 | EOF
21 | )
22 |
23 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
24 |
--------------------------------------------------------------------------------
/scripts/submit_splunk_raw_sink.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | HEADER="Content-Type: application/json"
4 | DATA=$( cat << EOF
5 | {
6 | "name": "SPLUNKSINK_COUNTS",
7 | "config": {
8 | "confluent.topic.bootstrap.servers": "broker:29092",
9 | "name": "SPLUNKSINK_COUNTS",
10 | "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
11 | "tasks.max": "1",
12 | "key.converter": "org.apache.kafka.connect.storage.StringConverter",
13 | "value.converter": "org.apache.kafka.connect.storage.StringConverter",
14 | "topics": "AGGREGATOR",
15 | "splunk.hec.token": "c4a03fd1-805e-4392-86cf-155ae87ad27e",
16 | "splunk.hec.uri": "https://splunk_search:8088",
17 | "splunk.hec.ssl.validate.certs": "false",
18 | "splunk.hec.json.event.formatted": "false"
19 | }
20 | }
21 | EOF
22 | )
23 |
24 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
25 |
--------------------------------------------------------------------------------
/scripts/submit_splunk_rich_ssl_sink.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | HEADER="Content-Type: application/json"
4 | DATA=$( cat << EOF
5 | {
6 | "name": "SPLUNKSINK_RICH_SSL",
7 | "config": {
8 | "confluent.topic.bootstrap.servers": "broker:29092",
9 | "name": "SPLUNKSINK_RICH_SSL",
10 | "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
11 | "tasks.max": "1",
12 | "key.converter": "org.apache.kafka.connect.storage.StringConverter",
13 | "value.converter": "org.apache.kafka.connect.storage.StringConverter",
14 | "topics": "RICH_SSL",
15 | "splunk.hec.token": "72ad3ec8-f73a-4db9-b052-1dad9cc63b31",
16 | "splunk.hec.uri": "https://splunk_search:8088",
17 | "splunk.hec.ssl.validate.certs": "false",
18 | "splunk.hec.json.event.formatted": "false"
19 | }
20 | }
21 | EOF
22 | )
23 |
24 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
25 |
--------------------------------------------------------------------------------
/scripts/submit_splunk_sink.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | HEADER="Content-Type: application/json"
4 | DATA=$( cat << EOF
5 | {
6 | "name": "SPLUNKSINK",
7 | "config": {
8 | "confluent.topic.bootstrap.servers": "broker:29092",
9 | "name": "CISCO_ASA",
10 | "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
11 | "tasks.max": "1",
12 | "key.converter": "org.apache.kafka.connect.storage.StringConverter",
13 | "value.converter": "org.apache.kafka.connect.storage.StringConverter",
14 | "topics": "CISCO_ASA_FILTER_106023",
15 | "splunk.hec.token": "3bca5f4c-1eff-4eee-9113-ea94c284478a",
16 | "splunk.hec.uri": "https://splunk_search:8088",
17 | "splunk.hec.ssl.validate.certs": "false",
18 | "splunk.hec.json.event.formatted": "true"
19 | }
20 | }
21 | EOF
22 | )
23 |
24 | curl -X POST -H "${HEADER}" --data "${DATA}" http://localhost:8083/connectors
25 |
--------------------------------------------------------------------------------
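Connect returns 409 Conflict if you POST a config whose name is already registered, so any edit to these sink scripts (tokens, topics) needs a delete-and-resubmit cycle. A sketch, assuming the same localhost:8083 worker:

# Drop the existing connector, then resubmit with the new config
curl -s -X DELETE http://localhost:8083/connectors/SPLUNKSINK
./scripts/submit_splunk_sink.sh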
/splunk-add-on-for-cisco-asa_410.tgz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/splunk-add-on-for-cisco-asa_410.tgz
--------------------------------------------------------------------------------
/splunk-eventgen/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/splunk-eventgen/.DS_Store
--------------------------------------------------------------------------------
/splunk-eventgen/appserver/static/splunk-lab.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/splunk-eventgen/appserver/static/splunk-lab.png
--------------------------------------------------------------------------------
/splunk-eventgen/default/app.conf:
--------------------------------------------------------------------------------
1 | #
2 | # Splunk app configuration file
3 | #
4 |
5 | [install]
6 | is_configured = 0
7 |
--------------------------------------------------------------------------------
/splunk-eventgen/default/data/ui/nav/default.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
9 |
--------------------------------------------------------------------------------
/splunk-eventgen/default/data/ui/views/README:
--------------------------------------------------------------------------------
1 | Add all the views that your app needs in this directory
2 |
--------------------------------------------------------------------------------
/splunk-eventgen/default/data/ui/views/tailreader_check.xml:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/splunk-eventgen/default/data/ui/views/welcome.xml:
--------------------------------------------------------------------------------
1 |
2 | Welcome To Splunk Lab!
3 |
4 |
5 |
6 |
7 |
8 |
9 | Welcome to Splunk Lab!
10 |
11 | About
12 |
13 |
14 |
15 | Splunk Lab is the quick and easy way to spin up an instance of Splunk in Docker to perform ad-hoc data analysis on one or more logfiles or REST/RSS endpoints!
16 |
17 |
18 | Documentation
19 |
20 |
31 |
32 | Examples
33 |
38 |
39 | Support
40 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 | Links of Interest
60 |
61 |
62 |
63 | Search existing events
64 |
65 |
71 |
72 | Dashboards
73 |
74 |
75 | Dashboards that are saved will persist outside of Splunk Lab (default is app/ directory...)
76 |
77 |
78 | Wordcloud App
79 |
80 |
81 | Rest Data Endpoints - Turn these on to get current BitCoin prices, stock prices, Philly Weather, and more!
82 |
83 | Syndication Feed - Turn these on to get RSS feeds from places like CNN, Flickr, and Splunk Questions
84 |
85 |
86 | Machine Learning
87 |
88 | If the splunk-lab-ml Docker image was used, these following modules will be available:
89 |
90 |
109 |
110 |
111 |
112 |
113 |
114 |
115 |
116 |
117 |
118 | Data Sources included in Splunk Lab
119 |
145 |
146 | To get started with these, head over to Syndication or REST
147 | under Settings -> Data Inputs
148 |
149 |
150 |
151 |
152 |
153 |
154 |
155 | Apps built with Splunk Lab
156 |
157 |
158 | Splunk Yelp Reviews - Lets you pull down Yelp reviews for venues and view visualizations and wordclouds of positive/negative reviews in a Splunk dashboard
159 |
160 | Splunk Telegram - This app lets you run Splunk against messages from Telegram groups and generate graphs and word clouds based on the activity in them.
161 |
162 | Splunk Network Health Check - Pings 1 or more hosts and graphs the results in Splunk so you can monitor network connectivity over time.
163 | ...plus a few other things that I'm not quite ready to release yet. :-)
164 |
165 |
166 |
167 |
168 |
169 |
170 |
171 |
172 |
173 |
174 | Copyrights
175 |
176 |
177 | Splunk is copyright by Splunk, Inc. Please stay within the confines of the 500 MB/day free license when using Splunk Lab, unless you brought your own license along.
178 | The various apps are copyright by the creators of those apps.
179 |
180 |
181 |
182 |
183 |
184 |
185 |
186 |
187 |
188 |
--------------------------------------------------------------------------------
/splunk-eventgen/default/eventgen.conf:
--------------------------------------------------------------------------------
1 | [cisco_asa.sample]
2 | mode = replay
3 | count = -1
4 | timeMultiple = 1.5
5 | sampletype = raw
6 | outputMode = tcpout
7 | # outputMode = s2s
8 | # splunkHost = connect
9 | # splunkPort = 9997
10 | source = udp:514
11 | host = NETWORK_FW
12 | index = main
13 | sourcetype = cisco:asa
14 | tcpDestinationHost = splunk_uf1
15 | tcpDestinationPort = 3333
16 | token.0.token = \w{3} \d{2} \d{2}:\d{2}:\d{2}
17 | token.0.replacementType = replaytimestamp
18 | token.0.replacement = %b %d %H:%M:%S
19 |
--------------------------------------------------------------------------------
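The token.0 stanza above is what keeps replayed events current: the regex matches the leading syslog timestamp in each sample line, and eventgen substitutes a replay-time value in the %b %d %H:%M:%S format. A quick way to see exactly what the token matches, assuming GNU grep with PCRE support and using a made-up cisco:asa-style line:

# Extract the portion of an event that token.0.token would rewrite
echo 'Aug 07 13:22:17 NETWORK_FW %ASA-4-106023: Deny udp src inside:10.0.69.12/514 dst outside:1.16.0.0/514' \
  | grep -oP '^\w{3} \d{2} \d{2}:\d{2}:\d{2}'
# -> Aug 07 13:22:17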
/splunk-eventgen/metadata/default.meta:
--------------------------------------------------------------------------------
1 |
2 | # Application-level permissions
3 |
4 | []
5 | access = read : [ * ], write : [ admin, power ]
6 |
7 | ### EVENT TYPES
8 |
9 | [eventtypes]
10 | export = system
11 |
12 |
13 | ### PROPS
14 |
15 | [props]
16 | export = system
17 |
18 |
19 | ### TRANSFORMS
20 |
21 | [transforms]
22 | export = system
23 |
24 |
25 | ### LOOKUPS
26 |
27 | [lookups]
28 | export = system
29 |
30 |
31 | ### VIEWSTATES: even normal users should be able to create shared viewstates
32 |
33 | [viewstates]
34 | access = read : [ * ], write : [ * ]
35 | export = system
36 |
--------------------------------------------------------------------------------
/splunk-eventgen/metadata/local.meta:
--------------------------------------------------------------------------------
1 | [app/ui]
2 | version = 7.2.5
3 | modtime = 1555470805.598195000
4 |
5 | [app/launcher]
6 | version = 7.2.5
7 | modtime = 1555470805.599626000
8 |
9 | [views/welcome]
10 | owner = admin
11 | version = 8.0.1
12 | modtime = 1604447185.494287000
13 |
14 | [nav/default]
15 | version = 7.2.5
16 | modtime = 1558665628.774204000
17 |
18 | []
19 | access = read : [ * ], write : [ admin, power ]
20 | export = system
21 | version = 8.0.1
22 | modtime = 1606002449.098386000
23 |
24 | [props/nginx/EXTRACT-src%2Chttp_method%2Curi_path%2Cstatus%2Cbytes_out%2Chttp_user_agent]
25 | access = read : [ * ], write : [ admin ]
26 | export = none
27 | owner = admin
28 | version = 8.0.1
29 | modtime = 1606012450.513383000
30 |
31 | [props/nginx/FIELDALIAS-ip]
32 | export = none
33 | owner = admin
34 | version = 8.0.1
35 | modtime = 1606012526.404080000
36 |
37 | [inputs/rest%3A%2F%2FTEST]
38 | owner = admin
39 | version = 8.1.0.1
40 | modtime = 1606101676.205723000
41 |
42 | [inputs/rest%3A%2F%2FTEST3]
43 | owner = admin
44 | version = 8.1.0.1
45 | modtime = 1606101377.467165000
46 |
--------------------------------------------------------------------------------
/splunk-eventgen/samples/external_ips.sample:
--------------------------------------------------------------------------------
1 | 1.16.0.0
2 | 1.19.11.11
3 | 12.130.60.4
4 | 12.130.60.5
5 | 125.17.14.100
6 | 128.241.220.82
7 | 130.253.37.97
8 | 131.178.233.243
9 | 141.146.8.66
10 | 142.162.221.28
11 | 142.233.200.21
12 | 193.33.170.23
13 | 194.146.236.22
14 | 194.215.205.19
15 | 194.8.74.23
16 | 195.216.243.24
17 | 195.69.160.22
18 | 195.69.252.22
19 | 195.80.144.22
20 | 200.6.134.23
21 | 201.122.42.235
22 | 201.28.109.162
23 | 201.3.120.132
24 | 201.42.223.29
25 | 202.164.25.24
26 | 203.223.0.20
27 | 203.92.58.136
28 | 212.235.92.150
29 | 212.27.63.151
30 | 217.132.169.69
31 | 217.197.192.20
32 | 27.1.0.0
33 | 27.1.11.11
34 | 27.101.0.0
35 | 27.101.11.11
36 | 27.102.0.0
37 | 27.102.11.11
38 | 27.160.0.0
39 | 27.175.11.11
40 | 27.176.0.0
41 | 27.35.0.0
42 | 27.35.11.11
43 | 27.96.128.0
44 | 27.96.191.11
45 | 59.162.167.100
46 | 62.216.64.19
47 | 64.66.0.20
48 | 69.80.0.18
49 | 74.125.19.106
50 | 81.11.191.113
51 | 82.245.228.36
52 | 84.34.159.23
53 | 86.212.199.60
54 | 86.9.190.90
55 | 87.194.216.51
56 | 87.240.128.18
57 | 89.11.192.18
58 | 89.167.143.32
59 | 90.205.111.169
60 | 91.199.80.24
61 | 91.205.40.22
62 | 91.208.184.24
63 | 91.214.92.22
64 | 92.1.170.135
65 | 94.229.0.20
66 | 94.229.0.21
67 |
--------------------------------------------------------------------------------
/splunk-eventgen/samples/synthetic_ips.sample:
--------------------------------------------------------------------------------
1 | 10.0.69.1
2 | 10.0.69.1
3 | 10.0.69.2
4 | 10.0.69.3
5 | 10.0.69.4
6 | 10.0.69.5
7 | 10.0.69.6
8 | 10.0.69.7
9 | 10.0.69.8
10 | 10.0.69.9
11 | 10.0.69.10
12 | 10.0.69.11
13 | 10.0.69.12
14 | 10.0.69.13
15 | 10.0.69.14
16 | 10.0.69.15
17 | 10.0.69.16
18 | 10.0.69.17
19 | 10.0.69.18
20 | 10.0.69.19
21 | 10.0.69.20
22 | 10.0.69.21
23 | 10.0.69.22
24 | 10.0.69.23
25 | 10.0.69.24
26 | 10.0.69.25
27 | 10.0.69.26
28 | 10.0.69.27
29 | 10.0.69.28
30 | 10.0.69.29
31 | 10.0.69.30
32 | 10.0.69.31
33 | 10.0.69.32
34 | 10.0.69.33
35 | 10.0.69.34
36 | 10.0.69.35
37 | 10.0.69.36
38 | 10.0.69.37
39 | 10.0.69.38
40 | 10.0.69.39
41 | 10.0.69.40
42 | 10.0.69.41
43 | 10.0.69.42
44 | 10.0.69.43
45 | 10.0.69.44
46 | 10.0.69.45
47 | 10.0.69.46
48 | 10.0.69.47
49 | 10.0.69.48
50 | 10.0.69.49
51 | 10.0.69.50
52 | 10.0.69.51
53 | 10.0.69.52
54 | 10.0.69.53
55 | 10.0.69.54
56 | 10.0.69.55
57 | 10.0.69.56
58 | 10.0.69.57
59 | 10.0.69.58
60 | 10.0.69.59
61 | 10.0.69.60
62 | 10.0.69.61
63 | 10.0.69.62
64 | 10.0.69.63
65 | 10.0.69.64
66 | 10.0.69.65
67 | 10.0.69.66
68 | 10.0.69.67
69 | 10.0.69.68
70 | 10.0.69.69
71 | 10.0.69.70
72 | 10.0.69.71
73 | 10.0.69.72
74 | 10.0.69.73
75 | 10.0.69.74
76 | 10.0.69.75
77 | 10.0.69.76
78 | 10.0.69.77
79 | 10.0.69.78
80 | 10.0.69.79
81 | 10.0.69.80
82 | 10.0.69.81
83 | 10.0.69.82
84 | 10.0.69.83
85 | 10.0.69.84
86 | 10.0.69.85
87 | 10.0.69.86
88 | 10.0.69.87
89 | 10.0.69.88
90 | 10.0.69.89
91 | 10.0.69.90
92 | 10.0.69.91
93 | 10.0.69.92
94 | 10.0.69.93
95 | 10.0.69.94
96 | 10.0.69.95
97 | 10.0.69.96
98 | 10.0.69.97
99 | 10.0.69.98
100 | 10.0.69.99
101 | 10.0.69.100
102 | 10.0.69.101
103 | 10.0.69.102
104 | 10.0.69.103
105 | 10.0.69.104
106 | 10.0.69.105
107 | 10.0.69.106
108 | 10.0.69.107
109 | 10.0.69.108
110 | 10.0.69.109
111 | 10.0.69.110
112 | 10.0.69.111
113 | 10.0.69.112
114 | 10.0.69.113
115 | 10.0.69.114
116 | 10.0.69.115
117 | 10.0.69.116
118 | 10.0.69.117
119 | 10.0.69.118
120 | 10.0.69.119
121 | 10.0.69.120
122 | 10.0.69.121
123 | 10.0.69.122
124 | 10.0.69.123
125 | 10.0.69.124
126 | 10.0.69.125
127 | 10.0.69.126
128 | 10.0.69.127
129 | 10.0.69.128
130 | 10.0.69.129
131 | 10.0.69.130
132 | 10.0.69.131
133 | 10.0.69.132
134 | 10.0.69.133
135 | 10.0.69.134
136 | 10.0.69.135
137 | 10.0.69.136
138 | 10.0.69.137
139 | 10.0.69.138
140 | 10.0.69.139
141 | 10.0.69.140
142 | 10.0.69.141
143 | 10.0.69.142
144 | 10.0.69.143
145 | 10.0.69.144
146 | 10.0.69.145
147 | 10.0.69.146
148 | 10.0.69.147
149 | 10.0.69.148
150 | 10.0.69.149
151 | 10.0.69.150
152 | 10.0.69.151
153 | 10.0.69.152
154 | 10.0.69.153
155 | 10.0.69.154
156 | 10.0.69.155
157 | 10.0.69.156
158 | 10.0.69.157
159 | 10.0.69.158
160 | 10.0.69.159
161 | 10.0.69.160
162 | 10.0.69.161
163 | 10.0.69.162
164 | 10.0.69.163
165 | 10.0.69.164
166 | 10.0.69.165
167 | 10.0.69.166
168 | 10.0.69.167
169 | 10.0.69.168
170 | 10.0.69.169
171 | 10.0.69.170
172 | 10.0.69.171
173 | 10.0.69.172
174 | 10.0.69.173
175 | 10.0.69.174
176 | 10.0.69.175
177 | 10.0.69.176
178 | 10.0.69.177
179 | 10.0.69.178
180 | 10.0.69.179
181 | 10.0.69.180
182 | 10.0.69.181
183 | 10.0.69.182
184 | 10.0.69.183
185 | 10.0.69.184
186 | 10.0.69.185
187 | 10.0.69.186
188 | 10.0.69.187
189 | 10.0.69.188
190 | 10.0.69.189
191 | 10.0.69.190
192 | 10.0.69.191
193 | 10.0.69.192
194 | 10.0.69.193
195 | 10.0.69.194
196 | 10.0.69.195
197 | 10.0.69.196
198 | 10.0.69.197
199 | 10.0.69.198
200 | 10.0.69.199
201 | 10.0.69.200
202 | 10.0.69.201
203 | 10.0.69.202
204 | 10.0.69.203
205 | 10.0.69.204
206 | 10.0.69.205
207 | 10.0.69.206
208 | 10.0.69.207
209 | 10.0.69.208
210 | 10.0.69.209
211 | 10.0.69.210
212 | 10.0.69.211
213 | 10.0.69.212
214 | 10.0.69.213
215 | 10.0.69.214
216 | 10.0.69.215
217 | 10.0.69.216
218 | 10.0.69.217
219 | 10.0.69.218
220 | 10.0.69.219
221 | 10.0.69.220
222 | 10.0.69.221
223 | 10.0.69.222
224 | 10.0.69.223
225 | 10.0.69.224
226 | 10.0.69.225
227 | 10.0.69.226
228 | 10.0.69.227
229 | 10.0.69.228
230 | 10.0.69.229
231 | 10.0.69.230
232 | 10.0.69.231
233 | 10.0.69.232
234 | 10.0.69.233
235 | 10.0.69.234
236 | 10.0.69.235
237 | 10.0.69.236
238 | 10.0.69.237
239 | 10.0.69.238
240 | 10.0.69.239
241 | 10.0.69.240
242 | 10.0.69.241
243 | 10.0.69.242
244 | 10.0.69.243
245 | 10.0.69.244
246 | 10.0.69.245
247 | 10.0.69.246
248 | 10.0.69.247
249 | 10.0.69.248
250 | 10.0.69.249
251 | 10.0.69.250
252 | 10.0.69.251
253 | 10.0.69.252
254 | 10.0.69.253
255 | 10.0.69.254
256 | 10.0.69.255
257 |
--------------------------------------------------------------------------------
/splunk-search/local/inputs.conf:
--------------------------------------------------------------------------------
1 | [http://confluent_hec]
2 | disabled = 0
3 | token = 3bca5f4c-1eff-4eee-9113-ea94c284478a
4 |
5 | [http://confluent_raw]
6 | disabled = 0
7 | token = 3bca5f4c-1eff-4eee-9113-ea94c284478b
--------------------------------------------------------------------------------
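These two stanzas define the HEC tokens the sink connectors authenticate with (confluent_hec matches the token in submit_splunk_sink.sh). A smoke test of the endpoint, assuming Splunk's HEC port 8088 is published to the host:

# Post a test event directly to HEC; -k skips cert validation,
# mirroring splunk.hec.ssl.validate.certs=false in the sink configs
curl -k https://localhost:8088/services/collector/event \
  -H 'Authorization: Splunk 3bca5f4c-1eff-4eee-9113-ea94c284478a' \
  -d '{"event": "hello from curl", "sourcetype": "manual"}'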
/splunk-uf1/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/JohnnyMirza/confluent_splunk_demo/d11618c5eab0845f8642e78769296bb3a0d714d8/splunk-uf1/.DS_Store
--------------------------------------------------------------------------------
/splunk-uf1/local/inputs.conf:
--------------------------------------------------------------------------------
1 | [tcp://3333]
2 | connection_host = none
3 | index = main
4 | sourcetype = cisco:asa
5 | source = udp:514
6 | host = boundary-fw-1
7 |
--------------------------------------------------------------------------------
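Everything arriving on TCP 3333 is stamped with this cisco:asa metadata before the outputs.conf below forwards it to connect:9997. A sketch for pushing a single hand-written event through the same path; it assumes splunk_uf1:3333 is reachable (it is from inside the compose network), so run it from another container or adjust to a published port:

# Feed one raw line into the UF's TCP input
echo 'Aug 07 13:22:17 NETWORK_FW %ASA-4-106023: Deny udp src inside:10.0.69.12/514 dst outside:1.16.0.0/514' \
  | nc -q 1 splunk_uf1 3333   # GNU netcat; with BSD nc use -w 1 instead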
/splunk-uf1/local/outputs.conf:
--------------------------------------------------------------------------------
1 | [tcpout:confluent_s2s]
2 | server = connect:9997
3 |
4 | [tcpout]
5 | defaultGroup=confluent_s2s
6 |
--------------------------------------------------------------------------------
/statements.sql:
--------------------------------------------------------------------------------
1 | -- example statements
2 |
3 | CREATE STREAM SPLUNK (
4 | `event` VARCHAR,
5 | `time` BIGINT,
6 | `host` VARCHAR,
7 | `source` VARCHAR,
8 | `sourcetype` VARCHAR,
9 | `index` VARCHAR
10 | ) WITH (
11 | KAFKA_TOPIC='splunk-s2s-events', VALUE_FORMAT='JSON');
12 |
13 |
14 | CREATE STREAM CISCO_ASA as SELECT
15 | `event`,
16 | `source`,
17 | `sourcetype`,
18 | `index` FROM SPLUNK
19 | where `sourcetype` = 'cisco:asa'
20 | EMIT CHANGES;
21 |
22 |
23 | CREATE STREAM FIREWALLS (
24 | `src` VARCHAR,
25 | `messageID` BIGINT,
26 | `index` VARCHAR,
27 | `dest` VARCHAR,
28 | `hostname` VARCHAR,
29 | `protocol` VARCHAR,
30 | `action` VARCHAR,
31 | `srcport` BIGINT,
32 | `sourcetype` VARCHAR,
33 | `destport` BIGINT,
34 | `timestamp` VARCHAR
35 | ) WITH (
36 | KAFKA_TOPIC='firewalls', value_format='JSON'
37 | );
38 |
39 |
40 |
41 | CREATE TABLE AGGREGATOR WITH (KAFKA_TOPIC='AGGREGATOR', KEY_FORMAT='JSON', PARTITIONS=1, REPLICAS=1) AS SELECT
42 | `hostname`,
43 | `messageID`,
44 | `action`,
45 | `src`,
46 | `dest`,
47 | `destport`,
48 | `sourcetype`,
49 | as_value(`hostname`) as hostname,
50 | as_value(`messageID`) as messageID,
51 | as_value(`action`) as action,
52 | as_value(`src`) as src,
53 | as_value(`dest`) as dest,
54 | as_value(`destport`) as dest_port,
55 | as_value(`sourcetype`) as sourcetype,
56 | TIMESTAMPTOSTRING(WINDOWSTART, 'yyyy-MM-dd HH:mm:ss', 'UTC') TIMESTAMP,
57 | 300 DURATION,
58 | COUNT(*) COUNTS
59 | FROM FIREWALLS
60 | WINDOW TUMBLING ( SIZE 300 SECONDS )
61 | GROUP BY `sourcetype`, `action`, `hostname`, `messageID`, `src`, `dest`, `destport`
62 | EMIT CHANGES;
63 |
64 | -- alternative FIREWALLS definition: keys on messageID and adds location (drop the stream above first)
65 | CREATE STREAM FIREWALLS (
66 | `src` VARCHAR,
67 | `messageID` BIGINT KEY,
68 | `index` VARCHAR,
69 | `dest` VARCHAR,
70 | `hostname` VARCHAR,
71 | `protocol` VARCHAR,
72 | `action` VARCHAR,
73 | `srcport` BIGINT,
74 | `location` VARCHAR,
75 | `sourcetype` VARCHAR,
76 | `destport` BIGINT,
77 | `timestamp` VARCHAR
78 | ) WITH (
79 | KAFKA_TOPIC='firewalls', value_format='JSON', key_format='JSON'
80 | );
81 |
82 |
83 | -- FW_DENY stream
84 | CREATE STREAM FW_DENY WITH (KAFKA_TOPIC='FW_DENY', PARTITIONS=1, REPLICAS=1) AS SELECT *
85 | FROM FIREWALLS
86 | WHERE (FIREWALLS.`action` = 'Deny')
87 | EMIT CHANGES;
88 |
89 |
90 |
91 | -- RegEx (named capture groups) used to parse cisco:asa events into the FIREWALLS fields above:
92 | ^(?<timestamp>\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2})\s(?<hostname>[^\s]+)\s\%ASA-\d-(?<messageID>[^:]+):\s(?<action>[^\s]+)\s(?<protocol>[^\s]+)\ssrc\sinside:(?<src>[0-9\.]+)\/(?<srcport>[0-9]+)\sdst\soutside:(?<dest>[0-9\.]+)\/(?<destport>[0-9]+)
--------------------------------------------------------------------------------
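Before wiring SPLUNKSINK_COUNTS to Splunk, it is worth watching the windowed counts land on the AGGREGATOR topic directly. A minimal sketch, assuming the broker container is named broker, matching the bootstrap.servers value used throughout:

# Tail the topic that backs the AGGREGATOR table
docker exec broker kafka-console-consumer \
  --bootstrap-server broker:29092 \
  --topic AGGREGATOR --from-beginning --max-messages 5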