├── .dockerignore
├── .github
└── FUNDING.yml
├── .gitignore
├── CODE_OF_CONDUCT.md
├── LICENSE
├── README.md
├── bin
├── attach.sh
├── build.sh
├── clean.sh
├── create-1-million-events.py
├── create-test-logfiles.sh
├── devel.sh
├── download.sh
├── kill.sh
├── lib.sh
├── logs.sh
├── pull.sh
├── push.sh
├── tarsplit
└── upload-file-to-s3.sh
├── docker
├── 0-0-core
├── 0-1-splunk
├── 0-2-apps
├── 1-splunk-lab
└── 1-splunk-lab-ml
├── entrypoint.sh
├── go.sh
├── img
├── app-tree.png
├── bella-italia.png
├── facebook-glassdoor.png
├── fitbit-sleep-dashboard.png
├── graph.txt
├── network-huge-outage.png
├── pa-furry-stats.jpg
├── snepchat-tag-cloud.jpg
├── splunk-cnn-headlines.png
├── splunk-lab.png
├── splunk-logo.jpg
├── splunk-rest-api-input.png
└── splunk-syndication-feed.png
├── logs
└── empty.txt
├── sample-app
├── .gitignore
├── Dockerfile
├── README.md
├── bin
│ ├── build.sh
│ └── devel.sh
├── go.sh
├── logs
│ └── empty
└── sample-app
│ ├── appserver
│ └── static
│ │ └── splunk-lab.png
│ ├── default
│ ├── app.conf
│ ├── authorize.conf
│ ├── data
│ │ └── ui
│ │ │ ├── nav
│ │ │ └── default.xml
│ │ │ └── views
│ │ │ ├── README
│ │ │ ├── tailreader_check.xml
│ │ │ └── welcome.xml
│ ├── eventgen.conf
│ └── props.conf
│ ├── local
│ ├── metadata
│ ├── default.meta
│ └── local.meta
│ ├── samples
│ ├── external_ips.sample
│ ├── fake.sample
│ ├── nginx.sample
│ └── synthetic_ips.sample
│ └── user-prefs.conf
├── splunk-config
├── health.conf
├── inputs.conf.in
├── inputs.conf.in.rest
├── inputs.conf.in.syndication
├── limits.conf
├── props.conf
├── server.conf
├── splunk-launch.conf
├── ui-prefs.conf
├── user-prefs.conf
├── user-seed.conf.in
└── web.conf.in
├── splunk-lab-app
├── appserver
│ └── static
│ │ └── splunk-lab.png
├── default
│ ├── app.conf
│ ├── authorize.conf
│ ├── data
│ │ └── ui
│ │ │ ├── nav
│ │ │ └── default.xml
│ │ │ └── views
│ │ │ ├── README
│ │ │ ├── tailreader_check.xml
│ │ │ └── welcome.xml
│ ├── eventgen.conf
│ └── props.conf
├── metadata
│ ├── default.meta
│ └── local.meta
├── samples
│ ├── external_ips.sample
│ ├── nginx.sample
│ └── synthetic_ips.sample
└── user-prefs.conf
└── vendor
└── README.md
/.dockerignore:
--------------------------------------------------------------------------------
1 |
2 | .git/
3 |
4 | data/
5 | app/
6 |
7 | logs/
8 | *.raw
9 | *.log
10 | *.txt
11 |
12 | devel/*
13 |
14 | cache/
15 | !cache/deploy/
16 |
17 |
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
1 | # These are supported funding model platforms
2 |
3 | github: dmuth
4 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | # Vim
3 | *.swp
4 | *~
5 |
6 | data/
7 | logs/
8 |
9 | # Steve Jobs
10 | .DS_Store
11 |
12 | !splunk-lab-app/default/data
13 | splunk-lab-app/local
14 |
15 | cache/
16 |
17 | # SSL Certs
18 | *.pem
19 | *.key
20 |
21 | # Don't store local Splunk settings
22 | app/
23 |
24 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
6 |
7 | ## Our Standards
8 |
9 | Examples of behavior that contributes to creating a positive environment include:
10 |
11 | * Using welcoming and inclusive language
12 | * Being respectful of differing viewpoints and experiences
13 | * Gracefully accepting constructive criticism
14 | * Focusing on what is best for the community
15 | * Showing empathy towards other community members
16 |
17 | Examples of unacceptable behavior by participants include:
18 |
19 | * The use of sexualized language or imagery and unwelcome sexual attention or advances
20 | * Trolling, insulting/derogatory comments, and personal or political attacks
21 | * Public or private harassment
22 | * Publishing others' private information, such as a physical or electronic address, without explicit permission
23 | * Other conduct which could reasonably be considered inappropriate in a professional setting
24 |
25 | ## Our Responsibilities
26 |
27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
28 |
29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
30 |
31 | ## Scope
32 |
33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
34 |
35 | ## Enforcement
36 |
37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at dmuth at dmuth DOT org. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
38 |
39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
40 |
41 | ## Attribution
42 |
43 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
44 |
45 | [homepage]: http://contributor-covenant.org
46 | [version]: http://contributor-covenant.org/version/1/4/
47 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | # Splunk Lab
5 |
6 | This project lets you stand up a Splunk instance in Docker on a quick and dirty basis.
7 |
8 | But what is Splunk? Splunk is a platform for big data collection and analytics. You feed events from syslog, webserver logs, or application logs into Splunk, then use queries to extract meaningful insights from that data.
9 |
10 |
11 | ## Quick Start!
12 |
13 | Paste either of these on the command line:
14 |
15 | `bash <(curl -s https://raw.githubusercontent.com/dmuth/splunk-lab/master/go.sh)`
16 |
17 | `bash <(curl -Ls https://bit.ly/splunklab)`
18 |
19 | ...and the script will print out which directory it will ingest logs from, your password, and so on. Follow the on-screen
20 | instructions for setting environment variables and you'll be up and running in no time! Whatever logs you had sitting in your `logs/` directory will be searchable in Splunk with the search `index=main`.
21 |
22 | If you want to see neat things you can do in Splunk Lab, check out the Cookbook section.
23 |
24 | Also, the script will create a directory called `bin/` with some helper scripts in it. Be sure to check them out!
25 |
26 |
27 | ### Useful links after starting
28 |
29 | - [https://localhost:8000/](https://localhost:8000/) - Default port to log into the local instance. Username is `admin`, password is what was set when starting Splunk Lab.
30 | - [Splunk Dashboard Examples](https://localhost:8000/en-US/app/simple_xml_examples/contents) - Wanna see what you can do with Splunk? Here are some example dashboards.
31 |
32 |
33 | ## Features
34 |
35 | - App dashboards can be stored in the local filesystem (they don't disappear when the container exits)
36 | - Ingested data can be stored in the local filesystem
37 | - Multiple REST and RSS endpoints "built in" to provide sources of data ingestion
38 | - Integration with REST API Modular Input
39 | - Splunk Machine Learning Toolkit included
40 | - `/etc/hosts` can be appended to with local IP/hostname entries
41 | - Ships with Eventgen to populate your index with fake webserver events for testing.
42 |
43 |
44 | ## Screenshots
45 |
46 | These are screenshots with actual data from production apps which I built on top of Splunk Lab:
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 |
55 |
56 |
57 |
58 |
59 |
60 |
61 |
62 |
63 | ## Splunk Lab Cookbook
64 |
65 | What can you do with Splunk Lab? Here are a few examples:
66 |
67 | ### Ingest some logs for viewing, searching, and analysis
68 |
69 | - Drop your logs into the `logs/` directory.
70 | - `bash <(curl -Ls https://bit.ly/splunklab)`
71 | - Go to https://localhost:8000/
72 | - Ingested data will be written to `data/`, which will persist between runs.
73 |
74 | ### Ingest some logs for viewing, searching, and analysis but DON'T keep ingested data between runs
75 |
76 | - `SPLUNK_DATA=no bash <(curl -Ls https://bit.ly/splunklab)`
77 | - Note that `data/` will not be written to and launching a new container will cause `logs/` to be indexed again.
78 | - This will increase the ingestion rate on Docker for OS X, as there are some issues with the filesystem driver in Docker for OS X.
79 |
80 | ### Play around with synthetic webserver data
81 |
82 | - `SPLUNK_EVENTGEN=1 bash <(curl -Ls https://bit.ly/splunklab)`
83 | - Fake webserver logs will be written every 10 seconds and can be viewed with the query `index=main sourcetype=nginx`. The logs are based on actual HTTP requests which have come into the webserver hosting my blog.
84 |
85 | ### Adding Hostnames into /etc/hosts
86 |
87 | - Edit a local hosts file
88 | - `ETC_HOSTS=./hosts bash <(curl -Ls https://bit.ly/splunklab)`
89 | - This can be used in conjunction with something like Splunk Network Monitor to ping hosts that don't have DNS names, such as your home's webcam. :-)
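The file passed via `ETC_HOSTS` uses the standard `/etc/hosts` format of one IP/hostname pair per line. A minimal example (these addresses and names are made up):

```
192.168.1.50    webcam
192.168.1.1     router
```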
90 |
91 | ### Get the Docker command line for any of the above
92 |
93 | - Run any of the above with `PRINT_DOCKER_CMD=1` set, and the Docker command line that's used will be written to stdout.
94 |
95 | ### Run Splunk Lab in Development Mode with a bash Shell
96 |
97 | This would normally be done with the script `./bin/devel.sh` when running from the repo,
98 | but if you're running Splunk Lab just with the Docker image, here's how to do it:
99 |
100 | `docker run -p 8000:8000 -e SPLUNK_PASSWORD=password1 -v $(pwd)/data:/data -v $(pwd)/logs:/logs --name splunk-lab --rm -it -v $(pwd):/mnt -e SPLUNK_DEVEL=1 dmuth1/splunk-lab bash`
101 |
102 | This is useful mainly if you want to poke around in Splunk Lab while it's running. Note that you
103 | could always just run `docker exec -it splunk-lab bash` instead of doing all of the above. :-)
104 |
105 |
106 | ## Splunk Apps Included
107 |
108 | The following Splunk apps are included in this Docker image:
109 |
110 | - Eventgen
111 | - Splunk Dashboard Examples
112 |
115 | - REST API Modular Input (requires registration)
116 | - Wordcloud Custom Visualization
117 | - Slack Notification Alert
118 | - Splunk Machine Learning Toolkit
119 | - Python for Scientific Computing (for Linux 64-bit)
120 | - NLP Text Analytics
121 | - Halo - Custom Visualization
122 | - Sankey Diagram - Custom Visualization
123 |
124 |
125 | All apps are covered under their own license. Please check the Apps page
126 | for more info.
127 |
128 | Splunk has its own license. Please abide by it.
129 |
130 |
131 | ## Free Sources of Data
132 |
133 | I put together this curated list of free sources of data which can be pulled into Splunk
134 | via one of the included apps:
135 |
136 | - RSS
137 | - Recent questions posted to Splunk Answers
138 | - CNN RSS feeds
139 | - Flickr's Public feed
140 | - Public Photos
141 | - Public photos tagged "cheetah"
142 | - REST (you will need to set `$REST_KEY` when starting Splunk Lab)
143 | - Non-streaming
144 | - Philadelphia Public Transit API
145 | - Regional Rail Train Data
146 | - Coinbase API
147 | - National Weather Service
148 | - Philadelphia Forecast
149 | - Philadelphia Hourly Forecast
150 | - Alpha Vantage - Free stock quotes
151 | - Streaming
152 | - Meetup RSVPs
153 | - RSVP Endpoint
154 |
155 |
156 | ## Apps Built With Splunk Lab
157 |
158 | Since building Splunk Lab, I have used it as the basis for building other projects:
159 |
160 | - SEPTA Stats
161 | - Website with real-time stats on Philadelphia Regional Rail.
162 | - Pulled down over 60 million train data points over 4 years using Splunk.
163 | - Splunk Twint
164 | - Splunk dashboards for Twitter timelines downloaded by Twint. This is now a part of the TWINT Project.
165 | - Splunk Yelp Reviews
166 | - This project lets you pull down Yelp reviews for venues and view visualizations and wordclouds of positive/negative reviews in a Splunk dashboard.
167 | - Splunk Glassdoor Reviews
168 | - Similar to Splunk Yelp Reviews, this project lets you pull down company reviews from Glassdoor and Splunk them.
169 | - Splunk Telegram
170 | - This app lets you run Splunk against messages from Telegram groups and generate graphs and word clouds based on the activity in them.
171 | - Splunk Network Health Check
172 | - Pings 1 or more hosts and graphs the results in Splunk so you can monitor network connectivity over time.
173 | - Splunk Fitbit
174 | - Analyzes data from your Fitbit
175 | - Splunk for AWS S3 Server Access Logs
176 | - App to analyze AWS S3 Access Logs
177 |
178 |
179 | Here's all of the above, presented as a graph:
180 |
181 |
182 |
183 |
184 | ## Building Your Own Apps Based on Splunk Lab
185 |
186 | A sample app (and instructions on how to use it) are in the
187 | sample-app directory.
188 | Feel free to expand on that app for your own apps.
189 |
190 |
191 | ## A Word About Security
192 |
193 | HTTPS is turned on by default. Passwords such as `password` and `12345` are not permitted.
194 |
195 | Please, for the love of god, use a strong password if you are deploying
196 | this on a public-facing machine.
197 |
198 |
199 | ## FAQ
200 |
201 | ### Can I get a valid SSL cert on localhost?
202 |
203 | Yes, you can!
204 |
205 | First, install mkcert and then run `mkcert -install && mkcert localhost 127.0.0.1 ::1` to generate a local CA and a cert/key combo for localhost.
206 |
207 | Then, when you run Splunk Lab, set the environment variables `SSL_KEY` and `SSL_CERT` and those files will be pulled into Splunk Lab.
208 |
209 | Example: `SSL_KEY=./localhost.key SSL_CERT=./localhost.pem ./go.sh`
210 |
211 |
212 | ### How do I get this to work in Vagrant?
213 |
214 | TL;DR If you're on a Mac, use OrbStack.
215 |
216 | If you're running Docker in Vagrant, or just plain Vagrant, you'll run into issues, because Splunk does some low-level filesystem operations on its data directory that fail on Vagrant shared folders, resulting in errors in `splunkd.log` that look like this:
217 |
218 | ```
219 | 11-15-2022 01:45:31.042 +0000 ERROR StreamGroup [217 IndexerTPoolWorker-0] - failed to drain remainder total_sz=24 bytes_freed=7977 avg_bytes_per_iv=332 sth=0x7fb586dfdba0: [1668476729, /opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_1, 0x7fb587f7e840] reason=st_sync failed rc=-6 warm_rc=[-35,1]
220 | ```
221 |
222 | To work around this, disable sharing of Splunk's data directory by setting `SPLUNK_DATA=no`, like this:
223 |
224 | `SPLUNK_DATA=no SPLUNK_EVENTGEN=yes ./go.sh`
225 |
226 | By doing this, any data ingested into Splunk will not persist between runs. But to be fair, Splunk Lab is meant for development use of Splunk, not long-term usage.
227 |
228 |
229 | ### Does this work on Macs?
230 |
231 | Sure does! I built this on a Mac. :-)
232 |
233 | For best results, run under OrbStack.
234 |
235 |
236 | ## Development
237 |
238 | I wrote a series of helper scripts in `bin/` to make the process easier:
239 |
240 | - `./bin/download.sh` - Download tarballs of various apps and splits some of them into chunks
241 | - If downloading a new version of Splunk, edit `bin/lib.sh` and bump the `SPLUNK_VERSION` and `SPLUNK_BUILD` variables.
242 | - `./bin/build.sh [ --force ]` - Build the containers.
243 | - Note that this downloads packages from an AWS S3 bucket that I created. This bucket is set to "Requester Pays", so you'll need to make sure the `aws` CLI app is set up.
244 | - If you are (re)building Splunk Lab, you'll want to use `--force`.
245 | - `./bin/upload-file-to-s3.sh` - Upload a specific file to S3, for rolling out new versions of apps.
246 | - `./bin/devel.sh` - Build and tag the container, then start it with an interactive bash shell.
247 | - This is a wrapper for the above-mentioned `go.sh` script. Any environment variables that work there will work here.
248 | - **To force rebuilding a container during development** touch the associated Dockerfile in `docker/`. E.g. `touch docker/1-splunk-lab` to rebuild the contents of that container.
249 | - `./bin/push.sh` - Tag and push the container.
250 | - `./bin/create-1-million-events.py` - Create 1 million events in the file `1-million-events.txt` in the current directory.
251 | - Even if the file is not in `logs/`, as long as it is reachable from inside the Docker container, it can be oneshotted into Splunk with the following command: `/opt/splunk/bin/splunk add oneshot ./1-million-events.txt -index main -sourcetype oneshot-0001`
252 | - `./bin/kill.sh` - Kill a running `splunk-lab` container.
253 | - `./bin/attach.sh` - Attach to a running `splunk-lab` container.
254 | - `./bin/clean.sh` - Remove `logs/` and/or `data/` directories.
255 | - `./bin/tarsplit` - Local copy of my package from https://github.com/dmuth/tarsplit
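As a rough illustration of the kind of file `create-1-million-events.py` produces, here is a hypothetical shell equivalent — the function name and event fields are made up for this sketch, not taken from the actual script:

```shell
# Hypothetical sketch of a synthetic-event generator; the real tool is
# bin/create-1-million-events.py. Field names here are illustrative.
generate_events() {
    local count="$1" outfile="$2"
    : > "$outfile"    # truncate/create the output file
    for i in $(seq 1 "$count"); do
        # One timestamped key=value event per line, which Splunk can
        # ingest as-is via "add oneshot".
        printf '%s event_num=%d message="synthetic test event"\n' \
            "$(date -u '+%Y-%m-%d %H:%M:%S')" "$i" >> "$outfile"
    done
}

# Example: generate_events 1000000 ./1-million-events.txt
```

Calling `date` once per event is slow at the million-event scale, which is one reason a Python script is the better fit for the real tool.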
256 |
257 |
258 | ### Building a New Version of Splunk
259 |
260 | - Bump version number and build number in `bin/lib.sh`
261 | - Run `./bin/build.sh`, use `--force` if necessary
262 | - This can take several MINUTES, especially if no apps are cached locally
263 | - Run `SPLUNK_EVENTGEN=yes SPLUNK_ML=yes ./bin/devel.sh`
264 | - This will build and tag the container, and spawn an interactive shell
265 | - Run `/opt/splunk/bin/splunk version` inside the container to verify the version number
266 | - Go to https://localhost:8000/ and verify you can log into Splunk
267 | - Run the query `index=main earliest=-1d` and verify Eventgen events are coming in
268 | - Go to https://localhost:8000/en-US/app/Splunk_ML_Toolkit/contents and verify that the ML Toolkit has been installed.
269 | - Type `exit` in the shell to shut down the server
270 | - Run `./bin/push.sh` to deploy the image. This will take a while.
271 |
272 |
273 | ### Building Container Internals
274 |
275 | - Here's the layout of the `cache/` directory
276 | - `cache/` - Where tarballs for Splunk and its apps hang out. These are downloaded when `bin/download.sh` is run for the first time.
277 | - `cache/deploy/` - When creating a specific Docker image, files are copied here so the Dockerfile can ingest them. (Or rather hardlinked to the files in the parent directory.)
278 | - `cache/build/` - 0-byte files are written here when a specific container is built, and on future builds, the age of that file is checked against the Dockerfile. If the Dockerfile is newer, then the container is (re-)built. Otherwise, it is skipped. This shortens a run of `bin/devel.sh` where no containers need to be built from 12 seconds on my 2020 iMac to 0.2 seconds.
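That stamp-file timestamp check can be sketched like this — the function and stamp path are illustrative, not the actual `bin/build.sh` code:

```shell
# Rebuild only when the Dockerfile is newer than the 0-byte stamp file
# left behind by the last successful build of that container.
needs_build() {
    local dockerfile="$1" stamp="$2"
    # No stamp yet, or Dockerfile modified since last build -> rebuild.
    [ ! -f "$stamp" ] || [ "$dockerfile" -nt "$stamp" ]
}

# After a successful build, drop a fresh 0-byte stamp, e.g.:
#   touch cache/build/1-splunk-lab
```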
279 |
280 |
281 | ### A word on default/ and local/ directories
282 |
283 | I had to struggle with this for a while, so I'm documenting it here.
284 |
285 | When in devel mode, `/opt/splunk/etc/apps/splunk-lab/` is mounted to `./splunk-lab-app/` via `go.sh`
286 | and the entrypoint script inside of the container symlinks `local/` to `default/`.
287 | This way, any changes that are made to dashboards will be propagated outside of
288 | the container and can be checked in to Git.
289 |
290 | When in production mode (e.g. running `./go.sh` directly), no symlink is created;
291 | instead, whatever `$SPLUNK_APP` points to (default: `app/`) is mounted at `local/`, so that any
292 | changes made by the user will show up on their host, with Splunk Lab's `default/`
293 | directory being untouched.
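
The devel-mode symlink can be sketched like this (the `/tmp/sl-app-demo` path is hypothetical; in the real container the entrypoint operates on `/opt/splunk/etc/apps/splunk-lab/`):

```shell
# Sketch of the devel-mode symlink created by the entrypoint script.
# Splunk writes UI changes to local/; pointing local/ at default/ means
# those edits land in default/, which is what gets checked in to Git.
APP=/tmp/sl-app-demo
mkdir -p ${APP}/default

ln -sfn default ${APP}/local   # local/ -> default/

readlink ${APP}/local          # prints "default"
```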
294 |
295 |
296 | ## Additional Reading
297 |
298 | - Splunk Network Health Check
299 |
300 |
301 | ## Notes/Bugs
302 |
303 | - The Docker images are **dmuth1/splunk-lab** and **dmuth1/splunk-lab-ml**. The latter has all of the Machine Learning apps built into the image. Feel free to extend those for your own projects.
304 | - If I run `./bin/create-test-logfiles.sh 10000` and then start Splunk Lab on a Mac, all of the files will be indexed without any major issues, but then the CPU will spin, and not because of Splunk.
305 |     - The root cause is that the filesystem code behind Docker volume mappings in the macOS Docker implementation is VERY inefficient in terms of both CPU and memory usage, especially when there are 10,000 files involved. The overhead is just crazy. When reading events from a directory mounted through Docker, I see about 100 events/sec; when the directory is local to the container, I see about 1,000 events/sec, a 10x difference.
306 | - The HTTPS cert is self-signed with Splunk's own CA. If you're tired of seeing a Certificate Error every time you try connecting to Splunk, you can follow the instructions at https://stackoverflow.com/a/31900210/196073 to allow self-signed certificates for `localhost` in Google Chrome.
307 | - Please understand the implications before you do this.
308 |
309 |
310 | ## Credits
311 |
312 | - Splunk N' Box - Splunk N' Box is used to create entire Splunk clusters in Docker. It was the first actual use of Splunk I saw in Docker, and gave me the idea that hey, maybe I could run a stand-alone Splunk instance in Docker for ad-hoc data analysis!
313 | - Splunk, for having such a fantastic product, which is also a great example of Operational Excellence!
314 | - Eventgen is a super cool way of generating simulated real-world data, which can be used to build dashboards for testing and training purposes.
315 | - This text to ASCII art generator, for the logo I used in the script.
316 | - The logo was made over at https://www.freelogodesign.org/
317 | - Lars Wirzenius for a review of this README.
318 |
319 |
320 |
321 |
322 | ## Copyrights
323 |
324 | - Splunk is copyright by Splunk, Inc. Please stay within the confines of the 500 MB/day free license when using Splunk Lab, unless you brought your own license along.
325 | - The various apps are copyright by the creators of those apps.
326 |
327 |
328 | ## Contact
329 |
330 | My email is doug.muth@gmail.com. I am also @dmuth on Twitter
331 | and Facebook!
332 |
333 |
334 |
--------------------------------------------------------------------------------
/bin/attach.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Attach to a running instance of Splunk Lab
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | echo "# "
10 | echo "# Attaching to the Splunk Lab container..."
11 | echo "# "
12 |
13 | docker exec -it splunk-lab bash
14 |
15 | echo "# Done!"
16 |
17 |
--------------------------------------------------------------------------------
/bin/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Errors are fatal
4 | set -e
5 |
6 | CACHE="cache"
7 | DEPLOY="${CACHE}/deploy"
8 | BUILD="${CACHE}/build"
9 |
10 | # Load our variables
11 | . ./bin/lib.sh
12 |
13 | #
14 | # This is set to true if we build even a single container, and subsequent
15 | # containers will ignore the build file and instead force being built.
16 | #
17 | BUILDING=""
18 |
19 | # This will be set to --no-cache if we are ignoring Docker's cache
20 | NO_CACHE=""
21 |
22 | #
23 | # Change to the parent of this script
24 | #
25 | pushd $(dirname $0) > /dev/null
26 | cd ..
27 |
28 |
29 | function print_syntax() {
30 | echo "! "
31 | echo "! Syntax: $0 [ --force | --no-cache ]"
32 | echo "! "
33 | echo "! --force force rebuilding of cached containers"
34 | echo "! --no-cache Forces building and ignores the Docker cache, to ensure a clean rebuild."
35 | echo "! "
36 | exit 1
37 | } # print_syntax()
38 |
39 |
40 | if test "$1" == "-h" -o "$1" == "--help"
41 | then
42 | print_syntax
43 |
44 | elif test "$1" == "--force"
45 | then
46 | echo "# Removing build files..."
47 | rm -fv ${BUILD}/*
48 |
49 | elif test "$1" == "--no-cache"
50 | then
51 | echo "# Removing build files..."
52 | rm -fv ${BUILD}/*
53 | NO_CACHE="--no-cache"
54 |
55 | elif test "$1"
56 | then
57 | print_syntax
58 |
59 | fi
60 |
61 | #
62 | # Download and cache local copy of Splunk to speed up future builds.
63 | #
64 | mkdir -p ${CACHE} ${DEPLOY} ${BUILD}
65 |
66 | #
67 | # Remove this local/ symlink in case it happens to exist from
68 | # a previous run of devel.sh. Otherwise, it will make it into
69 | # the image as /opt/splunk/etc/apps/splunk-lab/local,
70 | # and mounting over it in a production run will result in an empty directory.
71 | #
72 | # This may cause unnecessary rebuilds of intermediate images, but it's less awful
73 | # than breaking prod. (And hopefully the builds will be cached.)
74 | #
75 | rm -fv splunk-lab-app/local
76 |
77 | #
78 | # Download our packages from the Splunk Lab S3 bucket
79 | #
80 | ./bin/download.sh
81 |
82 | echo "# "
83 | echo "# Building Docker containers..."
84 | echo "# "
85 |
86 |
87 | DOCKER="0-0-core"
88 | if test ${BUILD}/${DOCKER} -nt docker/${DOCKER}
89 | then
90 | echo "# File '${BUILD}/${DOCKER}' is newer than our Dockerfile, we don't need to build anything!"
91 | else
92 | BUILDING=1
93 | fi
94 |
95 | if test "${BUILDING}"
96 | then
97 | docker build ${NO_CACHE} . -f docker/${DOCKER} -t splunk-lab-core-0
98 | touch ${BUILD}/${DOCKER}
99 | fi
100 |
101 |
102 | DOCKER="0-1-splunk"
103 | if test ${BUILD}/${DOCKER} -nt docker/${DOCKER}
104 | then
105 | echo "# File '${BUILD}/${DOCKER}' is newer than our Dockerfile, we don't need to build anything!"
106 | else
107 | BUILDING=1
108 | fi
109 |
110 | if test "${BUILDING}"
111 | then
112 | for I in $(seq -w 10)
113 | do
114 | ln -f ${CACHE}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-${I}-of-10 ${DEPLOY}
115 | done
116 |
117 | docker build ${NO_CACHE} \
118 | --build-arg SPLUNK_HOME=${SPLUNK_HOME} \
119 | --build-arg SPLUNK_VERSION=${SPLUNK_VERSION} \
120 | --build-arg SPLUNK_BUILD=${SPLUNK_BUILD} \
121 | --build-arg DEPLOY_SPLUNK_FILENAME=${DEPLOY}/${SPLUNK_FILENAME} \
122 | --build-arg DEPLOY=${DEPLOY} \
123 | . -f docker/${DOCKER} -t splunk-lab-core-1
124 | rm -f ${DEPLOY}/*
125 | touch ${BUILD}/${DOCKER}
126 | fi
127 |
128 |
129 | DOCKER="0-2-apps"
130 | if test ${BUILD}/${DOCKER} -nt docker/${DOCKER}
131 | then
132 | echo "# File '${BUILD}/${DOCKER}' is newer than our Dockerfile, we don't need to build anything!"
133 | else
134 | BUILDING=1
135 | fi
136 |
137 | if test "${BUILDING}"
138 | then
139 | ln -f ${CACHE}/syndication-input-rssatomrdf_124.tgz ${DEPLOY}
140 | ln -f ${CACHE}/wordcloud-custom-visualization_111.tgz ${DEPLOY}
141 | ln -f ${CACHE}/slack-notification-alert_203.tgz ${DEPLOY}
142 | ln -f ${CACHE}/splunk-dashboard-examples_800.tgz ${DEPLOY}
143 | ln -f ${CACHE}/eventgen_720.tgz ${DEPLOY}
144 | ln -f ${CACHE}/rest-api-modular-input_198.tgz ${DEPLOY}
145 | docker build ${NO_CACHE} \
146 | --build-arg DEPLOY=${DEPLOY} \
147 | . -f docker/${DOCKER} -t splunk-lab-core
148 | rm -f ${DEPLOY}/*
149 | touch ${BUILD}/${DOCKER}
150 |
151 | fi
152 |
153 | DOCKER="1-splunk-lab"
154 | if test ${BUILD}/${DOCKER} -nt docker/${DOCKER}
155 | then
156 | echo "# File '${BUILD}/${DOCKER}' is newer than our Dockerfile, we don't need to build anything!"
157 | else
158 | BUILDING=1
159 | fi
160 |
161 | if test "${BUILDING}"
162 | then
163 | docker build ${NO_CACHE} . -f docker/${DOCKER} -t splunk-lab
164 | touch ${BUILD}/${DOCKER}
165 | fi
166 |
167 |
168 | DOCKER="1-splunk-lab-ml"
169 | if test ${BUILD}/${DOCKER} -nt docker/${DOCKER}
170 | then
171 | echo "# File '${BUILD}/${DOCKER}' is newer than our Dockerfile, we don't need to build anything!"
172 | else
173 | BUILDING=1
174 | fi
175 |
176 | NUM_PARTS=8
177 | if test "${BUILDING}"
178 | then
179 | for I in $(seq -w ${NUM_PARTS})
180 | do
181 | ln -f ${CACHE}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-${I}-of-${NUM_PARTS} ${DEPLOY}
182 | done
183 |
184 | ln -f ${CACHE}/splunk-machine-learning-toolkit_520.tgz ${DEPLOY}
185 | ln -f ${CACHE}/nlp-text-analytics_102.tgz ${DEPLOY}
186 | ln -f ${CACHE}/halo-custom-visualization_113.tgz ${DEPLOY}
187 | ln -f ${CACHE}/sankey-diagram-custom-visualization_130.tgz ${DEPLOY}
188 | docker build ${NO_CACHE} \
189 | --build-arg DEPLOY=${DEPLOY} \
190 | . -f docker/${DOCKER} -t splunk-lab-ml
191 | rm -f ${DEPLOY}/*
192 | touch ${BUILD}/${DOCKER}
193 |
194 | fi
195 |
196 | echo "# "
197 | echo "# Tagging Docker containers..."
198 | echo "# "
199 | docker tag splunk-lab dmuth1/splunk-lab
200 | docker tag splunk-lab dmuth1/splunk-lab:latest
201 | docker tag splunk-lab dmuth1/splunk-lab:${SPLUNK_VERSION_MAJOR}
202 | docker tag splunk-lab dmuth1/splunk-lab:${SPLUNK_VERSION_MINOR}
203 |
204 | docker tag splunk-lab-ml dmuth1/splunk-lab-ml:latest
205 | docker tag splunk-lab-ml dmuth1/splunk-lab-ml:${SPLUNK_VERSION_MAJOR}
206 | docker tag splunk-lab-ml dmuth1/splunk-lab-ml:${SPLUNK_VERSION_MINOR}
207 |
208 |
209 | echo "# Done!"
210 |
211 |
--------------------------------------------------------------------------------
/bin/clean.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Clean up after runnning an instance of Splunk Lab
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | CLEAN_DATA=""
10 | CLEAN_LOGS=""
11 |
12 | #
13 | # Change to the parent directory of this script
14 | #
15 | pushd $(dirname $0) > /dev/null
16 | cd ..
17 |
18 | #
19 | # Parse our args
20 | #
21 | if test "$1" == "data"
22 | then
23 | CLEAN_DATA=1
24 |
25 | elif test "$1" == "logs"
26 | then
27 | CLEAN_LOGS=1
28 |
29 | elif test "$1" == "all"
30 | then
31 | CLEAN_DATA=1
32 | CLEAN_LOGS=1
33 |
34 | else
35 | echo "! "
36 | echo "! Syntax: $0 ( data | logs | all )"
37 | echo "! "
38 | exit 1
39 | fi
40 |
41 |
42 | if test "$CLEAN_DATA"
43 | then
44 | echo "# "
45 | echo "# Removing data/ directory..."
46 | echo "# "
47 | if test ! "$CLEAN_LOGS"
48 | then
49 | echo "# (This means logs will be reingested on the next run...)"
50 | echo "# "
51 | fi
52 |
53 | rm -rf data/
54 |
55 | fi
56 |
57 |
58 | if test "$CLEAN_LOGS"
59 | then
60 | echo "# "
61 | echo "# Removing logs/ directory..."
62 | echo "# "
63 |
64 | rm -rf logs/
65 |
66 | echo "# "
67 | echo "# Creating empty logs/ directory..."
68 | echo "# "
69 | mkdir logs
70 | touch logs/empty.txt
71 |
72 | fi
73 |
74 | echo "# Done!"
75 |
76 |
--------------------------------------------------------------------------------
/bin/create-1-million-events.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | #
3 | # Create 1 million events with the current timestamp and drop them into the current directory.
4 | #
5 |
6 |
7 | from datetime import datetime
8 | from datetime import timezone
9 |
10 |
11 | num = 1000000
12 | filename = "1-million-events.txt"
13 | f = open(filename, "w")
14 |
15 | print(f"Writing {num} fake events to file '{filename}'...")
16 |
17 | for x in range(num):
18 |     now = datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3]
19 |
20 | line = (f"{now} Test event {x}/{num}\n")
21 | #print(line) # Debugging
22 | f.write(line)
23 |
24 | f.close()
25 |
26 | print("Done!")
27 |
28 |
29 |
--------------------------------------------------------------------------------
/bin/create-test-logfiles.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Create sample logfiles with a single event each, so we can
4 | # test out how Splunk handles many files at startup.
5 | #
6 |
7 |
8 | # Errors are fatal
9 | set -e
10 |
11 | if test ! "$1"
12 | then
13 | echo "! "
14 | echo "! Syntax: $0 num"
15 | echo "! "
16 | echo "! num - Number of logfiles to create"
17 | echo "! "
18 | exit 1
19 | fi
20 |
21 | NUM=$1
22 |
23 | #
24 | # Change to our logs directory
25 | #
26 | pushd $(dirname $0)/.. > /dev/null
27 | mkdir -p logs
28 | cd logs
29 |
30 | echo "# "
31 | echo "# Removing any previous test files..."
32 | echo "# "
33 | find . -name 'test-events-*' -delete
34 |
35 | echo "# "
36 | echo "# Creating ${NUM} new files..."
37 | echo "# "
38 | for I in $(seq -w $NUM)
39 | do
40 | FILE=test-events-${I}.txt
41 | echo $FILE
42 | echo "Test event ${I}/$NUM" > $FILE
43 | done
44 |
45 | echo "# Done!"
46 |
47 |
48 |
--------------------------------------------------------------------------------
/bin/devel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Errors are fatal
4 | set -e
5 |
6 | CACHE="cache"
7 | BUILD="${CACHE}/build"
8 |
9 | #
10 | # Change to the parent of this script
11 | #
12 | pushd $(dirname $0) > /dev/null
13 | cd ..
14 |
15 | echo "# "
16 | echo "# Starting Splunk Lab in Development Mode."
17 | echo "# "
18 |
19 | #
20 | # Skip our building if NO_BUILD is set.
21 | #
22 | if test "$NO_BUILD"
23 | then
24 | echo "# "
25 | echo "# Skipping build due to \$NO_BUILD specified..."
26 | echo "# "
27 |
28 | else
29 | echo "# "
30 | echo "# Building containers..."
31 | echo "# Set NO_BUILD=1 to skip building if you already have the container."
32 | echo "# "
33 | ./bin/build.sh $@
34 |
35 | echo "# "
36 | echo "# Tagging container..."
37 | echo "# "
38 | docker tag splunk-lab dmuth1/splunk-lab
39 | docker tag splunk-lab-ml dmuth1/splunk-lab-ml
40 |
41 | fi
42 |
43 |
44 | SPLUNK_DEVEL=1 REST_KEY=${REST_KEY} SPLUNK_PASSWORD=${SPLUNK_PASSWORD:-password1} ./go.sh
45 |
46 |
47 |
--------------------------------------------------------------------------------
/bin/download.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # This script downloads our Splunk packages from the Splunk Lab S3
4 | # bucket that I created. That bucket is set to "Requester Pays",
5 | # so that people building their own copies don't drive up my AWS bill. :-)
6 | #
7 |
8 |
9 | # Errors are fatal
10 | set -e
11 |
12 | # Change to our parent directory
13 | pushd $(dirname $0)/.. > /dev/null
14 |
15 | BUCKET="dmuth-splunk-lab"
16 | CACHE="cache"
17 |
18 | # Load our variables
19 | . ./bin/lib.sh
20 |
21 | mkdir -p ${CACHE}
22 | pushd ${CACHE} > /dev/null
23 |
24 | echo "# "
25 | echo "# Downloading Splunk..."
26 | echo "# "
27 | if test ! -f "${SPLUNK_FILENAME}"
28 | then
29 | wget -O ${SPLUNK_FILENAME}.tmp ${SPLUNK_URL}
30 | mv ${SPLUNK_FILENAME}.tmp ${SPLUNK_FILENAME}
31 |
32 | else
33 | echo "# Oh, ${SPLUNK_FILENAME} already exists, skipping!"
34 |
35 | fi
36 |
37 | NUM_PARTS=10
38 | if test ! -f "splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-${NUM_PARTS}-of-${NUM_PARTS}"
39 | then
40 | echo "# "
41 | echo "# Splitting up the Splunk tarball into ${NUM_PARTS} separate pieces..."
42 | echo "# "
43 | ../bin/tarsplit ${SPLUNK_FILENAME} ${NUM_PARTS}
44 |
45 | else
46 | echo "# Oh, splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-${NUM_PARTS}-of-${NUM_PARTS} already exists, skipping!"
47 |
48 | fi
49 |
50 | echo "# "
51 | echo "# Downloading packages from S3..."
52 | echo "# "
53 |
54 | FILES="halo-custom-visualization_113.tgz
55 | nlp-text-analytics_102.tgz
56 | python-for-scientific-computing-for-linux-64-bit_202.tgz
57 | rest-api-modular-input_198.tgz
58 | sankey-diagram-custom-visualization_130.tgz
59 | slack-notification-alert_203.tgz
60 | splunk-machine-learning-toolkit_520.tgz
61 | syndication-input-rssatomrdf_124.tgz
62 | wordcloud-custom-visualization_111.tgz
63 | splunk-dashboard-examples_800.tgz
64 | eventgen_720.tgz
65 | "
66 |
67 | for FILE in $FILES
68 | do
69 | if test -f $FILE
70 | then
71 | echo "# File '${FILE}' exists, skipping!"
72 | continue
73 | fi
74 |
75 | echo "# Downloading file '${FILE}'..."
76 |
77 | TMP=$(mktemp -t splunk-lab)
78 | aws s3api get-object \
79 | --bucket ${BUCKET} \
80 | --key ${FILE} \
81 | --request-payer requester \
82 | ${TMP}
83 | mv $TMP $FILE
84 |
85 | done
86 |
87 | NUM_PARTS=8
88 | if test ! -f "python-for-scientific-computing-for-linux-64-bit_202.tgz-part-${NUM_PARTS}-of-${NUM_PARTS}"
89 | then
90 | echo "# "
91 | echo "# Splitting up Python package into ${NUM_PARTS} separate pieces..."
92 | echo "# "
93 | ../bin/tarsplit "python-for-scientific-computing-for-linux-64-bit_202.tgz" ${NUM_PARTS}
94 | fi
95 |
96 |
97 | echo "# Done downloading packages!"
98 |
99 |
--------------------------------------------------------------------------------
/bin/kill.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Remove a running instance of Splunk Lab
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | echo "# "
10 | echo "# Killing Splunk Lab..."
11 | echo "# "
12 |
13 | docker kill splunk-lab
14 |
15 | echo "# Done!"
16 |
17 |
--------------------------------------------------------------------------------
/bin/lib.sh:
--------------------------------------------------------------------------------
1 | #
2 | # This file should be included to set important variables.
3 | #
4 |
5 | SPLUNK_PRODUCT="splunk"
6 | SPLUNK_HOME="/opt/splunk"
7 |
8 | #
9 | # Version info
10 | #
11 | SPLUNK_VERSION="9.2.0.1"
12 | SPLUNK_VERSION_MAJOR="9"
13 | SPLUNK_VERSION_MINOR=${SPLUNK_VERSION}
14 | SPLUNK_BUILD="d8ae995bf219" # 9.2.0.1
15 |
16 |
17 | #
18 | # Download info
19 | #
20 | SPLUNK_FILENAME="splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz"
21 | SPLUNK_URL="https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME}"
22 |
23 | #
24 | # Cache info
25 | #
26 | SPLUNK_CACHE_FILENAME="${CACHE}/${SPLUNK_FILENAME}"
27 |
28 |
--------------------------------------------------------------------------------
/bin/logs.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Grab logs from a running instance of Splunk Lab
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | echo "# "
10 | echo "# Grabbing logs from Splunk Lab..."
11 | echo "# "
12 | echo "# Press ctrl-C to exit..."
13 | echo "# "
14 |
15 | docker logs -f splunk-lab
16 |
17 | echo "# Done!"
18 |
19 |
--------------------------------------------------------------------------------
/bin/pull.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Errors are fatal
4 | set -e
5 |
6 | #
7 | # Change to the parent of this script
8 | #
9 | pushd $(dirname $0) > /dev/null
10 | cd ..
11 |
12 | echo "# "
13 | echo "# Pulling containers from Docker Hub..."
14 | echo "# "
15 | docker pull dmuth1/splunk-lab
16 | docker pull dmuth1/splunk-lab-ml
17 |
18 | echo "# "
19 | echo "# Tagging containers..."
20 | echo "# "
21 | docker tag dmuth1/splunk-lab splunk-lab
22 | docker tag dmuth1/splunk-lab-ml splunk-lab-ml
23 |
24 |
25 | echo "# Done!"
26 |
27 |
--------------------------------------------------------------------------------
/bin/push.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Errors are fatal
4 | set -e
5 |
6 | #
7 | # Change to the parent of this script
8 | #
9 | pushd $(dirname $0) > /dev/null
10 | cd ..
11 |
12 |
13 | # Load our variables
14 | . ./bin/lib.sh
15 |
16 |
17 | echo "# "
18 | echo "# Pushing containers to Docker Hub..."
19 | echo "# "
20 | docker push dmuth1/splunk-lab
21 | docker push dmuth1/splunk-lab:${SPLUNK_VERSION_MAJOR}
22 | docker push dmuth1/splunk-lab:${SPLUNK_VERSION_MINOR}
23 | docker push dmuth1/splunk-lab-ml
24 | docker push dmuth1/splunk-lab-ml:${SPLUNK_VERSION_MAJOR}
25 | docker push dmuth1/splunk-lab-ml:${SPLUNK_VERSION_MINOR}
26 |
27 |
28 | echo "# Done!"
29 |
30 |
--------------------------------------------------------------------------------
/bin/tarsplit:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | #
3 | # https://github.com/dmuth/tarsplit
4 | #
5 | # This script will take a tarball and split it into 2 or more parts.
6 | # Most importantly, these parts WILL BE ALONG FILE BOUNDARIES.
7 | # The reason for splitting along file boundaries is so that extraction
8 | # can be done with plain old tar.
9 | #
10 | # The advantage of this approach is that things like larger Docker images can
11 | # now be broken down into smaller layers, with each layer extracting a subset
12 | # of the original tarball's directory structure.
13 | #
14 | #
15 |
16 |
17 | import argparse
18 | import os
19 | import tarfile
20 |
21 | import humanize
22 | from tqdm import tqdm
23 |
24 |
25 | parser = argparse.ArgumentParser(description = "Split up a tarball into 2 or more chunks of roughly equal size.")
26 | parser.add_argument("file", type = str,
27 | help = "The tarball to split up")
28 | parser.add_argument("num", type = int,
29 | help = "How many chunks to split into?")
30 | parser.add_argument("--dry-run", action = "store_true",
31 | help = "Perform a dry run and tell what we WOULD do.")
32 |
33 | args = parser.parse_args()
34 | #print(args) # Debugging
35 |
36 | #
37 | # Check our arguments.
38 | #
39 | def check_args(args):
40 | if args.num < 2:
41 | raise ValueError("Number of chunks cannot be less than 2!")
42 |
43 |
44 | #
45 | # Calculate the size per chunk.
46 | # This is based on uncompressed values, the resulting tarballs may vary in size.
47 | #
48 | def get_chunk_size(t, num):
49 |
50 | total_file_size = 0
51 | for f in t.getmembers():
52 | total_file_size += f.size
53 |
54 |     retval = (total_file_size, int(total_file_size / num))
55 |
56 | return(retval)
57 |
58 |
59 | #
60 | # Get our filename for a specific part.
61 | #
62 | def open_chunkfile(file, part, num, dry_run = False):
63 |
64 | out = None
65 |
66 | num_len = len(str(num))
67 | part_formatted = str(part).zfill(num_len)
68 |
69 | filename = f"{os.path.basename(file)}-part-{part_formatted}-of-{num}"
70 | if not dry_run:
71 | out = tarfile.open(filename, "w:gz")
72 |
73 | return(filename, out)
74 |
75 |
76 | #
77 | # Our main entrypoint.
78 | #
79 | def main(args):
80 |
81 | check_args(args)
82 |
83 | t = tarfile.open(args.file, "r")
84 |
85 | print(f"Welcome to Tarsplit! Reading file {args.file}...")
86 | (total_file_size, chunk_size) = get_chunk_size(t, args.num)
87 |
88 |     print(f"Total uncompressed file size: {humanize.naturalsize(total_file_size, binary = True)}, "
89 |         + f"num chunks: {args.num}, chunk size: {humanize.naturalsize(chunk_size, binary = True)}")
90 |
91 | (filename, out) = open_chunkfile(args.file, 1, args.num, dry_run = args.dry_run)
92 |
93 | size = 0
94 | current_chunk = 1
95 | num_files_in_current_chunk = 0
96 |
97 | num_files = len(t.getmembers())
98 | pbar = tqdm(total = num_files)
99 |
100 | #
101 | # Loop through our files, and write them out to separate tarballs.
102 | #
103 | for f in t.getmembers():
104 |
105 | name = f.name
106 | size += f.size
107 |
108 |         if name.endswith("/"):
109 |             print(f"File {name} ends in a slash, skipping due to a bug in the tarfile module. (The directory will still be created by the files within it.)")
110 | continue
111 |
112 | f = t.extractfile(name)
113 | info = t.getmember(name)
114 | if not args.dry_run:
115 | out.addfile(info, f)
116 |
117 | num_files_in_current_chunk += 1
118 | pbar.update(1)
119 |
120 | if current_chunk < args.num:
121 | if size >= chunk_size:
122 |
123 | if not args.dry_run:
124 | out.close()
125 |                     print(f"Successfully wrote {humanize.naturalsize(size, binary = True)}"
126 |                         + f" in {num_files_in_current_chunk} files to {filename}")
127 |                 else:
128 |                     print(f"Would have written {humanize.naturalsize(size, binary = True)}"
129 |                         + f" in {num_files_in_current_chunk} files to {filename}")
130 |
131 | size = 0
132 | current_chunk += 1
133 | num_files_in_current_chunk = 0
134 |
135 | (filename, out) = open_chunkfile(args.file, current_chunk, args.num,
136 | dry_run = args.dry_run)
137 |
138 | pbar.set_description(f"Writing split tarfile: {current_chunk} of {args.num}")
139 |
140 | t.close()
141 | pbar.close()
142 |
143 | if not args.dry_run:
144 |         print(f"Successfully wrote {humanize.naturalsize(size, binary = True)} in"
145 |             + f" {num_files_in_current_chunk} files to {filename}")
146 | out.close()
147 | else:
148 |         print(f"Would have written {humanize.naturalsize(size, binary = True)} in"
149 |             + f" {num_files_in_current_chunk} files to {filename}")
150 |
151 | print(f"Tarsplit complete on {args.file}!")
152 |
153 | main(args)
154 |
155 |
156 |
--------------------------------------------------------------------------------
/bin/upload-file-to-s3.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # This script copies a file to the S3 bucket, which is then
4 | # downloaded from when building the Docker container.
5 | #
6 |
7 |
8 | # Errors are fatal
9 | set -e
10 |
11 | BUCKET="dmuth-splunk-lab"
12 |
13 | if test ! "$1"
14 | then
15 | echo "! "
16 | echo "! Syntax: $0 file_to_upload"
17 | echo "! "
18 | exit 1
19 | fi
20 |
21 | FILE=$1
22 |
23 | echo "# "
24 | echo "# Uploading file ${FILE} to S3 bucket '${BUCKET}'..."
25 | echo "# "
26 | aws s3 cp ${FILE} s3://${BUCKET}
27 |
28 | echo "# Done!"
29 |
30 |
31 |
--------------------------------------------------------------------------------
/docker/0-0-core:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # Build splunk-lab-core, which is used as a base for splunk-lab and splunk-lab-ml
4 | #
5 | # The reason for this architecture is so that changes to things like config files
6 | # can be done AFTER the apps in each image are installed, so I don't have to keep
7 | # reinstalling the apps when I'm just tweaking a config file.
8 | #
9 | # Based on https://github.com/splunk/docker-splunk/blob/master/enterprise/Dockerfile
10 | #
11 | # I slimmed this down, as I have no desire to run as a separate user, set up a Deployment
12 | # Server, generate PDFs, etc. All I want to do is run this single app.
13 | #
14 |
15 | #
16 | # Release names can be found at https://www.debian.org/releases/
17 | #
18 | # Slim saves me like 50 Megs, so I'll take it.
19 | #
20 | FROM debian:bookworm-slim
21 |
22 | ARG DEBIAN_FRONTEND=noninteractive
23 |
24 | ENV LANG en_US.utf8
25 |
26 | #
27 | # Change our sources to be HTTPS then install some things.
28 | #
29 | # I'm not thrilled that CAs apparently aren't shipped with Debian, so I gotta
30 | # download them in cleartext here. At least future downloads after this one
31 | # will be over HTTPS.
32 | #
33 | RUN apt-get update \
34 | && apt-get install -y ca-certificates \
35 | #
36 | # Change "http" to "https"
37 | #
38 | && sed -i -e "s/http:/https:/" /etc/apt/sources.list.d/debian.sources \
39 | && apt-get install -y --no-install-recommends apt-utils locales wget procps less \
40 | && rm -rf /var/lib/apt/lists/* \
41 | && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 \
42 | && apt-get update
43 |
44 |
45 |
--------------------------------------------------------------------------------
/docker/0-1-splunk:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # This is our builder container.
4 | # We extract our tarball(s) here, and then copy the extracted files
5 | # over into our actual container.
6 | #
7 | FROM splunk-lab-core-0 as builder
8 |
9 | #
10 | # These are passed in with --build-args
11 | #
12 | ARG SPLUNK_HOME
13 | ARG DEPLOY
14 | ARG SPLUNK_VERSION
15 | ARG SPLUNK_BUILD
16 |
17 |
18 | #
19 | # I know that at first glance, this looks kinda crazy.
20 | # The reason for all these files is because the Splunk tarball is HUGE, and breaking it
21 | # up into smaller pieces keeps individual Docker layers from getting too big.
22 | # I'm not thrilled with this approach either, but it's the best I can do for now.
23 | #
24 | # NOTE: Now that I have a builder container, this may no longer be necessary, and
25 | # I should revisit it during future maintenance.
26 | #
27 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-01-of-10 /tmp
28 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-02-of-10 /tmp
29 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-03-of-10 /tmp
30 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-04-of-10 /tmp
31 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-05-of-10 /tmp
32 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-06-of-10 /tmp
33 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-07-of-10 /tmp
34 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-08-of-10 /tmp
35 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-09-of-10 /tmp
36 | COPY ${DEPLOY}/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-10-of-10 /tmp
37 |
38 | #
39 | # Download official Splunk release and unzip in /opt/splunk
40 | #
41 | RUN mkdir -p ${SPLUNK_HOME}
42 |
43 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-01-of-10 \
44 | --strip 1 -C ${SPLUNK_HOME}
45 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-02-of-10 \
46 | --strip 1 -C ${SPLUNK_HOME}
47 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-03-of-10 \
48 | --strip 1 -C ${SPLUNK_HOME}
49 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-04-of-10 \
50 | --strip 1 -C ${SPLUNK_HOME}
51 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-05-of-10 \
52 | --strip 1 -C ${SPLUNK_HOME}
53 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-06-of-10 \
54 | --strip 1 -C ${SPLUNK_HOME}
55 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-07-of-10 \
56 | --strip 1 -C ${SPLUNK_HOME}
57 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-08-of-10 \
58 | --strip 1 -C ${SPLUNK_HOME}
59 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-09-of-10 \
60 | --strip 1 -C ${SPLUNK_HOME}
61 | RUN tar xzf /tmp/splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz-part-10-of-10 \
62 | --strip 1 -C ${SPLUNK_HOME}
63 |
64 |
65 | #
66 | # This is the actual container we're building.
67 | # It copies Splunk from the previous container to this one, avoiding having
68 | # giant tarballs hanging out in this image, which makes the resulting Docker images
69 | # several hundred MB smaller. :-)
70 | #
71 | FROM splunk-lab-core-0
72 |
73 | COPY --from=builder /opt/splunk /opt/splunk
74 |
75 |
76 | #
77 | # Link to our data directory and search app.
78 | #
79 | # The /data/ directory is used in a volume mount but /app/ is not.
80 | # The reason I still have it is that it will be helpful for
81 | # anyone who may be exploring the container.
82 | #
83 | # (Yes, that means *you*, Chris in Seattle)
84 | #
85 | RUN ln -s /opt/splunk/var/lib/splunk/ /data \
86 | && ln -s /opt/splunk/etc/apps/splunk-lab/local /app
87 |
88 |
89 |
90 |
--------------------------------------------------------------------------------
/docker/0-2-apps:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # This is our builder container.
4 | # We extract our tarball(s) here, and then copy the extracted files
5 | # over into our actual container.
6 | #
7 | FROM splunk-lab-core-1 as builder
8 |
9 |
10 | WORKDIR /tmp
11 |
12 | ARG DEPLOY
13 |
14 | COPY ${DEPLOY}/syndication-input-rssatomrdf_124.tgz /tmp/splunk-packages/
15 | COPY ${DEPLOY}/wordcloud-custom-visualization_111.tgz /tmp/splunk-packages/
16 | COPY ${DEPLOY}/slack-notification-alert_203.tgz /tmp/splunk-packages/
17 | COPY ${DEPLOY}/splunk-dashboard-examples_800.tgz /tmp/splunk-packages/
18 | COPY ${DEPLOY}/eventgen_720.tgz /tmp/splunk-packages/
19 | COPY ${DEPLOY}/rest-api-modular-input_198.tgz /tmp/splunk-packages/
20 |
21 |
22 | #
23 | # Install Syndication app
24 | # https://splunkbase.splunk.com/app/2646/
25 | #
26 | RUN tar xfvz /tmp/splunk-packages/syndication-input-rssatomrdf_124.tgz
27 |
28 | #
29 | # Install Rest API Modular Input
30 | # https://splunkbase.splunk.com/app/1546/#/details
31 | #
32 | RUN tar xfvz /tmp/splunk-packages/rest-api-modular-input_198.tgz
33 |
34 | #
35 | # Install Wordcloud app
36 | # https://splunkbase.splunk.com/app/3212/
37 | #
38 | RUN tar xfvz /tmp/splunk-packages/wordcloud-custom-visualization_111.tgz
39 |
40 | #
41 | # Install Slack Notification Alert
42 | # https://splunkbase.splunk.com/app/2878/
43 | #
44 | RUN tar xfvz /tmp/splunk-packages/slack-notification-alert_203.tgz
45 |
46 | #
47 | # Install Splunk Dashboard Examples
48 | # https://splunkbase.splunk.com/app/1603/
49 | #
50 | RUN tar xfvz /tmp/splunk-packages/splunk-dashboard-examples_800.tgz
51 |
52 | #
53 | # Install Eventgen
54 | # https://splunkbase.splunk.com/app/1924
55 | #
56 | RUN tar xfvz /tmp/splunk-packages/eventgen_720.tgz
57 |
58 | #
59 | # This is the actual container we're building.
60 | # It copies Splunk from the previous container to this one, avoiding having
61 | # giant tarballs hanging out in this image, which makes the resulting Docker images
62 | # several hundred MB smaller. :-)
63 | #
64 | FROM splunk-lab-core-1
65 |
66 | COPY --from=builder /tmp/syndication /opt/splunk/etc/apps/syndication
67 | COPY --from=builder /tmp/rest_ta /opt/splunk/etc/apps/rest_ta
68 | COPY --from=builder /tmp/wordcloud_app /opt/splunk/etc/apps/wordcloud_app
69 | COPY --from=builder /tmp/slack_alerts /opt/splunk/etc/apps/slack_alerts
70 | COPY --from=builder /tmp/simple_xml_examples /opt/splunk/etc/apps/simple_xml_examples
71 | COPY --from=builder /tmp/SA-Eventgen /opt/splunk/etc/apps/SA-Eventgen
72 |
73 |
74 |
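Each `COPY --from=builder` line above depends on knowing the top-level directory a Splunkbase tarball unpacks to (`syndication`, `rest_ta`, and so on). A small sketch for discovering that name before wiring up a new app — the tarball name here is made up:

```shell
#!/bin/sh
# Sketch: print the top-level directory of an app tarball, so the matching
# "COPY --from=builder /tmp/<dir> ..." line can be written correctly.
# The tarball name is illustrative.
set -e

# Build a stand-in app tarball for the demo.
mkdir -p sampleapp/default
echo "[launcher]" > sampleapp/default/app.conf
tar czf my-app.tgz sampleapp

# First path component of the first entry in the archive.
tar tzf my-app.tgz | head -n 1 | cut -d / -f 1
```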
--------------------------------------------------------------------------------
/docker/1-splunk-lab:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # Based on https://github.com/splunk/docker-splunk/blob/master/enterprise/Dockerfile
4 | #
5 | # I slimmed this down, as I have no desire to run as a separate user, set up a Deployment
6 | # Server, generate PDFs, etc. All I want to do is run this single app.
7 | #
8 | FROM splunk-lab-core
9 |
10 |
11 | #
12 | # Copy in our READMEs
13 | #
14 | COPY vendor/README.md /README-vendor.md
15 | COPY README.md /README.md
16 |
17 |
18 | #
19 | # Copy in some Splunk configuration
20 | #
21 | COPY splunk-config/splunk-launch.conf /opt/splunk/etc/
22 | COPY splunk-config/* /opt/splunk/etc/system/local/
23 |
24 |
25 | #
26 | # Install the Splunk Lab app and set it to the default
27 | #
28 | WORKDIR /tmp
29 | COPY splunk-lab-app /opt/splunk/etc/apps/splunk-lab
30 | RUN mkdir -p /opt/splunk/etc/users/admin/user-prefs/local \
31 | && mv /opt/splunk/etc/apps/splunk-lab/user-prefs.conf /opt/splunk/etc/users/admin/user-prefs/local/
32 |
33 |
34 | #
35 | # Expose Splunk web
36 | #
37 | EXPOSE 8000/tcp
38 |
39 | COPY entrypoint.sh /
40 |
41 | ENTRYPOINT ["/entrypoint.sh"]
42 |
43 |
44 |
45 |
--------------------------------------------------------------------------------
/docker/1-splunk-lab-ml:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # This is based on our existing Splunk Lab
4 | #
5 | FROM splunk-lab-core as builder
6 |
7 | ARG DEPLOY
8 |
9 | WORKDIR /tmp
10 |
11 | COPY ${DEPLOY}/splunk-machine-learning-toolkit_520.tgz /tmp/splunk-packages/
12 | COPY ${DEPLOY}/nlp-text-analytics_102.tgz /tmp/splunk-packages/
13 | COPY ${DEPLOY}/halo-custom-visualization_113.tgz /tmp/splunk-packages/
14 | COPY ${DEPLOY}/sankey-diagram-custom-visualization_130.tgz /tmp/splunk-packages/
15 |
16 | #
17 | # I know that at first glance, this looks kinda crazy.
18 | # The reason for all these files is because the Python tarball is HUGE, and breaking it
19 | # up into smaller pieces keeps individual Docker layers from getting too big.
20 | # I'm not thrilled with this approach either, but it's the best I can do for now.
21 | #
22 | #
23 | # NOTE: Now that I have a builder container, this may no longer be necessary, and
24 | # I should revisit it in a future maintenance pass.
25 | #
26 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-1-of-8 /tmp/
27 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-2-of-8 /tmp/
28 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-3-of-8 /tmp/
29 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-4-of-8 /tmp/
30 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-5-of-8 /tmp/
31 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-6-of-8 /tmp/
32 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-7-of-8 /tmp/
33 | COPY ${DEPLOY}/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-8-of-8 /tmp/
34 |
35 | #
36 | # Install Python for Scientific computing and Splunk ML Toolkit
37 | #
38 | RUN mkdir -p /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
39 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-1-of-8 \
40 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
41 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-2-of-8 \
42 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
43 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-3-of-8 \
44 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
45 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-4-of-8 \
46 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
47 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-5-of-8 \
48 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
49 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-6-of-8 \
50 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
51 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-7-of-8 \
52 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
53 | RUN tar xfvz /tmp/python-for-scientific-computing-for-linux-64-bit_202.tgz-part-8-of-8 \
54 | --strip 1 -C /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
55 |
56 |
57 | #
58 | # Machine Learning toolkit
59 | #
60 | RUN tar xfvz /tmp/splunk-packages/splunk-machine-learning-toolkit_520.tgz
61 |
62 | #
63 | # NLP text analytics
64 | #
65 | RUN tar xfvz /tmp/splunk-packages/nlp-text-analytics_102.tgz
66 |
67 | #
68 | # Halo custom visualization
69 | #
70 | RUN tar xfvz /tmp/splunk-packages/halo-custom-visualization_113.tgz
71 |
72 | #
73 | # Sankey custom visualization
74 | #
75 | RUN tar xfvz /tmp/splunk-packages/sankey-diagram-custom-visualization_130.tgz
76 |
77 |
78 | #
79 | # Now that apps are installed, we can install the config files, with a lot less
80 | # stuff being downloaded/images being re-built.
81 | #
82 | FROM splunk-lab-core
83 |
84 | WORKDIR /tmp
85 |
86 | COPY --from=builder /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64 /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64
87 | COPY --from=builder /tmp/Splunk_ML_Toolkit /opt/splunk/etc/apps/Splunk_ML_Toolkit
88 | COPY --from=builder /tmp/nlp-text-analytics /opt/splunk/etc/apps/nlp-text-analytics
89 | COPY --from=builder /tmp/viz_halo /opt/splunk/etc/apps/viz_halo
90 | COPY --from=builder /tmp/sankey_diagram_app /opt/splunk/etc/apps/sankey_diagram_app
91 |
92 |
93 | #
94 | # Copy in our READMEs
95 | #
96 | COPY vendor/README.md /README-vendor.md
97 | COPY README.md /README.md
98 |
99 |
100 | #
101 | # Copy in some Splunk configuration
102 | #
103 | COPY splunk-config/splunk-launch.conf /opt/splunk/etc/
104 | COPY splunk-config/* /opt/splunk/etc/system/local/
105 |
106 |
107 | #
108 | # Install the Splunk Lab app and set it to the default
109 | #
110 | COPY splunk-lab-app /opt/splunk/etc/apps/splunk-lab
111 | RUN mkdir -p /opt/splunk/etc/users/admin/user-prefs/local \
112 | && mv /opt/splunk/etc/apps/splunk-lab/user-prefs.conf /opt/splunk/etc/users/admin/user-prefs/local/
113 |
114 |
115 | #
116 | # Expose Splunk web
117 | #
118 | EXPOSE 8000/tcp
119 |
120 | COPY entrypoint.sh /
121 |
122 | ENTRYPOINT ["/entrypoint.sh"]
123 |
124 |
125 |
--------------------------------------------------------------------------------
/entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Script to run from Splunk
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | SPLUNK_PASSWORD="${SPLUNK_PASSWORD:=password}"
10 |
11 |
12 | #
13 | # Require the user to accept the license to continue
14 | #
15 | if test "$SPLUNK_START_ARGS" != "--accept-license"
16 | then
17 | echo "! "
18 | echo "! You need to accept the Splunk License in order to continue."
19 | echo "! Please restart this container with SPLUNK_START_ARGS set to \"--accept-license\" "
20 | echo "! as follows: "
21 | echo "! "
22 | echo "! SPLUNK_START_ARGS=--accept-license"
23 | echo "! "
24 | exit 1
25 | fi
26 |
27 | #
28 | # Check for bad passwords.
29 | #
30 | if test "$SPLUNK_PASSWORD" == "password"
31 | then
32 | echo "! "
33 | echo "! "
34 | echo "! Cowardly refusing to set the password to 'password'. Please set a different password."
35 | echo "! "
36 | echo "! If you need help picking a secure password, there's an app for that:"
37 | echo "! "
38 | echo "! https://diceware.dmuth.org/"
39 | echo "! "
40 | echo "! "
41 | exit 1
42 |
43 | elif test "$SPLUNK_PASSWORD" == "12345"
44 | then
45 | echo "! "
46 | echo "! "
47 | echo "! This is not Planet Spaceball. Please don't use 12345 as a password."
48 | echo "! "
49 | echo "! "
50 | exit 1
51 |
52 | fi
53 |
54 |
55 | PASSWORD_LEN=${#SPLUNK_PASSWORD}
56 | if test $PASSWORD_LEN -lt 8
57 | then
58 | echo "! "
59 | echo "! "
60 | echo "! Admin password needs to be at least 8 characters!"
61 | echo "! "
62 | echo "! Password specified: ${SPLUNK_PASSWORD}"
63 | echo "! "
64 | echo "! "
65 | exit 1
66 | fi
67 |
68 | #
69 | # Set our $SPLUNK_EVENTGEN_DISABLED value to 1 or 0.
70 | #
71 | if test "${SPLUNK_EVENTGEN}"
72 | then
73 | SPLUNK_EVENTGEN_DISABLED=0
74 | else
75 | SPLUNK_EVENTGEN_DISABLED=1
76 | fi
77 |
78 | #
79 | # Set our default password
80 | #
81 | pushd /opt/splunk/etc/system/local/ >/dev/null
82 |
83 |
84 | cat user-seed.conf.in | sed -e "s/%password%/${SPLUNK_PASSWORD}/" > user-seed.conf
85 | cat inputs.conf.in \
86 | | sed -e "s/%DATE%/$(date +%Y%m%d-%H%M%S)/" \
87 | | sed -e "s/%EVENTGEN%/${SPLUNK_EVENTGEN_DISABLED}/" \
88 | > inputs.conf
89 |
90 | #
91 | # If we have an SSL cert and key, let's add those into the web.conf file
92 | #
93 | SSL_CERT=""
94 | SSL_KEY=""
95 | if test -r "/ssl.cert"
96 | then
97 | SSL_KEY="privKeyPath = /ssl.key"
98 | SSL_CERT="serverCert = /ssl.cert"
99 | fi
100 |
101 | cat web.conf.in \
102 | | sed -e "s|%ssl_key%|${SSL_KEY}|" \
103 | | sed -e "s|%ssl_cert%|${SSL_CERT}|" \
104 | > web.conf
105 |
106 | #
107 | # If we have a hosts file, append it to our /etc/hosts file.
108 | #
109 | if test -f /etc/hosts.extra
110 | then
111 | echo "We found /etc/hosts.extra, concatenating it into /etc/hosts..."
112 | cat /etc/hosts.extra >> /etc/hosts
113 | fi
114 |
115 |
116 | #
117 | # If a REST API Modular Input key was specified, add the key into the REST sources
118 | # and append them to inputs.conf.
119 | #
120 | if test "$REST_KEY"
121 | then
122 | SRC="activation_key = Visit https://www.baboonbones.com/#activation"
123 | REPLACE="activation_key = ${REST_KEY}"
124 |
125 | cat inputs.conf.in.rest | sed -e "s|${SRC}|${REPLACE}|" >> /opt/splunk/etc/system/local/inputs.conf
126 |
127 | fi
128 |
129 | #
130 | # Are we using the RSS Syndication module?
131 | #
132 | if test "$RSS"
133 | then
134 | cat inputs.conf.in.syndication >> /opt/splunk/etc/system/local/inputs.conf
135 | fi
136 |
137 | popd > /dev/null
138 |
139 |
140 | #
141 | # If we're running in devel mode, link local to default so that any
142 | # changes we make to the app in Splunk go straight into default and
143 | # I don't have to move them by hand.
144 | #
145 | if test "$SPLUNK_DEVEL"
146 | then
147 | pushd /opt/splunk/etc/apps/splunk-lab >/dev/null
148 | ln -sfv default local
149 | popd > /dev/null
150 | fi
151 |
152 |
153 | #
154 | # Start Splunk
155 | #
156 | /opt/splunk/bin/splunk start --accept-license
157 |
158 |
159 | echo
160 | echo " ____ _ _ _ _ "
161 | echo " / ___| _ __ | | _ _ _ __ | | __ | | __ _ | |__ "
162 | echo " \___ \ | '_ \ | | | | | | | '_ \ | |/ / | | / _\` | | '_ \ "
163 | echo " ___) | | |_) | | | | |_| | | | | | | < | |___ | (_| | | |_) |"
164 | echo " |____/ | .__/ |_| \__,_| |_| |_| |_|\_\ |_____| \__,_| |_.__/ "
165 | echo " |_| "
166 | echo
167 |
168 |
169 | echo "# "
170 | echo "# Welcome to Splunk Lab!"
171 | echo "# "
172 | echo "# "
173 | echo "# Here are some ways in which to run this container: "
174 | echo "# "
175 | echo "# Persist data between runs:"
176 | echo "# docker run -p 8000:8000 -v \$(pwd)/data:/data dmuth1/splunk-lab "
177 | echo "# "
178 | echo "# Persist data, mount current directory as /mnt, and spawn an interactive shell: "
179 | echo "# docker run -p 8000:8000 -v \$(pwd)/data:/data -v \$(pwd):/mnt -it dmuth1/splunk-lab bash "
180 | echo "# "
181 |
182 |
183 | if test "$1"
184 | then
185 | echo "# "
186 | echo "# Arguments detected! "
187 | echo "# "
188 | echo "# Executing: $@"
189 | echo "# "
190 |
191 | exec "$@"
192 |
193 | fi
194 |
195 | #
196 | # Loop forever so that the container keeps running
197 | #
198 | # I used to tail splunk's stderr file, but it turns out that trying to use SmartStore
199 | # keeps the file from being created. Yikes!
200 | #
201 | echo "Now going into an endless loop so that the container keeps running..."
202 | while true
203 | do
204 | sleep 99999
205 | done
206 |
207 |
208 |
209 |
210 |
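entrypoint.sh renders its real config files from the `.in` templates by piping them through `sed`, swapping `%token%` placeholders for runtime values. The pattern in isolation, with a made-up template standing in for `user-seed.conf.in` and `web.conf.in`:

```shell
#!/bin/sh
# Sketch of the %token% templating used in entrypoint.sh: render a .conf
# from a .in template by substituting placeholders. The template below is
# a stand-in, not an actual file from this repo.
set -e

cat > example.conf.in <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = %password%
EOF

SPLUNK_PASSWORD="correct-horse-battery-staple"

# Same approach as the entrypoint: sed replaces the token and the output
# becomes the real config file. Using "|" as the sed delimiter (as the
# web.conf lines above do) avoids clashes with "/" in substituted values.
sed -e "s|%password%|${SPLUNK_PASSWORD}|" example.conf.in > example.conf

cat example.conf
```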
--------------------------------------------------------------------------------
/go.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Wrapper script to set up Splunk Lab
4 | #
5 | # To test this script out, set up a webserver:
6 | #
7 | # python -m SimpleHTTPServer 8001
8 | #
9 | # Then run the script:
10 | #
11 | # bash <(curl -s localhost:8001/go.sh)
12 | #
13 |
14 |
15 | # Errors are fatal
16 | set -e
17 |
18 |
19 | #
20 | # Set default values for our vars
21 | #
22 | SPLUNK_PASSWORD=${SPLUNK_PASSWORD:-password1}
23 | SPLUNK_DATA=${SPLUNK_DATA:-data}
24 | SPLUNK_LOGS=${SPLUNK_LOGS:-logs}
25 | SPLUNK_PORT=${SPLUNK_PORT:-8000}
26 | SPLUNK_APP="app"
27 | SPLUNK_ML=${SPLUNK_ML:--1}
28 | SPLUNK_DEVEL=${SPLUNK_DEVEL:-}
29 | SPLUNK_EVENTGEN=${SPLUNK_EVENTGEN:-}
30 | ETC_HOSTS=${ETC_HOSTS:-no}
31 | REST_KEY=${REST_KEY:-}
32 | RSS=${RSS:-}
33 | DOCKER_NAME=${DOCKER_NAME:-splunk-lab}
34 | DOCKER_RM=${DOCKER_RM:-1}
35 | DOCKER_CMD=${DOCKER_CMD:-}
36 | PRINT_DOCKER_CMD=${PRINT_DOCKER_CMD:-}
37 | SSL_CERT=${SSL_CERT:-}
38 | SSL_KEY=${SSL_KEY:-}
39 |
40 |
41 | #
42 | # Download all of our helper scripts.
43 | #
44 | function download_helper_scripts() {
45 |
46 | BASE_URL="https://raw.githubusercontent.com/dmuth/splunk-lab/master"
47 | #BASE_URL="http://localhost:8001" # Debugging
48 |
49 | if test ! $(type -P curl)
50 | then
51 | echo "! "
52 | echo "! Curl needs to be installed on your system so that I can fetch helper scripts."
53 | echo "! "
54 | exit 1
55 | fi
56 |
57 | #
58 | # Create our bin directory and copy down some helper scripts
59 | #
60 | mkdir -p bin
61 | FILES=(
62 | attach.sh
63 | clean.sh
64 | create-1-million-events.py
65 | create-test-logfiles.sh
66 | kill.sh
67 | )
68 |
69 | for FILE in ${FILES[@]}
70 | do
71 | URL=${BASE_URL}/bin/${FILE}
72 | DEST=bin/${FILE}
73 |
74 | if test -f ${DEST}
75 | then
76 | continue
77 | fi
78 |
79 | echo -n "# Downloading ${URL} to ${DEST}..."
80 | curl -s --fail ${URL} > ${DEST} || true
81 | if test ! -s ${DEST}
82 | then
83 | echo
84 | echo "! Unable to find ${URL}"
85 | rm -f ${DEST}
86 | exit 1
87 | fi
88 |
89 | chmod 755 ${DEST}
90 | echo "...done!"
91 |
92 | done
93 |
94 | } # End of download_helper_scripts()
95 |
96 |
97 | #
98 | # Go through all of our argument checking.
99 | # And where I say arguments, I really mean environment variables,
100 | # because there are so many different options!
101 | #
102 | function check_args() {
103 |
104 | if test "$SPLUNK_START_ARGS" != "--accept-license"
105 | then
106 | echo "! "
107 | echo "! You need to accept the Splunk License in order to continue."
108 | echo "! Please restart this container with SPLUNK_START_ARGS set to \"--accept-license\" "
109 | echo "! as follows: "
110 | echo "! "
111 | echo "! SPLUNK_START_ARGS=--accept-license"
112 | echo "! "
113 | exit 1
114 | fi
115 |
116 |
117 | #
118 | # Yes, I am aware that the password checking logic is a duplicate of what's in the
119 | # container's entry point script. But if someone is running Splunk Lab through this
120 | # script, I want bad passwords to cause failure as soon as possible, because it's
121 | # easier to troubleshoot here than through Docker logs.
122 | #
123 | if test "$SPLUNK_PASSWORD" == "password"
124 | then
125 | echo "! "
126 | echo "! "
127 | echo "! Cowardly refusing to set the password to 'password'. Please set a different password."
128 | echo "! "
129 | echo "! If you need help picking a secure password, there's an app for that:"
130 | echo "! "
131 | echo "! https://diceware.dmuth.org/"
132 | echo "! "
133 | echo "! "
134 | exit 1
135 | fi
136 |
137 | PASSWORD_LEN=${#SPLUNK_PASSWORD}
138 | if test $PASSWORD_LEN -lt 8
139 | then
140 | echo "! "
141 | echo "! "
142 | echo "! Admin password needs to be at least 8 characters!"
143 | echo "! "
144 | echo "! Password specified: ${SPLUNK_PASSWORD}"
145 | echo "! "
146 | echo "! "
147 | exit 1
148 | fi
149 |
150 |
151 | #
152 | # Massage -1 into an empty string. This is for the case where we're
153 | # called from devel.sh.
154 | #
155 | if test "$SPLUNK_ML" == "-1"
156 | then
157 | SPLUNK_ML=""
158 | fi
159 |
160 |
161 | if ! test $(which docker)
162 | then
163 | echo "! "
164 | echo "! Docker not found in the system path!"
165 | echo "! "
166 | echo "! Please double-check that Docker is installed on your system, otherwise you "
167 | echo "! can go to https://www.docker.com/ to download Docker. "
168 | echo "! "
169 | exit 1
170 | fi
171 |
172 |
173 | #
174 | # Sanity check to make sure our log directory exists
175 | #
176 | if test ! -d "${SPLUNK_LOGS}"
177 | then
178 | echo "! "
179 | echo "! ERROR: Log directory '${SPLUNK_LOGS}' does not exist!"
180 | echo "! "
181 | echo "! Please set the environment variable \$SPLUNK_LOGS to the "
182 | echo "! directory you wish to ingest and re-run this script."
183 | echo "! "
184 | exit 1
185 | fi
186 |
187 |
188 | #
189 | # Sanity check
190 | #
191 | if test "$ETC_HOSTS" != "no"
192 | then
193 | if test ! -f ${ETC_HOSTS}
194 | then
195 | 		echo "! Unable to read file '${ETC_HOSTS}' specified in \$ETC_HOSTS!"
196 | exit 1
197 | fi
198 | fi
199 |
200 |
201 | #
202 | # Sanity check to make sure that both SSL_CERT *and* SSL_KEY are specified.
203 | #
204 | if test "${SSL_CERT}"
205 | then
206 | if test ! "${SSL_KEY}"
207 | then
208 | echo "! "
209 | echo "! \$SSL_CERT is specified but not \$SSL_KEY!"
210 | echo "! "
211 | exit 1
212 | fi
213 |
214 | elif test "${SSL_KEY}"
215 | then
216 | if test ! "${SSL_CERT}"
217 | then
218 | echo "! "
219 | echo "! \$SSL_KEY is specified but not \$SSL_CERT!"
220 | echo "! "
221 | exit 1
222 | fi
223 |
224 | fi
225 |
226 | #
227 | # Sanity check to make sure that SSL cert and key are both readable
228 | #
229 | if test "${SSL_CERT}"
230 | then
231 | if test ! -r "${SSL_CERT}"
232 | then
233 | echo "! "
234 | echo "! SSL Cert File ${SSL_CERT} does not exist or is not readable!"
235 | echo "! "
236 | exit 1
237 | fi
238 |
239 | if test ! -r "${SSL_KEY}"
240 | then
241 | echo "! "
242 | 		echo "! SSL Key File ${SSL_KEY} does not exist or is not readable!"
243 | echo "! "
244 | exit 1
245 | fi
246 |
247 | fi
248 |
249 | } # End of check_args()
250 |
251 |
252 | #
253 | # Create our Docker command from all of the arguments.
254 | #
255 | function create_docker_command() {
256 |
257 | #
258 | # Start forming our command
259 | #
260 | CMD="docker run \
261 | -p ${SPLUNK_PORT}:8000 \
262 | -e SPLUNK_PASSWORD=${SPLUNK_PASSWORD} "
263 |
264 | #
265 | # If SPLUNK_DATA is no, we're not exporting it.
266 | # Useful for re-importing everything every time.
267 | #
268 | if test "${SPLUNK_DATA}" != "no"
269 | then
270 | CMD="$CMD -v $(pwd)/${SPLUNK_DATA}:/data "
271 | fi
272 |
273 | #
274 | # If the logs value doesn't start with a leading slash, prefix it with the full path
275 | #
276 | if test ${SPLUNK_LOGS:0:1} != "/"
277 | then
278 | SPLUNK_LOGS="$(pwd)/${SPLUNK_LOGS}"
279 | fi
280 |
281 | CMD="${CMD} -v ${SPLUNK_LOGS}:/logs"
282 |
283 | if test "${REST_KEY}"
284 | then
285 | CMD="${CMD} -e REST_KEY=${REST_KEY}"
286 | fi
287 |
288 | if test "${RSS}"
289 | then
290 | CMD="${CMD} -e RSS=${RSS}"
291 | fi
292 |
293 | if test "${ETC_HOSTS}" != "no"
294 | then
295 | CMD="$CMD -v $(pwd)/${ETC_HOSTS}:/etc/hosts.extra "
296 | fi
297 |
298 | if test "${SPLUNK_EVENTGEN}"
299 | then
300 | CMD="${CMD} -e SPLUNK_EVENTGEN=${SPLUNK_EVENTGEN}"
301 | fi
302 |
303 | #
304 | # If SSL files don't start with a leading slash, prefix with the full path
305 | #
306 | if test "${SSL_CERT}"
307 | then
308 |
309 | if test ${SSL_CERT:0:1} != "/"
310 | then
311 | SSL_CERT="$(pwd)/${SSL_CERT}"
312 | fi
313 |
314 | if test ${SSL_KEY:0:1} != "/"
315 | then
316 | SSL_KEY="$(pwd)/${SSL_KEY}"
317 | fi
318 |
319 | CMD="${CMD} -v ${SSL_CERT}:/ssl.cert -v ${SSL_KEY}:/ssl.key"
320 |
321 | fi
322 |
323 |
324 | #
325 | # Again, doing the same unusual stuff that we are with DOCKER_RM,
326 | # since the default is to have a name.
327 | #
328 | if test "${DOCKER_NAME}" == "no"
329 | then
330 | DOCKER_NAME=""
331 | fi
332 |
333 | if test "${DOCKER_NAME}"
334 | then
335 | CMD="${CMD} --name ${DOCKER_NAME}"
336 | fi
337 |
338 | #
339 | # Only disable --rm if DOCKER_RM is set to "no".
340 | # We want --rm action by default, since we also have a default name
341 | # and don't want name conflicts.
342 | #
343 | if test "$DOCKER_RM" == "no"
344 | then
345 | DOCKER_RM=""
346 | fi
347 |
348 | if test "${DOCKER_RM}"
349 | then
350 | CMD="${CMD} --rm"
351 | fi
352 |
353 | if test "$SPLUNK_START_ARGS" -a "$SPLUNK_START_ARGS" != 0
354 | then
355 | CMD="${CMD} -e SPLUNK_START_ARGS=${SPLUNK_START_ARGS}"
356 | fi
357 |
358 | #
359 | # Only run in the foreground if devel mode is set.
360 | # Otherwise, giving users the option to run in foreground will only
361 | # confuse those that are new to Docker.
362 | #
363 | if test ! "$SPLUNK_DEVEL"
364 | then
365 | CMD="${CMD} -d "
366 | CMD="${CMD} -v $(pwd)/${SPLUNK_APP}:/opt/splunk/etc/apps/splunk-lab/local "
367 |
368 | else
369 | CMD="${CMD} -it"
370 | #
371 | # Utility mount :-)
372 | #
373 | CMD="${CMD} -v $(pwd):/mnt "
374 | #
375 | # In devel mode, we'll mount the splunk-lab/ directory to the app directory
376 | # here, and the entrypoint.sh script will create the local/ symlink
377 | # (with build.sh removing said symlink before building any images)
378 | #
379 | CMD="${CMD} -v $(pwd)/splunk-lab-app:/opt/splunk/etc/apps/splunk-lab "
380 | CMD="${CMD} -e SPLUNK_DEVEL=${SPLUNK_DEVEL} "
381 |
382 | fi
383 |
384 |
385 | if test "$DOCKER_CMD"
386 | then
387 | CMD="${CMD} ${DOCKER_CMD} "
388 | fi
389 |
390 |
391 | IMAGE="dmuth1/splunk-lab"
392 | #IMAGE="splunk-lab" # Debugging/testing
393 | if test "$SPLUNK_ML"
394 | then
395 | IMAGE="dmuth1/splunk-lab-ml"
396 | #IMAGE="splunk-lab-ml" # Debugging/testing
397 | fi
398 |
399 | CMD="${CMD} ${IMAGE}"
400 |
401 | if test "$SPLUNK_DEVEL"
402 | then
403 | CMD="${CMD} bash"
404 | fi
405 |
406 | #echo "CMD: $CMD" # Debugging
407 |
408 | } # End of create_docker_command()
409 |
410 |
411 | download_helper_scripts
412 | check_args
413 | create_docker_command
414 |
415 |
416 | #
417 | # If $PRINT_DOCKER_CMD is set, print out the Docker command that would be run then exit.
418 | #
419 | if test "${PRINT_DOCKER_CMD}"
420 | then
421 | echo "$CMD"
422 | exit
423 | fi
424 |
425 | echo
426 | echo " ____ _ _ _ _ "
427 | echo " / ___| _ __ | | _ _ _ __ | | __ | | __ _ | |__ "
428 | echo " \___ \ | '_ \ | | | | | | | '_ \ | |/ / | | / _\` | | '_ \ "
429 | echo " ___) | | |_) | | | | |_| | | | | | | < | |___ | (_| | | |_) |"
430 | echo " |____/ | .__/ |_| \__,_| |_| |_| |_|\_\ |_____| \__,_| |_.__/ "
431 | echo " |_| "
432 | echo
433 |
434 | echo "# "
435 | if test ! "${SPLUNK_DEVEL}"
436 | then
437 | echo "# About to run Splunk Lab!"
438 | else
439 | echo "# About to run Splunk Lab IN DEVELOPMENT MODE!"
440 | fi
441 | echo "# "
442 | echo "# Before we do, please take a few seconds to ensure that your options are correct:"
443 | echo "# "
444 | echo "# URL: https://localhost:${SPLUNK_PORT} (Change with \$SPLUNK_PORT)"
445 | echo "# Login/password: admin/${SPLUNK_PASSWORD} (Change with \$SPLUNK_PASSWORD)"
446 | echo "# "
447 | echo "# Logs will be read from: ${SPLUNK_LOGS} (Change with \$SPLUNK_LOGS)"
448 | echo "# App dashboards will be stored in: ${SPLUNK_APP} (Change with \$SPLUNK_APP)"
449 | if test "$REST_KEY"
450 | then
451 | echo "# REST API Modular Input key: ${REST_KEY}"
452 | else
453 | echo "# REST API Modular Input key: (Get yours at https://www.baboonbones.com/#activation and set with \$REST_KEY)"
454 | fi
455 |
456 | if test "$RSS"
457 | then
458 | 	echo "# Syndication of RSS feeds?: YES"
459 | else
460 | 	echo "# Syndication of RSS feeds?: NO (Enable with \$RSS=yes)"
461 | fi
462 |
463 | if test "${SPLUNK_DATA}" != "no"
464 | then
465 | echo "# Indexed data will be stored in: ${SPLUNK_DATA} (Change with \$SPLUNK_DATA, disable with SPLUNK_DATA=no)"
466 | else
467 | echo "# Indexed data WILL NOT persist. (Change by setting \$SPLUNK_DATA)"
468 | fi
469 |
470 | if test "$DOCKER_NAME"
471 | then
472 | echo "# Docker container name: ${DOCKER_NAME} (Disable automatic name with \$DOCKER_NAME=no)"
473 | else
474 | echo "# Docker container name: (Set with \$DOCKER_NAME, if you like)"
475 | fi
476 |
477 | if test "$DOCKER_RM"
478 | then
479 | echo "# Removing container at exit? YES (Disable with \$DOCKER_RM=no)"
480 | else
481 | echo "# Removing container at exit? NO (Set with \$DOCKER_RM=1)"
482 | fi
483 |
484 | if test "$DOCKER_CMD"
485 | then
486 | echo "# Docker command injection: ${DOCKER_CMD}"
487 | else
488 | echo "# Docker command injection: (Feel free to set with \$DOCKER_CMD)"
489 | fi
490 |
491 | if test "$ETC_HOSTS" != "no"
492 | then
493 | echo "# /etc/hosts addition: ${ETC_HOSTS} (Disable with \$ETC_HOSTS=no)"
494 | else
495 | echo "# /etc/hosts addition: NO (Set with \$ETC_HOSTS=filename)"
496 | fi
497 |
498 | if test "$SPLUNK_EVENTGEN"
499 | then
500 | echo "# Fake Webserver Event Generation: YES (index=main sourcetype=nginx to view)"
501 | else
502 | echo "# Fake Webserver Event Generation: NO (Feel free to set with \$SPLUNK_EVENTGEN)"
503 | fi
504 |
505 | if test "${SSL_CERT}"
506 | then
507 | echo "# SSL Cert and Key? YES (${SSL_CERT}, ${SSL_KEY})"
508 | else
509 | echo "# SSL Cert and Key? NO (Specify with \$SSL_CERT and \$SSL_KEY)"
510 | fi
511 |
512 | echo "# "
513 | if test "$SPLUNK_ML"
514 | then
515 | echo "# Splunk Machine Learning Image? YES"
516 | else
517 | echo "# Splunk Machine Learning Image? NO (Enable by setting \$SPLUNK_ML in the environment)"
518 | fi
519 |
520 | echo "# "
521 |
522 | if test "$SPLUNK_PASSWORD" == "password1"
523 | then
524 | echo "# "
525 | echo "# PLEASE NOTE THAT YOU USED THE DEFAULT PASSWORD"
526 | echo "# "
527 | echo "# If you are testing this on localhost, you are probably fine."
528 | echo "# If you are not, then PLEASE use a different password for safety."
529 | echo "# If you have trouble coming up with a password, I have a utility "
530 | echo "# at https://diceware.dmuth.org/ which will help you pick a password "
531 | echo "# that can be remembered."
532 | echo "# "
533 | fi
534 |
535 | echo "> "
536 | echo "> Press ENTER to run Splunk Lab with the above settings, or ctrl-C to abort..."
537 | echo "> "
538 | read
539 |
540 |
541 | echo "# "
542 | echo "# Launching container..."
543 | echo "# "
544 |
545 | if test "$SPLUNK_DEVEL"
546 | then
547 | $CMD
548 |
549 | elif test ! "$DOCKER_NAME"
550 | then
551 | ID=$($CMD)
552 | SHORT_ID=$(echo $ID | cut -c-4)
553 |
554 | else
555 | ID=$($CMD)
556 | SHORT_ID=$DOCKER_NAME
557 |
558 | fi
559 |
560 | if test ! "$SPLUNK_DEVEL"
561 | then
562 | echo "#"
563 | echo "# Running Docker container with ID: ${ID}"
564 | echo "#"
565 | echo "# Inspect container logs with: docker logs ${SHORT_ID}"
566 | echo "#"
567 | echo "# Kill container with: docker kill ${SHORT_ID}"
568 | echo "#"
569 |
570 | else
571 | echo "# All done!"
572 |
573 | fi
574 |
575 |
576 |
577 |
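go.sh intentionally duplicates the entrypoint's password checks so that a bad password fails fast, before Docker ever starts. Those checks could be sketched as a single reusable function — the function name is my own, not part of the repo:

```shell
#!/bin/bash
# Sketch: the password sanity checks from check_args() / entrypoint.sh as
# one function. Returns non-zero on a bad password instead of exiting, so
# both scripts could call it. The name validate_splunk_password is
# illustrative.

validate_splunk_password() {
    local PASSWORD="$1"

    # Reject the well-known passwords both scripts refuse.
    if test "$PASSWORD" = "password" -o "$PASSWORD" = "12345"
    then
        echo "Refusing to use a well-known password." >&2
        return 1
    fi

    # Enforce the 8-character minimum.
    if test ${#PASSWORD} -lt 8
    then
        echo "Password needs to be at least 8 characters." >&2
        return 1
    fi

    return 0
}

# Example: this one passes all checks.
validate_splunk_password "correct-horse-battery-staple" && echo "OK"
```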
--------------------------------------------------------------------------------
/img/app-tree.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/app-tree.png
--------------------------------------------------------------------------------
/img/bella-italia.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/bella-italia.png
--------------------------------------------------------------------------------
/img/facebook-glassdoor.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/facebook-glassdoor.png
--------------------------------------------------------------------------------
/img/fitbit-sleep-dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/fitbit-sleep-dashboard.png
--------------------------------------------------------------------------------
/img/graph.txt:
--------------------------------------------------------------------------------
1 |
2 |
3 | #
4 | # Create at https://dreampuf.github.io/GraphvizOnline/
5 | #
6 | digraph G {
7 |
8 | subgraph cluster_splunk_lab {
9 | "Splunk\nLab" -> "SEPTA Stats";
10 | "Splunk\nLab" -> "Splunk\nTelegram";
11 | "Splunk\nLab" -> "Splunk Network\nHealth Check";
12 | "Splunk\nLab" -> "Splunk\nFitbit";
13 | "Splunk\nLab" -> "Splunk AWS S3 Logs";
14 | }
15 |
16 | subgraph cluster_splunk_lab_ml {
17 | "Splunk\nLab ML" -> "Splunk\nYelp";
18 | "Splunk\nLab ML" -> "Splunk\nGlassdoor";
19 | "Splunk\nLab ML" -> "Splunk\nTwint";
20 | }
21 |
22 | "Splunk\nLab Core" -> "Splunk\nLab";
23 | "Splunk\nLab Core" -> "Splunk\nLab ML";
24 |
25 |
26 | "Splunk\nLab" [shape=diamond]
27 | "Splunk\nLab ML" [shape=diamond]
28 |
29 | "Splunk\nTelegram" [shape=square]
30 | "Splunk Network\nHealth Check" [shape=rectangle]
31 | "Splunk\nFitbit" [shape=square]
32 |
33 | "Splunk\nYelp" [shape=square]
34 | "Splunk\nGlassdoor" [shape=square]
35 | "Splunk\nTwint" [shape=square]
36 |
37 | }
38 |
39 |
40 |
--------------------------------------------------------------------------------
/img/network-huge-outage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/network-huge-outage.png
--------------------------------------------------------------------------------
/img/pa-furry-stats.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/pa-furry-stats.jpg
--------------------------------------------------------------------------------
/img/snepchat-tag-cloud.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/snepchat-tag-cloud.jpg
--------------------------------------------------------------------------------
/img/splunk-cnn-headlines.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/splunk-cnn-headlines.png
--------------------------------------------------------------------------------
/img/splunk-lab.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/splunk-lab.png
--------------------------------------------------------------------------------
/img/splunk-logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/splunk-logo.jpg
--------------------------------------------------------------------------------
/img/splunk-rest-api-input.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/splunk-rest-api-input.png
--------------------------------------------------------------------------------
/img/splunk-syndication-feed.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/img/splunk-syndication-feed.png
--------------------------------------------------------------------------------
/logs/empty.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/logs/empty.txt
--------------------------------------------------------------------------------
/sample-app/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | # Vim
3 | *.swp
4 | *~
5 |
6 | data/
7 | logs/
8 |
9 | # Steve Jobs
10 | .DS_Store
11 |
12 | !sample-app/default/data
13 |
14 |
15 |
--------------------------------------------------------------------------------
/sample-app/Dockerfile:
--------------------------------------------------------------------------------
1 |
2 | #
3 | # Based on https://github.com/splunk/docker-splunk/blob/master/enterprise/Dockerfile
4 | #
5 | # I slimmed this down, as I have no desire to run as a separate user, set up a Deployment
6 | # Server, generate PDFs, etc.  All I want to do is run this single app.
7 | #
8 | FROM dmuth1/splunk-lab
9 |
10 | #
11 | # Change our default app to sample-app
12 | #
13 | RUN sed -i -e "s/splunk-lab/sample-app/" /opt/splunk/etc/users/admin/user-prefs/local/user-prefs.conf
14 |
15 |
16 |
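The `sed` line above is the entire customization: it rewrites the default app name inside the stock image's `user-prefs.conf`. Here is a hedged, runnable sketch of that substitution; the conf key and value below are a made-up stand-in for the real file's contents, and only the `sed` invocation mirrors the Dockerfile:

```shell
#!/bin/bash
# Demonstrate the Dockerfile's sed substitution against a stand-in
# user-prefs.conf. The key/value written here is illustrative; only
# the sed invocation mirrors the Dockerfile's RUN line.
set -e

CONF=$(mktemp)
printf 'default_namespace = splunk-lab\n' > "${CONF}"

# Same substitution the Dockerfile applies inside the image
sed -i -e "s/splunk-lab/sample-app/" "${CONF}"

RESULT=$(cat "${CONF}")
echo "${RESULT}"
```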
--------------------------------------------------------------------------------
/sample-app/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | # Sample App in Splunk Lab
6 |
7 | This directory contains a sample app built in Splunk Lab.
8 |
9 | It has customized build scripts and a Dockerfile which extends the official `splunk-lab` Docker image.
10 |
11 | ## Usage
12 |
13 | - `./go.sh` will run this app in the background in Docker.
14 | - It's not much different from `devel.sh`, except this app won't be mounted in `/mnt/`.
15 |
16 |
17 | ## Additional fake data types
18 |
19 | This app adds a fake data type with fake IP addresses and incrementing IDs, for testing purposes.
20 | It can be viewed with this query: `index=main sourcetype=fake`
21 |
22 | Make sure to launch Splunk with `SPLUNK_EVENTGEN=1` to get fake data.
23 |
24 |
25 | ### Adding in more fake data types
26 |
27 | - Drop any samples in `sample-app/samples/`
28 | - Then edit `sample-app/default/eventgen.conf` to add the new data sources
29 | - Documentation on Eventgen can be found at http://splunk.github.io/eventgen/
30 | - If you used `bin/devel.sh` to start your Docker container, you can type `/opt/splunk/bin/splunk restart` in the shell to restart the Splunk server and speed things up.
31 |
32 |
33 | ## Development
34 |
35 | - `./bin/devel.sh` will run an instance of this app with a `bash` shell in the foreground
36 | - Anything in `logs/` will be ingested into Splunk.
37 | - Splunk Indexes will be written to `data/`
38 | - This directory will be mounted in `/mnt/`
39 | - The app directory `sample-app/` will be mounted in `/opt/splunk/etc/apps/sample-app/`
40 | - Any changes made in that directory will show up in the host machine's filesystem
41 |
42 |
43 |
44 |
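Following the "Adding in more fake data types" steps above, a new sample file needs a matching stanza in `eventgen.conf`. This is an illustrative sketch only -- the sample filename, token regex, and values are hypothetical, not copied from this repo's `eventgen.conf`; see http://splunk.github.io/eventgen/ for the authoritative setting list:

```ini
# Hypothetical stanza for a sample dropped into
# sample-app/samples/my-events.sample -- values are illustrative.
[my-events.sample]
mode = sample
interval = 10
count = 5

# Replace anything that looks like an IPv4 address with a random one
token.0.token = \d+\.\d+\.\d+\.\d+
token.0.replacementType = random
token.0.replacement = ipv4
```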
--------------------------------------------------------------------------------
/sample-app/bin/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Build our sample Splunk app based on Splunk Lab.
4 | #
5 |
6 | # Errors are fatal
7 | set -e
8 |
9 | #
10 | # Change to this script's parent directory
11 | #
12 | pushd "$(dirname "$0")" > /dev/null
13 | cd ..
14 |
15 | docker build -t splunk-lab-sample-app .
16 |
17 |
18 |
--------------------------------------------------------------------------------
/sample-app/bin/devel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Errors are fatal
4 | set -e
5 |
6 | #
7 | # Change to the parent of this script
8 | #
9 | pushd "$(dirname "$0")" > /dev/null
10 | cd ..
11 |
12 | SPLUNK_DEVEL=1 REST_KEY=${REST_KEY} SPLUNK_PASSWORD=${SPLUNK_PASSWORD:-password1} ./go.sh
13 |
14 |
15 |
--------------------------------------------------------------------------------
/sample-app/go.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | # Wrapper script to set up Splunk Lab
4 | #
5 | # To test this script out, set up a webserver:
6 | #
7 | #    python3 -m http.server 8000
8 | #
9 | # Then run the script:
10 | #
11 | # bash <(curl -s localhost:8000/go.sh)
12 | #
13 |
14 |
15 | # Errors are fatal
16 | set -e
17 |
18 |
19 | #
20 | # Set default values for our vars
21 | #
22 | SPLUNK_PASSWORD=${SPLUNK_PASSWORD:-password1}
23 | SPLUNK_DATA=${SPLUNK_DATA:-data}
24 | SPLUNK_LOGS=${SPLUNK_LOGS:-logs}
25 | SPLUNK_PORT=${SPLUNK_PORT:-8000}
26 | SPLUNK_ML=${SPLUNK_ML:--1}
27 | SPLUNK_DEVEL=${SPLUNK_DEVEL:-}
28 | SPLUNK_EVENTGEN=${SPLUNK_EVENTGEN:-}
29 | ETC_HOSTS=${ETC_HOSTS:-no}
30 | REST_KEY=${REST_KEY:-}
31 | RSS=${RSS:-}
32 | DOCKER_RM=${DOCKER_RM:-1}
33 | DOCKER_CMD=${DOCKER_CMD:-}
34 | PRINT_DOCKER_CMD=${PRINT_DOCKER_CMD:-}
35 |
36 | #
37 | # Season this variable to taste for your app
38 | #
39 | DOCKER_NAME=${DOCKER_NAME:-splunk-lab-sample-app}
40 | SPLUNK_APP="sample-app"
41 |
42 |
43 | if test "$SPLUNK_START_ARGS" != "--accept-license"
44 | then
45 | echo "! "
46 | echo "! You need to accept the Splunk License in order to continue."
47 | echo "! Please restart this container with SPLUNK_START_ARGS set to \"--accept-license\" "
48 | echo "! as follows: "
49 | echo "! "
50 | echo "! SPLUNK_START_ARGS=--accept-license"
51 | echo "! "
52 | exit 1
53 | fi
54 |
55 |
56 | #
57 | # Yes, I am aware that the password checking logic is a duplicate of what's in the
58 | # container's entry point script. But if someone is running Splunk Lab through this
59 | # script, I want bad passwords to cause failure as soon as possible, because it's
60 | # easier to troubleshoot here than through Docker logs.
61 | #
62 | if test "$SPLUNK_PASSWORD" == "password"
63 | then
64 | echo "! "
65 | echo "! "
66 | echo "! Cowardly refusing to set the password to 'password'. Please set a different password."
67 | echo "! "
68 | echo "! If you need help picking a secure password, there's an app for that:"
69 | echo "! "
70 | echo "! https://diceware.dmuth.org/"
71 | echo "! "
72 | echo "! "
73 | exit 1
74 | fi
75 |
76 | PASSWORD_LEN=${#SPLUNK_PASSWORD}
77 | if test $PASSWORD_LEN -lt 8
78 | then
79 | echo "! "
80 | echo "! "
81 | echo "! Admin password needs to be at least 8 characters!"
82 | echo "! "
83 | echo "! Password specified: ${SPLUNK_PASSWORD}"
84 | echo "! "
85 | echo "! "
86 | exit 1
87 | fi
88 |
89 |
90 | #
91 | # Massage -1 into an empty string.  This is for the case where we're
92 | # called from devel.sh.
93 | #
94 | if test "$SPLUNK_ML" == "-1"
95 | then
96 | SPLUNK_ML=""
97 | fi
98 |
99 |
100 | if ! command -v docker > /dev/null
101 | then
102 | echo "! "
103 | echo "! Docker not found in the system path!"
104 | echo "! "
105 | echo "! Please double-check that Docker is installed on your system, otherwise you "
106 | echo "! can go to https://www.docker.com/ to download Docker. "
107 | echo "! "
108 | exit 1
109 | fi
110 |
111 |
112 | #
113 | # Sanity check to make sure our log directory exists
114 | #
115 | if test ! -d "${SPLUNK_LOGS}"
116 | then
117 | echo "! "
118 | echo "! ERROR: Log directory '${SPLUNK_LOGS}' does not exist!"
119 | echo "! "
120 | echo "! Please set the environment variable \$SPLUNK_LOGS to the "
121 | echo "! directory you wish to ingest and re-run this script."
122 | echo "! "
123 | exit 1
124 | fi
125 |
126 |
127 | #
128 | # Sanity check
129 | #
130 | if test "$ETC_HOSTS" != "no"
131 | then
132 | if test ! -f ${ETC_HOSTS}
133 | then
134 | echo "! Unable to read file '${ETC_HOSTS}' specified in \$ETC_HOSTS!"
135 | exit 1
136 | fi
137 | fi
138 |
139 |
140 | #
141 | # Start forming our command
142 | #
143 | CMD="docker run \
144 | -p ${SPLUNK_PORT}:8000 \
145 | -e SPLUNK_PASSWORD=${SPLUNK_PASSWORD} "
146 |
147 | #
148 | # If SPLUNK_DATA is set to "no", don't mount the data volume.
149 | # Useful for re-importing everything every time.
150 | #
151 | if test "${SPLUNK_DATA}" != "no"
152 | then
153 | CMD="$CMD -v $(pwd)/${SPLUNK_DATA}:/data "
154 | fi
155 |
156 | #echo "CMD: $CMD" # Debugging
157 |
158 |
159 | #
160 | # If the logs value doesn't start with a leading slash, prefix it with the full path
161 | #
162 | if test "${SPLUNK_LOGS:0:1}" != "/"
163 | then
164 | SPLUNK_LOGS="$(pwd)/${SPLUNK_LOGS}"
165 | fi
166 |
167 | CMD="${CMD} -v ${SPLUNK_LOGS}:/logs"
168 |
169 | if test "${REST_KEY}"
170 | then
171 | CMD="${CMD} -e REST_KEY=${REST_KEY}"
172 | fi
173 |
174 | if test "${RSS}"
175 | then
176 | CMD="${CMD} -e RSS=${RSS}"
177 | fi
178 |
179 | if test "${ETC_HOSTS}" != "no"
180 | then
181 | CMD="$CMD -v $(pwd)/${ETC_HOSTS}:/etc/hosts.extra "
182 | fi
183 |
184 | if test "${SPLUNK_EVENTGEN}"
185 | then
186 | CMD="${CMD} -e SPLUNK_EVENTGEN=${SPLUNK_EVENTGEN}"
187 | fi
188 |
189 | #
190 | # Again, doing the same unusual stuff that we are with DOCKER_RM,
191 | # since the default is to have a name.
192 | #
193 | if test "${DOCKER_NAME}" == "no"
194 | then
195 | DOCKER_NAME=""
196 | fi
197 |
198 | if test "${DOCKER_NAME}"
199 | then
200 | CMD="${CMD} --name ${DOCKER_NAME}"
201 | fi
202 |
203 | #
204 | # Only disable --rm if DOCKER_RM is set to "no".
205 | # We want --rm action by default, since we also have a default name
206 | # and don't want name conflicts.
207 | #
208 | if test "$DOCKER_RM" == "no"
209 | then
210 | DOCKER_RM=""
211 | fi
212 |
213 | if test "${DOCKER_RM}"
214 | then
215 | CMD="${CMD} --rm"
216 | fi
217 |
218 | if test "$SPLUNK_START_ARGS" -a "$SPLUNK_START_ARGS" != 0
219 | then
220 | CMD="${CMD} -e SPLUNK_START_ARGS=${SPLUNK_START_ARGS}"
221 | fi
222 |
223 |
224 | #
225 | # Only run in the foreground if devel mode is set.
226 | # Otherwise, giving users the option to run in foreground will only
227 | # confuse those who are new to Docker.
228 | #
229 | if test ! "$SPLUNK_DEVEL"
230 | then
231 | CMD="${CMD} -d "
232 |
233 | else
234 | CMD="${CMD} -it"
235 | #
236 | # Utility mount :-)
237 | #
238 | CMD="${CMD} -v $(pwd):/mnt "
239 | CMD="${CMD} -e SPLUNK_DEVEL=${SPLUNK_DEVEL} "
240 |
241 | fi
242 |
243 | #
244 | # Since this is a sample app, and I expect other apps might be based on it, we'll mount
245 | # the local directory onto the app directory inside Splunk
246 | #
247 | CMD="${CMD} -v $(pwd)/sample-app:/opt/splunk/etc/apps/${SPLUNK_APP} "
248 |
249 | if test "$DOCKER_CMD"
250 | then
251 | CMD="${CMD} ${DOCKER_CMD} "
252 | fi
253 |
254 |
255 | IMAGE="splunk-lab-sample-app"
256 | #IMAGE="dmuth1/splunk-lab" # Original Splunk Lab image
257 | #IMAGE="splunk-lab" # Debugging/testing
258 | if test "$SPLUNK_ML"
259 | then
260 | IMAGE="splunk-lab-sample-app"
261 | #IMAGE="dmuth1/splunk-lab-ml" # Original Splunk Lab image
262 | #IMAGE="splunk-lab-ml" # Debugging/testing
263 | fi
264 |
265 | CMD="${CMD} ${IMAGE}"
266 |
267 | if test "$SPLUNK_DEVEL"
268 | then
269 | CMD="${CMD} bash"
270 | fi
271 |
272 | #
273 | # If $PRINT_DOCKER_CMD is set, print out the Docker command that would be run then exit.
274 | #
275 | if test "${PRINT_DOCKER_CMD}"
276 | then
277 | echo "$CMD"
278 | exit
279 | fi
280 |
281 | echo
282 | echo " ____ _ _ _ _ "
283 | echo " / ___| _ __ | | _ _ _ __ | | __ | | __ _ | |__ "
284 | echo " \___ \ | '_ \ | | | | | | | '_ \ | |/ / | | / _\` | | '_ \ "
285 | echo " ___) | | |_) | | | | |_| | | | | | | < | |___ | (_| | | |_) |"
286 | echo " |____/ | .__/ |_| \__,_| |_| |_| |_|\_\ |_____| \__,_| |_.__/ "
287 | echo " |_| "
288 | echo
289 |
290 | echo "# "
291 | if test ! "${SPLUNK_DEVEL}"
292 | then
293 | echo "# About to run Splunk Lab!"
294 | else
295 | echo "# About to run Splunk Lab IN DEVELOPMENT MODE!"
296 | fi
297 | echo "# "
298 | echo "# Before we do, please take a few seconds to ensure that your options are correct:"
299 | echo "# "
300 | echo "# URL: https://localhost:${SPLUNK_PORT} (Change with \$SPLUNK_PORT)"
301 | echo "# Login/password: admin/${SPLUNK_PASSWORD} (Change with \$SPLUNK_PASSWORD)"
302 | echo "# "
303 | echo "# Logs will be read from: ${SPLUNK_LOGS} (Change with \$SPLUNK_LOGS)"
304 | echo "# App dashboards will be stored in: ${SPLUNK_APP} (Change with \$SPLUNK_APP)"
305 | if test "$REST_KEY"
306 | then
307 | echo "# REST API Modular Input key: ${REST_KEY}"
308 | else
309 | echo "# REST API Modular Input key: (Get yours at https://www.baboonbones.com/#activation and set with \$REST_KEY)"
310 | fi
311 |
312 | if test "$RSS"
313 | then
314 | echo "# Syndication of RSS feeds?: YES"
315 | else
316 | echo "# Syndication of RSS feeds?: NO  (Enable with \$RSS=yes)"
317 | fi
318 |
319 | if test "${SPLUNK_DATA}" != "no"
320 | then
321 | echo "# Indexed data will be stored in: ${SPLUNK_DATA} (Change with \$SPLUNK_DATA, disable with SPLUNK_DATA=no)"
322 | else
323 | echo "# Indexed data WILL NOT persist. (Change by setting \$SPLUNK_DATA)"
324 | fi
325 |
326 | if test "$DOCKER_NAME"
327 | then
328 | echo "# Docker container name: ${DOCKER_NAME} (Disable automatic name with \$DOCKER_NAME=no)"
329 | else
330 | echo "# Docker container name: (Set with \$DOCKER_NAME, if you like)"
331 | fi
332 |
333 | if test "$DOCKER_RM"
334 | then
335 | echo "# Removing container at exit? YES (Disable with \$DOCKER_RM=no)"
336 | else
337 | echo "# Removing container at exit? NO (Set with \$DOCKER_RM=1)"
338 | fi
339 |
340 | if test "$DOCKER_CMD"
341 | then
342 | echo "# Docker command injection: ${DOCKER_CMD}"
343 | else
344 | echo "# Docker command injection: (Feel free to set with \$DOCKER_CMD)"
345 | fi
346 |
347 | if test "$ETC_HOSTS" != "no"
348 | then
349 | echo "# /etc/hosts addition: ${ETC_HOSTS} (Disable with \$ETC_HOSTS=no)"
350 | else
351 | echo "# /etc/hosts addition: NO (Set with \$ETC_HOSTS=filename)"
352 | fi
353 |
354 | if test "$SPLUNK_EVENTGEN"
355 | then
356 | echo "# Fake Webserver Event Generation: YES (index=main sourcetype=nginx to view)"
357 | else
358 | echo "# Fake Webserver Event Generation: NO (Feel free to set with \$SPLUNK_EVENTGEN)"
359 | fi
360 |
361 | echo "# "
362 | if test "$SPLUNK_ML"
363 | then
364 | echo "# Splunk Machine Learning Image? YES"
365 | else
366 | echo "# Splunk Machine Learning Image? NO (Enable by setting \$SPLUNK_ML in the environment)"
367 | fi
368 |
369 | echo "# "
370 |
371 | if test "$SPLUNK_PASSWORD" == "password1"
372 | then
373 | echo "# "
374 | echo "# PLEASE NOTE THAT YOU USED THE DEFAULT PASSWORD"
375 | echo "# "
376 | echo "# If you are testing this on localhost, you are probably fine."
377 | echo "# If you are not, then PLEASE use a different password for safety."
378 | echo "# If you have trouble coming up with a password, I have a utility "
379 | echo "# at https://diceware.dmuth.org/ which will help you pick a password "
380 | echo "# that can be remembered."
381 | echo "# "
382 | fi
383 |
384 | echo "> "
385 | echo "> Press ENTER to run Splunk Lab with the above settings, or ctrl-C to abort..."
386 | echo "> "
387 | read
388 |
389 |
390 | echo "# "
391 | echo "# Launching container..."
392 | echo "# "
393 |
394 | if test "$SPLUNK_DEVEL"
395 | then
396 | $CMD
397 |
398 | elif test ! "$DOCKER_NAME"
399 | then
400 | ID=$($CMD)
401 | SHORT_ID=$(echo $ID | cut -c-4)
402 |
403 | else
404 | ID=$($CMD)
405 | SHORT_ID=$DOCKER_NAME
406 |
407 | fi
408 |
409 | if test ! "$SPLUNK_DEVEL"
410 | then
411 | echo "#"
412 | echo "# Running Docker container with ID: ${ID}"
413 | echo "#"
414 | echo "# Inspect container logs with: docker logs ${SHORT_ID}"
415 | echo "#"
416 | echo "# Kill container with: docker kill ${SHORT_ID}"
417 | echo "#"
418 |
419 | else
420 | echo "# All done!"
421 |
422 | fi
423 |
424 |
425 |
426 |
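Nearly everything go.sh does follows one pattern: take an environment variable with a `${VAR:-default}` fallback, then conditionally append flags to a `CMD` string before running it. A condensed, runnable sketch of that pattern (with `echo` standing in for executing the assembled `docker run`):

```shell
#!/bin/bash
# Condensed sketch of go.sh's command assembly: env-var defaults,
# a conditionally-appended flag, and a printout in place of execution.
set -e

SPLUNK_PORT=${SPLUNK_PORT:-8000}
SPLUNK_PASSWORD=${SPLUNK_PASSWORD:-password1}
DOCKER_RM=${DOCKER_RM:-1}

CMD="docker run -p ${SPLUNK_PORT}:8000 -e SPLUNK_PASSWORD=${SPLUNK_PASSWORD}"

# go.sh's convention: setting a variable to "no" disables an
# on-by-default behavior, here the --rm cleanup flag.
if test "${DOCKER_RM}" != "no"
then
    CMD="${CMD} --rm"
fi

CMD="${CMD} -d splunk-lab-sample-app"
echo "${CMD}"
```

go.sh itself offers this same dry-run view: set `PRINT_DOCKER_CMD=1` in the environment and it prints the assembled command and exits.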
--------------------------------------------------------------------------------
/sample-app/logs/empty:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/sample-app/logs/empty
--------------------------------------------------------------------------------
/sample-app/sample-app/appserver/static/splunk-lab.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dmuth/splunk-lab/434a53e500f520b804fd2718a1f10c045738ded3/sample-app/sample-app/appserver/static/splunk-lab.png
--------------------------------------------------------------------------------
/sample-app/sample-app/default/app.conf:
--------------------------------------------------------------------------------
1 | #
2 | # Splunk app configuration file
3 | #
4 |
5 | [install]
6 | is_configured = 0
7 |
8 | [ui]
9 | is_visible = 1
10 | label = Splunk Lab
11 |
12 | [launcher]
13 | author = Douglas Muth
14 | description = Splunk Lab: Run Splunk easily in Docker!
15 | version = 1.0.0
16 |
17 |
--------------------------------------------------------------------------------
/sample-app/sample-app/default/authorize.conf:
--------------------------------------------------------------------------------
1 | [role_admin]
2 | grantableRoles = admin
3 | importRoles = can_delete;power;user
4 | srchIndexesDefault = main
5 | srchMaxTime = 8640000
6 |
--------------------------------------------------------------------------------
/sample-app/sample-app/default/data/ui/nav/default.xml:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/sample-app/sample-app/default/data/ui/views/README:
--------------------------------------------------------------------------------
1 | Add all the views that your app needs in this directory
2 |
--------------------------------------------------------------------------------
/sample-app/sample-app/default/data/ui/views/tailreader_check.xml:
--------------------------------------------------------------------------------
1 |
13 |   If you're seeing this, congratulations, you have successfully launched the sample app that ships with Splunk Lab. You are well on your way to building your very first app based on Splunk Lab.
14 |
19 |   Splunk Lab is the quick and easy way to spin up an instance of Splunk in Docker to perform ad-hoc data analysis on one or more logfiles or REST/RSS endpoints!
23 |   If the splunk-lab-ml Docker image was used, the following modules will be available:
--------------------------------------------------------------------------------
/sample-app/sample-app/default/data/ui/views/welcome.xml:
--------------------------------------------------------------------------------
15 |   Splunk Lab is the quick and easy way to spin up an instance of Splunk in Docker to perform ad-hoc data analysis on one or more logfiles or REST/RSS endpoints!
18 |   If the splunk-lab-ml Docker image was used, the following modules will be available:
192 |   11-15-2022 01:45:31.042 +0000 ERROR StreamGroup [217 IndexerTPoolWorker-0] - failed to drain remainder total_sz=24 bytes_freed=7977 avg_bytes_per_iv=332 sth=0x7fb586dfdba0: [1668476729, /opt/splunk/var/lib/splunk/_internaldb/db/hot_v1_1, 0x7fb587f7e840] reason=st_sync failed rc=-6 warm_rc=[-35,1]
195 |   The fix for this is the same as the previous one: start Splunk Lab with SPLUNK_DATA=no in the environment.
198 |   Splunk just doesn't like VirtualBox, it seems. 🤷
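The `SPLUNK_DATA=no` workaround mentioned above maps to a single conditional in go.sh: the `/data` volume is only mounted when `SPLUNK_DATA` is something other than "no". A small runnable sketch of that check:

```shell
#!/bin/bash
# Sketch of go.sh's SPLUNK_DATA handling: mount the data volume
# unless SPLUNK_DATA is "no", in which case indexed data does not
# persist between container runs.
set -e

SPLUNK_DATA=${SPLUNK_DATA:-data}
CMD="docker run"

if test "${SPLUNK_DATA}" != "no"
then
    CMD="${CMD} -v $(pwd)/${SPLUNK_DATA}:/data"
fi

echo "${CMD}"
```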