├── .dockerignore
├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── enhancement_request.md
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── CLA.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Dockerfile
├── Gopkg.lock
├── Gopkg.toml
├── LICENSE
├── Makefile
├── README.md
├── config.json
├── driver.go
├── hec_client.go
├── http.go
├── main.go
├── message_processor.go
├── message_processor_test.go
├── partial_message_buffer.go
├── partial_message_buffer_test.go
├── splunk_logger.go
├── splunk_test.go
├── splunkhecmock_test.go
├── templates.go
└── test
    ├── LogEntry.proto
    ├── LogEntry_pb2.py
    ├── README.md
    ├── __init__.py
    ├── common.py
    ├── config_params
    │   ├── __init__.py
    │   ├── cacert.pem
    │   └── test_cofig_params.py
    ├── conftest.py
    ├── malformed_data
    │   ├── __init__.py
    │   └── test_malformed_events.py
    ├── partial_log
    │   ├── __init__.py
    │   ├── test_file.txt
    │   └── test_partial_log.py
    └── requirements.txt
/.dockerignore:
--------------------------------------------------------------------------------
1 | .git
2 | splunk-logging-plugin
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Report a bug encountered while operating Splunk Connect for Docker
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 |
14 |
15 |
16 | **What happened**:
17 |
18 | **What you expected to happen**:
19 |
20 | **How to reproduce it (as minimally and precisely as possible)**:
21 |
22 | **Anything else we need to know?**:
23 |
24 | **Environment**:
25 | - Docker version (use `docker version`):
26 | - OS (e.g: `cat /etc/os-release`):
27 | - Splunk version:
28 | - Others:
29 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/enhancement_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Enhancement Request
3 | about: Suggest an enhancement to the Splunk Connect for Docker project
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 |
11 |
12 | **What would you like to be added**:
13 |
14 | **Why is this needed**:
15 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | ## Proposed changes
2 |
3 | Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue.
4 |
5 | ## Types of changes
6 |
7 | What types of changes does your code introduce?
8 | _Put an `x` in the boxes that apply_
9 |
10 | - [ ] Bugfix (non-breaking change which fixes an issue)
11 | - [ ] New feature (non-breaking change which adds functionality)
12 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
13 |
14 | ## Checklist
15 |
16 | _Put an `x` in the boxes that apply._
17 |
18 | - [ ] I have read the [CONTRIBUTING](https://github.com/splunk/docker-logging-plugin/blob/develop/CONTRIBUTING.md) doc
19 | - [ ] I have read the [CLA](https://github.com/splunk/docker-logging-plugin/blob/develop/CLA.md)
20 | - [ ] I have added tests that prove my fix is effective or that my feature works
21 | - [ ] I have added necessary documentation (if appropriate)
22 | - [ ] Any dependent changes have been merged and published in downstream modules
23 |
24 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | splunk-logging-plugin
2 | splunk-logging-plugin.tar.gz
3 | .idea/
4 | .DS_Store
5 | *.pyc
6 | vendor/
--------------------------------------------------------------------------------
/CLA.md:
--------------------------------------------------------------------------------
1 | By submitting a Contribution to this Work, You agree that Your Contribution is made subject to the primary LICENSE
2 | file applicable to this Work. In addition, You represent that: (i) You are the copyright owner of the Contribution
3 | or (ii) You have the requisite rights to make the Contribution.
4 |
5 | Definitions:
6 |
7 | “You” shall mean: (i) yourself if you are making a Contribution on your own behalf; or (ii) your company,
8 | if you are making a Contribution on behalf of your company. If you are making a Contribution on behalf of your
9 | company, you represent that you have the requisite authority to do so.
10 |
11 | "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing
12 | work, that is intentionally submitted by You for inclusion in, or documentation of, this project/repository. For the
13 | purposes of this definition, "submitted" means any form of electronic, verbal, or written communication submitted for
14 | inclusion in this project/repository, including but not limited to communication on electronic mailing lists, source
15 | code control systems, and issue tracking systems that are managed by, or on behalf of, the maintainers of
16 | the project/repository.
17 |
18 | “Work” shall mean the collective software, content, and documentation in this project/repository.
19 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
6 |
7 | ## Our Standards
8 |
9 | Examples of behavior that contributes to creating a positive environment include:
10 |
11 | * Using welcoming and inclusive language
12 | * Being respectful of differing viewpoints and experiences
13 | * Gracefully accepting constructive criticism
14 | * Focusing on what is best for the community
15 | * Showing empathy towards other community members
16 |
17 | Examples of unacceptable behavior by participants include:
18 |
19 | * The use of sexualized language or imagery and unwelcome sexual attention or advances
20 | * Trolling, insulting/derogatory comments, and personal or political attacks
21 | * Public or private harassment
22 | * Publishing others' private information, such as a physical or electronic address, without explicit permission
23 | * Other conduct which could reasonably be considered inappropriate in a professional setting
24 |
25 | ## Our Responsibilities
26 |
27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
28 |
29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
30 |
31 | ## Scope
32 |
33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
34 |
35 | ## Enforcement
36 |
37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at oss@splunk.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
38 |
39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
40 |
41 | ## Attribution
42 |
43 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
44 |
45 | [homepage]: http://contributor-covenant.org
46 | [version]: http://contributor-covenant.org/version/1/4/
47 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | CONTRIBUTING
2 |
3 | By submitting a Contribution to this Work, You agree that Your Contribution is made subject to the primary LICENSE
4 | file applicable to this Work. In addition, You represent that: (i) You are the copyright owner of the Contribution
5 | or (ii) You have the requisite rights to make the Contribution.
6 |
7 | Definitions:
8 |
9 | “You” shall mean: (i) yourself if you are making a Contribution on your own behalf; or (ii) your company,
10 | if you are making a Contribution on behalf of your company. If you are making a Contribution on behalf of your
11 | company, you represent that you have the requisite authority to do so.
12 |
13 | "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing
14 | work, that is intentionally submitted by You for inclusion in, or documentation of, this project/repository. For the
15 | purposes of this definition, "submitted" means any form of electronic, verbal, or written communication submitted for
16 | inclusion in this project/repository, including but not limited to communication on electronic mailing lists, source
17 | code control systems, and issue tracking systems that are managed by, or on behalf of, the maintainers of
18 | the project/repository.
19 |
20 | “Work” shall mean the collective software, content, and documentation in this project/repository.
21 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM golang:1.9.2
2 |
3 | WORKDIR /go/src/github.com/splunk/splunk-logging-plugin/
4 |
5 | COPY . /go/src/github.com/splunk/splunk-logging-plugin/
6 |
7 | # install dep
8 | RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
9 |
10 | RUN cd /go/src/github.com/splunk/splunk-logging-plugin && dep ensure
11 |
12 | RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o /bin/splunk-logging-plugin .
13 |
14 | FROM alpine:3.7
15 | RUN apk --no-cache add ca-certificates
16 | COPY --from=0 /bin/splunk-logging-plugin /bin/
17 | WORKDIR /bin/
18 | ENTRYPOINT [ "/bin/splunk-logging-plugin" ]
19 |
--------------------------------------------------------------------------------
/Gopkg.lock:
--------------------------------------------------------------------------------
1 | # This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
2 |
3 |
4 | [[projects]]
5 | branch = "master"
6 | name = "github.com/Azure/go-ansiterm"
7 | packages = [
8 | ".",
9 | "winterm"
10 | ]
11 | revision = "d6e3b3328b783f23731bc4d058875b0371ff8109"
12 |
13 | [[projects]]
14 | name = "github.com/Microsoft/go-winio"
15 | packages = ["."]
16 | revision = "7da180ee92d8bd8bb8c37fc560e673e6557c392f"
17 | version = "v0.4.7"
18 |
19 | [[projects]]
20 | branch = "master"
21 | name = "github.com/Nvveen/Gotty"
22 | packages = ["."]
23 | revision = "cd527374f1e5bff4938207604a14f2e38a9cf512"
24 |
25 | [[projects]]
26 | name = "github.com/Sirupsen/logrus"
27 | packages = ["."]
28 | revision = "c155da19408a8799da419ed3eeb0cb5db0ad5dbc"
29 | version = "v1.0.5"
30 |
31 | [[projects]]
32 | name = "github.com/coreos/go-systemd"
33 | packages = ["activation"]
34 | revision = "40e2722dffead74698ca12a750f64ef313ddce05"
35 | version = "v16"
36 |
37 | [[projects]]
38 | name = "github.com/docker/docker"
39 | packages = [
40 | "api/types",
41 | "api/types/backend",
42 | "api/types/blkiodev",
43 | "api/types/container",
44 | "api/types/filters",
45 | "api/types/mount",
46 | "api/types/network",
47 | "api/types/plugins/logdriver",
48 | "api/types/registry",
49 | "api/types/strslice",
50 | "api/types/swarm",
51 | "api/types/versions",
52 | "daemon/logger",
53 | "daemon/logger/jsonfilelog",
54 | "daemon/logger/loggerutils",
55 | "pkg/filenotify",
56 | "pkg/ioutils",
57 | "pkg/jsonlog",
58 | "pkg/jsonmessage",
59 | "pkg/longpath",
60 | "pkg/plugingetter",
61 | "pkg/plugins",
62 | "pkg/plugins/transport",
63 | "pkg/progress",
64 | "pkg/pubsub",
65 | "pkg/random",
66 | "pkg/streamformatter",
67 | "pkg/stringid",
68 | "pkg/tailfile",
69 | "pkg/templates",
70 | "pkg/term",
71 | "pkg/term/windows",
72 | "pkg/urlutil"
73 | ]
74 | revision = "90d35abf7b3535c1c319c872900fbd76374e521c"
75 | version = "v17.03.0-ce"
76 |
77 | [[projects]]
78 | name = "github.com/docker/go-connections"
79 | packages = [
80 | "nat",
81 | "sockets",
82 | "tlsconfig"
83 | ]
84 | revision = "3ede32e2033de7505e6500d6c868c2b9ed9f169d"
85 | version = "v0.3.0"
86 |
87 | [[projects]]
88 | branch = "master"
89 | name = "github.com/docker/go-plugins-helpers"
90 | packages = ["sdk"]
91 | revision = "61cb8e2334204460162c8bd2417cd43cb71da66f"
92 |
93 | [[projects]]
94 | name = "github.com/docker/go-units"
95 | packages = ["."]
96 | revision = "47565b4f722fb6ceae66b95f853feed578a4a51c"
97 | version = "v0.3.3"
98 |
99 | [[projects]]
100 | name = "github.com/fsnotify/fsnotify"
101 | packages = ["."]
102 | revision = "c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9"
103 | version = "v1.4.7"
104 |
105 | [[projects]]
106 | name = "github.com/gogo/protobuf"
107 | packages = [
108 | "io",
109 | "proto"
110 | ]
111 | revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
112 | version = "v1.0.0"
113 |
114 | [[projects]]
115 | name = "github.com/pkg/errors"
116 | packages = ["."]
117 | revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
118 | version = "v0.8.0"
119 |
120 | [[projects]]
121 | branch = "master"
122 | name = "github.com/tonistiigi/fifo"
123 | packages = ["."]
124 | revision = "3d5202aec260678c48179c56f40e6f38a095738c"
125 |
126 | [[projects]]
127 | branch = "master"
128 | name = "golang.org/x/crypto"
129 | packages = ["ssh/terminal"]
130 | revision = "d6449816ce06963d9d136eee5a56fca5b0616e7e"
131 |
132 | [[projects]]
133 | branch = "master"
134 | name = "golang.org/x/net"
135 | packages = [
136 | "context",
137 | "internal/socks",
138 | "proxy"
139 | ]
140 | revision = "61147c48b25b599e5b561d2e9c4f3e1ef489ca41"
141 |
142 | [[projects]]
143 | branch = "master"
144 | name = "golang.org/x/sys"
145 | packages = [
146 | "unix",
147 | "windows"
148 | ]
149 | revision = "f6f352972f061230a99fbf49d1eb8073ebdb36cb"
150 |
151 | [[projects]]
152 | branch = "master"
153 | name = "golang.org/x/time"
154 | packages = ["rate"]
155 | revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"
156 |
157 | [solve-meta]
158 | analyzer-name = "dep"
159 | analyzer-version = 1
160 | inputs-digest = "02df9713899c4a90650c4e69742fe73c5efdd7f1ec434d41792d4c27a5fd82de"
161 | solver-name = "gps-cdcl"
162 | solver-version = 1
163 |
--------------------------------------------------------------------------------
/Gopkg.toml:
--------------------------------------------------------------------------------
1 | # Gopkg.toml example
2 | #
3 | # Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
4 | # for detailed Gopkg.toml documentation.
5 | #
6 | # required = ["github.com/user/thing/cmd/thing"]
7 | # ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
8 | #
9 | # [[constraint]]
10 | # name = "github.com/user/project"
11 | # version = "1.0.0"
12 | #
13 | # [[constraint]]
14 | # name = "github.com/user/project2"
15 | # branch = "dev"
16 | # source = "github.com/myfork/project2"
17 | #
18 | # [[override]]
19 | # name = "github.com/x/y"
20 | # version = "2.4.0"
21 | #
22 | # [prune]
23 | # non-go = false
24 | # go-tests = true
25 | # unused-packages = true
26 |
27 |
28 | [[constraint]]
29 | name = "github.com/Sirupsen/logrus"
30 | version = "1.0.5"
31 |
32 | [[constraint]]
33 | name = "github.com/docker/docker"
34 | version = "17.3.0-ce"
35 |
36 | [[constraint]]
37 | branch = "master"
38 | name = "github.com/docker/go-plugins-helpers"
39 |
40 | [[constraint]]
41 | name = "github.com/gogo/protobuf"
42 | version = "1.0.0"
43 |
44 | [[constraint]]
45 | name = "github.com/pkg/errors"
46 | version = "0.8.0"
47 |
48 | [[constraint]]
49 | branch = "master"
50 | name = "github.com/tonistiigi/fifo"
51 |
52 | [prune]
53 | go-tests = true
54 | unused-packages = true
55 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "{}"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright {yyyy} {name of copyright owner}
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
203 | =======================================================================
204 | Splunk Connect for Docker Subcomponents:
205 |
206 | The Splunk Connect for Docker project contains subcomponents with separate copyright
207 | notices and license terms. Your use of the source code for these
208 | subcomponents is subject to the terms and conditions of the following
209 | licenses.
210 |
211 | ========================================================================
212 | Apache License 2.0
213 | ========================================================================
214 | The following components are provided under the Apache License 2.0. See project link for details.
215 |
216 | (Apache License 2.0) logger, ioutils, urlutils, logdriver (https://github.com/moby/moby/blob/master/LICENSE)
217 | (Apache License 2.0) fifo (https://github.com/containerd/fifo/blob/master/LICENSE)
218 | (Apache License 2.0) go-plugin-helpers (https://github.com/docker/go-plugins-helpers/blob/master/LICENSE)
219 |
220 | ========================================================================
221 | MIT licenses
222 | ========================================================================
223 | The following components are provided under the MIT License. See project link for details.
224 |
225 | (MIT License) logrus (https://github.com/sirupsen/logrus/blob/master/LICENSE)
226 |
227 | ========================================================================
228 | BSD-style licenses
229 | ========================================================================
230 |
231 | The following components are provided under a BSD-style license. See project link for details.
232 |
233 | (BSD 2-Clause "Simplified" License) errors (https://github.com/pkg/errors/blob/master/LICENSE)
234 | (BSD 3-Clause) protobuf/io (https://github.com/gogo/protobuf/blob/master/LICENSE)
235 |
236 | ========================================================================
237 | Mozilla Public License 2.0
238 | ========================================================================
239 |
240 | The following components are provided under a Mozilla public license. See project link for details.
241 |
242 | (Mozilla Public License 2.0) templates.go from Keel (https://github.com/keel-hq/keel/blob/master/LICENSE)
243 |
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | PLUGIN_NAME=splunk/docker-logging-plugin
2 | PLUGIN_TAG=latest
3 | PLUGIN_DIR=./splunk-logging-plugin
4 |
5 | all: clean docker rootfs create
6 | package: clean docker rootfs zip
7 |
8 | clean:
9 | @echo "### rm ${PLUGIN_DIR}"
10 | rm -rf ${PLUGIN_DIR}
11 |
12 | docker:
13 | @echo "### docker build: rootfs image with splunk-logging-plugin"
14 | docker build -t ${PLUGIN_NAME}:rootfs .
15 |
16 | rootfs:
17 | @echo "### create rootfs directory in ${PLUGIN_DIR}/rootfs"
18 | mkdir -p ${PLUGIN_DIR}/rootfs
19 | docker create --name tmprootfs ${PLUGIN_NAME}:rootfs
20 | docker export tmprootfs | tar -x -C ${PLUGIN_DIR}/rootfs
21 | @echo "### copy config.json to ${PLUGIN_DIR}/"
22 | cp config.json ${PLUGIN_DIR}/
23 | docker rm -vf tmprootfs
24 |
25 | create:
26 | @echo "### remove existing plugin ${PLUGIN_NAME}:${PLUGIN_TAG} if exists"
27 | docker plugin rm -f ${PLUGIN_NAME}:${PLUGIN_TAG} || true
28 | @echo "### create new plugin ${PLUGIN_NAME}:${PLUGIN_TAG} from ${PLUGIN_DIR}"
29 | docker plugin create ${PLUGIN_NAME}:${PLUGIN_TAG} ${PLUGIN_DIR}
30 |
31 | zip:
32 | @echo "### create a tar.gz for plugin"
33 | tar -cvzf splunk-logging-plugin.tar.gz ${PLUGIN_DIR}
34 |
35 | enable:
36 | @echo "### enable plugin ${PLUGIN_NAME}:${PLUGIN_TAG}"
37 | docker plugin enable ${PLUGIN_NAME}:${PLUGIN_TAG}
38 |
39 | push: clean docker rootfs create enable
40 | @echo "### push plugin ${PLUGIN_NAME}:${PLUGIN_TAG}"
41 | docker plugin push ${PLUGIN_NAME}:${PLUGIN_TAG}
42 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | # What does Splunk Connect for Docker do?
3 | Splunk Connect for Docker is a plug-in that extends and expands Docker's logging capabilities so that customers can push their Docker and container logs to their Splunk on-premise or cloud deployment.
4 |
5 | Splunk Connect for Docker is a supported open source product. Customers with an active Splunk support contract receive Splunk Extension support under the Splunk Support Policy, which can be found at https://www.splunk.com/en_us/legal/splunk-software-support-policy.html.
6 |
7 | See the Docker Engine managed plugin system documentation at https://docs.docker.com/engine/extend/ on support for Microsoft Windows and other platforms. See the Prerequisites in this document for more information about system requirements.
8 |
9 | # Prerequisites
10 | Before you install Splunk Connect for Docker, make sure your system meets the following minimum prerequisites:
11 |
12 | * Docker Engine: Version 17.05 or later. If you plan to configure Splunk Connect for Docker via 'daemon.json', you must have the Docker Community Edition (Docker-ce) 18.03 equivalent or later installed.
13 | * Splunk Enterprise, Splunk Light, or Splunk Cloud version 6.6 or later. Splunk Connect for Docker plugin is not currently supported on Windows.
14 | * For customers deploying to Splunk Cloud, HEC must be enabled and a token must be generated by Splunk Support before logs can be ingested.
15 | * Configure an HEC token on Splunk Enterprise or Splunk Light (either single instance or distributed environment). Refer to the set up and use HTTP Event Collector documentation for more details.
16 | * Operating System Platform support as defined in Docker Engine managed plugin system documentation.
17 |
18 | # Install and configure Splunk Connect for Docker
19 |
20 | ## Step 1: Get an HTTP Event Collector token
21 | You must configure the Splunk HTTP Event Collector (HEC) to send your Docker container logging data to Splunk Enterprise or Splunk Cloud. HEC uses tokens as an alternative to embedding Splunk Enterprise or Splunk Cloud credentials in your app or supporting files. For more about how the HTTP event collector works, see http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector
22 |
23 | 1. Enable your HTTP Event collector:
24 | http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/HECWalkthrough#Enable_HEC
25 | 2. Create an HEC token:
26 | http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector
27 | http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UseHECusingconffiles
28 |
29 | Note the following when you generate your token:
30 | * Make sure that indexer acknowledgement is disabled for your token.
31 | * Optionally, enable the indexer acknowledgement functionality by selecting the Enable indexer acknowledgement checkbox.
32 | * Do not generate your token using the default TLS cert provided by Splunk. The default certificates are not secure. For information about configuring Splunk to use self-signed or third-party certs, see http://docs.splunk.com/Documentation/Splunk/7.0.3/Security/AboutsecuringyourSplunkconfigurationwithSSL.
33 | * Splunk Cloud customers must file a support request in order to have a token generated.
34 |
35 | ## Step 2: Install the plugin
36 | There are multiple ways to install Splunk Connect for Docker. Splunk recommends installing from the Docker Store (option 1) to ensure you get the most current and stable build.
37 |
38 | ### Install the Plugin from Docker Store
39 |
40 | 1. Pull the plugin from Docker Hub:
41 | ```
42 | $ docker plugin install splunk/docker-logging-plugin:latest --alias splunk-logging-plugin
43 | ```
44 | 2. Enable the plugin if needed:
45 | ```
46 | $ docker plugin enable splunk-logging-plugin
47 | ```
48 | ### Install the plugin from the tar file
49 |
50 | 1. Clone the repository and check out the release branch:
51 | ```
52 | $ git clone https://github.com/splunk/docker-logging-plugin.git
53 | ```
54 | 2. Create the plugin package
55 | ```
56 | $ cd docker-logging-plugin
57 | $ make package # this creates a splunk-logging-plugin.tar.gz
58 | ```
59 | 3. Extract the package
60 | ```
61 | $ tar -xzf splunk-logging-plugin.tar.gz
62 | ```
63 | 4. Create the plugin
64 | ```
65 | $ docker plugin create splunk-logging-plugin:latest splunk-logging-plugin/
66 | ```
67 | 5. Verify that the plugin is installed by running the following command:
68 | ```
69 | $ docker plugin ls
70 | ```
71 | 6. Enable the plugin
72 | ```
73 | $ docker plugin enable splunk-logging-plugin:latest
74 | ```
75 | ## Step 3: Run containers with the plugin installed
76 |
77 | Splunk Connect for Docker continually listens for logs, but your containers must also be running so that the container logs are forwarded to Splunk Connect for Docker. The following examples describe how to configure containers to run with Splunk Connect for Docker.
78 |
79 | To start your containers, refer to the Docker Documentation found at:
80 |
81 | https://docs.docker.com/config/containers/logging/configure/
82 | https://docs.docker.com/config/containers/logging/configure/#configure-the-delivery-mode-of-log-messages-from-container-to-log-driver
83 |
84 | ## Examples
85 |
86 | This sample daemon.json configuration enables Splunk Connect for Docker for all containers on the Docker engine. Splunk recommends that, in a production environment, you pass your HEC token through daemon.json rather than on the command line.
87 | ```
88 | {
89 | "log-driver": "splunk-logging-plugin",
90 | "log-opts": {
91 | "splunk-url": "",
92 | "splunk-token": "",
93 | "splunk-insecureskipverify": "true"
94 | }
95 | }
96 | ```
97 | This sample command configures Splunk Connect for Docker for a single container.
98 | ```
99 | $ docker run --log-driver=splunk-logging-plugin --log-opt splunk-url=<splunk-url> --log-opt splunk-token=<splunk-token> --log-opt splunk-insecureskipverify=true -d <image>
100 | ```
101 | ## Step 4: Set Configuration variables
102 |
103 | Use the configuration variables to configure the behaviors and rules for Splunk Connect for Docker. For example, you can configure certificate security or how messages are formatted and distributed. Note the following:
104 |
105 | * Configurations passed through `docker run --log-opt` take effect immediately.
106 | * You must restart the Docker engine after configuring through ``daemon.json``
107 |
108 | ### How to use the variables
109 |
110 | The following is an example of the logging options specified for a Splunk Enterprise instance. In this example:
111 |
112 | The path to the root certificate and the Common Name are specified, using an HTTPS scheme, for server certificate verification.
113 | ```
114 | $ docker run --log-driver=splunk-logging-plugin\
115 | --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
116 | --log-opt splunk-url=https://splunkhost:8088 \
117 | --log-opt splunk-capath=/path/to/cert/cacert.pem \
118 | --log-opt splunk-caname=SplunkServerDefaultCert \
119 | --log-opt tag="{{.Name}}/{{.FullID}}" \
120 | --log-opt labels=location \
121 | --log-opt env=TEST \
122 | --env "TEST=false" \
123 | --label location=west \
124 |
125 | ```
126 | ### Required Variables
127 |
128 | Variable | Description
129 | ------------ | -------------
130 | splunk-token | Splunk HTTP Event Collector token.
131 | splunk-url | Path to your Splunk Enterprise, self-service Splunk Cloud instance, or Splunk Cloud managed cluster (including port and scheme used by HTTP Event Collector) in one of the following formats: https://your_splunk_instance:8088 or https://input-prd-p-XXXXXXX.cloud.splunk.com:8088 or https://http-inputs-XXXXXXXX.splunkcloud.com
132 |
133 |
134 | ### Optional Variables
135 |
136 | Variable | Description | Default
137 | ------------ | ------------- | -------------
138 | splunk-source | Event source |
139 | splunk-sourcetype | Event source type |
140 | splunk-index | Event index. (Note that HEC token must be configured to accept the specified index) |
141 | splunk-capath | Path to root certificate. (Must be specified if splunk-insecureskipverify is false) |
142 | splunk-caname | Name to use for validating server certificate; by default the hostname of the splunk-url is used. |
143 | splunk-insecureskipverify| "false" means that server certificates are validated and "true" means that server certificates are not validated. | false
144 | splunk-format | Message format. Values can be inline, json, or raw. For more information about formats, see the Message formats section. | inline
145 | splunk-verify-connection| Upon plug-in startup, verify that Splunk Connect for Docker can connect to the Splunk HEC endpoint. False indicates that Splunk Connect for Docker starts up, keeps trying to connect to HEC, and buffers logs until the connection is established; logs roll off the buffer once it is full. True indicates that Splunk Connect for Docker will not start up if a connection to HEC cannot be established. | false
146 | splunk-gzip | Enable/disable gzip compression to send events to Splunk Enterprise or Splunk Cloud instance. | false
147 | splunk-gzip-level | Set compression level for gzip. Valid values are -1 (default), 0 (no compression), 1 (best speed) … 9 (best compression). | -1
148 | tag | Specify a tag for the message, which can interpret some markup. Refer to the log tag option documentation for customizing the log tag format: https://docs.docker.com/v17.09/engine/admin/logging/log_tags/ | {{.ID}} (12 characters of the container ID)
149 | labels | Comma-separated list of label keys to include in the message, if these labels are specified for the container. |
150 | env | Comma-separated list of environment variable keys to include in the message, if they are specified for the container. |
151 | env-regex | A regular expression to match logging-related environment variables. Used for advanced log tag options. If there is a collision between a label and an env key, the env value takes precedence. Both options add additional fields to the attributes of a logging message. |
152 |
153 |
154 | ### Advanced options - Environment Variables
155 |
156 | To overwrite these values through environment variables, use docker plugin set <PLUGIN_NAME> <VARIABLE>=<VALUE>. For more information, see https://docs.docker.com/engine/reference/commandline/plugin_set/ .
157 |
158 |
159 | Variable | Description | Default
160 | ------------ | ------------- | -------------
161 | SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY | How often the plug-in posts messages when there is nothing to batch, i.e., the maximum time to wait for more messages to batch. The internal buffer used for batching is flushed either when the buffer is full (the designated batch size is reached) or when the buffer times out (at the frequency specified here). | 5s
162 | SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE | The number of messages the plug-in should collect before sending them in one batch. | 1000
163 | SPLUNK_LOGGING_DRIVER_BUFFER_MAX | The maximum number of messages to hold in the buffer and retry when the plug-in cannot connect to the remote server. | 10 * 1000
164 | SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE | How many pending messages can be in the channel used to send messages to background logger worker, which batches them. | 4 * 1000
165 | SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_HOLD_DURATION | Used when appending logs that Docker chunks at its 16 KB limit. Specifies how long the system waits for the next chunk to arrive. | 100ms
166 | SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_BUFFER_SIZE | Used when appending logs that Docker chunks at its 16 KB limit. Specifies the largest message, in bytes, that the system can reassemble. The value provided here should be smaller than or equal to the Splunk HEC limit (1 MB is the default HEC setting). | 1048576 (1mb)
167 | SPLUNK_LOGGING_DRIVER_JSON_LOGS | Determines if JSON logging is enabled. https://docs.docker.com/config/containers/logging/json-file/ | true
168 | SPLUNK_TELEMETRY | Determines if telemetry is enabled. | true
169 |
170 |
171 | ### Message formats
172 | There are three logging plug-in messaging formats set under the optional variable splunk-format:
173 |
174 | * inline (this is the default format)
175 | * json
176 | * raw
177 |
178 | The default format is inline, where each log message is embedded as a string and assigned to the "line" field. For example:
179 | ```
180 | // Example #1
181 | {
182 | "attrs": {
183 | "env1": "val1",
184 | "label1": "label1"
185 | },
186 | "tag": "MyImage/MyContainer",
187 | "source": "stdout",
188 | "line": "my message"
189 | }
190 |
191 | // Example #2
192 | {
193 | "attrs": {
194 | "env1": "val1",
195 | "label1": "label1"
196 | },
197 | "tag": "MyImage/MyContainer",
198 | "source": "stdout",
199 | "line": "{\"foo\": \"bar\"}"
200 | }
201 | ```
202 | When messages are JSON objects, you may want to embed them in the message sent to Splunk.
203 |
204 | To format messages as JSON objects, set --log-opt splunk-format=json. The plug-in tries to parse every line as a JSON object and embeds the parsed object in the "line" field. If a message cannot be parsed, it is sent inline. For example:
205 | ```
206 | //Example #1
207 | {
208 | "attrs": {
209 | "env1": "val1",
210 | "label1": "label1"
211 | },
212 | "tag": "MyImage/MyContainer",
213 | "source": "stdout",
214 | "line": "my message"
215 | }
216 |
217 | //Example #2
218 | {
219 | "attrs": {
220 | "env1": "val1",
221 | "label1": "label1"
222 | },
223 | "tag": "MyImage/MyContainer",
224 | "source": "stdout",
225 | "line": {
226 | "foo": "bar"
227 | }
228 | }
229 | ```
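
The parse-or-fall-back behavior above can be sketched in a few lines of Go. This is an illustrative simplification, not the plugin's actual implementation; `embedLine` is a hypothetical name.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// embedLine mimics splunk-format=json: if the log line parses as a JSON
// object it is embedded as-is; otherwise it is kept as a plain string.
func embedLine(line string) interface{} {
	var obj map[string]interface{}
	if err := json.Unmarshal([]byte(line), &obj); err == nil {
		return obj
	}
	return line
}

func main() {
	for _, l := range []string{`{"foo": "bar"}`, "my message"} {
		event := map[string]interface{}{"line": embedLine(l)}
		out, _ := json.Marshal(event)
		fmt.Println(string(out))
	}
	// prints {"line":{"foo":"bar"}} then {"line":"my message"}
}
```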
230 | If --log-opt splunk-format=raw, each message, together with its attributes (environment variables and labels) and tag, is combined into a raw string. Attributes and the tag are prefixed to the message. For example:
231 | ```
232 | MyImage/MyContainer env1=val1 label1=label1 my message
233 | MyImage/MyContainer env1=val1 label1=label1 {"foo": "bar"}
234 | ```
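
The raw concatenation above can be sketched as follows. This is a simplified illustration of the format, not the plugin's exact code; `rawLine` and the sorted attribute order are assumptions for the example.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// rawLine joins the tag, key=value attributes, and the message body with
// spaces, matching the raw-format examples above.
func rawLine(tag string, attrs map[string]string, msg string) string {
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic attribute order for this sketch
	parts := []string{tag}
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%s", k, attrs[k]))
	}
	parts = append(parts, msg)
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(rawLine("MyImage/MyContainer",
		map[string]string{"env1": "val1", "label1": "label1"},
		"my message"))
	// prints MyImage/MyContainer env1=val1 label1=label1 my message
}
```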
235 | # Troubleshooting
236 |
237 | If Splunk Connect for Docker does not behave as expected, enable debug mode and then refer to the following troubleshooting tips.
238 |
239 | ## Enable Debug Mode to find log errors
240 |
241 | Plugin logs appear in the Docker daemon log. To enable debug mode, export the environment variable LOG_LEVEL=DEBUG in the Docker engine environment. See the Docker documentation for information about how to enable debug mode in your Docker environment: https://docs.docker.com/config/daemon/
242 |
243 | ## Use the debugger to check the Splunk HEC connection
244 |
245 | Check that the HEC endpoint is accessible from your Docker environment. If the endpoint cannot be reached, logs are not sent to Splunk; they buffer and are dropped as they roll off the buffer.
246 | ```
247 | # Test that the HEC endpoint is accessible
248 | $ curl -k https://<your-splunk-host>:8088/services/collector/health
249 | {"text":"HEC is healthy","code":200}
250 | ```
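
The same probe can be done from Go, mirroring the health check the plugin performs in `verifySplunkConnection`. This is a minimal sketch: the URL is a placeholder for your deployment, and `checkHEC` is a hypothetical helper, not part of the plugin's API.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// checkHEC issues a GET against the HEC health endpoint and returns an
// error unless the server responds with HTTP 200.
func checkHEC(baseURL string, skipVerify bool) error {
	client := &http.Client{Transport: &http.Transport{
		// skipVerify corresponds to splunk-insecureskipverify=true;
		// do not use it in production.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: skipVerify},
	}}
	res, err := client.Get(baseURL + "/services/collector/health")
	if err != nil {
		return err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusOK {
		return fmt.Errorf("HEC health check failed: %s", res.Status)
	}
	return nil
}

func main() {
	// Replace with your deployment's HEC URL.
	if err := checkHEC("https://localhost:8088", true); err != nil {
		fmt.Println("HEC not reachable:", err)
	} else {
		fmt.Println("HEC is healthy")
	}
}
```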
251 | ## Check your HEC configuration for clusters
252 |
253 | If you are using an Indexer Cluster, the current plugin accepts a single splunk-url value. We recommend that you configure a load balancer in front of your Indexer tier. Make sure the load balancer can successfully tunnel the HEC requests to the indexer tier. If HEC is configured in an Indexer Cluster environment, all indexers should have same HEC token configured. See http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector.
254 |
255 | ## Check your heavy forwarder connection
256 |
257 | If you are using a heavy forwarder to preprocess the events (e.g: funnel multiple log lines to a single event), make sure that the heavy forwarder is properly connecting to the indexers. To troubleshoot the forwarder and receiver connection, see: https://docs.splunk.com/Documentation/SplunkCloud/7.0.0/Forwarding/Receiverconnection.
258 |
259 | ## Check the plugin's debug log in docker
260 |
261 | Stdout of a plugin is redirected to Docker logs. Such entries have a plugin=<ID> suffix.
262 |
263 | To find the plugin ID of Splunk Connect for Docker, use the command below and look for the Splunk Logging Plugin entry.
264 | ```
265 | # list all the plugins
266 | $ docker plugin ls
267 | ```
268 | Depending on your system, the location of the Docker daemon log may vary. Refer to the Docker documentation for the daemon log location on your specific platform. Here are a few examples:
269 |
270 | * Ubuntu (old using upstart ) - /var/log/upstart/docker.log
271 | * Ubuntu (new using systemd ) - sudo journalctl -fu docker.service
272 | * Boot2Docker - /var/log/docker.log
273 | * Debian GNU/Linux - /var/log/daemon.log
274 | * CentOS - /var/log/daemon.log | grep docker
275 | * CoreOS - journalctl -u docker.service
276 | * Fedora - journalctl -u docker.service
277 | * Red Hat Enterprise Linux Server - /var/log/messages | grep docker
278 | * OpenSuSE - journalctl -u docker.service
279 | * OSX - ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log
280 | * Windows - Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time
281 |
282 |
--------------------------------------------------------------------------------
/config.json:
--------------------------------------------------------------------------------
1 | {
2 | "description": "Splunk Logging Plugin",
3 | "documentation": "https://github.com/splunk/docker-logging-plugin",
4 | "entrypoint": ["/bin/splunk-logging-plugin"],
5 | "network": {
6 | "type": "host"
7 | },
8 | "interface": {
9 | "types": ["docker.logdriver/1.0"],
10 | "socket": "splunklog.sock"
11 | },
12 | "env": [
13 | {
14 | "name": "LOG_LEVEL",
15 | "description": "Set log level to output for plugin logs",
16 | "value": "info",
17 | "settable": ["value"]
18 | },
19 | {
20 | "name": "SPLUNK_LOGGING_DRIVER_FIFO_ERROR_RETRY_TIME",
21 |       "description": "Set the number of retries when reading the fifo from Docker fails. -1 means retry forever",
22 | "value": "3",
23 | "settable": ["value"]
24 | },
25 | {
26 | "name": "SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY",
27 |       "description": "Set how often we send messages (if we are not reaching batch size)",
28 | "value": "5s",
29 | "settable": ["value"]
30 | },
31 | {
32 | "name": "SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE",
33 | "description": "Set number of messages to batch before buffer timeout",
34 | "value": "1000",
35 | "settable": ["value"]
36 | },
37 | {
38 | "name": "SPLUNK_LOGGING_DRIVER_BUFFER_MAX",
39 |       "description": "Set the maximum number of messages to wait in the buffer before being sent to Splunk",
40 | "value": "10000",
41 | "settable": ["value"]
42 | },
43 | {
44 | "name": "SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE",
45 | "description": "Set number of messages allowed to be queued in the channel when reading from the docker provided FIFO",
46 | "value": "4000",
47 | "settable": ["value"]
48 | },
49 | {
50 | "name": "SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_HOLD_DURATION",
51 |       "description": "Used when logs are chunked by Docker's 16kb limit. Sets how long the system waits for the next message to arrive.",
52 | "value": "5s",
53 | "settable": ["value"]
54 | },
55 | {
56 | "name": "SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_BUFFER_SIZE",
57 |       "description": "Used when logs are chunked by Docker's 16kb limit. Sets the biggest message that the system can reassemble.",
58 | "value": "1048576",
59 | "settable": ["value"]
60 | },
61 | {
62 | "name": "SPLUNK_LOGGING_DRIVER_JSON_LOGS",
63 | "description": "Determines if JSON logging is enabled.",
64 | "value": "true",
65 | "settable": ["value"]
66 | },
67 | {
68 | "name": "SPLUNK_TELEMETRY",
69 | "description": "Determines if telemetry is enabled.",
70 | "value": "true",
71 | "settable": ["value"]
72 | }
73 | ]
74 | }
75 |
--------------------------------------------------------------------------------
/driver.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc..
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | // Package splunk provides the log driver for forwarding server logs to
18 | // Splunk HTTP Event Collector endpoint.
19 | package main
20 |
21 | import (
22 | "context"
23 | "encoding/binary"
24 | "fmt"
25 | "io"
26 | "os"
27 | "path/filepath"
28 | "sync"
29 | "syscall"
30 |
31 | "github.com/Sirupsen/logrus"
32 | "github.com/docker/docker/api/types/plugins/logdriver"
33 | "github.com/docker/docker/daemon/logger"
34 | "github.com/docker/docker/daemon/logger/jsonfilelog"
35 | protoio "github.com/gogo/protobuf/io"
36 | "github.com/pkg/errors"
37 | "github.com/tonistiigi/fifo"
38 | )
39 |
40 | type driver struct {
41 | mu sync.Mutex
42 | logs map[string]*logPair // map for file and logger
43 | idx map[string]*logPair // map for container_id and logger
44 | logger logger.Logger
45 | }
46 |
47 | type logPair struct {
48 | jsonl logger.Logger
49 | splunkl logger.Logger
50 | stream io.ReadCloser
51 | info logger.Info
52 | }
53 |
54 | func (lf *logPair) Close() {
55 | lf.stream.Close()
56 | lf.splunkl.Close()
57 | lf.jsonl.Close()
58 | }
59 |
60 | func newDriver() *driver {
61 | return &driver{
62 | logs: make(map[string]*logPair),
63 | idx: make(map[string]*logPair),
64 | }
65 | }
66 |
67 | func (d *driver) StartLogging(file string, logCtx logger.Info) error {
68 | d.mu.Lock()
69 | // check if the specific file is already attached to a logger
70 | if _, exists := d.logs[file]; exists {
71 | d.mu.Unlock()
72 | return fmt.Errorf("logger for %q already exists", file)
73 | }
74 | d.mu.Unlock()
75 |
76 | 	// if there isn't a logger for the file, create a logger handler
77 | if logCtx.LogPath == "" {
78 | logCtx.LogPath = filepath.Join("/var/log/docker", logCtx.ContainerID)
79 | }
80 | if err := os.MkdirAll(filepath.Dir(logCtx.LogPath), 0755); err != nil {
81 | return errors.Wrap(err, "error setting up logger dir")
82 | }
83 |
84 | //create a json logger for the file
85 | jsonl, err := jsonfilelog.New(logCtx)
86 | if err != nil {
87 | return errors.Wrap(err, "error creating jsonfile logger")
88 | }
89 |
90 | err = ValidateLogOpt(logCtx.Config)
91 | if err != nil {
92 | return errors.Wrapf(err, "error options logger splunk: %q", file)
93 | }
94 |
95 | //create a splunk logger for the file
96 | splunkl, err := New(logCtx)
97 | if err != nil {
98 | return errors.Wrap(err, "error creating splunk logger")
99 | }
100 |
101 | logrus.WithField("id", logCtx.ContainerID).WithField("file", file).WithField("logpath", logCtx.LogPath).Debugf("Start logging")
102 | // open the log file in the background with read only access
103 | f, err := fifo.OpenFifo(context.Background(), file, syscall.O_RDONLY, 0700)
104 | if err != nil {
105 | return errors.Wrapf(err, "error opening logger fifo: %q", file)
106 | }
107 |
108 | d.mu.Lock()
109 | lf := &logPair{jsonl, splunkl, f, logCtx}
110 | // add the json logger, splunk logger, log file, and logCtx to the logging driver
111 | d.logs[file] = lf
112 | d.idx[logCtx.ContainerID] = lf
113 | d.mu.Unlock()
114 |
115 | // start to process the logs generated by docker
116 | logrus.Debug("Start processing messages")
117 | mg := &messageProcessor{
118 | retryNumber: getAdvancedOptionInt(envVarReadFifoErrorRetryNumber, defaultReadFifoErrorRetryNumber),
119 | }
120 | go mg.process(lf)
121 | return nil
122 | }
123 |
124 | func (d *driver) StopLogging(file string) error {
125 | logrus.WithField("file", file).Debug("Stop logging")
126 | d.mu.Lock()
127 | lf, ok := d.logs[file]
128 | if ok {
129 | lf.Close()
130 | delete(d.logs, file)
131 | }
132 | d.mu.Unlock()
133 | return nil
134 | }
135 |
136 | func (d *driver) ReadLogs(info logger.Info, config logger.ReadConfig) (io.ReadCloser, error) {
137 | d.mu.Lock()
138 | lf, exists := d.idx[info.ContainerID]
139 | d.mu.Unlock()
140 | if !exists {
141 | return nil, fmt.Errorf("logger does not exist for %s", info.ContainerID)
142 | }
143 |
144 | r, w := io.Pipe()
145 | lr, ok := lf.jsonl.(logger.LogReader)
146 | if !ok {
147 | return nil, fmt.Errorf("logger does not support reading")
148 | }
149 |
150 | go func() {
151 | watcher := lr.ReadLogs(config)
152 |
153 | enc := protoio.NewUint32DelimitedWriter(w, binary.BigEndian)
154 | defer enc.Close()
155 | defer watcher.Close()
156 |
157 | var buf logdriver.LogEntry
158 | for {
159 | select {
160 | case msg, ok := <-watcher.Msg:
161 | if !ok {
162 | w.Close()
163 | return
164 | }
165 |
166 | buf.Line = msg.Line
167 | buf.Partial = msg.Partial
168 | buf.TimeNano = msg.Timestamp.UnixNano()
169 | buf.Source = msg.Source
170 |
171 | if err := enc.WriteMsg(&buf); err != nil {
172 | w.CloseWithError(err)
173 | return
174 | }
175 | case err := <-watcher.Err:
176 | w.CloseWithError(err)
177 | return
178 | }
179 |
180 | buf.Reset()
181 | }
182 | }()
183 |
184 | return r, nil
185 | }
186 |
--------------------------------------------------------------------------------
/hec_client.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc..
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "bytes"
21 | "compress/gzip"
22 | "encoding/json"
23 | "fmt"
24 | "io"
25 | "io/ioutil"
26 | "net/http"
27 | "time"
28 |
29 | "github.com/Sirupsen/logrus"
30 | )
31 |
32 | type hecClient struct {
33 | client *http.Client
34 | transport *http.Transport
35 |
36 | url string
37 | healthCheckURL string
38 | auth string
39 |
40 | // http compression
41 | gzipCompression bool
42 | gzipCompressionLevel int
43 |
44 | // Advanced options
45 | postMessagesFrequency time.Duration
46 | postMessagesBatchSize int
47 | bufferMaximum int
48 | }
49 |
50 | func (hec *hecClient) postMessages(messages []*splunkMessage, lastChance bool) []*splunkMessage {
51 | logrus.Debugf("Received %d messages.", len(messages))
52 | messagesLen := len(messages)
53 | for i := 0; i < messagesLen; i += hec.postMessagesBatchSize {
54 | upperBound := i + hec.postMessagesBatchSize
55 | if upperBound > messagesLen {
56 | upperBound = messagesLen
57 | }
58 | if err := hec.tryPostMessages(messages[i:upperBound]); err != nil {
59 | logrus.Error(err)
60 | if messagesLen-i >= hec.bufferMaximum || lastChance {
61 | // If this is last chance - print them all to the daemon log
62 | if lastChance {
63 | upperBound = messagesLen
64 | }
65 | // Not all sent, but buffer has got to its maximum, let's log all messages
66 | // we could not send and return buffer minus one batch size
67 | for j := i; j < upperBound; j++ {
68 | if jsonEvent, err := json.Marshal(messages[j]); err != nil {
69 | logrus.Error(err)
70 | } else {
71 | logrus.Error(fmt.Errorf("Failed to send a message '%s'", string(jsonEvent)))
72 | }
73 | }
74 | return messages[upperBound:messagesLen]
75 | }
76 | // Not all sent, returning buffer from where we have not sent messages
77 | 			logrus.Debugf("%d messages failed to send", messagesLen-i)
78 | return messages[i:messagesLen]
79 | }
80 | }
81 | // All sent, return empty buffer
82 | logrus.Debugf("%d messages were sent successfully", messagesLen)
83 | return messages[:0]
84 | }
85 |
86 | func (hec *hecClient) tryPostMessages(messages []*splunkMessage) error {
87 | if len(messages) == 0 {
88 | logrus.Debug("No message to post")
89 | return nil
90 | }
91 | var buffer bytes.Buffer
92 | var writer io.Writer
93 | var gzipWriter *gzip.Writer
94 | var err error
95 | // If gzip compression is enabled - create gzip writer with specified compression
96 | // level. If gzip compression is disabled, use standard buffer as a writer
97 | if hec.gzipCompression {
98 | gzipWriter, err = gzip.NewWriterLevel(&buffer, hec.gzipCompressionLevel)
99 | if err != nil {
100 | return err
101 | }
102 | writer = gzipWriter
103 | } else {
104 | writer = &buffer
105 | }
106 | for _, message := range messages {
107 | jsonEvent, err := json.Marshal(message)
108 | if err != nil {
109 | return err
110 | }
111 | if _, err := writer.Write(jsonEvent); err != nil {
112 | return err
113 | }
114 | }
115 | // If gzip compression is enabled, tell it, that we are done
116 | if hec.gzipCompression {
117 | err = gzipWriter.Close()
118 | if err != nil {
119 | return err
120 | }
121 | }
122 | req, err := http.NewRequest("POST", hec.url, bytes.NewBuffer(buffer.Bytes()))
123 | if err != nil {
124 | return err
125 | }
126 | req.Header.Set("Content-Type", "application/json")
127 | req.Header.Set("Authorization", hec.auth)
128 | // Tell if we are sending gzip compressed body
129 | if hec.gzipCompression {
130 | req.Header.Set("Content-Encoding", "gzip")
131 | }
132 | res, err := hec.client.Do(req)
133 | if err != nil {
134 | return err
135 | }
136 | defer res.Body.Close()
137 | if res.StatusCode != http.StatusOK {
138 | var body []byte
139 | body, err = ioutil.ReadAll(res.Body)
140 | if err != nil {
141 | return err
142 | }
143 | return fmt.Errorf("%s: failed to send event - %s - %s", driverName, res.Status, body)
144 | }
145 | io.Copy(ioutil.Discard, res.Body)
146 | return nil
147 | }
148 |
149 | func (hec *hecClient) verifySplunkConnection(l *splunkLogger) error {
150 | req, err := http.NewRequest(http.MethodGet, hec.healthCheckURL, nil)
151 | if err != nil {
152 | return err
153 | }
154 | res, err := hec.client.Do(req)
155 | if err != nil {
156 | return err
157 | }
158 | if res.Body != nil {
159 | defer res.Body.Close()
160 | }
161 | if res.StatusCode != http.StatusOK {
162 | var body []byte
163 | body, err = ioutil.ReadAll(res.Body)
164 | if err != nil {
165 | return err
166 | }
167 | return fmt.Errorf("%s: failed to verify connection - %s - %s", driverName, res.Status, body)
168 | }
169 | return nil
170 | }
171 |
--------------------------------------------------------------------------------
/http.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc..
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "encoding/json"
21 | "errors"
22 | "io"
23 | "net/http"
24 |
25 | "github.com/docker/docker/daemon/logger"
26 | "github.com/docker/docker/pkg/ioutils"
27 | "github.com/docker/go-plugins-helpers/sdk"
28 | )
29 |
30 | type StartLoggingRequest struct {
31 | File string
32 | Info logger.Info
33 | }
34 |
35 | type StopLoggingRequest struct {
36 | File string
37 | }
38 |
39 | type CapabilitiesResponse struct {
40 | Err string
41 | Cap logger.Capability
42 | }
43 |
44 | type ReadLogsRequest struct {
45 | Info logger.Info
46 | Config logger.ReadConfig
47 | }
48 |
49 | func handlers(h *sdk.Handler, d *driver) {
50 | h.HandleFunc("/LogDriver.StartLogging", func(w http.ResponseWriter, r *http.Request) {
51 | var req StartLoggingRequest
52 | if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
53 | http.Error(w, err.Error(), http.StatusBadRequest)
54 | return
55 | }
56 | if req.Info.ContainerID == "" {
57 | respond(errors.New("must provide container id in log context"), w)
58 | return
59 | }
60 |
61 | err := d.StartLogging(req.File, req.Info)
62 | respond(err, w)
63 | })
64 |
65 | h.HandleFunc("/LogDriver.StopLogging", func(w http.ResponseWriter, r *http.Request) {
66 | var req StopLoggingRequest
67 | if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
68 | http.Error(w, err.Error(), http.StatusBadRequest)
69 | return
70 | }
71 | err := d.StopLogging(req.File)
72 | respond(err, w)
73 | })
74 |
75 | h.HandleFunc("/LogDriver.Capabilities", func(w http.ResponseWriter, r *http.Request) {
76 | json.NewEncoder(w).Encode(&CapabilitiesResponse{
77 | Cap: logger.Capability{ReadLogs: true},
78 | })
79 | })
80 |
81 | h.HandleFunc("/LogDriver.ReadLogs", func(w http.ResponseWriter, r *http.Request) {
82 | var req ReadLogsRequest
83 | if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
84 | http.Error(w, err.Error(), http.StatusBadRequest)
85 | return
86 | }
87 |
88 | stream, err := d.ReadLogs(req.Info, req.Config)
89 | if err != nil {
90 | http.Error(w, err.Error(), http.StatusInternalServerError)
91 | return
92 | }
93 | defer stream.Close()
94 |
95 | w.Header().Set("Content-Type", "application/x-json-stream")
96 | wf := ioutils.NewWriteFlusher(w)
97 | io.Copy(wf, stream)
98 | })
99 | }
100 |
101 | type response struct {
102 | Err string
103 | }
104 |
105 | func respond(err error, w http.ResponseWriter) {
106 | var res response
107 | if err != nil {
108 | res.Err = err.Error()
109 | }
110 | json.NewEncoder(w).Encode(&res)
111 | }
112 |
--------------------------------------------------------------------------------
/main.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "fmt"
21 | "os"
22 |
23 | "github.com/Sirupsen/logrus"
24 | "github.com/docker/go-plugins-helpers/sdk"
25 | )
26 |
27 | const socketAddress = "/run/docker/plugins/splunklog.sock"
28 |
29 | var logLevels = map[string]logrus.Level{
30 | "debug": logrus.DebugLevel,
31 | "info": logrus.InfoLevel,
32 | "warn": logrus.WarnLevel,
33 | "error": logrus.ErrorLevel,
34 | }
35 |
36 | func main() {
37 | levelVal := os.Getenv("LOG_LEVEL")
38 | if levelVal == "" {
39 | levelVal = "info"
40 | }
41 | if level, exists := logLevels[levelVal]; exists {
42 | logrus.SetLevel(level)
43 | } else {
44 | fmt.Fprintln(os.Stderr, "invalid log level:", levelVal)
45 | os.Exit(1)
46 | }
47 |
48 | h := sdk.NewHandler(`{"Implements": ["LoggingDriver"]}`)
49 | handlers(&h, newDriver())
50 | if err := h.ServeUnix(socketAddress, 0); err != nil {
51 | panic(err)
52 | }
53 | }
54 |
--------------------------------------------------------------------------------
/message_processor.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "bytes"
21 | "encoding/binary"
22 | "io"
23 | "os"
24 | "strings"
25 | "time"
26 | "unicode/utf8"
27 |
28 | "github.com/Sirupsen/logrus"
29 | "github.com/docker/docker/api/types/plugins/logdriver"
30 | "github.com/docker/docker/daemon/logger"
31 | protoio "github.com/gogo/protobuf/io"
32 | )
33 |
34 | var (
35 | jsonLogs = getAdvancedOptionBool(envVarJSONLogs, defaultJSONLogs)
36 | )
37 |
38 | type messageProcessor struct {
39 | retryNumber int
40 | }
41 |
42 | func (mg messageProcessor) process(lf *logPair) {
43 | logrus.Debug("Start to consume log")
44 | mg.consumeLog(lf)
45 | }
46 |
47 | /*
48 | consumeLog decodes the log stream into LogEntry messages, stores them in a buffer,
49 | and sends the buffer to the Splunk logger and, if enabled, the JSON logger
50 | */
51 | func (mg messageProcessor) consumeLog(lf *logPair) {
52 | // Initialize temp buffer
53 | tmpBuf := &partialMsgBuffer{
54 | bufferTimer: time.Now(),
55 | }
56 | // create a protobuf reader for the log stream
57 | dec := protoio.NewUint32DelimitedReader(lf.stream, binary.BigEndian, 1e6)
58 | defer dec.Close()
59 | defer lf.Close()
60 | // a temp buffer for each log entry
61 | var buf logdriver.LogEntry
62 | curRetryNumber := 0
63 | for {
64 | // read a message from the log stream into the buffer
65 | if err := dec.ReadMsg(&buf); err != nil {
66 | // exit the loop if reader reaches EOF or the fifo is closed by the writer
67 | if err == io.EOF || err == os.ErrClosed || strings.Contains(err.Error(), "file already closed") {
68 | logrus.WithField("id", lf.info.ContainerID).WithError(err).Info("shutting down loggers")
69 | return
70 | }
71 |
72 | // exit the loop if retry number reaches the specified number
73 | if mg.retryNumber != -1 && curRetryNumber > mg.retryNumber {
74 | logrus.WithField("id", lf.info.ContainerID).WithField("curRetryNumber", curRetryNumber).WithField("retryNumber", mg.retryNumber).WithError(err).Error("Stop retrying. Shutting down loggers")
75 | return
76 | }
77 |
78 | // if there is any other error, retry for robustness. If retryNumber is -1, retry forever
79 | curRetryNumber++
80 | logrus.WithField("id", lf.info.ContainerID).WithField("curRetryNumber", curRetryNumber).WithField("retryNumber", mg.retryNumber).WithError(err).Error("Encountered error and retrying")
81 | time.Sleep(500 * time.Millisecond)
82 | dec = protoio.NewUint32DelimitedReader(lf.stream, binary.BigEndian, 1e6)
83 | }
84 | curRetryNumber = 0
85 |
86 | if mg.shouldSendMessage(buf.Line) {
87 | if tmpBuf.tBuf.Len() == 0 {
88 | logrus.Debug("First message, resetting timer")
89 | tmpBuf.bufferTimer = time.Now()
90 | }
91 | // Append to temp buffer
92 | if err := tmpBuf.append(&buf); err == nil {
93 | // Send message to splunk and also json logger if enabled
94 | mg.sendMessage(lf.splunkl, &buf, tmpBuf, lf.info.ContainerID)
95 | if jsonLogs {
96 | mg.sendMessage(lf.jsonl, &buf, tmpBuf, lf.info.ContainerID)
97 | }
98 | // reset the temp buffer and flags
99 | tmpBuf.reset()
100 | }
101 | }
102 | buf.Reset()
103 | }
104 | }
105 |
106 | // send the log entry message to logger
107 | func (mg messageProcessor) sendMessage(l logger.Logger, buf *logdriver.LogEntry, t *partialMsgBuffer, containerid string) {
108 | var msg logger.Message
109 | // Only send if the partial bit is not set, or the temp buffer has reached its maximum size,
110 | // or the temp buffer hold timer has expired
111 | if !buf.Partial || t.shouldFlush(time.Now()) {
112 | msg.Line = t.tBuf.Bytes()
113 | msg.Source = buf.Source
114 | msg.Partial = buf.Partial
115 | msg.Timestamp = time.Unix(0, buf.TimeNano)
116 |
117 | if err := l.Log(&msg); err != nil {
118 | logrus.WithField("id", containerid).WithError(err).WithField("message",
119 | msg).Error("Error writing log message")
120 | }
121 | t.bufferReset = true
122 | }
123 | }
124 |
125 | // shouldSendMessage() returns a boolean indicating
126 | // if the message should be sent to Splunk
127 | func (mg messageProcessor) shouldSendMessage(message []byte) bool {
128 | trimmedLine := bytes.Fields(message)
129 | if len(trimmedLine) == 0 {
130 | logrus.Info("Ignoring empty string")
131 | return false
132 | }
133 |
134 | // even if the message byte array is not a valid UTF-8 string,
135 | // we still send the message to splunk
136 | if !utf8.Valid(message) {
137 | logrus.Warnf("%v is not UTF-8 decodable", message)
138 | return true
139 | }
140 | return true
141 | }
--------------------------------------------------------------------------------
/message_processor_test.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import "testing"
20 |
21 | func TestShouldSendMessage(t *testing.T) {
22 | mg := &messageProcessor{}
23 | test := []byte{' '}
24 | res := mg.shouldSendMessage(test)
25 |
26 | if res {
27 | t.Fatalf("%s should be treated as an empty event", test)
28 | }
29 |
30 | test = []byte{' ', ' '}
31 | res = mg.shouldSendMessage(test)
32 |
33 | if res {
34 | t.Fatalf("%s should be treated as an empty event", test)
35 | }
36 |
37 | test = []byte{}
38 | res = mg.shouldSendMessage(test)
39 |
40 | if res {
41 | t.Fatalf("%s should be treated as an empty event", test)
42 | }
43 |
44 | test = []byte{'a'}
45 | res = mg.shouldSendMessage(test)
46 |
47 | if !res {
48 | t.Fatalf("%s should not be treated as an empty event", test)
49 | }
50 |
51 | test = []byte{1}
52 | res = mg.shouldSendMessage(test)
53 |
54 | if !res {
55 | t.Fatalf("%s should not be treated as an empty event", test)
56 | }
57 |
58 | test = []byte{87, 65, 84}
59 | res = mg.shouldSendMessage(test)
60 |
61 | if !res {
62 | t.Fatalf("%s should not be treated as an empty event", test)
63 | }
64 |
65 | test = []byte{0xff, 0xfe, 0xfd} // non utf-8 encodable
66 | res = mg.shouldSendMessage(test)
67 |
68 | if !res {
69 | t.Fatalf("%s is not valid UTF-8, but the event should still be sent", test)
70 | }
71 | }
72 |
--------------------------------------------------------------------------------
/partial_message_buffer.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "bytes"
21 | "time"
22 |
23 | "github.com/Sirupsen/logrus"
24 | "github.com/docker/docker/api/types/plugins/logdriver"
25 | )
26 |
27 | var (
28 | partialMsgBufferHoldDuration = getAdvancedOptionDuration(envVarPartialMsgBufferHoldDuration, defaultPartialMsgBufferHoldDuration)
29 | partialMsgBufferMaximum = getAdvancedOptionInt(envVarPartialMsgBufferMaximum, defaultPartialMsgBufferMaximum)
30 | )
31 |
32 | type partialMsgBuffer struct {
33 | tBuf bytes.Buffer
34 | bufferTimer time.Time
35 | bufferReset bool
36 | }
37 |
38 | func (b *partialMsgBuffer) append(l *logdriver.LogEntry) (err error) {
39 | // Add msg to temp buffer and disable buffer reset flag
40 | if !b.shouldFlush(time.Now()) {
41 | ps, err := b.tBuf.Write(l.Line)
42 | b.bufferReset = false
43 | if err != nil {
44 | logrus.WithError(err).WithField("bytesWritten", ps).Error(
45 | "Error appending to temp buffer")
46 | b.reset()
47 | return err
48 | }
49 | }
50 | return nil
51 | }
52 |
53 | func (b *partialMsgBuffer) reset() {
54 | if b.bufferReset {
55 | b.tBuf.Reset()
56 | b.bufferTimer = time.Now()
57 | logrus.WithField("resetBufferTimer", b.bufferTimer).Debug("resetting buffer Timer")
58 | }
59 | }
60 |
61 | func (b *partialMsgBuffer) hasHoldDurationExpired(t time.Time) bool {
62 | diff := t.Sub(b.bufferTimer)
63 | logrus.WithField("currentTime", t).WithField("bufferTime", b.bufferTimer).WithField("diff", diff).Debug("Timeout settings")
64 | logrus.WithField("partialMsgBufferHoldDuration", partialMsgBufferHoldDuration).WithField("hasHoldDurationExpired", diff > partialMsgBufferHoldDuration).Debug("check timeout")
65 | return diff > partialMsgBufferHoldDuration
66 | }
67 |
68 | func (b *partialMsgBuffer) hasLengthExceeded() bool {
69 | logrus.WithField("buffer size limit exceeded", partialMsgBufferMaximum < b.tBuf.Len()).Debug("check size")
70 | return partialMsgBufferMaximum < b.tBuf.Len()
71 | }
72 |
73 | func (b *partialMsgBuffer) shouldFlush(t time.Time) bool {
74 | logrus.WithField("should flush", b.hasLengthExceeded() || b.hasHoldDurationExpired(t)).Debug("flush check")
75 | return b.hasLengthExceeded() || b.hasHoldDurationExpired(t)
76 | }
77 |
--------------------------------------------------------------------------------
/partial_message_buffer_test.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | "os"
5 | "testing"
6 | "time"
7 |
8 | "github.com/docker/docker/api/types/plugins/logdriver"
9 | )
10 |
11 | func TestAppend(t *testing.T) {
12 | buf := &partialMsgBuffer{
13 | bufferTimer: time.Now(),
14 | }
15 |
16 | entry := &logdriver.LogEntry{
17 | Source: "test",
18 | TimeNano: time.Now().UnixNano(),
19 | Line: []byte{'t', 'e', 's', 't'},
20 | Partial: false,
21 | }
22 | buf.append(entry)
23 |
24 | length1 := buf.tBuf.Len()
25 | if length1 == 0 {
26 | t.Fatal("append to partialMsgBuffer failed")
27 | }
28 |
29 | if buf.bufferReset {
30 | t.Fatal("bufferReset should be false")
31 | }
32 |
33 | entry2 := &logdriver.LogEntry{
34 | Source: "test",
35 | TimeNano: time.Now().UnixNano(),
36 | Line: []byte{'a', 'b', 'c'},
37 | Partial: false,
38 | }
39 |
40 | buf.append(entry2)
41 |
42 | length2 := buf.tBuf.Len()
43 |
44 | if length2 <= length1 {
45 | t.Fatal("append to partialMsgBuffer failed on 2nd entry")
46 | }
47 |
48 | if buf.bufferReset {
49 | t.Fatal("bufferReset should be false")
50 | }
51 | }
52 |
53 | func TestReset(t *testing.T) {
54 | buf := &partialMsgBuffer{
55 | bufferTimer: time.Now(),
56 | }
57 |
58 | beforeTime := buf.bufferTimer
59 |
60 | entry := &logdriver.LogEntry{
61 | Source: "test",
62 | TimeNano: time.Now().UnixNano(),
63 | Line: []byte{'t', 'e', 's', 't'},
64 | Partial: false,
65 | }
66 | buf.append(entry)
67 |
68 | buf.reset()
69 |
70 | if buf.bufferReset {
71 | t.Fatal("bufferReset should be false after append()")
72 | }
73 |
74 | if buf.tBuf.Len() == 0 {
75 | t.Fatal("tBuf should not be reset if bufferReset is false")
76 | }
77 |
78 | if buf.bufferTimer != beforeTime {
79 | t.Fatal("bufferTimer should not be reset if bufferReset is false")
80 | }
81 |
82 | buf.bufferReset = true
83 | buf.reset()
84 |
85 | if buf.tBuf.Len() > 0 {
86 | t.Fatal("tBuf should be reset if bufferReset is true")
87 | }
88 |
89 | if buf.bufferTimer == beforeTime {
90 | t.Fatal("bufferTimer should be reset")
91 | }
92 | }
93 |
94 | func TestHasHoldDurationExpired(t *testing.T) {
95 | test := 5 * time.Millisecond
96 | if err := os.Setenv(envVarPartialMsgBufferHoldDuration, test.String()); err != nil {
97 | t.Fatal(err)
98 | }
99 |
100 | partialMsgBufferHoldDuration = getAdvancedOptionDuration(envVarPartialMsgBufferHoldDuration, defaultPartialMsgBufferHoldDuration)
101 |
102 | startTime := time.Now()
103 | buf := &partialMsgBuffer{
104 | bufferTimer: startTime,
105 | }
106 |
107 | time.Sleep(3 * time.Millisecond)
108 | endTime := time.Now()
109 | expired := buf.hasHoldDurationExpired(endTime)
110 | shouldFlush := buf.shouldFlush(endTime)
111 |
112 | if expired {
113 | t.Fatal("bufferTimer should not have expired")
114 | }
115 |
116 | if shouldFlush {
117 | t.Fatal("tbuf should not be flushed")
118 | }
119 |
120 | time.Sleep(2 * time.Millisecond)
121 | endTime = time.Now()
122 | expired = buf.hasHoldDurationExpired(endTime)
123 | shouldFlush = buf.shouldFlush(endTime)
124 |
125 | if !expired {
126 | t.Fatal("bufferTimer should have expired")
127 | }
128 |
129 | if !shouldFlush {
130 | t.Fatal("tbuf should be flushed when buffer expired")
131 | }
132 | }
133 |
134 | func TestHasLengthExceeded(t *testing.T) {
135 | if err := os.Setenv(envVarPartialMsgBufferMaximum, "10"); err != nil {
136 | t.Fatal(err)
137 | }
138 | partialMsgBufferMaximum = getAdvancedOptionInt(envVarPartialMsgBufferMaximum, defaultPartialMsgBufferMaximum)
139 |
140 | buf := &partialMsgBuffer{
141 | bufferTimer: time.Now(),
142 | }
143 |
144 | a := []byte{}
145 | entry := &logdriver.LogEntry{
146 | Source: "test",
147 | TimeNano: time.Now().UnixNano(),
148 | Line: a,
149 | Partial: false,
150 | }
151 | buf.append(entry)
152 | lengthExceeded := buf.hasLengthExceeded()
153 | shouldFlush := buf.shouldFlush(time.Now())
154 |
155 | if lengthExceeded {
156 | t.Fatalf("buffer size should not be exceeded with length %v", buf.tBuf.Len())
157 | }
158 |
159 | if shouldFlush {
160 | t.Fatalf("tbuf should not be flushed with bufferSize %v and current length %v", partialMsgBufferMaximum, buf.tBuf.Len())
161 | }
162 |
163 | for i := 0; i < 9; i++ {
164 | a = append(a, 'x')
165 | }
166 |
167 | entry = &logdriver.LogEntry{
168 | Source: "test",
169 | TimeNano: time.Now().UnixNano(),
170 | Line: a,
171 | Partial: false,
172 | }
173 | buf.append(entry)
174 | lengthExceeded = buf.hasLengthExceeded()
175 | shouldFlush = buf.shouldFlush(time.Now())
176 |
177 | if lengthExceeded {
178 | t.Fatalf("buffer size should not be exceeded with length %v", buf.tBuf.Len())
179 | }
180 |
181 | if shouldFlush {
182 | t.Fatalf("tbuf should not be flushed with bufferSize %v and current length %v", partialMsgBufferMaximum, buf.tBuf.Len())
183 | }
184 |
185 | a = append(a, 'x')
186 |
187 | entry = &logdriver.LogEntry{
188 | Source: "test",
189 | TimeNano: time.Now().UnixNano(),
190 | Line: a,
191 | Partial: false,
192 | }
193 | buf.append(entry)
194 | lengthExceeded = buf.hasLengthExceeded()
195 | shouldFlush = buf.shouldFlush(time.Now())
196 |
197 | if !lengthExceeded {
198 | t.Fatalf("buffer size should be exceeded with length %v", buf.tBuf.Len())
199 | }
200 |
201 | if !shouldFlush {
202 | t.Fatalf("tbuf should be flushed with bufferSize %v and current length %v", partialMsgBufferMaximum, buf.tBuf.Len())
203 | }
204 |
205 | }
206 |
--------------------------------------------------------------------------------
/splunk_logger.go:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "bytes"
21 | "compress/gzip"
22 | "crypto/tls"
23 | "crypto/x509"
24 | "encoding/json"
25 | "fmt"
26 | "io/ioutil"
27 | "net/http"
28 | "net/url"
29 | "os"
30 | "strconv"
31 | "strings"
32 | "sync"
33 | "time"
34 |
35 | "github.com/Sirupsen/logrus"
36 | "github.com/docker/docker/daemon/logger"
37 | "github.com/docker/docker/daemon/logger/loggerutils"
38 | "github.com/docker/docker/pkg/urlutil"
39 | )
40 |
41 | const (
42 | driverName = "splunk"
43 | splunkURLKey = "splunk-url"
44 | splunkURLPathKey = "splunk-url-path"
45 | splunkTokenKey = "splunk-token"
46 | splunkSourceKey = "splunk-source"
47 | splunkSourceTypeKey = "splunk-sourcetype"
48 | splunkIndexKey = "splunk-index"
49 | splunkCAPathKey = "splunk-capath"
50 | splunkCANameKey = "splunk-caname"
51 | splunkInsecureSkipVerifyKey = "splunk-insecureskipverify"
52 | splunkFormatKey = "splunk-format"
53 | splunkVerifyConnectionKey = "splunk-verify-connection"
54 | splunkGzipCompressionKey = "splunk-gzip"
55 | splunkGzipCompressionLevelKey = "splunk-gzip-level"
56 | envKey = "env"
57 | envRegexKey = "env-regex"
58 | labelsKey = "labels"
59 | tagKey = "tag"
60 | )
61 |
62 | const (
63 | // How often we send messages (when the batch size is not reached)
64 | defaultPostMessagesFrequency = 5 * time.Second
65 | // Maximum number of messages in a batch
66 | defaultPostMessagesBatchSize = 1000
67 | // Maximum number of messages we can store in buffer
68 | defaultBufferMaximum = 10 * defaultPostMessagesBatchSize
69 | // Number of messages allowed to be queued in the channel
70 | defaultStreamChannelSize = 4 * defaultPostMessagesBatchSize
71 | // Partial log hold duration (when the max buffer size is not reached)
72 | defaultPartialMsgBufferHoldDuration = 5 * time.Second
73 | // Maximum buffer size for partial logging
74 | defaultPartialMsgBufferMaximum = 1024 * 1024
75 | // Number of retries when an error occurs while reading logs from the Docker-provided fifo
76 | // -1 means retry forever
77 | defaultReadFifoErrorRetryNumber = 3
78 | // Determines if JSON logging is enabled
79 | defaultJSONLogs = true
80 | // Determines if telemetry is enabled
81 | defaultSplunkTelemetry = true
82 | )
83 |
84 | const (
85 | envVarPostMessagesFrequency = "SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY"
86 | envVarPostMessagesBatchSize = "SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE"
87 | envVarBufferMaximum = "SPLUNK_LOGGING_DRIVER_BUFFER_MAX"
88 | envVarStreamChannelSize = "SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE"
89 | envVarPartialMsgBufferHoldDuration = "SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_HOLD_DURATION"
90 | envVarPartialMsgBufferMaximum = "SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_BUFFER_SIZE"
91 | envVarReadFifoErrorRetryNumber = "SPLUNK_LOGGING_DRIVER_FIFO_ERROR_RETRY_TIME"
92 | envVarJSONLogs = "SPLUNK_LOGGING_DRIVER_JSON_LOGS"
93 | envVarSplunkTelemetry = "SPLUNK_TELEMETRY"
94 | )
95 |
96 | type splunkLoggerInterface interface {
97 | logger.Logger
98 | worker()
99 | }
100 |
101 | type splunkLogger struct {
102 | hec *hecClient
103 | nullMessage *splunkMessage
104 |
105 | // For synchronization between background worker and logger.
106 | // We use channel to send messages to worker go routine.
107 | // All other variables for blocking Close call before we flush all messages to HEC
108 | stream chan *splunkMessage
109 | lock sync.RWMutex
110 | closed bool
111 | closedCond *sync.Cond
112 | }
113 |
114 | type splunkLoggerInline struct {
115 | *splunkLogger
116 |
117 | nullEvent *splunkMessageEvent
118 | }
119 |
120 | type splunkLoggerJSON struct {
121 | *splunkLoggerInline
122 | }
123 |
124 | type splunkLoggerRaw struct {
125 | *splunkLogger
126 |
127 | prefix []byte
128 | }
129 |
130 | type splunkMessage struct {
131 | Event interface{} `json:"event"`
132 | Time string `json:"time"`
133 | Host string `json:"host"`
134 | Source string `json:"source,omitempty"`
135 | SourceType string `json:"sourcetype,omitempty"`
136 | Index string `json:"index,omitempty"`
137 | Entity string `json:"entity,omitempty"`
138 | }
139 |
140 | type splunkMessageEvent struct {
141 | Line interface{} `json:"line"`
142 | Source string `json:"source"`
143 | Tag string `json:"tag,omitempty"`
144 | Attrs map[string]string `json:"attrs,omitempty"`
145 | }
146 |
147 | const (
148 | splunkFormatRaw = "raw"
149 | splunkFormatJSON = "json"
150 | splunkFormatInline = "inline"
151 | )
152 |
153 | /*
154 | New Splunk Logger
155 | */
156 | func New(info logger.Info) (logger.Logger, error) {
157 | hostname, err := info.Hostname()
158 | if err != nil {
159 | return nil, fmt.Errorf("%s: cannot access hostname to set source field", driverName)
160 | }
161 |
162 | // Parse and validate Splunk URL
163 | splunkURL, err := parseURL(info)
164 | if err != nil {
165 | return nil, err
166 | }
167 |
168 | // Splunk Token is required parameter
169 | splunkToken, ok := info.Config[splunkTokenKey]
170 | if !ok {
171 | return nil, fmt.Errorf("%s: %s is expected", driverName, splunkTokenKey)
172 | }
173 |
174 | tlsConfig := &tls.Config{}
175 |
176 | // Splunk uses autogenerated certificates by default;
177 | // allow users to trust them by skipping verification
178 | if insecureSkipVerifyStr, ok := info.Config[splunkInsecureSkipVerifyKey]; ok {
179 | insecureSkipVerify, err := strconv.ParseBool(insecureSkipVerifyStr)
180 | if err != nil {
181 | return nil, err
182 | }
183 | tlsConfig.InsecureSkipVerify = insecureSkipVerify
184 | }
185 |
186 | // If a path to the root certificate is provided, load it
187 | if caPath, ok := info.Config[splunkCAPathKey]; ok {
188 | caCert, err := ioutil.ReadFile(caPath)
189 | if err != nil {
190 | return nil, err
191 | }
192 | caPool := x509.NewCertPool()
193 | caPool.AppendCertsFromPEM(caCert)
194 | tlsConfig.RootCAs = caPool
195 | }
196 |
197 | if caName, ok := info.Config[splunkCANameKey]; ok {
198 | tlsConfig.ServerName = caName
199 | }
200 |
201 | gzipCompression := false
202 | if gzipCompressionStr, ok := info.Config[splunkGzipCompressionKey]; ok {
203 | gzipCompression, err = strconv.ParseBool(gzipCompressionStr)
204 | if err != nil {
205 | return nil, err
206 | }
207 | }
208 |
209 | gzipCompressionLevel := gzip.DefaultCompression
210 | if gzipCompressionLevelStr, ok := info.Config[splunkGzipCompressionLevelKey]; ok {
211 | var err error
212 | gzipCompressionLevel64, err := strconv.ParseInt(gzipCompressionLevelStr, 10, 32)
213 | if err != nil {
214 | return nil, err
215 | }
216 | gzipCompressionLevel = int(gzipCompressionLevel64)
217 | if gzipCompressionLevel < gzip.DefaultCompression || gzipCompressionLevel > gzip.BestCompression {
218 | err := fmt.Errorf("unsupported level '%s' for %s (supported values between %d and %d)",
219 | gzipCompressionLevelStr, splunkGzipCompressionLevelKey, gzip.DefaultCompression, gzip.BestCompression)
220 | return nil, err
221 | }
222 | }
223 |
224 | transport := &http.Transport{
225 | TLSClientConfig: tlsConfig,
226 | }
227 | client := &http.Client{
228 | Transport: transport,
229 | }
230 |
231 | source := info.Config[splunkSourceKey]
232 | sourceType := info.Config[splunkSourceTypeKey]
233 | if sourceType == "" {
234 | sourceType = "splunk_connect_docker"
235 | }
236 | index := info.Config[splunkIndexKey]
237 |
238 | var nullMessage = &splunkMessage{
239 | Host: hostname,
240 | Source: source,
241 | SourceType: sourceType,
242 | Index: index,
243 | }
244 |
245 | // Allow the user to remove the tag from messages by setting it to an empty string
246 | tag := ""
247 | if tagTemplate, ok := info.Config[tagKey]; !ok || tagTemplate != "" {
248 | tag, err = loggerutils.ParseLogTag(info, loggerutils.DefaultTemplate)
249 | if err != nil {
250 | return nil, err
251 | }
252 | }
253 |
254 | attrs, err := info.ExtraAttributes(nil)
255 | if err != nil {
256 | return nil, err
257 | }
258 |
259 | var (
260 | postMessagesFrequency = getAdvancedOptionDuration(envVarPostMessagesFrequency, defaultPostMessagesFrequency)
261 | postMessagesBatchSize = getAdvancedOptionInt(envVarPostMessagesBatchSize, defaultPostMessagesBatchSize)
262 | bufferMaximum = getAdvancedOptionInt(envVarBufferMaximum, defaultBufferMaximum)
263 | streamChannelSize = getAdvancedOptionInt(envVarStreamChannelSize, defaultStreamChannelSize)
264 | )
265 |
266 | logger := &splunkLogger{
267 | hec: &hecClient{
268 | client: client,
269 | transport: transport,
270 | url: splunkURL.String(),
271 | healthCheckURL: composeHealthCheckURL(splunkURL),
272 | auth: "Splunk " + splunkToken,
273 | gzipCompression: gzipCompression,
274 | gzipCompressionLevel: gzipCompressionLevel,
275 | postMessagesFrequency: postMessagesFrequency,
276 | postMessagesBatchSize: postMessagesBatchSize,
277 | bufferMaximum: bufferMaximum,
278 | },
279 | nullMessage: nullMessage,
280 | stream: make(chan *splunkMessage, streamChannelSize),
281 | }
282 |
283 | // By default we don't verify the connection, but the user can enable it
284 | verifyConnection := false
285 | if verifyConnectionStr, ok := info.Config[splunkVerifyConnectionKey]; ok {
286 | var err error
287 | verifyConnection, err = strconv.ParseBool(verifyConnectionStr)
288 | if err != nil {
289 | return nil, err
290 | }
291 | }
292 | if verifyConnection {
293 | err = logger.hec.verifySplunkConnection(logger)
294 | if err != nil {
295 | return nil, err
296 | }
297 | }
298 |
299 | var splunkFormat string
300 | if splunkFormatParsed, ok := info.Config[splunkFormatKey]; ok {
301 | switch splunkFormatParsed {
302 | case splunkFormatInline:
303 | case splunkFormatJSON:
304 | case splunkFormatRaw:
305 | default:
306 | return nil, fmt.Errorf("unknown format specified %s, supported formats are inline, json and raw", splunkFormatParsed)
307 | }
308 | splunkFormat = splunkFormatParsed
309 | } else {
310 | splunkFormat = splunkFormatInline
311 | }
312 |
313 | var loggerWrapper splunkLoggerInterface
314 |
315 | switch splunkFormat {
316 | case splunkFormatInline:
317 | nullEvent := &splunkMessageEvent{
318 | Tag: tag,
319 | Attrs: attrs,
320 | }
321 |
322 | loggerWrapper = &splunkLoggerInline{logger, nullEvent}
323 | case splunkFormatJSON:
324 | nullEvent := &splunkMessageEvent{
325 | Tag: tag,
326 | Attrs: attrs,
327 | }
328 |
329 | loggerWrapper = &splunkLoggerJSON{&splunkLoggerInline{logger, nullEvent}}
330 | case splunkFormatRaw:
331 | var prefix bytes.Buffer
332 | if tag != "" {
333 | prefix.WriteString(tag)
334 | prefix.WriteString(" ")
335 | }
336 | for key, value := range attrs {
337 | prefix.WriteString(key)
338 | prefix.WriteString("=")
339 | prefix.WriteString(value)
340 | prefix.WriteString(" ")
341 | }
342 |
343 | loggerWrapper = &splunkLoggerRaw{logger, prefix.Bytes()}
344 | default:
345 | return nil, fmt.Errorf("unexpected format %s", splunkFormat)
346 | }
347 |
348 | if getAdvancedOptionBool(envVarSplunkTelemetry, defaultSplunkTelemetry) {
349 | go telemetry(info, logger, sourceType, splunkFormat)
350 | }
351 |
352 | go loggerWrapper.worker()
353 |
354 | return loggerWrapper, nil
355 | }
356 |
357 | /*
358 | ValidateLogOpt validates the logging options passed to the plugin
359 | */
360 | func ValidateLogOpt(cfg map[string]string) error {
361 | for key := range cfg {
362 | switch key {
363 | case splunkURLKey:
364 | case splunkURLPathKey:
365 | case splunkTokenKey:
366 | case splunkSourceKey:
367 | case splunkSourceTypeKey:
368 | case splunkIndexKey:
369 | case splunkCAPathKey:
370 | case splunkCANameKey:
371 | case splunkInsecureSkipVerifyKey:
372 | case splunkFormatKey:
373 | case splunkVerifyConnectionKey:
374 | case splunkGzipCompressionKey:
375 | case splunkGzipCompressionLevelKey:
376 | case envKey:
377 | case envRegexKey:
378 | case labelsKey:
379 | case tagKey:
380 | default:
381 | return fmt.Errorf("unknown log opt '%s' for %s log driver", key, driverName)
382 | }
383 | }
384 | return nil
385 | }
386 |
387 | func parseURL(info logger.Info) (*url.URL, error) {
388 | splunkURLStr, ok := info.Config[splunkURLKey]
389 | if !ok {
390 | return nil, fmt.Errorf("%s: %s is expected", driverName, splunkURLKey)
391 | }
392 |
393 | splunkURL, err := url.Parse(splunkURLStr)
394 | if err != nil {
395 | return nil, fmt.Errorf("%s: failed to parse %s as url value in %s", driverName, splunkURLStr, splunkURLKey)
396 | }
397 |
398 | if !urlutil.IsURL(splunkURLStr) ||
399 | !splunkURL.IsAbs() ||
400 | (splunkURL.Path != "" && splunkURL.Path != "/") ||
401 | splunkURL.RawQuery != "" ||
402 | splunkURL.Fragment != "" {
403 | return nil, fmt.Errorf("%s: expected format scheme://dns_name_or_ip:port for %s", driverName, splunkURLKey)
404 | }
405 |
406 | splunkURLPathStr, ok := info.Config[splunkURLPathKey]
407 | if !ok {
408 | splunkURL.Path = "/services/collector/event/1.0"
409 | } else {
410 | if strings.HasPrefix(splunkURLPathStr, "/") {
411 | splunkURL.Path = splunkURLPathStr
412 | } else {
413 | return nil, fmt.Errorf("%s: expected format /path/to/collector for %s", driverName, splunkURLPathKey)
414 | }
415 | }
416 |
417 | return splunkURL, nil
418 | }
419 |
420 | /*
421 | composeHealthCheckURL builds the health check URL; parseURL() has already ensured the URL is in the format scheme://dns_name_or_ip:port
422 | */
423 | func composeHealthCheckURL(splunkURL *url.URL) string {
424 | return splunkURL.Scheme + "://" + splunkURL.Host + "/services/collector/health"
425 | }
426 |
427 | func getAdvancedOptionDuration(envName string, defaultValue time.Duration) time.Duration {
428 | valueStr := os.Getenv(envName)
429 | if valueStr == "" {
430 | return defaultValue
431 | }
432 | parsedValue, err := time.ParseDuration(valueStr)
433 | if err != nil {
434 | 		logrus.Errorf("Failed to parse value of %s as duration. Using default %v. %v", envName, defaultValue, err)
435 | return defaultValue
436 | }
437 | return parsedValue
438 | }
439 |
440 | func getAdvancedOptionInt(envName string, defaultValue int) int {
441 | valueStr := os.Getenv(envName)
442 | if valueStr == "" {
443 | return defaultValue
444 | }
445 | parsedValue, err := strconv.ParseInt(valueStr, 10, 32)
446 | if err != nil {
447 | 		logrus.Errorf("Failed to parse value of %s as integer. Using default %d. %v", envName, defaultValue, err)
448 | return defaultValue
449 | }
450 | return int(parsedValue)
451 | }
452 |
453 | func getAdvancedOptionBool(envName string, defaultValue bool) bool {
454 | valueStr := os.Getenv(envName)
455 | if valueStr == "" {
456 | return defaultValue
457 | }
458 | parsedValue, err := strconv.ParseBool(valueStr)
459 | if err != nil {
460 | 		logrus.Errorf("Failed to parse value of %s as boolean. Using default %v. %v", envName, defaultValue, err)
461 | return defaultValue
462 | }
463 | 	return parsedValue
464 | }
465 |
466 | // Log() takes a log message reference and puts it onto the stream channel,
467 | // which is consumed by the HEC worker
468 | func (l *splunkLoggerInline) Log(msg *logger.Message) error {
469 | message := l.createSplunkMessage(msg)
470 |
471 | event := *l.nullEvent
472 | event.Line = string(msg.Line)
473 | event.Source = msg.Source
474 |
475 | message.Event = &event
476 | logger.PutMessage(msg)
477 | return l.queueMessageAsync(message)
478 | }
479 |
480 | func (l *splunkLoggerJSON) Log(msg *logger.Message) error {
481 | message := l.createSplunkMessage(msg)
482 | event := *l.nullEvent
483 |
484 | var rawJSONMessage json.RawMessage
485 | if err := json.Unmarshal(msg.Line, &rawJSONMessage); err == nil {
486 | event.Line = &rawJSONMessage
487 | } else {
488 | event.Line = string(msg.Line)
489 | }
490 |
491 | event.Source = msg.Source
492 |
493 | message.Event = &event
494 | logger.PutMessage(msg)
495 | return l.queueMessageAsync(message)
496 | }
497 |
498 | func (l *splunkLoggerRaw) Log(msg *logger.Message) error {
499 | message := l.createSplunkMessage(msg)
500 |
501 | message.Event = string(append(l.prefix, msg.Line...))
502 | logger.PutMessage(msg)
503 | return l.queueMessageAsync(message)
504 | }
505 |
506 | func (l *splunkLogger) queueMessageAsync(message *splunkMessage) error {
507 | l.lock.RLock()
508 | defer l.lock.RUnlock()
509 | if l.closedCond != nil {
510 | return fmt.Errorf("%s: driver is closed", driverName)
511 | }
512 | l.stream <- message
513 | return nil
514 | }
515 |
516 | func telemetry(info logger.Info, l *splunkLogger, sourceType string, splunkFormat string) {
517 |
518 | 	// Send telemetry weekly
519 | waitTime := 7 * 24 * time.Hour
520 | timer := time.NewTicker(waitTime)
521 | messageArray := []*splunkMessage{}
522 | timestamp := strconv.FormatInt(time.Now().UTC().UnixNano()/int64(time.Second), 10)
523 |
524 | type telemetryEvent struct {
525 | Component string `json:"component"`
526 | Type string `json:"type"`
527 | Data struct {
528 | App string `json:"app"`
529 | JsonLogs bool `json:"jsonLogs"`
530 | PartialMsgBufferMaximum int `json:"partialMsgBufferMaximum"`
531 | PostMessagesBatchSize int `json:"postMessagesBatchSize"`
532 | PostMessagesFrequency int `json:"postMessagesFrequency"`
533 | SplunkFormat string `json:"splunkFormat"`
534 | StreamChannelSize int `json:"streamChannelSize"`
535 | Sourcetype string `json:"sourcetype"`
536 | } `json:"data"`
537 | OptInRequired int64 `json:"optInRequired"`
538 | }
539 |
540 | telem := &telemetryEvent{}
541 | telem.Component = "app.connect.docker"
542 | telem.Type = "event"
543 | telem.OptInRequired = 2
544 | telem.Data.App = "splunk_connect_docker"
545 | telem.Data.Sourcetype = sourceType
546 | telem.Data.SplunkFormat = splunkFormat
547 | telem.Data.PostMessagesFrequency = int(getAdvancedOptionDuration(envVarPostMessagesFrequency, defaultPostMessagesFrequency))
548 | telem.Data.PostMessagesBatchSize = getAdvancedOptionInt(envVarPostMessagesBatchSize, defaultPostMessagesBatchSize)
549 | telem.Data.StreamChannelSize = getAdvancedOptionInt(envVarStreamChannelSize, defaultStreamChannelSize)
550 | telem.Data.PartialMsgBufferMaximum = getAdvancedOptionInt(envVarBufferMaximum, defaultBufferMaximum)
551 | telem.Data.JsonLogs = getAdvancedOptionBool(envVarJSONLogs, defaultJSONLogs)
552 |
553 | var telemMessage = &splunkMessage{
554 | Host: "telemetry",
555 | Source: "telemetry",
556 | SourceType: "splunk_connect_telemetry",
557 | Index: "_introspection",
558 | Time: timestamp,
559 | Event: telem,
560 | }
561 |
562 | messageArray = append(messageArray, telemMessage)
563 |
564 | telemClient := hecClient{
565 | transport: l.hec.transport,
566 | client: l.hec.client,
567 | auth: l.hec.auth,
568 | url: l.hec.url,
569 | }
570 |
571 | time.Sleep(5 * time.Second)
572 | if err := telemClient.tryPostMessages(messageArray); err != nil {
573 | logrus.Error(err)
574 | }
575 |
576 | for {
577 | select {
578 | case <-timer.C:
579 | if err := telemClient.tryPostMessages(messageArray); err != nil {
580 | logrus.Error(err)
581 | }
582 | }
583 | }
584 |
585 | }
586 |
587 | /*
588 | worker is the main loop that processes the log stream.
589 | It POSTs buffered messages to HEC when
590 | - the number of messages reaches the batch size, or
591 | - the flush timer fires
592 | */
593 | func (l *splunkLogger) worker() {
594 | var messages []*splunkMessage
595 | timer := time.NewTicker(l.hec.postMessagesFrequency)
596 | for {
597 | select {
598 | case message, open := <-l.stream:
599 | // if the stream channel is closed, post the remaining messages in the buffer
600 | if !open {
601 | logrus.Debugf("stream is closed with %d events", len(messages))
602 | l.hec.postMessages(messages, true)
603 | l.lock.Lock()
604 | defer l.lock.Unlock()
605 | l.hec.transport.CloseIdleConnections()
606 | l.closed = true
607 | l.closedCond.Signal()
608 | return
609 | }
610 | messages = append(messages, message)
611 | 			// Only send when the buffer reaches exactly the batch size.
612 | 			// This also avoids firing postMessages on every new message
613 | 			// when a previous try has failed.
614 | if len(messages)%l.hec.postMessagesBatchSize == 0 {
615 | messages = l.hec.postMessages(messages, false)
616 | }
617 | case <-timer.C:
618 | logrus.Debugf("messages buffer timeout, sending %d events", len(messages))
619 | messages = l.hec.postMessages(messages, false)
620 | }
621 | }
622 | }
623 |
624 | func (l *splunkLogger) Close() error {
625 | l.lock.Lock()
626 | defer l.lock.Unlock()
627 | if l.closedCond == nil {
628 | l.closedCond = sync.NewCond(&l.lock)
629 | close(l.stream)
630 | for !l.closed {
631 | l.closedCond.Wait()
632 | }
633 | }
634 | return nil
635 | }
636 |
637 | func (l *splunkLogger) Name() string {
638 | return driverName
639 | }
640 |
641 | func (l *splunkLogger) createSplunkMessage(msg *logger.Message) *splunkMessage {
642 | message := *l.nullMessage
643 | message.Time = fmt.Sprintf("%f", float64(msg.Timestamp.UnixNano())/float64(time.Second))
644 | return &message
645 | }
646 |
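The URL handling above can be sketched in isolation: parseURL accepts only scheme://host[:port] (rejecting any path, query, or fragment), appends the default HEC event endpoint when no splunk-url-path is given, and composeHealthCheckURL rebuilds the same scheme and host with the health path. A minimal standalone sketch of that behavior (the `hecEndpoints` helper below is illustrative only and is not a function of this driver):

```go
package main

import (
	"fmt"
	"net/url"
)

// hecEndpoints mimics the URL handling above: given a base URL of the form
// scheme://host[:port], it returns the default event-collector URL and the
// health-check URL. (Illustrative helper; not part of the driver.)
func hecEndpoints(base string) (eventURL, healthURL string, err error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", "", err
	}
	// With no splunk-url-path configured, the default event path is used.
	u.Path = "/services/collector/event/1.0"
	// The health check keeps only scheme and host and swaps in its own path.
	return u.String(), u.Scheme + "://" + u.Host + "/services/collector/health", nil
}

func main() {
	event, health, err := hecEndpoints("https://127.0.0.1:8088")
	if err != nil {
		panic(err)
	}
	fmt.Println(event)  // https://127.0.0.1:8088/services/collector/event/1.0
	fmt.Println(health) // https://127.0.0.1:8088/services/collector/health
}
```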
--------------------------------------------------------------------------------
/splunk_test.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "compress/gzip"
21 | "fmt"
22 | "os"
23 | "strings"
24 | "testing"
25 | "time"
26 |
27 | "github.com/docker/docker/daemon/logger"
28 | )
29 |
30 | // Validate options
31 | func TestValidateLogOpt(t *testing.T) {
32 | err := ValidateLogOpt(map[string]string{
33 | splunkURLKey: "http://127.0.0.1",
34 | splunkTokenKey: "2160C7EF-2CE9-4307-A180-F852B99CF417",
35 | splunkSourceKey: "mysource",
36 | splunkSourceTypeKey: "mysourcetype",
37 | splunkIndexKey: "myindex",
38 | splunkCAPathKey: "/usr/cert.pem",
39 | splunkCANameKey: "ca_name",
40 | splunkInsecureSkipVerifyKey: "true",
41 | splunkFormatKey: "json",
42 | splunkVerifyConnectionKey: "true",
43 | splunkGzipCompressionKey: "true",
44 | splunkGzipCompressionLevelKey: "1",
45 | envKey: "a",
46 | envRegexKey: "^foo",
47 | labelsKey: "b",
48 | tagKey: "c",
49 | })
50 | if err != nil {
51 | t.Fatal(err)
52 | }
53 |
54 | err = ValidateLogOpt(map[string]string{
55 | "not-supported-option": "a",
56 | })
57 | if err == nil {
58 | t.Fatal("Expecting error on unsupported options")
59 | }
60 | }
61 |
62 | // The driver requires the user to specify the required options
63 | func TestNewMissedConfig(t *testing.T) {
64 | info := logger.Info{
65 | Config: map[string]string{},
66 | }
67 | _, err := New(info)
68 | if err == nil {
69 | t.Fatal("Logger driver should fail when no required parameters specified")
70 | }
71 | }
72 |
73 | // The driver requires the user to specify splunk-url
74 | func TestNewMissedUrl(t *testing.T) {
75 | info := logger.Info{
76 | Config: map[string]string{
77 | splunkTokenKey: "4642492F-D8BD-47F1-A005-0C08AE4657DF",
78 | },
79 | }
80 | _, err := New(info)
81 | if err.Error() != "splunk: splunk-url is expected" {
82 | t.Fatal("Logger driver should fail when no required parameters specified")
83 | }
84 | }
85 |
86 | // splunk-url needs to be in the format scheme://dns_name_or_ip<:port>
87 | func TestUrlFormat(t *testing.T) {
88 | info := logger.Info{
89 | Config: map[string]string{
90 | splunkURLKey: "127.0.0.1",
91 | },
92 | }
93 | _, err := parseURL(info)
94 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
95 | 		t.Fatal("Logger driver should fail when no scheme is specified")
96 | }
97 |
98 | info = logger.Info{
99 | Config: map[string]string{
100 | splunkURLKey: "www.google.com",
101 | },
102 | }
103 | _, err = parseURL(info)
104 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
105 | 		t.Fatal("Logger driver should fail when scheme is not specified")
106 | }
107 |
108 | info = logger.Info{
109 | Config: map[string]string{
110 | splunkURLKey: "ftp://127.0.0.1",
111 | },
112 | }
113 | _, err = parseURL(info)
114 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
115 | 		t.Fatal("Logger driver should fail when scheme is not http or https")
116 | }
117 |
118 | info = logger.Info{
119 | Config: map[string]string{
120 | splunkURLKey: "http://127.0.0.1:8088/test",
121 | },
122 | }
123 | _, err = parseURL(info)
124 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
125 | t.Fatal("Logger driver should fail when path is specified")
126 | }
127 |
128 | info = logger.Info{
129 | Config: map[string]string{
130 | splunkURLKey: "testURL",
131 | },
132 | }
133 | _, err = parseURL(info)
134 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
135 | 		t.Fatal("Logger driver should fail when no scheme is specified")
136 | }
137 |
138 | info = logger.Info{
139 | Config: map[string]string{
140 | splunkURLKey: "http://www.host.com/?q=hello",
141 | },
142 | }
143 | _, err = parseURL(info)
144 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
145 | t.Fatal("Logger driver should fail when query parameter is specified")
146 | }
147 |
148 | info = logger.Info{
149 | Config: map[string]string{
150 | splunkURLKey: "http://www.host.com#hello",
151 | },
152 | }
153 | _, err = parseURL(info)
154 | if err.Error() != "splunk: expected format scheme://dns_name_or_ip:port for splunk-url" {
155 | t.Fatal("Logger driver should fail when fragment is specified")
156 | }
157 |
158 | info = logger.Info{
159 | Config: map[string]string{
160 | splunkURLKey: "127.0.1:8000",
161 | },
162 | }
163 | _, err = parseURL(info)
164 | if !strings.HasPrefix(err.Error(), "splunk: failed to parse") {
165 | 		t.Fatal("Logger driver should fail when URL cannot be parsed")
166 | }
167 |
168 | info = logger.Info{
169 | Config: map[string]string{
170 | splunkURLKey: "https://127.0.1:8000",
171 | },
172 | }
173 |
174 | 	url, err := parseURL(info)
175 | 	if err != nil { t.Fatal(err) }
176 | if url.String() != "https://127.0.1:8000/services/collector/event/1.0" {
177 | t.Fatalf("%s is not the right format of HEC endpoint.", url.String())
178 | }
179 |
180 | info = logger.Info{
181 | Config: map[string]string{
182 | splunkURLKey: "https://127.0.1:8000/",
183 | },
184 | }
185 |
186 | 	url, err = parseURL(info)
187 | 	if err != nil { t.Fatal(err) }
188 | if url.String() != "https://127.0.1:8000/services/collector/event/1.0" {
189 | t.Fatalf("%s is not the right format of HEC endpoint.", url.String())
190 | }
191 | }
192 |
193 | // The driver requires the user to specify splunk-token
194 | func TestNewMissedToken(t *testing.T) {
195 | info := logger.Info{
196 | Config: map[string]string{
197 | splunkURLKey: "http://127.0.0.1:8088",
198 | },
199 | }
200 | _, err := New(info)
201 | if err.Error() != "splunk: splunk-token is expected" {
202 | t.Fatal("Logger driver should fail when no required parameters specified")
203 | }
204 | }
205 |
206 | // Test default settings
207 | func TestDefault(t *testing.T) {
208 | hec := NewHTTPEventCollectorMock(t)
209 |
210 | go hec.Serve()
211 |
212 | info := logger.Info{
213 | Config: map[string]string{
214 | splunkURLKey: hec.URL(),
215 | splunkTokenKey: hec.token,
216 | },
217 | ContainerID: "containeriid",
218 | ContainerName: "container_name",
219 | ContainerImageID: "contaimageid",
220 | ContainerImageName: "container_image_name",
221 | }
222 |
223 | hostname, err := info.Hostname()
224 | if err != nil {
225 | t.Fatal(err)
226 | }
227 |
228 | loggerDriver, err := New(info)
229 | if err != nil {
230 | t.Fatal(err)
231 | }
232 |
233 | if loggerDriver.Name() != driverName {
234 | t.Fatal("Unexpected logger driver name")
235 | }
236 |
237 | if hec.connectionVerified {
238 | t.Fatal("By default connection should not be verified")
239 | }
240 |
241 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerInline)
242 | if !ok {
243 | t.Fatal("Unexpected Splunk Logging Driver type")
244 | }
245 |
246 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
247 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
248 | splunkLoggerDriver.nullMessage.Host != hostname ||
249 | splunkLoggerDriver.nullMessage.Source != "" ||
250 | splunkLoggerDriver.nullMessage.SourceType != "splunk_connect_docker" ||
251 | splunkLoggerDriver.nullMessage.Index != "" ||
252 | splunkLoggerDriver.hec.gzipCompression != false ||
253 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
254 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
255 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
256 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize {
257 | 		t.Fatal("Found non-default values set up in Splunk Logging Driver.")
258 | }
259 |
260 | message1Time := time.Now()
261 | if err := loggerDriver.Log(&logger.Message{Line: []byte("{\"a\":\"b\"}"), Source: "stdout", Timestamp: message1Time}); err != nil {
262 | t.Fatal(err)
263 | }
264 | message2Time := time.Now()
265 | if err := loggerDriver.Log(&logger.Message{Line: []byte("notajson"), Source: "stdout", Timestamp: message2Time}); err != nil {
266 | t.Fatal(err)
267 | }
268 |
269 | err = loggerDriver.Close()
270 | if err != nil {
271 | t.Fatal(err)
272 | }
273 |
274 | if len(hec.messages) != 2 {
275 | 		t.Fatalf("Expected two messages, got %v", len(hec.messages))
277 | }
278 |
279 | if *hec.gzipEnabled {
280 | t.Fatal("Gzip should not be used")
281 | }
282 |
283 | message1 := hec.messages[0]
284 | if message1.Time != fmt.Sprintf("%f", float64(message1Time.UnixNano())/float64(time.Second)) ||
285 | message1.Host != hostname ||
286 | message1.Source != "" ||
287 | message1.SourceType != "splunk_connect_docker" ||
288 | message1.Index != "" {
289 | t.Fatalf("Unexpected values of message 1 %v", message1)
290 | }
291 |
292 | if event, err := message1.EventAsMap(); err != nil {
293 | t.Fatal(err)
294 | } else {
295 | if event["line"] != "{\"a\":\"b\"}" ||
296 | event["source"] != "stdout" ||
297 | event["tag"] != "containeriid" ||
298 | len(event) != 3 {
299 | t.Fatalf("Unexpected event in message %v", event)
300 | }
301 | }
302 |
303 | message2 := hec.messages[1]
304 | if message2.Time != fmt.Sprintf("%f", float64(message2Time.UnixNano())/float64(time.Second)) ||
305 | message2.Host != hostname ||
306 | message2.Source != "" ||
307 | message2.SourceType != "splunk_connect_docker" ||
308 | message2.Index != "" {
309 | 		t.Fatalf("Unexpected values of message 2 %v", message2)
310 | }
311 |
312 | if event, err := message2.EventAsMap(); err != nil {
313 | t.Fatal(err)
314 | } else {
315 | if event["line"] != "notajson" ||
316 | event["source"] != "stdout" ||
317 | event["tag"] != "containeriid" ||
318 | len(event) != 3 {
319 | t.Fatalf("Unexpected event in message %v", event)
320 | }
321 | }
322 |
323 | err = hec.Close()
324 | if err != nil {
325 | t.Fatal(err)
326 | }
327 | }
328 |
329 | // Verify inline format with non-default settings for most options
330 | func TestInlineFormatWithNonDefaultOptions(t *testing.T) {
331 | hec := NewHTTPEventCollectorMock(t)
332 |
333 | go hec.Serve()
334 |
335 | info := logger.Info{
336 | Config: map[string]string{
337 | splunkURLKey: hec.URL(),
338 | splunkTokenKey: hec.token,
339 | splunkSourceKey: "mysource",
340 | splunkSourceTypeKey: "mysourcetype",
341 | splunkIndexKey: "myindex",
342 | splunkFormatKey: splunkFormatInline,
343 | splunkGzipCompressionKey: "true",
344 | tagKey: "{{.ImageName}}/{{.Name}}",
345 | labelsKey: "a",
346 | envRegexKey: "^foo",
347 | },
348 | ContainerID: "containeriid",
349 | ContainerName: "/container_name",
350 | ContainerImageID: "contaimageid",
351 | ContainerImageName: "container_image_name",
352 | ContainerLabels: map[string]string{
353 | "a": "b",
354 | },
355 | ContainerEnv: []string{"foo_finder=bar"},
356 | }
357 |
358 | hostname, err := info.Hostname()
359 | if err != nil {
360 | t.Fatal(err)
361 | }
362 |
363 | loggerDriver, err := New(info)
364 | if err != nil {
365 | t.Fatal(err)
366 | }
367 |
368 | if hec.connectionVerified {
369 | t.Fatal("By default connection should not be verified")
370 | }
371 |
372 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerInline)
373 | if !ok {
374 | t.Fatal("Unexpected Splunk Logging Driver type")
375 | }
376 |
377 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
378 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
379 | splunkLoggerDriver.nullMessage.Host != hostname ||
380 | splunkLoggerDriver.nullMessage.Source != "mysource" ||
381 | splunkLoggerDriver.nullMessage.SourceType != "mysourcetype" ||
382 | splunkLoggerDriver.nullMessage.Index != "myindex" ||
383 | splunkLoggerDriver.hec.gzipCompression != true ||
384 | splunkLoggerDriver.hec.gzipCompressionLevel != gzip.DefaultCompression ||
385 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
386 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
387 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
388 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize {
389 | t.Fatal("Values do not match configuration.")
390 | }
391 |
392 | messageTime := time.Now()
393 | if err := loggerDriver.Log(&logger.Message{Line: []byte("1"), Source: "stdout", Timestamp: messageTime}); err != nil {
394 | t.Fatal(err)
395 | }
396 |
397 | err = loggerDriver.Close()
398 | if err != nil {
399 | t.Fatal(err)
400 | }
401 |
402 | if len(hec.messages) != 1 {
403 | t.Fatal("Expected one message")
404 | }
405 |
406 | if !*hec.gzipEnabled {
407 | t.Fatal("Gzip should be used")
408 | }
409 |
410 | message := hec.messages[0]
411 | if message.Time != fmt.Sprintf("%f", float64(messageTime.UnixNano())/float64(time.Second)) ||
412 | message.Host != hostname ||
413 | message.Source != "mysource" ||
414 | message.SourceType != "mysourcetype" ||
415 | message.Index != "myindex" {
416 | t.Fatalf("Unexpected values of message %v", message)
417 | }
418 |
419 | if event, err := message.EventAsMap(); err != nil {
420 | t.Fatal(err)
421 | } else {
422 | if event["line"] != "1" ||
423 | event["source"] != "stdout" ||
424 | event["tag"] != "container_image_name/container_name" ||
425 | event["attrs"].(map[string]interface{})["a"] != "b" ||
426 | event["attrs"].(map[string]interface{})["foo_finder"] != "bar" ||
427 | len(event) != 4 {
428 | t.Fatalf("Unexpected event in message %v", event)
429 | }
430 | }
431 |
432 | err = hec.Close()
433 | if err != nil {
434 | t.Fatal(err)
435 | }
436 | }
437 |
438 | // Verify JSON format
439 | func TestJsonFormat(t *testing.T) {
440 | hec := NewHTTPEventCollectorMock(t)
441 |
442 | go hec.Serve()
443 |
444 | info := logger.Info{
445 | Config: map[string]string{
446 | splunkURLKey: hec.URL(),
447 | splunkTokenKey: hec.token,
448 | splunkFormatKey: splunkFormatJSON,
449 | splunkGzipCompressionKey: "true",
450 | splunkGzipCompressionLevelKey: "1",
451 | },
452 | ContainerID: "containeriid",
453 | ContainerName: "/container_name",
454 | ContainerImageID: "contaimageid",
455 | ContainerImageName: "container_image_name",
456 | }
457 |
458 | hostname, err := info.Hostname()
459 | if err != nil {
460 | t.Fatal(err)
461 | }
462 |
463 | loggerDriver, err := New(info)
464 | if err != nil {
465 | t.Fatal(err)
466 | }
467 |
468 | if hec.connectionVerified {
469 | t.Fatal("By default connection should not be verified")
470 | }
471 |
472 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerJSON)
473 | if !ok {
474 | t.Fatal("Unexpected Splunk Logging Driver type")
475 | }
476 |
477 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
478 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
479 | splunkLoggerDriver.nullMessage.Host != hostname ||
480 | splunkLoggerDriver.nullMessage.Source != "" ||
481 | splunkLoggerDriver.nullMessage.SourceType != "splunk_connect_docker" ||
482 | splunkLoggerDriver.nullMessage.Index != "" ||
483 | splunkLoggerDriver.hec.gzipCompression != true ||
484 | splunkLoggerDriver.hec.gzipCompressionLevel != gzip.BestSpeed ||
485 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
486 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
487 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
488 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize {
489 | t.Fatal("Values do not match configuration.")
490 | }
491 |
492 | message1Time := time.Now()
493 | if err := loggerDriver.Log(&logger.Message{Line: []byte("{\"a\":\"b\"}"), Source: "stdout", Timestamp: message1Time}); err != nil {
494 | t.Fatal(err)
495 | }
496 | message2Time := time.Now()
497 | if err := loggerDriver.Log(&logger.Message{Line: []byte("notjson"), Source: "stdout", Timestamp: message2Time}); err != nil {
498 | t.Fatal(err)
499 | }
500 |
501 | err = loggerDriver.Close()
502 | if err != nil {
503 | t.Fatal(err)
504 | }
505 |
506 | if len(hec.messages) != 2 {
507 | t.Fatal("Expected two messages")
508 | }
509 |
510 | message1 := hec.messages[0]
511 | if message1.Time != fmt.Sprintf("%f", float64(message1Time.UnixNano())/float64(time.Second)) ||
512 | message1.Host != hostname ||
513 | message1.Source != "" ||
514 | message1.SourceType != "splunk_connect_docker" ||
515 | message1.Index != "" {
516 | t.Fatalf("Unexpected values of message 1 %v", message1)
517 | }
518 |
519 | if event, err := message1.EventAsMap(); err != nil {
520 | t.Fatal(err)
521 | } else {
522 | if event["line"].(map[string]interface{})["a"] != "b" ||
523 | event["source"] != "stdout" ||
524 | event["tag"] != "containeriid" ||
525 | len(event) != 3 {
526 | t.Fatalf("Unexpected event in message 1 %v", event)
527 | }
528 | }
529 |
530 | message2 := hec.messages[1]
531 | if message2.Time != fmt.Sprintf("%f", float64(message2Time.UnixNano())/float64(time.Second)) ||
532 | message2.Host != hostname ||
533 | message2.Source != "" ||
534 | message2.SourceType != "splunk_connect_docker" ||
535 | message2.Index != "" {
536 | t.Fatalf("Unexpected values of message 2 %v", message2)
537 | }
538 |
539 | 	// If the message cannot be parsed as JSON, it should be sent as a line
540 | if event, err := message2.EventAsMap(); err != nil {
541 | t.Fatal(err)
542 | } else {
543 | if event["line"] != "notjson" ||
544 | event["source"] != "stdout" ||
545 | event["tag"] != "containeriid" ||
546 | len(event) != 3 {
547 | t.Fatalf("Unexpected event in message 2 %v", event)
548 | }
549 | }
550 |
551 | err = hec.Close()
552 | if err != nil {
553 | t.Fatal(err)
554 | }
555 | }
556 |
557 | // Verify raw format
558 | func TestRawFormat(t *testing.T) {
559 | hec := NewHTTPEventCollectorMock(t)
560 |
561 | go hec.Serve()
562 |
563 | info := logger.Info{
564 | Config: map[string]string{
565 | splunkURLKey: hec.URL(),
566 | splunkTokenKey: hec.token,
567 | splunkFormatKey: splunkFormatRaw,
568 | },
569 | ContainerID: "containeriid",
570 | ContainerName: "/container_name",
571 | ContainerImageID: "contaimageid",
572 | ContainerImageName: "container_image_name",
573 | }
574 |
575 | hostname, err := info.Hostname()
576 | if err != nil {
577 | t.Fatal(err)
578 | }
579 |
580 | loggerDriver, err := New(info)
581 | if err != nil {
582 | t.Fatal(err)
583 | }
584 |
585 | if hec.connectionVerified {
586 | t.Fatal("By default connection should not be verified")
587 | }
588 |
589 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerRaw)
590 | if !ok {
591 | t.Fatal("Unexpected Splunk Logging Driver type")
592 | }
593 |
594 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
595 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
596 | splunkLoggerDriver.nullMessage.Host != hostname ||
597 | splunkLoggerDriver.nullMessage.Source != "" ||
598 | splunkLoggerDriver.nullMessage.SourceType != "splunk_connect_docker" ||
599 | splunkLoggerDriver.nullMessage.Index != "" ||
600 | splunkLoggerDriver.hec.gzipCompression != false ||
601 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
602 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
603 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
604 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize ||
605 | string(splunkLoggerDriver.prefix) != "containeriid " {
606 | t.Fatal("Values do not match configuration.")
607 | }
608 |
609 | message1Time := time.Now()
610 | if err := loggerDriver.Log(&logger.Message{Line: []byte("{\"a\":\"b\"}"), Source: "stdout", Timestamp: message1Time}); err != nil {
611 | t.Fatal(err)
612 | }
613 | message2Time := time.Now()
614 | if err := loggerDriver.Log(&logger.Message{Line: []byte("notjson"), Source: "stdout", Timestamp: message2Time}); err != nil {
615 | t.Fatal(err)
616 | }
617 |
618 | err = loggerDriver.Close()
619 | if err != nil {
620 | t.Fatal(err)
621 | }
622 |
623 | if len(hec.messages) != 2 {
624 | t.Fatal("Expected two messages")
625 | }
626 |
627 | message1 := hec.messages[0]
628 | if message1.Time != fmt.Sprintf("%f", float64(message1Time.UnixNano())/float64(time.Second)) ||
629 | message1.Host != hostname ||
630 | message1.Source != "" ||
631 | message1.SourceType != "splunk_connect_docker" ||
632 | message1.Index != "" {
633 | t.Fatalf("Unexpected values of message 1 %v", message1)
634 | }
635 |
636 | if event, err := message1.EventAsString(); err != nil {
637 | t.Fatal(err)
638 | } else {
639 | if event != "containeriid {\"a\":\"b\"}" {
640 | t.Fatalf("Unexpected event in message 1 %v", event)
641 | }
642 | }
643 |
644 | message2 := hec.messages[1]
645 | if message2.Time != fmt.Sprintf("%f", float64(message2Time.UnixNano())/float64(time.Second)) ||
646 | message2.Host != hostname ||
647 | message2.Source != "" ||
648 | message2.SourceType != "splunk_connect_docker" ||
649 | message2.Index != "" {
650 | t.Fatalf("Unexpected values of message 2 %v", message2)
651 | }
652 |
653 | if event, err := message2.EventAsString(); err != nil {
654 | t.Fatal(err)
655 | } else {
656 | if event != "containeriid notjson" {
657 | 			t.Fatalf("Unexpected event in message 2 %v", event)
658 | }
659 | }
660 |
661 | err = hec.Close()
662 | if err != nil {
663 | t.Fatal(err)
664 | }
665 | }
666 |
667 | // Verify raw format with labels
668 | func TestRawFormatWithLabels(t *testing.T) {
669 | hec := NewHTTPEventCollectorMock(t)
670 |
671 | go hec.Serve()
672 |
673 | info := logger.Info{
674 | Config: map[string]string{
675 | splunkURLKey: hec.URL(),
676 | splunkTokenKey: hec.token,
677 | splunkFormatKey: splunkFormatRaw,
678 | labelsKey: "a",
679 | },
680 | ContainerID: "containeriid",
681 | ContainerName: "/container_name",
682 | ContainerImageID: "contaimageid",
683 | ContainerImageName: "container_image_name",
684 | ContainerLabels: map[string]string{
685 | "a": "b",
686 | },
687 | }
688 |
689 | hostname, err := info.Hostname()
690 | if err != nil {
691 | t.Fatal(err)
692 | }
693 |
694 | loggerDriver, err := New(info)
695 | if err != nil {
696 | t.Fatal(err)
697 | }
698 |
699 | if hec.connectionVerified {
700 | t.Fatal("By default connection should not be verified")
701 | }
702 |
703 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerRaw)
704 | if !ok {
705 | t.Fatal("Unexpected Splunk Logging Driver type")
706 | }
707 |
708 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
709 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
710 | splunkLoggerDriver.nullMessage.Host != hostname ||
711 | splunkLoggerDriver.nullMessage.Source != "" ||
712 | splunkLoggerDriver.nullMessage.SourceType != "splunk_connect_docker" ||
713 | splunkLoggerDriver.nullMessage.Index != "" ||
714 | splunkLoggerDriver.hec.gzipCompression != false ||
715 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
716 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
717 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
718 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize ||
719 | string(splunkLoggerDriver.prefix) != "containeriid a=b " {
720 | t.Fatal("Values do not match configuration.")
721 | }
722 |
723 | message1Time := time.Now()
724 | if err := loggerDriver.Log(&logger.Message{Line: []byte("{\"a\":\"b\"}"), Source: "stdout", Timestamp: message1Time}); err != nil {
725 | t.Fatal(err)
726 | }
727 | message2Time := time.Now()
728 | if err := loggerDriver.Log(&logger.Message{Line: []byte("notjson"), Source: "stdout", Timestamp: message2Time}); err != nil {
729 | t.Fatal(err)
730 | }
731 |
732 | err = loggerDriver.Close()
733 | if err != nil {
734 | t.Fatal(err)
735 | }
736 |
737 | if len(hec.messages) != 2 {
738 | t.Fatal("Expected two messages")
739 | }
740 |
741 | message1 := hec.messages[0]
742 | if message1.Time != fmt.Sprintf("%f", float64(message1Time.UnixNano())/float64(time.Second)) ||
743 | message1.Host != hostname ||
744 | message1.Source != "" ||
745 | message1.SourceType != "splunk_connect_docker" ||
746 | message1.Index != "" {
747 | t.Fatalf("Unexpected values of message 1 %v", message1)
748 | }
749 |
750 | if event, err := message1.EventAsString(); err != nil {
751 | t.Fatal(err)
752 | } else {
753 | if event != "containeriid a=b {\"a\":\"b\"}" {
754 | t.Fatalf("Unexpected event in message 1 %v", event)
755 | }
756 | }
757 |
758 | message2 := hec.messages[1]
759 | if message2.Time != fmt.Sprintf("%f", float64(message2Time.UnixNano())/float64(time.Second)) ||
760 | message2.Host != hostname ||
761 | message2.Source != "" ||
762 | message2.SourceType != "splunk_connect_docker" ||
763 | message2.Index != "" {
764 | t.Fatalf("Unexpected values of message 2 %v", message2)
765 | }
766 |
767 | if event, err := message2.EventAsString(); err != nil {
768 | t.Fatal(err)
769 | } else {
770 | if event != "containeriid a=b notjson" {
771 | t.Fatalf("Unexpected event in message 2 %v", event)
772 | }
773 | }
774 |
775 | err = hec.Close()
776 | if err != nil {
777 | t.Fatal(err)
778 | }
779 | }
780 |
781 | // Verify that the Splunk Logging Driver accepts tag="", which allows sending raw messages
782 | // exactly as they appear on stdout/stderr
783 | func TestRawFormatWithoutTag(t *testing.T) {
784 | hec := NewHTTPEventCollectorMock(t)
785 |
786 | go hec.Serve()
787 |
788 | info := logger.Info{
789 | Config: map[string]string{
790 | splunkURLKey: hec.URL(),
791 | splunkTokenKey: hec.token,
792 | splunkFormatKey: splunkFormatRaw,
793 | tagKey: "",
794 | },
795 | ContainerID: "containeriid",
796 | ContainerName: "/container_name",
797 | ContainerImageID: "contaimageid",
798 | ContainerImageName: "container_image_name",
799 | }
800 |
801 | hostname, err := info.Hostname()
802 | if err != nil {
803 | t.Fatal(err)
804 | }
805 |
806 | loggerDriver, err := New(info)
807 | if err != nil {
808 | t.Fatal(err)
809 | }
810 |
811 | if hec.connectionVerified {
812 | t.Fatal("By default connection should not be verified")
813 | }
814 |
815 | splunkLoggerDriver, ok := loggerDriver.(*splunkLoggerRaw)
816 | if !ok {
817 | t.Fatal("Unexpected Splunk Logging Driver type")
818 | }
819 |
820 | if splunkLoggerDriver.hec.url != hec.URL()+"/services/collector/event/1.0" ||
821 | splunkLoggerDriver.hec.auth != "Splunk "+hec.token ||
822 | splunkLoggerDriver.nullMessage.Host != hostname ||
823 | splunkLoggerDriver.nullMessage.Source != "" ||
824 | splunkLoggerDriver.nullMessage.SourceType != "splunk_connect_docker" ||
825 | splunkLoggerDriver.nullMessage.Index != "" ||
826 | splunkLoggerDriver.hec.gzipCompression != false ||
827 | splunkLoggerDriver.hec.postMessagesFrequency != defaultPostMessagesFrequency ||
828 | splunkLoggerDriver.hec.postMessagesBatchSize != defaultPostMessagesBatchSize ||
829 | splunkLoggerDriver.hec.bufferMaximum != defaultBufferMaximum ||
830 | cap(splunkLoggerDriver.stream) != defaultStreamChannelSize ||
831 | string(splunkLoggerDriver.prefix) != "" {
832 | t.Log(string(splunkLoggerDriver.prefix) + "a")
833 | t.Fatal("Values do not match configuration.")
834 | }
835 |
836 | message1Time := time.Now()
837 | if err := loggerDriver.Log(&logger.Message{Line: []byte("{\"a\":\"b\"}"), Source: "stdout", Timestamp: message1Time}); err != nil {
838 | t.Fatal(err)
839 | }
840 | message2Time := time.Now()
841 | if err := loggerDriver.Log(&logger.Message{Line: []byte("notjson"), Source: "stdout", Timestamp: message2Time}); err != nil {
842 | t.Fatal(err)
843 | }
844 |
845 | err = loggerDriver.Close()
846 | if err != nil {
847 | t.Fatal(err)
848 | }
849 |
850 | if len(hec.messages) != 2 {
851 | t.Fatal("Expected two messages")
852 | }
853 |
854 | message1 := hec.messages[0]
855 | if message1.Time != fmt.Sprintf("%f", float64(message1Time.UnixNano())/float64(time.Second)) ||
856 | message1.Host != hostname ||
857 | message1.Source != "" ||
858 | message1.SourceType != "splunk_connect_docker" ||
859 | message1.Index != "" {
860 | t.Fatalf("Unexpected values of message 1 %v", message1)
861 | }
862 |
863 | if event, err := message1.EventAsString(); err != nil {
864 | t.Fatal(err)
865 | } else {
866 | if event != "{\"a\":\"b\"}" {
867 | t.Fatalf("Unexpected event in message 1 %v", event)
868 | }
869 | }
870 |
871 | message2 := hec.messages[1]
872 | if message2.Time != fmt.Sprintf("%f", float64(message2Time.UnixNano())/float64(time.Second)) ||
873 | message2.Host != hostname ||
874 | message2.Source != "" ||
875 | message2.SourceType != "splunk_connect_docker" ||
876 | message2.Index != "" {
877 | t.Fatalf("Unexpected values of message 2 %v", message2)
878 | }
879 |
880 | if event, err := message2.EventAsString(); err != nil {
881 | t.Fatal(err)
882 | } else {
883 | if event != "notjson" {
884 | t.Fatalf("Unexpected event in message 2 %v", event)
885 | }
886 | }
887 |
888 | err = hec.Close()
889 | if err != nil {
890 | t.Fatal(err)
891 | }
892 | }
893 |
894 | // Verify that messages are sent in batches with the default batching parameters;
895 | // the frequency is raised so that numOfRequests matches the expected 16 requests
896 | func TestBatching(t *testing.T) {
897 | if err := os.Setenv(envVarPostMessagesFrequency, "10h"); err != nil {
898 | t.Fatal(err)
899 | }
900 |
901 | hec := NewHTTPEventCollectorMock(t)
902 |
903 | go hec.Serve()
904 |
905 | info := logger.Info{
906 | Config: map[string]string{
907 | splunkURLKey: hec.URL(),
908 | splunkTokenKey: hec.token,
909 | },
910 | ContainerID: "containeriid",
911 | ContainerName: "/container_name",
912 | ContainerImageID: "contaimageid",
913 | ContainerImageName: "container_image_name",
914 | }
915 |
916 | loggerDriver, err := New(info)
917 | if err != nil {
918 | t.Fatal(err)
919 | }
920 |
921 | for i := 0; i < defaultStreamChannelSize*4; i++ {
922 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
923 | t.Fatal(err)
924 | }
925 | }
926 |
927 | err = loggerDriver.Close()
928 | if err != nil {
929 | t.Fatal(err)
930 | }
931 |
932 | if len(hec.messages) != defaultStreamChannelSize*4 {
933 | t.Fatal("Not all messages delivered")
934 | }
935 |
936 | for i, message := range hec.messages {
937 | if event, err := message.EventAsMap(); err != nil {
938 | t.Fatal(err)
939 | } else {
940 | if event["line"] != fmt.Sprintf("%d", i) {
941 | t.Fatalf("Unexpected event in message %v", event)
942 | }
943 | }
944 | }
945 |
946 | // 16 batches
947 | if hec.numOfRequests != 16 {
948 | t.Fatalf("Unexpected number of requests %d", hec.numOfRequests)
949 | }
950 |
951 | err = hec.Close()
952 | if err != nil {
953 | t.Fatal(err)
954 | }
955 |
956 | if err := os.Setenv(envVarPostMessagesFrequency, ""); err != nil {
957 | t.Fatal(err)
958 | }
959 | }
960 |
961 | // Verify that the driver uses a timer to flush events at least as often as the specified frequency
962 | func TestFrequency(t *testing.T) {
963 | if err := os.Setenv(envVarPostMessagesFrequency, "5ms"); err != nil {
964 | t.Fatal(err)
965 | }
966 |
967 | hec := NewHTTPEventCollectorMock(t)
968 |
969 | go hec.Serve()
970 |
971 | info := logger.Info{
972 | Config: map[string]string{
973 | splunkURLKey: hec.URL(),
974 | splunkTokenKey: hec.token,
975 | },
976 | ContainerID: "containeriid",
977 | ContainerName: "/container_name",
978 | ContainerImageID: "contaimageid",
979 | ContainerImageName: "container_image_name",
980 | }
981 |
982 | loggerDriver, err := New(info)
983 | if err != nil {
984 | t.Fatal(err)
985 | }
986 |
987 | for i := 0; i < 10; i++ {
988 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
989 | t.Fatal(err)
990 | }
991 | time.Sleep(15 * time.Millisecond)
992 | }
993 |
994 | err = loggerDriver.Close()
995 | if err != nil {
996 | t.Fatal(err)
997 | }
998 |
999 | if len(hec.messages) != 10 {
1000 | t.Fatal("Not all messages delivered")
1001 | }
1002 |
1003 | for i, message := range hec.messages {
1004 | if event, err := message.EventAsMap(); err != nil {
1005 | t.Fatal(err)
1006 | } else {
1007 | if event["line"] != fmt.Sprintf("%d", i) {
1008 | t.Fatalf("Unexpected event in message %v", event)
1009 | }
1010 | }
1011 | }
1012 |
1013 | // 1 request to verify the connection and 10 to confirm messages were sent at the required frequency;
1014 | // because the frequency is very small (to keep the test quick), allow as few as 9 instead of 11 in case context switches are slow
1015 | if hec.numOfRequests < 9 {
1016 | t.Fatalf("Unexpected number of requests %d", hec.numOfRequests)
1017 | }
1018 |
1019 | err = hec.Close()
1020 | if err != nil {
1021 | t.Fatal(err)
1022 | }
1023 |
1024 | if err := os.Setenv(envVarPostMessagesFrequency, ""); err != nil {
1025 | t.Fatal(err)
1026 | }
1027 | }
1028 |
1029 | // Simulate behavior similar to the first version of the Splunk Logging Driver, which sent one message
1030 | // per request
1031 | func TestOneMessagePerRequest(t *testing.T) {
1032 | if err := os.Setenv(envVarPostMessagesFrequency, "10h"); err != nil {
1033 | t.Fatal(err)
1034 | }
1035 |
1036 | if err := os.Setenv(envVarPostMessagesBatchSize, "1"); err != nil {
1037 | t.Fatal(err)
1038 | }
1039 |
1040 | if err := os.Setenv(envVarBufferMaximum, "1"); err != nil {
1041 | t.Fatal(err)
1042 | }
1043 |
1044 | if err := os.Setenv(envVarStreamChannelSize, "0"); err != nil {
1045 | t.Fatal(err)
1046 | }
1047 |
1048 | hec := NewHTTPEventCollectorMock(t)
1049 |
1050 | go hec.Serve()
1051 |
1052 | info := logger.Info{
1053 | Config: map[string]string{
1054 | splunkURLKey: hec.URL(),
1055 | splunkTokenKey: hec.token,
1056 | },
1057 | ContainerID: "containeriid",
1058 | ContainerName: "/container_name",
1059 | ContainerImageID: "contaimageid",
1060 | ContainerImageName: "container_image_name",
1061 | }
1062 |
1063 | loggerDriver, err := New(info)
1064 | if err != nil {
1065 | t.Fatal(err)
1066 | }
1067 |
1068 | for i := 0; i < 10; i++ {
1069 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
1070 | t.Fatal(err)
1071 | }
1072 | }
1073 |
1074 | err = loggerDriver.Close()
1075 | if err != nil {
1076 | t.Fatal(err)
1077 | }
1078 |
1079 | if len(hec.messages) != 10 {
1080 | t.Fatal("Not all messages delivered")
1081 | }
1082 |
1083 | for i, message := range hec.messages {
1084 | if event, err := message.EventAsMap(); err != nil {
1085 | t.Fatal(err)
1086 | } else {
1087 | if event["line"] != fmt.Sprintf("%d", i) {
1088 | t.Fatalf("Unexpected event in message %v", event)
1089 | }
1090 | }
1091 | }
1092 |
1093 | // 10 messages
1094 | if hec.numOfRequests != 10 {
1095 | t.Fatalf("Unexpected number of requests %d", hec.numOfRequests)
1096 | }
1097 |
1098 | err = hec.Close()
1099 | if err != nil {
1100 | t.Fatal(err)
1101 | }
1102 |
1103 | if err := os.Setenv(envVarPostMessagesFrequency, ""); err != nil {
1104 | t.Fatal(err)
1105 | }
1106 |
1107 | if err := os.Setenv(envVarPostMessagesBatchSize, ""); err != nil {
1108 | t.Fatal(err)
1109 | }
1110 |
1111 | if err := os.Setenv(envVarBufferMaximum, ""); err != nil {
1112 | t.Fatal(err)
1113 | }
1114 |
1115 | if err := os.Setenv(envVarStreamChannelSize, ""); err != nil {
1116 | t.Fatal(err)
1117 | }
1118 | }
1119 |
1120 | // Driver should not be created when HEC is unresponsive
1121 | func TestVerify(t *testing.T) {
1122 | hec := NewHTTPEventCollectorMock(t)
1123 | hec.simulateServerError = true
1124 | go hec.Serve()
1125 |
1126 | info := logger.Info{
1127 | Config: map[string]string{
1128 | splunkURLKey: hec.URL(),
1129 | splunkTokenKey: hec.token,
1130 | splunkVerifyConnectionKey: "true",
1131 | },
1132 | ContainerID: "containeriid",
1133 | ContainerName: "/container_name",
1134 | ContainerImageID: "contaimageid",
1135 | ContainerImageName: "container_image_name",
1136 | }
1137 |
1138 | _, err := New(info)
1139 | if err == nil {
1140 | t.Fatal("Expecting driver to fail, when server is unresponsive")
1141 | }
1142 |
1143 | err = hec.Close()
1144 | if err != nil {
1145 | t.Fatal(err)
1146 | }
1147 | }
1148 |
1149 | // Verify that the user can choose to skip verification that Splunk HEC is reachable.
1150 | // This test also exercises the retry logic.
1151 | func TestSkipVerify(t *testing.T) {
1152 | hec := NewHTTPEventCollectorMock(t)
1153 | hec.simulateServerError = true
1154 | go hec.Serve()
1155 |
1156 | info := logger.Info{
1157 | Config: map[string]string{
1158 | splunkURLKey: hec.URL(),
1159 | splunkTokenKey: hec.token,
1160 | splunkVerifyConnectionKey: "false",
1161 | },
1162 | ContainerID: "containeriid",
1163 | ContainerName: "/container_name",
1164 | ContainerImageID: "contaimageid",
1165 | ContainerImageName: "container_image_name",
1166 | }
1167 |
1168 | loggerDriver, err := New(info)
1169 | if err != nil {
1170 | t.Fatal(err)
1171 | }
1172 |
1173 | if hec.connectionVerified {
1174 | t.Fatal("Connection should not be verified")
1175 | }
1176 |
1177 | for i := 0; i < defaultStreamChannelSize*2; i++ {
1178 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
1179 | t.Fatal(err)
1180 | }
1181 | }
1182 |
1183 | if len(hec.messages) != 0 {
1184 | t.Fatal("No messages should be accepted at this point")
1185 | }
1186 |
1187 | hec.simulateServerError = false
1188 |
1189 | for i := defaultStreamChannelSize * 2; i < defaultStreamChannelSize*4; i++ {
1190 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
1191 | t.Fatal(err)
1192 | }
1193 | }
1194 |
1195 | err = loggerDriver.Close()
1196 | if err != nil {
1197 | t.Fatal(err)
1198 | }
1199 |
1200 | if len(hec.messages) != defaultStreamChannelSize*4 {
1201 | t.Fatal("Not all messages delivered")
1202 | }
1203 |
1204 | for i, message := range hec.messages {
1205 | if event, err := message.EventAsMap(); err != nil {
1206 | t.Fatal(err)
1207 | } else {
1208 | if event["line"] != fmt.Sprintf("%d", i) {
1209 | t.Fatalf("Unexpected event in message %v", event)
1210 | }
1211 | }
1212 | }
1213 |
1214 | err = hec.Close()
1215 | if err != nil {
1216 | t.Fatal(err)
1217 | }
1218 | }
1219 |
1220 | // Verify behavior when the whole buffer has been filled
1221 | func TestBufferMaximum(t *testing.T) {
1222 | if err := os.Setenv(envVarPostMessagesBatchSize, "2"); err != nil {
1223 | t.Fatal(err)
1224 | }
1225 |
1226 | if err := os.Setenv(envVarBufferMaximum, "10"); err != nil {
1227 | t.Fatal(err)
1228 | }
1229 |
1230 | if err := os.Setenv(envVarStreamChannelSize, "0"); err != nil {
1231 | t.Fatal(err)
1232 | }
1233 |
1234 | hec := NewHTTPEventCollectorMock(t)
1235 | hec.simulateServerError = true
1236 | go hec.Serve()
1237 |
1238 | info := logger.Info{
1239 | Config: map[string]string{
1240 | splunkURLKey: hec.URL(),
1241 | splunkTokenKey: hec.token,
1242 | splunkVerifyConnectionKey: "false",
1243 | },
1244 | ContainerID: "containeriid",
1245 | ContainerName: "/container_name",
1246 | ContainerImageID: "contaimageid",
1247 | ContainerImageName: "container_image_name",
1248 | }
1249 |
1250 | loggerDriver, err := New(info)
1251 | if err != nil {
1252 | t.Fatal(err)
1253 | }
1254 |
1255 | if hec.connectionVerified {
1256 | t.Fatal("Connection should not be verified")
1257 | }
1258 |
1259 | for i := 0; i < 11; i++ {
1260 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
1261 | t.Fatal(err)
1262 | }
1263 | }
1264 |
1265 | if len(hec.messages) != 0 {
1266 | t.Fatal("No messages should be accepted at this point")
1267 | }
1268 |
1269 | hec.simulateServerError = false
1270 |
1271 | err = loggerDriver.Close()
1272 | if err != nil {
1273 | t.Fatal(err)
1274 | }
1275 |
1276 | if len(hec.messages) != 9 {
1277 | t.Fatalf("Expected # of messages %d, got %d", 9, len(hec.messages))
1278 | }
1279 |
1280 | // The first two messages were dropped (written to the daemon log) when the buffer was full
1281 | for i, message := range hec.messages {
1282 | if event, err := message.EventAsMap(); err != nil {
1283 | t.Fatal(err)
1284 | } else {
1285 | if event["line"] != fmt.Sprintf("%d", i+2) {
1286 | t.Fatalf("Unexpected event in message %v", event)
1287 | }
1288 | }
1289 | }
1290 |
1291 | err = hec.Close()
1292 | if err != nil {
1293 | t.Fatal(err)
1294 | }
1295 |
1296 | if err := os.Setenv(envVarPostMessagesBatchSize, ""); err != nil {
1297 | t.Fatal(err)
1298 | }
1299 |
1300 | if err := os.Setenv(envVarBufferMaximum, ""); err != nil {
1301 | t.Fatal(err)
1302 | }
1303 |
1304 | if err := os.Setenv(envVarStreamChannelSize, ""); err != nil {
1305 | t.Fatal(err)
1306 | }
1307 | }
1308 |
1309 | // Verify that Close does not block when HEC is down the whole time
1310 | func TestServerAlwaysDown(t *testing.T) {
1311 | if err := os.Setenv(envVarPostMessagesBatchSize, "2"); err != nil {
1312 | t.Fatal(err)
1313 | }
1314 |
1315 | if err := os.Setenv(envVarBufferMaximum, "4"); err != nil {
1316 | t.Fatal(err)
1317 | }
1318 |
1319 | if err := os.Setenv(envVarStreamChannelSize, "0"); err != nil {
1320 | t.Fatal(err)
1321 | }
1322 |
1323 | hec := NewHTTPEventCollectorMock(t)
1324 | hec.simulateServerError = true
1325 | go hec.Serve()
1326 |
1327 | info := logger.Info{
1328 | Config: map[string]string{
1329 | splunkURLKey: hec.URL(),
1330 | splunkTokenKey: hec.token,
1331 | splunkVerifyConnectionKey: "false",
1332 | },
1333 | ContainerID: "containeriid",
1334 | ContainerName: "/container_name",
1335 | ContainerImageID: "contaimageid",
1336 | ContainerImageName: "container_image_name",
1337 | }
1338 |
1339 | loggerDriver, err := New(info)
1340 | if err != nil {
1341 | t.Fatal(err)
1342 | }
1343 |
1344 | if hec.connectionVerified {
1345 | t.Fatal("Connection should not be verified")
1346 | }
1347 |
1348 | for i := 0; i < 5; i++ {
1349 | if err := loggerDriver.Log(&logger.Message{Line: []byte(fmt.Sprintf("%d", i)), Source: "stdout", Timestamp: time.Now()}); err != nil {
1350 | t.Fatal(err)
1351 | }
1352 | }
1353 |
1354 | err = loggerDriver.Close()
1355 | if err != nil {
1356 | t.Fatal(err)
1357 | }
1358 |
1359 | if len(hec.messages) != 0 {
1360 | t.Fatal("No messages should be sent")
1361 | }
1362 |
1363 | err = hec.Close()
1364 | if err != nil {
1365 | t.Fatal(err)
1366 | }
1367 |
1368 | if err := os.Setenv(envVarPostMessagesBatchSize, ""); err != nil {
1369 | t.Fatal(err)
1370 | }
1371 |
1372 | if err := os.Setenv(envVarBufferMaximum, ""); err != nil {
1373 | t.Fatal(err)
1374 | }
1375 |
1376 | if err := os.Setenv(envVarStreamChannelSize, ""); err != nil {
1377 | t.Fatal(err)
1378 | }
1379 | }
1380 |
1381 | // Verify that messages cannot be sent after the driver is closed
1382 | func TestCannotSendAfterClose(t *testing.T) {
1383 | hec := NewHTTPEventCollectorMock(t)
1384 | go hec.Serve()
1385 |
1386 | info := logger.Info{
1387 | Config: map[string]string{
1388 | splunkURLKey: hec.URL(),
1389 | splunkTokenKey: hec.token,
1390 | },
1391 | ContainerID: "containeriid",
1392 | ContainerName: "/container_name",
1393 | ContainerImageID: "contaimageid",
1394 | ContainerImageName: "container_image_name",
1395 | }
1396 |
1397 | loggerDriver, err := New(info)
1398 | if err != nil {
1399 | t.Fatal(err)
1400 | }
1401 |
1402 | if err := loggerDriver.Log(&logger.Message{Line: []byte("message1"), Source: "stdout", Timestamp: time.Now()}); err != nil {
1403 | t.Fatal(err)
1404 | }
1405 |
1406 | err = loggerDriver.Close()
1407 | if err != nil {
1408 | t.Fatal(err)
1409 | }
1410 |
1411 | if err := loggerDriver.Log(&logger.Message{Line: []byte("message2"), Source: "stdout", Timestamp: time.Now()}); err == nil {
1412 | 		t.Fatal("Driver should not allow sending messages after close")
1413 | }
1414 |
1415 | if len(hec.messages) != 1 {
1416 | t.Fatal("Only one message should be sent")
1417 | }
1418 |
1419 | message := hec.messages[0]
1420 | if event, err := message.EventAsMap(); err != nil {
1421 | t.Fatal(err)
1422 | } else {
1423 | if event["line"] != "message1" {
1424 | t.Fatalf("Unexpected event in message %v", event)
1425 | }
1426 | }
1427 |
1428 | err = hec.Close()
1429 | if err != nil {
1430 | t.Fatal(err)
1431 | }
1432 | }
1433 |
1434 | func TestSetTelemetry(t *testing.T) {
1435 | if err := os.Setenv(envVarSplunkTelemetry, ""); err != nil {
1436 | t.Fatal(err)
1437 | }
1438 | }
1439 |
--------------------------------------------------------------------------------
/splunkhecmock_test.go:
--------------------------------------------------------------------------------
1 | /*
2 |  * Copyright 2018 Splunk, Inc.
3 | *
4 | * Licensed under the Apache License, Version 2.0 (the "License");
5 | * you may not use this file except in compliance with the License.
6 | * You may obtain a copy of the License at
7 | *
8 | * http://www.apache.org/licenses/LICENSE-2.0
9 | *
10 | * Unless required by applicable law or agreed to in writing, software
11 | * distributed under the License is distributed on an "AS IS" BASIS,
12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | * See the License for the specific language governing permissions and
14 | * limitations under the License.
15 | */
16 |
17 | package main
18 |
19 | import (
20 | "compress/gzip"
21 | "encoding/json"
22 | "fmt"
23 | "io"
24 | "io/ioutil"
25 | "net"
26 | "net/http"
27 | "testing"
28 | )
29 |
30 | func (message *splunkMessage) EventAsString() (string, error) {
31 | if val, ok := message.Event.(string); ok {
32 | return val, nil
33 | }
34 | return "", fmt.Errorf("cannot cast Event %v to string", message.Event)
35 | }
36 |
37 | func (message *splunkMessage) EventAsMap() (map[string]interface{}, error) {
38 | if val, ok := message.Event.(map[string]interface{}); ok {
39 | return val, nil
40 | }
41 | return nil, fmt.Errorf("cannot cast Event %v to map", message.Event)
42 | }
43 |
44 | type HTTPEventCollectorMock struct {
45 | tcpAddr *net.TCPAddr
46 | tcpListener *net.TCPListener
47 |
48 | token string
49 | simulateServerError bool
50 |
51 | test *testing.T
52 |
53 | connectionVerified bool
54 | gzipEnabled *bool
55 | messages []*splunkMessage
56 | numOfRequests int
57 | }
58 |
59 | func NewHTTPEventCollectorMock(t *testing.T) *HTTPEventCollectorMock {
60 | tcpAddr := &net.TCPAddr{IP: []byte{127, 0, 0, 1}, Port: 0, Zone: ""}
61 | tcpListener, err := net.ListenTCP("tcp", tcpAddr)
62 | if err != nil {
63 | t.Fatal(err)
64 | }
65 | return &HTTPEventCollectorMock{
66 | tcpAddr: tcpAddr,
67 | tcpListener: tcpListener,
68 | token: "4642492F-D8BD-47F1-A005-0C08AE4657DF",
69 | simulateServerError: false,
70 | test: t,
71 | connectionVerified: false}
72 | }
73 |
74 | func (hec *HTTPEventCollectorMock) URL() string {
75 | return "http://" + hec.tcpListener.Addr().String()
76 | }
77 |
78 | func (hec *HTTPEventCollectorMock) Serve() error {
79 | return http.Serve(hec.tcpListener, hec)
80 | }
81 |
82 | func (hec *HTTPEventCollectorMock) Close() error {
83 | return hec.tcpListener.Close()
84 | }
85 |
86 | func (hec *HTTPEventCollectorMock) ServeHTTP(writer http.ResponseWriter, request *http.Request) {
87 | var err error
88 |
89 | hec.numOfRequests++
90 |
91 | if hec.simulateServerError {
92 | if request.Body != nil {
93 | defer request.Body.Close()
94 | }
95 | writer.WriteHeader(http.StatusInternalServerError)
96 | return
97 | }
98 |
99 | switch request.Method {
100 | case http.MethodGet:
101 | 		// Verify that the connection-verification request (GET) is made only once
102 | if hec.connectionVerified {
103 | hec.test.Errorf("Connection should not be verified more than once. Got second request with %s method.", request.Method)
104 | }
105 | hec.connectionVerified = true
106 | writer.WriteHeader(http.StatusOK)
107 | case http.MethodPost:
108 | // Always verify that Driver is using correct path to HEC
109 | if request.URL.String() != "/services/collector/event/1.0" {
110 | hec.test.Errorf("Unexpected path %v", request.URL)
111 | }
112 | defer request.Body.Close()
113 |
114 | if authorization, ok := request.Header["Authorization"]; !ok || authorization[0] != ("Splunk "+hec.token) {
115 | hec.test.Error("Authorization header is invalid.")
116 | }
117 |
118 | gzipEnabled := false
119 | if contentEncoding, ok := request.Header["Content-Encoding"]; ok && contentEncoding[0] == "gzip" {
120 | gzipEnabled = true
121 | }
122 |
123 | if hec.gzipEnabled == nil {
124 | hec.gzipEnabled = &gzipEnabled
125 | } else if gzipEnabled != *hec.gzipEnabled {
126 | 			// Not wrong per se, but we know the Splunk Logging Driver never changes encoding mid-stream
127 | hec.test.Error("Driver should not change Content Encoding.")
128 | }
129 |
130 | var gzipReader *gzip.Reader
131 | var reader io.Reader
132 | if gzipEnabled {
133 | gzipReader, err = gzip.NewReader(request.Body)
134 | if err != nil {
135 | hec.test.Fatal(err)
136 | }
137 | reader = gzipReader
138 | } else {
139 | reader = request.Body
140 | }
141 |
142 | // Read body
143 | var body []byte
144 | body, err = ioutil.ReadAll(reader)
145 | if err != nil {
146 | hec.test.Fatal(err)
147 | }
148 |
149 | // Parse message
150 | messageStart := 0
151 | for i := 0; i < len(body); i++ {
152 | if i == len(body)-1 || (body[i] == '}' && body[i+1] == '{') {
153 | var message splunkMessage
154 | err = json.Unmarshal(body[messageStart:i+1], &message)
155 | if err != nil {
156 | hec.test.Log(string(body[messageStart : i+1]))
157 | hec.test.Fatal(err)
158 | }
159 | hec.messages = append(hec.messages, &message)
160 | messageStart = i + 1
161 | }
162 | }
163 |
164 | if gzipEnabled {
165 | gzipReader.Close()
166 | }
167 |
168 | writer.WriteHeader(http.StatusOK)
169 | default:
170 | 		hec.test.Errorf("Unexpected HTTP method %s", request.Method)
171 | writer.WriteHeader(http.StatusBadRequest)
172 | }
173 | }
174 |
--------------------------------------------------------------------------------
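The mock above splits a batched request body by scanning for the `}{` byte pair, which works for the driver's output but would misparse an event whose own content contains `}{`. A `json.Decoder` reads a stream of concatenated top-level JSON objects without byte scanning. This is a sketch under that assumption, not code from the repo; `decodeBatch` and the trimmed `splunkMessage` type are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// splunkMessage is trimmed down to the one field this sketch needs.
type splunkMessage struct {
	Event interface{} `json:"event"`
}

// decodeBatch reads concatenated JSON objects, the shape the driver posts
// to HEC (no delimiter between events), decoding one object at a time.
func decodeBatch(body string) ([]splunkMessage, error) {
	dec := json.NewDecoder(strings.NewReader(body))
	var msgs []splunkMessage
	for {
		var m splunkMessage
		if err := dec.Decode(&m); err == io.EOF {
			break
		} else if err != nil {
			return nil, err
		}
		msgs = append(msgs, m)
	}
	return msgs, nil
}

func main() {
	// The first event contains a literal "}{", which a byte scan would split on.
	msgs, err := decodeBatch(`{"event":"x}{y"}{"event":"z"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(msgs)) // 2
}
```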
/templates.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | "bytes"
5 | "encoding/json"
6 | "strings"
7 | "text/template"
8 | )
9 |
10 | // basicFunctions are the set of initial
11 | // functions provided to every template.
12 | var basicFunctions = template.FuncMap{
13 | "json": func(v interface{}) string {
14 | buf := &bytes.Buffer{}
15 | enc := json.NewEncoder(buf)
16 | enc.SetEscapeHTML(false)
17 | enc.Encode(v)
18 | // Remove the trailing new line added by the encoder
19 | return strings.TrimSpace(buf.String())
20 | },
21 | "split": strings.Split,
22 | "join": strings.Join,
23 | "title": strings.Title,
24 | "lower": strings.ToLower,
25 | "upper": strings.ToUpper,
26 | "pad": padWithSpace,
27 | "truncate": truncateWithLength,
28 | }
29 |
30 | // NewParse creates a new tagged template with the basic functions
31 | // and parses the given format.
32 | func NewParse(tag, format string) (*template.Template, error) {
33 | return template.New(tag).Funcs(basicFunctions).Parse(format)
34 | }
35 |
36 | // padWithSpace pads the input with the given numbers of leading and trailing spaces if it is non-empty
37 | func padWithSpace(source string, prefix, suffix int) string {
38 | if source == "" {
39 | return source
40 | }
41 | return strings.Repeat(" ", prefix) + source + strings.Repeat(" ", suffix)
42 | }
43 |
44 | // truncateWithLength truncates the source string to at most the given length
45 | func truncateWithLength(source string, length int) string {
46 | if len(source) < length {
47 | return source
48 | }
49 | return source[:length]
50 | }
51 |
--------------------------------------------------------------------------------
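`NewParse` above is how tag templates get the extra functions from `basicFunctions`. A self-contained sketch of the same pattern with a pared-down FuncMap ("upper" mirrors the entry registered in templates.go; the tag format and container ID here are made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// render parses a tag format with a small FuncMap and executes it against
// a struct, the same shape of work NewParse sets up for the driver.
func render(format, id string) (string, error) {
	funcs := template.FuncMap{"upper": strings.ToUpper}
	tmpl, err := template.New("tag").Funcs(funcs).Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ ID string }{id}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render("{{.ID | upper}}", "containeriid")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // CONTAINERIID
}
```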
/test/LogEntry.proto:
--------------------------------------------------------------------------------
1 | message LogEntry {
2 | optional string source = 1;
3 | optional int64 time_nano = 2;
4 | required bytes line = 3;
5 | optional bool partial = 4;
6 | }
--------------------------------------------------------------------------------
/test/LogEntry_pb2.py:
--------------------------------------------------------------------------------
1 | # Generated by the protocol buffer compiler. DO NOT EDIT!
2 | # source: LogEntry.proto
3 |
4 | import sys
5 | _b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
6 | from google.protobuf import descriptor as _descriptor
7 | from google.protobuf import message as _message
8 | from google.protobuf import reflection as _reflection
9 | from google.protobuf import symbol_database as _symbol_database
10 | from google.protobuf import descriptor_pb2
11 | # @@protoc_insertion_point(imports)
12 |
13 | _sym_db = _symbol_database.Default()
14 |
15 |
16 |
17 |
18 | DESCRIPTOR = _descriptor.FileDescriptor(
19 | name='LogEntry.proto',
20 | package='',
21 | syntax='proto2',
22 | serialized_pb=_b('\n\x0eLogEntry.proto\"L\n\x08LogEntry\x12\x0e\n\x06source\x18\x01 \x01(\t\x12\x11\n\ttime_nano\x18\x02 \x01(\x03\x12\x0c\n\x04line\x18\x03 \x02(\x0c\x12\x0f\n\x07partial\x18\x04 \x01(\x08')
23 | )
24 |
25 |
26 |
27 |
28 | _LOGENTRY = _descriptor.Descriptor(
29 | name='LogEntry',
30 | full_name='LogEntry',
31 | filename=None,
32 | file=DESCRIPTOR,
33 | containing_type=None,
34 | fields=[
35 | _descriptor.FieldDescriptor(
36 | name='source', full_name='LogEntry.source', index=0,
37 | number=1, type=9, cpp_type=9, label=1,
38 | has_default_value=False, default_value=_b("").decode('utf-8'),
39 | message_type=None, enum_type=None, containing_type=None,
40 | is_extension=False, extension_scope=None,
41 | options=None, file=DESCRIPTOR),
42 | _descriptor.FieldDescriptor(
43 | name='time_nano', full_name='LogEntry.time_nano', index=1,
44 | number=2, type=3, cpp_type=2, label=1,
45 | has_default_value=False, default_value=0,
46 | message_type=None, enum_type=None, containing_type=None,
47 | is_extension=False, extension_scope=None,
48 | options=None, file=DESCRIPTOR),
49 | _descriptor.FieldDescriptor(
50 | name='line', full_name='LogEntry.line', index=2,
51 | number=3, type=12, cpp_type=9, label=2,
52 | has_default_value=False, default_value=_b(""),
53 | message_type=None, enum_type=None, containing_type=None,
54 | is_extension=False, extension_scope=None,
55 | options=None, file=DESCRIPTOR),
56 | _descriptor.FieldDescriptor(
57 | name='partial', full_name='LogEntry.partial', index=3,
58 | number=4, type=8, cpp_type=7, label=1,
59 | has_default_value=False, default_value=False,
60 | message_type=None, enum_type=None, containing_type=None,
61 | is_extension=False, extension_scope=None,
62 | options=None, file=DESCRIPTOR),
63 | ],
64 | extensions=[
65 | ],
66 | nested_types=[],
67 | enum_types=[
68 | ],
69 | options=None,
70 | is_extendable=False,
71 | syntax='proto2',
72 | extension_ranges=[],
73 | oneofs=[
74 | ],
75 | serialized_start=18,
76 | serialized_end=94,
77 | )
78 |
79 | DESCRIPTOR.message_types_by_name['LogEntry'] = _LOGENTRY
80 | _sym_db.RegisterFileDescriptor(DESCRIPTOR)
81 |
82 | LogEntry = _reflection.GeneratedProtocolMessageType('LogEntry', (_message.Message,), dict(
83 | DESCRIPTOR = _LOGENTRY,
84 | __module__ = 'LogEntry_pb2'
85 | # @@protoc_insertion_point(class_scope:LogEntry)
86 | ))
87 | _sym_db.RegisterMessage(LogEntry)
88 |
89 |
90 | # @@protoc_insertion_point(module_scope)
91 |
--------------------------------------------------------------------------------
/test/README.md:
--------------------------------------------------------------------------------
1 | # Prerequisites
2 | * The plugin binary must exist on the system and be runnable by the user running pytest.
3 | * A Splunk instance must be reachable from the test host on both the HEC port and the splunkd management port.
4 | * The Splunk HEC token must not override the index; the tests rely on `index=main`.
5 | * Python 3 is required.
6 |
7 | # Testing Instructions
8 | 0. (Optional) Use a virtual environment for the test
9 | `virtualenv --python=python3.6 venv`
10 | `source venv/bin/activate`
11 | 1. Install the dependencies
12 | `pip install -r requirements.txt`
13 | 2. Start the test with the required options configured
14 | `python -m pytest `
15 |
16 | **Options are:**
17 | --splunkd-url
18 |   * Description: splunkd URL used to search and verify test data, e.g. https://localhost:8089
19 | * Default: https://localhost:8089
20 |
21 | --splunk-user
22 | * Description: splunk username
23 | * Default: admin
24 |
25 | --splunk-password
26 | * Description: splunk user password
27 | * Default: changeme
28 |
29 | --splunk-hec-url
30 | * Description: splunk hec endpoint used by logging plugin.
31 | * Default: https://localhost:8088
32 |
33 | --splunk-hec-token
34 | * Description: splunk hec token for authentication.
35 | * Required
36 |
37 | --docker-plugin-path
38 | * Description: docker plugin binary path
39 | * Required
40 |
41 | --fifo-path
42 | * Description: full file path to the fifo
43 | * Required
44 |
45 |
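Putting the options together, a complete invocation looks like the following (the token, plugin path, and fifo path are placeholders; substitute your own values):

```shell
python -m pytest \
    --splunkd-url https://localhost:8089 \
    --splunk-user admin \
    --splunk-password changeme \
    --splunk-hec-url https://localhost:8088 \
    --splunk-hec-token <your-hec-token> \
    --docker-plugin-path /path/to/splunk-logging-plugin \
    --fifo-path /tmp/splunk-test-fifo
```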
--------------------------------------------------------------------------------
/test/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/splunk/docker-logging-plugin/f376affbeb8fa210d38cc1657014f1a9cbf2fb79/test/__init__.py
--------------------------------------------------------------------------------
/test/common.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright 2018 Splunk, Inc..
3 |
4 | Licensed under the Apache License, Version 2.0 (the "License");
5 | you may not use this file except in compliance with the License.
6 | You may obtain a copy of the License at
7 |
8 | http://www.apache.org/licenses/LICENSE-2.0
9 |
10 | Unless required by applicable law or agreed to in writing, software
11 | distributed under the License is distributed on an "AS IS" BASIS,
12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | See the License for the specific language governing permissions and
14 | limitations under the License.
15 | """
16 |
17 | import json
18 | import logging
19 | import time
20 | import LogEntry_pb2
21 | import subprocess
22 | import struct
23 | import requests
24 | import os
25 | import sys
26 | from multiprocessing import Pool
27 | import requests_unixsocket
28 | from requests.packages.urllib3.util.retry import Retry
29 | from requests.adapters import HTTPAdapter
30 |
31 |
32 | TIMEROUT = 500
33 | SOCKET_START_URL = "http+unix://%2Frun%2Fdocker%2Fplugins%2Fsplunklog.sock/" \
34 | "LogDriver.StartLogging"
35 | SOCKET_STOP_URL = "http+unix://%2Frun%2Fdocker%2Fplugins%2Fsplunklog.sock/" \
36 | "LogDriver.StopLogging"
37 |
38 |
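The percent-encoded socket paths in the URLs above follow the requests_unixsocket convention: the filesystem path of the Unix socket is URL-quoted into the host position of an `http+unix://` URL. The encoding can be reproduced with the standard library (a standalone sketch, not tied to this module):

```python
from urllib.parse import quote

# requests_unixsocket expects the socket path percent-encoded as the URL host
sock_path = "/run/docker/plugins/splunklog.sock"
encoded = quote(sock_path, safe="")  # encode "/" as %2F too
url = "http+unix://{0}/LogDriver.StartLogging".format(encoded)
```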
39 | logger = logging.getLogger(__name__)
40 | logger.setLevel(logging.INFO)
41 | formatter = logging.Formatter('%(asctime)s - %(name)s -' +
42 | ' %(levelname)s - %(message)s')
43 | handler = logging.StreamHandler(sys.stdout)
44 | handler.setFormatter(formatter)
45 | logger.addHandler(handler)
46 |
47 |
48 | def start_logging_plugin(plugin_path):
49 | '''Start to run logging plugin binary'''
50 |     args = (plugin_path,)  # one-element tuple: the program with no arguments
51 | popen = subprocess.Popen(args, stdout=subprocess.PIPE)
52 |
53 | return popen
54 |
55 |
56 | def kill_logging_plugin(plugin_path):
57 | '''Kill running logging plugin process'''
58 | os.system("killall " + plugin_path)
59 |
60 |
61 | def start_log_producer_from_input(file_path, test_input, u_id, timeout=0):
62 | '''
63 | Spawn a thread to write logs to fifo from the given test input
64 | @param: file_path
65 | @param: test_input
66 | type: array of tuples of (string, Boolean)
67 | @param: u_id
68 | @param: timeout
69 | '''
70 |     pool = Pool(processes=1)  # Start a worker process.
71 | pool.apply_async(__write_to_fifo, [file_path, test_input, u_id, timeout])
72 |
73 |
74 | def start_log_producer_from_file(file_path, u_id, input_file):
75 | '''
76 | Spawn a thread to write logs to fifo by streaming the
77 | content from given file
78 | @param: file_path
79 | @param: u_id
80 | @param: input_file
81 | '''
82 |     pool = Pool(processes=1)  # Start a worker process.
83 | pool.apply_async(__write_file_to_fifo, [file_path, u_id, input_file])
84 |
85 |
86 | def __write_to_fifo(fifo_path, test_input, u_id, timeout=0):
87 | f_writer = __open_fifo(fifo_path)
88 |
89 | for message, partial in test_input:
90 | logger.info("Writing data in protobuf with source=%s", u_id)
91 | if timeout != 0:
92 | time.sleep(timeout)
93 | __write_proto_buf_message(f_writer,
94 | message=message,
95 | partial=partial,
96 | source=u_id)
97 |
98 | __close_fifo(f_writer)
99 |
100 |
101 | def __write_file_to_fifo(fifo_path, u_id, input_file):
102 | f_writer = __open_fifo(fifo_path)
103 |
104 | logger.info("Writing data in protobuf with source=%s", u_id)
105 |
106 | message = ""
107 | with open(input_file, "r") as f:
108 | message = f.read(15360) # 15kb
109 | while not message.endswith("\n"):
110 | __write_proto_buf_message(f_writer,
111 | message=message,
112 | partial=True,
113 | source=u_id)
114 | message = f.read(15360)
115 | # write the remaining
116 | __write_proto_buf_message(f_writer,
117 | message=message,
118 | partial=False,
119 | source=u_id)
120 |
121 | f.close()
122 | __close_fifo(f_writer)
123 |
124 |
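The chunking loop in `__write_file_to_fifo` reads fixed-size chunks and marks every chunk partial until one ends with a newline. The same logic can be exercised on an in-memory file; this standalone sketch (like the original, it assumes the input ends with a newline) returns `(chunk, partial)` tuples:

```python
import io

def chunk_messages(f, size):
    """Read fixed-size chunks; all but the final newline-terminated chunk are partial."""
    chunks = []
    message = f.read(size)
    while not message.endswith("\n"):
        chunks.append((message, True))   # partial chunk
        message = f.read(size)
    chunks.append((message, False))      # the remainder
    return chunks

stream = io.StringIO("aaaaa\n")
assert chunk_messages(stream, 3) == [("aaa", True), ("aa\n", False)]
```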
125 | def __open_fifo(fifo_location):
126 | '''create and open a file'''
127 | if os.path.exists(fifo_location):
128 | os.unlink(fifo_location)
129 |
130 | os.mkfifo(fifo_location)
131 | fifo_writer = open(fifo_location, 'wb')
132 |
133 | return fifo_writer
134 |
135 |
136 | def __write_proto_buf_message(fifo_writer=None,
137 |                               source="test",
138 |                               time_nano=None,
139 |                               message="",
140 |                               partial=False,
141 |                               id=""):
142 |     '''
143 |     write a length-prefixed LogEntry protobuf message to the fifo
144 |     '''
145 |     log = LogEntry_pb2.LogEntry()
146 |     log.source = source
147 |     # default to "now" here; a default argument is evaluated only once at import
148 |     log.time_nano = time_nano if time_nano else int(time.time() * 1000000000)
149 |     log.line = bytes(message, "utf8")
150 |     log.partial = partial
151 |     buf = log.SerializeToString()
152 |     size = len(buf)
153 |
154 |     size_buffer = bytearray(4)
155 |     struct.pack_into(">i", size_buffer, 0, size)
156 |     fifo_writer.write(size_buffer)
157 |     fifo_writer.write(buf)
158 |     fifo_writer.flush()
159 |
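The 4-byte big-endian length prefix written above can be checked in isolation; this sketch frames a payload with the same `struct` format and decodes it back the way a reader on the other end of the fifo would:

```python
import struct

payload = b"hello fifo"
# frame = 4-byte big-endian length header, then the serialized bytes
header = bytearray(4)
struct.pack_into(">i", header, 0, len(payload))
frame = bytes(header) + payload

# a reader first decodes the length, then consumes exactly that many bytes
(size,) = struct.unpack_from(">i", frame, 0)
body = frame[4:4 + size]
assert body == payload
```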
160 |
161 | def __close_fifo(fifo_writer):
162 | '''close a file'''
163 | # os.close(fifo_writer)
164 | fifo_writer.close()
165 |
166 |
167 | def request_start_logging(file_path, hec_url, hec_token, options={}):
168 | '''
169 | send a request to the plugin to start logging
170 | :param file_path: the file path
171 | :type file_path: string
172 |
173 |     :param hec_url: the Splunk HEC endpoint URL
174 |     :type hec_url: string
175 |
176 |     :param hec_token: the Splunk HEC authentication token
177 |     :type hec_token: string
178 | '''
179 | config = {}
180 | config["splunk-url"] = hec_url
181 | config["splunk-token"] = hec_token
182 | config["splunk-insecureskipverify"] = "true"
183 | config["splunk-format"] = "json"
184 | config["tag"] = ""
185 |
186 | config = {**config, **options}
187 |
188 | req_obj = {
189 | "File": file_path,
190 | "Info": {
191 | "ContainerID": "test",
192 | "Config": config,
193 | "LogPath": "/home/ec2-user/test.txt"
194 | }
195 | }
196 |
197 | headers = {
198 | "Content-Type": "application/json",
199 | "Host": "localhost"
200 | }
201 |
202 | session = requests_unixsocket.Session()
203 | res = session.post(
204 | SOCKET_START_URL,
205 | data=json.dumps(req_obj),
206 | headers=headers)
207 |
208 | if res.status_code != 200:
209 |         raise Exception("StartLogging request failed with status {0}".format(res.status_code))
210 |
211 | logger.info(res.json())
212 |
213 |
214 | def request_stop_logging(file_path):
215 | '''
216 | send a request to the plugin to stop logging
217 | '''
218 | req_obj = {
219 | "File": file_path
220 | }
221 | session = requests_unixsocket.Session()
222 | res = session.post(
223 | SOCKET_STOP_URL,
224 | data=json.dumps(req_obj)
225 | )
226 |
227 | if res.status_code != 200:
228 |         raise Exception("StopLogging request failed with status {0}".format(res.status_code))
229 |
230 | logger.info(res.json())
231 |
232 |
233 | def check_events_from_splunk(index="main",
234 | id=None,
235 | start_time="-24h@h",
236 | end_time="now",
237 | url="",
238 | user="",
239 | password=""):
240 | '''
241 | send a search request to splunk and return the events from the result
242 | '''
243 | query = _compose_search_query(index, id)
244 | events = _collect_events(query, start_time, end_time, url, user, password)
245 |
246 | return events
247 |
248 |
249 | def _compose_search_query(index="main", id=None):
250 | return "search index={0} {1}".format(index, id)
251 |
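The resulting SPL string is simply the index filter followed by the unique id used as a search term; mirroring `_compose_search_query` above:

```python
def compose_search_query(index="main", id=None):
    # mirrors _compose_search_query: "search index=<index> <id>"
    return "search index={0} {1}".format(index, id)

assert compose_search_query("main", "source=abc") == "search index=main source=abc"
```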
252 |
253 | def _collect_events(query, start_time, end_time, url="", user="", password=""):
254 | '''
255 | Collect events by running the given search query
256 | @param: query (search query)
257 | @param: start_time (search start time)
258 | @param: end_time (search end time)
259 | returns events
260 | '''
261 |
262 | search_url = '{0}/services/search/jobs?output_mode=json'.format(
263 | url)
264 | logger.info('requesting: %s', search_url)
265 | data = {
266 | 'search': query,
267 | 'earliest_time': start_time,
268 | 'latest_time': end_time,
269 | }
270 |
271 | create_job = _requests_retry_session().post(
272 | search_url,
273 | auth=(user, password),
274 | verify=False, data=data)
275 | _check_request_status(create_job)
276 |
277 | json_res = create_job.json()
278 | job_id = json_res['sid']
279 | events = _wait_for_job_and__get_events(job_id, url, user, password)
280 |
281 | return events
282 |
283 |
284 | def _wait_for_job_and__get_events(job_id, url="", user="", password=""):
285 | '''
286 | Wait for the search job to finish and collect the result events
287 | @param: job_id
288 | returns events
289 | '''
290 | events = []
291 | job_url = '{0}/services/search/jobs/{1}?output_mode=json'.format(
292 | url, str(job_id))
293 | logger.info('requesting: %s', job_url)
294 |
295 | for _ in range(TIMEROUT):
296 | res = _requests_retry_session().get(
297 | job_url,
298 | auth=(user, password),
299 | verify=False)
300 | _check_request_status(res)
301 |
302 | job_res = res.json()
303 | dispatch_state = job_res['entry'][0]['content']['dispatchState']
304 |
305 | if dispatch_state == 'DONE':
306 | events = _get_events(job_id, url, user, password)
307 | break
308 | if dispatch_state == 'FAILED':
309 | raise Exception('Search job: {0} failed'.format(job_url))
310 | time.sleep(1)
311 |
312 | return events
313 |
314 |
315 | def _get_events(job_id, url="", user="", password=""):
316 | '''
317 | collect the result events from a search job
318 | @param: job_id
319 | returns events
320 | '''
321 | event_url = '{0}/services/search/jobs/{1}/events?output_mode=json'.format(
322 | url, str(job_id))
323 | logger.info('requesting: %s', event_url)
324 |
325 | event_job = _requests_retry_session().get(
326 | event_url, auth=(user, password),
327 | verify=False)
328 | _check_request_status(event_job)
329 |
330 | event_job_json = event_job.json()
331 | events = event_job_json['results']
332 |
333 | return events
334 |
335 |
336 | def _check_request_status(req_obj):
337 | '''
338 |     check that a request was successful
339 |     @param: req_obj
340 |     raises an Exception with the status code and details on failure
341 | '''
342 | if not req_obj.ok:
343 | raise Exception('status code: {0} \n details: {1}'.format(
344 | str(req_obj.status_code), req_obj.text))
345 |
346 |
347 | def _requests_retry_session(
348 | retries=10,
349 | backoff_factor=0.1,
350 | status_forcelist=(500, 502, 504)):
351 | '''
352 | create a retry session for HTTP/HTTPS requests
353 |     @param: retries (number of retry attempts)
354 |     @param: backoff_factor
355 |     @param: status_forcelist (list of error status codes that trigger a retry)
356 |
357 |     returns: session
358 | '''
359 | session = requests.Session()
360 | retry = Retry(
361 | total=int(retries),
362 | backoff_factor=backoff_factor,
363 | method_whitelist=frozenset(['GET', 'POST']),
364 | status_forcelist=status_forcelist,
365 | )
366 | adapter = HTTPAdapter(max_retries=retry)
367 | session.mount('http://', adapter)
368 | session.mount('https://', adapter)
369 |
370 | return session
371 |
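For intuition, urllib3's `Retry` backs off exponentially using `backoff_factor`; the sleep before the n-th retry is roughly `backoff_factor * 2**n` (the exact formula varies by urllib3 version, so treat this as an approximation of the shape, not the library's precise schedule):

```python
def approx_backoff_delays(retries, backoff_factor):
    # rough shape of urllib3 Retry sleeps; the exact formula is version-dependent
    return [backoff_factor * (2 ** n) for n in range(retries)]

assert approx_backoff_delays(4, 0.1) == [0.1, 0.2, 0.4, 0.8]
```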
--------------------------------------------------------------------------------
/test/config_params/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/splunk/docker-logging-plugin/f376affbeb8fa210d38cc1657014f1a9cbf2fb79/test/config_params/__init__.py
--------------------------------------------------------------------------------
/test/config_params/cacert.pem:
--------------------------------------------------------------------------------
1 | -----BEGIN CERTIFICATE-----
2 | MIIDejCCAmICCQCNHBN8tj/FwzANBgkqhkiG9w0BAQsFADB/MQswCQYDVQQGEwJV
3 | UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoM
4 | BlNwbHVuazEXMBUGA1UEAwwOU3BsdW5rQ29tbW9uQ0ExITAfBgkqhkiG9w0BCQEW
5 | EnN1cHBvcnRAc3BsdW5rLmNvbTAeFw0xNzAxMzAyMDI2NTRaFw0yNzAxMjgyMDI2
6 | NTRaMH8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNU2FuIEZy
7 | YW5jaXNjbzEPMA0GA1UECgwGU3BsdW5rMRcwFQYDVQQDDA5TcGx1bmtDb21tb25D
8 | QTEhMB8GCSqGSIb3DQEJARYSc3VwcG9ydEBzcGx1bmsuY29tMIIBIjANBgkqhkiG
9 | 9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzB9ltVEGk73QvPlxXtA0qMW/SLDQlQMFJ/C/
10 | tXRVJdQsmcW4WsaETteeWZh8AgozO1LqOa3I6UmrWLcv4LmUAh/T3iZWXzHLIqFN
11 | WLSVU+2g0Xkn43xSgQEPSvEK1NqZRZv1SWvx3+oGHgu03AZrqTj0HyLujqUDARFX
12 | sRvBPW/VfDkomHj9b8IuK3qOUwQtIOUr+oKx1tM1J7VNN5NflLw9NdHtlfblw0Ys
13 | 5xI5Qxu3rcCxkKQuwz9KRe4iijOIRMAKX28pbakxU9Nk38Ac3PNadgIk0s7R829k
14 | 980sqGWkd06+C17OxgjpQbvLOR20FtmQybttUsXGR7Bp07YStwIDAQABMA0GCSqG
15 | SIb3DQEBCwUAA4IBAQCxhQd6KXP2VzK2cwAqdK74bGwl5WnvsyqdPWkdANiKksr4
16 | ZybJZNfdfRso3fA2oK1R8i5Ca8LK3V/UuAsXvG6/ikJtWsJ9jf+eYLou8lS6NVJO
17 | xDN/gxPcHrhToGqi1wfPwDQrNVofZcuQNklcdgZ1+XVuotfTCOXHrRoNmZX+HgkY
18 | gEtPG+r1VwSFowfYqyFXQ5CUeRa3JB7/ObF15WfGUYplbd3wQz/M3PLNKLvz5a1z
19 | LMNXDwN5Pvyb2epyO8LPJu4dGTB4jOGpYLUjG1UUqJo9Oa6D99rv6sId+8qjERtl
20 | ZZc1oaC0PKSzBmq+TpbR27B8Zra3gpoA+gavdRZj
21 | -----END CERTIFICATE-----
22 |
--------------------------------------------------------------------------------
/test/config_params/test_cofig_params.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import time
3 | import uuid
4 | import os
5 | import logging
6 | import json
7 | import socket
8 | from urllib.parse import urlparse
9 | from ..common import request_start_logging, \
10 | check_events_from_splunk, request_stop_logging, \
11 | start_log_producer_from_input
12 |
13 |
14 | @pytest.mark.parametrize("test_input,expected", [
15 | (None, 1)
16 | ])
17 |
18 | def test_splunk_index_1(setup, test_input, expected):
19 | '''
20 | Test that user specified index can successfully index the
21 | log stream from docker. If no index is specified, default
22 | index "main" will be used.
23 |
24 | Note that the HEC token on splunk side needs to be configured
25 | to accept the specified index.
26 | '''
27 | logging.getLogger().info("testing test_splunk_index input={0} \
28 | expected={1} event(s)".format(test_input, expected))
29 | u_id = str(uuid.uuid4())
30 |
31 | file_path = setup["fifo_path"]
32 | start_log_producer_from_input(file_path, [("test index", False)], u_id)
33 |
34 | index = test_input if test_input else "main"
35 | request_start_logging(file_path,
36 | setup["splunk_hec_url"],
37 | setup["splunk_hec_token"],
38 | options={"splunk-index": index})
39 |
40 |     # wait for 5 seconds to allow messages to be sent
41 | time.sleep(5)
42 | request_stop_logging(file_path)
43 |
44 | # check that events get to splunk
45 | events = check_events_from_splunk(index=index,
46 | id=u_id,
47 | start_time="-15m@m",
48 | url=setup["splunkd_url"],
49 | user=setup["splunk_user"],
50 | password=setup["splunk_password"])
51 | logging.getLogger().info("Splunk received %s events in the last minute" +
52 | " with u_id=%s",
53 | len(events), u_id)
54 | assert len(events) == expected
55 |
56 | @pytest.mark.parametrize("test_input,expected", [
57 | ("history", 1)
58 | ])
59 |
60 | def test_splunk_index_2(setup, test_input, expected):
61 | '''
62 | Test that user specified index can successfully index the
63 | log stream from docker. If no index is specified, default
64 | index "main" will be used.
65 |
66 | Note that the HEC token on splunk side needs to be configured
67 | to accept the specified index.
68 | '''
69 | logging.getLogger().info("testing test_splunk_index input={0} \
70 | expected={1} event(s)".format(test_input, expected))
71 | u_id = str(uuid.uuid4())
72 |
73 | file_path = setup["fifo_path"]
74 | start_log_producer_from_input(file_path, [("test index", False)], u_id)
75 |
76 | index = test_input if test_input else "main"
77 | request_start_logging(file_path,
78 | setup["splunk_hec_url"],
79 | setup["splunk_hec_token"],
80 | options={"splunk-index": index})
81 |
82 |     # wait for 5 seconds to allow messages to be sent
83 | time.sleep(5)
84 | request_stop_logging(file_path)
85 |
86 | # check that events get to splunk
87 | events = check_events_from_splunk(index=index,
88 | id=u_id,
89 | start_time="-15m@m",
90 | url=setup["splunkd_url"],
91 | user=setup["splunk_user"],
92 | password=setup["splunk_password"])
93 | logging.getLogger().info("Splunk received %s events in the last minute" +
94 | " with u_id=%s",
95 | len(events), u_id)
96 | assert len(events) == expected
97 |
98 |
99 |
100 | @pytest.mark.parametrize("test_input,expected", [
101 | (None, 1)
102 | ])
103 | def test_splunk_source_1(setup, test_input, expected):
104 | '''
105 | Test that docker logs can be indexed with the specified
106 | source successfully. If no source is specified, the default
107 | source from docker is used
108 | '''
109 |
110 | logging.getLogger().info("testing test_splunk_source input={0} \
111 | expected={1} event(s)".format(test_input, expected))
112 | u_id = str(uuid.uuid4())
113 |
114 | file_path = setup["fifo_path"]
115 | start_log_producer_from_input(file_path, [("test source", False)], u_id)
116 |
117 | options = {}
118 | if test_input:
119 | options = {"splunk-source": test_input}
120 |
121 | request_start_logging(file_path,
122 | setup["splunk_hec_url"],
123 | setup["splunk_hec_token"],
124 | options=options)
125 |
126 | source = test_input if test_input else "*"
127 |     # wait for 5 seconds to allow messages to be sent
128 | time.sleep(5)
129 | request_stop_logging(file_path)
130 |
131 | # check that events get to splunk
132 | events = check_events_from_splunk(id="source={0} {1}".format(source, u_id),
133 | start_time="-15m@m",
134 | url=setup["splunkd_url"],
135 | user=setup["splunk_user"],
136 | password=setup["splunk_password"])
137 | logging.getLogger().info("Splunk received %s events in the last minute" +
138 | " with u_id=%s",
139 | len(events), u_id)
140 | assert len(events) == expected
141 |
142 | @pytest.mark.parametrize("test_input,expected", [
143 | ("test_source", 1)
144 | ])
145 | def test_splunk_source_2(setup, test_input, expected):
146 | '''
147 | Test that docker logs can be indexed with the specified
148 | source successfully. If no source is specified, the default
149 | source from docker is used
150 | '''
151 |
152 | logging.getLogger().info("testing test_splunk_source input={0} \
153 | expected={1} event(s)".format(test_input, expected))
154 | u_id = str(uuid.uuid4())
155 |
156 | file_path = setup["fifo_path"]
157 | start_log_producer_from_input(file_path, [("test source", False)], u_id)
158 |
159 | options = {}
160 | if test_input:
161 | options = {"splunk-source": test_input}
162 |
163 | request_start_logging(file_path,
164 | setup["splunk_hec_url"],
165 | setup["splunk_hec_token"],
166 | options=options)
167 |
168 | source = test_input if test_input else "*"
169 |     # wait for 5 seconds to allow messages to be sent
170 | time.sleep(5)
171 | request_stop_logging(file_path)
172 |
173 | # check that events get to splunk
174 | events = check_events_from_splunk(id="source={0} {1}".format(source, u_id),
175 | start_time="-15m@m",
176 | url=setup["splunkd_url"],
177 | user=setup["splunk_user"],
178 | password=setup["splunk_password"])
179 | logging.getLogger().info("Splunk received %s events in the last minute" +
180 | " with u_id=%s",
181 | len(events), u_id)
182 | assert len(events) == expected
183 |
184 |
185 |
186 | @pytest.mark.parametrize("test_input,expected", [
187 | (None, 1)
188 | ])
189 | def test_splunk_source_type_1(setup, test_input, expected):
190 | '''
191 | Test that docker logs can be indexed with the specified
192 |     sourcetype successfully. If no sourcetype is specified, the default
193 | "splunk_connect_docker" is used
194 | '''
195 |
196 | logging.getLogger().info("testing test_splunk_source_type input={0} \
197 | expected={1} event(s)".format(test_input, expected))
198 | u_id = str(uuid.uuid4())
199 |
200 | file_path = setup["fifo_path"]
201 | start_log_producer_from_input(file_path, [("test source", False)], u_id)
202 |
203 | options = {}
204 | if test_input:
205 | options = {"splunk-sourcetype": test_input}
206 | request_start_logging(file_path,
207 | setup["splunk_hec_url"],
208 | setup["splunk_hec_token"],
209 | options=options)
210 |
211 | sourcetype = test_input if test_input else "splunk_connect_docker"
212 |
213 |     # wait for 5 seconds to allow messages to be sent
214 | time.sleep(5)
215 | request_stop_logging(file_path)
216 |
217 | # check that events get to splunk
218 | events = check_events_from_splunk(id="sourcetype={0} {1}"
219 | .format(sourcetype, u_id),
220 | start_time="-15m@m",
221 | url=setup["splunkd_url"],
222 | user=setup["splunk_user"],
223 | password=setup["splunk_password"])
224 | logging.getLogger().info("Splunk received %s events in the last minute" +
225 | " with u_id=%s",
226 | len(events), u_id)
227 | assert len(events) == expected
228 |
229 | @pytest.mark.parametrize("test_input,expected", [
230 | ("test_source_type", 1)
231 | ])
232 | def test_splunk_source_type_2(setup, test_input, expected):
233 | '''
234 | Test that docker logs can be indexed with the specified
235 |     sourcetype successfully. If no sourcetype is specified, the default
236 | "splunk_connect_docker" is used
237 | '''
238 |
239 | logging.getLogger().info("testing test_splunk_source_type input={0} \
240 | expected={1} event(s)".format(test_input, expected))
241 | u_id = str(uuid.uuid4())
242 |
243 | file_path = setup["fifo_path"]
244 | start_log_producer_from_input(file_path, [("test source", False)], u_id)
245 |
246 | options = {}
247 | if test_input:
248 | options = {"splunk-sourcetype": test_input}
249 | request_start_logging(file_path,
250 | setup["splunk_hec_url"],
251 | setup["splunk_hec_token"],
252 | options=options)
253 |
254 | sourcetype = test_input if test_input else "splunk_connect_docker"
255 |
256 |     # wait for 5 seconds to allow messages to be sent
257 | time.sleep(5)
258 | request_stop_logging(file_path)
259 |
260 | # check that events get to splunk
261 | events = check_events_from_splunk(id="sourcetype={0} {1}"
262 | .format(sourcetype, u_id),
263 | start_time="-15m@m",
264 | url=setup["splunkd_url"],
265 | user=setup["splunk_user"],
266 | password=setup["splunk_password"])
267 | logging.getLogger().info("Splunk received %s events in the last minute" +
268 | " with u_id=%s",
269 | len(events), u_id)
270 | assert len(events) == expected
271 |
272 |
273 | def test_splunk_ca(setup):
274 | '''
275 | Test that docker logging plugin can use the server certificate to
276 | verify the server identity
277 |
278 | The server cert used here is the default CA shipping in splunk
279 | '''
280 | logging.getLogger().info("testing test_splunk_ca")
281 | u_id = str(uuid.uuid4())
282 |
283 | file_path = setup["fifo_path"]
284 | start_log_producer_from_input(file_path, [("test ca", False)], u_id)
285 | current_dir = os.path.dirname(os.path.realpath(__file__))
286 |
287 | options = {
288 | "splunk-insecureskipverify": "false",
289 | "splunk-capath": current_dir + "/cacert.pem",
290 | "splunk-caname": "SplunkServerDefaultCert"
291 | }
292 |
293 | parsed_url = urlparse(setup["splunk_hec_url"])
294 | hec_ip = parsed_url.hostname
295 | hec_port = parsed_url.port
296 |
297 | # check if it is an IP address
298 | try:
299 | socket.inet_aton(hec_ip)
300 | except socket.error:
301 | # if it is not an IP address, it is a hostname
302 | # do a hostname to IP lookup
303 | hec_ip = socket.gethostbyname(hec_ip)
304 |
305 | splunk_hec_url = "https://SplunkServerDefaultCert:{0}".format(hec_port)
306 |
307 | if "SplunkServerDefaultCert" not in open('/etc/hosts').read():
308 | file = open("/etc/hosts", "a")
309 | file.write("{0}\tSplunkServerDefaultCert".format(hec_ip))
310 | file.close()
311 |
312 | request_start_logging(file_path,
313 | splunk_hec_url,
314 | setup["splunk_hec_token"],
315 | options=options)
316 |
317 |     # wait for 5 seconds to allow messages to be sent
318 | time.sleep(5)
319 | request_stop_logging(file_path)
320 |
321 | # check that events get to splunk
322 | events = check_events_from_splunk(id=u_id,
323 | start_time="-15m@m",
324 | url=setup["splunkd_url"],
325 | user=setup["splunk_user"],
326 | password=setup["splunk_password"])
327 | logging.getLogger().info("Splunk received %s events in the last minute" +
328 | " with u_id=%s",
329 | len(events), u_id)
330 | assert len(events) == 1
331 |
332 |
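The IP-vs-hostname check used in `test_splunk_ca` (try `inet_aton`, fall back to a DNS lookup) can be factored into a small helper; a sketch, with an illustrative helper name:

```python
import socket

def to_ip(host):
    """Return host unchanged if it is already an IPv4 address, else resolve it."""
    try:
        socket.inet_aton(host)
        return host
    except OSError:  # socket.error is an alias of OSError on Python 3
        return socket.gethostbyname(host)

assert to_ip("127.0.0.1") == "127.0.0.1"  # already an IP: returned as-is
```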
333 | @pytest.mark.parametrize("test_input,expected", [
334 | ("json", 1)
335 | ])
336 | def test_splunk_format_json(setup, test_input, expected):
337 | '''
338 | Test that different splunk format: json, raw, inline can be handled
339 | correctly.
340 |
341 | json: tries to parse the given message in to a json object and send both
342 | source and log message to splunk
343 | inline: treats the given message as a string and wrap it in a json object
344 | with source and send the json string to splunk
345 | raw: sends the raw message to splunk
346 | '''
347 | logging.getLogger().info("testing test_splunk_format input={0} \
348 | expected={1} event(s)".format(test_input, expected))
349 | u_id = str(uuid.uuid4())
350 |
351 | file_path = setup["fifo_path"]
352 | test_string = '{"test": true, "id": "' + u_id + '"}'
353 | start_log_producer_from_input(file_path, [(test_string, False)], u_id)
354 |
355 | options = {}
356 | if test_input:
357 | options = {"splunk-format": test_input}
358 |
359 | request_start_logging(file_path,
360 | setup["splunk_hec_url"],
361 | setup["splunk_hec_token"],
362 | options=options)
363 |
364 |     # wait for 5 seconds to allow messages to be sent
365 | time.sleep(5)
366 | request_stop_logging(file_path)
367 |
368 | # check that events get to splunk
369 | events = check_events_from_splunk(id=u_id,
370 | start_time="-15m@m",
371 | url=setup["splunkd_url"],
372 | user=setup["splunk_user"],
373 | password=setup["splunk_password"])
374 | logging.getLogger().info("Splunk received %s events in the last minute" +
375 | " with u_id=%s",
376 | len(events), u_id)
377 | assert len(events) == expected
378 |
379 | event = events[0]["_raw"]
380 | if test_input == "json" or test_input == "inline":
381 | try:
382 | parsed_event = json.loads(event)
383 | except Exception:
384 | pytest.fail("{0} can't be parsed to json"
385 | .format(event))
386 | if test_input == "json":
387 | assert parsed_event["line"] == json.loads(test_string)
388 | elif test_input == "inline":
389 | assert parsed_event["line"] == test_string
390 | elif test_input == "raw":
391 | assert event == test_string
392 |
393 | @pytest.mark.parametrize("test_input,expected", [
394 | ("inline", 1)
395 | ])
396 | def test_splunk_format_inline(setup, test_input, expected):
397 | '''
398 | Test that different splunk format: json, raw, inline can be handled
399 | correctly.
400 |
401 | json: tries to parse the given message in to a json object and send both
402 | source and log message to splunk
403 | inline: treats the given message as a string and wrap it in a json object
404 | with source and send the json string to splunk
405 | raw: sends the raw message to splunk
406 | '''
407 | logging.getLogger().info("testing test_splunk_format input={0} \
408 | expected={1} event(s)".format(test_input, expected))
409 | u_id = str(uuid.uuid4())
410 |
411 | file_path = setup["fifo_path"]
412 | test_string = '{"test": true, "id": "' + u_id + '"}'
413 | start_log_producer_from_input(file_path, [(test_string, False)], u_id)
414 |
415 | options = {}
416 | if test_input:
417 | options = {"splunk-format": test_input}
418 |
419 | request_start_logging(file_path,
420 | setup["splunk_hec_url"],
421 | setup["splunk_hec_token"],
422 | options=options)
423 |
424 |     # wait for 5 seconds to allow messages to be sent
425 | time.sleep(5)
426 | request_stop_logging(file_path)
427 |
428 | # check that events get to splunk
429 | events = check_events_from_splunk(id=u_id,
430 | start_time="-15m@m",
431 | url=setup["splunkd_url"],
432 | user=setup["splunk_user"],
433 | password=setup["splunk_password"])
434 | logging.getLogger().info("Splunk received %s events in the last minute" +
435 | " with u_id=%s",
436 | len(events), u_id)
437 | assert len(events) == expected
438 |
439 | event = events[0]["_raw"]
440 | if test_input == "json" or test_input == "inline":
441 | try:
442 | parsed_event = json.loads(event)
443 | except Exception:
444 | pytest.fail("{0} can't be parsed to json"
445 | .format(event))
446 | if test_input == "json":
447 | assert parsed_event["line"] == json.loads(test_string)
448 | elif test_input == "inline":
449 | assert parsed_event["line"] == test_string
450 | elif test_input == "raw":
451 | assert event == test_string
452 |
453 | @pytest.mark.parametrize("test_input,expected", [
454 | ("raw", 1)
455 | ])
456 | def test_splunk_format_raw(setup, test_input, expected):
457 | '''
458 | Test that different splunk format: json, raw, inline can be handled
459 | correctly.
460 |
461 | json: tries to parse the given message in to a json object and send both
462 | source and log message to splunk
463 | inline: treats the given message as a string and wrap it in a json object
464 | with source and send the json string to splunk
465 | raw: sends the raw message to splunk
466 | '''
467 | logging.getLogger().info("testing test_splunk_format input={0} \
468 | expected={1} event(s)".format(test_input, expected))
469 | u_id = str(uuid.uuid4())
470 |
471 | file_path = setup["fifo_path"]
472 | test_string = '{"test": true, "id": "' + u_id + '"}'
473 | start_log_producer_from_input(file_path, [(test_string, False)], u_id)
474 |
475 | options = {}
476 | if test_input:
477 | options = {"splunk-format": test_input}
478 |
479 | request_start_logging(file_path,
480 | setup["splunk_hec_url"],
481 | setup["splunk_hec_token"],
482 | options=options)
483 |
484 |     # wait for 5 seconds to allow messages to be sent
485 | time.sleep(5)
486 | request_stop_logging(file_path)
487 |
488 | # check that events get to splunk
489 | events = check_events_from_splunk(id=u_id,
490 | start_time="-15m@m",
491 | url=setup["splunkd_url"],
492 | user=setup["splunk_user"],
493 | password=setup["splunk_password"])
494 | logging.getLogger().info("Splunk received %s events in the last minute" +
495 | " with u_id=%s",
496 | len(events), u_id)
497 | assert len(events) == expected
498 |
499 | event = events[0]["_raw"]
500 | if test_input == "json" or test_input == "inline":
501 | try:
502 | parsed_event = json.loads(event)
503 | except Exception:
504 | pytest.fail("{0} can't be parsed to json"
505 | .format(event))
506 | if test_input == "json":
507 | assert parsed_event["line"] == json.loads(test_string)
508 | elif test_input == "inline":
509 | assert parsed_event["line"] == test_string
510 | elif test_input == "raw":
511 | assert event == test_string
512 |
513 |
514 | @pytest.mark.parametrize("test_input, has_exception", [
515 | ("true", True)
516 | ])
517 | def test_splunk_verify_connection(setup, test_input, has_exception):
518 | '''
519 |     Test that the splunk-verify-connection option works as expected.
520 | 
521 |     In this test, given a wrong splunk hec endpoint/token:
522 |     - if splunk-verify-connection == false, the plugin will
523 |       NOT throw any exception at startup
524 |     - if splunk-verify-connection == true, the plugin will
525 |       error out at startup
526 | '''
527 | logging.getLogger().info("testing splunk_verify_connection")
528 | file_path = setup["fifo_path"]
529 | u_id = str(uuid.uuid4())
530 | start_log_producer_from_input(file_path, [("test verify", False)], u_id)
531 | options = {"splunk-verify-connection": test_input}
532 | try:
533 | request_start_logging(file_path,
534 | "https://localhost:8088",
535 | "00000-00000-0000-00000", # this should fail
536 | options=options)
537 |
538 | assert not has_exception
539 | except Exception as ex:
540 | assert has_exception
541 |
542 |
543 | @pytest.mark.parametrize("test_input, has_exception", [
544 | ("-1", True)
545 | ])
546 | def test_splunk_gzip_1(setup, test_input, has_exception):
547 | '''
548 |     Test that HTTP events can be sent to Splunk with gzip enabled.
549 | 
550 |     Acceptable gzip levels are 0 - 9. A gzip level outside this range
551 |     will throw an exception.
552 | '''
553 | logging.getLogger().info("testing test_splunk_gzip")
554 | u_id = str(uuid.uuid4())
555 |
556 | file_path = setup["fifo_path"]
557 | start_log_producer_from_input(file_path, [("test gzip", False)], u_id)
558 |
559 | options = {"splunk-gzip": "true", "splunk-gzip-level": test_input}
560 |
561 | try:
562 | request_start_logging(file_path,
563 | setup["splunk_hec_url"],
564 | setup["splunk_hec_token"],
565 | options=options)
566 | except Exception as ex:
567 | assert has_exception
568 |
569 | if not has_exception:
570 |         # wait for 5 seconds to allow messages to be sent
571 | time.sleep(5)
572 | request_stop_logging(file_path)
573 |
574 | # check that events get to splunk
575 | events = check_events_from_splunk(id=u_id,
576 | start_time="-15m@m",
577 | url=setup["splunkd_url"],
578 | user=setup["splunk_user"],
579 | password=setup["splunk_password"])
580 | logging.getLogger().info("Splunk received %s events in the last " +
581 | "minute with u_id=%s",
582 | len(events), u_id)
583 | assert len(events) == 1
584 |
585 | @pytest.mark.parametrize("test_input, has_exception", [
586 | ("0", False)
587 | ])
588 | def test_splunk_gzip_2(setup, test_input, has_exception):
589 | '''
590 |     Test that HTTP events can be sent to Splunk with gzip enabled.
591 | 
592 |     Acceptable gzip levels are 0 - 9. A gzip level outside this range
593 |     will throw an exception.
594 | '''
595 | logging.getLogger().info("testing test_splunk_gzip")
596 | u_id = str(uuid.uuid4())
597 |
598 | file_path = setup["fifo_path"]
599 | start_log_producer_from_input(file_path, [("test gzip", False)], u_id)
600 |
601 | options = {"splunk-gzip": "true", "splunk-gzip-level": test_input}
602 |
603 | try:
604 | request_start_logging(file_path,
605 | setup["splunk_hec_url"],
606 | setup["splunk_hec_token"],
607 | options=options)
608 | except Exception as ex:
609 | assert has_exception
610 |
611 | if not has_exception:
612 |         # wait for 5 seconds to allow messages to be sent
613 | time.sleep(5)
614 | request_stop_logging(file_path)
615 |
616 | # check that events get to splunk
617 | events = check_events_from_splunk(id=u_id,
618 | start_time="-15m@m",
619 | url=setup["splunkd_url"],
620 | user=setup["splunk_user"],
621 | password=setup["splunk_password"])
622 | logging.getLogger().info("Splunk received %s events in the last " +
623 | "minute with u_id=%s",
624 | len(events), u_id)
625 | assert len(events) == 1
626 |
627 | @pytest.mark.parametrize("test_input, has_exception", [
628 | ("5", False)
629 | ])
630 | def test_splunk_gzip_3(setup, test_input, has_exception):
631 | '''
632 |     Test that HTTP events can be sent to Splunk with gzip enabled.
633 | 
634 |     Acceptable gzip levels are 0 - 9. A gzip level outside this range
635 |     will throw an exception.
636 | '''
637 | logging.getLogger().info("testing test_splunk_gzip")
638 | u_id = str(uuid.uuid4())
639 |
640 | file_path = setup["fifo_path"]
641 | start_log_producer_from_input(file_path, [("test gzip", False)], u_id)
642 |
643 | options = {"splunk-gzip": "true", "splunk-gzip-level": test_input}
644 |
645 | try:
646 | request_start_logging(file_path,
647 | setup["splunk_hec_url"],
648 | setup["splunk_hec_token"],
649 | options=options)
650 | except Exception as ex:
651 | assert has_exception
652 |
653 | if not has_exception:
654 |         # wait for 5 seconds to allow messages to be sent
655 | time.sleep(5)
656 | request_stop_logging(file_path)
657 |
658 | # check that events get to splunk
659 | events = check_events_from_splunk(id=u_id,
660 | start_time="-15m@m",
661 | url=setup["splunkd_url"],
662 | user=setup["splunk_user"],
663 | password=setup["splunk_password"])
664 | logging.getLogger().info("Splunk received %s events in the last " +
665 | "minute with u_id=%s",
666 | len(events), u_id)
667 | assert len(events) == 1
668 |
669 | @pytest.mark.parametrize("test_input, has_exception", [
670 | ("9", False)
671 | ])
672 | def test_splunk_gzip_4(setup, test_input, has_exception):
673 | '''
674 |     Test that HTTP events can be sent to Splunk with gzip enabled.
675 | 
676 |     Acceptable gzip levels are 0 - 9. A gzip level outside this range
677 |     will throw an exception.
678 | '''
679 | logging.getLogger().info("testing test_splunk_gzip")
680 | u_id = str(uuid.uuid4())
681 |
682 | file_path = setup["fifo_path"]
683 | start_log_producer_from_input(file_path, [("test gzip", False)], u_id)
684 |
685 | options = {"splunk-gzip": "true", "splunk-gzip-level": test_input}
686 |
687 | try:
688 | request_start_logging(file_path,
689 | setup["splunk_hec_url"],
690 | setup["splunk_hec_token"],
691 | options=options)
692 | except Exception as ex:
693 | assert has_exception
694 |
695 | if not has_exception:
696 |         # wait for 5 seconds to allow messages to be sent
697 | time.sleep(5)
698 | request_stop_logging(file_path)
699 |
700 | # check that events get to splunk
701 | events = check_events_from_splunk(id=u_id,
702 | start_time="-15m@m",
703 | url=setup["splunkd_url"],
704 | user=setup["splunk_user"],
705 | password=setup["splunk_password"])
706 | logging.getLogger().info("Splunk received %s events in the last " +
707 | "minute with u_id=%s",
708 | len(events), u_id)
709 | assert len(events) == 1
710 |
711 | @pytest.mark.parametrize("test_input, has_exception", [
712 | ("10", True)
713 | ])
714 | def test_splunk_gzip_5(setup, test_input, has_exception):
715 | '''
716 |     Test that HTTP events can be sent to Splunk with gzip enabled.
717 | 
718 |     Acceptable gzip levels are 0 - 9. A gzip level outside this range
719 |     will throw an exception.
720 | '''
721 | logging.getLogger().info("testing test_splunk_gzip")
722 | u_id = str(uuid.uuid4())
723 |
724 | file_path = setup["fifo_path"]
725 | start_log_producer_from_input(file_path, [("test gzip", False)], u_id)
726 |
727 | options = {"splunk-gzip": "true", "splunk-gzip-level": test_input}
728 |
729 | try:
730 | request_start_logging(file_path,
731 | setup["splunk_hec_url"],
732 | setup["splunk_hec_token"],
733 | options=options)
734 | except Exception as ex:
735 | assert has_exception
736 |
737 | if not has_exception:
738 |         # wait for 5 seconds to allow messages to be sent
739 | time.sleep(5)
740 | request_stop_logging(file_path)
741 |
742 | # check that events get to splunk
743 | events = check_events_from_splunk(id=u_id,
744 | start_time="-15m@m",
745 | url=setup["splunkd_url"],
746 | user=setup["splunk_user"],
747 | password=setup["splunk_password"])
748 | logging.getLogger().info("Splunk received %s events in the last " +
749 | "minute with u_id=%s",
750 | len(events), u_id)
751 | assert len(events) == 1
752 |
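The five gzip tests above all exercise the same boundary from both sides. Below is a minimal sketch of that boundary, assuming the plugin validates splunk-gzip-level before compressing; `compress_payload` is a hypothetical helper, not the plugin's actual Go implementation.

```python
import gzip

def compress_payload(payload, level):
    # Hypothetical validation mirroring what the tests probe:
    # levels 0-9 are accepted, anything outside is rejected up front.
    if not 0 <= level <= 9:
        raise ValueError("splunk-gzip-level must be between 0 and 9")
    return gzip.compress(payload, compresslevel=level)

body = b'{"event": "test gzip"}'
for level in (0, 5, 9):
    # every accepted level must round-trip losslessly
    assert gzip.decompress(compress_payload(body, level)) == body
for bad in (-1, 10):
    try:
        compress_payload(body, bad)
        raise AssertionError("level {0} should be rejected".format(bad))
    except ValueError:
        pass
```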
753 |
754 | def test_splunk_tag(setup):
755 | '''
756 | Test the users can add customized tag to the events and splunk
757 | preserves the tags added.
758 | '''
759 | logging.getLogger().info("testing test_splunk_tag")
760 | u_id = str(uuid.uuid4())
761 |
762 | file_path = setup["fifo_path"]
763 | start_log_producer_from_input(file_path, [("test tag", False)], u_id)
764 |
765 | options = {"tag": "test-tag"}
766 | request_start_logging(file_path,
767 | setup["splunk_hec_url"],
768 | setup["splunk_hec_token"],
769 | options=options)
770 |
771 |     # wait for 5 seconds to allow messages to be sent
772 | time.sleep(5)
773 | request_stop_logging(file_path)
774 |
775 | # check that events get to splunk
776 | events = check_events_from_splunk(id=u_id,
777 | start_time="-15m@m",
778 | url=setup["splunkd_url"],
779 | user=setup["splunk_user"],
780 | password=setup["splunk_password"])
781 | logging.getLogger().info("Splunk received %s events in the last minute" +
782 | " with u_id=%s",
783 | len(events), u_id)
784 | assert len(events) == 1
785 | parsed_event = json.loads(events[0]["_raw"])
786 | assert parsed_event["tag"] == options["tag"]
787 |
788 |
789 | def test_splunk_telemetry(setup):
790 | '''
791 | Test that telemetry event is sent to _introspection index.
792 | '''
793 | logging.getLogger().info("testing telemetry")
794 | u_id = str(uuid.uuid4())
795 |
796 | file_path = setup["fifo_path"]
797 | start_log_producer_from_input(file_path, [("test telemetry", False)], u_id)
798 |
799 | options = {"splunk-sourcetype": "telemetry"}
800 |
801 | request_start_logging(file_path,
802 | setup["splunk_hec_url"],
803 | setup["splunk_hec_token"],
804 | options=options)
805 |
806 |     # wait for 5 seconds to allow messages to be sent
807 | time.sleep(5)
808 | request_stop_logging(file_path)
809 |
810 | index = "_introspection"
811 | sourcetype = "telemetry"
812 | 
814 | # check that events get to splunk
815 | events = check_events_from_splunk(index=index,
816 | id="data.sourcetype={0}".format(sourcetype),
817 | start_time="-5m@m",
818 | url=setup["splunkd_url"],
819 | user=setup["splunk_user"],
820 | password=setup["splunk_password"])
821 | logging.getLogger().info("Splunk received %s events in the last minute" +
822 | " with component=app.connect.docker",
823 | len(events))
824 | assert len(events) == 1
825 |
--------------------------------------------------------------------------------
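The per-format assertions repeated in the tests above can be read as one dispatch. A condensed sketch of the same checks follows; `verify_event_format` is a hypothetical helper, and the envelope with a "line" field matches what the assertions above expect.

```python
import json

def verify_event_format(event, test_string, splunk_format):
    # "json" and "inline" wrap the message in a JSON envelope whose
    # "line" field carries the payload; "raw" sends the string as-is.
    if splunk_format in ("json", "inline"):
        envelope = json.loads(event)  # raises ValueError if not JSON
        if splunk_format == "json":
            return envelope["line"] == json.loads(test_string)
        return envelope["line"] == test_string
    return event == test_string

payload = '{"test": true}'
assert verify_event_format('{"line": {"test": true}, "source": "stdout"}',
                           payload, "json")
assert verify_event_format('{"line": "{\\"test\\": true}"}', payload, "inline")
assert verify_event_format(payload, payload, "raw")
```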
/test/conftest.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright 2018 Splunk, Inc.
3 |
4 | Licensed under the Apache License, Version 2.0 (the "License");
5 | you may not use this file except in compliance with the License.
6 | You may obtain a copy of the License at
7 |
8 | http://www.apache.org/licenses/LICENSE-2.0
9 |
10 | Unless required by applicable law or agreed to in writing, software
11 | distributed under the License is distributed on an "AS IS" BASIS,
12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | See the License for the specific language governing permissions and
14 | limitations under the License.
15 | """
16 |
17 | import pytest
18 | import time
19 | from common import start_logging_plugin, \
20 | kill_logging_plugin
21 |
22 |
23 | def pytest_addoption(parser):
24 | parser.addoption("--splunkd-url",
25 | help="splunkd url used to send test data to. \
26 | Eg: https://localhost:8089",
27 | default="https://localhost:8089")
28 | parser.addoption("--splunk-user",
29 | help="splunk username",
30 | default="admin")
31 | parser.addoption("--splunk-password",
32 | help="splunk user password",
33 | default="changeme")
34 | parser.addoption("--splunk-hec-url",
35 | help="splunk hec endpoint used by logging plugin.",
36 | default="https://localhost:8088")
37 | parser.addoption("--splunk-hec-token",
38 | help="splunk hec token for authentication.",
39 |                      required=True)
40 | parser.addoption("--docker-plugin-path",
41 | help="docker plugin binary path",
42 | required=True)
43 | parser.addoption("--fifo-path",
44 | help="full file path to the fifo",
45 | required=True)
46 |
47 |
48 | @pytest.fixture(scope="function")
49 | def setup(request):
50 | config = {}
51 | config["splunkd_url"] = request.config.getoption("--splunkd-url")
52 | config["splunk_hec_url"] = request.config.getoption("--splunk-hec-url")
53 | config["splunk_hec_token"] = request.config.getoption("--splunk-hec-token")
54 | config["splunk_user"] = request.config.getoption("--splunk-user")
55 | config["splunk_password"] = request.config.getoption("--splunk-password")
56 | config["plugin_path"] = request.config.getoption("--docker-plugin-path")
57 | config["fifo_path"] = request.config.getoption("--fifo-path")
58 |
59 | kill_logging_plugin(config["plugin_path"])
60 | start_logging_plugin(config["plugin_path"])
61 | time.sleep(2)
62 | request.addfinalizer(lambda: teardown_method(config["plugin_path"]))
63 |
64 | return config
65 |
66 |
67 | def teardown_method(plugin_path):
68 | kill_logging_plugin(plugin_path)
69 |
--------------------------------------------------------------------------------
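The options registered in pytest_addoption above follow the usual parser pattern: the three connection flags carry defaults, while the token and the two paths are required. An equivalent argparse sketch (the token and path values below are placeholders):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--splunkd-url", default="https://localhost:8089")
parser.add_argument("--splunk-user", default="admin")
parser.add_argument("--splunk-password", default="changeme")
parser.add_argument("--splunk-hec-url", default="https://localhost:8088")
parser.add_argument("--splunk-hec-token", required=True)
parser.add_argument("--docker-plugin-path", required=True)
parser.add_argument("--fifo-path", required=True)

# Only the required flags need to be passed explicitly.
args = parser.parse_args([
    "--splunk-hec-token", "00000000-0000-0000-0000-000000000000",
    "--docker-plugin-path", "/tmp/splunk-logging-plugin",
    "--fifo-path", "/tmp/splunk-logging-plugin.fifo",
])
assert args.splunkd_url == "https://localhost:8089"  # default applied
assert args.splunk_user == "admin"
```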
/test/malformed_data/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/splunk/docker-logging-plugin/f376affbeb8fa210d38cc1657014f1a9cbf2fb79/test/malformed_data/__init__.py
--------------------------------------------------------------------------------
/test/malformed_data/test_malformed_events.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright 2018 Splunk, Inc.
3 |
4 | Licensed under the Apache License, Version 2.0 (the "License");
5 | you may not use this file except in compliance with the License.
6 | You may obtain a copy of the License at
7 |
8 | http://www.apache.org/licenses/LICENSE-2.0
9 |
10 | Unless required by applicable law or agreed to in writing, software
11 | distributed under the License is distributed on an "AS IS" BASIS,
12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | See the License for the specific language governing permissions and
14 | limitations under the License.
15 | """
16 |
17 | import pytest
18 | import time
19 | import uuid
20 | import logging
21 | from ..common import request_start_logging, \
22 | check_events_from_splunk, request_stop_logging, \
23 | start_log_producer_from_input
24 |
25 |
26 | @pytest.mark.parametrize("test_input,expected", [
27 | ([("", False)], 0) # should not be sent to splunk
28 | ])
29 | def test_malformed_empty_string_1(setup, test_input, expected):
30 | '''
31 |     Test that the logging plugin can handle various types of input correctly.
32 |     Expected behavior is that:
33 |     * Empty strings or strings with only spaces should be ignored
34 |     * Non-empty strings should be sent to Splunk
35 |     * Non-UTF-8 decodable strings should still be sent to Splunk
36 | '''
37 | logging.getLogger().info("testing test_malformed_empty_string input={0} \
38 | expected={1} event(s)".format(test_input, expected))
39 | u_id = str(uuid.uuid4())
40 |
41 | file_path = setup["fifo_path"]
42 | start_log_producer_from_input(file_path, test_input, u_id)
43 |
44 | request_start_logging(file_path,
45 | setup["splunk_hec_url"],
46 | setup["splunk_hec_token"])
47 |
48 | # wait for 10 seconds to allow messages to be sent
49 | time.sleep(10)
50 | request_stop_logging(file_path)
51 |
52 | # check that events get to splunk
53 | events = check_events_from_splunk(id=u_id,
54 | start_time="-15m@m",
55 | url=setup["splunkd_url"],
56 | user=setup["splunk_user"],
57 | password=setup["splunk_password"])
58 | logging.getLogger().info("Splunk received %s events in the last minute " +
59 | "with u_id=%s",
60 | len(events), u_id)
61 | assert len(events) == expected
62 |
63 | @pytest.mark.parametrize("test_input,expected", [
64 | ([(" ", False)], 0) # should not be sent to splunk
65 | ])
66 | def test_malformed_empty_string_2(setup, test_input, expected):
67 | '''
68 |     Test that the logging plugin can handle various types of input correctly.
69 |     Expected behavior is that:
70 |     * Empty strings or strings with only spaces should be ignored
71 |     * Non-empty strings should be sent to Splunk
72 |     * Non-UTF-8 decodable strings should still be sent to Splunk
73 | '''
74 | logging.getLogger().info("testing test_malformed_empty_string input={0} \
75 | expected={1} event(s)".format(test_input, expected))
76 | u_id = str(uuid.uuid4())
77 |
78 | file_path = setup["fifo_path"]
79 | start_log_producer_from_input(file_path, test_input, u_id)
80 |
81 | request_start_logging(file_path,
82 | setup["splunk_hec_url"],
83 | setup["splunk_hec_token"])
84 |
85 | # wait for 10 seconds to allow messages to be sent
86 | time.sleep(10)
87 | request_stop_logging(file_path)
88 |
89 | # check that events get to splunk
90 | events = check_events_from_splunk(id=u_id,
91 | start_time="-15m@m",
92 | url=setup["splunkd_url"],
93 | user=setup["splunk_user"],
94 | password=setup["splunk_password"])
95 | logging.getLogger().info("Splunk received %s events in the last minute " +
96 | "with u_id=%s",
97 | len(events), u_id)
98 | assert len(events) == expected
99 |
100 | @pytest.mark.parametrize("test_input,expected", [
101 | ([("\xF0\xA4\xAD", False)], 1) # non utf-8 decodable chars
102 |                                    # should still be sent to splunk
104 | ])
106 | def test_malformed_empty_string_3(setup, test_input, expected):
107 | '''
108 |     Test that the logging plugin can handle various types of input correctly.
109 |     Expected behavior is that:
110 |     * Empty strings or strings with only spaces should be ignored
111 |     * Non-empty strings should be sent to Splunk
112 |     * Non-UTF-8 decodable strings should still be sent to Splunk
113 | '''
114 | logging.getLogger().info("testing test_malformed_empty_string input={0} \
115 | expected={1} event(s)".format(test_input, expected))
116 | u_id = str(uuid.uuid4())
117 |
118 | file_path = setup["fifo_path"]
119 | start_log_producer_from_input(file_path, test_input, u_id)
120 |
121 | request_start_logging(file_path,
122 | setup["splunk_hec_url"],
123 | setup["splunk_hec_token"])
124 |
125 | # wait for 10 seconds to allow messages to be sent
126 | time.sleep(10)
127 | request_stop_logging(file_path)
128 |
129 | # check that events get to splunk
130 | events = check_events_from_splunk(id=u_id,
131 | start_time="-15m@m",
132 | url=setup["splunkd_url"],
133 | user=setup["splunk_user"],
134 | password=setup["splunk_password"])
135 | logging.getLogger().info("Splunk received %s events in the last minute " +
136 | "with u_id=%s",
137 | len(events), u_id)
138 | assert len(events) == expected
139 |
140 | @pytest.mark.parametrize("test_input,expected", [
141 |     ([("hello", False)], 1) # normal string should always be sent to splunk
142 | ])
144 | def test_malformed_empty_string_4(setup, test_input, expected):
145 | '''
146 |     Test that the logging plugin can handle various types of input correctly.
147 |     Expected behavior is that:
148 |     * Empty strings or strings with only spaces should be ignored
149 |     * Non-empty strings should be sent to Splunk
150 |     * Non-UTF-8 decodable strings should still be sent to Splunk
151 | '''
152 | logging.getLogger().info("testing test_malformed_empty_string input={0} \
153 | expected={1} event(s)".format(test_input, expected))
154 | u_id = str(uuid.uuid4())
155 |
156 | file_path = setup["fifo_path"]
157 | start_log_producer_from_input(file_path, test_input, u_id)
158 |
159 | request_start_logging(file_path,
160 | setup["splunk_hec_url"],
161 | setup["splunk_hec_token"])
162 |
163 | # wait for 10 seconds to allow messages to be sent
164 | time.sleep(10)
165 | request_stop_logging(file_path)
166 |
167 | # check that events get to splunk
168 | events = check_events_from_splunk(id=u_id,
169 | start_time="-15m@m",
170 | url=setup["splunkd_url"],
171 | user=setup["splunk_user"],
172 | password=setup["splunk_password"])
173 | logging.getLogger().info("Splunk received %s events in the last minute " +
174 | "with u_id=%s",
175 | len(events), u_id)
176 |     assert len(events) == expected
177 | 
178 | @pytest.mark.parametrize("test_input,expected", [
179 |     ([("{'test': 'incomplete}", False)], 1) # malformed json string should
180 |                                             # be sent to splunk
181 | ])
182 | def test_malformed_empty_string_5(setup, test_input, expected):
183 | '''
184 |     Test that the logging plugin can handle various types of input correctly.
185 |     Expected behavior is that:
186 |     * Empty strings or strings with only spaces should be ignored
187 |     * Non-empty strings should be sent to Splunk
188 |     * Non-UTF-8 decodable strings should still be sent to Splunk
189 | '''
190 | logging.getLogger().info("testing test_malformed_empty_string input={0} \
191 | expected={1} event(s)".format(test_input, expected))
192 | u_id = str(uuid.uuid4())
193 |
194 | file_path = setup["fifo_path"]
195 | start_log_producer_from_input(file_path, test_input, u_id)
196 |
197 | request_start_logging(file_path,
198 | setup["splunk_hec_url"],
199 | setup["splunk_hec_token"])
200 |
201 | # wait for 10 seconds to allow messages to be sent
202 | time.sleep(10)
203 | request_stop_logging(file_path)
204 |
205 | # check that events get to splunk
206 | events = check_events_from_splunk(id=u_id,
207 | start_time="-15m@m",
208 | url=setup["splunkd_url"],
209 | user=setup["splunk_user"],
210 | password=setup["splunk_password"])
211 | logging.getLogger().info("Splunk received %s events in the last minute " +
212 | "with u_id=%s",
213 | len(events), u_id)
214 | assert len(events) == expected
215 |
--------------------------------------------------------------------------------
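The five malformed-input tests above pin down a single filtering rule. A sketch of that rule as a predicate (`should_forward` is hypothetical; the real filtering happens inside the plugin's message processor):

```python
def should_forward(message):
    # Empty or whitespace-only messages are dropped; everything else,
    # including non-UTF-8 bytes and malformed JSON, is forwarded as-is.
    return len(message.strip()) > 0

assert not should_forward("")                    # empty: dropped
assert not should_forward("   ")                 # spaces only: dropped
assert should_forward("hello")                   # normal string: forwarded
assert should_forward(b"\xF0\xA4\xAD")           # non-UTF-8 bytes: forwarded
assert should_forward("{'test': 'incomplete}")   # malformed json: forwarded
```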
/test/partial_log/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/splunk/docker-logging-plugin/f376affbeb8fa210d38cc1657014f1a9cbf2fb79/test/partial_log/__init__.py
--------------------------------------------------------------------------------
/test/partial_log/test_partial_log.py:
--------------------------------------------------------------------------------
1 | """
2 | Copyright 2018 Splunk, Inc.
3 |
4 | Licensed under the Apache License, Version 2.0 (the "License");
5 | you may not use this file except in compliance with the License.
6 | You may obtain a copy of the License at
7 |
8 | http://www.apache.org/licenses/LICENSE-2.0
9 |
10 | Unless required by applicable law or agreed to in writing, software
11 | distributed under the License is distributed on an "AS IS" BASIS,
12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | See the License for the specific language governing permissions and
14 | limitations under the License.
15 | """
16 |
17 | import pytest
18 | import time
19 | import uuid
20 | import os
21 | import logging
22 | from ..common import request_start_logging, \
23 | check_events_from_splunk, request_stop_logging, \
24 | start_log_producer_from_input, start_log_producer_from_file, kill_logging_plugin
25 |
26 |
27 | @pytest.mark.parametrize("test_input, expected", [
28 | ([("start", True), ("in the middle", True), ("end", False)], 1)
29 | ])
30 | def test_partial_log_1(setup, test_input, expected):
31 | '''
32 | Test that the logging plugin can handle partial logs correctly.
33 | Expected behavior is that the plugin keeps appending the message
34 |     when the partial flag is True and stops and flushes when it reaches False
35 | '''
36 | logging.getLogger().info("testing test_partial_log input={0} \
37 | expected={1} event(s)".format(test_input, expected))
38 | u_id = str(uuid.uuid4())
39 |
40 | file_path = setup["fifo_path"]
41 | start_log_producer_from_input(file_path, test_input, u_id)
42 | request_start_logging(file_path,
43 | setup["splunk_hec_url"],
44 | setup["splunk_hec_token"])
45 |
46 | # wait for 15 seconds to allow messages to be sent
47 | time.sleep(15)
48 | request_stop_logging(file_path)
49 |
50 | # check that events get to splunk
51 | events = check_events_from_splunk(id=u_id,
52 | start_time="-15m@m",
53 | url=setup["splunkd_url"],
54 | user=setup["splunk_user"],
55 | password=setup["splunk_password"])
56 | logging.getLogger().info("Splunk received %s events in \
57 | the last minute with u_id=%s",
58 | len(events), u_id)
59 | assert len(events) == expected
60 |
61 |     kill_logging_plugin(setup["plugin_path"])
62 |
63 | @pytest.mark.parametrize("test_input, expected", [
64 | ([("start2", False), ("new start", True), ("end2", False)], 2)
65 | ])
66 | def test_partial_log_2(setup, test_input, expected):
67 | '''
68 | Test that the logging plugin can handle partial logs correctly.
69 | Expected behavior is that the plugin keeps appending the message
70 |     when the partial flag is True and stops and flushes when it reaches False
71 | '''
72 | logging.getLogger().info("testing test_partial_log input={0} \
73 | expected={1} event(s)".format(test_input, expected))
74 | u_id = str(uuid.uuid4())
75 |
76 | file_path = setup["fifo_path"]
77 | start_log_producer_from_input(file_path, test_input, u_id)
78 | request_start_logging(file_path,
79 | setup["splunk_hec_url"],
80 | setup["splunk_hec_token"])
81 |
82 | # wait for 15 seconds to allow messages to be sent
83 | time.sleep(15)
84 | request_stop_logging(file_path)
85 |
86 | # check that events get to splunk
87 | events = check_events_from_splunk(id=u_id,
88 | start_time="-15m@m",
89 | url=setup["splunkd_url"],
90 | user=setup["splunk_user"],
91 | password=setup["splunk_password"])
92 | logging.getLogger().info("Splunk received %s events in \
93 | the last minute with u_id=%s",
94 | len(events), u_id)
95 | assert len(events) == expected
96 |
97 |     kill_logging_plugin(setup["plugin_path"])
98 |
99 |
100 | @pytest.mark.parametrize("test_input, expected", [
101 | ([("start", True), ("mid", True), ("end", False)], 2)
102 | ])
103 | def test_partial_log_flush_timeout_1(setup, test_input, expected):
104 | '''
105 |     Test that the logging plugin can flush the buffer for a partial
106 |     log. If the next partial message does not arrive within the expected
107 |     time (default 5 sec), it should flush the buffer anyway. There
108 | is an architectural restriction that the buffer flush can only
109 | occur on receipt of a new message beyond the timeout of the buffer.
110 | '''
111 | logging.getLogger().info("testing test_partial_log_flush_timeout input={0} \
112 | expected={1} event(s)".format(test_input, expected))
113 | u_id = str(uuid.uuid4())
114 |
115 | file_path = setup["fifo_path"]
116 |
117 | start_log_producer_from_input(file_path, test_input, u_id, 10)
118 | request_start_logging(file_path,
119 | setup["splunk_hec_url"],
120 | setup["splunk_hec_token"])
121 |
122 | # wait for 70 seconds to allow messages to be sent
123 | time.sleep(70)
124 | request_stop_logging(file_path)
125 |
126 | # check that events get to splunk
127 | events = check_events_from_splunk(id=u_id,
128 | start_time="-15m@m",
129 | url=setup["splunkd_url"],
130 | user=setup["splunk_user"],
131 | password=setup["splunk_password"])
132 | logging.getLogger().info("Splunk received %s events in the last minute " +
133 | "with u_id=%s",
134 | len(events), u_id)
135 | assert len(events) == expected
136 |
137 |     kill_logging_plugin(setup["plugin_path"])
138 |
139 | @pytest.mark.parametrize("test_input, expected", [
140 | ([("start2", True), ("new start", False), ("end2", True), ("start3", False), ("new start", True), ("end3", False)], 3)
141 | ])
142 | def test_partial_log_flush_timeout_2(setup, test_input, expected):
143 | '''
144 |     Test that the logging plugin can flush the buffer for a partial
145 |     log. If the next partial message does not arrive within the expected
146 |     time (default 5 sec), it should flush the buffer anyway. There
147 | is an architectural restriction that the buffer flush can only
148 | occur on receipt of a new message beyond the timeout of the buffer.
149 | '''
150 | logging.getLogger().info("testing test_partial_log_flush_timeout input={0} \
151 | expected={1} event(s)".format(test_input, expected))
152 | u_id = str(uuid.uuid4())
153 |
154 | file_path = setup["fifo_path"]
155 |
156 | start_log_producer_from_input(file_path, test_input, u_id, 10)
157 | request_start_logging(file_path,
158 | setup["splunk_hec_url"],
159 | setup["splunk_hec_token"])
160 |
161 | # wait for 70 seconds to allow messages to be sent
162 | time.sleep(70)
163 | request_stop_logging(file_path)
164 |
165 | # check that events get to splunk
166 | events = check_events_from_splunk(id=u_id,
167 | start_time="-15m@m",
168 | url=setup["splunkd_url"],
169 | user=setup["splunk_user"],
170 | password=setup["splunk_password"])
171 | logging.getLogger().info("Splunk received %s events in the last minute " +
172 | "with u_id=%s",
173 | len(events), u_id)
174 | assert len(events) == expected
175 |
176 |     kill_logging_plugin(setup["plugin_path"])
177 |
178 |
179 | def test_partial_log_flush_size_limit(setup):
180 | '''
181 | Test that the logging plugin can flush the buffer when it reaches
182 | the buffer size limit (1mb)
183 | '''
184 | logging.getLogger().info("testing test_partial_log_flush_size_limit")
185 | u_id = str(uuid.uuid4())
186 |
187 | file_path = setup["fifo_path"]
188 | __location__ = os.path.realpath(os.path.join(os.getcwd(),
189 | os.path.dirname(__file__)))
190 | test_file_path = os.path.join(__location__, "test_file.txt")
191 |
192 | start_log_producer_from_file(file_path, u_id, test_file_path)
193 |
194 | request_start_logging(file_path,
195 | setup["splunk_hec_url"],
196 | setup["splunk_hec_token"])
197 |
198 | # wait for 15 seconds to allow messages to be sent
199 | time.sleep(15)
200 | request_stop_logging(file_path)
201 |
202 | # check that events get to splunk
203 | events = check_events_from_splunk(id=u_id,
204 | start_time="-15m@m",
205 | url=setup["splunkd_url"],
206 | user=setup["splunk_user"],
207 | password=setup["splunk_password"])
208 | logging.getLogger().info("Splunk received %s events in the last minute "
209 | "with u_id=%s",
210 | len(events), u_id)
211 |
212 | assert len(events) == 2
213 |
214 |     kill_logging_plugin(setup["plugin_path"])
--------------------------------------------------------------------------------
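The partial-log tests above all exercise the same buffering contract: append while the partial flag is True, flush on a complete message, on the 1 MB size limit, or (on the next arrival) after the hold timeout. A minimal sketch of that contract; `PartialMessageBuffer` here is illustrative, not the plugin's Go implementation in partial_message_buffer.go.

```python
import time

class PartialMessageBuffer:
    # Illustrative model of the buffering behavior the tests assert.
    def __init__(self, size_limit=1024 * 1024, timeout=5.0):
        self.size_limit = size_limit
        self.timeout = timeout
        self.buffer = ""
        self.last_append = None
        self.flushed = []  # stands in for events sent to HEC

    def append(self, message, partial, now=None):
        now = time.monotonic() if now is None else now
        # A stale buffer is only flushed on receipt of the next message,
        # matching the architectural restriction noted in the docstrings.
        if (self.buffer and self.last_append is not None
                and now - self.last_append > self.timeout):
            self._flush()
        self.buffer += message
        self.last_append = now
        if not partial or len(self.buffer) >= self.size_limit:
            self._flush()

    def _flush(self):
        if self.buffer:
            self.flushed.append(self.buffer)
            self.buffer = ""

buf = PartialMessageBuffer(timeout=5.0)
buf.append("start", partial=True, now=0.0)
buf.append("-mid", partial=True, now=1.0)
buf.append("-end", partial=False, now=2.0)
assert buf.flushed == ["start-mid-end"]       # merged into one event

buf.append("stale", partial=True, now=10.0)
buf.append("next", partial=False, now=20.0)   # arrives past the timeout
assert buf.flushed[1:] == ["stale", "next"]   # stale buffer flushed first
```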
/test/requirements.txt:
--------------------------------------------------------------------------------
1 | pytest
2 | requests
3 | requests_unixsocket
4 | protobuf
--------------------------------------------------------------------------------