├── .gitignore ├── CHANGELOG.md ├── CONTRIBUTORS ├── DEVELOPER.md ├── Gemfile ├── LICENSE ├── README.md ├── Rakefile ├── docs └── index.asciidoc ├── lib └── logstash │ └── inputs │ └── okta_system_log.rb ├── logstash-input-okta_system_log.gemspec └── spec └── inputs └── okta_system_log_spec.rb /.gitignore: -------------------------------------------------------------------------------- 1 | Gemfile.lock 2 | logstash-input-okta_system_log-*.gem 3 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## 0.10.0 2 | - Use the new rate limit API headers 3 | ## 0.9.1 4 | - Updates dependencies to standard 5 | ## 0.9.0 6 | - Finalizes plugin for distribution using logstash-input-file style sincedb 7 | ## 0.1.0 8 | - Initial plugin port from okta_enterprise 9 | -------------------------------------------------------------------------------- /CONTRIBUTORS: -------------------------------------------------------------------------------- 1 | The following is a list of people who have contributed ideas, code, bug 2 | reports, or in general have helped logstash along its way. 3 | 4 | Contributors: 5 | * Security Risk Advisors 6 | 7 | Note: If you've sent us patches, bug reports, or otherwise contributed to 8 | Logstash, and you aren't on the list above and want to be, please let us know 9 | and we'll make sure you're here. Contributions from folks like you are what make 10 | open source awesome. 
11 | -------------------------------------------------------------------------------- /DEVELOPER.md: -------------------------------------------------------------------------------- 1 | # logstash-input-okta_system_log 2 | -------------------------------------------------------------------------------- /Gemfile: -------------------------------------------------------------------------------- 1 | source 'https://rubygems.org' 2 | gemspec 3 | 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Licensed under the Apache License, Version 2.0 (the "License"); 2 | you may not use this file except in compliance with the License. 3 | You may obtain a copy of the License at 4 | 5 | http://www.apache.org/licenses/LICENSE-2.0 6 | 7 | Unless required by applicable law or agreed to in writing, software 8 | distributed under the License is distributed on an "AS IS" BASIS, 9 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 10 | See the License for the specific language governing permissions and 11 | limitations under the License. 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Looking for the docs? 2 | 3 | You can find them here: [docs](docs/index.asciidoc) 4 | 5 | # Logstash Plugin 6 | 7 | This is a plugin for [Logstash](https://github.com/elastic/logstash). 8 | 9 | It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way. 10 | 11 | ## Documentation 12 | 13 | Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. 
All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/). 14 | 15 | - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive 16 | - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide 17 | 18 | ## Need Help? 19 | 20 | Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum. 21 | 22 | ## Developing 23 | 24 | ### 1. Plugin Development and Testing 25 | 26 | #### Code 27 | - To get started, you'll need JRuby with the Bundler gem installed. 28 | 29 | - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example). 30 | 31 | - Install dependencies 32 | ```sh 33 | bundle install 34 | ``` 35 | 36 | #### Test 37 | 38 | - Update your dependencies 39 | 40 | ```sh 41 | bundle install 42 | ``` 43 | 44 | - Run tests 45 | 46 | ```sh 47 | bundle exec rspec 48 | ``` 49 | 50 | ### 2. Running your unpublished Plugin in Logstash 51 | 52 | #### 2.1 Run in a local Logstash clone 53 | 54 | - Edit Logstash `Gemfile` and add the local plugin path, for example: 55 | ```ruby 56 | gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome" 57 | ``` 58 | - Install plugin 59 | ```sh 60 | bin/logstash-plugin install --no-verify 61 | ``` 62 | - Run Logstash with your plugin 63 | ```sh 64 | bin/logstash -e 'filter {awesome {}}' 65 | ``` 66 | At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.
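Applied to this plugin specifically, the `Gemfile` entry from the steps above would point at your local checkout (the path below is illustrative):

```ruby
gem "logstash-input-okta_system_log", :path => "/your/local/logstash-input-okta_system_log"
```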
67 | 68 | #### 2.2 Run in an installed Logstash 69 | 70 | You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory or you can build the gem and install it using: 71 | 72 | - Build your plugin gem 73 | ```sh 74 | gem build logstash-filter-awesome.gemspec 75 | ``` 76 | - Install the plugin from the Logstash home 77 | ```sh 78 | bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem 79 | ``` 80 | - Start Logstash and proceed to test the plugin 81 | 82 | ## Contributing 83 | 84 | All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin. 85 | 86 | Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here. 87 | 88 | It is more important to the community that you are able to contribute. 89 | 90 | For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file. 91 | -------------------------------------------------------------------------------- /Rakefile: -------------------------------------------------------------------------------- 1 | require "logstash/devutils/rake" 2 | -------------------------------------------------------------------------------- /docs/index.asciidoc: -------------------------------------------------------------------------------- 1 | :plugin: okta_system_log 2 | :type: input 3 | :default_codec: json 4 | 5 | /////////////////////////////////////////// 6 | START - GENERATED VARIABLES, DO NOT EDIT! 
7 | /////////////////////////////////////////// 8 | :version: %VERSION% 9 | :release_date: %RELEASE_DATE% 10 | :changelog_url: %CHANGELOG_URL% 11 | :include_path: ../../../../logstash/docs/include 12 | /////////////////////////////////////////// 13 | END - GENERATED VARIABLES, DO NOT EDIT! 14 | /////////////////////////////////////////// 15 | 16 | :note-caption: :information_source: 17 | 18 | [id="plugins-{type}s-{plugin}"] 19 | 20 | === Okta System Log input plugin 21 | 22 | include::{include_path}/plugin_header.asciidoc[] 23 | 24 | ==== Description 25 | 26 | This Logstash input plugin allows you to call the Okta System Log API, process the results as events, 27 | and send them on their merry way. The idea behind this plugin is to be able to pull data 28 | from web-based services but still process it in an on-prem SIEM. 29 | The plugin supports rufus-style scheduling. 30 | 31 | ==== Example 32 | This is a basic configuration. The API key is passed through using the secret store or an env variable. 33 | While it is possible to just put the API key directly into the file, it is NOT recommended. 34 | The config should look like this: 35 | 36 | [source,ruby] 37 | ---------------------------------- 38 | input { 39 | okta_system_log { 40 | schedule => { every => "1m" } 41 | limit => 1000 42 | auth_token_key => "${key}" 43 | hostname => "uri.okta.com" 44 | } 45 | } 46 | 47 | output { 48 | stdout { 49 | codec => rubydebug 50 | } 51 | } 52 | ---------------------------------- 53 | 54 | Like HTTP poller, this plugin supports the same `metadata_target` and `target` options, 55 | as well as various scheduling options.
56 | 57 | 58 | [source,ruby] 59 | ---------------------------------- 60 | input { 61 | okta_system_log { 62 | schedule => { every => "1m" } 63 | limit => 1000 64 | auth_token_key => "${OKTA_API_KEY}" 65 | hostname => "uri.okta.com" 66 | # Supports "cron", "every", "at" and "in" schedules by rufus scheduler, e.g. 67 | # schedule => { cron => "* * * * * UTC"} 68 | # A hash of request metadata info (timing, response headers, etc.) will be sent here 69 | metadata_target => "http_poller_metadata" 70 | } 71 | } 72 | 73 | output { 74 | stdout { 75 | codec => rubydebug 76 | } 77 | } 78 | ---------------------------------- 79 | 80 | ==== Tracking of current position in the event stream 81 | 82 | The plugin keeps track of the current position of the stream by 83 | recording it in a separate state file. This makes it 84 | possible to stop and restart Logstash and have it pick up where it 85 | left off without missing the events that were generated while 86 | Logstash was stopped. 87 | 88 | By default, the state file is placed in the data directory of Logstash 89 | with a filename based on the name of the Okta instance (i.e. the `hostname` option). 90 | If you need to explicitly set the state file location you can do so 91 | with the `state_file_path` option. 92 | 93 | 94 | 95 | ==== Using the HTTP poller with a custom CA or self-signed cert 96 | 97 | If you have a self-signed cert, you will need to convert your server's certificate 98 | to a valid `.jks` or `.p12` file. 99 | An easy way to do it is to run the following one-liner, 100 | substituting your server's URL for the placeholders `MYURL` and `MYPORT`.
101 | 102 | [source,sh] 103 | ---------------------------------- 104 | openssl s_client -showcerts -connect MYURL:MYPORT </dev/null 2>/dev/null|openssl x509 -outform PEM > downloaded_cert.pem; keytool -import -alias test -file downloaded_cert.pem -keystore downloaded_truststore.jks 105 | ---------------------------------- 106 | 107 | The above snippet will create two files 108 | `downloaded_cert.pem` and `downloaded_truststore.jks`. 109 | You will be prompted to set a password for the `jks` file 110 | during this process. To configure Logstash, 111 | use a config like the one that follows. 112 | 113 | 114 | [source,ruby] 115 | ---------------------------------- 116 | okta_system_log { 117 | schedule => { every => "30s" } 118 | limit => 1000 119 | auth_token_key => "${key}" 120 | hostname => "uri.okta.com" 121 | 122 | truststore => "/path/to/downloaded_truststore.jks" 123 | truststore_password => "mypassword" 124 | # schedule => { cron => "* * * * * UTC"} 125 | } 126 | ---------------------------------- 127 | 128 | 129 | [id="plugins-{type}s-{plugin}-options"] 130 | ==== Okta System Log Input Configuration Options 131 | 132 | This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later. 133 | 134 | [NOTE] 135 | ==== 136 | The options specific to okta_system_log 137 | are listed first. + 138 | General plugin options are listed afterwards.
139 | ==== 140 | ==== Plugin specific options 141 | [cols="<,<,<",options="header",] 142 | |======================================================================= 143 | |Setting |Input type|Required 144 | | <<plugins-{type}s-{plugin}-auth_token_file>> |a valid filesystem path|Yes (or use `auth_token_key`) 145 | | <<plugins-{type}s-{plugin}-auth_token_key>> |<<password,password>>|Yes (or use `auth_token_file`) 146 | | <<plugins-{type}s-{plugin}-filter>> |<<string,string>>|No 147 | | <<plugins-{type}s-{plugin}-hostname>> |<<string,string>>|Yes 148 | | <<plugins-{type}s-{plugin}-limit>> |<<number,number>>|No 149 | | <<plugins-{type}s-{plugin}-q>> |<<array,array>>|No 150 | | <<plugins-{type}s-{plugin}-schedule>> |<<hash,hash>>|Yes 151 | | <<plugins-{type}s-{plugin}-since>> |<<string,string>>|No 152 | | <<plugins-{type}s-{plugin}-state_file_path>> |a valid filesystem path|No 153 | | <<plugins-{type}s-{plugin}-state_file_fatal_failure>> |<<boolean,boolean>>|No 154 | | <<plugins-{type}s-{plugin}-rate_limit>> |<<string,string>>|No 155 | |======================================================================= 156 | 157 | ==== Generic HTTP Poller options 158 | [cols="<,<,<",options="header",] 159 | |======================================================================= 160 | |Setting |Input type|Required 161 | | <<plugins-{type}s-{plugin}-user>> |<<string,string>>|no 162 | | <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No 163 | | <<plugins-{type}s-{plugin}-automatic_retries>> |<<number,number>>|No 164 | | <<plugins-{type}s-{plugin}-cacert>> |a valid filesystem path|No 165 | | <<plugins-{type}s-{plugin}-client_cert>> |a valid filesystem path|No 166 | | <<plugins-{type}s-{plugin}-client_key>> |a valid filesystem path|No 167 | | <<plugins-{type}s-{plugin}-connect_timeout>> |<<number,number>>|No 168 | | <<plugins-{type}s-{plugin}-cookies>> |<<boolean,boolean>>|No 169 | | <<plugins-{type}s-{plugin}-follow_redirects>> |<<boolean,boolean>>|No 170 | | <<plugins-{type}s-{plugin}-keepalive>> |<<boolean,boolean>>|No 171 | | <<plugins-{type}s-{plugin}-keystore>> |a valid filesystem path|No 172 | | <<plugins-{type}s-{plugin}-keystore_password>> |<<password,password>>|No 173 | | <<plugins-{type}s-{plugin}-keystore_type>> |<<string,string>>|No 174 | | <<plugins-{type}s-{plugin}-metadata_target>> |<<string,string>>|No 175 | | <<plugins-{type}s-{plugin}-pool_max>> |<<number,number>>|No 176 | | <<plugins-{type}s-{plugin}-pool_max_per_route>> |<<number,number>>|No 177 | | <<plugins-{type}s-{plugin}-proxy>> |<<,>>|No 178 | | <<plugins-{type}s-{plugin}-request_timeout>> |<<number,number>>|No 179 | | <<plugins-{type}s-{plugin}-retry_non_idempotent>> |<<boolean,boolean>>|No 180 | | <<plugins-{type}s-{plugin}-socket_timeout>> |<<number,number>>|No 181 | | <<plugins-{type}s-{plugin}-target>> |<<string,string>>|No 182 | | <<plugins-{type}s-{plugin}-truststore>> |a valid filesystem path|No 183 | | <<plugins-{type}s-{plugin}-truststore_password>> |<<password,password>>|No 184 | | <<plugins-{type}s-{plugin}-truststore_type>> |<<string,string>>|No 185 | | <<plugins-{type}s-{plugin}-validate_after_inactivity>> |<<number,number>>|No 186 | |======================================================================= 187 | 188 | Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all 189 | input plugins. 190 | 191 |   192 | 193 | [id="plugins-{type}s-{plugin}-auth_token_file"] 194 | ===== `auth_token_file` 195 | 196 | * Value type is <<path,path>> 197 | * There is no default value for this setting. 198 | * This option is deprecated and will be removed in future versions of the plugin 199 | in favor of `auth_token_key` 200 | 201 | The file in which the auth_token for Okta will be contained. This will contain the `auth_token`, 202 | which can have a lot of access to your Okta instance.
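Since `auth_token_file` is deprecated in favor of `auth_token_key`, a minimal configuration that reads the token from the Logstash keystore might look like the following sketch (the keystore entry name `OKTA_API_KEY` and the hostname are illustrative):

[source,ruby]
----------------------------------
input {
  okta_system_log {
    schedule => { every => "1m" }
    hostname => "org-name.okta.com"
    # Illustrative keystore entry; create it with: bin/logstash-keystore add OKTA_API_KEY
    auth_token_key => "${OKTA_API_KEY}"
  }
}
----------------------------------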
203 | 204 | [id="plugins-{type}s-{plugin}-auth_token_key"] 205 | ===== `auth_token_key` 206 | 207 | * Value type is <<password,password>> 208 | * There is no default value for this setting. 209 | * Secret store docs: https://www.elastic.co/guide/en/logstash/current/keystore.html 210 | 211 | The auth token used to authenticate to Okta. This method is provided solely to add the auth_token 212 | via the secrets store or an env variable. 213 | 214 | [id="plugins-{type}s-{plugin}-user"] 215 | ===== `user` 216 | 217 | * Value type is <<string,string>> 218 | * There is no default value for this setting. 219 | 220 | Username to use with HTTP authentication for ALL requests. 221 | If you set this you must also set the `password` option. 222 | 223 | [id="plugins-{type}s-{plugin}-password"] 224 | ===== `password` 225 | 226 | * Value type is <<password,password>> 227 | * There is no default value for this setting. 228 | 229 | Password to be used in conjunction with the username for HTTP authentication. 230 | 231 | [id="plugins-{type}s-{plugin}-automatic_retries"] 232 | ===== `automatic_retries` 233 | 234 | * Value type is <<number,number>> 235 | * Default value is `1` 236 | 237 | How many times should the client retry a failing URL. We highly recommend NOT setting this value 238 | to zero if keepalive is enabled. Some servers incorrectly end keepalives early, requiring a retry! 239 | Note: if `retry_non_idempotent` is not set, only GET, HEAD, PUT, DELETE, OPTIONS, and TRACE requests will be retried. 240 | 241 | [id="plugins-{type}s-{plugin}-cacert"] 242 | ===== `cacert` 243 | 244 | * Value type is <<path,path>> 245 | * There is no default value for this setting. 246 | 247 | If you need to use a custom X.509 CA (.pem certs), specify the path to that here. 248 | 249 | [id="plugins-{type}s-{plugin}-client_cert"] 250 | ===== `client_cert` 251 | 252 | * Value type is <<path,path>> 253 | * There is no default value for this setting.
254 | 255 | If you'd like to use a client certificate (note, most people don't want this), set the path to the x509 cert here. 256 | 257 | [id="plugins-{type}s-{plugin}-client_key"] 258 | ===== `client_key` 259 | 260 | * Value type is <<path,path>> 261 | * There is no default value for this setting. 262 | 263 | If you're using a client certificate, specify the path to the encryption key here. 264 | 265 | [id="plugins-{type}s-{plugin}-connect_timeout"] 266 | ===== `connect_timeout` 267 | 268 | * Value type is <<number,number>> 269 | * Default value is `10` 270 | 271 | Timeout (in seconds) to wait for a connection to be established. Default is `10s` 272 | 273 | [id="plugins-{type}s-{plugin}-cookies"] 274 | ===== `cookies` 275 | 276 | * Value type is <<boolean,boolean>> 277 | * Default value is `true` 278 | 279 | Enable cookie support. With this enabled the client will persist cookies 280 | across requests as a normal web browser would. Enabled by default 281 | 282 | [id="plugins-{type}s-{plugin}-filter"] 283 | ===== `filter` 284 | 285 | * Value type is <<string,string>> 286 | * There is no default value for this setting. 287 | * Docs: https://developer.okta.com/docs/api/resources/system_log#expression-filter 288 | * The plugin will not validate the filter. 289 | 290 | An expression filter is useful for performing structured queries 291 | where constraints on LogEvent attribute values can be explicitly targeted. 292 | Use single quotes in the config file, e.g. 'published gt "2017-01-01T00:00:00.000Z"' 293 | 294 | [id="plugins-{type}s-{plugin}-follow_redirects"] 295 | ===== `follow_redirects` 296 | 297 | * Value type is <<boolean,boolean>> 298 | * Default value is `true` 299 | 300 | Should redirects be followed? Defaults to `true` 301 | 302 | [id="plugins-{type}s-{plugin}-hostname"] 303 | ===== `hostname` 304 | 305 | * Value type is <<string,string>> 306 | * There is no default value for this setting. 307 | 308 | The Okta hostname to poll for logs.
309 | 310 | Examples: 311 | 312 | * dev-instance.oktapreview.com 313 | * org-name.okta.com 314 | 315 | [id="plugins-{type}s-{plugin}-limit"] 316 | ===== `limit` 317 | 318 | * Value type is <<number,number>> 319 | * Default value is `1000` 320 | 321 | The number of events to pull from the API, between 1 and 1000. Defaults to `1000` 322 | 323 | [id="plugins-{type}s-{plugin}-keepalive"] 324 | ===== `keepalive` 325 | 326 | * Value type is <<boolean,boolean>> 327 | * Default value is `true` 328 | 329 | Turn this on to enable HTTP keepalive support. We highly recommend setting `automatic_retries` to at least 330 | one with this to fix interactions with broken keepalive implementations. 331 | 332 | [id="plugins-{type}s-{plugin}-keystore"] 333 | ===== `keystore` 334 | 335 | * Value type is <<path,path>> 336 | * There is no default value for this setting. 337 | 338 | If you need to use a custom keystore (`.jks`) specify that here. This does not work with .pem keys! 339 | 340 | [id="plugins-{type}s-{plugin}-keystore_password"] 341 | ===== `keystore_password` 342 | 343 | * Value type is <<password,password>> 344 | * There is no default value for this setting. 345 | 346 | Specify the keystore password here. 347 | Note, most .jks files created with keytool require a password! 348 | 349 | [id="plugins-{type}s-{plugin}-keystore_type"] 350 | ===== `keystore_type` 351 | 352 | * Value type is <<string,string>> 353 | * Default value is `"JKS"` 354 | 355 | Specify the keystore type here. One of `JKS` or `PKCS12`. Default is `JKS` 356 | 357 | [id="plugins-{type}s-{plugin}-metadata_target"] 358 | ===== `metadata_target` 359 | 360 | * Value type is <<string,string>> 361 | * Default value is `"@metadata"` 362 | 363 | If you'd like to work with the request/response metadata, 364 | set this value to the name of the field in which you'd like to store a nested 365 | hash of metadata. 366 | 367 | [id="plugins-{type}s-{plugin}-pool_max"] 368 | ===== `pool_max` 369 | 370 | * Value type is <<number,number>> 371 | * Default value is `50` 372 | 373 | Max number of concurrent connections.
Defaults to `50` 374 | 375 | [id="plugins-{type}s-{plugin}-pool_max_per_route"] 376 | ===== `pool_max_per_route` 377 | 378 | * Value type is <<number,number>> 379 | * Default value is `25` 380 | 381 | Max number of concurrent connections to a single host. Defaults to `25` 382 | 383 | [id="plugins-{type}s-{plugin}-proxy"] 384 | ===== `proxy` 385 | 386 | * Value type is <<string,string>> 387 | * There is no default value for this setting. 388 | 389 | If you'd like to use an HTTP proxy, this supports multiple configuration syntaxes: 390 | 391 | 1. Proxy host in form: `http://proxy.org:1234` 392 | 2. Proxy host in form: `{host => "proxy.org", port => 80, scheme => 'http', user => 'username@host', password => 'password'}` 393 | 3. Proxy host in form: `{url => 'http://proxy.org:1234', user => 'username@host', password => 'password'}` 394 | 395 | [id="plugins-{type}s-{plugin}-q"] 396 | ===== `q` 397 | 398 | * Value type is <<array,array>> 399 | * There is no default value for this setting. 400 | * Docs: https://developer.okta.com/docs/api/resources/system_log#keyword-filter 401 | * Documentation Bug: https://github.com/okta/okta.github.io/issues/2500 402 | * The plugin will URL encode the list 403 | * The query cannot have more than ten items 404 | * Query items cannot have a space 405 | * Query items cannot be longer than 40 chars 406 | 407 | 408 | The query parameter q can be used to perform keyword matching 409 | against a LogEvents object’s attribute values. 410 | In order to satisfy the constraint, all supplied keywords must be matched exactly. 411 | Note that matching is case-insensitive.
412 | 413 | Examples: 414 | a) ["foo", "bar"] 415 | b) ["new", "york"] 416 | 417 | [id="plugins-{type}s-{plugin}-rate_limit"] 418 | ===== `rate_limit` 419 | 420 | * Value type is <<string,string>> 421 | * The value is eventually mapped to a float between 0.1 -> 1.0 422 | * The default value is `RATE_MEDIUM` or `"0.5"` 423 | * The valid standard options are: 424 | * `RATE_SLOW`: 0.4 425 | * `RATE_MEDIUM`: 0.5 426 | * `RATE_FAST`: 0.6 427 | * The float values must be entered *as strings* 428 | e.g. `"0.3"` or `"0.9"` 429 | * Ref: https://developer.okta.com/docs/reference/api/system-log/#system-events 430 | 431 | The rate limit parameter rate_limit is used to adjust 432 | how often requests are made against the System Log API. 433 | It uses the `x-rate-limit-remaining` and `x-rate-limit-limit` 434 | header values to throttle the number of requests. 435 | 436 | The default value of 0.5 will avoid generating rate limit warnings. 437 | 438 | [id="plugins-{type}s-{plugin}-since"] 439 | ===== `since` 440 | 441 | * Value type is <<string,string>> 442 | * There is no default value for this setting. 443 | * This plugin will URL encode the parameter. 444 | * Docs: https://developer.okta.com/docs/api/resources/system_log#request-parameters 445 | 446 | Filters the lower time bound of the log events' `published` property. 447 | The API will only fetch events from seven days before `now` by default. 448 | Since Okta's documentation states that logs are stored for 90 days, 449 | the date should be set accordingly. 450 | Provide the date as an RFC 3339 formatted date 451 | 452 | Example: 453 | * 2016-10-09T22:25:06-07:00 454 | 455 | [id="plugins-{type}s-{plugin}-state_file_path"] 456 | ===== `state_file_path` 457 | 458 | * Value type is <<string,string>> 459 | * There is no default value for this setting. 460 | 461 | Path of the state file (keeps track of the current position 462 | of the API) that will be written to disk.
463 | The default will write state files to `/plugins/inputs/okta_system_log` 464 | 465 | NOTE: it must be a file path and not a directory path 466 | 467 | [id="plugins-{type}s-{plugin}-state_file_fatal_failure"] 468 | ===== `state_file_fatal_failure` 469 | 470 | * Value type is <<boolean,boolean>> 471 | * Default value is `false` 472 | 473 | `state_file_fatal_failure` dictates the behavior 474 | of the plugin when the state file cannot be updated. + 475 | When set to `true` a failed write to the state 476 | file will cause the plugin to exit. + 477 | When set to `false` a failed write to the state 478 | file will generate an error. 479 | 480 | [id="plugins-{type}s-{plugin}-request_timeout"] 481 | ===== `request_timeout` 482 | 483 | * Value type is <<number,number>> 484 | * Default value is `60` 485 | 486 | Timeout (in seconds) for the entire request. 487 | 488 | [id="plugins-{type}s-{plugin}-retry_non_idempotent"] 489 | ===== `retry_non_idempotent` 490 | 491 | * Value type is <<boolean,boolean>> 492 | * Default value is `false` 493 | 494 | If `automatic_retries` is enabled, this will cause non-idempotent HTTP verbs (such as POST) to be retried. 495 | 496 | [id="plugins-{type}s-{plugin}-schedule"] 497 | ===== `schedule` 498 | 499 | * Value type is <<hash,hash>> 500 | * There is no default value for this setting. 501 | * Recommended that the schedule be *at least* once a minute 502 | 503 | Schedule of when to periodically poll from the url 504 | Format: A hash with 505 | + key: "cron" | "every" | "in" | "at" 506 | + value: string 507 | Examples: 508 | a) { "every" => "1h" } 509 | b) { "cron" => "* * * * * UTC" } 510 | See: rufus/scheduler for details about different schedule options and value string format 511 | 512 | [id="plugins-{type}s-{plugin}-socket_timeout"] 513 | ===== `socket_timeout` 514 | 515 | * Value type is <<number,number>> 516 | * Default value is `10` 517 | 518 | Timeout (in seconds) to wait for data on the socket.
Default is `10s` 519 | 520 | [id="plugins-{type}s-{plugin}-target"] 521 | ===== `target` 522 | 523 | * Value type is <<string,string>> 524 | * There is no default value for this setting. 525 | 526 | Define the target field for placing the received data. If this setting is omitted, the data will be stored at the root (top level) of the event. 527 | 528 | [id="plugins-{type}s-{plugin}-truststore"] 529 | ===== `truststore` 530 | 531 | * Value type is <<path,path>> 532 | * There is no default value for this setting. 533 | 534 | If you need to use a custom truststore (`.jks`) specify that here. This does not work with .pem certs! 535 | 536 | [id="plugins-{type}s-{plugin}-truststore_password"] 537 | ===== `truststore_password` 538 | 539 | * Value type is <<password,password>> 540 | * There is no default value for this setting. 541 | 542 | Specify the truststore password here. 543 | Note, most .jks files created with keytool require a password! 544 | 545 | [id="plugins-{type}s-{plugin}-truststore_type"] 546 | ===== `truststore_type` 547 | 548 | * Value type is <<string,string>> 549 | * Default value is `"JKS"` 550 | 551 | Specify the truststore type here. One of `JKS` or `PKCS12`. Default is `JKS` 552 | 553 | [id="plugins-{type}s-{plugin}-urls"] 554 | ===== `urls` 555 | 556 | * This is a required setting. 557 | * Value type is <<hash,hash>> 558 | * There is no default value for this setting. 559 | 560 | A Hash of urls in this format: `"name" => "url"`. 561 | The name and the url will be passed in the output event 562 | 563 | [id="plugins-{type}s-{plugin}-validate_after_inactivity"] 564 | ===== `validate_after_inactivity` 565 | 566 | * Value type is <<number,number>> 567 | * Default value is `200` 568 | 569 | How long to wait before checking if the connection is stale before executing a request on a connection using keepalive.
570 | You may want to set this lower, possibly to 0, if you get connection errors regularly. 571 | Quoting the Apache Commons docs (this client is based on Apache Commons): 572 | 'Defines period of inactivity in milliseconds after which persistent connections must be re-validated prior to being leased to the consumer. Non-positive value passed to this method disables connection validation. This check helps detect connections that have become stale (half-closed) while kept inactive in the pool.' 573 | See https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#setValidateAfterInactivity(int)[these docs for more info] 574 | 575 | 576 | 577 | [id="plugins-{type}s-{plugin}-common-options"] 578 | include::{include_path}/{type}.asciidoc[] 579 | 580 | :default_codec!: 581 | -------------------------------------------------------------------------------- /lib/logstash/inputs/okta_system_log.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "logstash/inputs/base" 3 | require "logstash/namespace" 4 | require "rufus/scheduler" 5 | require "socket" # for Socket.gethostname 6 | require "logstash/plugin_mixins/http_client" 7 | require "manticore" 8 | require "uri" 9 | 10 | 11 | class LogStash::Inputs::OktaSystemLog < LogStash::Inputs::Base 12 | include LogStash::PluginMixins::HttpClient 13 | 14 | MAX_MMAP_FILE_SIZE = 1 * 2**10 15 | OKTA_EVENT_LOG_PATH = "/api/v1/logs" 16 | AUTH_TEST_URL = "?limit=1#auth-test" 17 | 18 | HTTP_OK_200 = 200 19 | HTTP_BAD_REQUEST_400 = 400 20 | HTTP_UNAUTHORIZED_401 = 401 21 | HTTP_TOO_MANY_REQUESTS_429 = 429 22 | 23 | # Sleep Timers 24 | SLEEP_API_RATE_LIMIT = 1 25 | SLEEP_STATE_FILE_RETRY = 0.25 26 | 27 | config_name "okta_system_log" 28 | 29 | # If undefined, Logstash will complain, even if codec is unused.
30 | default :codec, "json" 31 | 32 | # Schedule of when to periodically poll from the url 33 | # Format: A hash with 34 | # + key: "cron" | "every" | "in" | "at" 35 | # + value: string 36 | # Examples: 37 | # a) { "every" => "1h" } 38 | # b) { "cron" => "* * * * * UTC" } 39 | # See: rufus/scheduler for details about different schedule options and value string format 40 | # See here for rate limits: https://developer.okta.com/docs/api/resources/system_log#rate-limits 41 | config :schedule, :validate => :hash, :required => true 42 | 43 | # The Okta host which you would like to use 44 | # The system log path will be appended onto this host 45 | # Ex: dev-instance.oktapreview.com 46 | # Ex: org-name.okta.com 47 | # 48 | # Format: Hostname 49 | config :hostname, :validate => :string 50 | 51 | # The date and time after which to fetch events 52 | # NOTE: By default the API will only fetch events seven days before time of the first call 53 | # To get more data, please select the desired date to start fetching data 54 | # Docs: https://developer.okta.com/docs/api/resources/system_log#request-parameters 55 | # Okta log retention by default is 90 days, it is suggested to set the date accordingly 56 | # 57 | # Format: string with a RFC 3339 formatted date (e.g. 2016-10-09T22:25:06-07:00) 58 | config :since, :validate => :string 59 | 60 | # Set how many messages you want to pull with each request 61 | # The default, `1000`, means to fetch 1000 events at a time. 62 | # 63 | # Format: Number between 1 and 1000 64 | # Default: 1000 65 | config :limit, :validate => :number, :default => 1000 66 | 67 | # The free form filter to use to filter data to requirements. 68 | # Docs: https://developer.okta.com/docs/api/resources/system_log#expression-filter 69 | # The filter will be URL encoded by the plugin 70 | # The plugin will not validate the filter. 71 | # Use single quotes in the config file, 72 | # e.g. 
'published gt "2017-01-01T00:00:00.000Z"' 73 | # 74 | # Format: Plain text filter field. 75 | config :filter, :validate => :string 76 | 77 | # Filters the log event results by one or more exact keywords in a list 78 | # Docs: https://developer.okta.com/docs/api/resources/system_log#keyword-filter 79 | # Documentation bug: https://github.com/okta/okta.github.io/issues/2500 80 | # The plugin will URL encode the list 81 | # The query cannot have more than ten items 82 | # Query items cannot have a space 83 | # Query items cannot be longer than 40 chars 84 | # 85 | # Format: A list with the items to query on 86 | # Ex. ["foo", "bar"] 87 | # Ex. ["new", "york"] 88 | config :q, :validate => :string, :list => true 89 | 90 | # rate_limit will set the pace of collection to the desired limit 91 | # Based on: https://developer.okta.com/docs/reference/api/system-log/#system-events 92 | # It supports three convenience parameters of RATE_SLOW, RATE_MEDIUM and RATE_FAST 93 | # A user can also set a value of 0.1 -> 1.0; the plugin will automatically _floor_ 94 | # the value to the tenths place 95 | # This value represents the percentage of the allocated rate limit to consume 96 | # Defaults to RATE_MEDIUM 97 | # The default and slower (e.g. lower) parameters will not generate errors 98 | # RATE_FAST and faster (e.g. higher) parameters _may_ generate warnings and errors 99 | # RATE_SLOW: 0.4 100 | # RATE_MEDIUM: 0.5 101 | # RATE_FAST: 0.6 102 | # 103 | # Format: Either a convenience parameter or a string with a decimal of 0.1 -> 1.0 104 | # Ex. "RATE_MEDIUM" 105 | # Ex. "0.3" 106 | config :rate_limit, :validate => :string, :default => "RATE_MEDIUM" 107 | 108 | # The file in which the auth_token for Okta will be contained. 109 | # This will contain the auth_token which can have a lot of access to your Okta instance. 110 | # It cannot be stressed enough how important it is to protect this file. 111 | # NOTE: This option is deprecated and will be removed in favor of the secrets store.
112 | # 113 | # Format: File path 114 | config :auth_token_file, :validate => :path, :deprecated => true 115 | 116 | # The auth token used to authenticate to Okta. 117 | # NOTE: Avoid storing the auth_token directly in the config file. 118 | # This method is provided solely to add the auth_token via secrets store. 119 | # Docs: https://www.elastic.co/guide/en/logstash/current/keystore.html 120 | # WARNING: This will contain the auth_token which can have a lot of access to your Okta instance. 121 | # 122 | # Format: Password 123 | config :auth_token_key, :validate => :password 124 | 125 | # Path to the state file (keeps track of the current position 126 | # of the API) that will be written to disk. 127 | # The default will write state files to `plugins/inputs/okta_system_log` under the Logstash data directory 128 | # NOTE: it must be a file path and not a directory path 129 | # 130 | # Format: Filepath 131 | config :state_file_path, :validate => :string 132 | 133 | # Option to cause a fatal error if the state file can't update 134 | # Normal operation will generate an error when state file update fails 135 | # However, it will continue to pull events from the API 136 | # This option will reverse that paradigm and exit if a failure occurs 137 | # 138 | # Format: Boolean 139 | config :state_file_fatal_failure, :validate => :boolean, :default => false 140 | 141 | # If you'd like to work with the request/response metadata. 142 | # Set this value to the name of the field you'd like to store a nested 143 | # hash of metadata. 144 | config :metadata_target, :validate => :string, :default => '@metadata' 145 | 146 | # Define the target field for placing the received data. 147 | # If this setting is omitted 148 | # the data will be stored at the root (top level) of the event. 
149 | # 150 | # Format: String 151 | config :target, :validate => :string 152 | 153 | # The URL for the Okta instance to access 154 | # NOTE: This is useful for an iPaaS instance 155 | # 156 | # Format: URI 157 | config :custom_url, :validate => :uri, :required => false 158 | 159 | # Custom authorization header to be added instead of default header 160 | # This is useful for an iPaaS only 161 | # Example: Basic dXNlcjpwYXNzd29yZA== 162 | # This will be added to the authorization header accordingly 163 | # Authorization: Basic dXNlcjpwYXNzd29yZA== 164 | # NOTE: It is suggested to use the secrets store to store the header 165 | # It is an error to set both this and the auth_token 166 | # 167 | # Format: string 168 | config :custom_auth_header, :validate => :password, :required => false 169 | 170 | # This option is obsoleted in favor of hostname or custom_url. 171 | # The URL for the Okta instance to access 172 | # 173 | # Format: URI 174 | config :url, :validate => :uri, 175 | :obsolete => "url is obsolete, please use hostname or custom_url instead" 176 | 177 | # This option is obsolete 178 | # The throttle value to use for noisy log lines (at the info level) 179 | # Currently just one log statement (successful HTTP connects) 180 | # The value is used to mod a counter, so set it appropriately for log levels 181 | # NOTE: This value will be ignored when the log level is debug or trace 182 | # 183 | # Format: Integer 184 | config :log_throttle, :validate => :number, 185 | :obsolete => "Log throttling is no longer required" 186 | 187 | # This option is obsoleted in favor of limit. 188 | # Set how many messages you want to pull with each request 189 | # 190 | # The default, `1000`, means to fetch 1000 events at a time. 191 | # Any value less than 1 will fetch all possible events. 192 | config :chunk_size, :validate => :number, 193 | :obsolete => "chunk_size is obsolete, please use limit instead" 194 | 195 | # This option is obsoleted in favor of since. 
196 | # The date and time after which to fetch events 197 | # 198 | # Format: string with a RFC 3339 formatted date 199 | # Ex. 2016-10-09T22:25:06-07:00 200 | config :start_date, :validate => :string, 201 | :obsolete => "start_date is obsolete, please use since instead" 202 | 203 | # This option is obsoleted in favor of auth_token_key. 204 | # The auth token used to authenticate to Okta. 205 | # WARNING: Avoid storing the auth_token directly in this file. 206 | # This method is provided solely to add the auth_token via environment variable. 207 | # This will contain the auth_token which can have a lot of access to your Okta instance. 208 | # 209 | # Format: String 210 | config :auth_token_env, :validate => :string, 211 | :obsolete => "auth_token_env is obsolete, please use auth_token_key instead" 212 | 213 | # This option is obsoleted in favor of state_file_path. 214 | # The base filename to store the pointer to the current location in the logs 215 | # This file will be renamed with each new reference to limit loss of this data 216 | # The location will need at least write and execute privs for the logstash user 217 | # 218 | # Format: Filepath 219 | # This is not the filepath of the file itself, but the base used to generate the file. 220 | config :state_file_base, :validate => :string, 221 | :obsolete => "state_file_base is obsolete, use state_file_path instead" 222 | 223 | # Based on data from here: https://developer.okta.com/docs/reference/api/system-log/#system-events 224 | # -- For One App and Enterprise orgs, the warning is sent when the org is at 60% of its limit. 225 | RATE_OPTIONS = {"RATE_SLOW" => 0.4, "RATE_MEDIUM" => 0.5, "RATE_FAST" => 0.6} 226 | RATE_OPTIONS.default = false 227 | 228 | public 229 | Schedule_types = %w(cron every at in) 230 | def register 231 | 232 | @trace_log_method = detect_trace_log_method() 233 | 234 | if (@limit < 1 or @limit > 1000 or !@limit.integer?) 235 | @logger.fatal("Invalid `limit` value: #{@limit}. 
" + 236 | "Config limit should be an integer between 1 and 1000.") 237 | raise LogStash::ConfigurationError, "Invalid `limit` value: #{@limit}. " + 238 | "Config limit should be an integer between 1 and 1000." 239 | end 240 | 241 | unless (@hostname.nil? ^ @custom_url.nil?) 242 | @logger.fatal("Please configure the hostname " + 243 | "or the custom_url to use.") 244 | raise LogStash::ConfigurationError, "Please configure the hostname " + 245 | "or the custom_url to use." 246 | end 247 | 248 | if (@hostname) 249 | begin 250 | url_obj = URI::HTTPS.build( 251 | :host => @hostname, 252 | :path => OKTA_EVENT_LOG_PATH) 253 | rescue URI::InvalidComponentError 254 | @logger.fatal("Invalid hostname, " + 255 | "could not configure URL. hostname = #{@hostname}.") 256 | raise LogStash::ConfigurationError, "Invalid hostname, " + 257 | "could not configure URL. hostname = #{@hostname}." 258 | end 259 | end 260 | if (@custom_url) 261 | begin 262 | # The URL comes in as a SafeURI object which doesn't get parsed nicely. 263 | # Cast to string helps with that 264 | # Really only happens during tests and not during normal operations 265 | url_obj = URI.parse(@custom_url.to_s) 266 | unless (url_obj.kind_of? URI::HTTP or url_obj.kind_of? URI::HTTPS) 267 | raise LogStash::ConfigurationError, "Invalid custom_url, " + 268 | "please verify the URL. custom_url = #{@custom_url}" 269 | @logger.fatal("Invalid custom_url, " + 270 | "please verify the URL. custom_url = #{@custom_url}") 271 | end 272 | rescue URI::InvalidURIError 273 | @logger.fatal("Invalid custom_url, " + 274 | "please verify the URL. custom_url = #{@custom_url}") 275 | raise LogStash::ConfigurationError, "Invalid custom_url, " + 276 | "please verify the URL. custom_url = #{@custom_url}" 277 | end 278 | 279 | end 280 | 281 | if (@since) 282 | begin 283 | @since = DateTime.parse(@since).rfc3339(0) 284 | rescue ArgumentError => e 285 | @logger.fatal("since must be of the form " + 286 | "yyyy-MM-dd’‘T’‘HH:mm:ssZZ, e.g. 
2013-01-01T12:00:00-07:00.") 287 | raise LogStash::ConfigurationError, "since must be of the form " + 288 | "yyyy-MM-dd'T'HH:mm:ssZZ, e.g. 2013-01-01T12:00:00-07:00." 289 | end 290 | end 291 | 292 | if (@q) 293 | if (@q.length > 10) 294 | msg = "q cannot have more than 10 terms. " + 295 | "Use the `filter` option to limit the query." 296 | @logger.fatal(msg) 297 | raise LogStash::ConfigurationError, msg 298 | end 299 | space_errors = [] 300 | length_errors = [] 301 | for item in @q 302 | if (item.include? " ") 303 | space_errors.push(item) 304 | elsif (item.length > 40) 305 | length_errors.push(item) 306 | end 307 | end 308 | if (space_errors.length > 0) 309 | @logger.fatal("q items cannot contain a space. " + 310 | "Items: #{space_errors.join(" ")}.") 311 | raise LogStash::ConfigurationError, "q items cannot contain a space. " + 312 | "Items: #{space_errors.join(" ")}." 313 | end 314 | if (length_errors.length > 0) 315 | msg = "q items cannot be longer than 40 characters. " + 316 | "Items: #{length_errors.join(" ")}." 317 | @logger.fatal(msg) 318 | raise LogStash::ConfigurationError, msg 319 | end 320 | end 321 | 322 | if (@custom_auth_header) 323 | if (@auth_token_key or @auth_token_file) 324 | @logger.fatal("If custom_auth_header is used " + 325 | "you cannot set auth_token_key or auth_token_file") 326 | raise LogStash::ConfigurationError, "If custom_auth_header is used " + 327 | "you cannot set auth_token_key or auth_token_file" 328 | end 329 | else 330 | unless (@auth_token_key.nil? ^ @auth_token_file.nil?) 331 | auth_message = "Set only the auth_token_key or auth_token_file." 
332 | @logger.fatal(auth_message) 333 | raise LogStash::ConfigurationError, auth_message 334 | end 335 | 336 | if (@auth_token_file) 337 | begin 338 | auth_file_size = File.size(@auth_token_file) 339 | if (auth_file_size > MAX_MMAP_FILE_SIZE) 340 | @logger.fatal("The auth_token file " + 341 | "is too large to map") 342 | raise LogStash::ConfigurationError, "The auth_token file " + 343 | "is too large to map" 344 | else 345 | @auth_token = LogStash::Util::Password.new( 346 | File.read(@auth_token_file, auth_file_size).chomp) 347 | @logger.info("Successfully opened auth_token_file", 348 | :auth_token_file => @auth_token_file) 349 | end 350 | rescue LogStash::ConfigurationError 351 | raise 352 | rescue => e 353 | # This is a bug in older versions of logstash, confirmed here: 354 | # https://discuss.elastic.co/t/logstash-configurationerror-but-configurationok-logstash-2-4-0/65727/2 355 | @logger.fatal(e.inspect) 356 | raise LogStash::ConfigurationError, e.inspect 357 | end 358 | else 359 | @auth_token = @auth_token_key 360 | end 361 | 362 | if (@auth_token) 363 | begin 364 | response = client.get( 365 | url_obj.to_s+AUTH_TEST_URL, 366 | headers: {'Authorization' => "SSWS #{@auth_token.value}"}, 367 | request_timeout: 2, 368 | connect_timeout: 2, 369 | socket_timeout: 2) 370 | if (response.code == HTTP_UNAUTHORIZED_401) 371 | @logger.fatal("The auth_token provided " + 372 | "was not valid, please check the input") 373 | raise LogStash::ConfigurationError, "The auth_token provided " + 374 | "was not valid, please check the input" 375 | end 376 | rescue LogStash::ConfigurationError 377 | raise 378 | rescue Manticore::ManticoreException => m 379 | msg = "There was a connection error verifying the auth_token, " + 380 | "continuing without verification" 381 | @logger.error(msg, :client_error => m.inspect) 382 | rescue => e 383 | @logger.fatal("Could not verify auth_token, " + 384 | "error: #{e.inspect}") 385 | raise LogStash::ConfigurationError, "Could not verify auth_token, " 
+ 386 | "error: #{e.inspect}" 387 | end 388 | end 389 | end 390 | 391 | if (RATE_OPTIONS[@rate_limit] != false) 392 | @rate_limit = RATE_OPTIONS[@rate_limit] 393 | else 394 | @rate_limit = @rate_limit.to_f.floor 1 395 | end 396 | 397 | if (@rate_limit < 0.1 or @rate_limit > 1.0) 398 | raise LogStash::ConfigurationError, "rate_limit should be between " + 399 | "'0.1' and '1.0' or 'RATE_SLOW', 'RATE_MEDIUM' or 'RATE_FAST'" 400 | end 401 | 402 | @rate_limit_factor = 1.0 - @rate_limit 403 | 404 | params_event = Hash.new 405 | params_event[:limit] = @limit if @limit > 0 406 | params_event[:since] = @since if @since 407 | params_event[:filter] = @filter if @filter 408 | params_event[:q] = @q.join(" ") if @q 409 | url_obj.query = URI.encode_www_form(params_event) 410 | 411 | 412 | # This check is Logstash 5 specific. If the class does not exist, and it 413 | # won't in older versions of Logstash, then we need to set it to nil. 414 | settings = defined?(LogStash::SETTINGS) ? LogStash::SETTINGS : nil 415 | 416 | if (@state_file_path.nil?) 
417 | begin 418 | base_state_file_path = build_state_file_base(settings) 419 | rescue LogStash::ConfigurationError 420 | raise 421 | rescue => e 422 | @logger.fatal("Could not set up state file", :exception => e.inspect) 423 | raise LogStash::ConfigurationError, e.inspect 424 | end 425 | file_prefix = "#{@hostname}_system_log_state" 426 | case Dir[File.join(base_state_file_path,"#{file_prefix}*")].size 427 | when 0 428 | # Build a file name randomly 429 | @state_file_path = File.join( 430 | base_state_file_path, 431 | rand_filename("#{file_prefix}")) 432 | @logger.info('No state_file_path set, generating one based on the ' + 433 | '"hostname" setting', 434 | :state_file_path => @state_file_path.to_s, 435 | :hostname => @hostname) 436 | when 1 437 | @state_file_path = Dir[File.join(base_state_file_path,"#{file_prefix}*")].last 438 | @logger.info('Found state file based on the "hostname" setting', 439 | :state_file_path => @state_file_path.to_s, 440 | :hostname => @hostname) 441 | else 442 | msg = "There is more than one file " + 443 | "in the state file base dir (possibly an error?). " + 444 | "Please keep the latest/most relevant file.\n" + 445 | "Directory: #{base_state_file_path}" 446 | @logger.fatal(msg) 447 | raise LogStash::ConfigurationError, msg 448 | end 449 | 450 | else 451 | @state_file_path = File.path(@state_file_path) 452 | if (File.directory?(@state_file_path)) 453 | @logger.fatal("The `state_file_path` argument must point to a file, " + 454 | "received a directory: #{@state_file_path}") 455 | raise LogStash::ConfigurationError, "The `state_file_path` argument " + 456 | "must point to a file, received a directory: #{@state_file_path}" 457 | end 458 | end 459 | begin 460 | @state_file_stat = detect_state_file_mode(@state_file_path) 461 | rescue => e 462 | @logger.fatal("Error getting state file info. " + 463 | "Exception: #{e.inspect}") 464 | raise LogStash::ConfigurationError, "Error getting state file info. 
" + 465 | "Exception: #{e.inspect}" 466 | end 467 | 468 | @write_method = detect_write_method(@state_file_path) 469 | 470 | begin 471 | state_file_size = File.size(@state_file_path) 472 | if (state_file_size > 0) 473 | if (state_file_size > MAX_MMAP_FILE_SIZE) 474 | @logger.fatal("The state file: " + 475 | "#{@state_file_path} is too large to map") 476 | raise LogStash::ConfigurationError, "The state file: " + 477 | "#{@state_file_path} is too large to map" 478 | end 479 | state_url = File.read(@state_file_path, state_file_size).chomp 480 | if (state_url.length > 0) 481 | state_url_obj = URI.parse(state_url) 482 | @logger.info( 483 | "Successfully opened state_file_path", 484 | :state_url => state_url_obj.to_s, 485 | :state_file_path => @state_file_path) 486 | if (@custom_url) 487 | unless (url_obj.hostname == state_url_obj.hostname) 488 | @logger.fatal("The state URL " + 489 | "does not match configured URL. ", 490 | :configured_url => url_obj.to_s, 491 | :state_url => state_url_obj.to_s) 492 | raise LogStash::ConfigurationError, "The state URL " + 493 | "does not match configured URL. " + 494 | "Configured url: #{url_obj.to_s}, state_url: #{state_url_obj.to_s}" 495 | end 496 | else 497 | unless (state_url_obj.hostname == @hostname and 498 | state_url_obj.path == OKTA_EVENT_LOG_PATH) 499 | @logger.fatal("The state URL " + 500 | "does not match configured URL. " + 501 | :configured_url => url_obj.to_s, 502 | :state_url => state_url_obj.to_s) 503 | raise LogStash::ConfigurationError, "The state URL " + 504 | "does not match configured URL. " + 505 | "Configured url: #{url_obj.to_s}, state_url: #{state_url_obj.to_s}" 506 | end 507 | end 508 | url_obj = state_url_obj 509 | end 510 | end 511 | rescue LogStash::ConfigurationError 512 | raise 513 | rescue URI::InvalidURIError => e 514 | @logger.fatal("Could not parse url " + 515 | "from state_file_path. URL: #{state_url}. 
Error: #{e.inspect}.") 516 | raise LogStash::ConfigurationError, "Could not parse url " + 517 | "from state_file_path. URL: #{state_url}. Error: #{e.inspect}." 518 | rescue => e 519 | @logger.fatal(e.inspect) 520 | raise LogStash::ConfigurationError, e.inspect 521 | end 522 | 523 | @url = url_obj.to_s 524 | 525 | @logger.info("Created initial URL to call", :url => @url) 526 | @host = Socket.gethostname.force_encoding(Encoding::UTF_8) 527 | 528 | if (@metadata_target) 529 | @metadata_function = method(:apply_metadata) 530 | else 531 | @metadata_function = method(:noop) 532 | end 533 | 534 | if (@state_file_fatal_failure) 535 | @state_file_failure_function = method(:fatal_state_file) 536 | else 537 | @state_file_failure_function = method(:error_state_file) 538 | end 539 | 540 | end # def register 541 | 542 | 543 | def run(queue) 544 | 545 | msg_invalid_schedule = "Invalid config. schedule hash must contain " + 546 | "exactly one of the following keys - cron, at, every or in" 547 | 548 | @logger.fatal(msg_invalid_schedule) if @schedule.keys.length !=1 549 | raise LogStash::ConfigurationError, msg_invalid_schedule if @schedule.keys.length !=1 550 | schedule_type = @schedule.keys.first 551 | schedule_value = @schedule[schedule_type] 552 | @logger.fatal(msg_invalid_schedule) unless Schedule_types.include?(schedule_type) 553 | raise LogStash::ConfigurationError, msg_invalid_schedule unless Schedule_types.include?(schedule_type) 554 | @scheduler = Rufus::Scheduler.new(:max_work_threads => 1) 555 | 556 | #as of v3.0.9, :first_in => :now doesn't work. Use the following workaround instead 557 | opts = schedule_type == "every" ? 
{ :first_in => 0.01 } : {} 558 | opts[:overlap] = false; 559 | 560 | @logger.info("Starting event stream with the configured URL.", 561 | :url => @url) 562 | @scheduler.send(schedule_type, schedule_value, opts) { run_once(queue) } 563 | 564 | @scheduler.join 565 | 566 | end # def run 567 | 568 | private 569 | def run_once(queue) 570 | 571 | request_async(queue) 572 | 573 | end # def run_once 574 | 575 | private 576 | def request_async(queue) 577 | 578 | @continue = true 579 | 580 | header_hash = { 581 | "Accept" => "application/json", 582 | "Content-Type" => "application/json" 583 | } 584 | 585 | if (@auth_token) 586 | header_hash["Authorization"] = "SSWS #{@auth_token.value}" 587 | elsif (@custom_auth_header) 588 | header_hash["Authorization"] = @custom_auth_header.value 589 | end 590 | 591 | begin 592 | while @continue and !stop? 593 | @logger.debug("Calling URL", 594 | :url => @url, 595 | :token_set => !@auth_token.nil?) 596 | 597 | started = Time.now 598 | 599 | client.async.get(@url.to_s, headers: header_hash). 600 | on_success { |response| handle_success(queue, response, @url, Time.now - started) }. 601 | on_failure { |exception| handle_failure(queue, exception, @url, Time.now - started) } 602 | 603 | client.execute! 
604 | end 605 | rescue => e 606 | @logger.fatal(e.inspect) 607 | raise e 608 | ensure 609 | update_state_file() 610 | end 611 | end # def request_async 612 | 613 | private 614 | def update_state_file() 615 | for i in 1..3 616 | @trace_log_method.call("Starting state file update", 617 | :state_file_path => @state_file_path, 618 | :url => @url, 619 | :attempt_num => i) 620 | 621 | begin 622 | @write_method.call(@state_file_path, @url) 623 | rescue => e 624 | @logger.warn("Could not save state, retrying", 625 | :state_file_path => @state_file_path, 626 | :url => @url, 627 | :exception => e.inspect) 628 | 629 | sleep SLEEP_STATE_FILE_RETRY 630 | next 631 | end 632 | @logger.debug("Successfully wrote the state file", 633 | :state_file_path => @state_file_path, 634 | :url => @url, 635 | :attempts => i) 636 | # Break out of the loop once you're done 637 | return nil 638 | end 639 | @state_file_failure_function.call() 640 | end # def update_state_file 641 | 642 | private 643 | def handle_success(queue, response, requested_url, exec_time) 644 | 645 | @continue = false 646 | 647 | case response.code 648 | when HTTP_OK_200 649 | ## Some benchmarking code for the reasoning behind the methods. 650 | ## They aren't great benchmarks, but basic ones that proved a point. 
651 | ## If anyone has better/contradicting results let me know 652 | # 653 | ## Some system info on which these tests were run: 654 | #$ cat /proc/cpuinfo | grep -i "model name" | uniq -c 655 | # 4 model name : Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz 656 | # 657 | #$ free -m 658 | # total used free shared buff/cache available 659 | # Mem: 1984 925 372 8 686 833 660 | # Swap: 2047 0 2047 661 | # 662 | #str = '; rel="next"' 663 | #require "benchmark" 664 | # 665 | # 666 | #n = 50000000 667 | # 668 | # 669 | #Benchmark.bm do |x| 670 | # x.report { n.times { str.include?('rel="next"') } } # (2) 23.008853sec @50000000 times 671 | # x.report { n.times { str.end_with?('rel="next"') } } # (1) 16.894623sec @50000000 times 672 | # x.report { n.times { str =~ /rel="next"$/ } } # (3) 30.757554sec @50000000 times 673 | #end 674 | # 675 | #Benchmark.bm do |x| 676 | # x.report { n.times { str.match(/<([^>]+)>/).captures[0] } } # (2) 262.166085sec @50000000 times 677 | # x.report { n.times { str.split(';')[0][1...-1] } } # (1) 31.673270sec @50000000 times 678 | #end 679 | 680 | 681 | @logger.debug("Response headers", :headers => response.headers) 682 | @trace_log_method.call("Response body", :body => response.body) 683 | 684 | # Store the next URL to call from the header 685 | next_url = nil 686 | Array(response.headers["link"]).each do |link_header| 687 | if link_header.end_with?('rel="next"') 688 | next_url = link_header.split(';')[0][1...-1] 689 | end 690 | end 691 | 692 | if (response.body.length > 0) 693 | @codec.decode(response.body) do |decoded| 694 | @trace_log_method.call("Pushing event to queue") 695 | event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded 696 | @metadata_function.call(event, requested_url, response, exec_time) 697 | decorate(event) 698 | queue << event 699 | end 700 | else 701 | @codec.decode("{}") do |decoded| 702 | event = @target ? 
LogStash::Event.new(@target => decoded.to_hash) : decoded 703 | @metadata_function.call(event, requested_url, response, exec_time) 704 | decorate(event) 705 | queue << event 706 | end 707 | end 708 | 709 | 710 | if (!next_url.nil? and next_url != @url) 711 | @url = next_url 712 | if (response.headers['x-rate-limit-remaining'].to_i > response.headers['x-rate-limit-limit'].to_i * @rate_limit_factor and response.headers['x-rate-limit-remaining'].to_i > 0) 713 | @continue = true 714 | @trace_log_method.call("Rate Limit Status", :remaining => response.headers['x-rate-limit-remaining'].to_i, :limit => response.headers['x-rate-limit-limit'].to_i) 715 | end 716 | end 717 | @logger.debug("Continue status", :continue => @continue ) 718 | 719 | 720 | when HTTP_UNAUTHORIZED_401 721 | @codec.decode(response.body) do |decoded| 722 | event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded 723 | @metadata_function.call(event, requested_url, response, exec_time) 724 | event.set("okta_response_error", { 725 | "okta_plugin_status" => "Auth_token supplied is not valid, " + 726 | "validate the auth_token and update the plugin config.", 727 | "http_code" => 401 728 | }) 729 | event.tag("_okta_response_error") 730 | decorate(event) 731 | queue << event 732 | end 733 | 734 | @logger.error("Authentication required, check auth_token", 735 | :code => response.code, 736 | :headers => response.headers) 737 | @trace_log_method.call("Authentication failed body", :body => response.body) 738 | 739 | when HTTP_BAD_REQUEST_400 740 | if (response.body.include?("E0000031")) 741 | @codec.decode(response.body) do |decoded| 742 | event = @target ? 
LogStash::Event.new(@target => decoded.to_hash) : decoded 743 | @metadata_function.call(event, requested_url, response, exec_time) 744 | event.set("okta_response_error", { 745 | "okta_plugin_status" => "Filter string was not valid.", 746 | "http_code" => 400 747 | }) 748 | event.tag("_okta_response_error") 749 | decorate(event) 750 | queue << event 751 | end 752 | 753 | @logger.error("Filter string was not valid", 754 | :response_code => response.code, 755 | :okta_error => "E0000031", 756 | :filter_string => @filter) 757 | 758 | @logger.debug("Filter string error response", 759 | :response_body => response.body, 760 | :response_headers => response.headers) 761 | 762 | elsif (response.body.include?("E0000030")) 763 | 764 | @codec.decode(response.body) do |decoded| 765 | event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded 766 | @metadata_function.call(event, requested_url, response, exec_time) 767 | event.set("okta_response_error", { 768 | "okta_plugin_status" => "since was not valid.", 769 | "http_code" => 400 770 | }) 771 | event.tag("_okta_response_error") 772 | decorate(event) 773 | queue << event 774 | end 775 | 776 | @logger.error("Date was not formatted correctly", 777 | :response_code => response.code, 778 | :okta_error => "E0000030", 779 | :date_string => @since) 780 | 781 | @logger.debug("Start date error response", 782 | :response_body => response.body, 783 | :response_headers => response.headers) 784 | 785 | ## If the Okta error code does not match known codes 786 | ## Process it as a generic error 787 | else 788 | handle_unknown_okta_code(queue,response,requested_url,exec_time) 789 | end 790 | when HTTP_TOO_MANY_REQUESTS_429 791 | @codec.decode(response.body) do |decoded| 792 | event = @target ? 
LogStash::Event.new(@target => decoded.to_hash) : decoded 793 | @metadata_function.call(event, requested_url, response, exec_time) 794 | event.set("okta_response_error", { 795 | "okta_plugin_status" => "rate limit exceeded; sleeping.", 796 | "http_code" => 429, 797 | "okta_error" => "E0000047", 798 | "reset_time" => response.headers['x-rate-limit-reset'] 799 | }) 800 | event.tag("_okta_response_error") 801 | decorate(event) 802 | queue << event 803 | end 804 | 805 | now = get_epoch 806 | sleep_time = (response.headers['x-rate-limit-reset'].to_i - now > 60) ? 60 : response.headers['x-rate-limit-reset'].to_i - now 807 | @logger.error("Rate limit exceeded", 808 | :response_code => response.code, 809 | :okta_error => "E0000047", 810 | :sleep_time => sleep_time, 811 | :reset_time => response.headers['x-rate-limit-reset']) 812 | 813 | @logger.debug("rate limit error response", 814 | :response_body => response.body, 815 | :response_headers => response.headers) 816 | 817 | # Use a local function so the test can override it 818 | local_sleep sleep_time 819 | else 820 | handle_unknown_http_code(queue,response,requested_url,exec_time) 821 | end 822 | 823 | end # def handle_success 824 | 825 | private 826 | def get_epoch() 827 | return Time.now.to_i 828 | end 829 | 830 | private 831 | def local_sleep(time) 832 | sleep time 833 | end 834 | private 835 | def handle_unknown_okta_code(queue,response,requested_url,exec_time) 836 | @codec.decode(response.body) do |decoded| 837 | event = @target ? 
LogStash::Event.new(@target => decoded.to_hash) : decoded 838 | @metadata_function.call(event, requested_url, response, exec_time) 839 | event.set("okta_response_error", { 840 | "okta_plugin_status" => "Unknown error code from Okta", 841 | "http_code" => response.code, 842 | }) 843 | event.tag("_okta_response_error") 844 | decorate(event) 845 | queue << event 846 | end 847 | 848 | @logger.error("Okta API Error", 849 | :http_code => response.code, 850 | :body => response.body, 851 | :headers => response.headers) 852 | 853 | end # def handle_unknown_okta_code 854 | 855 | private 856 | def handle_unknown_http_code(queue,response,requested_url,exec_time) 857 | @codec.decode(response.body) do |decoded| 858 | event = @target ? LogStash::Event.new(@target => decoded.to_hash) : decoded 859 | @metadata_function.call(event, requested_url, response, exec_time) 860 | 861 | event.set("http_response_error", { 862 | "okta_plugin_status" => "Unknown HTTP code, review HTTP errors", 863 | "http_code" => response.code, 864 | "http_headers" => response.headers 865 | }) 866 | event.tag("_http_response_error") 867 | decorate(event) 868 | queue << event 869 | end 870 | 871 | @logger.error("HTTP Error", 872 | :http_code => response.code, 873 | :body => response.body, 874 | :headers => response.headers) 875 | end # def handle_unknown_http_code 876 | 877 | private 878 | def handle_failure(queue, exception, requested_url, exec_time) 879 | 880 | @continue = false 881 | @logger.error("Client Connection Error", 882 | :exception => exception.inspect) 883 | 884 | event = LogStash::Event.new 885 | @metadata_function.call(event, requested_url, nil, exec_time) 886 | event.set("http_request_error", { 887 | "okta_plugin_status" => "Client Connection Error", 888 | "connect_error" => exception.message, 889 | "backtrace" => exception.backtrace 890 | }) 891 | event.tag("_http_request_error") 892 | decorate(event) 893 | queue << event 894 | 895 | end # def handle_failure 896 | 897 | private 898 | def 
apply_metadata(event, requested_url, response=nil, exec_time=nil) 899 | 900 | m = { 901 | "host" => @host, 902 | "url" => requested_url 903 | } 904 | 905 | if exec_time 906 | m["runtime_seconds"] = exec_time.round(3) 907 | end 908 | 909 | if response 910 | m["code"] = response.code 911 | m["response_headers"] = response.headers 912 | m["response_message"] = response.message 913 | m["retry_count"] = response.times_retried 914 | end 915 | 916 | event.set(@metadata_target,m) 917 | 918 | end 919 | 920 | # Dummy function to handle noops 921 | private 922 | def noop(*args) 923 | return 924 | end 925 | 926 | private 927 | def fatal_state_file() 928 | @logger.fatal("Unable to save state file after retrying. Exiting...", 929 | :url => @url, 930 | :state_file_path => @state_file_path) 931 | 932 | @logger.fatal("Unable to save state_file_path, " + 933 | "#{@state_file_path} after retrying.") 934 | raise LogStash::EnvironmentError, "Unable to save state_file_path, " + 935 | "#{@state_file_path} after retrying." 936 | end 937 | 938 | private 939 | def error_state_file() 940 | @logger.error("Unable to save state_file_path after retrying three times", 941 | :url => @url, 942 | :state_file_path => @state_file_path) 943 | end 944 | 945 | # based on code from logstash-input-file 946 | private 947 | def atomic_write(path, content) 948 | write_atomically(path) do |io| 949 | io.write("#{content}\n") 950 | end 951 | end 952 | 953 | private 954 | def non_atomic_write(path, content) 955 | IO.open(IO.sysopen(path, "w+")) do |io| 956 | io.write("#{content}\n") 957 | end 958 | end 959 | 960 | 961 | # Write to a file atomically. Useful for situations where you don't 962 | # want other processes or threads to see half-written files. 
963 | # 964 | # File.write_atomically('important.file') do |file| 965 | # file.write('hello') 966 | # end 967 | private 968 | def write_atomically(file_name) 969 | 970 | # Create temporary file with identical permissions 971 | begin 972 | temp_file = File.new(rand_filename(file_name), "w", @state_file_stat.mode) 973 | temp_file.binmode 974 | return_val = yield temp_file 975 | ensure 976 | temp_file.close 977 | end 978 | 979 | # Overwrite original file with temp file 980 | File.rename(temp_file.path, file_name) 981 | 982 | # Unable to get permissions of the original file => return 983 | return return_val if @state_file_stat.nil? 984 | 985 | # Set correct uid/gid on new file 986 | File.chown(@state_file_stat.uid, @state_file_stat.gid, file_name) 987 | 988 | return return_val 989 | end 990 | 991 | private 992 | def rand_filename(prefix) #:nodoc: 993 | [ prefix, Thread.current.object_id, Process.pid, rand(1000000) ].join('.') 994 | end 995 | 996 | ## Not used -- but keeping it in case I need to use it at some point 997 | ## Private utility method. 998 | #private 999 | #def probe_stat_in(dir) #:nodoc: 1000 | # begin 1001 | # basename = rand_filename(".permissions_check") 1002 | # file_name = File.join(dir, basename) 1003 | # #FileUtils.touch(file_name) 1004 | # # 'touch' a file to keep the conditional from happening later 1005 | # File.open(file_name, "w") {} 1006 | # File.stat(file_name) 1007 | # rescue 1008 | # # ... 1009 | # ensure 1010 | # File.delete(file_name) if File.exist?(file_name) 1011 | # end 1012 | #end 1013 | 1014 | private 1015 | def build_state_file_base(settings) #:nodoc: 1016 | if (settings.nil?) 1017 | @logger.warn("Attempting to use LOGSTASH_HOME. Note that this method is deprecated. 
" \ 1018 | "Consider upgrading or using state_file_path config option instead.") 1019 | # This section is going to be deprecated eventually, as path.data will be 1020 | # the default, not an environment variable (SINCEDB_DIR or LOGSTASH_HOME) 1021 | # NOTE: I don't have an answer for this right now, but this raise needs to be moved to `register` 1022 | if ENV["LOGSTASH_HOME"].nil? 1023 | @logger.error("No settings or LOGSTASH_HOME environment variable set, I don't know where " + 1024 | "to keep track of the files I'm watching. " + 1025 | "Set state_file_path in " + 1026 | "in your Logstash config for the file input with " + 1027 | "state_file_path '#{@state_file_path.inspect}'") 1028 | raise LogStash::ConfigurationError, 'The "state_file_path" setting ' + 1029 | 'was not given and the environment variable "LOGSTASH_HOME" ' + 1030 | 'is not set so we cannot build a file path for the state_file_path.' 1031 | end 1032 | logstash_data_path = File.path(ENV["LOGSTASH_HOME"]) 1033 | else 1034 | logstash_data_path = settings.get_value("path.data") 1035 | end 1036 | File.join(logstash_data_path, "plugins", "inputs", "okta_system_log").tap do |path| 1037 | # Ensure that the filepath exists before writing, since it's deeply nested. 1038 | nested_dir_create(path) 1039 | end 1040 | end 1041 | 1042 | private 1043 | def nested_dir_create(path) # :nodoc: 1044 | dirs = [] 1045 | until File.directory?(path) 1046 | dirs.push path 1047 | path = File.dirname(path) 1048 | end 1049 | 1050 | dirs.reverse_each do |dir| 1051 | Dir.mkdir(dir) 1052 | end 1053 | end 1054 | 1055 | private 1056 | def log_trace(message, vars = {}) 1057 | @logger.trace(message, vars) 1058 | end 1059 | 1060 | private 1061 | def log_debug(message, vars = {}) 1062 | @logger.debug(message, vars) 1063 | end 1064 | 1065 | private 1066 | def detect_trace_log_method() #:nodoc: 1067 | begin 1068 | if (@logger.trace?) 
1069 |         return method(:log_trace)
1070 |       end
1071 |     rescue NoMethodError
1072 |       @logger.info("Using debug instead of trace due to lack of support " +
1073 |         "in this version.")
1074 |       return method(:log_debug)
1075 |     end
1076 |     return method(:log_trace)
1077 |   end
1078 | 
1079 |   private
1080 |   def is_defined(str) #:nodoc:
1081 |     return !(str.nil? or str.length == 0)
1082 |   end
1083 | 
1084 |   def detect_write_method(path)
1085 |     if (LogStash::Environment.windows? ||
1086 |       File.chardev?(path) ||
1087 |       File.blockdev?(path) ||
1088 |       File.socket?(path))
1089 |       @logger.info("State file cannot be updated using an atomic write, " +
1090 |         "using non-atomic write", :state_file_path => path)
1091 |       return method(:non_atomic_write)
1092 |     else
1093 |       return method(:atomic_write)
1094 |     end
1095 |   end
1096 | 
1097 |   def detect_state_file_mode(path)
1098 |     if (File.exist?(path))
1099 |       old_stat = File.stat(path)
1100 |     else
1101 |       # We need to create a file anyway so check it with the file created
1102 |       # # If not possible, probe which are the default permissions in the
1103 |       # # destination directory.
1104 |       # old_stat = probe_stat_in(File.dirname(@state_file_path))
1105 | 
1106 |       # 'touch' a file
1107 |       File.open(path, "w") {}
1108 |       old_stat = File.stat(path)
1109 |     end
1110 | 
1111 |     return old_stat ?
old_stat : nil 1112 | 1113 | end 1114 | 1115 | public 1116 | def stop 1117 | # nothing to do in this case so it is not necessary to define stop 1118 | # examples of common "stop" tasks: 1119 | # * close sockets (unblocking blocking reads/accepts) 1120 | # * cleanup temporary files 1121 | # * terminate spawned threads 1122 | begin 1123 | @scheduler.stop 1124 | rescue NoMethodError => e 1125 | unless (e.message == "undefined method `stop' for nil:NilClass") 1126 | raise 1127 | end 1128 | rescue => e 1129 | @logger.warn("Undefined error", :exception => e.inspect) 1130 | raise 1131 | ensure 1132 | if (is_defined(@url)) 1133 | update_state_file() 1134 | end 1135 | end 1136 | end # def stop 1137 | end # class LogStash::Inputs::OktaSystemLog 1138 | -------------------------------------------------------------------------------- /logstash-input-okta_system_log.gemspec: -------------------------------------------------------------------------------- 1 | Gem::Specification.new do |s| 2 | s.name = 'logstash-input-okta_system_log' 3 | s.version = '0.10.0' 4 | s.licenses = ['Apache-2.0'] 5 | s.summary = 'This plugin fetches log events from Okta using the System Log API' 6 | s.homepage = 'https://github.com/SecurityRiskAdvisors/logstash-input-okta_system_log' 7 | s.authors = ['Security Risk Advisors'] 8 | s.email = 'security@securityriskadvisors.com' 9 | s.require_paths = ['lib'] 10 | 11 | # Files 12 | s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT'] 13 | # Tests 14 | s.test_files = s.files.grep(%r{^(test|spec|features)/}) 15 | 16 | # Special flag to let us know this is actually a logstash plugin 17 | s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" } 18 | 19 | # Gem dependencies 20 | s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99" 21 | s.add_runtime_dependency 'logstash-codec-plain' 22 | s.add_runtime_dependency 'stud', "~> 0.0.22" 23 | 
#s.add_runtime_dependency 'logstash-mixin-http_client', "~> 7" # Retaining logstash 7x compat 24 | #s.add_runtime_dependency 'logstash-mixin-http_client', ">= 6.0.0", "< 7.0.0" # Retaining logstash 5/6x compat 25 | #s.add_runtime_dependency 'logstash-mixin-http_client', ">= 2.2.4", "< 3.0.0" # Retaining logstash 2.4 compat 26 | s.add_runtime_dependency 'logstash-mixin-http_client', ">= 2.2.4", "< 8.0.0" # Production versions 27 | s.add_runtime_dependency 'rufus-scheduler', "~>3.0.9" 28 | 29 | s.add_development_dependency 'logstash-codec-json' 30 | s.add_development_dependency 'logstash-codec-line' 31 | s.add_development_dependency 'logstash-devutils', '>= 0.0.16' 32 | s.add_development_dependency 'flores' 33 | s.add_development_dependency 'timecop' 34 | 35 | end 36 | -------------------------------------------------------------------------------- /spec/inputs/okta_system_log_spec.rb: -------------------------------------------------------------------------------- 1 | require "logstash/devutils/rspec/spec_helper" 2 | require 'logstash/inputs/okta_system_log' 3 | require 'flores/random' 4 | require "timecop" 5 | require "base64" 6 | require "rspec/wait" 7 | 8 | describe LogStash::Inputs::OktaSystemLog do 9 | let(:queue) { Queue.new } 10 | let(:default_schedule) { 11 | { "every" => "30s" } 12 | } 13 | let(:default_limit) { 1000 } 14 | let(:default_auth_token_key) { "asdflkjasdflkjasdf932r098-asdf" } 15 | let(:default_host) { "localhost" } 16 | let(:metadata_target) { "_http_poller_metadata" } 17 | let(:default_state_file_path) { "/dev/null" } 18 | let(:default_header) { {"x-rate-limit-remaining" => 3, "x-rate-limit-limit" => 4} } 19 | let(:default_rate_limit) { "RATE_MEDIUM" } 20 | 21 | let(:default_opts) { 22 | { 23 | "schedule" => default_schedule, 24 | "limit" => default_limit, 25 | "hostname" => default_host, 26 | "auth_token_key" => default_auth_token_key, 27 | "metadata_target" => metadata_target, 28 | "state_file_path" => default_state_file_path, 29 | 
"rate_limit" => default_rate_limit, 30 | "codec" => "json" 31 | } 32 | } 33 | let(:klass) { LogStash::Inputs::OktaSystemLog } 34 | 35 | describe "config" do 36 | shared_examples "configuration errors" do 37 | it "raises an exception" do 38 | expect {subject.register}.to raise_exception(LogStash::ConfigurationError) 39 | end 40 | end 41 | 42 | subject { klass.new(opts) } 43 | 44 | before(:each) do 45 | subject 46 | allow(File).to receive(:directory?).with(opts["state_file_path"]) { false } 47 | allow(File).to receive(:exist?).with(opts["state_file_path"]) { true } 48 | allow(File).to receive(:stat).with(opts["state_file_path"]) { double("file_stat") } 49 | # We don't really want to use the atomic write function 50 | allow(subject).to receive(:detect_write_method).with(opts["state_file_path"]) { subject.method(:non_atomic_write) } 51 | allow(File).to receive(:size).with(opts["state_file_path"]) { 0 } 52 | allow(subject).to receive(:update_state_file) { nil } 53 | 54 | # Might need these later 55 | #allow(File).to receive(:read).with(opts["state_file_path"], 1) { "\n" } 56 | #allow(LogStash::Environment).to receive(:windows?) 
{ false } 57 | #allow(File).to receive(:chardev?).with(opts["state_file_path"]) { false } 58 | #allow(File).to receive(:blockdev?).with(opts["state_file_path"]) { false } 59 | #allow(File).to receive(:socket?).with(opts["state_file_path"]) { false } 60 | end 61 | 62 | context "the hostname is not in the correct format" do 63 | let(:opts) { default_opts.merge({"hostname" => "asdf__"}) } 64 | include_examples("configuration errors") 65 | end 66 | 67 | context "both hostname and custom_url are set" do 68 | let(:opts) { default_opts.merge({"custom_url" => "http://localhost/foo/bar"}) } 69 | include_examples("configuration errors") 70 | end 71 | 72 | context "custom_url is in an incorrect format" do 73 | let(:opts) { 74 | opts = default_opts.merge({"custom_url" => "htp://___/foo/bar"}).clone 75 | opts.delete("hostname") 76 | opts 77 | } 78 | include_examples("configuration errors") 79 | end 80 | 81 | context "The since parameter is not in the correct format" do 82 | let(:opts) { default_opts.merge({"since" => "1234567890"}) } 83 | include_examples("configuration errors") 84 | end 85 | 86 | context "The limit parameter is too large" do 87 | let(:opts) { default_opts.merge({"limit" => 10000}) } 88 | include_examples("configuration errors") 89 | end 90 | 91 | context "The limit is too small" do 92 | let(:opts) { default_opts.merge({"limit" => -10000}) } 93 | include_examples("configuration errors") 94 | end 95 | 96 | context "the q parameter has too many items" do 97 | let(:opts) { default_opts.merge({"q" => Array.new(size=11, obj="a")}) } 98 | include_examples("configuration errors") 99 | end 100 | 101 | context "the q parameter item has a space" do 102 | let(:opts) { default_opts.merge({"q" => ["a b"]}) } 103 | include_examples("configuration errors") 104 | end 105 | 106 | context "the q parameter item is too long" do 107 | let(:opts) { default_opts.merge({"q" => ["a" * 41]}) } 108 | include_examples("configuration errors") 109 | end 110 | 111 | context "the rate_limit 
parameter is too large" do 112 | let(:opts) { default_opts.merge({"rate_limit" => "1.5"}) } 113 | include_examples("configuration errors") 114 | end 115 | 116 | context "the rate_limit parameter is too small" do 117 | let(:opts) { default_opts.merge({"rate_limit" => "-0.5"}) } 118 | include_examples("configuration errors") 119 | end 120 | 121 | context "the rate_limit parameter uses a non-standard stand-in" do 122 | let(:opts) { default_opts.merge({"rate_limit" => "RATE_CRAWL"}) } 123 | include_examples("configuration errors") 124 | end 125 | 126 | context "the metadata target is not set" do 127 | let(:opts) { 128 | opts = default_opts.clone 129 | opts.delete("metadata_target") 130 | opts 131 | } 132 | it "sets the metadata function to apply_metadata" do 133 | subject.register 134 | expect(subject.instance_variable_get("@metadata_function")).to eql(subject.method(:apply_metadata)) 135 | expect(subject.instance_variable_get("@metadata_target")).to eql("@metadata") 136 | end 137 | end 138 | 139 | 140 | context "auth_token management" do 141 | let(:auth_file_opts) { 142 | auth_file_opts = default_opts.merge({"auth_token_file" => "/dev/null"}).clone 143 | auth_file_opts.delete("auth_token_key") 144 | auth_file_opts 145 | } 146 | 147 | context "custom_auth_header is defined with auth_token_key" do 148 | let(:opts) {default_opts.merge({"custom_auth_header" => "Basic user:password"})} 149 | include_examples("configuration errors") 150 | end 151 | 152 | context "custom_auth_header is defined with auth_token_file" do 153 | let(:opts) {auth_file_opts.merge({"custom_auth_header" => "Basic user:password"})} 154 | include_examples("configuration errors") 155 | end 156 | 157 | context "both auth_token key and file are provided" do 158 | let(:opts) {default_opts.merge({"auth_token_file" => "/dev/null"})} 159 | include_examples("configuration errors") 160 | end 161 | 162 | context "neither auth_token key nor file are provided" do 163 | let(:opts) { 164 | opts = default_opts.clone 
165 | opts.delete("auth_token_key") 166 | opts 167 | } 168 | include_examples("configuration errors") 169 | end 170 | 171 | context "auth_token_file is too large" do 172 | let(:opts) {auth_file_opts} 173 | before {allow(File).to receive(:size).with(opts["auth_token_file"]) { 1 * 2**11 }} 174 | include_examples("configuration errors") 175 | end 176 | 177 | context "auth_token_file could not be read" do 178 | let(:opts) {auth_file_opts} 179 | before { 180 | allow(File).to receive(:size).with(opts["auth_token_file"]) { 10 } 181 | allow(File).to receive(:read).with(opts["auth_token_file"], 10) { raise IOError } 182 | } 183 | include_examples("configuration errors") 184 | end 185 | 186 | context "auth_token returns an unauthorized error" do 187 | let(:opts) { default_opts } 188 | before do 189 | subject.client.stub("https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 190 | :body => "{}", 191 | :code => klass::HTTP_UNAUTHORIZED_401 192 | ) 193 | end 194 | include_examples("configuration errors") 195 | end 196 | end 197 | end 198 | 199 | describe "instances" do 200 | subject { klass.new(default_opts) } 201 | 202 | before do 203 | subject.client.stub("https://#{default_opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 204 | :body => "{}", 205 | :code => klass::HTTP_OK_200, 206 | :headers => default_header 207 | ) 208 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 209 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 210 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 211 | # We don't really want to use the atomic write function 212 | allow(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 213 | allow(File).to receive(:size).with(default_state_file_path) { 0 } 214 | allow(subject).to receive(:update_state_file) { nil } 215 | subject.register 216 | end 217 | 218 | describe "#run" do 
219 | it "should setup a scheduler" do 220 | runner = Thread.new do 221 | subject.run(double("queue")) 222 | expect(subject.instance_variable_get("@scheduler")).to be_a_kind_of(Rufus::Scheduler) 223 | end 224 | runner.kill 225 | runner.join 226 | end 227 | end 228 | 229 | describe "#run_once" do 230 | it "should issue an async request for each url" do 231 | expect(subject).to receive(:request_async).with(queue).once 232 | 233 | subject.send(:run_once, queue) # :run_once is a private method 234 | end 235 | end 236 | end 237 | 238 | describe "scheduler configuration" do 239 | before do 240 | instance.client.stub("https://#{default_opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 241 | :body => "{}", 242 | :code => klass::HTTP_OK_200, 243 | :headers => default_header 244 | ) 245 | allow(File).to receive(:directory?).and_call_original 246 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 247 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 248 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 249 | # We don't really want to use the atomic write function 250 | allow(instance).to receive(:detect_write_method).with(default_state_file_path) { instance.method(:non_atomic_write) } 251 | allow(File).to receive(:size).with(default_state_file_path) { 0 } 252 | allow(instance).to receive(:update_state_file) { nil } 253 | instance.register 254 | end 255 | 256 | context "given 'cron' expression" do 257 | let(:opts) { default_opts.merge("schedule" => {"cron" => "* * * * * UTC"}) } 258 | let(:instance) { klass.new(opts) } 259 | it "should run at the schedule" do 260 | Timecop.travel(Time.new(2000,1,1,0,0,0,'+00:00')) 261 | Timecop.scale(60) 262 | queue = Queue.new 263 | runner = Thread.new do 264 | instance.run(queue) 265 | end 266 | sleep 3 267 | instance.stop 268 | runner.kill 269 | runner.join 270 | expect(queue.size).to eq(2) 271 | Timecop.return 272 | end 273 | end 274 | 
275 | context "given 'at' expression" do 276 | let(:opts) { default_opts.merge("schedule" => {"at" => "2000-01-01 00:05:00 +0000"}) } 277 | let(:instance) { klass.new(opts) } 278 | it "should run at the schedule" do 279 | Timecop.travel(Time.new(2000,1,1,0,0,0,'+00:00')) 280 | Timecop.scale(60 * 5) 281 | queue = Queue.new 282 | runner = Thread.new do 283 | instance.run(queue) 284 | end 285 | sleep 2 286 | instance.stop 287 | runner.kill 288 | runner.join 289 | expect(queue.size).to eq(1) 290 | Timecop.return 291 | end 292 | end 293 | 294 | context "given 'every' expression" do 295 | let(:opts) { default_opts.merge("schedule" => {"every" => "2s"}) } 296 | let(:instance) { klass.new(opts) } 297 | it "should run at the schedule" do 298 | queue = Queue.new 299 | runner = Thread.new do 300 | instance.run(queue) 301 | end 302 | #T 0123456 303 | #events x x x x 304 | #expects 3 events at T=5 305 | sleep 5 306 | instance.stop 307 | runner.kill 308 | runner.join 309 | expect(queue.size).to eq(3) 310 | end 311 | end 312 | 313 | context "given 'in' expression" do 314 | let(:opts) { default_opts.merge("schedule" => {"in" => "2s"}) } 315 | let(:instance) { klass.new(opts) } 316 | it "should run at the schedule" do 317 | queue = Queue.new 318 | runner = Thread.new do 319 | instance.run(queue) 320 | end 321 | sleep 3 322 | instance.stop 323 | runner.kill 324 | runner.join 325 | expect(queue.size).to eq(1) 326 | end 327 | end 328 | end 329 | 330 | describe "events" do 331 | shared_examples("matching metadata") { 332 | let(:metadata) { event.get(metadata_target) } 333 | let(:options) { defined?(settings) ? 
settings : opts } 334 | # The URL gets modified b/c of the limit that is placed on the API 335 | #let(:metadata_url) { "https://#{options["hostname"]+klass::OKTA_EVENT_LOG_PATH}?limit=#{options["limit"]}" } 336 | let(:metadata_url) { 337 | if (custom_settings) 338 | options["custom_url"]+"?limit=#{options["limit"]}" 339 | else 340 | "https://#{options["hostname"]+klass::OKTA_EVENT_LOG_PATH}?limit=#{options["limit"]}" 341 | end 342 | } 343 | 344 | it "should have the correct request url" do 345 | expect(metadata["url"].to_s).to eql(metadata_url) 346 | end 347 | 348 | it "should have the correct code" do 349 | expect(metadata["code"]).to eql(code) 350 | end 351 | } 352 | 353 | shared_examples "unprocessable_requests" do 354 | let(:poller) { klass.new(settings) } 355 | subject(:event) { 356 | poller.send(:run_once, queue) 357 | queue.pop(true) 358 | } 359 | 360 | before do 361 | unless (custom_settings) 362 | poller.client.stub("https://#{settings["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 363 | :body => "{}", 364 | :code => klass::HTTP_OK_200, 365 | :headers => default_header 366 | ) 367 | end 368 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 369 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 370 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 371 | # We don't really want to use the atomic write function 372 | allow(poller).to receive(:detect_write_method).with(default_state_file_path) { poller.method(:non_atomic_write) } 373 | allow(File).to receive(:size).with(default_state_file_path) { 0 } 374 | allow(poller).to receive(:update_state_file) { nil } 375 | poller.register 376 | allow(poller).to receive(:handle_failure).and_call_original 377 | allow(poller).to receive(:handle_success) 378 | event # materialize the subject 379 | end 380 | 381 | it "should enqueue a message" do 382 | expect(event).to be_a(LogStash::Event) 383 | end 384 | 385 | it "should 
enqueue a message with 'http_request_error' set" do 386 | expect(event.get("http_request_error")).to be_a(Hash) 387 | end 388 | 389 | it "should tag the event with '_http_request_error'" do 390 | expect(event.get("tags")).to include('_http_request_error') 391 | end 392 | 393 | it "should invoke handle failure exactly once" do 394 | expect(poller).to have_received(:handle_failure).once 395 | end 396 | 397 | it "should not invoke handle success at all" do 398 | expect(poller).not_to have_received(:handle_success) 399 | end 400 | 401 | include_examples("matching metadata") 402 | 403 | end 404 | 405 | context "with a non responsive server" do 406 | context "due to an invalid hostname" do # Fail with handlers 407 | let(:custom_settings) { false } 408 | let(:hostname) { "thouetnhoeu89ueoueohtueohtneuohn" } 409 | let(:code) { nil } # no response expected 410 | 411 | let(:settings) { default_opts.merge("hostname" => hostname) } 412 | 413 | include_examples("unprocessable_requests") 414 | end 415 | 416 | context "due to a non-existent host" do # Fail with handlers 417 | let(:custom_settings) { true } 418 | let(:custom_url) { "http://thouetnhoeu89ueoueohtueohtneuohn/path/api" } 419 | let(:code) { nil } # no response expected 420 | 421 | let(:settings) { 422 | 423 | settings = default_opts.merge("custom_url" => custom_url).clone 424 | settings.delete("hostname") 425 | settings 426 | } 427 | 428 | include_examples("unprocessable_requests") 429 | 430 | 431 | end 432 | context "due to a bogus port number" do # fail with return? 
433 | let(:invalid_port) { Flores::Random.integer(65536..1000000) } 434 | let(:custom_settings) { true } 435 | let(:custom_url) { "http://127.0.0.1:#{invalid_port}" } 436 | let(:settings) { 437 | settings = default_opts.merge("custom_url" => custom_url.to_s).clone 438 | settings.delete("hostname") 439 | settings 440 | } 441 | let(:code) { nil } # No response expected 442 | 443 | include_examples("unprocessable_requests") 444 | end 445 | end 446 | 447 | describe "a valid request and decoded response" do 448 | let(:payload) {{"a" => 2, "hello" => ["a", "b", "c"]}} 449 | let(:response_body) { LogStash::Json.dump(payload) } 450 | let(:code) { klass::HTTP_OK_200 } 451 | let(:hostname) { default_host } 452 | let(:custom_settings) { false } 453 | let(:headers) { default_header } 454 | 455 | let(:opts) { default_opts } 456 | let(:instance) { 457 | klass.new(opts) 458 | } 459 | 460 | subject(:event) { 461 | queue.pop(true) 462 | } 463 | 464 | before do 465 | instance.client.stub("https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 466 | :body => "{}", 467 | :code => klass::HTTP_OK_200, 468 | :headers => headers 469 | ) 470 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 471 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 472 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 473 | # We don't really want to use the atomic write function 474 | allow(instance).to receive(:detect_write_method).with(default_state_file_path) { instance.method(:non_atomic_write) } 475 | allow(File).to receive(:size).with(default_state_file_path) { 0 } 476 | allow(instance).to receive(:update_state_file) { nil } 477 | 478 | instance.register 479 | allow(instance).to receive(:decorate) 480 | instance.client.stub(%r{#{opts["hostname"]}.*}, 481 | :body => response_body, 482 | :code => code, 483 | :headers => headers 484 | ) 485 | 486 | allow(instance).to receive(:get_epoch) { 1 } 487 | 
allow(instance).to receive(:local_sleep).with(1) { 1 } 488 | instance.send(:run_once, queue) 489 | end 490 | 491 | it "should have a matching message" do 492 | expect(event.to_hash).to include(payload) 493 | end 494 | 495 | it "should decorate the event" do 496 | expect(instance).to have_received(:decorate).once 497 | end 498 | 499 | include_examples("matching metadata") 500 | 501 | context "with an empty body" do 502 | let(:response_body) { "" } 503 | it "should return an empty event" do 504 | expect(event.get("[_http_poller_metadata][response_headers][content-length]")).to eql("0") 505 | end 506 | end 507 | 508 | context "with metadata omitted" do 509 | let(:opts) { 510 | opts = default_opts.clone 511 | opts.delete("metadata_target") 512 | opts 513 | } 514 | 515 | it "should not have any metadata on the event" do 516 | expect(event.get(metadata_target)).to be_nil 517 | end 518 | end 519 | 520 | context "with a specified target" do 521 | let(:target) { "mytarget" } 522 | let(:opts) { default_opts.merge("target" => target) } 523 | 524 | it "should store the event info in the target" do 525 | # When events go through the pipeline they are java-ified 526 | # this normalizes the payload to java types 527 | payload_normalized = LogStash::Json.load(LogStash::Json.dump(payload)) 528 | expect(event.get(target)).to include(payload_normalized) 529 | end 530 | end 531 | 532 | context "with non-200 HTTP response codes" do 533 | let(:code) { |example| example.metadata[:http_code] } 534 | let(:response_body) { "{}" } 535 | 536 | it "responds to a 500 code", :http_code => 500 do 537 | expect(event.to_hash).to include("http_response_error") 538 | expect(event.to_hash["http_response_error"]).to include({"http_code" => code}) 539 | expect(event.get("tags")).to include('_http_response_error') 540 | end 541 | it "responds to a 401/Unauthorized code", :http_code => 401 do 542 | expect(event.to_hash).to include("okta_response_error") 543 | 
expect(event.to_hash["okta_response_error"]).to include({"http_code" => code}) 544 | expect(event.get("tags")).to include('_okta_response_error') 545 | end 546 | it "responds to a 400 code", :http_code => 400 do 547 | expect(event.to_hash).to include("okta_response_error") 548 | expect(event.to_hash["okta_response_error"]).to include({"http_code" => code}) 549 | expect(event.get("tags")).to include('_okta_response_error') 550 | end 551 | context "when the request rate limit is reached" do 552 | let(:headers) { {"x-rate-limit-remaining" => 0, "x-rate-limit-reset" => 0} } 553 | it "reports and sleeps for the designated time", :http_code => 429 do 554 | expect(instance).to have_received(:get_epoch) 555 | expect(instance).to have_received(:local_sleep).with(1) 556 | expect(event.to_hash).to include("okta_response_error") 557 | expect(event.to_hash["okta_response_error"]).to include({"http_code" => code}) 558 | expect(event.to_hash["okta_response_error"]).to include({"reset_time" => 0}) 559 | expect(event.get("tags")).to include('_okta_response_error') 560 | end 561 | end 562 | context "specific okta errors" do 563 | let(:payload) { {:okta_error => "E0000031" } } 564 | let(:response_body) { LogStash::Json.dump(payload) } 565 | 566 | describe "filter string error" do 567 | let(:payload) { {:okta_error => "E0000031" } } 568 | let(:response_body) { LogStash::Json.dump(payload) } 569 | it "generates a filter string error event", :http_code => 400 do 570 | expect(event.to_hash).to include("okta_response_error") 571 | expect(event.to_hash["okta_response_error"]).to include({"http_code" => code}) 572 | expect(event.to_hash["okta_response_error"]).to include({"okta_plugin_status" => "Filter string was not valid."}) 573 | expect(event.get("tags")).to include('_okta_response_error') 574 | end 575 | end 576 | 577 | describe "start_date error" do 578 | let(:payload) { {:okta_error => "E0000030" } } 579 | let(:response_body) { LogStash::Json.dump(payload) } 580 | it "generates a 
start_date error event", :http_code => 400 do 581 | expect(event.to_hash).to include("okta_response_error") 582 | expect(event.to_hash["okta_response_error"]).to include({"http_code" => code}) 583 | expect(event.to_hash["okta_response_error"]).to include({"okta_plugin_status" => "since was not valid."}) 584 | expect(event.get("tags")).to include('_okta_response_error') 585 | end 586 | end 587 | end 588 | end 589 | end 590 | end 591 | 592 | describe "stopping" do 593 | let(:config) { default_opts } 594 | before do 595 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 596 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 597 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 598 | # We don't really want to use the atomic write function 599 | allow(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 600 | allow(File).to receive(:size).with(default_state_file_path) { 0 } 601 | allow(subject).to receive(:update_state_file) { nil } 602 | end 603 | it_behaves_like "an interruptible input plugin" 604 | end 605 | 606 | describe "state file" do 607 | context "when being setup" do 608 | 609 | let(:opts) { 610 | opts = default_opts.merge({"state_file_path" => default_state_file_path}).clone 611 | opts 612 | } 613 | 614 | subject { klass.new(opts) } 615 | 616 | let(:state_file_url) { "https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH}?limit=#{opts["limit"]}&after=asdfasdf" } 617 | let(:test_url) { "https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH}?limit=#{opts["limit"]}" } 618 | let(:state_file_url_changed) { "http://example.com/?limit=1000" } 619 | 620 | before(:each) do 621 | subject.client.stub("https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 622 | :body => "{}", 623 | :code => klass::HTTP_OK_200, 624 | :headers => default_header 625 | ) 626 | end 627 | 628 | 629 | it "sets up the state file 
correctly" do 630 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 631 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 632 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 633 | # We don't really want to use the atomic write function 634 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 635 | expect(File).to receive(:size).with(default_state_file_path) { 0 } 636 | subject.register 637 | end 638 | 639 | it "raises an error on file read" do 640 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 641 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 642 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 643 | # We don't really want to use the atomic write function 644 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 645 | expect(File).to receive(:size).with(default_state_file_path) { 10 } 646 | expect(File).to receive(:read).with(default_state_file_path, 10) { raise IOError } 647 | expect {subject.register}.to raise_exception(LogStash::ConfigurationError) 648 | end 649 | 650 | it "creates a url based on the state file" do 651 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 652 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 653 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 654 | # We don't really want to use the atomic write function 655 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 656 | expect(File).to receive(:size).with(default_state_file_path) { "#{state_file_url}\n".length } 657 | expect(File).to receive(:read).with(default_state_file_path, "#{state_file_url}\n".length) { 
"#{state_file_url}\n" } 658 | subject.register 659 | expect(subject.instance_variable_get("@url")).to eql(state_file_url) 660 | end 661 | 662 | it "uses the URL from options when state file is empty" do 663 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 664 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 665 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 666 | # We don't really want to use the atomic write function 667 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 668 | expect(File).to receive(:size).with(default_state_file_path) { 0 } 669 | subject.register 670 | expect(subject.instance_variable_get("@url").to_s).to eql(test_url) 671 | end 672 | 673 | it "raises an error when the config url is not part of the saved state" do 674 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 675 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 676 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 677 | # We don't really want to use the atomic write function 678 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 679 | expect(File).to receive(:size).with(default_state_file_path) { "#{state_file_url_changed}\n".length } 680 | expect(File).to receive(:read).with(default_state_file_path, "#{state_file_url_changed}\n".length) { "#{state_file_url_changed}\n" } 681 | expect {subject.register}.to raise_exception(LogStash::ConfigurationError) 682 | end 683 | 684 | it "sets the the failure mode to error" do 685 | expect(File).to receive(:directory?).with(default_state_file_path) { false } 686 | expect(File).to receive(:exist?).with(default_state_file_path) { true } 687 | expect(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 688 | # We 
don't really want to use the atomic write function 689 | expect(subject).to receive(:detect_write_method).with(default_state_file_path) { subject.method(:non_atomic_write) } 690 | expect(File).to receive(:size).with(default_state_file_path) { 0 } 691 | subject.register 692 | expect(subject.instance_variable_get("@state_file_failure_function")).to eql(subject.method(:error_state_file)) 693 | end 694 | end 695 | 696 | context "when running" do 697 | let(:opts) { 698 | opts = default_opts.merge({"state_file_path" => default_state_file_path}).clone 699 | opts 700 | } 701 | let(:instance) { klass.new(opts) } 702 | 703 | let(:payload) { '[{"eventId":"tevIMARaEyiSzm3sm1gvfn8cA1479235809000"}]' } 704 | let(:response_body) { LogStash::Json.dump(payload) } 705 | 706 | let(:url_initial) { "https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH}?after=1" } 707 | let(:url_final) { "https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH}?after=2" } 708 | let(:headers) { default_header.merge({"link" => ["<#{url_initial}>; rel=\"self\"", "<#{url_final}>; rel=\"next\""]}).clone } 709 | let(:code) { klass::HTTP_OK_200 } 710 | let(:file_path) { opts['state_file_dir'] + opts["state_file_prefix"] } 711 | let(:file_obj) { double("file") } 712 | let(:fd) { double("fd") } 713 | let(:time_anchor) { 2 } 714 | 715 | before(:each) do |example| 716 | allow(File).to receive(:directory?).with(default_state_file_path) { false } 717 | allow(File).to receive(:exist?).with(default_state_file_path) { true } 718 | allow(File).to receive(:stat).with(default_state_file_path) { double("file_stat") } 719 | # We don't really want to use the atomic write function 720 | allow(instance).to receive(:detect_write_method).with(default_state_file_path) { instance.method(:non_atomic_write) } 721 | allow(File).to receive(:size).with(default_state_file_path) { "#{url_initial}\n".length } 722 | allow(File).to receive(:read).with(default_state_file_path, "#{url_initial}\n".length) { "#{url_initial}\n" } 723 | 724 | 
instance.client.stub("https://#{opts["hostname"]+klass::OKTA_EVENT_LOG_PATH+klass::AUTH_TEST_URL}", 725 | :body => "{}", 726 | :code => code, 727 | :headers => default_header 728 | ) 729 | instance.register 730 | instance.client.stub( url_initial, 731 | :headers => headers, 732 | :body => response_body, 733 | :code => code ) 734 | 735 | allow(instance).to receive(:handle_failure) { instance.instance_variable_set(:@continue,false) } 736 | allow(instance).to receive(:get_time_int) { time_anchor } 737 | end 738 | 739 | it "updates the state file after data is fetched" do 740 | expect(IO).to receive(:sysopen).with(default_state_file_path, "w+") { fd } 741 | expect(IO).to receive(:open).with(fd).and_yield(file_obj) 742 | expect(file_obj).to receive(:write).with("#{url_final}\n") { url_final.length + 1 } 743 | instance.client.stub( url_final, 744 | :headers => default_header.merge({:link => "<#{url_final}>; rel=\"self\""}).clone, 745 | :body => "{}", 746 | :code => code ) 747 | instance.send(:run_once, queue) 748 | end 749 | 750 | it "updates the state file after a failure" do 751 | expect(IO).to receive(:sysopen).with(default_state_file_path, "w+") { fd } 752 | expect(IO).to receive(:open).with(fd).and_yield(file_obj) 753 | expect(file_obj).to receive(:write).with("#{url_final}\n") { url_final.length + 1 } 754 | instance.send(:run_once, queue) 755 | end 756 | 757 | context "when stop is called" do 758 | it "saves the state in the file" do 759 | # We are still testing the same condition 760 | expect(IO).to receive(:sysopen).with(default_state_file_path, "w+") { fd } 761 | expect(IO).to receive(:open).with(fd).and_yield(file_obj) 762 | expect(file_obj).to receive(:write).with("#{url_final}\n") { url_final.length + 1 } 763 | 764 | # Force a sleep to make the thread hang in the failure condition. 
765 | allow(instance).to receive(:handle_failure) { 766 | instance.instance_variable_set(:@continue,false) 767 | sleep(30) 768 | } 769 | 770 | plugin_thread = Thread.new(instance, queue) { |subject, queue| 771 | subject.send(:run, queue) 772 | } 773 | 774 | # Sleep for a bit to make sure things are started. 775 | sleep 0.5 776 | expect(plugin_thread).to be_alive 777 | 778 | instance.do_stop 779 | 780 | # As they say in the logstash thread, why 3? 781 | # Because 2 is too short, and 4 is too long. 782 | wait(3).for { plugin_thread }.to_not be_alive 783 | end 784 | end 785 | end 786 | end 787 | end 788 | --------------------------------------------------------------------------------