├── .gitignore ├── README.md ├── bin ├── build_linux.sh ├── createLogs.sh ├── packer │ ├── config │ │ └── user-data.sh │ ├── ec2 │ │ └── home │ │ │ └── centos │ │ │ ├── build-linux.sh │ │ │ └── etc │ │ │ ├── journald-cloudwatch.conf │ │ │ └── systemd │ │ │ └── system │ │ │ └── journald-cloudwatch.service │ ├── ec2_env.sh │ ├── ec2_env.sh_example │ ├── getDevEc2Host.sh │ ├── loginIntoEc2Dev.sh │ ├── packer_docker.json │ ├── packer_ec2.json │ ├── runDocker.sh │ ├── runEc2Dev.sh │ └── scripts │ │ ├── 000-provision.sh │ │ ├── 040-logagent.sh │ │ └── ec2-provision.sh ├── run_build_linux.sh └── run_test_container.sh ├── cloud-watch ├── Journal.go ├── aws.go ├── cloudwatch_journal_repeater.go ├── cloudwatch_journal_repeater_test.go ├── config.go ├── config_test.go ├── creators.go ├── journal_darwin.go ├── journal_linux.go ├── journal_linux_test.go ├── mock.go ├── read_test.go ├── record.go ├── record_test.go └── workers.go ├── docs ├── README.md ├── images │ └── checker.png ├── index.html ├── javascripts │ └── scale.fix.js ├── params.json └── stylesheets │ ├── github-dark.css │ ├── github-light.css │ ├── normalize.css │ ├── styles.css │ └── stylesheet.css ├── main.go ├── main └── test.go └── samples ├── output.json └── sample.conf /.gitignore: -------------------------------------------------------------------------------- 1 | .idea/ 2 | systemd-cloud-watch 3 | systemd-cloud-watch.iml 4 | systemd-cloud-watch_linux 5 | bin/packer/ec2_env.sh 6 | /bin/aws.sh 7 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Systemd Journal CloudWatch Writer 2 | 3 | This utility reads from the [systemd journal](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html), 4 | and sends the data in batches to [Cloudwatch](https://aws.amazon.com/cloudwatch/). 5 | 6 | This is an alternative process to the AWS-provided logs agent. 
7 | The AWS logs agent copies data from on-disk text log files into [Cloudwatch](https://aws.amazon.com/cloudwatch/). 8 | This utility `systemd-cloud-watch` reads the `systemd journal` and writes that data in batches to CloudWatch. 9 | 10 | There are other ways to do this using various techniques, but depending on the size of the log messages and of their core fields, 11 | those methods can be fragile because AWS CloudWatch limits the size of each message. 12 | This utility allows you to cap the log field size, include only the fields that you want, or 13 | exclude the fields you don't want. We find that this is not only useful but essential. 14 | 15 | 16 | ## Log format 17 | 18 | The journal event data is written to ***CloudWatch*** Logs in JSON format, making it amenable to filtering using the JSON filter syntax. 19 | Log records are translated to ***CloudWatch*** JSON events using a structure like the following: 20 | 21 | #### Sample log 22 | ```json 23 | { 24 | "instanceId" : "i-xxxxxxxx", 25 | "pid" : 12354, 26 | "uid" : 0, 27 | "gid" : 0, 28 | "cmdName" : "cron", 29 | "exe" : "/usr/sbin/cron", 30 | "cmdLine" : "/usr/sbin/CRON -f", 31 | "systemdUnit" : "cron.service", 32 | "bootId" : "fa58079c7a6d12345678b6ebf1234567", 33 | "hostname" : "ip-10-1-0-15", 34 | "transport" : "syslog", 35 | "priority" : "INFO", 36 | "message" : "pam_unix(cron:session): session opened for user root by (uid=0)", 37 | "syslogFacility" : 10, 38 | "syslogIdent" : "CRON" 39 | } 40 | ``` 41 | 42 | The JSON-formatted log events could also be exported into an AWS ElasticSearch instance using the ***CloudWatch*** 43 | sync mechanism. Once in ElasticSearch, you can use an ELK stack to obtain more elaborate filtering and query capabilities. 44 | 45 | 46 | ## Installation 47 | 48 | If you have a binary distribution, you just need to drop the executable file somewhere. 49 | 50 | This tool assumes that it is running on an EC2 instance. 51 | 52 | This tool uses `libsystemd` to access the journal. 
systemd-based distributions generally ship 53 | with this already installed, but if yours doesn't, you must install the library manually before 54 | this tool will work. 55 | 56 | There are instructions below on how to install the Linux requirements for development; see 57 | [Setting up a Linux env for testing/developing (CentOS7)](#setting-up-a-linux-env-for-testingdeveloping-centos7). 58 | 59 | We also have two excellent examples of setting up a dev environment using [packer](https://www.packer.io/) for both 60 | [AWS EC2](#building-the-ec2-image-with-packer-to-build-the-linux-instance-to-build-this-project) and 61 | [Docker](#building-the-docker-image-to-build-the-linux-instance-to-build-this-project). We set up CentOS 7. 62 | The EC2 instance packer build uses the ***aws command line*** to create and connect to a running image. 63 | These should be instructive for how to set up this utility in your environment to run with ***systemd*** as we provide 64 | all of the systemd scripts in the packer provisioning scripts for EC2. An example is good. A running example is better. 65 | 66 | ## Configuration 67 | 68 | This tool uses a small configuration file to set some values that are required for its operation. 69 | Most of the configuration values are optional and have default settings, but a couple are required. 70 | 71 | The configuration file uses a syntax like this: 72 | 73 | ```js 74 | log_group = "my-awesome-app" 75 | 76 | ``` 77 | 78 | The following configuration settings are supported: 79 | 80 | * `aws_region`: (Optional) The AWS region whose CloudWatch Logs API will be written to. If not provided, 81 | this defaults to the region where the host EC2 instance is running. 82 | 83 | * `ec2_instance_id`: (Optional) The id of the EC2 instance on which the tool is running. There is very 84 | little reason to set this, since it will be automatically set to the id of the host EC2 instance. 
85 | 86 | * `journal_dir`: (Optional) Override the directory where the systemd journal can be found. This is 87 | useful in conjunction with remote log aggregation, to work with journals synced from other systems. 88 | The default is to use the local system's journal. 89 | 90 | * `log_group`: (Required) The name of the CloudWatch log group to write logs into. This log group must 91 | be created before running the program. 92 | 93 | * `log_priority`: (Optional) The highest priority of the log messages to read (on a 0-7 scale). This defaults 94 | to DEBUG (all messages). This has a behaviour similar to `journalctl -p <priority>`. At the moment, only 95 | a single value can be specified, not a range. Possible values are: `0,1,2,3,4,5,6,7` or one of the corresponding 96 | `"emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"`. 97 | When a single log level is specified, all messages with this log level or a lower (hence more important) 98 | log level are read and pushed to CloudWatch. For more information about priority levels, look at 99 | https://www.freedesktop.org/software/systemd/man/journalctl.html 100 | 101 | * `log_stream`: (Optional) The name of the CloudWatch log stream to write logs into. This defaults to 102 | the EC2 instance id. Each running instance of this application (along with any other applications 103 | writing logs into the same log group) must have a unique `log_stream` value. If the given log stream 104 | doesn't exist then it will be created before writing the first set of journal events. 105 | 106 | * `buffer_size`: (Optional) The size of the event buffer to send to the CloudWatch Logs API. The default is 50. 107 | This means that up to 50 log events are sent to CloudWatch at a time. 108 | 109 | * `fields`: (Optional) Specifies which fields should be included in the JSON map that is sent to CloudWatch. 110 | 111 | * `omit_fields`: (Optional) Specifies which fields should NOT be included in the JSON map that is sent to CloudWatch. 
112 | 113 | * `field_length`: (Optional) Specifies how long string fields can be in the JSON map that is sent to CloudWatch. 114 | The default is 255 characters. 115 | 116 | * `queue_batch_size` : (Optional) Internal. Defaults to 10,000 entries; the size of one queue batch. This is the chunk of log entries 117 | that is sent to the CloudWatch repeater at once. 118 | 119 | * `queue_channel_size`: (Optional) Internal. Defaults to 3; how many batches of `queue_batch_size` 120 | can be pending before the journald reader waits for the CloudWatch repeater. 121 | 122 | * `queue_poll_duration_ms` : (Optional) Internal. Defaults to 10 ms; how long the queue manager waits, when there are no log entries to send, 123 | before checking again to see if there are log entries to send. 124 | 125 | * `queue_flush_log_ms` : (Optional) If `queue_batch_size` has not been met because there are no more journald entries to 126 | read, how long to wait before flushing the buffer to the CloudWatch repeater. Defaults to 100 ms. 127 | 128 | * `debug`: (Optional) Turns on debug logging. 129 | 130 | * `local`: (Optional) Used for unit testing. Will not try to create an AWS metadata client to read the region and AWS credentials. 131 | 132 | * `tail`: (Optional) Start from the tail of the log and only send new log entries. This is useful on reboot so you don't send all of the 133 | logs already on the system (sending everything is the default behavior). 134 | 135 | * `rewind`: (Optional) Used to rewind X number of entries from the tail of the log. Must be used in conjunction with the 136 | `tail` setting. 137 | 138 | * `mock-cloud-watch` : (Optional) Used to send logs to a Journal Repeater that just prints the message and priority to the console. 139 | This is used for development only. 
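Putting several of these settings together, a configuration file might look like the following sketch. Only `log_group` is required; the group and stream names here are illustrative, and the exact value formats for non-string settings are assumptions based on the HCL-style sample shown earlier:

```js
log_group    = "my-awesome-app"
log_stream   = "my-app-instance-1"
log_priority = "info"
buffer_size  = 50
field_length = 255
tail         = true
debug        = false
```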
140 | 141 | 142 | If your average log message is 500 bytes and you use the default settings, then, assuming the server is generating 143 | journald messages rapidly, this tool could use a heap of up to `queue_channel_size` (3) * `queue_batch_size` (10,000) * 500 bytes 144 | = 15,000,000 bytes (~15 MB). If you have a very resource-constrained env, reduce the `queue_batch_size` and/or the `queue_channel_size`. 145 | 146 | 147 | 148 | ### AWS API access 149 | 150 | This program requires access to call some of the Cloudwatch API functions. The recommended way to 151 | achieve this is to create an 152 | [IAM Instance Profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) 153 | that grants your EC2 instance a role that has Cloudwatch API access. The program will automatically 154 | discover and make use of instance profile credentials. 155 | 156 | The following IAM policy grants the required access across all log groups in all regions: 157 | 158 | #### IAM file 159 | ```json 160 | { 161 | "Version": "2012-10-17", 162 | "Statement": [ 163 | { 164 | "Effect": "Allow", 165 | "Action": [ 166 | "logs:CreateLogStream", 167 | "logs:PutLogEvents", 168 | "logs:DescribeLogStreams" 169 | ], 170 | "Resource": [ 171 | "arn:aws:logs:*:*:log-group:*", 172 | "arn:aws:logs:*:*:log-group:*:log-stream:*" 173 | ] 174 | } 175 | ] 176 | } 177 | ``` 178 | 179 | In more complex environments you may want to restrict further which regions, groups and streams 180 | the instance can write to. You can do this by adjusting the two ARN strings in the `"Resource"` section: 181 | 182 | * The first `*` in each string can be replaced with an AWS region name like `us-east-1` 183 | to grant access only within the given region. 184 | * The `*` after `log-group` in each string can be replaced with a Cloudwatch Logs log group name 185 | to grant access only to the named group. 
186 | * The `*` after `log-stream` in the second string can be replaced with a Cloudwatch Logs log stream 187 | name to grant access only to the named stream. 188 | 189 | Other combinations are possible too. For more information, see 190 | [the reference on ARNs and namespaces](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-cloudwatch-logs). 191 | 192 | 193 | 194 | ### Coexisting with the official Cloudwatch Logs agent 195 | 196 | This application can run on the same host as the official Cloudwatch Logs agent but care must be taken 197 | to ensure that they each use a different log stream name. Only one process may write into each log 198 | stream. 199 | 200 | ## Running on System Boot 201 | 202 | This program is best used as a persistent service that starts on boot and keeps running until the 203 | system is shut down. If you're using `journald` then you're presumably using systemd; you can create 204 | a systemd unit for this service. For example: 205 | 206 | ``` 207 | [Unit] 208 | Description=journald-cloudwatch-logs 209 | Wants=basic.target 210 | After=basic.target network.target 211 | 212 | [Service] 213 | User=nobody 214 | Group=nobody 215 | ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf 216 | KillMode=process 217 | Restart=on-failure 218 | RestartSec=42s 219 | ``` 220 | 221 | This program is designed under the assumption that it will run constantly from some point during 222 | system boot until the system shuts down. 223 | 224 | If the service is stopped while the system is running and then later started again, it will 225 | "lose" any journal entries that were written while it wasn't running. However, on the initial 226 | run after each boot it will clear the backlog of logs created during the boot process, so it 227 | is not necessary to run the program particularly early in the boot process unless you wish 228 | to *promptly* capture startup messages. 
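As a sketch of wiring the above together, the script below stages the example unit file and a minimal config into a filesystem tree and shows the commands to enable the service at boot. The paths and unit name come from the example unit; the added `[Install]` section (needed for `systemctl enable`) and the `DESTDIR` dry-run convention are assumptions, not part of the project:

```shell
#!/usr/bin/env bash
# Sketch: stage the journald-cloudwatch-logs unit and config, then enable it.
# DESTDIR defaults to a scratch dir so you can dry-run the layout; set it to
# the empty string and run as root to install onto the real filesystem.
set -e
DESTDIR="${DESTDIR:-$(mktemp -d)}"

mkdir -p "${DESTDIR}/etc/systemd/system" "${DESTDIR}/usr/local/etc"

# Unit file from the example above, plus an [Install] section so that
# `systemctl enable` has a target to hook the service into.
cat > "${DESTDIR}/etc/systemd/system/journald-cloudwatch-logs.service" <<'EOF'
[Unit]
Description=journald-cloudwatch-logs
Wants=basic.target
After=basic.target network.target

[Service]
User=nobody
Group=nobody
ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
EOF

# Minimal config: log_group is the only required setting.
echo 'log_group = "my-awesome-app"' > "${DESTDIR}/usr/local/etc/journald-cloudwatch-logs.conf"

echo "staged under: ${DESTDIR}"
# On the real host (DESTDIR empty, running as root) you would follow up with:
#   systemctl daemon-reload
#   systemctl enable --now journald-cloudwatch-logs
```

Because the program recovers its position from CloudWatch itself, enabling it late in boot is fine; the first run after boot drains the backlog as described above.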
229 | 230 | ## Building 231 | 232 | #### Test cloud-watch package 233 | ```sh 234 | go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 235 | ``` 236 | 237 | 238 | #### Build and Test on Linux (Centos7) 239 | ```sh 240 | ./run_build_linux.sh 241 | ``` 242 | 243 | The above starts up a docker container, runs `go get`, `go build`, `go test` and then copies the binary to 244 | `systemd-cloud-watch_linux`. 245 | 246 | #### Debug process running on Linux 247 | ```sh 248 | ./run_test_container.sh 249 | ``` 250 | 251 | 252 | The above starts up a docker container that has all the prerequisites needed to 253 | compile and test this project, and that you can develop in. 254 | 255 | #### Sample debug session 256 | ```sh 257 | $ ./run_test_container.sh 258 | latest: Pulling from advantageous/golang-cloud-watch 259 | Digest: sha256:eaf5c0a387aee8cc2d690e1c5e18763e12beb7940ca0960ce1b9742229413e71 260 | Status: Image is up to date for advantageous/golang-cloud-watch:latest 261 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/ 262 | .git/ README.md cloud-watch/ packer/ sample.conf 263 | .gitignore build_linux.sh main.go run_build_linux.sh systemd-cloud-watch.iml 264 | .idea/ cgroup/ output.json run_test_container.sh systemd-cloud-watch_linux 265 | 266 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/ 267 | 268 | [root@6e0d1f984c03 systemd-cloud-watch]# ls 269 | README.md build_linux.sh cgroup cloud-watch main.go output.json packer run_build_linux.sh 270 | run_test_container.sh sample.conf systemd-cloud-watch.iml systemd-cloud-watch_linux 271 | 272 | [root@6e0d1f984c03 systemd-cloud-watch]# source ~/.bash_profile 273 | 274 | [root@6e0d1f984c03 systemd-cloud-watch]# export GOPATH=/gopath 275 | 276 | [root@6e0d1f984c03 systemd-cloud-watch]# /usr/lib/systemd/systemd-journald & 277 | [1] 24 278 | 279 | [root@6e0d1f984c03 systemd-cloud-watch]# systemd-cat echo "RUNNING JAVA BATCH JOB - ADF BATCH from 
`pwd`" 280 | 281 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go clean" 282 | Running go clean 283 | 284 | [root@6e0d1f984c03 systemd-cloud-watch]# go clean 285 | 286 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go get" 287 | Running go get 288 | 289 | [root@6e0d1f984c03 systemd-cloud-watch]# go get 290 | 291 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go build" 292 | Running go build 293 | [root@6e0d1f984c03 systemd-cloud-watch]# go build 294 | 295 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go test" 296 | Running go test 297 | 298 | [root@6e0d1f984c03 systemd-cloud-watch]# go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 299 | === RUN TestRepeater 300 | config DEBUG: 2016/11/30 08:53:34 config.go:66: Loading log... 301 | aws INFO: 2016/11/30 08:53:34 aws.go:42: Config set to local 302 | aws INFO: 2016/11/30 08:53:34 aws.go:72: Client missing credentials not looked up 303 | aws INFO: 2016/11/30 08:53:34 aws.go:50: Client missing using config to set region 304 | aws INFO: 2016/11/30 08:53:34 aws.go:52: AWSRegion missing using default region us-west-2 305 | repeater ERROR: 2016/11/30 08:53:44 cloudwatch_journal_repeater.go:141: Error from putEvents NoCredentialProviders: no valid providers in chain. Deprecated. 306 | For verbose messaging see aws.Config.CredentialsChainVerboseErrors 307 | --- SKIP: TestRepeater (10.01s) 308 | cloudwatch_journal_repeater_test.go:43: Skipping WriteBatch, you need to setup AWS credentials for this to work 309 | === RUN TestConfig 310 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 311 | test INFO: 2016/11/30 08:53:44 config_test.go:33: [Foo Bar] 312 | --- PASS: TestConfig (0.00s) 313 | === RUN TestLogOmitField 314 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 
315 | --- PASS: TestLogOmitField (0.00s) 316 | === RUN TestNewJournal 317 | --- PASS: TestNewJournal (0.00s) 318 | === RUN TestSdJournal_Operations 319 | --- PASS: TestSdJournal_Operations (0.00s) 320 | journal_linux_test.go:41: Read value=Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available → current limit 4.0G). 321 | === RUN TestNewRecord 322 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 323 | --- PASS: TestNewRecord (0.00s) 324 | === RUN TestLimitFields 325 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 326 | --- PASS: TestLimitFields (0.00s) 327 | === RUN TestOmitFields 328 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 329 | --- PASS: TestOmitFields (0.00s) 330 | PASS 331 | ok github.com/advantageous/systemd-cloud-watch/cloud-watch 10.017s 332 | ``` 333 | 334 | 335 | 336 | 337 | #### Building the docker image to build the linux instance to build this project 338 | 339 | ```sh 340 | # from project root 341 | cd packer 342 | packer build packer_docker.json 343 | ``` 344 | 345 | 346 | #### To run docker dev image 347 | ```sh 348 | # from project root 349 | cd packer 350 | ./runDocker.sh 351 | 352 | ``` 353 | 354 | #### Building the ec2 image with packer to build the linux instance to build this project 355 | 356 | ```sh 357 | # from project root 358 | cd packer 359 | packer build packer_ec2.json 360 | ``` 361 | 362 | We use the [docker](https://www.packer.io/docs/builders/docker.html) support for [packer](https://www.packer.io/). 363 | ("Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.") 364 | 365 | Use `ec2_env.sh_example` to create a `ec2_env.sh` with the AMI id that was just created. 
366 | 367 | #### ec2_env.sh_example 368 | ``` 369 | #!/usr/bin/env bash 370 | export ami=ami-YOURAMI 371 | export subnet=subnet-YOURSUBNET 372 | export security_group=sg-YOURSG 373 | export iam_profile=YOUR_IAM_ROLE 374 | export key_name=MY_PEM_FILE_KEY_NAME 375 | 376 | ``` 377 | 378 | ##### Using EC2 image (assumes you have ~/.ssh config setup) 379 | ```sh 380 | # from project root 381 | cd packer 382 | 383 | # Run and log into dev env running in EC2 384 | ./runEc2Dev.sh 385 | 386 | # Log into running server 387 | ./loginIntoEc2Dev.sh 388 | 389 | ``` 390 | 391 | 392 | 393 | 394 | 395 | ## Setting up a Linux env for testing/developing (CentOS7). 396 | ```sh 397 | yum -y install wget 398 | yum install -y git 399 | yum install -y gcc 400 | yum install -y systemd-devel 401 | 402 | 403 | echo "installing go" 404 | cd /tmp 405 | wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz 406 | tar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz 407 | rm go1.7.3.linux-amd64.tar.gz 408 | echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile 409 | ``` 410 | 411 | ## Setting up Java to write to systemd journal 412 | 413 | #### gradle build 414 | ``` 415 | compile 'org.gnieh:logback-journal:0.2.0' 416 | 417 | ``` 418 | 419 | #### logback.xml 420 | ```xml 421 | <!-- The XML content of this example was lost when the README was rendered. It configured a logback appender from org.gnieh:logback-journal that writes log events to the systemd journal, with the message prefix {"serviceName":"adfCalcBatch","serviceHost":"${HOST}"} --> 435 | ``` 436 | 437 | ## Commands for controlling systemd service EC2 dev env 438 | 439 | ```sh 440 | # Get status 441 | sudo systemctl status journald-cloudwatch 442 | # Stop Service 443 | sudo systemctl stop journald-cloudwatch 444 | # Find the service 445 | ps -ef | grep cloud 446 | # Run service manually 447 | /usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf 448 | 449 | ``` 450 | 451 | 452 | 453 | ## Derived 454 | This is based on [advantageous journald-cloudwatch-logs](https://github.com/advantageous/journald-cloudwatch-logs) 455 | which was 
forked from [saymedia journald-cloudwatch-logs](https://github.com/saymedia/journald-cloudwatch-logs). 456 | 457 | 458 | ## Status 459 | Done and released. 460 | 461 | 462 | 463 | ### Using as a lib. 464 | 465 | You can use this project as a lib and you can pass in your own *JournalRepeater* and your own *Journal*. 466 | 467 | 468 | #### Interface for JournalRepeater 469 | ```go 470 | package cloud_watch 471 | 472 | 473 | type Record struct {...} //see source code 474 | 475 | type JournalRepeater interface { 476 | // Close closes a journal opened with NewJournal. 477 | Close() error; 478 | WriteBatch(records []Record) error; 479 | } 480 | ``` 481 | 482 | #### Interface for Journal 483 | ```go 484 | type Journal interface { 485 | // Close closes a journal opened with NewJournal. 486 | Close() error; 487 | 488 | // Next advances the read pointer into the journal by one entry. 489 | Next() (uint64, error); 490 | 491 | // NextSkip advances the read pointer by multiple entries at once, 492 | // as specified by the skip parameter. 493 | NextSkip(skip uint64) (uint64, error); 494 | 495 | // Previous sets the read pointer into the journal back by one entry. 496 | Previous() (uint64, error); 497 | 498 | // PreviousSkip sets back the read pointer by multiple entries at once, 499 | // as specified by the skip parameter. 500 | PreviousSkip(skip uint64) (uint64, error); 501 | 502 | // GetDataValue gets the data object associated with a specific field from the 503 | // current journal entry, returning only the value of the object. 504 | GetDataValue(field string) (string, error); 505 | 506 | 507 | // GetRealtimeUsec gets the realtime (wallclock) timestamp of the current 508 | // journal entry. 509 | GetRealtimeUsec() (uint64, error); 510 | 511 | AddLogFilters(config *Config) 512 | 513 | // GetMonotonicUsec gets the monotonic timestamp of the current journal entry. 514 | GetMonotonicUsec() (uint64, error); 515 | 516 | // GetCursor gets the cursor of the current journal entry. 
517 | GetCursor() (string, error); 518 | 519 | 520 | // SeekHead seeks to the beginning of the journal, i.e. the oldest available 521 | // entry. 522 | SeekHead() error; 523 | 524 | // SeekTail may be used to seek to the end of the journal, i.e. the most recent 525 | // available entry. 526 | SeekTail() error; 527 | 528 | // SeekCursor seeks to a concrete journal cursor. 529 | SeekCursor(cursor string) error; 530 | 531 | // Wait will synchronously wait until the journal gets changed. The maximum time 532 | // this call sleeps may be controlled with the timeout parameter. If 533 | // sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will 534 | // wait indefinitely for a journal change. 535 | Wait(timeout time.Duration) int; 536 | } 537 | 538 | ``` 539 | 540 | #### Using as a lib 541 | ```go 542 | 543 | package main 544 | 545 | import ( 546 | jcw "github.com/advantageous/systemd-cloud-watch/cloud-watch" 547 | "flag" 548 | "os" 549 | ) 550 | 551 | var help = flag.Bool("help", false, "set to true to show this help") 552 | 553 | func main() { 554 | 555 | logger := jcw.NewSimpleLogger("main", nil) 556 | 557 | flag.Parse() 558 | 559 | if *help { 560 | usage(logger) 561 | os.Exit(0) 562 | } 563 | 564 | configFilename := flag.Arg(0) 565 | if configFilename == "" { 566 | usage(logger) 567 | panic("config file name must be set!") 568 | } 569 | 570 | config := jcw.CreateConfig(configFilename, logger) 571 | logger = jcw.NewSimpleLogger("main", config) 572 | journal := jcw.CreateJournal(config, logger) //Instead of this, load your own journal 573 | repeater := jcw.CreateRepeater(config, logger) //Instead of this, load your own repeater 574 | 575 | jcw.RunWorkers(journal, repeater, logger, config) 576 | } 577 | 578 | func usage(logger *jcw.Logger) { 579 | logger.Error.Println("Usage: systemd-cloud-watch <config-file>") 580 | flag.PrintDefaults() 581 | } 582 | 583 | ``` 584 | 585 | You could for example create a *JournalRepeater* that writes to *InfluxDB* instead of 
*CloudWatch*. 586 | 587 | 588 | 589 | 590 | Improvements: 591 | 592 | * Added unit tests (there were none). 593 | * Heavily reduced locking by using [qbit](https://github.com/advantageous/go-qbit) instead of original implementation. 594 | * Added cross compile so I can develop/test on my laptop (MacOS). 595 | * Made logging stateless. No more need for a state file. 596 | * No more getting out of sync with CloudWatch. 597 | * Detects being out of sync and recovers. 598 | * Fixed error with log messages being too big. 599 | * Added ability to include or omit logging fields. 600 | * Created docker image and scripts to test on Linux (CentOS7). 601 | * Created EC2 image and scripts to test on Linux running in AWS EC2 (CentOS7). 602 | * Code organization (we use a package). 603 | * Added comprehensive logging which includes debug logging by config. 604 | * Uses actual timestamp from journal log record instead of just current time 605 | * Auto-creates CloudWatch log group if it does not exist 606 | * Allow this to be used as a library by providing interface for Journal and JournalWriter. 607 | 608 | 609 | ## License 610 | 611 | The original work was from Say Media Inc. We had issues with it and did about a 90% rewrite. 612 | 613 | All additional work is covered under Apache 2.0 license. 
614 | Copyright (c) 2016 Geoff Chandler, Rick Hightower 615 | 616 | 617 | Copyright (c) 2015 Say Media Inc 618 | 619 | Permission is hereby granted, free of charge, to any person obtaining a copy 620 | of this software and associated documentation files (the "Software"), to deal 621 | in the Software without restriction, including without limitation the rights 622 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 623 | copies of the Software, and to permit persons to whom the Software is 624 | furnished to do so, subject to the following conditions: 625 | 626 | The above copyright notice and this permission notice shall be included in all 627 | copies or substantial portions of the Software. 628 | 629 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 630 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 631 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 632 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 633 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 634 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 635 | SOFTWARE. 
636 | 637 | -------------------------------------------------------------------------------- /bin/build_linux.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | 4 | rm systemd-cloud-watch_linux 5 | 6 | set -e 7 | 8 | cd /gopath/src/github.com/advantageous/systemd-cloud-watch/ 9 | source ~/.bash_profile 10 | export GOPATH=/gopath 11 | 12 | 13 | /usr/lib/systemd/systemd-journald & 14 | 15 | priorities=("emerg" "alert" "crit" "err" "warning" "notice" "info" "debug") 16 | 17 | for x in {1..100} 18 | do 19 | for priority in "${priorities[@]}" 20 | do 21 | echo "[$priority] TEST WITH LATEST LEVEL $x" | systemd-cat -p "$priority" 22 | done 23 | done 24 | 25 | 26 | echo "Running go clean" 27 | go clean 28 | echo "Running go get" 29 | go get 30 | echo "Running go build" 31 | go build 32 | echo "Running go test" 33 | go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 34 | echo "Renaming output to _linux" 35 | mv systemd-cloud-watch systemd-cloud-watch_linux 36 | 37 | pkill -9 systemd 38 | -------------------------------------------------------------------------------- /bin/createLogs.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | priorities=("emerg" "alert" "crit" "err" "warning" "notice" "info" "debug") 4 | 5 | for x in {1..100} 6 | do 7 | for priority in "${priorities[@]}" 8 | do 9 | echo "[$priority] TEST WITH LATEST LEVEL $x" | systemd-cat -p "$priority" 10 | done 11 | done 12 | -------------------------------------------------------------------------------- /bin/packer/config/user-data.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | sed -i -e '/Defaults requiretty/{ s/.*/# Defaults requiretty/ }' /etc/sudoers 4 | sed -i -e '/%wheel\tALL=(ALL)\tALL/{ s/.*/%wheel\tALL=(ALL)\tNOPASSWD:\tALL/ }' /etc/sudoers 5 | 
-------------------------------------------------------------------------------- /bin/packer/ec2/home/centos/build-linux.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | 4 | rm systemd-cloud-watch_linux 5 | 6 | set -e 7 | 8 | cd /gopath/src/github.com/advantageous/systemd-cloud-watch/ 9 | 10 | systemd-cat echo "RUNNING JAVA BATCH JOB - ADF BATCH from `pwd`" 11 | 12 | 13 | echo "Running go clean" 14 | go clean 15 | echo "Running go get" 16 | go get 17 | echo "Running go build" 18 | go build 19 | echo "Running go test" 20 | go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 21 | echo "Renaming output to _linux" 22 | mv systemd-cloud-watch systemd-cloud-watch_linux 23 | 24 | 25 | -------------------------------------------------------------------------------- /bin/packer/ec2/home/centos/etc/journald-cloudwatch.conf: -------------------------------------------------------------------------------- 1 | log_group="test-logstream" 2 | state_file="/var/lib/journald-cloudwatch-logs/state" 3 | log_priority=4 4 | buffer_size=100 5 | -------------------------------------------------------------------------------- /bin/packer/ec2/home/centos/etc/systemd/system/journald-cloudwatch.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=systemd-cloud-watch_linux 3 | Wants=basic.target 4 | After=basic.target network.target 5 | 6 | [Service] 7 | User=centos 8 | Group=centos 9 | ExecStart=/usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf 10 | KillMode=process 11 | Restart=on-failure 12 | RestartSec=42s 13 | 14 | 15 | [Install] 16 | WantedBy=multi-user.target 17 | 18 | 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /bin/packer/ec2_env.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | export ami=ami-4ede752e 3 | 
export subnet=subnet-4ad99b2e 4 | export security_group=sg-d5b222ac 5 | export iam_profile=RBSS-URG-DCOS 6 | export key_name=US-WEST-2-KEY-RBSS-001-D 7 | 8 | 9 | 10 | 11 | -------------------------------------------------------------------------------- /bin/packer/ec2_env.sh_example: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | export ami=ami-YOURAMI 3 | export subnet=subnet-YOURSUBNET 4 | export security_group=sg-YOURSG 5 | export iam_profile=YOUR_IAM_ROLE 6 | export key_name=MY_PEM_FILE_KEY_NAME 7 | 8 | 9 | 10 | -------------------------------------------------------------------------------- /bin/packer/getDevEc2Host.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | aws ec2 describe-instances --filters "Name=tag:Name,Values=i.int.dev.systemd.cloudwatch" | jq --raw-output .Reservations[].Instances[].PublicDnsName 4 | -------------------------------------------------------------------------------- /bin/packer/loginIntoEc2Dev.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | ssh centos@`./getDevEc2Host.sh` 3 | -------------------------------------------------------------------------------- /bin/packer/packer_docker.json: -------------------------------------------------------------------------------- 1 | { 2 | "builders": [ 3 | { 4 | "type": "docker", 5 | "image": "centos:centos7.2.1511", 6 | "commit": true 7 | } 8 | ], 9 | "provisioners": [ 10 | { 11 | "type": "shell", 12 | "scripts": [ 13 | "scripts/000-provision.sh" 14 | ] 15 | } 16 | ], 17 | "post-processors": [ 18 | [ 19 | { 20 | "type": "docker-tag", 21 | "repository": "advantageous/golang-cloud-watch", 22 | "tag": "0.1" 23 | }, 24 | { 25 | "type": "docker-tag", 26 | "repository": "advantageous/golang-cloud-watch", 27 | "tag": "latest" 28 | }, 29 | "docker-push" 30 | ] 31 | ] 32 | } 
-------------------------------------------------------------------------------- /bin/packer/packer_ec2.json: -------------------------------------------------------------------------------- 1 | { 2 | "variables": { 3 | "aws_access_key": "", 4 | "aws_secret_key": "", 5 | "aws_region": "us-west-2", 6 | "aws_ami_image": "ami-d2c924b2", 7 | "aws_instance_type": "m4.large", 8 | "image_version" : "0.6" 9 | }, 10 | "builders": [ 11 | { 12 | "type": "amazon-ebs", 13 | "access_key": "{{user `aws_access_key`}}", 14 | "secret_key": "{{user `aws_secret_key`}}", 15 | "region": "{{user `aws_region`}}", 16 | "source_ami": "{{user `aws_ami_image`}}", 17 | "instance_type": "{{user `aws_instance_type`}}", 18 | "ssh_username": "centos", 19 | "ami_name": "centos-7-systemd-cloud-watch-{{user `image_version`}}", 20 | "tags": { 21 | "Name": "centos-7-systemd-cloud-watch-{{user `image_version`}}", 22 | "OS_Version": "LinuxCentOs7", 23 | "Release": "7", 24 | "Description": "Base CentOs7 with prerequisites for go development for systemd cloud watch" 25 | }, 26 | "user_data_file": "config/user-data.sh" 27 | } 28 | ], 29 | "provisioners": [ 30 | { 31 | "type": "file", 32 | "source": "scripts/000-provision.sh", 33 | "destination": "/home/centos/000-provision.sh" 34 | }, 35 | { 36 | "type": "file", 37 | "source": "ec2/home/centos/", 38 | "destination": "/home/centos" 39 | }, 40 | { 41 | "type": "shell", 42 | "scripts": [ 43 | "scripts/ec2-provision.sh", 44 | "scripts/040-logagent.sh" 45 | ] 46 | }, 47 | { 48 | "type": "shell", 49 | "inline": [ 50 | "rm -fr /home/centos/etc" 51 | ] 52 | } 53 | ] 54 | } 55 | -------------------------------------------------------------------------------- /bin/packer/runDocker.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | docker pull advantageous/golang-cloud-watch:latest 3 | docker run -it advantageous/golang-cloud-watch 4 | 
-------------------------------------------------------------------------------- /bin/packer/runEc2Dev.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | source ec2_env.sh 4 | 5 | 6 | instance_id=$(aws ec2 run-instances --image-id "$ami" --subnet-id "$subnet" \ 7 | --instance-type m4.large --iam-instance-profile "Name=$iam_profile" \ 8 | --associate-public-ip-address --security-group-ids "$security_group" \ 9 | --key-name "$key_name" | jq --raw-output .Instances[].InstanceId) 10 | 11 | echo "${instance_id} is being created" 12 | 13 | aws ec2 wait instance-exists --instance-ids "$instance_id" 14 | 15 | aws ec2 create-tags --resources "${instance_id}" --tags Key=Name,Value="i.int.dev.systemd.cloudwatch" 16 | 17 | echo "${instance_id} was tagged waiting to login" 18 | 19 | aws ec2 wait instance-status-ok --instance-ids "$instance_id" 20 | 21 | ./loginIntoEc2Dev.sh 22 | 23 | 24 | 25 | -------------------------------------------------------------------------------- /bin/packer/scripts/000-provision.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | yum -y install wget 5 | yum install -y git 6 | yum install -y gcc 7 | yum install -y systemd-devel 8 | yum install -y nano 9 | 10 | echo "installing go" 11 | cd /tmp 12 | wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz 13 | tar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz 14 | rm go1.7.3.linux-amd64.tar.gz 15 | echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile 16 | 17 | 18 | wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64 19 | chmod +x jq-linux64 20 | mv jq-linux64 /usr/bin/jq 21 | 22 | -------------------------------------------------------------------------------- /bin/packer/scripts/040-logagent.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | echo Install log 
agent ------------------------------- 5 | mkdir /tmp/logagent 6 | cd /tmp/logagent 7 | curl -OL https://github.com/advantageous/systemd-cloud-watch/releases/download/v0.0.1-prerelease/systemd-cloud-watch_linux 8 | sudo mv systemd-cloud-watch_linux /usr/bin 9 | sudo chmod +x /usr/bin/systemd-cloud-watch_linux 10 | sudo mkdir -p /var/lib/journald-cloudwatch-logs/ 11 | sudo mv /home/centos/etc/journald-cloudwatch.conf /etc/ 12 | sudo mv /home/centos/etc/systemd/system/journald-cloudwatch.service /etc/systemd/system/journald-cloudwatch.service 13 | sudo chmod 664 /etc/systemd/system/journald-cloudwatch.service 14 | sudo chown -R centos /var/lib/journald-cloudwatch-logs/ 15 | sudo systemctl enable journald-cloudwatch.service 16 | sudo rm -rf /tmp/logagent 17 | echo DONE installing log agent ------------------------------- 18 | -------------------------------------------------------------------------------- /bin/packer/scripts/ec2-provision.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | sudo chmod +x /home/centos/000-provision.sh 4 | sudo /home/centos/000-provision.sh 5 | 6 | echo 'export PATH=$PATH:/usr/local/go/bin' >> /home/centos/.bash_profile 7 | echo 'export GOPATH=/gopath' >> /home/centos/.bash_profile 8 | chown centos /home/centos/.bash_profile 9 | 10 | sudo mkdir -p /gopath/src/github.com/advantageous/ 11 | sudo chown centos /gopath/src/github.com/advantageous/ 12 | git clone https://github.com/advantageous/systemd-cloud-watch.git /gopath/src/github.com/advantageous/systemd-cloud-watch 13 | 14 | 15 | 16 | sudo chown -R centos /gopath 17 | 18 | -------------------------------------------------------------------------------- /bin/run_build_linux.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | docker pull advantageous/golang-cloud-watch:latest 3 | docker run -it --name build -v `pwd`:/gopath/src/github.com/advantageous/systemd-cloud-watch 
\ 4 | advantageous/golang-cloud-watch \ 5 | /bin/sh -c "/gopath/src/github.com/advantageous/systemd-cloud-watch/build_linux.sh" 6 | docker rm build 7 | -------------------------------------------------------------------------------- /bin/run_test_container.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | docker pull advantageous/golang-cloud-watch:latest 3 | docker run -it --name runner2 \ 4 | -p 80:80 \ 5 | -v `pwd`:/gopath/src/github.com/advantageous/systemd-cloud-watch \ 6 | advantageous/golang-cloud-watch:latest 7 | docker rm runner2 8 | -------------------------------------------------------------------------------- /cloud-watch/Journal.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import "time" 4 | 5 | type JournalRepeater interface { 6 | // Close closes a journal opened with NewJournal. 7 | Close() error 8 | WriteBatch(records []*Record) error 9 | } 10 | 11 | type Journal interface { 12 | // Close closes a journal opened with NewJournal. 13 | Close() error 14 | 15 | // Next advances the read pointer into the journal by one entry. 16 | Next() (uint64, error) 17 | 18 | // NextSkip advances the read pointer by multiple entries at once, 19 | // as specified by the skip parameter. 20 | NextSkip(skip uint64) (uint64, error) 21 | 22 | // Previous sets the read pointer into the journal back by one entry. 23 | Previous() (uint64, error) 24 | 25 | // PreviousSkip sets back the read pointer by multiple entries at once, 26 | // as specified by the skip parameter. 27 | PreviousSkip(skip uint64) (uint64, error) 28 | 29 | // GetDataValue gets the data object associated with a specific field from the 30 | // current journal entry, returning only the value of the object. 31 | GetDataValue(field string) (string, error) 32 | 33 | // GetRealtimeUsec gets the realtime (wallclock) timestamp of the current 34 | // journal entry. 
35 | GetRealtimeUsec() (uint64, error) 36 | 37 | AddLogFilters(config *Config) 38 | 39 | // GetMonotonicUsec gets the monotonic timestamp of the current journal entry. 40 | GetMonotonicUsec() (uint64, error) 41 | 42 | // GetCursor gets the cursor of the current journal entry. 43 | GetCursor() (string, error) 44 | 45 | // SeekHead seeks to the beginning of the journal, i.e. the oldest available 46 | // entry. 47 | SeekHead() error 48 | 49 | // SeekTail may be used to seek to the end of the journal, i.e. the most recent 50 | // available entry. 51 | SeekTail() error 52 | 53 | // SeekCursor seeks to a concrete journal cursor. 54 | SeekCursor(cursor string) error 55 | 56 | // Wait will synchronously wait until the journal gets changed. The maximum time 57 | // this call sleeps may be controlled with the timeout parameter. If 58 | // sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will 59 | // wait indefinitely for a journal change. 60 | Wait(timeout time.Duration) int 61 | } 62 | -------------------------------------------------------------------------------- /cloud-watch/aws.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "github.com/aws/aws-sdk-go/aws" 5 | awsCredentials "github.com/aws/aws-sdk-go/aws/credentials" 6 | "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" 7 | "github.com/aws/aws-sdk-go/aws/ec2metadata" 8 | awsSession "github.com/aws/aws-sdk-go/aws/session" 9 | "github.com/aws/aws-sdk-go/service/ec2" 10 | "os" 11 | "strings" 12 | lg "github.com/advantageous/go-logback/logging" 13 | ) 14 | 15 | var awsLogger = lg.NewSimpleLogger("aws") 16 | 17 | func NewAWSSession(cfg *Config) *awsSession.Session { 18 | 19 | metaDataClient, session := getClient(cfg) 20 | credentials := getCredentials(metaDataClient) 21 | 22 | if credentials != nil { 23 | awsConfig := &aws.Config{ 24 | Credentials: getCredentials(metaDataClient), 25 | Region: 
aws.String(getRegion(metaDataClient, cfg, session)), 26 | MaxRetries: aws.Int(3), 27 | } 28 | return awsSession.New(awsConfig) 29 | } else { 30 | return awsSession.New(&aws.Config{ 31 | Region: aws.String(getRegion(metaDataClient, cfg, session)), 32 | MaxRetries: aws.Int(3), 33 | }) 34 | } 35 | 36 | } 37 | 38 | func getClient(config *Config) (*ec2metadata.EC2Metadata, *awsSession.Session) { 39 | if !config.Local { 40 | awsLogger.Debug("Config NOT set to local using meta-data client to find local") 41 | var session = awsSession.New(&aws.Config{}) 42 | return ec2metadata.New(session), session 43 | } else { 44 | awsLogger.Info("Config set to local") 45 | return nil, nil 46 | } 47 | } 48 | 49 | func getRegion(client *ec2metadata.EC2Metadata, config *Config, session *awsSession.Session) string { 50 | 51 | if client == nil { 52 | awsLogger.Info("Client missing using config to set region") 53 | if config.AWSRegion == "" { 54 | awsLogger.Info("AWSRegion missing using default region us-west-2") 55 | return "us-west-2" 56 | } else { 57 | return config.AWSRegion 58 | } 59 | } else { 60 | region, err := client.Region() 61 | if err != nil { 62 | awsLogger.Errorf("Unable to get region from aws meta client : %s %v", err.Error(), err) 63 | os.Exit(3) 64 | } 65 | 66 | config.AWSRegion = region 67 | config.EC2InstanceId, err = client.GetMetadata("instance-id") 68 | if err != nil { 69 | awsLogger.Errorf("Unable to get instance id from aws meta client : %s %v", err.Error(), err) 70 | os.Exit(4) 71 | } 72 | 73 | if config.LogStreamName == "" { 74 | var az, name, ip string 75 | az = findAZ(client) 76 | ip = findLocalIp(client) 77 | 78 | name = findInstanceName(config.EC2InstanceId, config.AWSRegion, session) 79 | config.LogStreamName = name + "-" + strings.Replace(ip, ".", "-", -1) + "-" + az 80 | awsLogger.Infof("LogStreamName was not set so using %s \n", config.LogStreamName) 81 | } 82 | 83 | return region 84 | } 85 | 86 | } 87 | func findLocalIp(metaClient *ec2metadata.EC2Metadata) 
string { 88 | ip, err := metaClient.GetMetadata("local-ipv4") 89 | 90 | if err != nil { 91 | awsLogger.Errorf("Unable to get private ip address from aws meta client : %s %v", err.Error(), err) 92 | os.Exit(6) 93 | } 94 | 95 | return ip 96 | 97 | } 98 | 99 | func getCredentials(client *ec2metadata.EC2Metadata) *awsCredentials.Credentials { 100 | 101 | if client == nil { 102 | awsLogger.Infof("Client missing credentials not looked up") 103 | return nil 104 | } else { 105 | return awsCredentials.NewChainCredentials([]awsCredentials.Provider{ 106 | &awsCredentials.EnvProvider{}, 107 | &ec2rolecreds.EC2RoleProvider{ 108 | Client: client, 109 | }, 110 | }) 111 | } 112 | 113 | } 114 | 115 | func findAZ(metaClient *ec2metadata.EC2Metadata) string { 116 | 117 | az, err := metaClient.GetMetadata("placement/availability-zone") 118 | 119 | if err != nil { 120 | awsLogger.Errorf("Unable to get az from aws meta client : %s %v", err.Error(), err) 121 | os.Exit(5) 122 | } 123 | 124 | return az 125 | } 126 | 127 | func findInstanceName(instanceId string, region string, session *awsSession.Session) string { 128 | 129 | var name = "NO_NAME" 130 | var err error 131 | 132 | ec2Service := ec2.New(session, aws.NewConfig().WithRegion(region)) 133 | 134 | params := &ec2.DescribeInstancesInput{ 135 | InstanceIds: []*string{ 136 | aws.String(instanceId), // Required 137 | // More values... 
138 | }, 139 | } 140 | 141 | resp, err := ec2Service.DescribeInstances(params) 142 | 143 | if err != nil { 144 | awsLogger.Errorf("Unable to get instance name tag, DescribeInstances failed : %s %v", err.Error(), err) 145 | return name 146 | } 147 | 148 | if len(resp.Reservations) > 0 && len(resp.Reservations[0].Instances) > 0 { 149 | var instance = resp.Reservations[0].Instances[0] 150 | if len(instance.Tags) > 0 { 151 | 152 | for _, tag := range instance.Tags { 153 | if *tag.Key == "Name" { 154 | return *tag.Value 155 | } 156 | } 157 | } 158 | awsLogger.Errorf("Unable to find name tag") 159 | return name 160 | 161 | } else { 162 | awsLogger.Errorf("Unable to find name tag") 163 | return name 164 | } 165 | } 166 | -------------------------------------------------------------------------------- /cloud-watch/cloudwatch_journal_repeater.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "encoding/json" 5 | "errors" 6 | "fmt" 7 | "github.com/aws/aws-sdk-go/aws" 8 | "github.com/aws/aws-sdk-go/aws/awserr" 9 | awsSession "github.com/aws/aws-sdk-go/aws/session" 10 | "github.com/aws/aws-sdk-go/service/cloudwatchlogs" 11 | lg "github.com/advantageous/go-logback/logging" 12 | ) 13 | 14 | var messageId = int64(0) 15 | 16 | type CloudWatchJournalRepeater struct { 17 | conn *cloudwatchlogs.CloudWatchLogs 18 | logGroupName string 19 | logStreamName string 20 | nextSequenceToken string 21 | logger lg.Logger 22 | config *Config 23 | } 24 | 25 | func NewCloudWatchJournalRepeater(sess *awsSession.Session, logger lg.Logger, config *Config) (*CloudWatchJournalRepeater, error) { 26 | conn := cloudwatchlogs.New(sess) 27 | if logger == nil { 28 | if !config.Debug { 29 | logger = lg.GetSimpleLogger("CLOUD_WATCH_REPEATER_DEBUG", "repeater") 30 | } else { 31 | logger = lg.NewSimpleDebugLogger("repeater") 32 | } 33 | } 34 | 35 | return &CloudWatchJournalRepeater{ 36 | conn: conn, 37 | logGroupName: 
config.LogGroupName, 38 | logStreamName: config.LogStreamName, 39 | nextSequenceToken: "", 40 | logger: logger, 41 | config: config, 42 | }, nil 43 | } 44 | 45 | func (repeater *CloudWatchJournalRepeater) Close() error { 46 | return nil 47 | } 48 | 49 | func (repeater *CloudWatchJournalRepeater) WriteBatch(records []*Record) error { 50 | 51 | debug := repeater.config.Debug 52 | logger := repeater.logger 53 | 54 | events := make([]*cloudwatchlogs.InputLogEvent, 0, len(records)) 55 | for _, record := range records { 56 | 57 | messageId++ 58 | record.SeqId = messageId 59 | 60 | jsonDataBytes, err := json.MarshalIndent(*record, "", " ") 61 | if err != nil { 62 | return err 63 | } 64 | jsonData := string(jsonDataBytes) 65 | 66 | events = append(events, &cloudwatchlogs.InputLogEvent{ 67 | Message: aws.String(jsonData), 68 | Timestamp: aws.Int64(int64(record.TimeUsec)), 69 | }) 70 | } 71 | 72 | putEvents := func() error { 73 | request := &cloudwatchlogs.PutLogEventsInput{ 74 | LogEvents: events, 75 | LogGroupName: &repeater.logGroupName, 76 | LogStreamName: &repeater.logStreamName, 77 | } 78 | if repeater.nextSequenceToken != "" { 79 | request.SequenceToken = aws.String(repeater.nextSequenceToken) 80 | } 81 | result, err := repeater.conn.PutLogEvents(request) 82 | if err != nil { 83 | return err 84 | } 85 | repeater.nextSequenceToken = *result.NextSequenceToken 86 | 87 | return nil 88 | } 89 | 90 | getNextToken := func() error { 91 | limit := int64(1) 92 | describeRequest := &cloudwatchlogs.DescribeLogStreamsInput{ 93 | LogGroupName: &repeater.logGroupName, 94 | LogStreamNamePrefix: &repeater.logStreamName, 95 | Limit: &limit, 96 | } 97 | describeOutput, err := repeater.conn.DescribeLogStreams(describeRequest) 98 | 99 | if err != nil { 100 | return err 101 | } 102 | 103 | if len(describeOutput.LogStreams) > 0 { 104 | repeater.nextSequenceToken = 105 | *describeOutput.LogStreams[0].UploadSequenceToken 106 | 107 | if debug { 108 | logger.Debug("Next Token ", 
repeater.nextSequenceToken) 109 | } 110 | 111 | err = putEvents() 112 | if err != nil { 113 | return fmt.Errorf("failed to put events after sequence lookup: : %s %v", err.Error(), err) 114 | } 115 | return nil 116 | } 117 | 118 | return errors.New("failed to put events after looking for next sequence") 119 | } 120 | 121 | createStream := func() error { 122 | 123 | if debug { 124 | logger.Debug("Creating log stream ", repeater.logStreamName) 125 | } 126 | 127 | request := &cloudwatchlogs.CreateLogStreamInput{ 128 | LogGroupName: &repeater.logGroupName, 129 | LogStreamName: &repeater.logStreamName, 130 | } 131 | _, err := repeater.conn.CreateLogStream(request) 132 | return err 133 | } 134 | 135 | createLogGroup := func() error { 136 | 137 | if debug { 138 | logger.Debug("Creating log group ", repeater.logGroupName) 139 | } 140 | 141 | request := &cloudwatchlogs.CreateLogGroupInput{ 142 | LogGroupName: &repeater.logGroupName, 143 | } 144 | _, err := repeater.conn.CreateLogGroup(request) 145 | return err 146 | } 147 | 148 | recoverResourceNotFound := func(awsErr awserr.Error) error { 149 | // Maybe our log stream doesn't exist yet. We'll try 150 | // to create it and then, if we're successful, try 151 | // writing the events again. 152 | err := createStream() 153 | if err != nil { 154 | awsErr, _ = err.(awserr.Error) 155 | //If you did not create the stream, then maybe you need to create the log group. 
156 | if awsErr.Code() == "ResourceNotFoundException" { 157 | err = createLogGroup() 158 | if err != nil { 159 | return fmt.Errorf("failed to create log group: %s %v", err.Error(), err) 160 | } 161 | err = createStream() 162 | if err != nil { 163 | return fmt.Errorf("failed to create stream after log group: %s %v", err.Error(), err) 164 | } 165 | 166 | } else { 167 | return fmt.Errorf("failed to create stream: %s %v", err.Error(), err) 168 | } 169 | } 170 | 171 | err = putEvents() 172 | if err != nil { 173 | return fmt.Errorf("failed to put events: %s %v", err.Error(), err) 174 | } 175 | return nil 176 | 177 | } 178 | 179 | if repeater.nextSequenceToken == "" { 180 | getNextToken() 181 | } 182 | 183 | var originalErr error 184 | err := putEvents() 185 | if err != nil { 186 | originalErr = err 187 | if awsErr, ok := err.(awserr.Error); ok { 188 | if awsErr.Code() == "ResourceNotFoundException" { 189 | err = recoverResourceNotFound(awsErr) 190 | if err != nil { 191 | return err 192 | } 193 | } else if awsErr.Code() == "DataAlreadyAcceptedException" { 194 | // This batch was already sent? 
195 | repeater.logger.Errorf("DataAlreadyAcceptedException from putEvents : %s %v", err.Error(), err) 196 | err = getNextToken() 197 | if err != nil { 198 | return fmt.Errorf("Next token failed after DataAlreadyAcceptedException : %s %v", err.Error(), err) 199 | } 200 | } else if awsErr.Code() == "InvalidSequenceTokenException" { 201 | repeater.logger.Errorf("InvalidSequenceTokenException from putEvents : %s %v", err.Error(), err) 202 | err = getNextToken() 203 | if err != nil { 204 | return fmt.Errorf("Next token failed after InvalidSequenceTokenException : %s %v", err.Error(), err) 205 | } 206 | } else { 207 | repeater.logger.Errorf("Error from putEvents : %s %v", originalErr.Error(), originalErr) 208 | return fmt.Errorf("failed to put events: : %s %v", originalErr.Error(), originalErr) 209 | } 210 | } 211 | 212 | } else { 213 | if repeater.config.Debug { 214 | repeater.logger.Debug("SENT SUCCESSFULLY") 215 | } 216 | } 217 | 218 | return nil 219 | } 220 | -------------------------------------------------------------------------------- /cloud-watch/cloudwatch_journal_repeater_test.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "strings" 5 | "testing" 6 | "time" 7 | ) 8 | 9 | func TestRepeater(t *testing.T) { 10 | 11 | config_data := ` 12 | log_priority=3 13 | debug=true 14 | local=true 15 | log_stream="test-stream" 16 | log_group="test-group" 17 | ` 18 | 19 | config, _ := LoadConfigFromString(config_data, nil) 20 | session := NewAWSSession(config) 21 | 22 | repeater, err := NewCloudWatchJournalRepeater(session, nil, config) 23 | 24 | if err != nil { 25 | t.Errorf("Unable to created new repeater %s", err) 26 | t.Fail() 27 | } 28 | 29 | if repeater == nil { 30 | t.Error("Repeater nil") 31 | t.Fail() 32 | } 33 | 34 | records := []*Record{ 35 | {Message: "Hello mom", TimeUsec: time.Now().Unix() * 1000}, 36 | {Message: "Hello dad", TimeUsec: time.Now().Unix() * 1000}, 37 | } 38 | err = 
repeater.WriteBatch(records) 39 | 40 | if err != nil { 41 | if strings.Contains(err.Error(), "NoCredentialProviders") { 42 | t.Skip("Skipping WriteBatch, you need to setup AWS credentials for this to work") 43 | } else { 44 | t.Errorf("Unable to write batch %s", err) 45 | t.Fail() 46 | } 47 | } 48 | 49 | } 50 | -------------------------------------------------------------------------------- /cloud-watch/config.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "github.com/hashicorp/hcl" 5 | "io/ioutil" 6 | lg "github.com/advantageous/go-logback/logging" 7 | ) 8 | 9 | type Config struct { 10 | AWSRegion string `hcl:"aws_region"` 11 | EC2InstanceId string `hcl:"ec2_instance_id"` 12 | LogGroupName string `hcl:"log_group"` 13 | LogStreamName string `hcl:"log_stream"` 14 | LogPriority string `hcl:"log_priority"` 15 | JournalDir string `hcl:"journal_dir"` 16 | QueueChannelSize int `hcl:"queue_channel_size"` 17 | QueuePollDurationMS uint64 `hcl:"queue_poll_duration_ms"` 18 | FlushLogEntries uint64 `hcl:"queue_flush_log_ms"` 19 | QueueBatchSize int `hcl:"queue_batch_size"` 20 | CloudWatchBufferSize int `hcl:"buffer_size"` 21 | Debug bool `hcl:"debug"` 22 | Tail bool `hcl:"tail"` 23 | Rewind int `hcl:"rewind"` 24 | Local bool `hcl:"local"` 25 | AllowedFields []string `hcl:"fields"` 26 | OmitFields []string `hcl:"omit_fields"` 27 | logPriority int 28 | fields map[string]struct{} 29 | omitFields map[string]struct{} 30 | FieldLength int `hcl:"field_length"` 31 | MockCloudWatch bool `hcl:"mock-cloud-watch"` 32 | } 33 | 34 | func (config *Config) GetJournalDLogPriority() Priority { 35 | 36 | logLevels := map[Priority][]string{ 37 | EMERGENCY: {"0", "emerg"}, 38 | ALERT: {"1", "alert"}, 39 | CRITICAL: {"2", "crit"}, 40 | ERROR: {"3", "err"}, 41 | WARNING: {"4", "warning"}, 42 | NOTICE: {"5", "notice"}, 43 | INFO: {"6", "info"}, 44 | DEBUG: {"7", "debug"}, 45 | } 46 | 47 | for i, s := range logLevels { 
48 | if s[0] == config.LogPriority || s[1] == config.LogPriority { 49 | return i 50 | } 51 | } 52 | 53 | return DEBUG 54 | } 55 | 56 | func (config *Config) AllowField(fieldName string) bool { 57 | 58 | if len(config.AllowedFields) == 0 && len(config.OmitFields) == 0 { 59 | return true 60 | } else if len(config.AllowedFields) > 0 && len(config.OmitFields) == 0 { 61 | _, hasField := config.fields[fieldName] 62 | return hasField 63 | } else if len(config.AllowedFields) == 0 && len(config.OmitFields) > 0 { 64 | _, omitField := config.omitFields[fieldName] 65 | return !omitField 66 | } else { 67 | logger := lg.NewSimpleLogger("SYSTEMD_CONFIG_DEBUG") 68 | logger.Warn("Only fields or omit_fields should be set") 69 | _, omitField := config.omitFields[fieldName] 70 | if omitField { 71 | return !omitField 72 | } else { 73 | _, hasField := config.fields[fieldName] 74 | return hasField 75 | 76 | } 77 | } 78 | } 79 | 80 | func arrayToMap(array []string) map[string]struct{} { 81 | theMap := make(map[string]struct{}) 82 | if array != nil && len(array) > 0 { 83 | for _, element := range array { 84 | theMap[element] = struct{}{} 85 | } 86 | } 87 | return theMap 88 | } 89 | 90 | func LoadConfigFromString(data string, logger lg.Logger) (*Config, error) { 91 | 92 | if logger == nil { 93 | logger = lg.NewSimpleLogger("SYSTEMD_CONFIG_DEBUG") 94 | } 95 | config := &Config{} 96 | 97 | logger.Debug("Loading log...") 98 | err := hcl.Decode(&config, data) 99 | if err != nil { 100 | return nil, err 101 | } 102 | config.fields = arrayToMap(config.AllowedFields) 103 | config.omitFields = arrayToMap(config.OmitFields) 104 | 105 | if config.CloudWatchBufferSize == 0 { 106 | logger.Debug("Loading log... cloud watch BufferSize not set, setting to 50") 107 | config.CloudWatchBufferSize = 50 108 | } 109 | 110 | if config.QueueChannelSize == 0 { 111 | logger.Debug("Loading log... 
Queue Channel Size not set, setting to 3") 112 | config.QueueChannelSize = 3 113 | } 114 | 115 | if config.QueueBatchSize == 0 { 116 | logger.Debug("Loading log... Queue Batch Size not set, setting to 10000") 117 | config.QueueBatchSize = 10000 118 | } 119 | 120 | if config.FlushLogEntries == 0 { 121 | logger.Debug("Loading log... Flush JournalD log entries not set, setting to 100 ms") 122 | config.FlushLogEntries = 100 123 | } 124 | 125 | if config.QueuePollDurationMS == 0 { 126 | logger.Debug("Loading log... Queue Poll Duration MS not set, setting to 10 ms") 127 | config.QueuePollDurationMS = 10 128 | } 129 | 130 | if config.FieldLength == 0 { 131 | logger.Debug("Loading log... FieldLength not set, setting to 255") 132 | config.FieldLength = 255 133 | } 134 | 135 | if config.LogPriority == "" { 136 | logger.Debug("Loading log... LogPriority not set, setting to debug") 137 | config.LogPriority = "debug" 138 | } 139 | 140 | if config.Tail { 141 | if config.Rewind == 0 { 142 | logger.Debug("Loading log... 
Rewind not set, but Tail is so setting to 10") 143 | config.Rewind = 10 144 | } 145 | } 146 | 147 | return config, nil 148 | 149 | } 150 | func LoadConfig(filename string, logger lg.Logger) (*Config, error) { 151 | logger.Printf("Loading config %s", filename) 152 | 153 | configBytes, err := ioutil.ReadFile(filename) 154 | if err != nil { 155 | return nil, err 156 | } 157 | return LoadConfigFromString(string(configBytes), logger) 158 | } 159 | -------------------------------------------------------------------------------- /cloud-watch/config_test.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "testing" 5 | lg "github.com/advantageous/go-logback/logging" 6 | ) 7 | 8 | func TestConfig(t *testing.T) { 9 | 10 | logger := lg.NewSimpleLogger("test") 11 | 12 | data := ` 13 | log_group="dcos-logstream-test" 14 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 15 | log_priority=3 16 | debug=true 17 | fields=["Foo", "Bar"] 18 | ` 19 | config, err := LoadConfigFromString(data, logger) 20 | 21 | if err != nil { 22 | t.Logf("Unable to parse config %s", err) 23 | t.Fail() 24 | } 25 | 26 | if config == nil { 27 | t.Log("Config is nil") 28 | t.Fail() 29 | } 30 | 31 | if len(config.AllowedFields) != 2 { 32 | t.Log("Fields not read") 33 | t.Fail() 34 | } 35 | 36 | logger.Info(config.AllowedFields) 37 | 38 | if config.AllowedFields[0] != "Foo" { 39 | t.Log("Field Value Foo not present") 40 | t.Fail() 41 | } 42 | 43 | if !config.AllowField("Foo") { 44 | t.Log("Field Value Foo should be allowed") 45 | t.Fail() 46 | } 47 | 48 | } 49 | 50 | func TestLogOmitField(t *testing.T) { 51 | 52 | logger := lg.NewSimpleLogger("test") 53 | 54 | data := `omit_fields=["Foo", "Bar"]` 55 | config, _ := LoadConfigFromString(data, logger) 56 | 57 | if config.AllowField("Foo") { 58 | t.Log("Field Value Foo should NOT allowed") 59 | t.Fail() 60 | } 61 | 62 | } 63 | 
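The tests above exercise only a few of the options that `LoadConfigFromString` understands. As a reference, here is a fuller sample configuration; the option names follow the `hcl` tags declared in `config.go`, while the specific values are illustrative only, not recommended defaults:

```hcl
# Illustrative journald-cloudwatch.conf (values are examples only)
log_group="my-log-group"
log_stream="my-instance-stream"
log_priority="err"       # numeric ("3") or named ("err") forms are both matched
buffer_size=50           # CloudWatch batch buffer size (defaults to 50 when unset)
field_length=255         # cap each field value at 255 characters
debug=false

# Set fields OR omit_fields, not both -- AllowField warns if both are present.
fields=["MESSAGE", "PRIORITY", "_SYSTEMD_UNIT"]
# omit_fields=["_CAP_EFFECTIVE"]
```

Note that when both `fields` and `omit_fields` are set, `AllowField` logs a warning and applies `omit_fields` first, then falls back to the allow list.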
-------------------------------------------------------------------------------- /cloud-watch/creators.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import lg "github.com/advantageous/go-logback/logging" 4 | 5 | func CreateConfig(configFilename string, logger lg.Logger) *Config { 6 | 7 | config, err := LoadConfig(configFilename, logger) 8 | if err != nil { 9 | logger.Error("Unable to load config", err, configFilename) 10 | panic("Unable to create config") 11 | } 12 | return config 13 | } 14 | 15 | func CreateJournal(config *Config, logger lg.Logger) Journal { 16 | 17 | journal, err := NewJournal(config) 18 | if err != nil { 19 | logger.Error("Unable to load journal", err) 20 | panic("Unable to create journal") 21 | } 22 | journal.AddLogFilters(config) 23 | return journal 24 | 25 | } 26 | 27 | func CreateRepeater(config *Config, logger lg.Logger) JournalRepeater { 28 | 29 | var repeater JournalRepeater 30 | var err error 31 | 32 | if !config.MockCloudWatch { 33 | logger.Info("Creating repeater that is connecting to AWS cloud watch") 34 | session := NewAWSSession(config) 35 | repeater, err = NewCloudWatchJournalRepeater(session, nil, config) 36 | 37 | } else { 38 | logger.Warn("Creating MOCK repeater") 39 | repeater = NewMockJournalRepeater() 40 | } 41 | 42 | if err != nil { 43 | panic("Unable to create repeater " + err.Error()) 44 | } 45 | return repeater 46 | 47 | } 48 | -------------------------------------------------------------------------------- /cloud-watch/journal_darwin.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | var mockMap = map[string]string{ 4 | // "__CURSOR": "s=6c072e0567ff423fa9cb39f136066299;i=3;b=923def0648b1422aa28a8846072481f2;m=65ee792c;t=542783a1cc4e0;x=7d96bf9e60a6512b", 5 | "__REALTIME_TIMESTAMP": "1480459022025952", 6 | "__MONOTONIC_TIMESTAMP": "1710127404", 7 | // "_BOOT_ID": 
"923def0648b1422aa28a8846072481f2", 8 | // "PRIORITY": "3", 9 | // "_TRANSPORT": "driver", 10 | // "_PID": "712", 11 | // "_UID": "0", 12 | // "_GID": "0", 13 | // "_COMM": "systemd-journal", 14 | // "_EXE": "/usr/lib/systemd/systemd-journald", 15 | // "_CMDLINE": "/usr/lib/systemd/systemd-journald", 16 | // "_CAP_EFFECTIVE": "a80425fb", 17 | // "_SYSTEMD_CGROUP": "c", 18 | // "_MACHINE_ID": "5125015c46bb4bf6a686b5e692492075", 19 | // "_HOSTNAME": "f5076731cfdb", 20 | "MESSAGE": "Journal started", 21 | // "MESSAGE_ID": "f77379a8490b408bbe5f6940505a777b", 22 | } 23 | 24 | func NewJournal(config *Config) (Journal, error) { 25 | return NewJournalWithMap(mockMap), nil 26 | } 27 | -------------------------------------------------------------------------------- /cloud-watch/journal_linux.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "github.com/coreos/go-systemd/sdjournal" 5 | "strconv" 6 | "time" 7 | ) 8 | 9 | type SdJournal struct { 10 | journal *sdjournal.Journal 11 | logger *Logger 12 | debug bool 13 | } 14 | 15 | func NewJournal(config *Config) (Journal, error) { 16 | 17 | logger := NewSimpleLogger("journal", config) 18 | 19 | var debug bool 20 | 21 | if config == nil { 22 | debug = true 23 | } else { 24 | debug = config.Debug 25 | } 26 | 27 | if config == nil || config.JournalDir == "" { 28 | journal, err := sdjournal.NewJournal() 29 | return &SdJournal{ 30 | journal, logger, debug, 31 | }, err 32 | } else { 33 | logger.Info.Printf("using journal dir: %s", config.JournalDir) 34 | journal, err := sdjournal.NewJournalFromDir(config.JournalDir) 35 | 36 | return &SdJournal{ 37 | journal, logger, debug, 38 | }, err 39 | } 40 | 41 | } 42 | 43 | func (journal *SdJournal) AddLogFilters(config *Config) { 44 | 45 | // Add Priority Filters 46 | if config.GetJournalDLogPriority() < DEBUG { 47 | for p, _ := range PriorityJsonMap { 48 | if p <= config.GetJournalDLogPriority() { 49 | 
journal.journal.AddMatch("PRIORITY=" + strconv.Itoa(int(p))) 50 | } 51 | } 52 | journal.journal.AddDisjunction() 53 | } 54 | } 55 | 56 | func (journal *SdJournal) Close() error { 57 | return journal.journal.Close() 58 | } 59 | 60 | // Next advances the read pointer into the journal by one entry. 61 | func (journal *SdJournal) Next() (uint64, error) { 62 | loc, err := journal.journal.Next() 63 | if journal.debug { 64 | journal.logger.Info.Printf("NEXT location %d %v", loc, err) 65 | } 66 | 67 | return loc, err 68 | } 69 | 70 | // NextSkip advances the read pointer by multiple entries at once, 71 | // as specified by the skip parameter. 72 | func (journal *SdJournal) NextSkip(skip uint64) (uint64, error) { 73 | return journal.journal.NextSkip(skip) 74 | } 75 | 76 | // Previous sets the read pointer into the journal back by one entry. 77 | func (journal *SdJournal) Previous() (uint64, error) { 78 | return journal.journal.Previous() 79 | } 80 | 81 | // PreviousSkip sets back the read pointer by multiple entries at once, 82 | // as specified by the skip parameter. 83 | func (journal *SdJournal) PreviousSkip(skip uint64) (uint64, error) { 84 | return journal.journal.PreviousSkip(skip) 85 | } 86 | 87 | // GetDataValue gets the data object associated with a specific field from the 88 | // current journal entry, returning only the value of the object. 89 | func (journal *SdJournal) GetDataValue(field string) (string, error) { 90 | return journal.journal.GetDataValue(field) 91 | } 92 | 93 | // GetRealtimeUsec gets the realtime (wallclock) timestamp of the current 94 | // journal entry. 95 | func (journal *SdJournal) GetRealtimeUsec() (uint64, error) { 96 | return journal.journal.GetRealtimeUsec() 97 | } 98 | 99 | // GetMonotonicUsec gets the monotonic timestamp of the current journal entry. 
100 | func (journal *SdJournal) GetMonotonicUsec() (uint64, error) { 101 | return journal.journal.GetMonotonicUsec() 102 | } 103 | 104 | // GetCursor gets the cursor of the current journal entry. 105 | func (journal *SdJournal) GetCursor() (string, error) { 106 | return journal.journal.GetCursor() 107 | } 108 | 109 | // SeekHead seeks to the beginning of the journal, i.e. the oldest available 110 | // entry. 111 | func (journal *SdJournal) SeekHead() error { 112 | return journal.journal.SeekHead() 113 | } 114 | 115 | // SeekTail may be used to seek to the end of the journal, i.e. the most recent 116 | // available entry. 117 | func (journal *SdJournal) SeekTail() error { 118 | return journal.journal.SeekTail() 119 | } 120 | 121 | // SeekCursor seeks to a concrete journal cursor. 122 | func (journal *SdJournal) SeekCursor(cursor string) error { 123 | return journal.journal.SeekCursor(cursor) 124 | } 125 | 126 | // Wait will synchronously wait until the journal gets changed. The maximum time 127 | // this call sleeps may be controlled with the timeout parameter. If 128 | // sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will 129 | // wait indefinitely for a journal change. 
130 | func (journal *SdJournal) Wait(timeout time.Duration) int { 131 | return journal.journal.Wait(timeout) 132 | } 133 | -------------------------------------------------------------------------------- /cloud-watch/journal_linux_test.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import "testing" 4 | 5 | func TestNewJournal(t *testing.T) { 6 | 7 | j, e := NewJournal(nil) 8 | 9 | if e != nil { 10 | t.Fail() 11 | } 12 | 13 | if j == nil { 14 | t.Fail() 15 | } 16 | 17 | e = j.Close() 18 | 19 | if e != nil { 20 | t.Fail() 21 | } 22 | 23 | } 24 | 25 | func TestSdJournal_Operations(t *testing.T) { 26 | j, e := NewJournal(nil) 27 | 28 | if e != nil { 29 | t.Fail() 30 | } 31 | 32 | j.SeekHead() 33 | j.Next() 34 | 35 | value, e := j.GetDataValue("MESSAGE") 36 | 37 | if len(value) == 0 { 38 | t.Logf("Failed value=%s err=%s", value, e) 39 | t.Fail() 40 | } else { 41 | t.Logf("Read value=%s", value) 42 | } 43 | 44 | } 45 | -------------------------------------------------------------------------------- /cloud-watch/mock.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "sync/atomic" 5 | "time" 6 | lg "github.com/advantageous/go-logback/logging" 7 | ) 8 | 9 | type MockJournal interface { 10 | Journal 11 | SetCount(uint64) 12 | SetError(error) 13 | } 14 | 15 | type TestJournal struct { 16 | values map[string]string 17 | logger lg.Logger 18 | count int64 19 | err error 20 | } 21 | 22 | type MockJournalRepeater struct { 23 | logger lg.Logger 24 | } 25 | 26 | func (repeater *MockJournalRepeater) Close() error { 27 | return nil 28 | } 29 | 30 | func (repeater *MockJournalRepeater) WriteBatch(records []*Record) error { 31 | 32 | for _, record := range records { 33 | 34 | priority := string(PriorityJsonMap[record.Priority]) 35 | 36 | switch record.Priority { 37 | 38 | case EMERGENCY: 39 | repeater.logger.Error(priority, "------", 
record.Message) 40 | case ALERT: 41 | repeater.logger.Error(priority, "------", record.Message) 42 | 43 | case CRITICAL: 44 | repeater.logger.Error(priority, "------", record.Message) 45 | case ERROR: 46 | repeater.logger.Error(priority, "------", record.Message) 47 | case NOTICE: 48 | repeater.logger.Warn(priority, "------", record.Message) 49 | 50 | case WARNING: 51 | repeater.logger.Warn(priority, "------", record.Message) 52 | 53 | case INFO: 54 | repeater.logger.Info(priority, "------", record.Message) 55 | 56 | case DEBUG: 57 | repeater.logger.Debug(priority, "------", record.Message) 58 | 59 | default: 60 | repeater.logger.Debug("?????", priority, "------", record.Message) 61 | 62 | } 63 | 64 | } 65 | return nil 66 | } 67 | 68 | func NewMockJournalRepeater() (repeater *MockJournalRepeater) { 69 | return &MockJournalRepeater{lg.NewSimpleLogger("mock-repeater")} 70 | } 71 | 72 | func (journal *TestJournal) SetCount(count uint64) { 73 | 74 | atomic.StoreInt64(&journal.count, int64(count)) 75 | 76 | } 77 | 78 | func (journal *TestJournal) SetError(err error) { 79 | journal.err = err 80 | 81 | } 82 | 83 | func NewJournalWithMap(values map[string]string) Journal { 84 | logger := lg.NewSimpleLogger("test-journal") 85 | return &TestJournal{ 86 | values: values, 87 | logger: logger, 88 | count: 113, 89 | } 90 | } 91 | 92 | func (journal *TestJournal) Close() error { 93 | journal.logger.Info("Close") 94 | return nil 95 | } 96 | 97 | // Next advances the read pointer into the journal by one entry. 98 | func (journal *TestJournal) Next() (uint64, error) { 99 | journal.logger.Debug("Next") 100 | 101 | var count = atomic.LoadInt64(&journal.count) 102 | 103 | if count > 0 { 104 | atomic.AddInt64(&journal.count, -1) 105 | return uint64(1), nil 106 | } else { 107 | return uint64(0), nil 108 | } 109 | 110 | } 111 | 112 | // NextSkip advances the read pointer by multiple entries at once, 113 | // as specified by the skip parameter. 
114 | func (journal *TestJournal) NextSkip(skip uint64) (uint64, error) { 115 | journal.logger.Info("Next Skip") 116 | return uint64(journal.count), nil 117 | } 118 | 119 | // Previous sets the read pointer into the journal back by one entry. 120 | func (journal *TestJournal) Previous() (uint64, error) { 121 | journal.logger.Info("Previous") 122 | return uint64(journal.count), nil 123 | } 124 | 125 | // PreviousSkip sets back the read pointer by multiple entries at once, 126 | // as specified by the skip parameter. 127 | func (journal *TestJournal) PreviousSkip(skip uint64) (uint64, error) { 128 | journal.logger.Info("Previous Skip") 129 | return uint64(journal.count), nil 130 | } 131 | 132 | // GetDataValue gets the data object associated with a specific field from the 133 | // current journal entry, returning only the value of the object. 134 | func (journal *TestJournal) GetDataValue(field string) (string, error) { 135 | if journal.count < 0 { 136 | panic("ARGH") 137 | } 138 | journal.logger.Debug("GetDataValue") 139 | return journal.values[field], nil 140 | } 141 | 142 | // GetRealtimeUsec gets the realtime (wallclock) timestamp of the current 143 | // journal entry. 144 | func (journal *TestJournal) GetRealtimeUsec() (uint64, error) { 145 | journal.logger.Info("GetRealtimeUsec") 146 | return 1480549576015541 / 1000, nil 147 | } 148 | 149 | func (journal *TestJournal) AddLogFilters(config *Config) { 150 | journal.logger.Info("AddLogFilters") 151 | } 152 | 153 | // GetMonotonicUsec gets the monotonic timestamp of the current journal entry. 154 | func (journal *TestJournal) GetMonotonicUsec() (uint64, error) { 155 | journal.logger.Info("GetMonotonicUsec") 156 | return uint64(journal.count), nil 157 | } 158 | 159 | // GetCursor gets the cursor of the current journal entry. 
160 | func (journal *TestJournal) GetCursor() (string, error) { 161 | journal.logger.Info("GetCursor") 162 | return "abc-123", nil 163 | } 164 | 165 | // SeekHead seeks to the beginning of the journal, i.e. the oldest available 166 | // entry. 167 | func (journal *TestJournal) SeekHead() error { 168 | return nil 169 | } 170 | 171 | // SeekTail may be used to seek to the end of the journal, i.e. the most recent 172 | // available entry. 173 | func (journal *TestJournal) SeekTail() error { 174 | return nil 175 | } 176 | 177 | // SeekCursor seeks to a concrete journal cursor. 178 | func (journal *TestJournal) SeekCursor(cursor string) error { 179 | return nil 180 | } 181 | 182 | // Wait will synchronously wait until the journal gets changed. The maximum time 183 | // this call sleeps may be controlled with the timeout parameter. If 184 | // sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will 185 | // wait indefinitely for a journal change. 186 | func (journal *TestJournal) Wait(timeout time.Duration) int { 187 | return 5 188 | } 189 | -------------------------------------------------------------------------------- /cloud-watch/read_test.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "errors" 5 | "fmt" 6 | "testing" 7 | "time" 8 | lg "github.com/advantageous/go-logback/logging" 9 | ) 10 | 11 | var readTestMap = map[string]string{ 12 | "__CURSOR": "s=6c072e0567ff423fa9cb39f136066299;i=3;b=923def0648b1422aa28a8846072481f2;m=65ee792c;t=542783a1cc4e0;x=7d96bf9e60a6512b", 13 | "__REALTIME_TIMESTAMP": "1480459022025952", 14 | "__MONOTONIC_TIMESTAMP": "1710127404", 15 | "_BOOT_ID": "923def0648b1422aa28a8846072481f2", 16 | "PRIORITY": "6", 17 | "_TRANSPORT": "driver", 18 | "_PID": "712", 19 | "_UID": "0", 20 | "_GID": "0", 21 | "_COMM": "systemd-journal", 22 | "_EXE": "/usr/lib/systemd/systemd-journald", 23 | "_CMDLINE": "/usr/lib/systemd/systemd-journald", 24 | 
"_CAP_EFFECTIVE": "a80425fb", 25 | "_SYSTEMD_CGROUP": "c", 26 | "_MACHINE_ID": "5125015c46bb4bf6a686b5e692492075", 27 | "_HOSTNAME": "f5076731cfdb", 28 | "MESSAGE": "Journal started", 29 | "MESSAGE_ID": "f77379a8490b408bbe5f6940505a777b", 30 | } 31 | 32 | const readTestConfigData = ` 33 | log_group="dcos-logstream-test" 34 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 35 | log_priority=3 36 | debug=true 37 | ` 38 | 39 | func TestReadFromJournalError(t *testing.T) { 40 | 41 | logger := lg.NewSimpleLogger("read-config-test") 42 | var journal MockJournal 43 | journal = NewJournalWithMap(readTestMap).(MockJournal) 44 | 45 | config, _ := LoadConfigFromString(readTestConfigData, logger) 46 | inputRecordChannel := make(chan Record) 47 | 48 | journal.SetError(errors.New("TEST ERROR")) 49 | journal.SetCount(1) 50 | runner := NewRunnerInternal(journal, NewMockJournalRepeater(), logger, config, false) 51 | 52 | go func() { 53 | 54 | record, _, _ := runner.readOneRecord() 55 | inputRecordChannel <- *record 56 | }() 57 | 58 | defer runner.Stop() 59 | var record Record 60 | var more bool 61 | 62 | timer := time.NewTimer(time.Millisecond * 50) 63 | 64 | select { 65 | case record, more = <-inputRecordChannel: 66 | if !more { 67 | return 68 | } 69 | 70 | if record == (Record{}) { 71 | t.Fatal() 72 | } 73 | case <-timer.C: 74 | t.Fatal() 75 | } 76 | 77 | } 78 | 79 | func TestReadAllFromJournal(t *testing.T) { 80 | 81 | logger := lg.NewSimpleLogger("read-config-test") 82 | var journal MockJournal 83 | journal = NewJournalWithMap(readTestMap).(MockJournal) 84 | 85 | config, _ := LoadConfigFromString(readTestConfigData, logger) 86 | 87 | journal.SetError(errors.New("TEST ERROR")) 88 | 89 | journal.SetCount(10) 90 | 91 | runner := NewRunnerInternal(journal, NewMockJournalRepeater(), logger, config, false) 92 | runner.Stop() 93 | go runner.readRecords() 94 | 95 | inputQueue := runner.queueManager.Queue().ReceiveQueue() 96 | 97 | count := 0 98 | batch := 
inputQueue.ReadBatchWait() 99 | 100 | for { 101 | if batch == nil { 102 | break 103 | } 104 | count += len(batch) 105 | batch = inputQueue.ReadBatchWait() 106 | } 107 | 108 | fmt.Println("COUNT", count) 109 | } 110 | -------------------------------------------------------------------------------- /cloud-watch/record.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "reflect" 5 | "strconv" 6 | "time" 7 | lg "github.com/advantageous/go-logback/logging" 8 | ) 9 | 10 | type Priority int 11 | 12 | var ( 13 | EMERGENCY Priority = 0 14 | ALERT Priority = 1 15 | CRITICAL Priority = 2 16 | ERROR Priority = 3 17 | WARNING Priority = 4 18 | NOTICE Priority = 5 19 | INFO Priority = 6 20 | DEBUG Priority = 7 21 | ) 22 | 23 | var PriorityJsonMap = map[Priority][]byte{ 24 | EMERGENCY: []byte("\"EMERG\""), 25 | ALERT: []byte("\"ALERT\""), 26 | CRITICAL: []byte("\"CRITICAL\""), 27 | ERROR: []byte("\"ERROR\""), 28 | WARNING: []byte("\"WARNING\""), 29 | NOTICE: []byte("\"NOTICE\""), 30 | INFO: []byte("\"INFO\""), 31 | DEBUG: []byte("\"DEBUG\""), 32 | } 33 | 34 | type Record struct { 35 | InstanceId string `json:"instanceId,omitempty"` 36 | TimeUsec int64 `json:"-" journald:"__REALTIME_TIMESTAMP"` 37 | PID int `json:"pid,omitempty" journald:"_PID"` 38 | UID int `json:"uid,omitempty" journald:"_UID"` 39 | GID int `json:"gid,omitempty" journald:"_GID"` 40 | Command string `json:"cmdName,omitempty" journald:"_COMM"` 41 | Executable string `json:"exe,omitempty" journald:"_EXE"` 42 | CommandLine string `json:"cmdLine,omitempty" journald:"_CMDLINE"` 43 | SystemdUnit string `json:"systemdUnit,omitempty" journald:"_SYSTEMD_UNIT"` 44 | BootId string `json:"bootId,omitempty" journald:"_BOOT_ID"` 45 | MachineId string `json:"machineId,omitempty" journald:"_MACHINE_ID"` 46 | Hostname string `json:"hostname,omitempty" journald:"_HOSTNAME"` 47 | Transport string `json:"transport,omitempty" journald:"_TRANSPORT"` 48 
| Priority Priority `json:"priority" journald:"PRIORITY"` 49 | Message string `json:"message" journald:"MESSAGE"` 50 | MessageId string `json:"messageId,omitempty" journald:"MESSAGE_ID"` 51 | Errno int `json:"errno,omitempty" journald:"ERRNO"` 52 | SeqId int64 `json:"seq,omitempty"` 53 | Facility int `json:"syslogFacility,omitempty" journald:"SYSLOG_FACILITY"` 54 | Identifier string `json:"syslogIdent,omitempty" journald:"SYSLOG_IDENTIFIER"` 55 | SysPID int `json:"syslogPid,omitempty" journald:"SYSLOG_PID"` 56 | Device string `json:"kernelDevice,omitempty" journald:"_KERNEL_DEVICE"` 57 | Subsystem string `json:"kernelSubsystem,omitempty" journald:"_KERNEL_SUBSYSTEM"` 58 | SysName string `json:"kernelSysName,omitempty" journald:"_UDEV_SYSNAME"` 59 | DevNode string `json:"kernelDevNode,omitempty" journald:"_UDEV_DEVNODE"` 60 | } 61 | 62 | func NewRecord(journal Journal, logger lg.Logger, config *Config) (*Record, error) { 63 | record := &Record{} 64 | 65 | err := decodeRecord(journal, reflect.ValueOf(record).Elem(), logger, config) 66 | 67 | if record.TimeUsec == 0 { 68 | 69 | timestamp, err := journal.GetRealtimeUsec() 70 | if err != nil { 71 | logger.Errorf("Unable to read the time : %s %v", err.Error(), err) 72 | record.TimeUsec = time.Now().Unix() * 1000 73 | } else { 74 | record.TimeUsec = int64(timestamp / 1000) 75 | } 76 | } 77 | 78 | return record, err 79 | } 80 | 81 | func decodeRecord(journal Journal, toVal reflect.Value, logger lg.Logger, config *Config) error { 82 | toType := toVal.Type() 83 | 84 | numField := toVal.NumField() 85 | 86 | for i := 0; i < numField; i++ { 87 | fieldVal := toVal.Field(i) 88 | fieldDef := toType.Field(i) 89 | fieldType := fieldDef.Type 90 | fieldTag := fieldDef.Tag 91 | fieldTypeKind := fieldType.Kind() 92 | 93 | jdKey := fieldTag.Get("journald") 94 | if jdKey == "" { 95 | continue 96 | } 97 | 98 | if !config.AllowField(jdKey) { 99 | continue 100 | } 101 | 102 | value, err := journal.GetDataValue(jdKey) 103 | if err != nil 
|| value == "" { 104 | fieldVal.Set(reflect.Zero(fieldType)) 105 | continue 106 | } 107 | 108 | switch fieldTypeKind { 109 | case reflect.Int: 110 | intVal, err := strconv.Atoi(value) 111 | if err != nil { 112 | logger.Warnf("Can't convert field %s to int", jdKey) 113 | fieldVal.Set(reflect.Zero(fieldType)) 114 | continue 115 | } 116 | fieldVal.SetInt(int64(intVal)) 117 | case reflect.String: 118 | fieldVal.SetString(trimField(value, config.FieldLength)) 119 | case reflect.Int64: 120 | u, err := strconv.ParseInt(value, 10, 64) 121 | if err != nil { 122 | logger.Warnf("Can't convert field %s to int64", jdKey) 123 | fieldVal.Set(reflect.Zero(fieldType)) 124 | continue 125 | } 126 | fieldVal.SetInt(u / 1000) 127 | default: 128 | logger.Warnf("Can't convert field %s unsupported type %s", jdKey, fieldTypeKind) 129 | } 130 | } 131 | 132 | return nil 133 | } 134 | 135 | func trimField(value string, fieldLength int) string { 136 | 137 | if fieldLength == 0 { 138 | fieldLength = 255 139 | } 140 | 141 | if fieldLength < len(value) { 142 | return value[0:fieldLength] 143 | } else { 144 | return value 145 | } 146 | } 147 | -------------------------------------------------------------------------------- /cloud-watch/record_test.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "encoding/json" 5 | "testing" 6 | lg "github.com/advantageous/go-logback/logging" 7 | ) 8 | 9 | var testMap = map[string]string{ 10 | "__CURSOR": "s=6c072e0567ff423fa9cb39f136066299;i=3;b=923def0648b1422aa28a8846072481f2;m=65ee792c;t=542783a1cc4e0;x=7d96bf9e60a6512b", 11 | "__REALTIME_TIMESTAMP": "1480459022025952", 12 | "__MONOTONIC_TIMESTAMP": "1710127404", 13 | "_BOOT_ID": "923def0648b1422aa28a8846072481f2", 14 | "PRIORITY": "6", 15 | "_TRANSPORT": "driver", 16 | "_PID": "712", 17 | "_UID": "0", 18 | "_GID": "0", 19 | "_COMM": "systemd-journal", 20 | "_EXE": 
"/usr/lib/systemd/systemd-journald", 21 | "_CMDLINE": "/usr/lib/systemd/systemd-journald", 22 | "_CAP_EFFECTIVE": "a80425fb", 23 | "_SYSTEMD_CGROUP": "c", 24 | "_MACHINE_ID": "5125015c46bb4bf6a686b5e692492075", 25 | "_HOSTNAME": "f5076731cfdb", 26 | "MESSAGE": "Journal started", 27 | "MESSAGE_ID": "f77379a8490b408bbe5f6940505a777b", 28 | "SYSLOG_FACILITY": "5", 29 | } 30 | 31 | func TestNewRecord(t *testing.T) { 32 | 33 | journal := NewJournalWithMap(testMap) 34 | logger := lg.NewSimpleLogger("test") 35 | data := ` 36 | log_group="dcos-logstream-test" 37 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 38 | log_priority=3 39 | debug=true 40 | ` 41 | config, err := LoadConfigFromString(data, logger) 42 | 43 | record, err := NewRecord(journal, logger, config) 44 | 45 | if err != nil { 46 | t.Logf("Failed err=%s", err) 47 | t.Fail() 48 | } 49 | 50 | if record == nil { 51 | t.Log("Record nil") 52 | t.Fail() 53 | } 54 | 55 | if record.CommandLine != "/usr/lib/systemd/systemd-journald" { 56 | t.Log("Unable to read cmd line") 57 | t.Fail() 58 | } 59 | 60 | if record.TimeUsec != 1480459022025952/1000 { 61 | t.Logf("Unable to read time stamp %d", record.TimeUsec) 62 | t.Fail() 63 | } 64 | 65 | } 66 | 67 | func TestNewRecordJson(t *testing.T) { 68 | 69 | journal := NewJournalWithMap(testMap) 70 | logger := lg.NewSimpleLogger("test") 71 | data := ` 72 | log_group="dcos-logstream-test" 73 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 74 | log_priority=3 75 | debug=true 76 | ` 77 | config, err := LoadConfigFromString(data, logger) 78 | 79 | record, err := NewRecord(journal, logger, config) 80 | 81 | if err != nil { 82 | t.Logf("Failed err=%s", err) 83 | t.Fail() 84 | } 85 | 86 | if record == nil { 87 | t.Log("Record nil") 88 | t.Fail() 89 | } 90 | 91 | if record.CommandLine != "/usr/lib/systemd/systemd-journald" { 92 | t.Log("Unable to read cmd line") 93 | t.Fail() 94 | } 95 | 96 | if record.TimeUsec != 1480459022025952/1000 { 97 | t.Logf("Unable to 
read time stamp %d", record.TimeUsec) 98 | t.Fail() 99 | } 100 | 101 | jsonDataBytes, err := json.MarshalIndent(record, "", " ") 102 | jsonData := string(jsonDataBytes) 103 | 104 | t.Log(jsonData) 105 | 106 | } 107 | 108 | func TestLimitFields(t *testing.T) { 109 | 110 | journal := NewJournalWithMap(testMap) 111 | logger := lg.NewSimpleLogger("test") 112 | data := ` 113 | log_group="dcos-logstream-test" 114 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 115 | log_priority=3 116 | debug=true 117 | fields=["__REALTIME_TIMESTAMP"] 118 | 119 | ` 120 | config, err := LoadConfigFromString(data, logger) 121 | 122 | record, err := NewRecord(journal, logger, config) 123 | 124 | if err != nil { 125 | t.Logf("Failed err=%s", err) 126 | t.Fail() 127 | } 128 | 129 | if record == nil { 130 | t.Log("Record nil") 131 | t.Fail() 132 | } 133 | 134 | if record.CommandLine != "" { 135 | t.Log("Unable to limit cmd line") 136 | t.Fail() 137 | } 138 | 139 | if record.TimeUsec != 1480459022025952/1000 { 140 | t.Logf("Unable to read time stamp %d", record.TimeUsec) 141 | t.Fail() 142 | } 143 | 144 | } 145 | 146 | func TestOmitFields(t *testing.T) { 147 | 148 | journal := NewJournalWithMap(testMap) 149 | logger := lg.NewSimpleLogger("test") 150 | data := ` 151 | log_group="dcos-logstream-test" 152 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 153 | log_priority=3 154 | debug=true 155 | omit_fields=["_CMDLINE"] 156 | 157 | ` 158 | config, err := LoadConfigFromString(data, logger) 159 | 160 | record, err := NewRecord(journal, logger, config) 161 | 162 | if err != nil { 163 | t.Logf("Failed err=%s", err) 164 | t.Fail() 165 | } 166 | 167 | if record == nil { 168 | t.Log("Record nil") 169 | t.Fail() 170 | } 171 | 172 | if record.CommandLine != "" { 173 | t.Log("Unable to omit cmd line") 174 | t.Fail() 175 | } 176 | 177 | if record.TimeUsec != 1480459022025952/1000 { 178 | t.Logf("Unable to read time stamp %d", record.TimeUsec) 179 | t.Fail() 180 | } 181 | 182 | } 183 
| -------------------------------------------------------------------------------- /cloud-watch/workers.go: -------------------------------------------------------------------------------- 1 | package cloud_watch 2 | 3 | import ( 4 | "fmt" 5 | q "github.com/advantageous/go-qbit/qbit" 6 | "os" 7 | "os/signal" 8 | "syscall" 9 | "time" 10 | lg "github.com/advantageous/go-logback/logging" 11 | ) 12 | 13 | type Runner struct { 14 | records []*Record 15 | bufferSize int 16 | logger lg.Logger 17 | journalRepeater JournalRepeater 18 | journal Journal 19 | batchCounter uint64 20 | idleCounter uint64 21 | emptyCounter uint64 22 | lastMetricTime int64 23 | queueManager q.QueueManager 24 | config *Config 25 | debug bool 26 | instanceId string 27 | } 28 | 29 | func (r *Runner) Stop() { 30 | r.queueManager.Stop() 31 | } 32 | 33 | func (r *Runner) addToCloudWatchBatch(record *Record) { 34 | 35 | r.records = append(r.records, record) 36 | 37 | if len(r.records) >= r.bufferSize { 38 | r.sendBatch() 39 | } 40 | } 41 | 42 | func (r *Runner) sendBatch() { 43 | 44 | if len(r.records) > 0 { 45 | batchToSend := r.records 46 | r.records = make([]*Record, 0) 47 | err := r.journalRepeater.WriteBatch(batchToSend) 48 | if err != nil { 49 | r.logger.Errorf("Failed to write batch to cloudwatch, batch size = %d : %s %v", 50 | len(batchToSend), err.Error(), err) 51 | } 52 | } 53 | } 54 | 55 | func NewRunnerInternal(journal Journal, repeater JournalRepeater, logger lg.Logger, config *Config, start bool) *Runner { 56 | 57 | if repeater == nil { 58 | panic("Repeater can't be nil") 59 | } 60 | 61 | // Resolve the logger before building the Runner so a nil logger is 62 | // never stored on the Runner itself. 63 | if logger == nil { 64 | if config.Debug { 65 | logger = lg.NewSimpleDebugLogger("record-reader") 66 | } else { 67 | logger = lg.GetSimpleLogger("RECORD_READER_DEBUG", "record-reader") 68 | } 69 | } 70 | 71 | r := &Runner{journal: journal, 72 | journalRepeater: repeater, 73 | logger: logger, 74 | config: config, debug: config.Debug, instanceId: config.EC2InstanceId, bufferSize: config.CloudWatchBufferSize} 
75 | 76 | r.queueManager = q.NewQueueManager(config.QueueChannelSize, 77 | config.QueueBatchSize, 78 | time.Duration(config.QueuePollDurationMS)*time.Millisecond, 79 | q.NewQueueListener(&q.QueueListener{ 80 | 81 | ReceiveFunc: func(item interface{}) { 82 | r.addToCloudWatchBatch(item.(*Record)) 83 | }, 84 | EndBatchFunc: func() { 85 | r.sendBatch() 86 | r.batchCounter++ 87 | }, 88 | IdleFunc: func() { 89 | r.sendBatch() 90 | now := time.Now().Unix() 91 | if now-r.lastMetricTime > 120 { 92 | r.lastMetricTime = now 93 | r.logger.Infof("Systemd CloudWatch: batches sent %d, idleCount %d, emptyCount %d", 94 | r.batchCounter, r.idleCounter, r.emptyCounter) 95 | } 96 | r.idleCounter++ 97 | }, 98 | EmptyFunc: func() { 99 | r.sendBatch() 100 | r.emptyCounter++ 101 | }, 102 | })) 103 | 104 | r.lastMetricTime = time.Now().Unix() 105 | r.positionCursor() 106 | 107 | if start { 108 | signalChannel := r.makeTerminateChannel() 109 | 110 | go func() { 111 | <-signalChannel 112 | r.queueManager.Stop() 113 | }() 114 | 115 | r.readRecords() 116 | } 117 | 118 | return r 119 | } 120 | 121 | func NewRunner(journal Journal, repeater JournalRepeater, logger lg.Logger, config *Config) *Runner { 122 | return NewRunnerInternal(journal, repeater, logger, config, true) 123 | } 124 | 125 | func (r *Runner) makeTerminateChannel() <-chan os.Signal { 126 | ch := make(chan os.Signal, 1) 127 | signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM) 128 | return ch 129 | } 130 | 131 | func (r *Runner) readOneRecord() (*Record, bool, error) { 132 | 133 | count, err := r.journal.Next() 134 | if err != nil { 135 | return nil, false, err 136 | } else if count > 0 { 137 | if r.debug { 138 | r.logger.Info("No errors, reading log") 139 | } 140 | record, err := NewRecord(r.journal, r.logger, r.config) 141 | if err != nil { 142 | return nil, false, fmt.Errorf("error unmarshalling record: %v", err) 143 | } 144 | record.InstanceId = r.instanceId 145 | if r.debug { 146 | r.logger.Info("Read record", record) 147 | } 
148 | return record, true, nil 149 | } else { 150 | 151 | if r.debug { 152 | r.logger.Info("Waiting for two seconds") 153 | } 154 | r.journal.Wait(2 * time.Second) 155 | return nil, false, nil 156 | } 157 | 158 | } 159 | 160 | func (r *Runner) readRecords() { 161 | 162 | sendQueue := r.queueManager.SendQueueWithAutoFlush(time.Duration(r.config.FlushLogEntries) * time.Millisecond) 163 | 164 | for { 165 | 166 | record, isReadRecord, err := r.readOneRecord() 167 | 168 | if err == nil && isReadRecord && record != nil { 169 | sendQueue.Send(record) 170 | } 171 | 172 | if err != nil { 173 | r.logger.Error("Error reading record", err) 174 | } 175 | 176 | if !isReadRecord { 177 | if r.queueManager.Stopped() { 178 | r.logger.Info("Got stop message") 179 | break 180 | } 181 | } 182 | 183 | } 184 | 185 | } 186 | 187 | func (r *Runner) positionCursor() { 188 | 189 | if r.config.Tail { 190 | err := r.journal.SeekTail() 191 | if err != nil { 192 | r.logger.Error("Unable to seek to end of systemd journal", err) 193 | panic("Unable to seek to end of systemd journal") 194 | } else { 195 | r.logger.Info("Success: Seek to end of systemd journal") 196 | } 197 | 198 | count, err := r.journal.PreviousSkip(uint64(r.config.Rewind)) 199 | if err != nil { 200 | r.logger.Error("Unable to rewind after seeking to end of systemd journal", err, r.config.Rewind) 201 | panic("Unable to rewind systemd journal") 202 | } else { 203 | r.logger.Info("Success: Rewind", r.config.Rewind, count) 204 | } 205 | } else { 206 | err := r.journal.SeekHead() 207 | if err != nil { 208 | r.logger.Error("Unable to seek to head of systemd journal", err) 209 | panic("Unable to seek to head of systemd journal") 210 | } else { 211 | r.logger.Info("Success: Seek to head of systemd journal") 212 | } 213 | 214 | } 215 | 216 | } 217 | -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- 1 | # 
systemd-cloud-watch 2 | 3 | This is an alternative process to the AWS-provided logs agent. 4 | The AWS logs agent copies data from on-disk text log files into [Cloudwatch](https://aws.amazon.com/cloudwatch/). 5 | 6 | This utility reads from the systemd journal and sends the data in batches to Cloudwatch. 7 | 8 | 9 | ## Derived 10 | This is based on [advantageous journald-cloudwatch-logs](https://github.com/advantageous/journald-cloudwatch-logs) 11 | which was forked from [saymedia journald-cloudwatch-logs](https://github.com/saymedia/journald-cloudwatch-logs). 12 | 13 | ## Status 14 | It is close to being done. 15 | 16 | 17 | Improvements: 18 | 19 | * Added unit tests (there were none). 20 | * Added cross compile so I can develop/test on my laptop (MacOS). 21 | * Made logging stateless. No more need for a state file. 22 | * No more getting out of sync with CloudWatch. 23 | * Detects being out of sync and recovers. 24 | * Fixed error with log messages being too big. 25 | * Added ability to include or omit logging fields. 26 | * Created docker image and scripts to test on Linux (CentOS7). 27 | * Created EC2 image and scripts to test on Linux running in AWS EC2 (CentOS7). 28 | * Code organization (we use a package). 29 | * Added comprehensive logging which includes debug logging by config. 30 | * Uses actual timestamp from journal log record instead of just current time 31 | * Auto-creates CloudWatch log group if it does not exist 32 | 33 | 34 | ## Log format 35 | 36 | The journal event data is written to ***CloudWatch*** Logs in JSON format, making it amenable to filtering using the JSON filter syntax. 
37 | Log records are translated to ***CloudWatch*** JSON events using a structure like the following: 38 | 39 | #### Sample log 40 | ```javascript 41 | { 42 | "instanceId": "i-xxxxxxxx", 43 | "pid": 12354, 44 | "uid": 0, 45 | "gid": 0, 46 | "cmdName": "cron", 47 | "exe": "/usr/sbin/cron", 48 | "cmdLine": "/usr/sbin/CRON -f", 49 | "systemdUnit": "cron.service", 50 | "bootId": "fa58079c7a6d12345678b6ebf1234567", 51 | "hostname": "ip-10-1-0-15", 52 | "transport": "syslog", 53 | "priority": "INFO", 54 | "message": "pam_unix(cron:session): session opened for user root by (uid=0)", 55 | "syslog": { 56 | "facility": 10, 57 | "ident": "CRON", 58 | "pid": 12354 59 | }, 60 | "kernel": {} 61 | } 62 | ``` 63 | 64 | The JSON-formatted log events could also be exported into an AWS ElasticSearch instance using the ***CloudWatch*** 65 | sync mechanism. Once in ElasticSearch, you can use an ELK stack to obtain more elaborate filtering and query capabilities. 66 | 67 | 68 | ## Installation 69 | 70 | If you have a binary distribution, you just need to drop the executable file somewhere. 71 | 72 | This tool assumes that it is running on an EC2 instance. 73 | 74 | This tool uses `libsystemd` to access the journal. systemd-based distributions generally ship 75 | with this already installed, but if yours doesn't you must manually install the library somehow before 76 | this tool will work. 77 | 78 | There are instructions on how to install the Linux requirements for development below; see 79 | [Setting up a Linux env for testing/developing (CentOS7)](#setting-up-a-linux-env-for-testingdeveloping-centos7). 80 | 81 | We also have two excellent examples of setting up a dev environment using [packer](https://www.packer.io/) for both 82 | [AWS EC2](#building-the-ec2-image-with-packer-to-build-the-linux-instance-to-build-this-project) and 83 | [Docker](#building-the-docker-image-to-build-the-linux-instance-to-build-this-project). We set up CentOS 7. 
84 | The EC2 instance packer build uses the ***aws command line*** to create and connect to a running image. 85 | These should be instructive for how to set up this utility in your environment to run with ***systemd*** as we provide 86 | all of the systemd scripts in the packer provision scripts for EC2. An example is good. A running example is better. 87 | 88 | ## Configuration 89 | 90 | This tool uses a small configuration file to set some values that are required for its operation. 91 | Most of the configuration values are optional and have default settings, but a couple are required. 92 | 93 | The configuration file uses a syntax like this: 94 | 95 | ```js 96 | log_group = "my-awesome-app" 97 | 98 | ``` 99 | 100 | The following configuration settings are supported: 101 | 102 | * `aws_region`: (Optional) The AWS region whose CloudWatch Logs API will be written to. If not provided, 103 | this defaults to the region where the host EC2 instance is running. 104 | 105 | * `ec2_instance_id`: (Optional) The id of the EC2 instance on which the tool is running. There is very 106 | little reason to set this, since it will be automatically set to the id of the host EC2 instance. 107 | 108 | * `journal_dir`: (Optional) Override the directory where the systemd journal can be found. This is 109 | useful in conjunction with remote log aggregation, to work with journals synced from other systems. 110 | The default is to use the local system's journal. 111 | 112 | * `log_group`: (Required) The name of the cloudwatch log group to write logs into. The log group is 113 | auto-created if it does not already exist. 114 | 115 | * `log_priority`: (Optional) The highest priority of the log messages to read (on a 0-7 scale). This defaults 116 | to DEBUG (all messages). This has a behaviour similar to `journalctl -p <priority>`. At the moment, only 117 | a single value can be specified, not a range.
Possible values are: `0,1,2,3,4,5,6,7` or one of the corresponding 118 | `"emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"`. 119 | When a single log level is specified, all messages with this log level or a lower (hence more important) 120 | log level are read and pushed to CloudWatch. For more information about priority levels, look at 121 | https://www.freedesktop.org/software/systemd/man/journalctl.html 122 | 123 | * `log_stream`: (Optional) The name of the cloudwatch log stream to write logs into. This defaults to 124 | the EC2 instance id. Each running instance of this application (along with any other applications 125 | writing logs into the same log group) must have a unique `log_stream` value. If the given log stream 126 | doesn't exist then it will be created before writing the first set of journal events. 127 | 128 | * `buffer_size`: (Optional) The size of the local event buffer where journal events will be kept 129 | in order to write batches of events to the CloudWatch Logs API. The default is 100. A batch of 130 | new events will be written to CloudWatch Logs every second even if the buffer does not fill, but 131 | this setting provides a maximum batch size to use when clearing a large backlog of events, e.g. 132 | from system boot when the program starts for the first time. 133 | 134 | * `fields`: (Optional) Specifies which fields should be included in the JSON map that is sent to CloudWatch. 135 | 136 | * `omit_fields`: (Optional) Specifies which fields should NOT be included in the JSON map that is sent to CloudWatch. 137 | 138 | * `field_length`: (Optional) Specifies how long string fields can be in the JSON map that is sent to CloudWatch. 139 | The default is 255 characters. 140 | 141 | * `debug`: (Optional) Turns on debug logging. 142 | 143 | * `local`: (Optional) Used for unit testing. Will not try to create an AWS meta-data client to read region and AWS credentials.
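Putting these options together, a fuller configuration file might look like the following. This is an illustrative sketch: all the values are made up, and the exact syntax for list-valued settings such as `fields` and `omit_fields` should be checked against `samples/sample.conf` in this repository.

```js
log_group    = "my-awesome-app"

# Everything below is optional; shown with example values.
log_stream   = "my-app-instance-1"
log_priority = "info"
buffer_size  = 200
field_length = 255
fields       = ["message", "priority", "systemdUnit", "hostname"]
debug        = true
```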
144 | 145 | 146 | 147 | ### AWS API access 148 | 149 | This program requires access to call some of the Cloudwatch API functions. The recommended way to 150 | achieve this is to create an 151 | [IAM Instance Profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) 152 | that grants your EC2 instance a role that has Cloudwatch API access. The program will automatically 153 | discover and make use of instance profile credentials. 154 | 155 | The following IAM policy grants the required access across all log groups in all regions: 156 | 157 | ```js 158 | { 159 | "Version": "2012-10-17", 160 | "Statement": [ 161 | { 162 | "Effect": "Allow", 163 | "Action": [ 164 | "logs:CreateLogStream", 165 | "logs:PutLogEvents", 166 | "logs:DescribeLogStreams" 167 | ], 168 | "Resource": [ 169 | "arn:aws:logs:*:*:log-group:*", 170 | "arn:aws:logs:*:*:log-group:*:log-stream:*" 171 | ] 172 | } 173 | ] 174 | } 175 | ``` 176 | 177 | In more complex environments you may want to restrict further which regions, groups and streams 178 | the instance can write to. You can do this by adjusting the two ARN strings in the `"Resource"` section: 179 | 180 | * The first `*` in each string can be replaced with an AWS region name like `us-east-1` 181 | to grant access only within the given region. 182 | * The `*` after `log-group` in each string can be replaced with a Cloudwatch Logs log group name 183 | to grant access only to the named group. 184 | * The `*` after `log-stream` in the second string can be replaced with a Cloudwatch Logs log stream 185 | name to grant access only to the named stream. 186 | 187 | Other combinations are possible too. For more information, see 188 | [the reference on ARNs and namespaces](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-cloudwatch-logs). 
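For instance, applying those substitutions to lock the policy down to a single region and a single log group (using `us-east-1` and the hypothetical group name `my-awesome-app`) would look like this:

```js
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:*:log-group:my-awesome-app",
                "arn:aws:logs:us-east-1:*:log-group:my-awesome-app:log-stream:*"
            ]
        }
    ]
}
```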
189 | 190 | 191 | 192 | ### Coexisting with the official Cloudwatch Logs agent 193 | 194 | This application can run on the same host as the official Cloudwatch Logs agent but care must be taken 195 | to ensure that they each use a different log stream name. Only one process may write into each log 196 | stream. 197 | 198 | ## Running on System Boot 199 | 200 | This program is best used as a persistent service that starts on boot and keeps running until the 201 | system is shut down. If you're using `journald` then you're presumably using systemd; you can create 202 | a systemd unit for this service. For example: 203 | 204 | ``` 205 | [Unit] 206 | Description=journald-cloudwatch-logs 207 | Wants=basic.target 208 | After=basic.target network.target 209 | 210 | [Service] 211 | User=nobody 212 | Group=nobody 213 | ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf 214 | KillMode=process 215 | Restart=on-failure 216 | RestartSec=42s 217 | ``` 218 | 219 | This program is designed under the assumption that it will run constantly from some point during 220 | system boot until the system shuts down. 221 | 222 | If the service is stopped while the system is running and then later started again, it will 223 | "lose" any journal entries that were written while it wasn't running. However, on the initial 224 | run after each boot it will clear the backlog of logs created during the boot process, so it 225 | is not necessary to run the program particularly early in the boot process unless you wish 226 | to *promptly* capture startup messages. 
227 | 228 | ## Building 229 | 230 | #### Test cloud-watch package 231 | ```sh 232 | go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 233 | ``` 234 | 235 | 236 | #### Build and Test on Linux (CentOS7) 237 | ```sh 238 | ./run_build_linux.sh 239 | ``` 240 | 241 | The above starts up a docker container, runs `go get`, `go build`, `go test` and then copies the binary to 242 | `systemd-cloud-watch_linux`. 243 | 244 | #### Debug process running on Linux 245 | ```sh 246 | ./run_test_container.sh 247 | ``` 248 | 249 | 250 | The above starts up a docker container for development that has all the prerequisites needed to 251 | compile and test this project. 252 | 253 | #### Sample debug session 254 | ```sh 255 | $ ./run_test_container.sh 256 | latest: Pulling from advantageous/golang-cloud-watch 257 | Digest: sha256:eaf5c0a387aee8cc2d690e1c5e18763e12beb7940ca0960ce1b9742229413e71 258 | Status: Image is up to date for advantageous/golang-cloud-watch:latest 259 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/ 260 | .git/ README.md cloud-watch/ packer/ sample.conf 261 | .gitignore build_linux.sh main.go run_build_linux.sh systemd-cloud-watch.iml 262 | .idea/ cgroup/ output.json run_test_container.sh systemd-cloud-watch_linux 263 | 264 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/ 265 | 266 | [root@6e0d1f984c03 systemd-cloud-watch]# ls 267 | README.md build_linux.sh cgroup cloud-watch main.go output.json packer run_build_linux.sh 268 | run_test_container.sh sample.conf systemd-cloud-watch.iml systemd-cloud-watch_linux 269 | 270 | [root@6e0d1f984c03 systemd-cloud-watch]# source ~/.bash_profile 271 | 272 | [root@6e0d1f984c03 systemd-cloud-watch]# export GOPATH=/gopath 273 | 274 | [root@6e0d1f984c03 systemd-cloud-watch]# /usr/lib/systemd/systemd-journald & 275 | [1] 24 276 | 277 | [root@6e0d1f984c03 systemd-cloud-watch]# systemd-cat echo "RUNNING JAVA BATCH JOB - ADF BATCH from
`pwd`" 278 | 279 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go clean" 280 | Running go clean 281 | 282 | [root@6e0d1f984c03 systemd-cloud-watch]# go clean 283 | 284 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go get" 285 | Running go get 286 | 287 | [root@6e0d1f984c03 systemd-cloud-watch]# go get 288 | 289 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go build" 290 | Running go build 291 | [root@6e0d1f984c03 systemd-cloud-watch]# go build 292 | 293 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go test" 294 | Running go test 295 | 296 | [root@6e0d1f984c03 systemd-cloud-watch]# go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch 297 | === RUN TestRepeater 298 | config DEBUG: 2016/11/30 08:53:34 config.go:66: Loading log... 299 | aws INFO: 2016/11/30 08:53:34 aws.go:42: Config set to local 300 | aws INFO: 2016/11/30 08:53:34 aws.go:72: Client missing credentials not looked up 301 | aws INFO: 2016/11/30 08:53:34 aws.go:50: Client missing using config to set region 302 | aws INFO: 2016/11/30 08:53:34 aws.go:52: AWSRegion missing using default region us-west-2 303 | repeater ERROR: 2016/11/30 08:53:44 cloudwatch_journal_repeater.go:141: Error from putEvents NoCredentialProviders: no valid providers in chain. Deprecated. 304 | For verbose messaging see aws.Config.CredentialsChainVerboseErrors 305 | --- SKIP: TestRepeater (10.01s) 306 | cloudwatch_journal_repeater_test.go:43: Skipping WriteBatch, you need to setup AWS credentials for this to work 307 | === RUN TestConfig 308 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 309 | test INFO: 2016/11/30 08:53:44 config_test.go:33: [Foo Bar] 310 | --- PASS: TestConfig (0.00s) 311 | === RUN TestLogOmitField 312 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 
313 | --- PASS: TestLogOmitField (0.00s) 314 | === RUN TestNewJournal 315 | --- PASS: TestNewJournal (0.00s) 316 | === RUN TestSdJournal_Operations 317 | --- PASS: TestSdJournal_Operations (0.00s) 318 | journal_linux_test.go:41: Read value=Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available → current limit 4.0G). 319 | === RUN TestNewRecord 320 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 321 | --- PASS: TestNewRecord (0.00s) 322 | === RUN TestLimitFields 323 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 324 | --- PASS: TestLimitFields (0.00s) 325 | === RUN TestOmitFields 326 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log... 327 | --- PASS: TestOmitFields (0.00s) 328 | PASS 329 | ok github.com/advantageous/systemd-cloud-watch/cloud-watch 10.017s 330 | ``` 331 | 332 | 333 | 334 | 335 | #### Building the docker image to build the linux instance to build this project 336 | 337 | ```sh 338 | # from project root 339 | cd packer 340 | packer build packer_docker.json 341 | ``` 342 | 343 | 344 | #### To run docker dev image 345 | ```sh 346 | # from project root 347 | cd packer 348 | ./runDocker.sh 349 | 350 | ``` 351 | 352 | #### Building the ec2 image with packer to build the linux instance to build this project 353 | 354 | ```sh 355 | # from project root 356 | cd packer 357 | packer build packer_ec2.json 358 | ``` 359 | 360 | We use the [docker](https://www.packer.io/docs/builders/docker.html) support for [packer](https://www.packer.io/). 361 | ("Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.") 362 | 363 | Use `ec2_env.sh_example` to create an `ec2_env.sh` with the AMI id that was just created.
364 | 365 | #### ec2_env.sh_example 366 | ``` 367 | #!/usr/bin/env bash 368 | export ami=ami-YOURAMI 369 | export subnet=subnet-YOURSUBNET 370 | export security_group=sg-YOURSG 371 | export iam_profile=YOUR_IAM_ROLE 372 | export key_name=MY_PEM_FILE_KEY_NAME 373 | 374 | ``` 375 | 376 | ##### Using EC2 image (assumes you have ~/.ssh config setup) 377 | ```sh 378 | # from project root 379 | cd packer 380 | 381 | # Run and log into dev env running in EC2 382 | ./runEc2Dev.sh 383 | 384 | # Log into running server 385 | ./loginIntoEc2Dev.sh 386 | 387 | ``` 388 | 389 | 390 | 391 | 392 | 393 | ## Setting up a Linux env for testing/developing (CentOS7). 394 | ```sh 395 | yum -y install wget 396 | yum install -y git 397 | yum install -y gcc 398 | yum install -y systemd-devel 399 | 400 | 401 | echo "installing go" 402 | cd /tmp 403 | wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz 404 | tar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz 405 | rm go1.7.3.linux-amd64.tar.gz 406 | echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile 407 | ``` 408 | 409 | ## Setting up Java to write to systemd journal 410 | 411 | #### gradle build 412 | ``` 413 | compile 'org.gnieh:logback-journal:0.2.0' 414 | 415 | ``` 416 | 417 | #### logback.xml 418 | ```xml 419 | <?xml version="1.0" encoding="UTF-8"?> 420 | <configuration> 421 | 422 |     <appender name="journal" class="org.gnieh.logback.SystemdJournalAppender" /> 423 | 424 |     <root level="INFO"> 425 |         <appender-ref ref="journal" /> 426 |         <customFields>{"serviceName":"adfCalcBatch","serviceHost":"${HOST}"}</customFields> 427 |     </root> 428 | 429 | 430 |     <logger name="com.mycompany" level="INFO"/> 431 | 432 | </configuration> 433 | ``` 434 | 435 | ## Commands for controlling systemd service EC2 dev env 436 | 437 | ```sh 438 | # Get status 439 | sudo systemctl status journald-cloudwatch 440 | # Stop Service 441 | sudo systemctl stop journald-cloudwatch 442 | # Find the service 443 | ps -ef | grep cloud 444 | # Run service manually 445 | /usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf 446 | 447 | ``` 448 | 449 | -------------------------------------------------------------------------------- /docs/images/checker.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/advantageous/systemd-cloud-watch/c37dee94ad7213dd7496b4c4b3575ded4f08f0ae/docs/images/checker.png -------------------------------------------------------------------------------- /docs/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | Systemd Journal CloudWatch by advantageous 7 | 8 | 9 | 10 | 11 | 12 | 13 | 16 | 17 | 18 |
19 |
20 |

Systemd Journal CloudWatch

21 |

Alt util for AWS CloudWatch agent that works with systemd journal and sends the data in batches to AWS CloudWatch.

22 |

View the Project on GitHub advantageous/systemd-cloud-watch

23 | 28 |
29 |
30 |

31 | Systemd Journal CloudWatch Writer

32 | 33 |

This utility reads from the systemd journal, 34 | and sends the data in batches to Cloudwatch.

35 | 36 |

This is an alternative process to the AWS-provided logs agent. 37 | The AWS logs agent copies data from on-disk text log files into Cloudwatch. 38 | This utility systemd-cloud-watch reads the systemd journal and writes that data in batches to CloudWatch.

39 | 40 |

There are other ways to do this using various techniques. But depending on the size of log messages and size of the core parts 41 | these other methods are fragile. This utility allows you cap the log field size, include only the fields that you want, or 42 | exclude the fields you don't want. We find that this is not only useful but essential.

43 | 44 |

45 | Log format

46 | 47 |

The journal event data is written to CloudWatch Logs in JSON format, making it amenable to filtering using the JSON filter syntax. 48 | Log records are translated to CloudWatch JSON events using a structure like the following:

49 | 50 |

51 | Sample log

52 | 53 |
{
 54 |     "instanceId": "i-xxxxxxxx",
 55 |     "pid": 12354,
 56 |     "uid": 0,
 57 |     "gid": 0,
 58 |     "cmdName": "cron",
 59 |     "exe": "/usr/sbin/cron",
 60 |     "cmdLine": "/usr/sbin/CRON -f",
 61 |     "systemdUnit": "cron.service",
 62 |     "bootId": "fa58079c7a6d12345678b6ebf1234567",
 63 |     "hostname": "ip-10-1-0-15",
 64 |     "transport": "syslog",
 65 |     "priority": "INFO",
 66 |     "message": "pam_unix(cron:session): session opened for user root by (uid=0)",
 67 |     "syslog": {
 68 |         "facility": 10,
 69 |         "ident": "CRON",
 70 |         "pid": 12354
 71 |     },
 72 |     "kernel": {}
 73 | }
74 | 75 |

The JSON-formatted log events could also be exported into an AWS ElasticSearch instance using the CloudWatch 76 | sync mechanism. Once in ElasticSearch, you can use an ELK stack to obtain more elaborate filtering and query capabilities.

77 | 78 |

79 | Installation

80 | 81 |

If you have a binary distribution, you just need to drop the executable file somewhere.

82 | 83 |

This tool assumes that it is running on an EC2 instance.

84 | 85 |

This tool uses libsystemd to access the journal. systemd-based distributions generally ship 86 | with this already installed, but if yours doesn't you must manually install the library somehow before 87 | this tool will work.

88 | 89 |

There are instructions on how to install the Linux requirements for development below see - 90 | Setting up a Linux env for testing/developing (CentOS7).

91 | 92 |

We also have two excellent examples of setting up a dev environment using packer for both 93 | AWS EC2 and 94 | Docker. We set up CentOS 7. 95 | The EC2 instance packer build uses the aws command line to create and connect to a running image. 96 | These should be instructive for how to set up this utility in your environment to run with systemd as we provide 97 | all of the systemd scripts in the packer provision scripts for EC2. An example is good. A running example is better.

98 | 99 |

100 | Configuration

101 | 102 |

This tool uses a small configuration file to set some values that are required for its operation. 103 | Most of the configuration values are optional and have default settings, but a couple are required.

104 | 105 |

The configuration file uses a syntax like this:

106 | 107 |
log_group = "my-awesome-app"
108 | 
109 | 110 |

The following configuration settings are supported:

111 | 112 |
    113 |
  • aws_region: (Optional) The AWS region whose CloudWatch Logs API will be written to. If not provided, 114 | this defaults to the region where the host EC2 instance is running.

  • 115 |
  • ec2_instance_id: (Optional) The id of the EC2 instance on which the tool is running. There is very 116 | little reason to set this, since it will be automatically set to the id of the host EC2 instance.

  • 117 |
  • journal_dir: (Optional) Override the directory where the systemd journal can be found. This is 118 | useful in conjunction with remote log aggregation, to work with journals synced from other systems. 119 | The default is to use the local system's journal.

  • 120 |
  • log_group: (Required) The name of the cloudwatch log group to write logs into. This log group must 121 | be created before running the program.

  • 122 |
  • log_priority: (Optional) The highest priority of the log messages to read (on a 0-7 scale). This defaults 123 | to DEBUG (all messages). This has a behaviour similar to journalctl -p <priority>. At the moment, only 124 | a single value can be specified, not a range. Possible values are: 0,1,2,3,4,5,6,7 or one of the corresponding 125 | "emerg", "alert", "crit", "err", "warning", "notice", "info", "debug". 126 | When a single log level is specified, all messages with this log level or a lower (hence more important) 127 | log level are read and pushed to CloudWatch. For more information about priority levels, look at 128 | https://www.freedesktop.org/software/systemd/man/journalctl.html

  • 129 |
  • log_stream: (Optional) The name of the cloudwatch log stream to write logs into. This defaults to 130 | the EC2 instance id. Each running instance of this application (along with any other applications 131 | writing logs into the same log group) must have a unique log_stream value. If the given log stream 132 | doesn't exist then it will be created before writing the first set of journal events.

  • 133 |
  • buffer_size: (Optional) The size of the local event buffer where journal events will be kept 134 | in order to write batches of events to the CloudWatch Logs API. The default is 100. A batch of 135 | new events will be written to CloudWatch Logs every second even if the buffer does not fill, but 136 | this setting provides a maximum batch size to use when clearing a large backlog of events, e.g. 137 | from system boot when the program starts for the first time.

  • 138 |
  • fields: (Optional) Specifies which fields should be included in the JSON map that is sent to CloudWatch.

  • 139 |
  • omit_fields: (Optional) Specifies which fields should NOT be included in the JSON map that is sent to CloudWatch.

  • 140 |
  • field_length: (Optional) Specifies how long string fields can be in the JSON map that is sent to CloudWatch. 141 | The default is 255 characters.

  • 142 |
  • debug: (Optional) Turns on debug logging.

  • 143 |
  • local: (Optional) Used for unit testing. Will not try to create an AWS meta-data client to read region and AWS credentials.

  • 144 |
145 | 146 |

147 | AWS API access

148 | 149 |

This program requires access to call some of the Cloudwatch API functions. The recommended way to 150 | achieve this is to create an 151 | IAM Instance Profile 152 | that grants your EC2 instance a role that has Cloudwatch API access. The program will automatically 153 | discover and make use of instance profile credentials.

154 | 155 |

The following IAM policy grants the required access across all log groups in all regions:

156 | 157 |
{
158 |     "Version": "2012-10-17",
159 |     "Statement": [
160 |         {
161 |             "Effect": "Allow",
162 |             "Action": [
163 |                 "logs:CreateLogStream",
164 |                 "logs:PutLogEvents",
165 |                 "logs:DescribeLogStreams"
166 |             ],
167 |             "Resource": [
168 |                 "arn:aws:logs:*:*:log-group:*",
169 |                 "arn:aws:logs:*:*:log-group:*:log-stream:*"
170 |             ]
171 |         }
172 |     ]
173 | }
174 | 175 |

In more complex environments you may want to restrict further which regions, groups and streams 176 | the instance can write to. You can do this by adjusting the two ARN strings in the "Resource" section:

177 | 178 |
    179 |
  • The first * in each string can be replaced with an AWS region name like us-east-1 180 | to grant access only within the given region.
  • 181 |
  • The * after log-group in each string can be replaced with a Cloudwatch Logs log group name 182 | to grant access only to the named group.
  • 183 |
  • The * after log-stream in the second string can be replaced with a Cloudwatch Logs log stream 184 | name to grant access only to the named stream.
  • 185 |
186 | 187 |

Other combinations are possible too. For more information, see 188 | the reference on ARNs and namespaces.

189 | 190 |

191 | Coexisting with the official Cloudwatch Logs agent

192 | 193 |

This application can run on the same host as the official Cloudwatch Logs agent but care must be taken 194 | to ensure that they each use a different log stream name. Only one process may write into each log 195 | stream.

196 | 197 |

198 | Running on System Boot

199 | 200 |

This program is best used as a persistent service that starts on boot and keeps running until the 201 | system is shut down. If you're using journald then you're presumably using systemd; you can create 202 | a systemd unit for this service. For example:

203 | 204 |
[Unit]
205 | Description=journald-cloudwatch-logs
206 | Wants=basic.target
207 | After=basic.target network.target
208 | 
209 | [Service]
210 | User=nobody
211 | Group=nobody
212 | ExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf
213 | KillMode=process
214 | Restart=on-failure
215 | RestartSec=42s
216 | 
217 | 218 |

This program is designed under the assumption that it will run constantly from some point during 219 | system boot until the system shuts down.

220 | 221 |

If the service is stopped while the system is running and then later started again, it will 222 | "lose" any journal entries that were written while it wasn't running. However, on the initial 223 | run after each boot it will clear the backlog of logs created during the boot process, so it 224 | is not necessary to run the program particularly early in the boot process unless you wish 225 | to promptly capture startup messages.

226 | 227 |

228 | Building

229 | 230 |

231 | Test cloud-watch package

232 | 233 |
go test -v  github.com/advantageous/systemd-cloud-watch/cloud-watch
234 | 235 |

236 | Build and Test on Linux (Centos7)

237 | 238 |
 ./run_build_linux.sh
239 | 240 |

The above starts up a docker container, runs go get, go build, go test and then copies the binary to 241 | systemd-cloud-watch_linux.

242 | 243 |

244 | Debug process running Linux

245 | 246 |
 ./run_test_container.sh
247 | 248 |

The above starts up a docker container that you can develop with that has all the prerequisites needed to 249 | compile and test this project.

250 | 251 |

252 | Sample debug session

253 | 254 |
$ ./run_test_container.sh
255 | latest: Pulling from advantageous/golang-cloud-watch
256 | Digest: sha256:eaf5c0a387aee8cc2d690e1c5e18763e12beb7940ca0960ce1b9742229413e71
257 | Status: Image is up to date for advantageous/golang-cloud-watch:latest
258 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/
259 | .git/                      README.md                  cloud-watch/               packer/                    sample.conf
260 | .gitignore                 build_linux.sh             main.go                    run_build_linux.sh         systemd-cloud-watch.iml    
261 | .idea/                     cgroup/                    output.json                run_test_container.sh      systemd-cloud-watch_linux  
262 | 
263 | [root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/
264 | 
265 | [root@6e0d1f984c03 systemd-cloud-watch]# ls
266 | README.md  build_linux.sh  cgroup  cloud-watch  main.go  output.json  packer  run_build_linux.sh
267 | run_test_container.sh  sample.conf  systemd-cloud-watch.iml  systemd-cloud-watch_linux
268 | 
269 | [root@6e0d1f984c03 systemd-cloud-watch]# source ~/.bash_profile
270 | 
271 | [root@6e0d1f984c03 systemd-cloud-watch]# export GOPATH=/gopath
272 | 
273 | [root@6e0d1f984c03 systemd-cloud-watch]# /usr/lib/systemd/systemd-journald &
274 | [1] 24
275 | 
276 | [root@6e0d1f984c03 systemd-cloud-watch]# systemd-cat echo "RUNNING JAVA BATCH JOB - ADF BATCH from `pwd`"
277 | 
278 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go clean"
279 | Running go clean
280 | 
281 | [root@6e0d1f984c03 systemd-cloud-watch]# go clean
282 | 
283 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go get"
284 | Running go get
285 | 
286 | [root@6e0d1f984c03 systemd-cloud-watch]# go get
287 | 
288 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go build"
289 | Running go build
290 | [root@6e0d1f984c03 systemd-cloud-watch]# go build
291 | 
292 | [root@6e0d1f984c03 systemd-cloud-watch]# echo "Running go test"
293 | Running go test
294 | 
295 | [root@6e0d1f984c03 systemd-cloud-watch]# go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch
296 | === RUN   TestRepeater
297 | config DEBUG: 2016/11/30 08:53:34 config.go:66: Loading log...
298 | aws INFO: 2016/11/30 08:53:34 aws.go:42: Config set to local
299 | aws INFO: 2016/11/30 08:53:34 aws.go:72: Client missing credentials not looked up
300 | aws INFO: 2016/11/30 08:53:34 aws.go:50: Client missing using config to set region
301 | aws INFO: 2016/11/30 08:53:34 aws.go:52: AWSRegion missing using default region us-west-2
302 | repeater ERROR: 2016/11/30 08:53:44 cloudwatch_journal_repeater.go:141: Error from putEvents NoCredentialProviders: no valid providers in chain. Deprecated.
303 |     For verbose messaging see aws.Config.CredentialsChainVerboseErrors
304 | --- SKIP: TestRepeater (10.01s)
305 |     cloudwatch_journal_repeater_test.go:43: Skipping WriteBatch, you need to setup AWS credentials for this to work
306 | === RUN   TestConfig
307 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
308 | test INFO: 2016/11/30 08:53:44 config_test.go:33: [Foo Bar]
309 | --- PASS: TestConfig (0.00s)
310 | === RUN   TestLogOmitField
311 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
312 | --- PASS: TestLogOmitField (0.00s)
313 | === RUN   TestNewJournal
314 | --- PASS: TestNewJournal (0.00s)
315 | === RUN   TestSdJournal_Operations
316 | --- PASS: TestSdJournal_Operations (0.00s)
317 |     journal_linux_test.go:41: Read value=Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available → current limit 4.0G).
318 | === RUN   TestNewRecord
319 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
320 | --- PASS: TestNewRecord (0.00s)
321 | === RUN   TestLimitFields
322 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
323 | --- PASS: TestLimitFields (0.00s)
324 | === RUN   TestOmitFields
325 | test DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...
326 | --- PASS: TestOmitFields (0.00s)
327 | PASS
328 | ok      github.com/advantageous/systemd-cloud-watch/cloud-watch 10.017s
329 | 330 |

331 | Building the docker image to build the linux instance to build this project

332 | 333 |
# from project root
334 | cd packer
335 | packer build packer_docker.json
336 | 337 |

338 | To run docker dev image

339 | 340 |
# from project root
341 | cd packer
342 | ./runDocker.sh
343 | 
344 | 345 |

346 | Building the ec2 image with packer to build the linux instance to build this project

347 | 348 |
# from project root
349 | cd packer
350 | packer build packer_ec2.json
351 | 352 |

We use the docker support for packer. 353 | ("Packer is a tool for creating machine and container images for multiple platforms from a single source configuration.")

354 | 355 |

Use ec2_env.sh_example to create an ec2_env.sh with the AMI id that was just created.

356 | 357 |

358 | ec2_env.sh_example

359 | 360 |
#!/usr/bin/env bash
361 | export ami=ami-YOURAMI
362 | export subnet=subnet-YOURSUBNET
363 | export security_group=sg-YOURSG
364 | export iam_profile=YOUR_IAM_ROLE
365 | export key_name=MY_PEM_FILE_KEY_NAME
366 | 
367 | 
368 | 369 |
370 | Using EC2 image (assumes you have ~/.ssh config setup)
371 | 372 |
# from project root
373 | cd packer
374 | 
375 | # Run and log into dev env running in EC2
376 | ./runEc2Dev.sh
377 | 
378 | # Log into running server
379 | ./loginIntoEc2Dev.sh
380 | 
381 | 382 |

## Setting up a Linux env for testing/developing (CentOS7).
```sh
yum -y install wget
yum install -y git
yum install -y gcc
yum install -y systemd-devel


echo "installing go"
cd /tmp
wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz
tar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz
rm go1.7.3.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile
```
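After the install above, a Go workspace is still needed before `go get`/`go build` will work. A minimal sketch (the workspace path is just an example; any directory works as `GOPATH`):

```shell
# Put the Go toolchain on PATH for this shell and create a workspace.
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/gopath
mkdir -p "$GOPATH/src/github.com/advantageous"
echo "$GOPATH"
```

With Go installed, `go version` in a fresh login shell should then report the 1.7.3 toolchain installed above.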

## Setting up Java to write to systemd journal

#### gradle build
```
compile 'org.gnieh:logback-journal:0.2.0'
```
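For context, that `compile` line belongs in the `dependencies` block of a `build.gradle`. A minimal sketch — the `logback-classic` dependency and its version are our assumption (logback-journal is a logback appender, so logback itself must be on the classpath):

```groovy
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    // systemd journal appender for logback
    compile 'org.gnieh:logback-journal:0.2.0'
    // logback itself (version is an assumption; use the one your project pins)
    compile 'ch.qos.logback:logback-classic:1.1.7'
}
```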

#### logback.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="journal" class="org.gnieh.logback.SystemdJournalAppender" />

    <root level="INFO">
        <appender-ref ref="journal" />
        <customFields>{"serviceName":"adfCalcBatch","serviceHost":"${HOST}"}</customFields>
    </root>

    <logger name="com.mycompany" level="INFO"/>

</configuration>
```

## Commands for controlling systemd service EC2 dev env

```sh
# Get status
sudo systemctl status journald-cloudwatch
# Stop Service
sudo systemctl stop journald-cloudwatch
# Find the service
ps -ef | grep cloud
# Run service manually
/usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf
```

## Derived
This is based on [advantageous journald-cloudwatch-logs](https://github.com/advantageous/journald-cloudwatch-logs)
which was forked from [saymedia journald-cloudwatch-logs](https://github.com/saymedia/journald-cloudwatch-logs).

## Status
It is close to being done.

Improvements:
* Added unit tests (there were none).
* Added cross compile so I can develop/test on my laptop (MacOS).
* Made logging stateless. No more need for a state file.
* No more getting out of sync with CloudWatch.
* Detects being out of sync and recovers.
* Fixed error with log messages being too big.
* Added ability to include or omit logging fields.
* Created docker image and scripts to test on Linux (CentOS7).
* Created EC2 image and scripts to test on Linux running in AWS EC2 (CentOS7).
* Code organization (we use a package).
* Added comprehensive logging which includes debug logging by config.
* Uses actual timestamp from journal log record instead of just current time.
* Auto-creates CloudWatch log group if it does not exist.

## License

Copyright (c) 2015 Say Media Inc

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

All additional work is covered under Apache 2.0 license.
Copyright (c) 2016 Geoff Chandler, Rick Hightower
495 | 499 | 500 | 501 | 502 | 503 | -------------------------------------------------------------------------------- /docs/javascripts/scale.fix.js: -------------------------------------------------------------------------------- 1 | fixScale = function(doc) { 2 | 3 | var addEvent = 'addEventListener', 4 | type = 'gesturestart', 5 | qsa = 'querySelectorAll', 6 | scales = [1, 1], 7 | meta = qsa in doc ? doc[qsa]('meta[name=viewport]') : []; 8 | 9 | function fix() { 10 | meta.content = 'width=device-width,minimum-scale=' + scales[0] + ',maximum-scale=' + scales[1]; 11 | doc.removeEventListener(type, fix, true); 12 | } 13 | 14 | if ((meta = meta[meta.length - 1]) && addEvent in doc) { 15 | fix(); 16 | scales = [.25, 1.6]; 17 | doc[addEvent](type, fix, true); 18 | } 19 | 20 | }; -------------------------------------------------------------------------------- /docs/params.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Systemd Journal CloudWatch", 3 | "tagline": "Alt util for AWS CloudWatch agent that works with systemd journal and sends the data in batches to AWS CloudWatch.", 4 | "body": "# Systemd Journal CloudWatch Writer\r\n\r\nThis utility reads from the [systemd journal](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html),\r\n and sends the data in batches to [Cloudwatch](https://aws.amazon.com/cloudwatch/).\r\n \r\nThis is an alternative process to the AWS-provided logs agent.\r\nThe AWS logs agent copies data from on-disk text log files into [Cloudwatch](https://aws.amazon.com/cloudwatch/).\r\nThis utility `systemd-cloud-watch` reads the `systemd journal` and writes that data in batches to CloudWatch.\r\n\r\nThere are other ways to do this using various techniques. But depending on the size of log messages and size of the core parts\r\nthese other methods are fragile. 
This utility allows you cap the log field size, include only the fields that you want, or\r\nexclude the fields you don't want. We find that this is not only useful but essential. \r\n\r\n\r\n## Log format\r\n\r\nThe journal event data is written to ***CloudWatch*** Logs in JSON format, making it amenable to filtering using the JSON filter syntax.\r\nLog records are translated to ***CloudWatch*** JSON events using a structure like the following:\r\n\r\n#### Sample log\r\n```javascript\r\n{\r\n \"instanceId\": \"i-xxxxxxxx\",\r\n \"pid\": 12354,\r\n \"uid\": 0,\r\n \"gid\": 0,\r\n \"cmdName\": \"cron\",\r\n \"exe\": \"/usr/sbin/cron\",\r\n \"cmdLine\": \"/usr/sbin/CRON -f\",\r\n \"systemdUnit\": \"cron.service\",\r\n \"bootId\": \"fa58079c7a6d12345678b6ebf1234567\",\r\n \"hostname\": \"ip-10-1-0-15\",\r\n \"transport\": \"syslog\",\r\n \"priority\": \"INFO\",\r\n \"message\": \"pam_unix(cron:session): session opened for user root by (uid=0)\",\r\n \"syslog\": {\r\n \"facility\": 10,\r\n \"ident\": \"CRON\",\r\n \"pid\": 12354\r\n },\r\n \"kernel\": {}\r\n}\r\n```\r\n\r\nThe JSON-formatted log events could also be exported into an AWS ElasticSearch instance using the ***CloudWatch***\r\nsync mechanism. Once in ElasticSearch, you can use an ELK stack to obtain more elaborate filtering and query capabilities.\r\n\r\n\r\n## Installation\r\n\r\nIf you have a binary distribution, you just need to drop the executable file somewhere.\r\n\r\nThis tool assumes that it is running on an EC2 instance.\r\n\r\nThis tool uses `libsystemd` to access the journal. 
systemd-based distributions generally ship\r\nwith this already installed, but if yours doesn't you must manually install the library somehow before\r\nthis tool will work.\r\n\r\nThere are instructions on how to install the Linux requirements for development below see - \r\n[Setting up a Linux env for testing/developing (CentOS7)](#setting-up-a-linux-env-for-testingdeveloping-centos7).\r\n\r\nWe also have two excellent examples of setting up a dev environment using [packer](https://www.packer.io/) for both \r\n[AWS EC2](#building-the-ec2-image-with-packer-to-build-the-linux-instance-to-build-this-project) and \r\n[Docker](#building-the-docker-image-to-build-the-linux-instance-to-build-this-project). We setup CentoOS 7.\r\nThe EC2 instance packer build uses the ***aws command line*** to create and connect to a running image. \r\nThese should be instructive for how to setup this utility in your environment to run with ***systemd*** as we provide\r\nall of the systemd scripts in the packer provision scripts for EC2. An example is good. A running example is better.\r\n\r\n## Configuration\r\n\r\nThis tool uses a small configuration file to set some values that are required for its operation.\r\nMost of the configuration values are optional and have default settings, but a couple are required.\r\n\r\nThe configuration file uses a syntax like this:\r\n\r\n```js\r\nlog_group = \"my-awesome-app\"\r\n\r\n```\r\n\r\nThe following configuration settings are supported:\r\n\r\n* `aws_region`: (Optional) The AWS region whose CloudWatch Logs API will be written to. If not provided,\r\n this defaults to the region where the host EC2 instance is running.\r\n\r\n* `ec2_instance_id`: (Optional) The id of the EC2 instance on which the tool is running. There is very\r\n little reason to set this, since it will be automatically set to the id of the host EC2 instance.\r\n\r\n* `journal_dir`: (Optional) Override the directory where the systemd journal can be found. 
This is\r\n useful in conjunction with remote log aggregation, to work with journals synced from other systems.\r\n The default is to use the local system's journal.\r\n\r\n* `log_group`: (Required) The name of the cloudwatch log group to write logs into. This log group must\r\n be created before running the program.\r\n\r\n* `log_priority`: (Optional) The highest priority of the log messages to read (on a 0-7 scale). This defaults\r\n to DEBUG (all messages). This has a behaviour similar to `journalctl -p `. At the moment, only\r\n a single value can be specified, not a range. Possible values are: `0,1,2,3,4,5,6,7` or one of the corresponding\r\n `\"emerg\", \"alert\", \"crit\", \"err\", \"warning\", \"notice\", \"info\", \"debug\"`.\r\n When a single log level is specified, all messages with this log level or a lower (hence more important)\r\n log level are read and pushed to CloudWatch. For more information about priority levels, look at\r\n https://www.freedesktop.org/software/systemd/man/journalctl.html\r\n\r\n* `log_stream`: (Optional) The name of the cloudwatch log stream to write logs into. This defaults to\r\n the EC2 instance id. Each running instance of this application (along with any other applications\r\n writing logs into the same log group) must have a unique `log_stream` value. If the given log stream\r\n doesn't exist then it will be created before writing the first set of journal events.\r\n\r\n* `buffer_size`: (Optional) The size of the local event buffer where journal events will be kept\r\n in order to write batches of events to the CloudWatch Logs API. The default is 100. 
A batch of\r\n new events will be written to CloudWatch Logs every second even if the buffer does not fill, but\r\n this setting provides a maximum batch size to use when clearing a large backlog of events, e.g.\r\n from system boot when the program starts for the first time.\r\n\r\n* `fields`: (Optional) Specifies which fields should be included in the JSON map that is sent to CloudWatch.\r\n\r\n* `omit_fields`: (Optional) Specifies which fields should NOT be included in the JSON map that is sent to CloudWatch.\r\n\r\n* `field_length`: (Optional) Specifies how long string fileds can be in the JSON map that is sent to CloudWatch.\r\n The default is 255 characters.\r\n\r\n* `debug`: (Optional) Turns on debug logging.\r\n\r\n* `local`: (Optional) Used for unit testing. Will not try to create an AWS meta-data client to read region and AWS credentials.\r\n\r\n\r\n\r\n### AWS API access\r\n\r\nThis program requires access to call some of the Cloudwatch API functions. The recommended way to\r\nachieve this is to create an\r\n[IAM Instance Profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html)\r\nthat grants your EC2 instance a role that has Cloudwatch API access. The program will automatically\r\ndiscover and make use of instance profile credentials.\r\n\r\nThe following IAM policy grants the required access across all log groups in all regions:\r\n\r\n```js\r\n{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Effect\": \"Allow\",\r\n \"Action\": [\r\n \"logs:CreateLogStream\",\r\n \"logs:PutLogEvents\",\r\n \"logs:DescribeLogStreams\"\r\n ],\r\n \"Resource\": [\r\n \"arn:aws:logs:*:*:log-group:*\",\r\n \"arn:aws:logs:*:*:log-group:*:log-stream:*\"\r\n ]\r\n }\r\n ]\r\n}\r\n```\r\n\r\nIn more complex environments you may want to restrict further which regions, groups and streams\r\nthe instance can write to. 
You can do this by adjusting the two ARN strings in the `\"Resource\"` section:\r\n\r\n* The first `*` in each string can be replaced with an AWS region name like `us-east-1`\r\n to grant access only within the given region.\r\n* The `*` after `log-group` in each string can be replaced with a Cloudwatch Logs log group name\r\n to grant access only to the named group.\r\n* The `*` after `log-stream` in the second string can be replaced with a Cloudwatch Logs log stream\r\n name to grant access only to the named stream.\r\n\r\nOther combinations are possible too. For more information, see\r\n[the reference on ARNs and namespaces](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-cloudwatch-logs).\r\n\r\n\r\n\r\n### Coexisting with the official Cloudwatch Logs agent\r\n\r\nThis application can run on the same host as the official Cloudwatch Logs agent but care must be taken\r\nto ensure that they each use a different log stream name. Only one process may write into each log\r\nstream.\r\n\r\n## Running on System Boot\r\n\r\nThis program is best used as a persistent service that starts on boot and keeps running until the\r\nsystem is shut down. If you're using `journald` then you're presumably using systemd; you can create\r\na systemd unit for this service. For example:\r\n\r\n```\r\n[Unit]\r\nDescription=journald-cloudwatch-logs\r\nWants=basic.target\r\nAfter=basic.target network.target\r\n\r\n[Service]\r\nUser=nobody\r\nGroup=nobody\r\nExecStart=/usr/local/bin/journald-cloudwatch-logs /usr/local/etc/journald-cloudwatch-logs.conf\r\nKillMode=process\r\nRestart=on-failure\r\nRestartSec=42s\r\n```\r\n\r\nThis program is designed under the assumption that it will run constantly from some point during\r\nsystem boot until the system shuts down.\r\n\r\nIf the service is stopped while the system is running and then later started again, it will\r\n\"lose\" any journal entries that were written while it wasn't running. 
However, on the initial\r\nrun after each boot it will clear the backlog of logs created during the boot process, so it\r\nis not necessary to run the program particularly early in the boot process unless you wish\r\nto *promptly* capture startup messages.\r\n\r\n## Building\r\n\r\n#### Test cloud-watch package\r\n```sh\r\ngo test -v github.com/advantageous/systemd-cloud-watch/cloud-watch\r\n```\r\n\r\n\r\n#### Build and Test on Linux (Centos7)\r\n```sh\r\n ./run_build_linux.sh\r\n```\r\n\r\nThe above starts up a docker container, runs `go get`, `go build`, `go test` and then copies the binary to\r\n`systemd-cloud-watch_linux`.\r\n\r\n#### Debug process running Linux\r\n```sh\r\n ./run_test_container.sh\r\n```\r\n\r\n\r\nThe above starts up a docker container that you can develop with that has all the prerequisites needed to\r\ncompile and test this project.\r\n\r\n#### Sample debug session\r\n```sh\r\n$ ./run_test_container.sh\r\nlatest: Pulling from advantageous/golang-cloud-watch\r\nDigest: sha256:eaf5c0a387aee8cc2d690e1c5e18763e12beb7940ca0960ce1b9742229413e71\r\nStatus: Image is up to date for advantageous/golang-cloud-watch:latest\r\n[root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/\r\n.git/ README.md cloud-watch/ packer/ sample.conf \r\n.gitignore build_linux.sh main.go run_build_linux.sh systemd-cloud-watch.iml \r\n.idea/ cgroup/ output.json run_test_container.sh systemd-cloud-watch_linux \r\n\r\n[root@6e0d1f984c03 /]# cd gopath/src/github.com/advantageous/systemd-cloud-watch/\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# ls\r\nREADME.md build_linux.sh cgroup cloud-watch main.go output.json packer run_build_linux.sh \r\nrun_test_container.sh sample.conf systemd-cloud-watch.iml systemd-cloud-watch_linux\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# source ~/.bash_profile\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# export GOPATH=/gopath\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# /usr/lib/systemd/systemd-journald 
&\r\n[1] 24\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# systemd-cat echo \"RUNNING JAVA BATCH JOB - ADF BATCH from `pwd`\"\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# echo \"Running go clean\"\r\nRunning go clean\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# go clean\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# echo \"Running go get\"\r\nRunning go get\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# go get\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# echo \"Running go build\"\r\nRunning go build\r\n[root@6e0d1f984c03 systemd-cloud-watch]# go build\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# echo \"Running go test\"\r\nRunning go test\r\n\r\n[root@6e0d1f984c03 systemd-cloud-watch]# go test -v github.com/advantageous/systemd-cloud-watch/cloud-watch\r\n=== RUN TestRepeater\r\nconfig DEBUG: 2016/11/30 08:53:34 config.go:66: Loading log...\r\naws INFO: 2016/11/30 08:53:34 aws.go:42: Config set to local\r\naws INFO: 2016/11/30 08:53:34 aws.go:72: Client missing credentials not looked up\r\naws INFO: 2016/11/30 08:53:34 aws.go:50: Client missing using config to set region\r\naws INFO: 2016/11/30 08:53:34 aws.go:52: AWSRegion missing using default region us-west-2\r\nrepeater ERROR: 2016/11/30 08:53:44 cloudwatch_journal_repeater.go:141: Error from putEvents NoCredentialProviders: no valid providers in chain. 
Deprecated.\r\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\r\n--- SKIP: TestRepeater (10.01s)\r\n\tcloudwatch_journal_repeater_test.go:43: Skipping WriteBatch, you need to setup AWS credentials for this to work\r\n=== RUN TestConfig\r\ntest DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...\r\ntest INFO: 2016/11/30 08:53:44 config_test.go:33: [Foo Bar]\r\n--- PASS: TestConfig (0.00s)\r\n=== RUN TestLogOmitField\r\ntest DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...\r\n--- PASS: TestLogOmitField (0.00s)\r\n=== RUN TestNewJournal\r\n--- PASS: TestNewJournal (0.00s)\r\n=== RUN TestSdJournal_Operations\r\n--- PASS: TestSdJournal_Operations (0.00s)\r\n\tjournal_linux_test.go:41: Read value=Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available → current limit 4.0G).\r\n=== RUN TestNewRecord\r\ntest DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...\r\n--- PASS: TestNewRecord (0.00s)\r\n=== RUN TestLimitFields\r\ntest DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...\r\n--- PASS: TestLimitFields (0.00s)\r\n=== RUN TestOmitFields\r\ntest DEBUG: 2016/11/30 08:53:44 config.go:66: Loading log...\r\n--- PASS: TestOmitFields (0.00s)\r\nPASS\r\nok \tgithub.com/advantageous/systemd-cloud-watch/cloud-watch\t10.017s\r\n```\r\n\r\n\r\n\r\n\r\n#### Building the docker image to build the linux instance to build this project\r\n\r\n```sh\r\n# from project root\r\ncd packer\r\npacker build packer_docker.json\r\n```\r\n\r\n\r\n#### To run docker dev image\r\n```sh\r\n# from project root\r\ncd packer\r\n./run.sh\r\n\r\n```\r\n\r\n#### Building the ec2 image with packer to build the linux instance to build this project\r\n\r\n```sh\r\n# from project root\r\ncd packer\r\npacker build packer_ec2.json\r\n```\r\n\r\nWe use the [docker](https://www.packer.io/docs/builders/docker.html) support for [packer](https://www.packer.io/).\r\n(\"Packer is a tool for creating machine and container images for multiple 
platforms from a single source configuration.\")\r\n\r\nUse `ec2_env.sh_example` to create a `ec2_env.sh` with the instance id that was just created. \r\n\r\n#### ec2_env.sh_example\r\n```\r\n#!/usr/bin/env bash\r\nexport ami=ami-YOURAMI\r\nexport subnet=subnet-YOURSUBNET\r\nexport security_group=sg-YOURSG\r\nexport iam_profile=YOUR_IAM_ROLE\r\nexport key_name=MY_PEM_FILE_KEY_NAME\r\n\r\n```\r\n\r\n##### Using EC2 image (assumes you have ~/.ssh config setup)\r\n```sh\r\n# from project root\r\ncd packer\r\n\r\n# Run and log into dev env running in EC2\r\n./runEc2Dev.sh\r\n\r\n# Log into running server\r\n./loginIntoEc2Dev.sh\r\n\r\n```\r\n\r\n\r\n\r\n\r\n\r\n## Setting up a Linux env for testing/developing (CentOS7).\r\n```sh\r\nyum -y install wget\r\nyum install -y git\r\nyum install -y gcc\r\nyum install -y systemd-devel\r\n\r\n\r\necho \"installing go\"\r\ncd /tmp\r\nwget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz\r\ntar -C /usr/local/ -xzf go1.7.3.linux-amd64.tar.gz\r\nrm go1.7.3.linux-amd64.tar.gz\r\necho 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile\r\n```\r\n\r\n## Setting up Java to write to systemd journal\r\n\r\n#### gradle build\r\n```\r\ncompile 'org.gnieh:logback-journal:0.2.0'\r\n\r\n```\r\n\r\n#### logback.xml\r\n```xml\r\n\r\n\r\n\r\n \r\n\r\n \r\n \r\n {\"serviceName\":\"adfCalcBatch\",\"serviceHost\":\"${HOST}\"}\r\n \r\n\r\n\r\n \r\n\r\n\r\n```\r\n\r\n## Commands for controlling systemd service EC2 dev env\r\n\r\n```sh\r\n# Get status\r\nsudo systemctl status journald-cloudwatch\r\n# Stop Service\r\nsudo systemctl stop journald-cloudwatch\r\n# Find the service\r\nps -ef | grep cloud\r\n# Run service manually\r\n/usr/bin/systemd-cloud-watch_linux /etc/journald-cloudwatch.conf\r\n\r\n```\r\n\r\n\r\n\r\n## Derived\r\nThis is based on [advantageous journald-cloudwatch-logs](https://github.com/advantageous/journald-cloudwatch-logs)\r\nwhich was forked from [saymedia 
journald-cloudwatch-logs](https://github.com/saymedia/journald-cloudwatch-logs).\r\n\r\n\r\n## Status\r\nIt is close to being done. \r\n\r\n\r\nImprovements:\r\n\r\n* Added unit tests (there were none).\r\n* Added cross compile so I can develop/test on my laptop (MacOS).\r\n* Made logging stateless. No more need for a state file.\r\n* No more getting out of sync with CloudWatch.\r\n* Detects being out of sync and recovers.\r\n* Fixed error with log messages being too big.\r\n* Added ability to include or omit logging fields.\r\n* Created docker image and scripts to test on Linux (CentOS7).\r\n* Created EC2 image and scripts to test on Linux running in AWS EC2 (CentOS7).\r\n* Code organization (we use a package).\r\n* Added comprehensive logging which includes debug logging by config.\r\n* Uses actual timestamp from journal log record instead of just current time\r\n* Auto-creates CloudWatch log group if it does not exist\r\n\r\n## License\r\n\r\nCopyright (c) 2015 Say Media Inc\r\n\r\nPermission is hereby granted, free of charge, to any person obtaining a copy\r\nof this software and associated documentation files (the \"Software\"), to deal\r\nin the Software without restriction, including without limitation the rights\r\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\r\ncopies of the Software, and to permit persons to whom the Software is\r\nfurnished to do so, subject to the following conditions:\r\n\r\nThe above copyright notice and this permission notice shall be included in all\r\ncopies or substantial portions of the Software.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\r\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\r\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\r\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\r\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\r\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\r\nSOFTWARE.\r\n\r\nAll additional work is covered under Apache 2.0 license.\r\nCopyright (c) 2016 Geoff Chandler, Rick Hightower\r\n", 5 | "note": "Don't delete this file! It's used internally to help with page regeneration." 6 | } -------------------------------------------------------------------------------- /docs/stylesheets/github-dark.css: -------------------------------------------------------------------------------- 1 | /* 2 | The MIT License (MIT) 3 | 4 | Copyright (c) 2016 GitHub, Inc. 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy 7 | of this software and associated documentation files (the "Software"), to deal 8 | in the Software without restriction, including without limitation the rights 9 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 | copies of the Software, and to permit persons to whom the Software is 11 | furnished to do so, subject to the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be included in all 14 | copies or substantial portions of the Software. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 22 | SOFTWARE. 
23 | 24 | */ 25 | 26 | .pl-c /* comment */ { 27 | color: #969896; 28 | } 29 | 30 | .pl-c1 /* constant, variable.other.constant, support, meta.property-name, support.constant, support.variable, meta.module-reference, markup.raw, meta.diff.header */, 31 | .pl-s .pl-v /* string variable */ { 32 | color: #0099cd; 33 | } 34 | 35 | .pl-e /* entity */, 36 | .pl-en /* entity.name */ { 37 | color: #9774cb; 38 | } 39 | 40 | .pl-smi /* variable.parameter.function, storage.modifier.package, storage.modifier.import, storage.type.java, variable.other */, 41 | .pl-s .pl-s1 /* string source */ { 42 | color: #ddd; 43 | } 44 | 45 | .pl-ent /* entity.name.tag */ { 46 | color: #7bcc72; 47 | } 48 | 49 | .pl-k /* keyword, storage, storage.type */ { 50 | color: #cc2372; 51 | } 52 | 53 | .pl-s /* string */, 54 | .pl-pds /* punctuation.definition.string, string.regexp.character-class */, 55 | .pl-s .pl-pse .pl-s1 /* string punctuation.section.embedded source */, 56 | .pl-sr /* string.regexp */, 57 | .pl-sr .pl-cce /* string.regexp constant.character.escape */, 58 | .pl-sr .pl-sre /* string.regexp source.ruby.embedded */, 59 | .pl-sr .pl-sra /* string.regexp string.regexp.arbitrary-repitition */ { 60 | color: #3c66e2; 61 | } 62 | 63 | .pl-v /* variable */ { 64 | color: #fb8764; 65 | } 66 | 67 | .pl-id /* invalid.deprecated */ { 68 | color: #e63525; 69 | } 70 | 71 | .pl-ii /* invalid.illegal */ { 72 | color: #f8f8f8; 73 | background-color: #e63525; 74 | } 75 | 76 | .pl-sr .pl-cce /* string.regexp constant.character.escape */ { 77 | font-weight: bold; 78 | color: #7bcc72; 79 | } 80 | 81 | .pl-ml /* markup.list */ { 82 | color: #c26b2b; 83 | } 84 | 85 | .pl-mh /* markup.heading */, 86 | .pl-mh .pl-en /* markup.heading entity.name */, 87 | .pl-ms /* meta.separator */ { 88 | font-weight: bold; 89 | color: #264ec5; 90 | } 91 | 92 | .pl-mq /* markup.quote */ { 93 | color: #00acac; 94 | } 95 | 96 | .pl-mi /* markup.italic */ { 97 | font-style: italic; 98 | color: #ddd; 99 | } 100 | 101 | .pl-mb /* 
markup.bold */ { 102 | font-weight: bold; 103 | color: #ddd; 104 | } 105 | 106 | .pl-md /* markup.deleted, meta.diff.header.from-file */ { 107 | color: #bd2c00; 108 | background-color: #ffecec; 109 | } 110 | 111 | .pl-mi1 /* markup.inserted, meta.diff.header.to-file */ { 112 | color: #55a532; 113 | background-color: #eaffea; 114 | } 115 | 116 | .pl-mdr /* meta.diff.range */ { 117 | font-weight: bold; 118 | color: #9774cb; 119 | } 120 | 121 | .pl-mo /* meta.output */ { 122 | color: #264ec5; 123 | } 124 | 125 | -------------------------------------------------------------------------------- /docs/stylesheets/github-light.css: -------------------------------------------------------------------------------- 1 | /* 2 | The MIT License (MIT) 3 | 4 | Copyright (c) 2016 GitHub, Inc. 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy 7 | of this software and associated documentation files (the "Software"), to deal 8 | in the Software without restriction, including without limitation the rights 9 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 | copies of the Software, and to permit persons to whom the Software is 11 | furnished to do so, subject to the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be included in all 14 | copies or substantial portions of the Software. 15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 22 | SOFTWARE. 
23 | 24 | */ 25 | 26 | .pl-c /* comment */ { 27 | color: #969896; 28 | } 29 | 30 | .pl-c1 /* constant, variable.other.constant, support, meta.property-name, support.constant, support.variable, meta.module-reference, markup.raw, meta.diff.header */, 31 | .pl-s .pl-v /* string variable */ { 32 | color: #0086b3; 33 | } 34 | 35 | .pl-e /* entity */, 36 | .pl-en /* entity.name */ { 37 | color: #795da3; 38 | } 39 | 40 | .pl-smi /* variable.parameter.function, storage.modifier.package, storage.modifier.import, storage.type.java, variable.other */, 41 | .pl-s .pl-s1 /* string source */ { 42 | color: #333; 43 | } 44 | 45 | .pl-ent /* entity.name.tag */ { 46 | color: #63a35c; 47 | } 48 | 49 | .pl-k /* keyword, storage, storage.type */ { 50 | color: #a71d5d; 51 | } 52 | 53 | .pl-s /* string */, 54 | .pl-pds /* punctuation.definition.string, string.regexp.character-class */, 55 | .pl-s .pl-pse .pl-s1 /* string punctuation.section.embedded source */, 56 | .pl-sr /* string.regexp */, 57 | .pl-sr .pl-cce /* string.regexp constant.character.escape */, 58 | .pl-sr .pl-sre /* string.regexp source.ruby.embedded */, 59 | .pl-sr .pl-sra /* string.regexp string.regexp.arbitrary-repitition */ { 60 | color: #183691; 61 | } 62 | 63 | .pl-v /* variable */ { 64 | color: #ed6a43; 65 | } 66 | 67 | .pl-id /* invalid.deprecated */ { 68 | color: #b52a1d; 69 | } 70 | 71 | .pl-ii /* invalid.illegal */ { 72 | color: #f8f8f8; 73 | background-color: #b52a1d; 74 | } 75 | 76 | .pl-sr .pl-cce /* string.regexp constant.character.escape */ { 77 | font-weight: bold; 78 | color: #63a35c; 79 | } 80 | 81 | .pl-ml /* markup.list */ { 82 | color: #693a17; 83 | } 84 | 85 | .pl-mh /* markup.heading */, 86 | .pl-mh .pl-en /* markup.heading entity.name */, 87 | .pl-ms /* meta.separator */ { 88 | font-weight: bold; 89 | color: #1d3e81; 90 | } 91 | 92 | .pl-mq /* markup.quote */ { 93 | color: #008080; 94 | } 95 | 96 | .pl-mi /* markup.italic */ { 97 | font-style: italic; 98 | color: #333; 99 | } 100 | 101 | .pl-mb /* 
markup.bold */ { 102 | font-weight: bold; 103 | color: #333; 104 | } 105 | 106 | .pl-md /* markup.deleted, meta.diff.header.from-file */ { 107 | color: #bd2c00; 108 | background-color: #ffecec; 109 | } 110 | 111 | .pl-mi1 /* markup.inserted, meta.diff.header.to-file */ { 112 | color: #55a532; 113 | background-color: #eaffea; 114 | } 115 | 116 | .pl-mdr /* meta.diff.range */ { 117 | font-weight: bold; 118 | color: #795da3; 119 | } 120 | 121 | .pl-mo /* meta.output */ { 122 | color: #1d3e81; 123 | } 124 | 125 | -------------------------------------------------------------------------------- /docs/stylesheets/normalize.css: -------------------------------------------------------------------------------- 1 | /*! normalize.css v3.0.2 | MIT License | git.io/normalize */ 2 | 3 | /** 4 | * 1. Set default font family to sans-serif. 5 | * 2. Prevent iOS text size adjust after orientation change, without disabling 6 | * user zoom. 7 | */ 8 | 9 | html { 10 | font-family: sans-serif; /* 1 */ 11 | -ms-text-size-adjust: 100%; /* 2 */ 12 | -webkit-text-size-adjust: 100%; /* 2 */ 13 | } 14 | 15 | /** 16 | * Remove default margin. 17 | */ 18 | 19 | body { 20 | margin: 0; 21 | } 22 | 23 | /* HTML5 display definitions 24 | ========================================================================== */ 25 | 26 | /** 27 | * Correct `block` display not defined for any HTML5 element in IE 8/9. 28 | * Correct `block` display not defined for `details` or `summary` in IE 10/11 29 | * and Firefox. 30 | * Correct `block` display not defined for `main` in IE 11. 31 | */ 32 | 33 | article, 34 | aside, 35 | details, 36 | figcaption, 37 | figure, 38 | footer, 39 | header, 40 | hgroup, 41 | main, 42 | menu, 43 | nav, 44 | section, 45 | summary { 46 | display: block; 47 | } 48 | 49 | /** 50 | * 1. Correct `inline-block` display not defined in IE 8/9. 51 | * 2. Normalize vertical alignment of `progress` in Chrome, Firefox, and Opera. 
52 | */ 53 | 54 | audio, 55 | canvas, 56 | progress, 57 | video { 58 | display: inline-block; /* 1 */ 59 | vertical-align: baseline; /* 2 */ 60 | } 61 | 62 | /** 63 | * Prevent modern browsers from displaying `audio` without controls. 64 | * Remove excess height in iOS 5 devices. 65 | */ 66 | 67 | audio:not([controls]) { 68 | display: none; 69 | height: 0; 70 | } 71 | 72 | /** 73 | * Address `[hidden]` styling not present in IE 8/9/10. 74 | * Hide the `template` element in IE 8/9/11, Safari, and Firefox < 22. 75 | */ 76 | 77 | [hidden], 78 | template { 79 | display: none; 80 | } 81 | 82 | /* Links 83 | ========================================================================== */ 84 | 85 | /** 86 | * Remove the gray background color from active links in IE 10. 87 | */ 88 | 89 | a { 90 | background-color: transparent; 91 | } 92 | 93 | /** 94 | * Improve readability when focused and also mouse hovered in all browsers. 95 | */ 96 | 97 | a:active, 98 | a:hover { 99 | outline: 0; 100 | } 101 | 102 | /* Text-level semantics 103 | ========================================================================== */ 104 | 105 | /** 106 | * Address styling not present in IE 8/9/10/11, Safari, and Chrome. 107 | */ 108 | 109 | abbr[title] { 110 | border-bottom: 1px dotted; 111 | } 112 | 113 | /** 114 | * Address style set to `bolder` in Firefox 4+, Safari, and Chrome. 115 | */ 116 | 117 | b, 118 | strong { 119 | font-weight: bold; 120 | } 121 | 122 | /** 123 | * Address styling not present in Safari and Chrome. 124 | */ 125 | 126 | dfn { 127 | font-style: italic; 128 | } 129 | 130 | /** 131 | * Address variable `h1` font-size and margin within `section` and `article` 132 | * contexts in Firefox 4+, Safari, and Chrome. 133 | */ 134 | 135 | h1 { 136 | font-size: 2em; 137 | margin: 0.67em 0; 138 | } 139 | 140 | /** 141 | * Address styling not present in IE 8/9. 
142 | */ 143 | 144 | mark { 145 | background: #ff0; 146 | color: #000; 147 | } 148 | 149 | /** 150 | * Address inconsistent and variable font size in all browsers. 151 | */ 152 | 153 | small { 154 | font-size: 80%; 155 | } 156 | 157 | /** 158 | * Prevent `sub` and `sup` affecting `line-height` in all browsers. 159 | */ 160 | 161 | sub, 162 | sup { 163 | font-size: 75%; 164 | line-height: 0; 165 | position: relative; 166 | vertical-align: baseline; 167 | } 168 | 169 | sup { 170 | top: -0.5em; 171 | } 172 | 173 | sub { 174 | bottom: -0.25em; 175 | } 176 | 177 | /* Embedded content 178 | ========================================================================== */ 179 | 180 | /** 181 | * Remove border when inside `a` element in IE 8/9/10. 182 | */ 183 | 184 | img { 185 | border: 0; 186 | } 187 | 188 | /** 189 | * Correct overflow not hidden in IE 9/10/11. 190 | */ 191 | 192 | svg:not(:root) { 193 | overflow: hidden; 194 | } 195 | 196 | /* Grouping content 197 | ========================================================================== */ 198 | 199 | /** 200 | * Address margin not present in IE 8/9 and Safari. 201 | */ 202 | 203 | figure { 204 | margin: 1em 40px; 205 | } 206 | 207 | /** 208 | * Address differences between Firefox and other browsers. 209 | */ 210 | 211 | hr { 212 | box-sizing: content-box; 213 | height: 0; 214 | } 215 | 216 | /** 217 | * Contain overflow in all browsers. 218 | */ 219 | 220 | pre { 221 | overflow: auto; 222 | } 223 | 224 | /** 225 | * Address odd `em`-unit font size rendering in all browsers. 226 | */ 227 | 228 | code, 229 | kbd, 230 | pre, 231 | samp { 232 | font-family: monospace, monospace; 233 | font-size: 1em; 234 | } 235 | 236 | /* Forms 237 | ========================================================================== */ 238 | 239 | /** 240 | * Known limitation: by default, Chrome and Safari on OS X allow very limited 241 | * styling of `select`, unless a `border` property is set. 242 | */ 243 | 244 | /** 245 | * 1. 
Correct color not being inherited. 246 | * Known issue: affects color of disabled elements. 247 | * 2. Correct font properties not being inherited. 248 | * 3. Address margins set differently in Firefox 4+, Safari, and Chrome. 249 | */ 250 | 251 | button, 252 | input, 253 | optgroup, 254 | select, 255 | textarea { 256 | color: inherit; /* 1 */ 257 | font: inherit; /* 2 */ 258 | margin: 0; /* 3 */ 259 | } 260 | 261 | /** 262 | * Address `overflow` set to `hidden` in IE 8/9/10/11. 263 | */ 264 | 265 | button { 266 | overflow: visible; 267 | } 268 | 269 | /** 270 | * Address inconsistent `text-transform` inheritance for `button` and `select`. 271 | * All other form control elements do not inherit `text-transform` values. 272 | * Correct `button` style inheritance in Firefox, IE 8/9/10/11, and Opera. 273 | * Correct `select` style inheritance in Firefox. 274 | */ 275 | 276 | button, 277 | select { 278 | text-transform: none; 279 | } 280 | 281 | /** 282 | * 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` 283 | * and `video` controls. 284 | * 2. Correct inability to style clickable `input` types in iOS. 285 | * 3. Improve usability and consistency of cursor style between image-type 286 | * `input` and others. 287 | */ 288 | 289 | button, 290 | html input[type="button"], /* 1 */ 291 | input[type="reset"], 292 | input[type="submit"] { 293 | -webkit-appearance: button; /* 2 */ 294 | cursor: pointer; /* 3 */ 295 | } 296 | 297 | /** 298 | * Re-set default cursor for disabled elements. 299 | */ 300 | 301 | button[disabled], 302 | html input[disabled] { 303 | cursor: default; 304 | } 305 | 306 | /** 307 | * Remove inner padding and border in Firefox 4+. 308 | */ 309 | 310 | button::-moz-focus-inner, 311 | input::-moz-focus-inner { 312 | border: 0; 313 | padding: 0; 314 | } 315 | 316 | /** 317 | * Address Firefox 4+ setting `line-height` on `input` using `!important` in 318 | * the UA stylesheet. 
319 | */ 320 | 321 | input { 322 | line-height: normal; 323 | } 324 | 325 | /** 326 | * It's recommended that you don't attempt to style these elements. 327 | * Firefox's implementation doesn't respect box-sizing, padding, or width. 328 | * 329 | * 1. Address box sizing set to `content-box` in IE 8/9/10. 330 | * 2. Remove excess padding in IE 8/9/10. 331 | */ 332 | 333 | input[type="checkbox"], 334 | input[type="radio"] { 335 | box-sizing: border-box; /* 1 */ 336 | padding: 0; /* 2 */ 337 | } 338 | 339 | /** 340 | * Fix the cursor style for Chrome's increment/decrement buttons. For certain 341 | * `font-size` values of the `input`, it causes the cursor style of the 342 | * decrement button to change from `default` to `text`. 343 | */ 344 | 345 | input[type="number"]::-webkit-inner-spin-button, 346 | input[type="number"]::-webkit-outer-spin-button { 347 | height: auto; 348 | } 349 | 350 | /** 351 | * 1. Address `appearance` set to `searchfield` in Safari and Chrome. 352 | * 2. Address `box-sizing` set to `border-box` in Safari and Chrome 353 | * (include `-moz` to future-proof). 354 | */ 355 | 356 | input[type="search"] { 357 | -webkit-appearance: textfield; /* 1 */ /* 2 */ 358 | box-sizing: content-box; 359 | } 360 | 361 | /** 362 | * Remove inner padding and search cancel button in Safari and Chrome on OS X. 363 | * Safari (but not Chrome) clips the cancel button when the search input has 364 | * padding (and `textfield` appearance). 365 | */ 366 | 367 | input[type="search"]::-webkit-search-cancel-button, 368 | input[type="search"]::-webkit-search-decoration { 369 | -webkit-appearance: none; 370 | } 371 | 372 | /** 373 | * Define consistent border, margin, and padding. 374 | */ 375 | 376 | fieldset { 377 | border: 1px solid #c0c0c0; 378 | margin: 0 2px; 379 | padding: 0.35em 0.625em 0.75em; 380 | } 381 | 382 | /** 383 | * 1. Correct `color` not being inherited in IE 8/9/10/11. 384 | * 2. Remove padding so people aren't caught out if they zero out fieldsets. 
385 | */ 386 | 387 | legend { 388 | border: 0; /* 1 */ 389 | padding: 0; /* 2 */ 390 | } 391 | 392 | /** 393 | * Remove default vertical scrollbar in IE 8/9/10/11. 394 | */ 395 | 396 | textarea { 397 | overflow: auto; 398 | } 399 | 400 | /** 401 | * Don't inherit the `font-weight` (applied by a rule above). 402 | * NOTE: the default cannot safely be changed in Chrome and Safari on OS X. 403 | */ 404 | 405 | optgroup { 406 | font-weight: bold; 407 | } 408 | 409 | /* Tables 410 | ========================================================================== */ 411 | 412 | /** 413 | * Remove most spacing between table cells. 414 | */ 415 | 416 | table { 417 | border-collapse: collapse; 418 | border-spacing: 0; 419 | } 420 | 421 | td, 422 | th { 423 | padding: 0; 424 | } 425 | -------------------------------------------------------------------------------- /docs/stylesheets/styles.css: -------------------------------------------------------------------------------- 1 | @import url(https://fonts.googleapis.com/css?family=Lato:300italic,700italic,300,700); 2 | html { 3 | background: #6C7989; 4 | background: #6c7989 -webkit-gradient(linear, 50% 0%, 50% 100%, color-stop(0%, #6c7989), color-stop(100%, #434b55)) fixed; 5 | background: #6c7989 -webkit-linear-gradient(#6c7989, #434b55) fixed; 6 | background: #6c7989 -moz-linear-gradient(#6c7989, #434b55) fixed; 7 | background: #6c7989 -o-linear-gradient(#6c7989, #434b55) fixed; 8 | background: #6c7989 -ms-linear-gradient(#6c7989, #434b55) fixed; 9 | background: #6c7989 linear-gradient(#6c7989, #434b55) fixed; 10 | } 11 | 12 | body { 13 | padding: 50px 0; 14 | margin: 0; 15 | font: 14px/1.5 Lato, "Helvetica Neue", Helvetica, Arial, sans-serif; 16 | color: #555; 17 | font-weight: 300; 18 | background: 
url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAeCAYAAABNChwpAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAABx0RVh0U29mdHdhcmUAQWRvYmUgRmlyZXdvcmtzIENTNXG14zYAAAAUdEVYdENyZWF0aW9uIFRpbWUAMy82LzEygrTcTAAAAFRJREFUSIljfPDggZRf5RIGGNjUHsNATz6jXmSL1Kb2GLiAX+USBnrymRgGGDCORgFmoNAXjEbBaBSMRsFoFIxGwWgUjEbBaBSMRsFoFIxGwWgUAABYNujumib3wAAAAABJRU5ErkJggg==') fixed; 19 | } 20 | 21 | .wrapper { 22 | width: 640px; 23 | margin: 0 auto; 24 | background: #DEDEDE; 25 | -webkit-border-radius: 8px; 26 | -moz-border-radius: 8px; 27 | -ms-border-radius: 8px; 28 | -o-border-radius: 8px; 29 | border-radius: 8px; 30 | -webkit-box-shadow: rgba(0, 0, 0, 0.2) 0 0 0 1px, rgba(0, 0, 0, 0.45) 0 3px 10px; 31 | -moz-box-shadow: rgba(0, 0, 0, 0.2) 0 0 0 1px, rgba(0, 0, 0, 0.45) 0 3px 10px; 32 | box-shadow: rgba(0, 0, 0, 0.2) 0 0 0 1px, rgba(0, 0, 0, 0.45) 0 3px 10px; 33 | } 34 | 35 | header, section, footer { 36 | display: block; 37 | } 38 | 39 | a { 40 | color: #069; 41 | text-decoration: none; 42 | } 43 | 44 | p { 45 | margin: 0 0 20px; 46 | padding: 0; 47 | } 48 | 49 | strong { 50 | color: #222; 51 | font-weight: 700; 52 | } 53 | 54 | header { 55 | -webkit-border-radius: 8px 8px 0 0; 56 | -moz-border-radius: 8px 8px 0 0; 57 | -ms-border-radius: 8px 8px 0 0; 58 | -o-border-radius: 8px 8px 0 0; 59 | border-radius: 8px 8px 0 0; 60 | background: #C6EAFA; 61 | background: -webkit-gradient(linear, 50% 0%, 50% 100%, color-stop(0%, #ddfbfc), color-stop(100%, #c6eafa)); 62 | background: -webkit-linear-gradient(#ddfbfc, #c6eafa); 63 | background: -moz-linear-gradient(#ddfbfc, #c6eafa); 64 | background: -o-linear-gradient(#ddfbfc, #c6eafa); 65 | background: -ms-linear-gradient(#ddfbfc, #c6eafa); 66 | background: linear-gradient(#ddfbfc, #c6eafa); 67 | position: relative; 68 | padding: 15px 20px; 69 | border-bottom: 1px solid #B2D2E1; 70 | } 71 | header h1 { 72 | margin: 0; 73 | padding: 0; 74 | font-size: 24px; 75 | line-height: 1.2; 76 | color: #069; 77 | text-shadow: rgba(255, 255, 255, 
0.9) 0 1px 0; 78 | } 79 | header.without-description h1 { 80 | margin: 10px 0; 81 | } 82 | header p { 83 | margin: 0; 84 | color: #61778B; 85 | width: 300px; 86 | font-size: 13px; 87 | } 88 | header p.view { 89 | display: none; 90 | font-weight: 700; 91 | text-shadow: rgba(255, 255, 255, 0.9) 0 1px 0; 92 | -webkit-font-smoothing: antialiased; 93 | } 94 | header p.view a { 95 | color: #06c; 96 | } 97 | header p.view small { 98 | font-weight: 400; 99 | } 100 | header ul { 101 | margin: 0; 102 | padding: 0; 103 | list-style: none; 104 | position: absolute; 105 | z-index: 1; 106 | right: 20px; 107 | top: 20px; 108 | height: 38px; 109 | padding: 1px 0; 110 | background: #5198DF; 111 | background: -webkit-gradient(linear, 50% 0%, 50% 100%, color-stop(0%, #77b9fb), color-stop(100%, #3782cd)); 112 | background: -webkit-linear-gradient(#77b9fb, #3782cd); 113 | background: -moz-linear-gradient(#77b9fb, #3782cd); 114 | background: -o-linear-gradient(#77b9fb, #3782cd); 115 | background: -ms-linear-gradient(#77b9fb, #3782cd); 116 | background: linear-gradient(#77b9fb, #3782cd); 117 | border-radius: 5px; 118 | -webkit-box-shadow: inset rgba(255, 255, 255, 0.45) 0 1px 0, inset rgba(0, 0, 0, 0.2) 0 -1px 0; 119 | -moz-box-shadow: inset rgba(255, 255, 255, 0.45) 0 1px 0, inset rgba(0, 0, 0, 0.2) 0 -1px 0; 120 | box-shadow: inset rgba(255, 255, 255, 0.45) 0 1px 0, inset rgba(0, 0, 0, 0.2) 0 -1px 0; 121 | width: auto; 122 | } 123 | header ul:before { 124 | content: ''; 125 | position: absolute; 126 | z-index: -1; 127 | left: -5px; 128 | top: -4px; 129 | right: -5px; 130 | bottom: -6px; 131 | background: rgba(0, 0, 0, 0.1); 132 | -webkit-border-radius: 8px; 133 | -moz-border-radius: 8px; 134 | -ms-border-radius: 8px; 135 | -o-border-radius: 8px; 136 | border-radius: 8px; 137 | -webkit-box-shadow: rgba(0, 0, 0, 0.2) 0 -1px 0, inset rgba(255, 255, 255, 0.7) 0 -1px 0; 138 | -moz-box-shadow: rgba(0, 0, 0, 0.2) 0 -1px 0, inset rgba(255, 255, 255, 0.7) 0 -1px 0; 139 | box-shadow: rgba(0, 0, 
0, 0.2) 0 -1px 0, inset rgba(255, 255, 255, 0.7) 0 -1px 0; 140 | } 141 | header ul li { 142 | width: 79px; 143 | float: left; 144 | border-right: 1px solid #3A7CBE; 145 | height: 38px; 146 | } 147 | header ul li.single { 148 | border: none; 149 | } 150 | header ul li + li { 151 | width: 78px; 152 | border-left: 1px solid #8BBEF3; 153 | } 154 | header ul li + li + li { 155 | border-right: none; 156 | width: 79px; 157 | } 158 | header ul a { 159 | line-height: 1; 160 | font-size: 11px; 161 | color: #fff; 162 | color: rgba(255, 255, 255, 0.8); 163 | display: block; 164 | text-align: center; 165 | font-weight: 400; 166 | padding-top: 6px; 167 | height: 40px; 168 | text-shadow: rgba(0, 0, 0, 0.4) 0 -1px 0; 169 | } 170 | header ul a strong { 171 | font-size: 14px; 172 | display: block; 173 | color: #fff; 174 | -webkit-font-smoothing: antialiased; 175 | } 176 | 177 | section { 178 | padding: 15px 20px; 179 | font-size: 15px; 180 | border-top: 1px solid #fff; 181 | background: -webkit-gradient(linear, 50% 0%, 50% 700, color-stop(0%, #fafafa), color-stop(100%, #dedede)); 182 | background: -webkit-linear-gradient(#fafafa, #dedede 700px); 183 | background: -moz-linear-gradient(#fafafa, #dedede 700px); 184 | background: -o-linear-gradient(#fafafa, #dedede 700px); 185 | background: -ms-linear-gradient(#fafafa, #dedede 700px); 186 | background: linear-gradient(#fafafa, #dedede 700px); 187 | -webkit-border-radius: 0 0 8px 8px; 188 | -moz-border-radius: 0 0 8px 8px; 189 | -ms-border-radius: 0 0 8px 8px; 190 | -o-border-radius: 0 0 8px 8px; 191 | border-radius: 0 0 8px 8px; 192 | position: relative; 193 | } 194 | 195 | h1, h2, h3, h4, h5, h6 { 196 | color: #222; 197 | padding: 0; 198 | margin: 0 0 20px; 199 | line-height: 1.2; 200 | } 201 | 202 | p, ul, ol, table, pre, dl { 203 | margin: 0 0 20px; 204 | } 205 | 206 | h1, h2, h3 { 207 | line-height: 1.1; 208 | } 209 | 210 | h1 { 211 | font-size: 28px; 212 | } 213 | 214 | h2 { 215 | color: #393939; 216 | } 217 | 218 | h3, h4, h5, h6 
{ 219 | color: #494949; 220 | } 221 | 222 | blockquote { 223 | margin: 0 -20px 20px; 224 | padding: 15px 20px 1px 40px; 225 | font-style: italic; 226 | background: #ccc; 227 | background: rgba(0, 0, 0, 0.06); 228 | color: #222; 229 | } 230 | 231 | img { 232 | max-width: 100%; 233 | } 234 | 235 | code, pre { 236 | font-family: Monaco, Bitstream Vera Sans Mono, Lucida Console, Terminal; 237 | color: #333; 238 | font-size: 12px; 239 | overflow-x: auto; 240 | } 241 | 242 | pre { 243 | padding: 20px; 244 | background: #3A3C42; 245 | color: #f8f8f2; 246 | margin: 0 -20px 20px; 247 | } 248 | pre code { 249 | color: #f8f8f2; 250 | } 251 | li pre { 252 | margin-left: -60px; 253 | padding-left: 60px; 254 | } 255 | 256 | table { 257 | width: 100%; 258 | border-collapse: collapse; 259 | } 260 | 261 | th, td { 262 | text-align: left; 263 | padding: 5px 10px; 264 | border-bottom: 1px solid #aaa; 265 | } 266 | 267 | dt { 268 | color: #222; 269 | font-weight: 700; 270 | } 271 | 272 | th { 273 | color: #222; 274 | } 275 | 276 | small { 277 | font-size: 11px; 278 | } 279 | 280 | hr { 281 | border: 0; 282 | background: #aaa; 283 | height: 1px; 284 | margin: 0 0 20px; 285 | } 286 | 287 | footer { 288 | width: 640px; 289 | margin: 0 auto; 290 | padding: 20px 0 0; 291 | color: #ccc; 292 | overflow: hidden; 293 | } 294 | footer a { 295 | color: #fff; 296 | font-weight: bold; 297 | } 298 | footer p { 299 | float: left; 300 | } 301 | footer p + p { 302 | float: right; 303 | } 304 | 305 | @media print, screen and (max-width: 740px) { 306 | body { 307 | padding: 0; 308 | } 309 | 310 | .wrapper { 311 | -webkit-border-radius: 0; 312 | -moz-border-radius: 0; 313 | -ms-border-radius: 0; 314 | -o-border-radius: 0; 315 | border-radius: 0; 316 | -webkit-box-shadow: none; 317 | -moz-box-shadow: none; 318 | box-shadow: none; 319 | width: 100%; 320 | } 321 | 322 | footer { 323 | -webkit-border-radius: 0; 324 | -moz-border-radius: 0; 325 | -ms-border-radius: 0; 326 | -o-border-radius: 0; 327 | 
border-radius: 0; 328 | padding: 20px; 329 | width: auto; 330 | } 331 | footer p { 332 | float: none; 333 | margin: 0; 334 | } 335 | footer p + p { 336 | float: none; 337 | } 338 | } 339 | @media print, screen and (max-width:580px) { 340 | header ul { 341 | display: none; 342 | } 343 | 344 | header p.view { 345 | display: block; 346 | } 347 | 348 | header p { 349 | width: 100%; 350 | } 351 | } 352 | @media print { 353 | header p.view a small:before { 354 | content: 'at https://github.com/'; 355 | } 356 | } 357 | -------------------------------------------------------------------------------- /docs/stylesheets/stylesheet.css: -------------------------------------------------------------------------------- 1 | * { 2 | box-sizing: border-box; } 3 | 4 | body { 5 | padding: 0; 6 | margin: 0; 7 | font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif; 8 | font-size: 16px; 9 | line-height: 1.5; 10 | color: #606c71; } 11 | 12 | a { 13 | color: #1e6bb8; 14 | text-decoration: none; } 15 | a:hover { 16 | text-decoration: underline; } 17 | 18 | .btn { 19 | display: inline-block; 20 | margin-bottom: 1rem; 21 | color: rgba(255, 255, 255, 0.7); 22 | background-color: rgba(255, 255, 255, 0.08); 23 | border-color: rgba(255, 255, 255, 0.2); 24 | border-style: solid; 25 | border-width: 1px; 26 | border-radius: 0.3rem; 27 | transition: color 0.2s, background-color 0.2s, border-color 0.2s; } 28 | .btn + .btn { 29 | margin-left: 1rem; } 30 | 31 | .btn:hover { 32 | color: rgba(255, 255, 255, 0.8); 33 | text-decoration: none; 34 | background-color: rgba(255, 255, 255, 0.2); 35 | border-color: rgba(255, 255, 255, 0.3); } 36 | 37 | @media screen and (min-width: 64em) { 38 | .btn { 39 | padding: 0.75rem 1rem; } } 40 | 41 | @media screen and (min-width: 42em) and (max-width: 64em) { 42 | .btn { 43 | padding: 0.6rem 0.9rem; 44 | font-size: 0.9rem; } } 45 | 46 | @media screen and (max-width: 42em) { 47 | .btn { 48 | display: block; 49 | width: 100%; 50 | padding: 0.75rem; 51 | 
font-size: 0.9rem; } 52 | .btn + .btn { 53 | margin-top: 1rem; 54 | margin-left: 0; } } 55 | 56 | .page-header { 57 | color: #fff; 58 | text-align: center; 59 | background-color: #159957; 60 | background-image: linear-gradient(120deg, #155799, #159957); } 61 | 62 | @media screen and (min-width: 64em) { 63 | .page-header { 64 | padding: 5rem 6rem; } } 65 | 66 | @media screen and (min-width: 42em) and (max-width: 64em) { 67 | .page-header { 68 | padding: 3rem 4rem; } } 69 | 70 | @media screen and (max-width: 42em) { 71 | .page-header { 72 | padding: 2rem 1rem; } } 73 | 74 | .project-name { 75 | margin-top: 0; 76 | margin-bottom: 0.1rem; } 77 | 78 | @media screen and (min-width: 64em) { 79 | .project-name { 80 | font-size: 3.25rem; } } 81 | 82 | @media screen and (min-width: 42em) and (max-width: 64em) { 83 | .project-name { 84 | font-size: 2.25rem; } } 85 | 86 | @media screen and (max-width: 42em) { 87 | .project-name { 88 | font-size: 1.75rem; } } 89 | 90 | .project-tagline { 91 | margin-bottom: 2rem; 92 | font-weight: normal; 93 | opacity: 0.7; } 94 | 95 | @media screen and (min-width: 64em) { 96 | .project-tagline { 97 | font-size: 1.25rem; } } 98 | 99 | @media screen and (min-width: 42em) and (max-width: 64em) { 100 | .project-tagline { 101 | font-size: 1.15rem; } } 102 | 103 | @media screen and (max-width: 42em) { 104 | .project-tagline { 105 | font-size: 1rem; } } 106 | 107 | .main-content :first-child { 108 | margin-top: 0; } 109 | .main-content img { 110 | max-width: 100%; } 111 | .main-content h1, .main-content h2, .main-content h3, .main-content h4, .main-content h5, .main-content h6 { 112 | margin-top: 2rem; 113 | margin-bottom: 1rem; 114 | font-weight: normal; 115 | color: #159957; } 116 | .main-content p { 117 | margin-bottom: 1em; } 118 | .main-content code { 119 | padding: 2px 4px; 120 | font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace; 121 | font-size: 0.9rem; 122 | color: #383e41; 123 | background-color: #f3f6fa; 124 | 
border-radius: 0.3rem; } 125 | .main-content pre { 126 | padding: 0.8rem; 127 | margin-top: 0; 128 | margin-bottom: 1rem; 129 | font: 1rem Consolas, "Liberation Mono", Menlo, Courier, monospace; 130 | color: #567482; 131 | word-wrap: normal; 132 | background-color: #f3f6fa; 133 | border: solid 1px #dce6f0; 134 | border-radius: 0.3rem; } 135 | .main-content pre > code { 136 | padding: 0; 137 | margin: 0; 138 | font-size: 0.9rem; 139 | color: #567482; 140 | word-break: normal; 141 | white-space: pre; 142 | background: transparent; 143 | border: 0; } 144 | .main-content .highlight { 145 | margin-bottom: 1rem; } 146 | .main-content .highlight pre { 147 | margin-bottom: 0; 148 | word-break: normal; } 149 | .main-content .highlight pre, .main-content pre { 150 | padding: 0.8rem; 151 | overflow: auto; 152 | font-size: 0.9rem; 153 | line-height: 1.45; 154 | border-radius: 0.3rem; } 155 | .main-content pre code, .main-content pre tt { 156 | display: inline; 157 | max-width: initial; 158 | padding: 0; 159 | margin: 0; 160 | overflow: initial; 161 | line-height: inherit; 162 | word-wrap: normal; 163 | background-color: transparent; 164 | border: 0; } 165 | .main-content pre code:before, .main-content pre code:after, .main-content pre tt:before, .main-content pre tt:after { 166 | content: normal; } 167 | .main-content ul, .main-content ol { 168 | margin-top: 0; } 169 | .main-content blockquote { 170 | padding: 0 1rem; 171 | margin-left: 0; 172 | color: #819198; 173 | border-left: 0.3rem solid #dce6f0; } 174 | .main-content blockquote > :first-child { 175 | margin-top: 0; } 176 | .main-content blockquote > :last-child { 177 | margin-bottom: 0; } 178 | .main-content table { 179 | display: block; 180 | width: 100%; 181 | overflow: auto; 182 | word-break: normal; 183 | word-break: keep-all; } 184 | .main-content table th { 185 | font-weight: bold; } 186 | .main-content table th, .main-content table td { 187 | padding: 0.5rem 1rem; 188 | border: 1px solid #e9ebec; } 189 | 
.main-content dl { 190 | padding: 0; } 191 | .main-content dl dt { 192 | padding: 0; 193 | margin-top: 1rem; 194 | font-size: 1rem; 195 | font-weight: bold; } 196 | .main-content dl dd { 197 | padding: 0; 198 | margin-bottom: 1rem; } 199 | .main-content hr { 200 | height: 2px; 201 | padding: 0; 202 | margin: 1rem 0; 203 | background-color: #eff0f1; 204 | border: 0; } 205 | 206 | @media screen and (min-width: 64em) { 207 | .main-content { 208 | max-width: 64rem; 209 | padding: 2rem 6rem; 210 | margin: 0 auto; 211 | font-size: 1.1rem; } } 212 | 213 | @media screen and (min-width: 42em) and (max-width: 64em) { 214 | .main-content { 215 | padding: 2rem 4rem; 216 | font-size: 1.1rem; } } 217 | 218 | @media screen and (max-width: 42em) { 219 | .main-content { 220 | padding: 2rem 1rem; 221 | font-size: 1rem; } } 222 | 223 | .site-footer { 224 | padding-top: 2rem; 225 | margin-top: 2rem; 226 | border-top: solid 1px #eff0f1; } 227 | 228 | .site-footer-owner { 229 | display: block; 230 | font-weight: bold; } 231 | 232 | .site-footer-credits { 233 | color: #819198; } 234 | 235 | @media screen and (min-width: 64em) { 236 | .site-footer { 237 | font-size: 1rem; } } 238 | 239 | @media screen and (min-width: 42em) and (max-width: 64em) { 240 | .site-footer { 241 | font-size: 1rem; } } 242 | 243 | @media screen and (max-width: 42em) { 244 | .site-footer { 245 | font-size: 0.9rem; } } 246 | -------------------------------------------------------------------------------- /main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | jcw "github.com/advantageous/systemd-cloud-watch/cloud-watch" 6 | "os" 7 | lg "github.com/advantageous/go-logback/logging" 8 | ) 9 | 10 | var help = flag.Bool("help", false, "set to true to show this help") 11 | 12 | func main() { 13 | 14 | logger := lg.NewSimpleLogger("main") 15 | 16 | flag.Parse() 17 | 18 | if *help { 19 | usage(logger) 20 | os.Exit(0) 21 | } 22 | 23 | 
configFilename := flag.Arg(0) 24 | if configFilename == "" { 25 | usage(logger) 26 | println("config file name must be set!") 27 | os.Exit(2) 28 | } 29 | 30 | config := jcw.CreateConfig(configFilename, logger) 31 | journal := jcw.CreateJournal(config, logger) 32 | repeater := jcw.CreateRepeater(config, logger) 33 | 34 | jcw.NewRunner(journal, repeater, logger, config) 35 | 36 | } 37 | 38 | func usage(logger lg.Logger) { 39 | logger.Error("Usage: systemd-cloud-watch <config-file>") 40 | flag.PrintDefaults() 41 | } 42 | -------------------------------------------------------------------------------- /main/test.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | j "github.com/advantageous/systemd-cloud-watch/cloud-watch" 5 | 6 | "time" 7 | ) 8 | 9 | var readTestMap = map[string]string{ 10 | "__CURSOR": "s=6c072e0567ff423fa9cb39f136066299;i=3;b=923def0648b1422aa28a8846072481f2;m=65ee792c;t=542783a1cc4e0;x=7d96bf9e60a6512b", 11 | "__REALTIME_TIMESTAMP": "1480459022025952", 12 | "__MONOTONIC_TIMESTAMP": "1710127404", 13 | "_BOOT_ID": "923def0648b1422aa28a8846072481f2", 14 | "PRIORITY": "6", 15 | "_TRANSPORT": "driver", 16 | "_PID": "712", 17 | "_UID": "0", 18 | "_GID": "0", 19 | "_COMM": "systemd-journal", 20 | "_EXE": "/usr/lib/systemd/systemd-journald", 21 | "_CMDLINE": "/usr/lib/systemd/systemd-journald", 22 | "_CAP_EFFECTIVE": "a80425fb", 23 | "_SYSTEMD_CGROUP": "c", 24 | "_MACHINE_ID": "5125015c46bb4bf6a686b5e692492075", 25 | "_HOSTNAME": "f5076731cfdb", 26 | "MESSAGE": "Journal started", 27 | "MESSAGE_ID": "f77379a8490b408bbe5f6940505a777b", 28 | } 29 | 30 | const readTestConfigData = ` 31 | log_group="dcos-logstream-test" 32 | state_file="/var/lib/journald-cloudwatch-logs/state-test" 33 | log_priority=3 34 | debug=true 35 | ` 36 | 37 | func main() { 38 | 39 | logger := j.NewSimpleLogger("read-config-test", nil) 40 | var journal j.MockJournal 41 | journal = j.NewJournalWithMap(readTestMap).(j.MockJournal) 42 
| 43 | config, _ := j.LoadConfigFromString(readTestConfigData, logger) 44 | records := make(chan j.Record) 45 | 46 | journal.SetCount(1) 47 | 48 | go j.ReadOneRecord(journal, records, logger, config, "foo-bar") 49 | 50 | var record j.Record 51 | var more bool 52 | 53 | timer := time.NewTimer(time.Millisecond * 1000) 54 | 55 | select { 56 | case record, more = <-records: 57 | 58 | if !more { 59 | 60 | panic("NO MORE") 61 | } 62 | 63 | if record == (j.Record{}) { 64 | panic("RECORD") 65 | } 66 | 67 | case <-timer.C: 68 | logger.Info.Println("TIMEOUT") 69 | 70 | } 71 | 72 | } 73 | -------------------------------------------------------------------------------- /samples/output.json: -------------------------------------------------------------------------------- 1 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=1;b=923def0648b1422aa28a8846072481f2;m=4fd1eb09;t=54278240036bd;x=c82f34de75241376", "__REALTIME_TIMESTAMP" : "1480458651055805", "__MONOTONIC_TIMESTAMP" : "1339157257", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "665", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 2 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=2;b=923def0648b1422aa28a8846072481f2;m=4fd1eba4;t=5427824003758;x=c82f34de75241376", "__REALTIME_TIMESTAMP" : "1480458651055960", "__MONOTONIC_TIMESTAMP" : "1339157412", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 8.0M (max allowed 4.0G, trying to 
leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "665", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 3 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=3;b=923def0648b1422aa28a8846072481f2;m=4fd1ebfd;t=54278240037b1;x=8cdf0292acb0f69", "__REALTIME_TIMESTAMP" : "1480458651056049", "__MONOTONIC_TIMESTAMP" : "1339157501", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "665", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", "MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b" } 4 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=4;b=923def0648b1422aa28a8846072481f2;m=4ffbc069;t=54278242a0c1d;x=21eb3a2cec19bc8b", "__REALTIME_TIMESTAMP" : "1480458653797405", "__MONOTONIC_TIMESTAMP" : "1341898857", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "665", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal stopped", "MESSAGE_ID" : "d93fb3c9c24d451a97cea615ce59c00b" } 5 | { "__CURSOR" : 
"s=a205e69472cb47cb962e76ce8736aa77;i=5;b=923def0648b1422aa28a8846072481f2;m=502e27eb;t=54278245c73a0;x=e3775f298e96855d", "__REALTIME_TIMESTAMP" : "1480458657100704", "__MONOTONIC_TIMESTAMP" : "1345202155", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_PID" : "666" } 6 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=6;b=923def0648b1422aa28a8846072481f2;m=502e28fb;t=54278245c74af;x=e3775f298e96855d", "__REALTIME_TIMESTAMP" : "1480458657100975", "__MONOTONIC_TIMESTAMP" : "1345202427", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 8.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_PID" : "666" } 7 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=7;b=923def0648b1422aa28a8846072481f2;m=502e297a;t=54278245c752e;x=23959bded1799942", "__REALTIME_TIMESTAMP" : "1480458657101102", "__MONOTONIC_TIMESTAMP" : "1345202554", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_UID" : "0", "_GID" 
: "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", "MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b", "_PID" : "666" } 8 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=8;b=923def0648b1422aa28a8846072481f2;m=5132d4f4;t=54278256120a8;x=2c1819b1e5cb5a05", "__REALTIME_TIMESTAMP" : "1480458674184360", "__MONOTONIC_TIMESTAMP" : "1362285812", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_PID" : "669", "_COMM" : "echo", "_EXE" : "/usr/bin/echo", "_CMDLINE" : "echo RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch" } 9 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=9;b=923def0648b1422aa28a8846072481f2;m=5494547c;t=5427828c2a030;x=904fe0693c848d13", "__REALTIME_TIMESTAMP" : "1480458730905648", "__MONOTONIC_TIMESTAMP" : "1419007100", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "a80425fb", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_COMM" : "echo", "_PID" : "671", "_SYSTEMD_CGROUP" : "/" } 10 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=a;b=923def0648b1422aa28a8846072481f2;m=54a0a832;t=5427828cef3e6;x=66d926a054bb6077", "__REALTIME_TIMESTAMP" : "1480458731713510", "__MONOTONIC_TIMESTAMP" : 
"1419814962", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "a80425fb", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_COMM" : "echo", "_SYSTEMD_CGROUP" : "/", "_PID" : "673" } 11 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=b;b=923def0648b1422aa28a8846072481f2;m=54ab05f1;t=5427828d951a5;x=c1269e7f94ef798c", "__REALTIME_TIMESTAMP" : "1480458732392869", "__MONOTONIC_TIMESTAMP" : "1420494321", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "a80425fb", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_COMM" : "echo", "_SYSTEMD_CGROUP" : "/", "_PID" : "675" } 12 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=c;b=923def0648b1422aa28a8846072481f2;m=54b66306;t=5427828e4aeba;x=4d8fa6c72f65446a", "__REALTIME_TIMESTAMP" : "1480458733137594", "__MONOTONIC_TIMESTAMP" : "1421239046", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "a80425fb", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_COMM" : "echo", "_SYSTEMD_CGROUP" : "/", "_PID" : "677" } 13 | { "__CURSOR" : "s=a205e69472cb47cb962e76ce8736aa77;i=d;b=923def0648b1422aa28a8846072481f2;m=54c0e0fd;t=5427828ef2cb1;x=346fe9bde265a70a", "__REALTIME_TIMESTAMP" : "1480458733825201", "__MONOTONIC_TIMESTAMP" : "1421926653", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : 
"0", "_CAP_EFFECTIVE" : "a80425fb", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_COMM" : "echo", "_SYSTEMD_CGROUP" : "/", "_PID" : "679" } 14 | { "__CURSOR" : "s=f3ce919275654384aa62c74a9a5465f8;i=1;b=923def0648b1422aa28a8846072481f2;m=5cc5bc68;t=5427830f4081c;x=2c0487daba7e943a", "__REALTIME_TIMESTAMP" : "1480458868361244", "__MONOTONIC_TIMESTAMP" : "1556462696", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 16.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "691", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 15 | { "__CURSOR" : "s=f3ce919275654384aa62c74a9a5465f8;i=2;b=923def0648b1422aa28a8846072481f2;m=5cc5bd48;t=5427830f408fc;x=2c0487daba7e943a", "__REALTIME_TIMESTAMP" : "1480458868361468", "__MONOTONIC_TIMESTAMP" : "1556462920", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 16.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "691", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 16 
| { "__CURSOR" : "s=f3ce919275654384aa62c74a9a5465f8;i=3;b=923def0648b1422aa28a8846072481f2;m=5cc5bdcd;t=5427830f40982;x=5fb10888f11ceb4a", "__REALTIME_TIMESTAMP" : "1480458868361602", "__MONOTONIC_TIMESTAMP" : "1556463053", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "691", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", "MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b" } 17 | { "__CURSOR" : "s=f3ce919275654384aa62c74a9a5465f8;i=4;b=923def0648b1422aa28a8846072481f2;m=5cd9b96f;t=5427831080523;x=55cceb25db121d6", "__REALTIME_TIMESTAMP" : "1480458869671203", "__MONOTONIC_TIMESTAMP" : "1557772655", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "_TRANSPORT" : "stdout", "MESSAGE" : "RUNNING JAVA BATCH JOB - ADF BATCH from /gopath/src/github.com/advantageous/systemd-cloud-watch", "_PID" : "693" } 18 | { "__CURSOR" : "s=e1b770ccab6b4aeab8cb258cbed6fdde;i=1;b=923def0648b1422aa28a8846072481f2;m=5fa48b93;t=5427833d2d747;x=fe6614dd13ed7c2e", "__REALTIME_TIMESTAMP" : "1480458916517703", "__MONOTONIC_TIMESTAMP" : "1604619155", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 24.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "699", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", 
"_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 19 | { "__CURSOR" : "s=e1b770ccab6b4aeab8cb258cbed6fdde;i=2;b=923def0648b1422aa28a8846072481f2;m=5fa48c9c;t=5427833d2d850;x=fe6614dd13ed7c2e", "__REALTIME_TIMESTAMP" : "1480458916517968", "__MONOTONIC_TIMESTAMP" : "1604619420", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 24.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "699", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 20 | { "__CURSOR" : "s=e1b770ccab6b4aeab8cb258cbed6fdde;i=3;b=923def0648b1422aa28a8846072481f2;m=5fa48d21;t=5427833d2d8d5;x=787917a81699d321", "__REALTIME_TIMESTAMP" : "1480458916518101", "__MONOTONIC_TIMESTAMP" : "1604619553", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "699", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", "MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b" } 21 | { "__CURSOR" : "s=a8bc18aa3eb24645b1421d7809f1240b;i=1;b=923def0648b1422aa28a8846072481f2;m=61683165;t=5427835967d19;x=b2b014c4e804275f", "__REALTIME_TIMESTAMP" : "1480458946116889", "__MONOTONIC_TIMESTAMP" : "1634218341", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime 
journal is using 32.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "702", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 22 | { "__CURSOR" : "s=a8bc18aa3eb24645b1421d7809f1240b;i=2;b=923def0648b1422aa28a8846072481f2;m=616832ce;t=5427835967e82;x=b2b014c4e804275f", "__REALTIME_TIMESTAMP" : "1480458946117250", "__MONOTONIC_TIMESTAMP" : "1634218702", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 32.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "702", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 23 | { "__CURSOR" : "s=a8bc18aa3eb24645b1421d7809f1240b;i=3;b=923def0648b1422aa28a8846072481f2;m=61683379;t=5427835967f2d;x=f0764a39f3da45de", "__REALTIME_TIMESTAMP" : "1480458946117421", "__MONOTONIC_TIMESTAMP" : "1634218873", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "702", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", 
"MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b" } 24 | { "__CURSOR" : "s=6c072e0567ff423fa9cb39f136066299;i=1;b=923def0648b1422aa28a8846072481f2;m=65ee77d0;t=542783a1cc384;x=cf26ba97c656bb84", "__REALTIME_TIMESTAMP" : "1480459022025604", "__MONOTONIC_TIMESTAMP" : "1710127056", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 40.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "712", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 25 | { "__CURSOR" : "s=6c072e0567ff423fa9cb39f136066299;i=2;b=923def0648b1422aa28a8846072481f2;m=65ee78c1;t=542783a1cc475;x=cf26ba97c656bb84", "__REALTIME_TIMESTAMP" : "1480459022025845", "__MONOTONIC_TIMESTAMP" : "1710127297", "_BOOT_ID" : "923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "MESSAGE" : "Runtime journal is using 40.0M (max allowed 4.0G, trying to leave 4.0G free of 55.1G available \uffffffe2\uffffff86\uffffff92 current limit 4.0G).", "MESSAGE_ID" : "ec387f577b844b8fa948f33cad9a75e6", "_PID" : "712", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb" } 26 | { "__CURSOR" : "s=6c072e0567ff423fa9cb39f136066299;i=3;b=923def0648b1422aa28a8846072481f2;m=65ee792c;t=542783a1cc4e0;x=7d96bf9e60a6512b", "__REALTIME_TIMESTAMP" : "1480459022025952", "__MONOTONIC_TIMESTAMP" : "1710127404", "_BOOT_ID" : 
"923def0648b1422aa28a8846072481f2", "PRIORITY" : "6", "_TRANSPORT" : "driver", "_PID" : "712", "_UID" : "0", "_GID" : "0", "_COMM" : "systemd-journal", "_EXE" : "/usr/lib/systemd/systemd-journald", "_CMDLINE" : "/usr/lib/systemd/systemd-journald", "_CAP_EFFECTIVE" : "a80425fb", "_SYSTEMD_CGROUP" : "c", "_MACHINE_ID" : "5125015c46bb4bf6a686b5e692492075", "_HOSTNAME" : "f5076731cfdb", "MESSAGE" : "Journal started", "MESSAGE_ID" : "f77379a8490b408bbe5f6940505a777b" } 27 | -------------------------------------------------------------------------------- /samples/sample.conf: -------------------------------------------------------------------------------- 1 | log_priority=7 2 | debug=true 3 | local=true 4 | log_stream="test-today-777" 5 | log_group="test-group-777" 6 | batchSize=5 7 | 8 | --------------------------------------------------------------------------------