├── .gitignore ├── README.md ├── Vagrantfile ├── build ├── artifacts │ ├── lc-tlscert │ ├── logstash-forwarder.crt │ └── logstash-forwarder.key ├── elasticsearch │ └── elasticsearch.yml ├── kibana │ └── config.js ├── logstash │ ├── 01-lumberjack-input.conf │ ├── 10-filter.conf │ ├── 30-lumberjack-output.conf │ ├── logstash-forwarder │ ├── logstash-forwarder.conf │ └── logstash-forwarder.init ├── nginx │ ├── kibanaelastic.conf │ └── logdemo.conf ├── provision-logstash.sh ├── provision-web.sh ├── ssl │ └── ssl.conf ├── supervisord │ └── supervisord.conf └── varnish │ ├── default.vcl │ └── varnish ├── composer.json ├── composer.lock └── www └── index.php /.gitignore: -------------------------------------------------------------------------------- 1 | vendor 2 | .idea 3 | .vagrant 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | #Application Logging With Logstash And Monolog 2 | 3 | ##Introduction 4 | 5 | This code is the companion to the conference talk of the same name given in 6 | London at PHPUK 2015. 7 | 8 | Follow the instructions below to create two VMs. The first of these is a fully working 9 | Logstash instance and the second a web server hosting a simple PHP demo application. 10 | 11 | The web server sends its syslog, auth, Nginx and web application logs to the Logstash 12 | instance. The demo app has four endpoints, all of which generate different types of log data 13 | and demonstrate different techniques. 14 | 15 | You can also look at the two provisioning files in *build/* for detailed instructions on how to install 16 | Logstash, Elasticsearch and Kibana on a central server and the log forwarder component on the other servers 17 | in your infrastructure. 18 | 19 | ##Set Up 20 | 21 | ###Requirements 22 | 23 | 1. [VirtualBox](https://www.virtualbox.org/) 24 | 25 | 2. [Vagrant](https://www.vagrantup.com/) 26 | 27 | 3. 
The vagrant [host updater plugin](https://github.com/cogitatio/vagrant-hostsupdater) 28 | 29 | ###Instructions 30 | 31 | 1. vagrant up 32 | 33 | 2. vagrant ssh web -c "cd /vagrant && composer install" 34 | 35 | 3. Access Kibana from your web browser [http://logs.logstashdemo.com](http://logs.logstashdemo.com/index.html#/dashboard/file/logstash.json) 36 | 37 | 4. Access the demo web application from your browser: http://web.logstashdemo.com 38 | 39 | ### What am I looking at? 40 | 41 | When you visit the Kibana dashboard you should see a graph showing the various logs shipped from the web server 42 | to the logs server. There should be a lot of syslog entries initially. By visiting the web instance you can trigger some 43 | Nginx access logs to be collected (see below for examples). 44 | 45 | Try the following query in the search box: 46 | 47 | ``` 48 | type: nginx-access-hello-app AND response:200 49 | ``` 50 | 51 | You should see only the Nginx access logs rather than all logs including syslogs. 
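Kibana passes searches like this to Elasticsearch as Lucene query strings. If you want to verify the data outside Kibana, the same search can be expressed as an Elasticsearch query DSL body (a sketch; it assumes Elasticsearch is reachable from the log VM, e.g. via `curl 'localhost:9200/logstash-*/_search' -d @query.json`):

```
{
  "query": {
    "query_string": {
      "query": "type: nginx-access-hello-app AND response:200"
    }
  }
}
```

If a query like this returns hits but the Kibana dashboard stays empty, the problem is likely in the Kibana/proxy configuration rather than in log shipping.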
52 | 53 | ### List of web endpoints 54 | 55 | http://web.logstashdemo.com/ 56 | 57 | **/** - A simple endpoint producing a successful response and a single entry in the access log 58 | 59 | **/flappy** - An endpoint that produces a range of errors to demonstrate using Logstash to display errors 60 | 61 | **/fingerscrossed** - An endpoint which uses Monolog + Fingers Crossed handler to demonstrate application logging 62 | 63 | **/register** - An endpoint which uses Monolog and Symfony Event Manager to demonstrate business event logging 64 | 65 | 66 | ###FAQ 67 | 68 | ####How do you set up Logstash / Elasticsearch? 69 | 70 | I followed guides published by DigitalOcean: 71 | 72 | [Part One: Setting up Logstash, Kibana](https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04) 73 | 74 | [Part Two: Using Logstash filters](https://www.digitalocean.com/community/tutorials/adding-logstash-filters-to-improve-centralized-logging) 75 | 76 | The file *build/provision-logstash.sh* contains a script which automates the steps in the first article. 77 | 78 | ####How do you ship logs to Logstash? 79 | 80 | The article: 81 | 82 | [Part One: Setting up Logstash, Kibana](https://www.digitalocean.com/community/tutorials/how-to-use-logstash-and-kibana-to-centralize-and-visualize-logs-on-ubuntu-14-04) 83 | 84 | shows how to use the Logstash log forwarder, and the script in *build/provision-web.sh* 85 | shows how the log forwarder is installed and configured. 86 | 87 | ####How do I search logs via Kibana? 88 | 89 | Go to http://logs.logstashdemo.com after following the instructions above. 90 | 91 | [The documentation](http://www.elasticsearch.org/guide/en/kibana/current/working-with-queries-and-filters.html) provides a set of examples on how to use the search effectively. 92 | 93 | ####Why haven't you used *insert x configuration management system* to set this up? 
94 | 95 | In an effort to clearly define the steps required to install Logstash, and to make them accessible to everyone, I haven't used a config management tool. 96 | In a production setup I recommend you automate the deployment of Logstash and its agents! 97 | 98 | ###Troubleshooting 99 | 100 | ####Networking Issues 101 | 102 | This Vagrantfile uses a host-only networking configuration. This should be created automatically, but where it isn't: 103 | 104 | 1) Open VirtualBox 105 | 106 | 2) Open preferences 107 | 108 | 3) Select networking 109 | 110 | 4) Select 'Host Only Networks' 111 | 112 | 5) Create a host-only network with the following settings: 113 | ``` 114 | IPV4 Address: 10.0.4.1 115 | IPV4 Mask: 255.255.255.0 116 | ``` 117 | 118 | ####No Logfiles in Kibana 119 | 120 | If you aren't seeing logs appear in Kibana, check the log for the Logstash forwarding agent: 121 | ``` 122 | tail -f /var/log/logstash-forwarder.log 123 | ``` 124 | 125 | You should also be able to use the browser console to see if any errors are being returned from AJAX calls to Elasticsearch. 126 | 127 | 128 | ####Vagrant refuses to 'up' the machines with a message about a missing plugin 129 | 130 | Have you installed the Host Updater plugin as indicated above? 131 | 132 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # Vagrantfile API/syntax version. Don't touch unless you know what you're doing! 
5 | VAGRANTFILE_API_VERSION = "2" 6 | 7 | Vagrant.configure("2") do |config| 8 | 9 | config.vm.provision "shell", inline: "echo Hello" 10 | 11 | config.vm.define "log" do |log| 12 | 13 | # Box template to use 14 | log.vm.box = "ubuntu/trusty64" 15 | 16 | # Increase memory available 17 | log.vm.provider "virtualbox" do |v| 18 | v.memory = 1024 19 | end 20 | 21 | log.vm.network "private_network", ip: "10.0.4.55" 22 | 23 | log.vm.synced_folder ".", "/vagrant", type: "nfs" 24 | 25 | log.ssh.forward_agent = true 26 | 27 | log.hostsupdater.aliases = ["logs.logstashdemo.com"] 28 | 29 | log.vm.provision "shell", path: "build/provision-logstash.sh" 30 | 31 | end 32 | 33 | config.vm.define "web" do |web| 34 | 35 | # Box template to use 36 | web.vm.box = "ubuntu/trusty64" 37 | 38 | # Increase memory available 39 | web.vm.provider "virtualbox" do |v| 40 | v.memory = 1024 41 | end 42 | 43 | web.vm.network "private_network", ip: "10.0.4.56" 44 | 45 | web.vm.synced_folder ".", "/vagrant", type: "nfs" 46 | 47 | web.ssh.forward_agent = true 48 | 49 | web.hostsupdater.aliases = ["web.logstashdemo.com"] 50 | 51 | web.vm.provision "shell", path: "build/provision-web.sh" 52 | end 53 | end -------------------------------------------------------------------------------- /build/artifacts/lc-tlscert: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LoveSoftware/application-logging-with-logstash/bc1be872a5ddb29c0cdbe8cf969995d8e7e79889/build/artifacts/lc-tlscert -------------------------------------------------------------------------------- /build/artifacts/logstash-forwarder.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDOzCCAiWgAwIBAgIQV2zNBK3fj3dU5XtDVqqFoDALBgkqhkiG9w0BAQswNjEU 3 | MBIGA1UEChMLTG9nIENvdXJpZXIxHjAcBgNVBAMTFWxvZ3MubG9nc3Rhc2hkZW1v 4 | LmNvbTAeFw0xNTAyMTQxNzQyMjdaFw0yODEwMjMxNzQyMjdaMDYxFDASBgNVBAoT 5 | 
C0xvZyBDb3VyaWVyMR4wHAYDVQQDExVsb2dzLmxvZ3N0YXNoZGVtby5jb20wggEi 6 | MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC4xr6gkLSwlmXuk5yZC4i2kh/k 7 | +8zSdeHEtaybYqvJSjelINj1FbhFGKQpftBOjBhHWZYIjz3321+eiYpmJgQoGuPU 8 | o13jw4XcpOol0tT1Ho+KTdpUzDJt2S2ka3OL8wyzNTMU0KLxIiLX3xQ+a97RBzCR 9 | R812azME3GniYlkKCM8Bm5097hBJ8Pnltp1l/CFUbFePOIxia/QrCK4izWmxgsDy 10 | L6HBwSapMqYTXGRT12yaHzJvOEGpavZD8CM+ChZJtnDCZJE+cpOIP+t05oPZI/yl 11 | +Dd7QPKG1hH+xnzjjHd3a7vO1yLdulRd3egap4mAdZUXBldLFdPfa268Kq9hAgMB 12 | AAGjSTBHMA4GA1UdDwEB/wQEAwIApDATBgNVHSUEDDAKBggrBgEFBQcDATAPBgNV 13 | HRMBAf8EBTADAQH/MA8GA1UdEQQIMAaHBAoABDcwCwYJKoZIhvcNAQELA4IBAQAD 14 | f2LS7RCjxWrI3bwmJYZxbie25D0WPkF4KvVw1VnmjI6lf+ld2DG5DT6nmm3LfmBn 15 | a/mN4eatm0m61gms+9+3uGhV8jr5mGT0wVfWzYQUTyW9VbmlW54kNPnm6Z7f4/mr 16 | wvwrhdquKkYXYHwWscnYceveUEilB248bjpbKmx6LYGZz1s7K7LJuVlj3K94QgU7 17 | jacyuv0GZJ7YWuFbBBwP9CUXmK/HTR+OgoKAxcanijQ7CFyuQD27QLK79TVrk/en 18 | RM5LkwL6nLMqDaAed3IPvS75Q9YIF7XtSEAIFIRSgn1cBsKneKc2bGEEWvGFiQNz 19 | 77N1TN3zZTi76X61bAqe 20 | -----END CERTIFICATE----- 21 | -------------------------------------------------------------------------------- /build/artifacts/logstash-forwarder.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAuMa+oJC0sJZl7pOcmQuItpIf5PvM0nXhxLWsm2KryUo3pSDY 3 | 9RW4RRikKX7QTowYR1mWCI8999tfnomKZiYEKBrj1KNd48OF3KTqJdLU9R6Pik3a 4 | VMwybdktpGtzi/MMszUzFNCi8SIi198UPmve0QcwkUfNdmszBNxp4mJZCgjPAZud 5 | Pe4QSfD55badZfwhVGxXjziMYmv0KwiuIs1psYLA8i+hwcEmqTKmE1xkU9dsmh8y 6 | bzhBqWr2Q/AjPgoWSbZwwmSRPnKTiD/rdOaD2SP8pfg3e0DyhtYR/sZ844x3d2u7 7 | ztci3bpUXd3oGqeJgHWVFwZXSxXT32tuvCqvYQIDAQABAoIBAG801jfmv4jkC5cJ 8 | 6h7GLVLMITv8O+qSnf145dhjC0bLTzAn08u1dcDIMszykMYlVNtkVIL0SvRoaGUP 9 | HGecC7ZjcKliZTiWTXNdIbr/58Fa0kMH1hZhCxzHr8ucC9+3uPYGV6b4Zoi/5b6M 10 | eS+UVnbxX86gK01Q+VS8n1FrpXD+kyP0Nz5TlucB/VZthW+2glyKhNqdJJKYLc+9 11 | lBlC2ikTwNfydL6RXUdYR0vGAhTjIs55VttIlXMRUiJs4V4FYhI5+5/IudO6VVkO 12 | 
5dytNnQG4luEAE3P/Ism5Y/DNBTGfkzhz1jvkrjbK75LOJfO6QDNNsGiUn0VnL6U 13 | tOycXwECgYEAxUBhOHqW+ne22TzFKXiDS3IuzRKSQglnkyZsMtPA3DgChPas2jCf 14 | 66EAW9prOXsJl8rN0whpvRaGvECogZXcS0/9DyuFfXsqI76c7uJUjazWJz17GyJ6 15 | 8xjJQ1oRTf2oBPv9a2houIiSO8bKTxuiexicORxt31YPyeCF30s2q9ECgYEA788v 16 | xH/FGickMcC84bRRbpHP+W0URMfa233yRBvxsVWcFBuGpLGnAuK5T8V4QlEYZQ32 17 | SNfuX9cr+feny1USOkZoWYydRGy7k1OP5iAFX2gQtGo7d9zA3jllsDSqdkx+usAF 18 | QLiUnrUL0fB5qoN1itCq0dUJBiBQUpHhowia/pECgYEAt9PxBybQj+qDwN8uzCBh 19 | FC38yefV4K9NFMlJKvFHmrSkPHB71PheAcXRRMlBBpfQ7+L0gQklKjDVLpp/sA0O 20 | +i04pSulQ7VGJ3vcW5EYxdRe3MEier5eoTHnV9qXp/yO2t5RZgkvF1NIHWd9Yc5a 21 | Vagw59TD3NEi87xIZzp8YBECgYBlK/LKIuGD7BmACAFn84wbatbkMxnG/s4dpeAM 22 | zgFEwIptjUNbvjtoo0BtIDFhQRdaou5Rww+lTYEXH12iEgzzmvqxNPqwgHMOb8WY 23 | 38+EdcH+a4cVRYP0/SAim8WCzTj2DsbojDbfUiBffOXHg1iWrPw0NH1vITjh7PvV 24 | rW6+kQKBgQCsPbmYdVAqAWT9KIWDUBgX9i+/8hN2FU0zCMXTDln464oql1GJl3SP 25 | nuqCvJFvARqxYIF2hquaHij1Qz79imzFFFt7D5dwkVqdVZs4SBnsCIrLfmpYpTit 26 | Dhe4k0zDCZ764vFFp86WUwHSpeYCBvKY7o719in3KNSMmY8DFUD1hQ== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /build/elasticsearch/elasticsearch.yml: -------------------------------------------------------------------------------- 1 | ##################### Elasticsearch Configuration Example ##################### 2 | 3 | # This file contains an overview of various configuration settings, 4 | # targeted at operations staff. Application developers should 5 | # consult the guide at . 6 | # 7 | # The installation procedure is covered at 8 | # . 9 | # 10 | # Elasticsearch comes with reasonable defaults for most settings, 11 | # so you can try it out without bothering with configuration. 12 | # 13 | # Most of the time, these defaults are just fine for running a production 14 | # cluster. 
If you're fine-tuning your cluster, or wondering about the 15 | # effect of certain configuration option, please _do ask_ on the 16 | # mailing list or IRC channel [http://elasticsearch.org/community]. 17 | 18 | # Any element in the configuration can be replaced with environment variables 19 | # by placing them in ${...} notation. For example: 20 | # 21 | # node.rack: ${RACK_ENV_VAR} 22 | 23 | # For information on supported formats and syntax for the config file, see 24 | # 25 | 26 | 27 | ################################### Cluster ################################### 28 | 29 | # Cluster name identifies your cluster for auto-discovery. If you're running 30 | # multiple clusters on the same network, make sure you're using unique names. 31 | # 32 | # cluster.name: elasticsearch 33 | 34 | 35 | #################################### Node ##################################### 36 | 37 | # Node names are generated dynamically on startup, so you're relieved 38 | # from configuring them manually. You can tie this node to a specific name: 39 | # 40 | # node.name: "Franz Kafka" 41 | 42 | # Every node can be configured to allow or deny being eligible as the master, 43 | # and to allow or deny to store the data. 44 | # 45 | # Allow this node to be eligible as a master node (enabled by default): 46 | # 47 | # node.master: true 48 | # 49 | # Allow this node to store data (enabled by default): 50 | # 51 | # node.data: true 52 | 53 | # You can exploit these settings to design advanced cluster topologies. 54 | # 55 | # 1. You want this node to never become a master node, only to hold data. 56 | # This will be the "workhorse" of your cluster. 57 | # 58 | # node.master: false 59 | # node.data: true 60 | # 61 | # 2. You want this node to only serve as a master: to not store any data and 62 | # to have free resources. This will be the "coordinator" of your cluster. 63 | # 64 | # node.master: true 65 | # node.data: false 66 | # 67 | # 3. 
You want this node to be neither master nor data node, but 68 | # to act as a "search load balancer" (fetching data from nodes, 69 | # aggregating results, etc.) 70 | # 71 | # node.master: false 72 | # node.data: false 73 | 74 | # Use the Cluster Health API [http://localhost:9200/_cluster/health], the 75 | # Node Info API [http://localhost:9200/_nodes] or GUI tools 76 | # such as , 77 | # , 78 | # and 79 | # to inspect the cluster state. 80 | 81 | # A node can have generic attributes associated with it, which can later be used 82 | # for customized shard allocation filtering, or allocation awareness. An attribute 83 | # is a simple key value pair, similar to node.key: value, here is an example: 84 | # 85 | # node.rack: rack314 86 | 87 | # By default, multiple nodes are allowed to start from the same installation location 88 | # to disable it, set the following: 89 | # node.max_local_storage_nodes: 1 90 | 91 | 92 | #################################### Index #################################### 93 | 94 | # You can set a number of options (such as shard/replica options, mapping 95 | # or analyzer definitions, translog settings, ...) for indices globally, 96 | # in this file. 97 | # 98 | # Note, that it makes more sense to configure index settings specifically for 99 | # a certain index, either when creating it or by using the index templates API. 100 | # 101 | # See and 102 | # 103 | # for more information. 
104 | 105 | # Set the number of shards (splits) of an index (5 by default): 106 | # 107 | # index.number_of_shards: 5 108 | 109 | # Set the number of replicas (additional copies) of an index (1 by default): 110 | # 111 | # index.number_of_replicas: 1 112 | 113 | # Note, that for development on a local machine, with small indices, it usually 114 | # makes sense to "disable" the distributed features: 115 | # 116 | # index.number_of_shards: 1 117 | # index.number_of_replicas: 0 118 | 119 | # These settings directly affect the performance of index and search operations 120 | # in your cluster. Assuming you have enough machines to hold shards and 121 | # replicas, the rule of thumb is: 122 | # 123 | # 1. Having more *shards* enhances the _indexing_ performance and allows to 124 | # _distribute_ a big index across machines. 125 | # 2. Having more *replicas* enhances the _search_ performance and improves the 126 | # cluster _availability_. 127 | # 128 | # The "number_of_shards" is a one-time setting for an index. 129 | # 130 | # The "number_of_replicas" can be increased or decreased anytime, 131 | # by using the Index Update Settings API. 132 | # 133 | # Elasticsearch takes care about load balancing, relocating, gathering the 134 | # results from nodes, etc. Experiment with different settings to fine-tune 135 | # your setup. 136 | 137 | # Use the Index Status API () to inspect 138 | # the index status. 139 | 140 | 141 | #################################### Paths #################################### 142 | 143 | # Path to directory containing configuration (this file and logging.yml): 144 | # 145 | # path.conf: /path/to/conf 146 | 147 | # Path to directory where to store index data allocated for this node. 148 | # 149 | # path.data: /path/to/data 150 | # 151 | # Can optionally include more than one location, causing data to be striped across 152 | # the locations (a la RAID 0) on a file level, favouring locations with most free 153 | # space on creation. 
For example: 154 | # 155 | # path.data: /path/to/data1,/path/to/data2 156 | 157 | # Path to temporary files: 158 | # 159 | # path.work: /path/to/work 160 | 161 | # Path to log files: 162 | # 163 | # path.logs: /path/to/logs 164 | 165 | # Path to where plugins are installed: 166 | # 167 | # path.plugins: /path/to/plugins 168 | 169 | 170 | #################################### Plugin ################################### 171 | 172 | # If a plugin listed here is not installed for current node, the node will not start. 173 | # 174 | # plugin.mandatory: mapper-attachments,lang-groovy 175 | 176 | 177 | ################################### Memory #################################### 178 | 179 | # Elasticsearch performs poorly when JVM starts swapping: you should ensure that 180 | # it _never_ swaps. 181 | # 182 | # Set this property to true to lock the memory: 183 | # 184 | # bootstrap.mlockall: true 185 | 186 | # Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set 187 | # to the same value, and that the machine has enough memory to allocate 188 | # for Elasticsearch, leaving enough memory for the operating system itself. 189 | # 190 | # You should also make sure that the Elasticsearch process is allowed to lock 191 | # the memory, eg. by using `ulimit -l unlimited`. 192 | 193 | 194 | ############################## Network And HTTP ############################### 195 | 196 | # Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens 197 | # on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node 198 | # communication. (the range means that if the port is busy, it will automatically 199 | # try the next port). 200 | 201 | # Set the bind address specifically (IPv4 or IPv6): 202 | # 203 | # network.bind_host: 192.168.0.1 204 | 205 | # Set the address other nodes will use to communicate with this node. If not 206 | # set, it is automatically derived. It must point to an actual IP address. 
207 | # 208 | # network.publish_host: 192.168.0.1 209 | 210 | # Set both 'bind_host' and 'publish_host': 211 | # 212 | # network.host: 192.168.0.1 213 | 214 | # Set a custom port for the node to node communication (9300 by default): 215 | # 216 | # transport.tcp.port: 9300 217 | 218 | # Enable compression for all communication between nodes (disabled by default): 219 | # 220 | # transport.tcp.compress: true 221 | 222 | # Set a custom port to listen for HTTP traffic: 223 | # 224 | # http.port: 9200 225 | 226 | # Set a custom allowed content length: 227 | # 228 | # http.max_content_length: 100mb 229 | 230 | # Disable HTTP completely: 231 | # 232 | # http.enabled: false 233 | 234 | 235 | ################################### Gateway ################################### 236 | 237 | # The gateway allows for persisting the cluster state between full cluster 238 | # restarts. Every change to the state (such as adding an index) will be stored 239 | # in the gateway, and when the cluster starts up for the first time, 240 | # it will read its state from the gateway. 241 | 242 | # There are several types of gateway implementations. For more information, see 243 | # . 244 | 245 | # The default gateway type is the "local" gateway (recommended): 246 | # 247 | # gateway.type: local 248 | 249 | # Settings below control how and when to start the initial recovery process on 250 | # a full cluster restart (to reuse as much local data as possible when using shared 251 | # gateway). 252 | 253 | # Allow recovery process after N nodes in a cluster are up: 254 | # 255 | # gateway.recover_after_nodes: 1 256 | 257 | # Set the timeout to initiate the recovery process, once the N nodes 258 | # from previous setting are up (accepts time value): 259 | # 260 | # gateway.recover_after_time: 5m 261 | 262 | # Set how many nodes are expected in this cluster. 
Once these N nodes 263 | # are up (and recover_after_nodes is met), begin recovery process immediately 264 | # (without waiting for recover_after_time to expire): 265 | # 266 | # gateway.expected_nodes: 2 267 | 268 | 269 | ############################# Recovery Throttling ############################# 270 | 271 | # These settings allow to control the process of shards allocation between 272 | # nodes during initial recovery, replica allocation, rebalancing, 273 | # or when adding and removing nodes. 274 | 275 | # Set the number of concurrent recoveries happening on a node: 276 | # 277 | # 1. During the initial recovery 278 | # 279 | # cluster.routing.allocation.node_initial_primaries_recoveries: 4 280 | # 281 | # 2. During adding/removing nodes, rebalancing, etc 282 | # 283 | # cluster.routing.allocation.node_concurrent_recoveries: 2 284 | 285 | # Set to throttle throughput when recovering (eg. 100mb, by default 20mb): 286 | # 287 | # indices.recovery.max_bytes_per_sec: 20mb 288 | 289 | # Set to limit the number of open concurrent streams when 290 | # recovering a shard from a peer: 291 | # 292 | # indices.recovery.concurrent_streams: 5 293 | 294 | 295 | ################################## Discovery ################################## 296 | 297 | # Discovery infrastructure ensures nodes can be found within a cluster 298 | # and master node is elected. Multicast discovery is the default. 299 | 300 | # Set to ensure a node sees N other master eligible nodes to be considered 301 | # operational within the cluster. Its recommended to set it to a higher value 302 | # than 1 when running more than 2 nodes in the cluster. 303 | # 304 | # discovery.zen.minimum_master_nodes: 1 305 | 306 | # Set the time to wait for ping responses from other nodes when discovering. 
307 | # Set this option to a higher value on a slow or congested network 308 | # to minimize discovery failures: 309 | # 310 | # discovery.zen.ping.timeout: 3s 311 | 312 | # For more information, see 313 | # 314 | 315 | # Unicast discovery allows to explicitly control which nodes will be used 316 | # to discover the cluster. It can be used when multicast is not present, 317 | # or to restrict the cluster communication-wise. 318 | # 319 | # 1. Disable multicast discovery (enabled by default): 320 | # 321 | # discovery.zen.ping.multicast.enabled: false 322 | # 323 | # 2. Configure an initial list of master nodes in the cluster 324 | # to perform discovery when new nodes (master or data) are started: 325 | # 326 | # discovery.zen.ping.unicast.hosts: ["host1", "host2:port"] 327 | 328 | # EC2 discovery allows to use AWS EC2 API in order to perform discovery. 329 | # 330 | # You have to install the cloud-aws plugin for enabling the EC2 discovery. 331 | # 332 | # For more information, see 333 | # 334 | # 335 | # See 336 | # for a step-by-step tutorial. 337 | 338 | # GCE discovery allows to use Google Compute Engine API in order to perform discovery. 339 | # 340 | # You have to install the cloud-gce plugin for enabling the GCE discovery. 341 | # 342 | # For more information, see . 343 | 344 | # Azure discovery allows to use Azure API in order to perform discovery. 345 | # 346 | # You have to install the cloud-azure plugin for enabling the Azure discovery. 347 | # 348 | # For more information, see . 349 | 350 | ################################## Slow Log ################################## 351 | 352 | # Shard level query and fetch threshold logging. 
353 | 354 | #index.search.slowlog.threshold.query.warn: 10s 355 | #index.search.slowlog.threshold.query.info: 5s 356 | #index.search.slowlog.threshold.query.debug: 2s 357 | #index.search.slowlog.threshold.query.trace: 500ms 358 | 359 | #index.search.slowlog.threshold.fetch.warn: 1s 360 | #index.search.slowlog.threshold.fetch.info: 800ms 361 | #index.search.slowlog.threshold.fetch.debug: 500ms 362 | #index.search.slowlog.threshold.fetch.trace: 200ms 363 | 364 | #index.indexing.slowlog.threshold.index.warn: 10s 365 | #index.indexing.slowlog.threshold.index.info: 5s 366 | #index.indexing.slowlog.threshold.index.debug: 2s 367 | #index.indexing.slowlog.threshold.index.trace: 500ms 368 | 369 | ################################## GC Logging ################################ 370 | 371 | #monitor.jvm.gc.young.warn: 1000ms 372 | #monitor.jvm.gc.young.info: 700ms 373 | #monitor.jvm.gc.young.debug: 400ms 374 | 375 | #monitor.jvm.gc.old.warn: 10s 376 | #monitor.jvm.gc.old.info: 5s 377 | #monitor.jvm.gc.old.debug: 2s 378 | 379 | script.disable_dynamic: true 380 | network.host: localhost 381 | -------------------------------------------------------------------------------- /build/kibana/config.js: -------------------------------------------------------------------------------- 1 | /** @scratch /configuration/config.js/1 2 | * 3 | * == Configuration 4 | * config.js is where you will find the core Kibana configuration. This file contains parameter that 5 | * must be set before kibana is run for the first time. 6 | */ 7 | define(['settings'], 8 | function (Settings) { 9 | 10 | 11 | /** @scratch /configuration/config.js/2 12 | * 13 | * === Parameters 14 | */ 15 | return new Settings({ 16 | 17 | /** @scratch /configuration/config.js/5 18 | * 19 | * ==== elasticsearch 20 | * 21 | * The URL to your elasticsearch server. You almost certainly don't 22 | * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on 23 | * the same host. 
By default this will attempt to reach ES at the same host you have 24 | * kibana installed on. You probably want to set it to the FQDN of your 25 | * elasticsearch host 26 | * 27 | * Note: this can also be an object if you want to pass options to the http client. For example: 28 | * 29 | * +elasticsearch: {server: "http://localhost:9200", withCredentials: true}+ 30 | * 31 | */ 32 | elasticsearch: "http://"+window.location.hostname+":80", 33 | 34 | /** @scratch /configuration/config.js/5 35 | * 36 | * ==== default_route 37 | * 38 | * This is the default landing page when you don't specify a dashboard to load. You can specify 39 | * files, scripts or saved dashboards here. For example, if you had saved a dashboard called 40 | * `WebLogs' to elasticsearch you might use: 41 | * 42 | * default_route: '/dashboard/elasticsearch/WebLogs', 43 | */ 44 | default_route : '/dashboard/file/default.json', 45 | 46 | /** @scratch /configuration/config.js/5 47 | * 48 | * ==== kibana-int 49 | * 50 | * The default ES index to use for storing Kibana specific object 51 | * such as stored dashboards 52 | */ 53 | kibana_index: "kibana-int", 54 | 55 | /** @scratch /configuration/config.js/5 56 | * 57 | * ==== panel_name 58 | * 59 | * An array of panel modules available. Panels will only be loaded when they are defined in the 60 | * dashboard, but this list is used in the "add panel" interface. 
61 | */ 62 | panel_names: [ 63 | 'histogram', 64 | 'map', 65 | 'goal', 66 | 'table', 67 | 'filtering', 68 | 'timepicker', 69 | 'text', 70 | 'hits', 71 | 'column', 72 | 'trends', 73 | 'bettermap', 74 | 'query', 75 | 'terms', 76 | 'stats', 77 | 'sparklines' 78 | ] 79 | }); 80 | }); 81 | -------------------------------------------------------------------------------- /build/logstash/01-lumberjack-input.conf: -------------------------------------------------------------------------------- 1 | input { 2 | lumberjack { 3 | port => 5000 4 | type => "logs" 5 | ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt" 6 | ssl_key => "/etc/pki/tls/private/logstash-forwarder.key" 7 | } 8 | } -------------------------------------------------------------------------------- /build/logstash/10-filter.conf: -------------------------------------------------------------------------------- 1 | filter { 2 | if [type] == "syslog" { 3 | grok { 4 | match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" } 5 | add_field => [ "received_at", "%{@timestamp}" ] 6 | add_field => [ "received_from", "%{host}" ] 7 | } 8 | syslog_pri { } 9 | date { 10 | match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ] 11 | } 12 | } 13 | 14 | if [type] == "nginx-access" { 15 | grok { 16 | match => { "message" => "%{COMBINEDAPACHELOG}" } 17 | add_field => [ "received_at", "%{@timestamp}" ] 18 | add_field => [ "received_from", "%{host}" ] 19 | } 20 | } 21 | 22 | if [type] == "helloapp-applog" { 23 | json { 24 | source => "message" 25 | add_field => [ "received_at", "%{@timestamp}" ] 26 | add_field => [ "received_from", "%{host}" ] 27 | } 28 | } 29 | 30 | if [type] == "helloapp-buslog" { 31 | json { 32 | source => "message" 33 | add_field => [ "received_at", "%{@timestamp}" ] 34 | add_field => [ "received_from", "%{host}" ] 35 | } 36 | } 37 | } 38 | 39 | 
-------------------------------------------------------------------------------- /build/logstash/30-lumberjack-output.conf: -------------------------------------------------------------------------------- 1 | output { 2 | elasticsearch { host => localhost } 3 | stdout { codec => rubydebug } 4 | } -------------------------------------------------------------------------------- /build/logstash/logstash-forwarder: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LoveSoftware/application-logging-with-logstash/bc1be872a5ddb29c0cdbe8cf969995d8e7e79889/build/logstash/logstash-forwarder -------------------------------------------------------------------------------- /build/logstash/logstash-forwarder.conf: -------------------------------------------------------------------------------- 1 | { 2 | "network": { 3 | "servers": [ "logs.logstashdemo.com:5000" ], 4 | "timeout": 15, 5 | "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" 6 | }, 7 | "files": [ 8 | { 9 | "paths": [ 10 | "/var/log/syslog", 11 | "/var/log/auth.log" 12 | ], 13 | "fields": { "type": "syslog" } 14 | }, 15 | { 16 | "paths": [ 17 | "/var/log/nginx/helloapp.access.log" 18 | ], 19 | "fields": { "type": "nginx-access" } 20 | }, 21 | { 22 | "paths": [ 23 | "/var/log/nginx/helloapp.error.log" 24 | ], 25 | "fields": { "type": "nginx-error" } 26 | }, 27 | { 28 | "paths": [ 29 | "/var/log/app.log" 30 | ], 31 | "fields": { "type": "helloapp-applog" } 32 | }, 33 | { 34 | "paths": [ 35 | "/var/log/bus.log" 36 | ], 37 | "fields": { "type": "helloapp-buslog" } 38 | } 39 | ] 40 | } -------------------------------------------------------------------------------- /build/logstash/logstash-forwarder.init: -------------------------------------------------------------------------------- 1 | #! 
/bin/sh 2 | ### BEGIN INIT INFO 3 | # Provides: skeleton 4 | # Required-Start: $remote_fs $syslog 5 | # Required-Stop: $remote_fs $syslog 6 | # Default-Start: 2 3 4 5 7 | # Default-Stop: 0 1 6 8 | # Short-Description: Example initscript 9 | # Description: This file should be used to construct scripts to be 10 | # placed in /etc/init.d. 11 | ### END INIT INFO 12 | 13 | # Author: Jordan Sissel 14 | 15 | # PATH should only include /usr/* if it runs after the mountnfs.sh script 16 | PATH=/sbin:/usr/sbin:/bin:/usr/bin 17 | DESC="log shipper" 18 | NAME=logstash-forwarder 19 | DAEMON=/opt/logstash-forwarder/bin/logstash-forwarder 20 | DAEMON_ARGS="-config /etc/logstash-forwarder -spool-size 100" 21 | PIDFILE=/var/run/$NAME.pid 22 | SCRIPTNAME=/etc/init.d/$NAME 23 | LOG=/var/log/logstash-forwarder.log 24 | 25 | [ -r /etc/default/$NAME ] && . /etc/default/$NAME 26 | . /lib/init/vars.sh 27 | . /lib/lsb/init-functions 28 | 29 | COMMAND="cd /var/run; exec $DAEMON $DAEMON_ARGS > $LOG" 30 | 31 | do_start() { 32 | # Skip if it's already running 33 | start-stop-daemon --start --quiet --pidfile $PIDFILE --exec /bin/sh --test > /dev/null || return 1 34 | 35 | cd /var/run 36 | # Actually start it now. 37 | start-stop-daemon --start --quiet --make-pidfile --background \ 38 | --pidfile $PIDFILE --exec /bin/sh -- -c "$COMMAND" || return 2 39 | } 40 | 41 | do_stop() 42 | { 43 | start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE 44 | RETVAL="$?" 45 | [ "$RETVAL" = 2 ] && return 2 46 | start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON 47 | [ "$?" = 2 ] && return 2 48 | rm -f $PIDFILE 49 | return "$RETVAL" 50 | } 51 | 52 | case "$1" in 53 | start) 54 | [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" 55 | do_start 56 | case "$?" 
in 57 | 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 58 | 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; 59 | esac 60 | ;; 61 | stop) 62 | [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" 63 | do_stop 64 | case "$?" in 65 | 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 66 | 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; 67 | esac 68 | ;; 69 | status) 70 | status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $? 71 | ;; 72 | restart|force-reload) 73 | log_daemon_msg "Restarting $DESC" "$NAME" 74 | do_stop 75 | case "$?" in 76 | 0|1) 77 | do_start 78 | case "$?" in 79 | 0) log_end_msg 0 ;; 80 | 1) log_end_msg 1 ;; # Old process is still running 81 | *) log_end_msg 1 ;; # Failed to start 82 | esac 83 | ;; 84 | *) 85 | # Failed to stop 86 | log_end_msg 1 87 | ;; 88 | esac 89 | ;; 90 | *) 91 | echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2 92 | exit 3 93 | ;; 94 | esac 95 | 96 | : -------------------------------------------------------------------------------- /build/nginx/kibanaelastic.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Nginx proxy for Elasticsearch + Kibana 3 | # 4 | # In this setup, we are password protecting the saving of dashboards. You may 5 | # wish to extend the password protection to all paths. 
6 | # 7 | # Even though these paths are being called as the result of an ajax request, the 8 | # browser will prompt for a username/password on the first request 9 | # 10 | # If you use this, you'll want to point config.js at http://FQDN:80/ instead of 11 | # http://FQDN:9200 12 | # 13 | server { 14 | listen *:80 ; 15 | 16 | server_name logs.logstashdemo.com; 17 | access_log /var/log/nginx/kibana.myhost.org.access.log; 18 | 19 | location / { 20 | root /var/www/kibana3; 21 | index index.html index.htm; 22 | } 23 | 24 | location ~ ^/_aliases$ { 25 | proxy_pass http://127.0.0.1:9200; 26 | proxy_read_timeout 90; 27 | } 28 | location ~ ^/.*/_aliases$ { 29 | proxy_pass http://127.0.0.1:9200; 30 | proxy_read_timeout 90; 31 | } 32 | location ~ ^/_nodes$ { 33 | proxy_pass http://127.0.0.1:9200; 34 | proxy_read_timeout 90; 35 | } 36 | location ~ ^/.*/_search$ { 37 | proxy_pass http://127.0.0.1:9200; 38 | proxy_read_timeout 90; 39 | } 40 | location ~ ^/.*/_mapping { 41 | proxy_pass http://127.0.0.1:9200; 42 | proxy_read_timeout 90; 43 | } 44 | 45 | # Password protected end points 46 | location ~ ^/kibana-int/dashboard/.*$ { 47 | proxy_pass http://127.0.0.1:9200; 48 | proxy_read_timeout 90; 49 | limit_except GET { 50 | proxy_pass http://127.0.0.1:9200; 51 | auth_basic "Restricted"; 52 | auth_basic_user_file /etc/nginx/conf.d/logs.logstashdemo.com.passwd; 53 | } 54 | } 55 | location ~ ^/kibana-int/temp.*$ { 56 | proxy_pass http://127.0.0.1:9200; 57 | proxy_read_timeout 90; 58 | limit_except GET { 59 | proxy_pass http://127.0.0.1:9200; 60 | auth_basic "Restricted"; 61 | auth_basic_user_file /etc/nginx/conf.d/logs.logstashdemo.com.passwd; 62 | } 63 | } 64 | } -------------------------------------------------------------------------------- /build/nginx/logdemo.conf: -------------------------------------------------------------------------------- 1 | # You may add here your 2 | # server { 3 | # ... 
4 | # } 5 | # statements for each of your virtual hosts to this file 6 | 7 | ## 8 | # You should look at the following URL's in order to grasp a solid understanding 9 | # of Nginx configuration files in order to fully unleash the power of Nginx. 10 | # http://wiki.nginx.org/Pitfalls 11 | # http://wiki.nginx.org/QuickStart 12 | # http://wiki.nginx.org/Configuration 13 | # 14 | # Generally, you will want to move this file somewhere, and start with a clean 15 | # file but keep this around for reference. Or just disable in sites-enabled. 16 | # 17 | # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. 18 | ## 19 | 20 | server { 21 | #site root is redirected to the app boot script 22 | 23 | root /vagrant/www; 24 | 25 | listen 8080; 26 | 27 | access_log /var/log/nginx/helloapp.access.log; 28 | error_log /var/log/nginx/helloapp.error.log; 29 | 30 | location = / { 31 | try_files @site @site; 32 | } 33 | 34 | #all other locations try other files first and go to our front controller if none of them exists 35 | location / { 36 | try_files $uri $uri/ @site; 37 | } 38 | 39 | #return 404 for all php files as we do have a front controller 40 | location ~ \.php$ { 41 | return 404; 42 | } 43 | 44 | location @site { 45 | fastcgi_pass unix:/var/run/php5-fpm.sock; 46 | include fastcgi_params; 47 | fastcgi_param SCRIPT_FILENAME $document_root/index.php; 48 | #uncomment when running via https 49 | #fastcgi_param HTTPS on; 50 | } 51 | } 52 | 53 | 54 | -------------------------------------------------------------------------------- /build/provision-logstash.sh: -------------------------------------------------------------------------------- 1 | 2 | # Update Apt 3 | wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add - 4 | echo 'deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list 5 | echo 'deb http://packages.elasticsearch.org/logstash/1.4/debian stable 
main' | sudo tee /etc/apt/sources.list.d/logstash.list 6 | sudo add-apt-repository ppa:webupd8team/java 7 | sudo apt-get update 8 | 9 | # Install Java 10 | echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections 11 | sudo apt-get install -y oracle-java8-installer 12 | sudo apt-get install -y oracle-java8-set-default 13 | 14 | # Install Elastic Search 15 | sudo apt-get -y install elasticsearch=1.1.1 16 | sudo update-rc.d elasticsearch defaults 95 10 17 | sudo cp /vagrant/build/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml 18 | sudo service elasticsearch start 19 | 20 | # Install Kibana 21 | cd ~; wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz 22 | tar xf kibana-3.0.1.tar.gz 23 | cp /vagrant/build/kibana/config.js ~/kibana-3.0.1/config.js 24 | sudo mkdir -p /var/www/kibana3 25 | sudo cp -R ~/kibana-3.0.1/* /var/www/kibana3 26 | 27 | # Install Nginx 28 | sudo apt-get install -y nginx 29 | sudo cp /vagrant/build/nginx/kibanaelastic.conf /etc/nginx/sites-available/default 30 | sudo apt-get install -y apache2-utils 31 | echo "password" | sudo htpasswd -c -i /etc/nginx/conf.d/logs.logstashdemo.com.passwd kibana 32 | sudo service nginx restart 33 | 34 | # Hosts File 35 | echo '127.0.0.1 logs.logstashdemo.com' | sudo tee --append /etc/hosts 36 | echo '10.0.4.56 web.logstashdemo.com' | sudo tee --append /etc/hosts 37 | 38 | # Install logstash 39 | sudo apt-get install -y logstash=1.4.2-1-2c0f5a1 40 | 41 | # Create SSL Certs 42 | sudo mkdir -p /etc/pki/tls/certs 43 | sudo mkdir -p /etc/pki/tls/private 44 | 45 | sudo cp /vagrant/build/artifacts/logstash-forwarder.crt /etc/pki/tls/certs 46 | sudo cp /vagrant/build/artifacts/logstash-forwarder.key /etc/pki/tls/private 47 | 48 | sudo cp /vagrant/build/logstash/01-lumberjack-input.conf /etc/logstash/conf.d/01-lumberjack-input.conf 49 | sudo cp /vagrant/build/logstash/10-filter.conf /etc/logstash/conf.d/10-filter.conf 50 |
sudo cp /vagrant/build/logstash/30-lumberjack-output.conf /etc/logstash/conf.d/30-lumberjack-output.conf 51 | 52 | sudo service logstash restart 53 | -------------------------------------------------------------------------------- /build/provision-web.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | sudo apt-get update 3 | 4 | # Install Required packages 5 | sudo apt-get install -y logstash-forwarder 6 | sudo apt-get install -y git 7 | sudo apt-get install -y nginx 8 | sudo apt-get install -y php5-fpm php5 php5-dev 9 | sudo apt-get install -y varnish 10 | sudo apt-get install -y supervisor 11 | 12 | # Configure Nginx 13 | sudo cp /vagrant/build/nginx/logdemo.conf /etc/nginx/sites-available/default 14 | sudo service nginx restart 15 | 16 | # Configure Varnish 17 | sudo cp /vagrant/build/varnish/varnish /etc/default/varnish 18 | sudo cp /vagrant/build/varnish/default.vcl /etc/varnish/default.vcl 19 | sudo service varnish restart 20 | 21 | # Configure Composer 22 | curl -sS https://getcomposer.org/installer | php 23 | sudo mv composer.phar /usr/local/bin/composer 24 | 25 | # Add log files 26 | sudo touch /var/log/logstash-forwarder.log 27 | sudo touch /var/log/app.log 28 | sudo touch /var/log/bus.log 29 | sudo chown www-data /var/log/app.log 30 | sudo chown www-data /var/log/bus.log 31 | sudo chmod 777 /var/log/logstash-forwarder.log 32 | 33 | # Hosts File 34 | echo '10.0.4.55 logs.logstashdemo.com' | sudo tee --append /etc/hosts 35 | echo '127.0.0.1 web.logstashdemo.com' | sudo tee --append /etc/hosts 36 | 37 | # Configure The Logstash Forwarder 38 | sudo cp /vagrant/build/logstash/logstash-forwarder /usr/bin/logstash-forwarder 39 | sudo cp /vagrant/build/supervisord/supervisord.conf /etc/supervisor/supervisord.conf 40 | 41 | sudo mkdir -p /etc/pki/tls/certs 42 | sudo cp /vagrant/build/artifacts/logstash-forwarder.crt /etc/pki/tls/certs/ 43 | 44 | sudo service supervisor restart 45 | 
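46 | # Optional sanity checks (illustrative only, not part of the provisioning
47 | # steps above): confirm the forwarder runs under supervisor and is shipping
48 | # to the logs VM.
49 | # sudo supervisorctl status
50 | # tail -n 20 /var/log/logstash-forwarder.log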
-------------------------------------------------------------------------------- /build/ssl/ssl.conf: -------------------------------------------------------------------------------- 1 | # 2 | # OpenSSL example configuration file. 3 | # This is mostly being used for generation of certificate requests. 4 | # 5 | 6 | # This definition stops the following lines choking if HOME isn't 7 | # defined. 8 | HOME = . 9 | RANDFILE = $ENV::HOME/.rnd 10 | 11 | # Extra OBJECT IDENTIFIER info: 12 | #oid_file = $ENV::HOME/.oid 13 | oid_section = new_oids 14 | 15 | # To use this configuration file with the "-extfile" option of the 16 | # "openssl x509" utility, name here the section containing the 17 | # X.509v3 extensions to use: 18 | # extensions = 19 | # (Alternatively, use a configuration file that has only 20 | # X.509v3 extensions in its main [= default] section.) 21 | 22 | [ new_oids ] 23 | 24 | # We can add new OIDs in here for use by 'ca', 'req' and 'ts'. 25 | # Add a simple OID like this: 26 | # testoid1=1.2.3.4 27 | # Or use config file substitution like this: 28 | # testoid2=${testoid1}.5.6 29 | 30 | # Policies used by the TSA examples. 31 | tsa_policy1 = 1.2.3.4.1 32 | tsa_policy2 = 1.2.3.4.5.6 33 | tsa_policy3 = 1.2.3.4.5.7 34 | 35 | #################################################################### 36 | [ ca ] 37 | default_ca = CA_default # The default ca section 38 | 39 | #################################################################### 40 | [ CA_default ] 41 | 42 | dir = ./demoCA # Where everything is kept 43 | certs = $dir/certs # Where the issued certs are kept 44 | crl_dir = $dir/crl # Where the issued crl are kept 45 | database = $dir/index.txt # database index file. 46 | #unique_subject = no # Set to 'no' to allow creation of 47 | # several ctificates with same subject. 48 | new_certs_dir = $dir/newcerts # default place for new certs. 
49 | 50 | certificate = $dir/cacert.pem # The CA certificate 51 | serial = $dir/serial # The current serial number 52 | crlnumber = $dir/crlnumber # the current crl number 53 | # must be commented out to leave a V1 CRL 54 | crl = $dir/crl.pem # The current CRL 55 | private_key = $dir/private/cakey.pem# The private key 56 | RANDFILE = $dir/private/.rand # private random number file 57 | 58 | x509_extensions = usr_cert # The extentions to add to the cert 59 | 60 | # Comment out the following two lines for the "traditional" 61 | # (and highly broken) format. 62 | name_opt = ca_default # Subject Name options 63 | cert_opt = ca_default # Certificate field options 64 | 65 | # Extension copying option: use with caution. 66 | # copy_extensions = copy 67 | 68 | # Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs 69 | # so this is commented out by default to leave a V1 CRL. 70 | # crlnumber must also be commented out to leave a V1 CRL. 71 | # crl_extensions = crl_ext 72 | 73 | default_days = 365 # how long to certify for 74 | default_crl_days= 30 # how long before next CRL 75 | default_md = default # use public key default MD 76 | preserve = no # keep passed DN ordering 77 | 78 | # A few difference way of specifying how similar the request should look 79 | # For type CA, the listed attributes must be the same, and the optional 80 | # and supplied fields are just that :-) 81 | policy = policy_match 82 | 83 | # For the CA policy 84 | [ policy_match ] 85 | countryName = match 86 | stateOrProvinceName = match 87 | organizationName = match 88 | organizationalUnitName = optional 89 | commonName = supplied 90 | emailAddress = optional 91 | 92 | # For the 'anything' policy 93 | # At this point in time, you must list all acceptable 'object' 94 | # types. 
95 | [ policy_anything ] 96 | countryName = optional 97 | stateOrProvinceName = optional 98 | localityName = optional 99 | organizationName = optional 100 | organizationalUnitName = optional 101 | commonName = supplied 102 | emailAddress = optional 103 | 104 | #################################################################### 105 | [ req ] 106 | default_bits = 2048 107 | default_keyfile = privkey.pem 108 | distinguished_name = req_distinguished_name 109 | attributes = req_attributes 110 | x509_extensions = v3_ca # The extentions to add to the self signed cert 111 | 112 | # Passwords for private keys if not present they will be prompted for 113 | # input_password = secret 114 | # output_password = secret 115 | 116 | # This sets a mask for permitted string types. There are several options. 117 | # default: PrintableString, T61String, BMPString. 118 | # pkix : PrintableString, BMPString (PKIX recommendation before 2004) 119 | # utf8only: only UTF8Strings (PKIX recommendation after 2004). 120 | # nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings). 121 | # MASK:XXXX a literal mask value. 122 | # WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings. 
123 | string_mask = utf8only 124 | 125 | # req_extensions = v3_req # The extensions to add to a certificate request 126 | 127 | [ req_distinguished_name ] 128 | countryName = Country Name (2 letter code) 129 | countryName_default = AU 130 | countryName_min = 2 131 | countryName_max = 2 132 | 133 | stateOrProvinceName = State or Province Name (full name) 134 | stateOrProvinceName_default = Some-State 135 | 136 | localityName = Locality Name (eg, city) 137 | 138 | 0.organizationName = Organization Name (eg, company) 139 | 0.organizationName_default = Internet Widgits Pty Ltd 140 | 141 | # we can do this but it is not needed normally :-) 142 | #1.organizationName = Second Organization Name (eg, company) 143 | #1.organizationName_default = World Wide Web Pty Ltd 144 | 145 | organizationalUnitName = Organizational Unit Name (eg, section) 146 | #organizationalUnitName_default = 147 | 148 | commonName = Common Name (e.g. server FQDN or YOUR name) 149 | commonName_max = 64 150 | 151 | emailAddress = Email Address 152 | emailAddress_max = 64 153 | 154 | # SET-ex3 = SET extension number 3 155 | 156 | [ req_attributes ] 157 | challengePassword = A challenge password 158 | challengePassword_min = 4 159 | challengePassword_max = 20 160 | 161 | unstructuredName = An optional company name 162 | 163 | [ usr_cert ] 164 | 165 | # These extensions are added when 'ca' signs a request. 166 | 167 | # This goes against PKIX guidelines but some CAs do it and some software 168 | # requires this to avoid interpreting an end user certificate as a CA. 169 | 170 | basicConstraints=CA:FALSE 171 | 172 | # Here are some examples of the usage of nsCertType. If it is omitted 173 | # the certificate can be used for anything *except* object signing. 174 | 175 | # This is OK for an SSL server. 176 | # nsCertType = server 177 | 178 | # For an object signing certificate this would be used. 
179 | # nsCertType = objsign 180 | 181 | # For normal client use this is typical 182 | # nsCertType = client, email 183 | 184 | # and for everything including object signing: 185 | # nsCertType = client, email, objsign 186 | 187 | # This is typical in keyUsage for a client certificate. 188 | # keyUsage = nonRepudiation, digitalSignature, keyEncipherment 189 | 190 | # This will be displayed in Netscape's comment listbox. 191 | nsComment = "OpenSSL Generated Certificate" 192 | 193 | # PKIX recommendations harmless if included in all certificates. 194 | subjectKeyIdentifier=hash 195 | authorityKeyIdentifier=keyid,issuer 196 | 197 | # This stuff is for subjectAltName and issuerAltname. 198 | # Import the email address. 199 | # subjectAltName=email:copy 200 | # An alternative to produce certificates that aren't 201 | # deprecated according to PKIX. 202 | # subjectAltName=email:move 203 | 204 | # Copy subject details 205 | # issuerAltName=issuer:copy 206 | 207 | #nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem 208 | #nsBaseUrl 209 | #nsRevocationUrl 210 | #nsRenewalUrl 211 | #nsCaPolicyUrl 212 | #nsSslServerName 213 | 214 | # This is required for TSA certificates. 215 | # extendedKeyUsage = critical,timeStamping 216 | 217 | [ v3_req ] 218 | 219 | # Extensions to add to a certificate request 220 | 221 | basicConstraints = CA:FALSE 222 | keyUsage = nonRepudiation, digitalSignature, keyEncipherment 223 | 224 | [ v3_ca ] 225 | 226 | 227 | # Extensions for a typical CA 228 | 229 | 230 | # PKIX recommendation. 231 | 232 | subjectKeyIdentifier=hash 233 | 234 | authorityKeyIdentifier=keyid:always,issuer 235 | 236 | # This is what PKIX recommends but some broken software chokes on critical 237 | # extensions. 238 | #basicConstraints = critical,CA:true 239 | # So we do this instead. 240 | basicConstraints = CA:true 241 | 242 | # Key usage: this is typical for a CA certificate. 
However since it will 243 | # prevent it being used as an test self-signed certificate it is best 244 | # left out by default. 245 | # keyUsage = cRLSign, keyCertSign 246 | 247 | # Some might want this also 248 | # nsCertType = sslCA, emailCA 249 | 250 | # Include email address in subject alt name: another PKIX recommendation 251 | # subjectAltName=email:copy 252 | # Copy issuer details 253 | # issuerAltName=issuer:copy 254 | 255 | # DER hex encoding of an extension: beware experts only! 256 | # obj=DER:02:03 257 | # Where 'obj' is a standard or added object 258 | # You can even override a supported extension: 259 | # basicConstraints= critical, DER:30:03:01:01:FF 260 | 261 | subjectAltName = IP:10.0.4.55 262 | 263 | [ crl_ext ] 264 | 265 | # CRL extensions. 266 | # Only issuerAltName and authorityKeyIdentifier make any sense in a CRL. 267 | 268 | # issuerAltName=issuer:copy 269 | authorityKeyIdentifier=keyid:always 270 | 271 | [ proxy_cert_ext ] 272 | # These extensions should be added when creating a proxy certificate 273 | 274 | # This goes against PKIX guidelines but some CAs do it and some software 275 | # requires this to avoid interpreting an end user certificate as a CA. 276 | 277 | basicConstraints=CA:FALSE 278 | 279 | # Here are some examples of the usage of nsCertType. If it is omitted 280 | # the certificate can be used for anything *except* object signing. 281 | 282 | # This is OK for an SSL server. 283 | # nsCertType = server 284 | 285 | # For an object signing certificate this would be used. 286 | # nsCertType = objsign 287 | 288 | # For normal client use this is typical 289 | # nsCertType = client, email 290 | 291 | # and for everything including object signing: 292 | # nsCertType = client, email, objsign 293 | 294 | # This is typical in keyUsage for a client certificate. 295 | # keyUsage = nonRepudiation, digitalSignature, keyEncipherment 296 | 297 | # This will be displayed in Netscape's comment listbox. 
298 | nsComment = "OpenSSL Generated Certificate" 299 | 300 | # PKIX recommendations harmless if included in all certificates. 301 | subjectKeyIdentifier=hash 302 | authorityKeyIdentifier=keyid,issuer 303 | 304 | # This stuff is for subjectAltName and issuerAltname. 305 | # Import the email address. 306 | # subjectAltName=email:copy 307 | # An alternative to produce certificates that aren't 308 | # deprecated according to PKIX. 309 | # subjectAltName=email:move 310 | 311 | # Copy subject details 312 | # issuerAltName=issuer:copy 313 | 314 | #nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem 315 | #nsBaseUrl 316 | #nsRevocationUrl 317 | #nsRenewalUrl 318 | #nsCaPolicyUrl 319 | #nsSslServerName 320 | 321 | # This really needs to be in place for it to be a proxy certificate. 322 | proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo 323 | 324 | #################################################################### 325 | [ tsa ] 326 | 327 | default_tsa = tsa_config1 # the default TSA section 328 | 329 | [ tsa_config1 ] 330 | 331 | # These are used by the TSA reply generation only. 332 | dir = ./demoCA # TSA root directory 333 | serial = $dir/tsaserial # The current serial number (mandatory) 334 | crypto_device = builtin # OpenSSL engine to use for signing 335 | signer_cert = $dir/tsacert.pem # The TSA signing certificate 336 | # (optional) 337 | certs = $dir/cacert.pem # Certificate chain to include in reply 338 | # (optional) 339 | signer_key = $dir/private/tsakey.pem # The TSA private key (optional) 340 | 341 | default_policy = tsa_policy1 # Policy if request did not specify it 342 | # (optional) 343 | other_policies = tsa_policy2, tsa_policy3 # acceptable policies (optional) 344 | digests = md5, sha1 # Acceptable message digests (mandatory) 345 | accuracy = secs:1, millisecs:500, microsecs:100 # (optional) 346 | clock_precision_digits = 0 # number of digits after dot. (optional) 347 | ordering = yes # Is ordering defined for timestamps? 
348 | # (optional, default: no) 349 | tsa_name = yes # Must the TSA name be included in the reply? 350 | # (optional, default: no) 351 | ess_cert_id_chain = no # Must the ESS cert id chain be included? 352 | # (optional, default: no) 353 | 354 | -------------------------------------------------------------------------------- /build/supervisord/supervisord.conf: -------------------------------------------------------------------------------- 1 | ; Supervisor config file. 2 | 3 | 4 | [unix_http_server] 5 | file=/var/run/supervisor.sock ; (the path to the socket file) 6 | chmod=0700 ; socket file mode (default 0700) 7 | 8 | [supervisord] 9 | logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log) 10 | logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB) 11 | logfile_backups=10 ; (num of main logfile rotation backups;default 10) 12 | loglevel=info ; (log level;default info; others: debug,warn,trace) 13 | pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid) 14 | nodaemon=false ; (start in foreground if true;default false) 15 | minfds=1024 ; (min. avail startup file descriptors;default 1024) 16 | minprocs=200 ; (min. 
avail process descriptors;default 200) 17 | childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP) 18 | directory=/home/vagrant ; (Working directory for supervisord spawned processes) 19 | 20 | [rpcinterface:supervisor] 21 | supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface 22 | 23 | [supervisorctl] 24 | serverurl=unix:///var/run/supervisor.sock 25 | 26 | [program:logstash] 27 | command=/usr/bin/sudo /usr/bin/logstash-forwarder --config /vagrant/build/logstash/logstash-forwarder.conf 28 | process_name=%(program_name)s_%(process_num)02d 29 | numprocs=1 30 | user=vagrant 31 | -------------------------------------------------------------------------------- /build/varnish/default.vcl: -------------------------------------------------------------------------------- 1 | backend default { 2 | .host = "localhost"; 3 | .port = "8080"; 4 | } 5 | 6 | # Let all traffic through 7 | sub vcl_recv { 8 | return(pass); 9 | } -------------------------------------------------------------------------------- /build/varnish/varnish: -------------------------------------------------------------------------------- 1 | # Configuration file for varnish 2 | # 3 | # /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK 4 | # to be set from this shell script fragment. 5 | # 6 | # Note: If systemd is installed, this file is obsolete and ignored. You will 7 | # need to copy /lib/systemd/system/varnish.service to /etc/systemd/system/ and 8 | # edit that file. 9 | 10 | # Should we start varnishd at boot? Set to "no" to disable. 11 | START=yes 12 | 13 | # Maximum number of open files (for ulimit -n) 14 | NFILES=131072 15 | 16 | # Maximum locked memory size (for ulimit -l) 17 | # Used for locking the shared memory log in memory. If you increase log size, 18 | # you need to increase this number as well 19 | MEMLOCK=82000 20 | 21 | # Default varnish instance name is the local nodename. 
Can be overridden with 22 | # the -n switch, to have more instances on a single server. 23 | # You may need to uncomment this variable for alternatives 1 and 3 below. 24 | # INSTANCE=$(uname -n) 25 | 26 | # This file contains 4 alternatives, please use only one. 27 | 28 | ## Alternative 1, Minimal configuration, no VCL 29 | # 30 | # Listen on port 6081, administration on localhost:6082, and forward to 31 | # content server on localhost:8080. Use a 1GB fixed-size cache file. 32 | # 33 | # This example uses the INSTANCE variable above, which you need to uncomment. 34 | # 35 | # DAEMON_OPTS="-a :6081 \ 36 | # -T localhost:6082 \ 37 | # -b localhost:8080 \ 38 | # -u varnish -g varnish \ 39 | # -S /etc/varnish/secret \ 40 | # -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" 41 | 42 | 43 | ## Alternative 2, Configuration with VCL 44 | # 45 | # Listen on port 6081, administration on localhost:6082, and forward to 46 | # one content server selected by the vcl file, based on the request. 47 | # 48 | DAEMON_OPTS="-a :80 \ 49 | -T localhost:6082 \ 50 | -f /etc/varnish/default.vcl \ 51 | -S /etc/varnish/secret \ 52 | -s malloc,256m" 53 | 54 | 55 | ## Alternative 3, Advanced configuration 56 | # 57 | # This example uses the INSTANCE variable above, which you need to uncomment. 58 | # 59 | # See varnishd(1) for more information. 60 | # 61 | # # Main configuration file. You probably want to change it :) 62 | # VARNISH_VCL_CONF=/etc/varnish/default.vcl 63 | # 64 | # # Default address and port to bind to 65 | # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify 66 | # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. 
67 | # VARNISH_LISTEN_ADDRESS= 68 | # VARNISH_LISTEN_PORT=6081 69 | # 70 | # # Telnet admin interface listen address and port 71 | # VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 72 | # VARNISH_ADMIN_LISTEN_PORT=6082 73 | # 74 | # # The minimum number of worker threads to start 75 | # VARNISH_MIN_THREADS=1 76 | # 77 | # # The Maximum number of worker threads to start 78 | # VARNISH_MAX_THREADS=1000 79 | # 80 | # # Idle timeout for worker threads 81 | # VARNISH_THREAD_TIMEOUT=120 82 | # 83 | # # Cache file location 84 | # VARNISH_STORAGE_FILE=/var/lib/varnish/$INSTANCE/varnish_storage.bin 85 | # 86 | # # Cache file size: in bytes, optionally using k / M / G / T suffix, 87 | # # or in percentage of available disk space using the % suffix. 88 | # VARNISH_STORAGE_SIZE=1G 89 | # 90 | # # File containing administration secret 91 | # VARNISH_SECRET_FILE=/etc/varnish/secret 92 | # 93 | # # Backend storage specification 94 | # VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" 95 | # 96 | # # Default TTL used when the backend does not specify one 97 | # VARNISH_TTL=120 98 | # 99 | # # DAEMON_OPTS is used by the init script. If you add or remove options, make 100 | # # sure you update this section, too. 
101 | # DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \ 102 | # -f ${VARNISH_VCL_CONF} \ 103 | # -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \ 104 | # -t ${VARNISH_TTL} \ 105 | # -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \ 106 | # -S ${VARNISH_SECRET_FILE} \ 107 | # -s ${VARNISH_STORAGE}" 108 | # 109 | 110 | 111 | ## Alternative 4, Do It Yourself 112 | # 113 | # DAEMON_OPTS="" 114 | -------------------------------------------------------------------------------- /composer.json: -------------------------------------------------------------------------------- 1 | { 2 | "require": { 3 | "silex/silex": "~1.2", 4 | "monolog/monolog": "~1.12" 5 | } 6 | } 7 | -------------------------------------------------------------------------------- /composer.lock: -------------------------------------------------------------------------------- 1 | { 2 | "_readme": [ 3 | "This file locks the dependencies of your project to a known state", 4 | "Read more about it at http://getcomposer.org/doc/01-basic-usage.md#composer-lock-the-lock-file", 5 | "This file is @generated automatically" 6 | ], 7 | "hash": "2a4f39520a9a164c1133a4bf015f4710", 8 | "packages": [ 9 | { 10 | "name": "monolog/monolog", 11 | "version": "1.12.0", 12 | "source": { 13 | "type": "git", 14 | "url": "https://github.com/Seldaek/monolog.git", 15 | "reference": "1fbe8c2641f2b163addf49cc5e18f144bec6b19f" 16 | }, 17 | "dist": { 18 | "type": "zip", 19 | "url": "https://api.github.com/repos/Seldaek/monolog/zipball/1fbe8c2641f2b163addf49cc5e18f144bec6b19f", 20 | "reference": "1fbe8c2641f2b163addf49cc5e18f144bec6b19f", 21 | "shasum": "" 22 | }, 23 | "require": { 24 | "php": ">=5.3.0", 25 | "psr/log": "~1.0" 26 | }, 27 | "provide": { 28 | "psr/log-implementation": "1.0.0" 29 | }, 30 | "require-dev": { 31 | "aws/aws-sdk-php": "~2.4, >2.4.8", 32 | "doctrine/couchdb": "~1.0@dev", 33 | "graylog2/gelf-php": "~1.0", 34 | "phpunit/phpunit": "~4.0", 35 | 
"raven/raven": "~0.5", 36 | "ruflin/elastica": "0.90.*", 37 | "videlalvaro/php-amqplib": "~2.4" 38 | }, 39 | "suggest": { 40 | "aws/aws-sdk-php": "Allow sending log messages to AWS services like DynamoDB", 41 | "doctrine/couchdb": "Allow sending log messages to a CouchDB server", 42 | "ext-amqp": "Allow sending log messages to an AMQP server (1.0+ required)", 43 | "ext-mongo": "Allow sending log messages to a MongoDB server", 44 | "graylog2/gelf-php": "Allow sending log messages to a GrayLog2 server", 45 | "raven/raven": "Allow sending log messages to a Sentry server", 46 | "rollbar/rollbar": "Allow sending log messages to Rollbar", 47 | "ruflin/elastica": "Allow sending log messages to an Elastic Search server", 48 | "videlalvaro/php-amqplib": "Allow sending log messages to an AMQP server using php-amqplib" 49 | }, 50 | "type": "library", 51 | "extra": { 52 | "branch-alias": { 53 | "dev-master": "1.12.x-dev" 54 | } 55 | }, 56 | "autoload": { 57 | "psr-4": { 58 | "Monolog\\": "src/Monolog" 59 | } 60 | }, 61 | "notification-url": "https://packagist.org/downloads/", 62 | "license": [ 63 | "MIT" 64 | ], 65 | "authors": [ 66 | { 67 | "name": "Jordi Boggiano", 68 | "email": "j.boggiano@seld.be", 69 | "homepage": "http://seld.be" 70 | } 71 | ], 72 | "description": "Sends your logs to files, sockets, inboxes, databases and various web services", 73 | "homepage": "http://github.com/Seldaek/monolog", 74 | "keywords": [ 75 | "log", 76 | "logging", 77 | "psr-3" 78 | ], 79 | "time": "2014-12-29 21:29:35" 80 | }, 81 | { 82 | "name": "pimple/pimple", 83 | "version": "v1.1.1", 84 | "source": { 85 | "type": "git", 86 | "url": "https://github.com/silexphp/Pimple.git", 87 | "reference": "2019c145fe393923f3441b23f29bbdfaa5c58c4d" 88 | }, 89 | "dist": { 90 | "type": "zip", 91 | "url": "https://api.github.com/repos/silexphp/Pimple/zipball/2019c145fe393923f3441b23f29bbdfaa5c58c4d", 92 | "reference": "2019c145fe393923f3441b23f29bbdfaa5c58c4d", 93 | "shasum": "" 94 | }, 95 | "require": { 
96 | "php": ">=5.3.0" 97 | }, 98 | "type": "library", 99 | "extra": { 100 | "branch-alias": { 101 | "dev-master": "1.1.x-dev" 102 | } 103 | }, 104 | "autoload": { 105 | "psr-0": { 106 | "Pimple": "lib/" 107 | } 108 | }, 109 | "notification-url": "https://packagist.org/downloads/", 110 | "license": [ 111 | "MIT" 112 | ], 113 | "authors": [ 114 | { 115 | "name": "Fabien Potencier", 116 | "email": "fabien@symfony.com" 117 | } 118 | ], 119 | "description": "Pimple is a simple Dependency Injection Container for PHP 5.3", 120 | "homepage": "http://pimple.sensiolabs.org", 121 | "keywords": [ 122 | "container", 123 | "dependency injection" 124 | ], 125 | "time": "2013-11-22 08:30:29" 126 | }, 127 | { 128 | "name": "psr/log", 129 | "version": "1.0.0", 130 | "source": { 131 | "type": "git", 132 | "url": "https://github.com/php-fig/log.git", 133 | "reference": "fe0936ee26643249e916849d48e3a51d5f5e278b" 134 | }, 135 | "dist": { 136 | "type": "zip", 137 | "url": "https://api.github.com/repos/php-fig/log/zipball/fe0936ee26643249e916849d48e3a51d5f5e278b", 138 | "reference": "fe0936ee26643249e916849d48e3a51d5f5e278b", 139 | "shasum": "" 140 | }, 141 | "type": "library", 142 | "autoload": { 143 | "psr-0": { 144 | "Psr\\Log\\": "" 145 | } 146 | }, 147 | "notification-url": "https://packagist.org/downloads/", 148 | "license": [ 149 | "MIT" 150 | ], 151 | "authors": [ 152 | { 153 | "name": "PHP-FIG", 154 | "homepage": "http://www.php-fig.org/" 155 | } 156 | ], 157 | "description": "Common interface for logging libraries", 158 | "keywords": [ 159 | "log", 160 | "psr", 161 | "psr-3" 162 | ], 163 | "time": "2012-12-21 11:40:51" 164 | }, 165 | { 166 | "name": "silex/silex", 167 | "version": "v1.2.2", 168 | "source": { 169 | "type": "git", 170 | "url": "https://github.com/silexphp/Silex.git", 171 | "reference": "8c5e86eb97f3eee633729b22e950082fb5591328" 172 | }, 173 | "dist": { 174 | "type": "zip", 175 | "url": 
"https://api.github.com/repos/silexphp/Silex/zipball/8c5e86eb97f3eee633729b22e950082fb5591328", 176 | "reference": "8c5e86eb97f3eee633729b22e950082fb5591328", 177 | "shasum": "" 178 | }, 179 | "require": { 180 | "php": ">=5.3.3", 181 | "pimple/pimple": "~1.0", 182 | "symfony/event-dispatcher": ">=2.3,<2.6-dev", 183 | "symfony/http-foundation": ">=2.3,<2.6-dev", 184 | "symfony/http-kernel": ">=2.3,<2.6-dev", 185 | "symfony/routing": ">=2.3,<2.6-dev" 186 | }, 187 | "require-dev": { 188 | "doctrine/dbal": "~2.2", 189 | "monolog/monolog": "~1.4,>=1.4.1", 190 | "phpunit/phpunit": "~3.7", 191 | "swiftmailer/swiftmailer": "5.*", 192 | "symfony/browser-kit": ">=2.3,<2.6-dev", 193 | "symfony/config": ">=2.3,<2.6-dev", 194 | "symfony/css-selector": ">=2.3,<2.6-dev", 195 | "symfony/debug": ">=2.3,<2.6-dev", 196 | "symfony/dom-crawler": ">=2.3,<2.6-dev", 197 | "symfony/finder": ">=2.3,<2.6-dev", 198 | "symfony/form": ">=2.3,<2.6-dev", 199 | "symfony/locale": ">=2.3,<2.6-dev", 200 | "symfony/monolog-bridge": ">=2.3,<2.6-dev", 201 | "symfony/options-resolver": ">=2.3,<2.6-dev", 202 | "symfony/process": ">=2.3,<2.6-dev", 203 | "symfony/security": ">=2.3,<2.6-dev", 204 | "symfony/serializer": ">=2.3,<2.6-dev", 205 | "symfony/translation": ">=2.3,<2.6-dev", 206 | "symfony/twig-bridge": ">=2.3,<2.6-dev", 207 | "symfony/validator": ">=2.3,<2.6-dev", 208 | "twig/twig": ">=1.8.0,<2.0-dev" 209 | }, 210 | "suggest": { 211 | "symfony/browser-kit": ">=2.3,<2.6-dev", 212 | "symfony/css-selector": ">=2.3,<2.6-dev", 213 | "symfony/dom-crawler": ">=2.3,<2.6-dev", 214 | "symfony/form": ">=2.3,<2.6-dev" 215 | }, 216 | "type": "library", 217 | "extra": { 218 | "branch-alias": { 219 | "dev-master": "1.2.x-dev" 220 | } 221 | }, 222 | "autoload": { 223 | "psr-0": { 224 | "Silex": "src/" 225 | } 226 | }, 227 | "notification-url": "https://packagist.org/downloads/", 228 | "license": [ 229 | "MIT" 230 | ], 231 | "authors": [ 232 | { 233 | "name": "Fabien Potencier", 234 | "email": "fabien@symfony.com" 
235 | }, 236 | { 237 | "name": "Igor Wiedler", 238 | "email": "igor@wiedler.ch" 239 | } 240 | ], 241 | "description": "The PHP micro-framework based on the Symfony2 Components", 242 | "homepage": "http://silex.sensiolabs.org", 243 | "keywords": [ 244 | "microframework" 245 | ], 246 | "time": "2014-09-26 09:32:30" 247 | }, 248 | { 249 | "name": "symfony/debug", 250 | "version": "v2.6.3", 251 | "target-dir": "Symfony/Component/Debug", 252 | "source": { 253 | "type": "git", 254 | "url": "https://github.com/symfony/Debug.git", 255 | "reference": "7213c8200d60728c9d4c56d5830aa2d80ae3d25d" 256 | }, 257 | "dist": { 258 | "type": "zip", 259 | "url": "https://api.github.com/repos/symfony/Debug/zipball/7213c8200d60728c9d4c56d5830aa2d80ae3d25d", 260 | "reference": "7213c8200d60728c9d4c56d5830aa2d80ae3d25d", 261 | "shasum": "" 262 | }, 263 | "require": { 264 | "php": ">=5.3.3", 265 | "psr/log": "~1.0" 266 | }, 267 | "require-dev": { 268 | "symfony/class-loader": "~2.2", 269 | "symfony/http-foundation": "~2.1", 270 | "symfony/http-kernel": "~2.3.24|~2.5.9|~2.6,>=2.6.2" 271 | }, 272 | "suggest": { 273 | "symfony/http-foundation": "", 274 | "symfony/http-kernel": "" 275 | }, 276 | "type": "library", 277 | "extra": { 278 | "branch-alias": { 279 | "dev-master": "2.6-dev" 280 | } 281 | }, 282 | "autoload": { 283 | "psr-0": { 284 | "Symfony\\Component\\Debug\\": "" 285 | } 286 | }, 287 | "notification-url": "https://packagist.org/downloads/", 288 | "license": [ 289 | "MIT" 290 | ], 291 | "authors": [ 292 | { 293 | "name": "Symfony Community", 294 | "homepage": "http://symfony.com/contributors" 295 | }, 296 | { 297 | "name": "Fabien Potencier", 298 | "email": "fabien@symfony.com" 299 | } 300 | ], 301 | "description": "Symfony Debug Component", 302 | "homepage": "http://symfony.com", 303 | "time": "2015-01-05 17:41:06" 304 | }, 305 | { 306 | "name": "symfony/event-dispatcher", 307 | "version": "v2.5.9", 308 | "target-dir": "Symfony/Component/EventDispatcher", 309 | "source": { 310 | 
"type": "git", 311 | "url": "https://github.com/symfony/EventDispatcher.git", 312 | "reference": "3694afc8bcddabc37e1f1ab76e6fd93e0f187415" 313 | }, 314 | "dist": { 315 | "type": "zip", 316 | "url": "https://api.github.com/repos/symfony/EventDispatcher/zipball/3694afc8bcddabc37e1f1ab76e6fd93e0f187415", 317 | "reference": "3694afc8bcddabc37e1f1ab76e6fd93e0f187415", 318 | "shasum": "" 319 | }, 320 | "require": { 321 | "php": ">=5.3.3" 322 | }, 323 | "require-dev": { 324 | "psr/log": "~1.0", 325 | "symfony/config": "~2.0,>=2.0.5", 326 | "symfony/dependency-injection": "~2.0,>=2.0.5,<2.6.0", 327 | "symfony/stopwatch": "~2.3" 328 | }, 329 | "suggest": { 330 | "symfony/dependency-injection": "", 331 | "symfony/http-kernel": "" 332 | }, 333 | "type": "library", 334 | "extra": { 335 | "branch-alias": { 336 | "dev-master": "2.5-dev" 337 | } 338 | }, 339 | "autoload": { 340 | "psr-0": { 341 | "Symfony\\Component\\EventDispatcher\\": "" 342 | } 343 | }, 344 | "notification-url": "https://packagist.org/downloads/", 345 | "license": [ 346 | "MIT" 347 | ], 348 | "authors": [ 349 | { 350 | "name": "Symfony Community", 351 | "homepage": "http://symfony.com/contributors" 352 | }, 353 | { 354 | "name": "Fabien Potencier", 355 | "email": "fabien@symfony.com" 356 | } 357 | ], 358 | "description": "Symfony EventDispatcher Component", 359 | "homepage": "http://symfony.com", 360 | "time": "2015-01-05 08:51:41" 361 | }, 362 | { 363 | "name": "symfony/http-foundation", 364 | "version": "v2.5.9", 365 | "target-dir": "Symfony/Component/HttpFoundation", 366 | "source": { 367 | "type": "git", 368 | "url": "https://github.com/symfony/HttpFoundation.git", 369 | "reference": "154d6c9ae8f7c27799a6119688dbd6026234441a" 370 | }, 371 | "dist": { 372 | "type": "zip", 373 | "url": "https://api.github.com/repos/symfony/HttpFoundation/zipball/154d6c9ae8f7c27799a6119688dbd6026234441a", 374 | "reference": "154d6c9ae8f7c27799a6119688dbd6026234441a", 375 | "shasum": "" 376 | }, 377 | "require": { 378 | 
"php": ">=5.3.3" 379 | }, 380 | "require-dev": { 381 | "symfony/expression-language": "~2.4" 382 | }, 383 | "type": "library", 384 | "extra": { 385 | "branch-alias": { 386 | "dev-master": "2.5-dev" 387 | } 388 | }, 389 | "autoload": { 390 | "psr-0": { 391 | "Symfony\\Component\\HttpFoundation\\": "" 392 | }, 393 | "classmap": [ 394 | "Symfony/Component/HttpFoundation/Resources/stubs" 395 | ] 396 | }, 397 | "notification-url": "https://packagist.org/downloads/", 398 | "license": [ 399 | "MIT" 400 | ], 401 | "authors": [ 402 | { 403 | "name": "Symfony Community", 404 | "homepage": "http://symfony.com/contributors" 405 | }, 406 | { 407 | "name": "Fabien Potencier", 408 | "email": "fabien@symfony.com" 409 | } 410 | ], 411 | "description": "Symfony HttpFoundation Component", 412 | "homepage": "http://symfony.com", 413 | "time": "2015-01-03 11:12:44" 414 | }, 415 | { 416 | "name": "symfony/http-kernel", 417 | "version": "v2.5.9", 418 | "target-dir": "Symfony/Component/HttpKernel", 419 | "source": { 420 | "type": "git", 421 | "url": "https://github.com/symfony/HttpKernel.git", 422 | "reference": "a218b9ba87b24c440e4e9cd171c880e83796a5bb" 423 | }, 424 | "dist": { 425 | "type": "zip", 426 | "url": "https://api.github.com/repos/symfony/HttpKernel/zipball/a218b9ba87b24c440e4e9cd171c880e83796a5bb", 427 | "reference": "a218b9ba87b24c440e4e9cd171c880e83796a5bb", 428 | "shasum": "" 429 | }, 430 | "require": { 431 | "php": ">=5.3.3", 432 | "psr/log": "~1.0", 433 | "symfony/debug": "~2.5.9|~2.6,>=2.6.2", 434 | "symfony/event-dispatcher": "~2.5.9|~2.6,>=2.6.2", 435 | "symfony/http-foundation": "~2.5" 436 | }, 437 | "require-dev": { 438 | "symfony/browser-kit": "~2.3", 439 | "symfony/class-loader": "~2.1", 440 | "symfony/config": "~2.0,>=2.0.5", 441 | "symfony/console": "~2.2", 442 | "symfony/css-selector": "~2.0,>=2.0.5", 443 | "symfony/dependency-injection": "~2.2", 444 | "symfony/dom-crawler": "~2.0,>=2.0.5", 445 | "symfony/expression-language": "~2.4", 446 | "symfony/finder": 
"~2.0,>=2.0.5", 447 | "symfony/process": "~2.0,>=2.0.5", 448 | "symfony/routing": "~2.2", 449 | "symfony/stopwatch": "~2.3", 450 | "symfony/templating": "~2.2" 451 | }, 452 | "suggest": { 453 | "symfony/browser-kit": "", 454 | "symfony/class-loader": "", 455 | "symfony/config": "", 456 | "symfony/console": "", 457 | "symfony/dependency-injection": "", 458 | "symfony/finder": "" 459 | }, 460 | "type": "library", 461 | "extra": { 462 | "branch-alias": { 463 | "dev-master": "2.5-dev" 464 | } 465 | }, 466 | "autoload": { 467 | "psr-0": { 468 | "Symfony\\Component\\HttpKernel\\": "" 469 | } 470 | }, 471 | "notification-url": "https://packagist.org/downloads/", 472 | "license": [ 473 | "MIT" 474 | ], 475 | "authors": [ 476 | { 477 | "name": "Symfony Community", 478 | "homepage": "http://symfony.com/contributors" 479 | }, 480 | { 481 | "name": "Fabien Potencier", 482 | "email": "fabien@symfony.com" 483 | } 484 | ], 485 | "description": "Symfony HttpKernel Component", 486 | "homepage": "http://symfony.com", 487 | "time": "2015-01-07 12:32:08" 488 | }, 489 | { 490 | "name": "symfony/routing", 491 | "version": "v2.5.9", 492 | "target-dir": "Symfony/Component/Routing", 493 | "source": { 494 | "type": "git", 495 | "url": "https://github.com/symfony/Routing.git", 496 | "reference": "47e350dadadabdf64c8dbab499a1132c567f9411" 497 | }, 498 | "dist": { 499 | "type": "zip", 500 | "url": "https://api.github.com/repos/symfony/Routing/zipball/47e350dadadabdf64c8dbab499a1132c567f9411", 501 | "reference": "47e350dadadabdf64c8dbab499a1132c567f9411", 502 | "shasum": "" 503 | }, 504 | "require": { 505 | "php": ">=5.3.3" 506 | }, 507 | "require-dev": { 508 | "doctrine/annotations": "~1.0", 509 | "doctrine/common": "~2.2", 510 | "psr/log": "~1.0", 511 | "symfony/config": "~2.2", 512 | "symfony/expression-language": "~2.4", 513 | "symfony/http-foundation": "~2.3", 514 | "symfony/yaml": "~2.0,>=2.0.5" 515 | }, 516 | "suggest": { 517 | "doctrine/annotations": "For using the annotation loader", 
518 | "symfony/config": "For using the all-in-one router or any loader", 519 | "symfony/expression-language": "For using expression matching", 520 | "symfony/yaml": "For using the YAML loader" 521 | }, 522 | "type": "library", 523 | "extra": { 524 | "branch-alias": { 525 | "dev-master": "2.5-dev" 526 | } 527 | }, 528 | "autoload": { 529 | "psr-0": { 530 | "Symfony\\Component\\Routing\\": "" 531 | } 532 | }, 533 | "notification-url": "https://packagist.org/downloads/", 534 | "license": [ 535 | "MIT" 536 | ], 537 | "authors": [ 538 | { 539 | "name": "Symfony Community", 540 | "homepage": "http://symfony.com/contributors" 541 | }, 542 | { 543 | "name": "Fabien Potencier", 544 | "email": "fabien@symfony.com" 545 | } 546 | ], 547 | "description": "Symfony Routing Component", 548 | "homepage": "http://symfony.com", 549 | "keywords": [ 550 | "router", 551 | "routing", 552 | "uri", 553 | "url" 554 | ], 555 | "time": "2015-01-05 08:51:41" 556 | } 557 | ], 558 | "packages-dev": [], 559 | "aliases": [], 560 | "minimum-stability": "stable", 561 | "stability-flags": [], 562 | "prefer-stable": false, 563 | "prefer-lowest": false, 564 | "platform": [], 565 | "platform-dev": [] 566 | } 567 | -------------------------------------------------------------------------------- /www/index.php: -------------------------------------------------------------------------------- 1 | <?php 2 | 3 | require_once __DIR__ . '/../vendor/autoload.php'; 4 | 5 | use Monolog\Formatter\LogstashFormatter; 6 | use Monolog\Handler\FingersCrossedHandler; 7 | use Monolog\Handler\StreamHandler; 8 | use Monolog\Logger; 9 | use Monolog\Processor\TagProcessor; 10 | use Symfony\Component\EventDispatcher\EventDispatcher; 11 | use Symfony\Component\HttpFoundation\Request; 12 | use Symfony\Component\HttpFoundation\Response; 13 | 14 | $app = new Silex\Application(); 15 | 16 | /** 17 | * A simple hello endpoint; requests to it generate Nginx access logs only. 18 | */ 19 | $app->get( 20 | '/', function () use ($app) { 21 | return 'Hello - I am generating logs as we speak!'; 22 | } 23 | ); 24 | 25 | /** 26 | * This endpoint produces either a successful response or one of a few predefined errors. 27 | * It allows us to demo Logstash's ability to surface errors from logs. 28 | * 29 | * Errors include a 404, a 500 and a successful but slow request.
30 | */ 31 | $app->get( 32 | '/flappy', function (Request $req) use ($app) { 33 | 34 | $option = rand(0, 3); 35 | 36 | switch ($option) { 37 | case 0: 38 | $response = new Response("NOT FOUND", 404); 39 | break; 40 | case 1: 41 | $response = new Response("Something terrible has happened", 500); 42 | break; 43 | case 2: 44 | sleep(3); 45 | $response = new Response("Slow response"); 46 | break; 47 | case 3: 48 | $response = new Response("Normal response"); 49 | break; 50 | } 51 | 52 | return $response; 53 | } 54 | ); 55 | 56 | $app->get( 57 | '/fingerscrossed', function (Request $req) use ($app) { 58 | 59 | // Pick up the required log level from the environment 60 | $logEnv = getenv("LOG_LEVEL"); 61 | $debugLevel = empty($logEnv) ? Logger::WARNING : (int) $logEnv; 62 | 63 | // *** Application Log 64 | // We use this log to record interactions with the code, e.g. DB connections, queries, API calls etc. 65 | // Creating a "fingers crossed" handler allows us to collect debug / info messages but only persist the 66 | // messages to disk if we encounter a log at or above the severity level in the LOG_LEVEL environment variable. 67 | // Why not just hard code Logger::WARNING? Because we can use LOG_LEVEL to force debugging on a 68 | // host-by-host basis if we require.
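        // For example (hypothetical usage, not wired up by the provisioning scripts):
        // exporting LOG_LEVEL=100 (the value of Monolog's Logger::DEBUG) into the PHP
        // environment on one host flushes every buffered message to /var/log/app.log
        // there, while leaving LOG_LEVEL unset keeps the Logger::WARNING trigger:
        //
        //     $ LOG_LEVEL=100 php -S 0.0.0.0:8080 index.php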
69 | $appLog = new Logger('AppLog'); 70 | $appStreamHandler = new StreamHandler('/var/log/app.log', Logger::DEBUG); 71 | $appStreamHandler->setFormatter(new LogstashFormatter("helloapp", "application")); 72 | 73 | // Use the Varnish transaction ID as a per-request correlation ID, falling back to a generated one 74 | 75 | $id = $req->headers->get("X_VARNISH", uniqid("req-id")); 76 | $appLog->pushProcessor(new TagProcessor(['request-id' => $id])); 77 | 78 | $appLog->pushHandler(new FingersCrossedHandler($appStreamHandler, $debugLevel)); 79 | 80 | // In reality this is spread through controllers, models, services etc. 81 | 82 | $appLog->debug("Doing something bootstrappy"); 83 | $appLog->debug("Bootstrap complete"); 84 | $appLog->debug("Calling data source"); 85 | $appLog->debug("Doing query on a remote web service"); 86 | 87 | // If something goes wrong we get all the debug messages + the warning 88 | // If not we get no messages 89 | if (rand(0, 1) === 1) { 90 | sleep(2); 91 | $appLog->warning("Database retry due to connection issue"); 92 | 93 | return new Response("Slow Response", 200); 94 | } else { 95 | $appLog->debug("Returning response"); 96 | 97 | return new Response("Fast Response", 200); 98 | } 99 | } 100 | ); 101 | 102 | /** 103 | * A demo showing how Logstash can surface interesting events from your application log. 104 | */ 105 | $app->get( 106 | '/register', function (Request $req) use ($app) { 107 | 108 | //*** Business Events Log 109 | // We use this log to record business events, e.g. registrations, purchases, logins, password resets 110 | $busLog = new Logger('BusLog'); 111 | $busStreamHandler = new StreamHandler('/var/log/bus.log', Logger::INFO); 112 | $busStreamHandler->setFormatter(new LogstashFormatter("helloapp", "business")); 113 | $busLog->pushHandler($busStreamHandler); 114 | 115 | $dispatcher = new EventDispatcher(); 116 | 117 | // A more advanced implementation could use a subscriber rather than ad-hoc listeners.
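        // For example, a hypothetical subscriber (sketch only; EventSubscriberInterface
        // lives in the symfony/event-dispatcher component already required above) could
        // replace the two listeners:
        //
        // class RegistrationLogSubscriber implements EventSubscriberInterface
        // {
        //     public static function getSubscribedEvents()
        //     {
        //         return array(
        //             'business.registration.pre'  => 'onPreRegistration',
        //             'business.registration.post' => 'onPostRegistration',
        //         );
        //     }
        //
        //     // ... onPreRegistration() / onPostRegistration() would call $busLog->info()
        // }
        //
        // $dispatcher->addSubscriber(new RegistrationLogSubscriber());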
118 | 119 | $dispatcher->addListener( 120 | "business.registration.pre", function () use ($busLog) { 121 | 122 | // Fired before a customer registers 123 | $busLog->info("Customer registering"); 124 | } 125 | ); 126 | 127 | $dispatcher->addListener( 128 | "business.registration.post", function () use ($busLog) { 129 | 130 | // Fired after a customer has registered 131 | $busLog->info("Customer registered"); 132 | } 133 | ); 134 | 135 | // In reality these events are dispatched from deep within your domain layer 136 | $dispatcher->dispatch("business.registration.pre"); 137 | $dispatcher->dispatch("business.registration.post"); 138 | 139 | return new Response("Registered!", 201); 140 | } 141 | ); 142 | 143 | $app->run(); --------------------------------------------------------------------------------