├── CHANGELOG.md
├── CONTRIBUTORS
├── Gemfile
├── LICENSE
├── README.md
├── Rakefile
├── VERSION
├── examples
│   ├── logstash-apache2-to-loganalytics.conf
│   └── logstash-stdin-to-loganalytics.conf
├── lib
│   └── logstash
│       └── outputs
│           └── azure_loganalytics.rb
├── logstash-output-azure_loganalytics.gemspec
└── spec
    └── outputs
        └── azure_loganalytics_spec.rb

--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
## 0.6.0

* Multithreading support - [PR #17](https://github.com/yokawasa/logstash-output-azure_loganalytics/pull/17) by [@daniel-chambers](https://github.com/daniel-chambers)
  * Big performance improvement
  * New param `max_batch_items` added
  * The `flush_items` and `flush_interval_time` params are no longer supported in the plugin configuration

## 0.5.2

* Fixed sprintf usage in `log_type` - [PR #16](https://github.com/yokawasa/logstash-output-azure_loganalytics/pull/16) by [@daniel-chambers](https://github.com/daniel-chambers)

## 0.5.1

* Changed the base [azure-loganalytics-datacollector-api](https://github.com/yokawasa/azure-log-analytics-data-collector) dependency to ">= 0.5.0"

## 0.5.0

* Support sprintf syntax like `%{my_log_type}` for the `log_type` config param - [Issue #13](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/13)

## 0.4.0

* Changed the base [azure-loganalytics-datacollector-api](https://github.com/yokawasa/azure-log-analytics-data-collector) dependency to ">= 0.4.0"

## 0.3.2

* Improvement: removed an unnecessary key check

## 0.3.1

* Performance optimization for scenarios with a large `key_names` list - [Issue #10](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/10)

## 0.3.0

* Support the `key_types` param - [Issue #8](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/8)
* Support a custom Log Analytics API endpoint (for Azure sovereign clouds) - [Issue #9](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/9)

## 0.2.3

* Added additional debug logging for successful requests - [PR #7](https://github.com/yokawasa/logstash-output-azure_loganalytics/pull/7) by [@daniel-chambers](https://github.com/daniel-chambers)

## 0.2.2

* Fixed a logging failure - [PR #6](https://github.com/yokawasa/logstash-output-azure_loganalytics/pull/6) by [@daniel-chambers](https://github.com/daniel-chambers)

## 0.2.1

* Updated gem dependencies to allow compatibility with Logstash 5 and 6 (thanks to [@arthurtoper](https://github.com/arthurtoper))

## 0.2.0

* Support for time-generated-field in the output configuration - [Issue #4](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/4) (thanks to [@KiZach](https://github.com/KiZach))

## 0.1.1

* Fixed [Issue #2](https://github.com/yokawasa/logstash-output-azure_loganalytics/issues/2) (thanks to [@gmousset](https://github.com/gmousset))

## 0.1.0

* Initial release

--------------------------------------------------------------------------------
/CONTRIBUTORS:
--------------------------------------------------------------------------------
The following is a list of people who have contributed ideas, code, bug
reports, or in general have helped logstash along its way.

Contributors:
* Yoichi Kawasaki (@yokawasa)
* Gwendal Mousset (@gmousset)
* Arthur Toper (@arthurtoper)
* Daniel Chambers (@daniel-chambers)

Note: If you've sent us patches, bug reports, or otherwise contributed to
Logstash, and you aren't on the list above and want to be, please let us know
and we'll make sure you're here. Contributions from folks like you are what
make open source awesome.

--------------------------------------------------------------------------------
/Gemfile:
--------------------------------------------------------------------------------
source 'https://rubygems.org'
gemspec

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright (c) 2012-2015 Elasticsearch

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Azure Log Analytics output plugin for Logstash

logstash-output-azure_loganalytics is a Logstash plugin that outputs events to Azure Log Analytics. [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite [destinations](https://www.elastic.co/products/logstash). [Log Analytics](https://azure.microsoft.com/en-us/services/log-analytics/) is a service in Operations Management Suite (OMS) that helps you collect and analyze data generated by resources in your cloud and on-premises environments. It gives you real-time insights using integrated search and custom dashboards to readily analyze millions of records across all of your workloads and servers, regardless of their physical location. The plugin stores incoming events in Azure Log Analytics by leveraging the [Log Analytics HTTP Data Collector API](https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-data-collector-api).

> [NOTICE]
> logstash-output-azure_loganalytics >= 0.6.0
> - Multithreading support
> - New param `max_batch_items` added
> - The `flush_items` and `flush_interval_time` params are no longer supported in the plugin configuration

## Installation

You can install this plugin using the Logstash "plugin" or "logstash-plugin" (for newer versions of Logstash) command:
```
bin/plugin install logstash-output-azure_loganalytics
# or
bin/logstash-plugin install logstash-output-azure_loganalytics (Newer versions of Logstash)
```
Please see the [Logstash reference](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) for more information.
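
Under the hood, the plugin delivers events through the [azure-loganalytics-datacollector-api](https://github.com/yokawasa/azure-log-analytics-data-collector) gem (a runtime dependency in the gemspec). For reference, here is a minimal standalone sketch of the equivalent direct call; the workspace ID, key, log type, and record shape are all placeholders, not values from this repository:

```
# Sketch of the Data Collector API call that this plugin wraps.
# Assumes the azure-loganalytics-datacollector-api gem is installed;
# customer_id and shared_key are placeholders, not real credentials.
require "azure/loganalytics/datacollectorapi/client"

customer_id = "<your Log Analytics workspace ID>"
shared_key  = "<your primary or secondary key>"
client = Azure::Loganalytics::Datacollectorapi::Client.new(
  customer_id, shared_key, "ods.opinsights.azure.com")

records = [{ "Name" => "test01", "Value" => 1.0 }]
# Arguments: log type, array of records, time-generated field name ('' = none)
res = client.post_data("MyCustomLog", records, "")

if Azure::Loganalytics::Datacollectorapi::Client.is_success(res)
  puts "posted (status #{res.code})"
else
  puts "request failed (status #{res.code})"
end
```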

## Configuration

```
output {
  azure_loganalytics {
    customer_id => "<your workspace ID>"
    shared_key => "<your shared key>"
    log_type => "<log type name>"
    key_names => ['key1','key2','key3'..] ## list of key names
    key_types => {'key1'=>'string' 'key2'=>'double' 'key3'=>'boolean' .. }
    max_batch_items => <number>
  }
}
```

* **customer\_id (required)** - Your Operations Management Suite workspace ID.
* **shared\_key (required)** - The primary or the secondary Connected Sources client authentication key.
* **log\_type (required)** - The name of the event type that is being submitted to Log Analytics. It must contain only alphanumeric characters and underscores, and must not exceed 100 characters. sprintf syntax like `%{my_log_type}` is supported.
* **time\_generated\_field (optional)** - Default: '' (empty string). The name of the time-generated field. Note that the value of the field must strictly follow the ISO 8601 format (YYYY-MM-DDThh:mm:ssZ). See also [the Data Collector API documentation](https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-data-collector-api#create-a-request) for more details.
* **key\_names (optional)** - Default: [] (empty array). The list of key names in the incoming record that you want to submit to Log Analytics.
* **key\_types (optional)** - Default: {} (empty hash). The data type to store each column as in Log Analytics (`string`, `boolean`, or `double`); see the conversion sketch below.
  * The key names in the `key_types` param must be included in the `key_names` param. Column data whose key isn't included in `key_types` is treated as `string` data.
  * Multiple key-value entries are separated by spaces rather than commas (see also [the Logstash configuration reference](https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html#hash)).
  * If you want to store a column in datetime or guid format, set `string` for the column (the value of the column should be in `YYYY-MM-DDThh:mm:ssZ` format if it's `datetime`, and in GUID format if it's `guid`).
  * If the `key_types` param is not specified, all columns that you submit (chosen with the `key_names` param) are stored as the `string` data type in Log Analytics.
* **max\_batch\_items (optional)** - Default: 50. The maximum number of log events to put in one request to Log Analytics.

> [NOTE] There is a special param for changing the Log Analytics API endpoint (mainly for supporting Azure sovereign clouds):
> * **endpoint (optional)** - Default: ods.opinsights.azure.com
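
To make the `key_types` coercion concrete, here is a small Ruby sketch that mirrors the plugin's internal conversion (simplified from `convert_value` in `lib/logstash/outputs/azure_loganalytics.rb`); it is an illustration, not part of the plugin's public API:

```
# Simplified mirror of the plugin's convert_value: values for keys listed in
# key_types are coerced; all other submitted keys pass through unchanged.
def convert_value(type, val)
  case type.downcase
  when "boolean"
    val.to_s.downcase == "true"               # anything other than "true" => false
  when "double"
    Integer(val) rescue Float(val) rescue val # fall back to the raw value on failure
  else
    val                                       # "string" (and anything else) passes through
  end
end

convert_value("double", "372")   # => 372
convert_value("boolean", "True") # => true
convert_value("string", "304")   # => "304"
```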

## Tests

Here is an example configuration where Logstash's event source and destination are configured as an Apache2 access log and Azure Log Analytics, respectively.

### Example Configuration
```
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  azure_loganalytics {
    customer_id => "818f7bbc-8034-4cc3-b97d-f068dd4cd659"
    shared_key => "ppC5500KzCcDsOKwM1yWUvZydCuC3m+ds/2xci0byeQr1G3E0Jkygn1N0Rxx/yVBUrDE2ok3vf4ksXxcBmQQHw==(dummy)"
    log_type => "ApacheAccessLog"
    key_names => ['logid','date','processing_time','remote','user','method','status','agent']
    max_batch_items => 50
  }
  # for debug
  stdout { codec => rubydebug }
}
```

You can find example configuration files in logstash-output-azure_loganalytics/examples.

### Run the plugin with the example configuration

Now run Logstash with the example configuration:
```
# Test your Logstash configuration before actually running Logstash
bin/logstash -f logstash-apache2-to-loganalytics.conf --configtest
# run
bin/logstash -f logstash-apache2-to-loganalytics.conf
```

Here is the expected output for a sample input (an Apache2 access log line):

Apache2 access log
```
106.143.121.169 - - [29/Dec/2016:01:38:16 +0000] "GET /test.html HTTP/1.1" 304 179 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
```

Output (rubydebug)
```
{
        "message" => "106.143.121.169 - - [29/Dec/2016:01:38:16 +0000] \"GET /test.html HTTP/1.1\" 304 179 \"-\" \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36\"",
       "@version" => "1",
     "@timestamp" => "2016-12-29T01:38:16.000Z",
           "path" => "/var/log/apache2/access.log",
           "host" => "yoichitest01",
           "type" => "apache_access",
       "clientip" => "106.143.121.169",
          "ident" => "-",
           "auth" => "-",
      "timestamp" => "29/Dec/2016:01:38:16 +0000",
           "verb" => "GET",
        "request" => "/test.html",
    "httpversion" => "1.1",
       "response" => "304",
          "bytes" => "179",
       "referrer" => "\"-\"",
          "agent" => "\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36\""
}
```
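
For reference, the batching behavior behind `max_batch_items` can be sketched as follows. This is a simplified, standalone illustration of what `multi_receive` in the plugin source does: events are grouped by their resolved log type, then posted in slices. The `post_batch` stub stands in for the gem's `post_data` call, and plain hashes stand in for Logstash events:

```
# Standalone sketch of the plugin's batching (simplified from multi_receive).
MAX_BATCH_ITEMS = 2

def post_batch(log_type, batch)
  puts "POST #{batch.length} record(s) as #{log_type}" # stand-in for post_data
end

events = [
  { "type" => "apache_access", "status" => "200" },
  { "type" => "apache_access", "status" => "304" },
  { "type" => "apache_access", "status" => "404" },
  { "type" => "syslog",        "status" => "-" },
]

# Group by the resolved log type (the plugin resolves sprintf syntax such as
# %{type} at this point), then post each group in slices of MAX_BATCH_ITEMS.
events.group_by { |e| e["type"] }.each do |log_type, docs|
  docs.each_slice(MAX_BATCH_ITEMS) { |batch| post_batch(log_type, batch) }
end
# => POST 2 record(s) as apache_access
#    POST 1 record(s) as apache_access
#    POST 1 record(s) as syslog
```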

## Debugging

If you need to debug and watch what this plugin is sending to Log Analytics, you can change the Logstash log level for this plugin to `DEBUG` to get additional logs in the Logstash logs.

One way of changing the log level is to use the Logstash API:

```
> curl -XPUT 'localhost:9600/_node/logging?pretty' -H "Content-Type: application/json" -d '{ "logger.logstash.outputs.azureloganalytics" : "DEBUG" }'
{
  "host" : "yoichitest01",
  "version" : "6.5.4",
  "http_address" : "127.0.0.1:9600",
  "id" : "d8038a9e-02c6-411a-9f6b-597f910edc54",
  "name" : "yoichitest01",
  "acknowledged" : true
}
```

You should then be able to see logs like this in your Logstash logs:

```
[2019-03-29T01:18:52,652][DEBUG][logstash.outputs.azureloganalytics] Posting log batch (log count: 50) as log type HealthCheckLogs to DataCollector API. First log: {"message":{"Application":"HealthCheck.API","Environments":{},"Name":"SystemMetrics","LogLevel":"Information","Properties":{"CPU":3,"Memory":83}},"beat":{"version":"6.5.4","hostname":"yoichitest01","name":"yoichitest01"},"timestamp":"2019-03-29T01:18:51.901Z"}

[2019-03-29T01:18:52,819][DEBUG][logstash.outputs.azureloganalytics] Successfully posted logs as log type HealthCheckLogs with result code 200 to DataCollector API
```

Once you're done, you can use the Logstash API to undo your log level changes:

```
> curl -XPUT 'localhost:9600/_node/logging/reset?pretty'
```

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/yokawasa/logstash-output-azure_loganalytics.

--------------------------------------------------------------------------------
/Rakefile:
--------------------------------------------------------------------------------
@files=[]

task :default do
  system('rake -T')
end

require "logstash/devutils/rake"

--------------------------------------------------------------------------------
/VERSION:
--------------------------------------------------------------------------------
0.6.0

--------------------------------------------------------------------------------
/examples/logstash-apache2-to-loganalytics.conf:
--------------------------------------------------------------------------------
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "iso8610timestamp"
  }
}

output {
  azure_loganalytics {
    customer_id => "<your workspace ID>"
    shared_key => "<your shared key>"
    log_type => "ApacheAccessLog"
    time_generated_field => "iso8610timestamp"
    key_names => ['host','clientip','timestamp','iso8610timestamp','verb','request','httpversion','response','agent']
    key_types => {'host'=>'string' 'clientip'=>'string' 'timestamp'=>'string' 'iso8610timestamp'=>'string' 'verb'=>'string' 'request'=>'string' 'httpversion'=>'double' 'response'=>'double' 'agent'=>'string'}
    max_batch_items => 50
  }
  # for debug
  stdout { codec => rubydebug }
}
"apache_access" } } 8 | grok { 9 | match => { "message" => "%{COMBINEDAPACHELOG}" } 10 | } 11 | } 12 | date { 13 | match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ] 14 | } 15 | } 16 | 17 | output { 18 | azure_loganalytics { 19 | customer_id => "" 20 | shared_key => "" 21 | log_type => "ApacheAccessLog" 22 | key_names => ['host','clientip','timestamp','verb','request','httpversion','response','agent'] 23 | max_batch_items => 50 24 | } 25 | # for debug 26 | stdout { codec => rubydebug } 27 | } 28 | -------------------------------------------------------------------------------- /lib/logstash/outputs/azure_loganalytics.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | 3 | require "logstash/outputs/base" 4 | require "logstash/namespace" 5 | require "securerandom" 6 | 7 | class LogStash::Outputs::AzureLogAnalytics < LogStash::Outputs::Base 8 | config_name "azure_loganalytics" 9 | 10 | # Your Operations Management Suite workspace ID 11 | config :customer_id, :validate => :string, :required => true 12 | 13 | # The primary or the secondary Connected Sources client authentication key 14 | config :shared_key, :validate => :string, :required => true 15 | 16 | # The name of the event type that is being submitted to Log Analytics. 17 | # This must only contain alpha numeric and _, and not exceed 100 chars. 18 | # sprintf syntax like %{my_log_type} is supported. 19 | config :log_type, :validate => :string, :required => true 20 | 21 | # The service endpoint (Default: ods.opinsights.azure.com) 22 | config :endpoint, :validate => :string, :default => 'ods.opinsights.azure.com' 23 | 24 | # The name of the time generated field. 25 | # Be carefule that the value of field should strictly follow the ISO 8601 format (YYYY-MM-DDThh:mm:ssZ) 26 | config :time_generated_field, :validate => :string, :default => '' 27 | 28 | # The list of key names in in-coming record that you want to submit to Log Analytics 29 | config :key_names, :validate => :array, :default => [] 30 | 31 | # The list of data types for each column as which you want to store in Log Analytics (`string`, `boolean`, or `double`) 32 | # - The key names in `key_types` param must be included in `key_names` param. The column data whose key isn't included in `key_names` is treated as `string` data type. 33 | # - Multiple key value entries are separated by `spaces` rather than commas 34 | # See also https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html#hash 35 | # - If you want to store a column as datetime or guid data format, set `string` for the column ( the value of the column should be `YYYY-MM-DDThh:mm:ssZ format` if it's `datetime`, and `GUID format` if it's `guid`). 36 | # - In case that `key_types` param are not specified, all columns that you want to submit ( you choose with `key_names` param ) are stored as `string` data type in Log Analytics. 37 | # Example: 38 | # key_names => ['key1','key2','key3','key4',...] 
  # key_types => {'key1'=>'string' 'key2'=>'string' 'key3'=>'boolean' 'key4'=>'double' ...}
  config :key_types, :validate => :hash, :default => {}

  # Maximum number of log events to put in one request to Log Analytics
  config :max_batch_items, :validate => :number, :default => 50

  concurrency :shared

  public
  def register
    require 'azure/loganalytics/datacollectorapi/client'

    @key_types.each { |k, v|
      t = v.downcase
      if ( !t.eql?('string') && !t.eql?('double') && !t.eql?('boolean') )
        raise ArgumentError, "Key type(#{v}) for key(#{k}) must be either string, boolean, or double"
      end
    }

    ## Start
    @client = Azure::Loganalytics::Datacollectorapi::Client::new(@customer_id, @shared_key, @endpoint)

  end # def register

  public
  def multi_receive(events)

    flush_guid = SecureRandom.uuid
    @logger.debug("Start receive: #{flush_guid}. Received #{events.length} events")

    # This is a map of log_type to list of documents (themselves maps) to send to Log Analytics
    documents_by_log_type = {}
    events.each do |event|
      document = {}

      log_type_for_event = event.sprintf(@log_type)

      event_hash = event.to_hash()
      if @key_names.length > 0
        # Get the intersection of key_names and keys of event_hash
        keys_intersection = @key_names & event_hash.keys
        keys_intersection.each do |key|
          if @key_types.include?(key)
            document[key] = convert_value(@key_types[key], event_hash[key])
          else
            document[key] = event_hash[key]
          end
        end
      else
        document = event_hash
      end
      # Skip if the document doesn't contain any items
      next if document.keys.length < 1

      if documents_by_log_type[log_type_for_event].nil?
        documents_by_log_type[log_type_for_event] = []
      end
      documents_by_log_type[log_type_for_event].push(document)
    end

    # Skip in case there are no candidate documents to deliver
    if documents_by_log_type.length < 1
      @logger.debug("No documents in batch. Skipping")
      return
    end

    documents_by_log_type.each do |log_type_for_events, documents|
      documents.each_slice(@max_batch_items) do |event_batch|
        begin
          @logger.debug("Posting log batch (log count: #{event_batch.length}) as log type #{log_type_for_events} to DataCollector API. First log: " + (event_batch[0].to_json).to_s)
          res = @client.post_data(log_type_for_events, event_batch, @time_generated_field)
          if Azure::Loganalytics::Datacollectorapi::Client.is_success(res)
            @logger.debug("Successfully posted logs as log type #{log_type_for_events} with result code #{res.code} to DataCollector API")
          else
            @logger.error("DataCollector API request failure (log type #{log_type_for_events}): error code: #{res.code}, data=>" + (event_batch.to_json).to_s)
          end
        rescue Exception => ex
          @logger.error("Exception occurred in posting to DataCollector API as log type #{log_type_for_events}: '#{ex}', data=>" + (event_batch.to_json).to_s)
        end
      end
    end
    @logger.debug("End receive: #{flush_guid}")

  end # def multi_receive

  private
  def convert_value(type, val)
    t = type.downcase
    case t
    when "boolean"
      v = val.downcase
      return (v.to_s == 'true') ? true : false
    when "double"
      return Integer(val) rescue Float(val) rescue val
    else
      return val
    end
  end

end # class LogStash::Outputs::AzureLogAnalytics

--------------------------------------------------------------------------------
/logstash-output-azure_loganalytics.gemspec:
--------------------------------------------------------------------------------
Gem::Specification.new do |s|
  s.name = 'logstash-output-azure_loganalytics'
  s.version = File.read("VERSION").strip
  s.authors = ["Yoichi Kawasaki"]
  s.email = "yoichi.kawasaki@outlook.com"
  s.summary = %q{logstash output plugin to store events into Azure Log Analytics}
  s.description = s.summary
  s.homepage = "http://github.com/yokawasa/logstash-output-azure_loganalytics"
  s.licenses = ["Apache License (2.0)"]
  s.require_paths = ["lib"]

  # Files
  s.files = Dir['lib/**/*','spec/**/*','vendor/**/*','*.gemspec','*.md','CONTRIBUTORS','Gemfile','LICENSE','NOTICE.TXT', 'VERSION']
  # Tests
  s.test_files = s.files.grep(%r{^(test|spec|features)/})

  # Special flag to let us know this is actually a logstash plugin
  s.metadata = { "logstash_plugin" => "true", "logstash_group" => "output" }

  # Gem dependencies
  s.add_runtime_dependency "rest-client", ">= 1.8.0"
  s.add_runtime_dependency "azure-loganalytics-datacollector-api", ">= 0.5.0"
  s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99"
  s.add_runtime_dependency "logstash-codec-plain"
  s.add_development_dependency "logstash-devutils"
end

--------------------------------------------------------------------------------
/spec/outputs/azure_loganalytics_spec.rb:
--------------------------------------------------------------------------------
# encoding: utf-8
require "logstash/devutils/rspec/spec_helper"
require "logstash/outputs/azure_loganalytics"
require "logstash/codecs/plain"
require "logstash/event"

describe LogStash::Outputs::AzureLogAnalytics do

  let(:customer_id) { '' }
  let(:shared_key) { '' }
  let(:log_type) { 'ApacheAccessLog' }
  let(:key_names) { ['logid','date','processing_time','remote','user','method','status','agent','eventtime'] }
  let(:time_generated_field) { 'eventtime' }

  let(:azure_loganalytics_config) {
    {
      "customer_id" => customer_id,
      "shared_key" => shared_key,
      "log_type" => log_type,
      "key_names" => key_names,
      "time_generated_field" => time_generated_field
    }
  }

  let(:azure_loganalytics_output) { LogStash::Outputs::AzureLogAnalytics.new(azure_loganalytics_config) }

  before do
    azure_loganalytics_output.register
  end

  describe "#multi_receive" do
    it "Should successfully send the event to Azure Log Analytics" do
      events = []
      log1 = {
        :logid => "5cdad72f-c848-4df0-8aaa-ffe033e75d57",
        :date => "2017-04-22 09:44:32 JST",
        :processing_time => "372",
        :remote => "101.202.74.59",
        :user => "-",
        :method => "GET / HTTP/1.1",
        :status => "304",
        :size => "-",
        :referer => "-",
        :agent => "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:27.0) Gecko/20100101 Firefox/27.0",
        :eventtime => "2017-04-22T01:44:32Z"
      }

      log2 = {
        :logid => "7260iswx-8034-4cc3-uirtx-f068dd4cd659",
        :date => "2017-04-22 09:45:14 JST",
        :processing_time => "105",
        :remote => "201.78.74.59",
        :user => "-",
        :method => "GET /manager/html HTTP/1.1",
=> "GET /manager/html HTTP/1.1", 55 | :status =>"200", 56 | :size => "-", 57 | :referer => "-", 58 | :agent => "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0", 59 | :eventtime => "2017-04-22T01:45:14Z" 60 | } 61 | 62 | event1 = LogStash::Event.new(log1) 63 | event2 = LogStash::Event.new(log2) 64 | events.push(event1) 65 | events.push(event2) 66 | expect {azure_loganalytics_output.multi_receive(events)}.to_not raise_error 67 | end 68 | end 69 | 70 | end 71 | --------------------------------------------------------------------------------