├── .gitignore ├── LICENSE ├── README.md ├── app ├── config.go ├── global.go ├── main.go ├── version.go └── worker.go ├── assembly ├── bin │ └── load.sh ├── common │ ├── app.properties │ └── build.sh ├── linux │ ├── release.sh │ └── test.sh └── mac │ ├── release.sh │ └── test.sh ├── change_log.md └── profiles ├── release ├── config.yml ├── kafka_log.xml └── log.xml └── test ├── config.yml ├── kafka_log.xml └── log.xml /.gitignore: -------------------------------------------------------------------------------- 1 | .*.swp 2 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. 
For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | Copyright 2017 Alex Stocks. 180 | 181 | Licensed under the Apache License, Version 2.0 (the "License"); 182 | you may not use this file except in compliance with the License. 183 | You may obtain a copy of the License at 184 | 185 | http://www.apache.org/licenses/LICENSE-2.0 186 | 187 | Unless required by applicable law or agreed to in writing, software 188 | distributed under the License is distributed on an "AS IS" BASIS, 189 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 190 | See the License for the specific language governing permissions and 191 | limitations under the License. 192 |
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # kafka-connect-elasticsearch # 2 | --- 3 | * write data from Kafka into an Elasticsearch index, rolling the target index over to a new one each day 4 | 5 | ## introduction ## 6 | --- 7 | To run the app, compile it first: 8 | 9 | $ sh assembly/linux/test.sh 10 | 11 | Next, start the app: 12 | 13 | $ cd target/linux/kafka-connect-elasticsearch-0.0.01-2017*-*-test/ && bash bin/load_kafka-connect-elasticsearch.sh monitor 14 | 15 | If you want to change the sbin name, reset the value of TARGET_EXEC_NAME in assembly/common/app.properties. 16 | 17 | 18 | ## LICENCE ## 19 | --- 20 | Apache License 2.0 21 | 22 |
-------------------------------------------------------------------------------- /app/config.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "io/ioutil" 5 | ) 6 | 7 | import ( 8 | "gopkg.in/yaml.v2" 9 | ) 10 | 11 | // ConfYaml is the config structure.
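// A config.yml that this structure parses looks roughly like the sketch below.
// The keys mirror the yaml tags declared on ConfYaml and its sub-sections; the
// values are placeholders for illustration only, not from any real deployment --
// see profiles/test/config.yml and profiles/release/config.yml for complete files.
//
//	core:
//	  worker_num: 2
//	  queue_num: 8192
//	  fail_fast_timeout: 3
//	  pid:
//	    enabled: false
//	    path: "kafka2es.pid"
//	    override: true
//	kafka:
//	  brokers: "127.0.0.1:9092"       # comma-separated broker list (placeholder address)
//	  topic: "app_log"                # placeholder topic
//	  consumer_group: "kafka2es"      # placeholder consumer group
//	es:
//	  es_hosts:
//	    - http://127.0.0.1:9200       # placeholder Elasticsearch endpoint
//	  shard_num: 5
//	  replica_num: 0
//	  refresh_interval: 300
//	  index: app                      # index name prefix (placeholder)
//	  index_time_suffix_format: -%d%02d%02d
//	  type: go
//	  kibana_time_filed: timestamp    # spelling matches the yaml tag in SectionEs
//	  kibana_time_format: yyyyMMdd HH:mm:ss.SSSZ
//	  bulk_size: 5000
//	  bulk_timeout: 60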
12 | type ConfYaml struct { 13 | Core SectionCore `yaml:"core"` 14 | Kafka SectionKafka `yaml:"kafka"` 15 | Es SectionEs `yaml:"es"` 16 | } 17 | 18 | // SectionPID is sub section of config. 19 | type SectionPID struct { 20 | Enabled bool `yaml:"enabled"` 21 | Path string `yaml:"path"` 22 | Override bool `yaml:"override"` 23 | } 24 | 25 | // SectionCore is sub section of config. 26 | type SectionCore struct { 27 | FailFastTimeout int `yaml:"fail_fast_timeout"` 28 | WorkerNum int64 `yaml:"worker_num"` 29 | QueueNum int64 `yaml:"queue_num"` 30 | PID SectionPID `yaml:"pid"` 31 | } 32 | 33 | // SectionKafka is sub section of config. 34 | type SectionKafka struct { 35 | Brokers string `yaml:"brokers"` 36 | Topic string `yaml:"topic"` 37 | ConsumerGroup string `yaml:"consumer_group"` 38 | } 39 | 40 | type SectionEs struct { 41 | EsHosts []string `yaml:"es_hosts"` 42 | ShardNum int32 `yaml:"shard_num"` 43 | ReplicaNum int32 `yaml:"replica_num"` 44 | RefreshInterval int32 `yaml:"refresh_interval"` 45 | 46 | Index string `yaml:"index"` 47 | IndexTimeSuffixFormat string `yaml:"index_time_suffix_format"` 48 | Type string `yaml:"type"` 49 | KibanaTimeField string `yaml:"kibana_time_filed"` 50 | KibanaTimeFormat string `yaml:"kibana_time_format"` 51 | BulkSize int32 `yaml:"bulk_size"` 52 | BulkTimeout int32 `yaml:"bulk_timeout"` 53 | } 54 | 55 | // LoadConfYaml provide load yml config. 56 | func LoadConfYaml(confPath string) (ConfYaml, error) { 57 | var config ConfYaml 58 | 59 | configFile, err := ioutil.ReadFile(confPath) 60 | 61 | if err != nil { 62 | return config, err 63 | } 64 | 65 | err = yaml.Unmarshal(configFile, &config) 66 | 67 | if err != nil { 68 | return config, err 69 | } 70 | 71 | return config, nil 72 | } 73 | -------------------------------------------------------------------------------- /app/global.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "sync" 6 | "time" 7 | ) 8 | 9 | import ( 10 | "github.com/AlexStocks/goext/database/elasticsearch" 11 | "github.com/AlexStocks/goext/log" 12 | "github.com/AlexStocks/goext/log/kafka" 13 | ) 14 | 15 | type ( 16 | empty interface{} 17 | ) 18 | 19 | var ( 20 | // local ip 21 | LocalIP string 22 | LocalHost string 23 | // progress id 24 | ProcessID string 25 | // Kafka2EsConf is main config 26 | Kafka2EsConf ConfYaml 27 | // Consumer 28 | KafkaConsumer gxkafka.Consumer 29 | // Log records server request log 30 | Log gxlog.Logger 31 | // KafkaLog records kafka message info 32 | KafkaLog gxlog.Logger 33 | // kafka message pusher worker 34 | Worker *EsWorker 35 | // es client 36 | EsClient gxelasticsearch.EsClient 37 | // for es index 38 | IndexLock sync.RWMutex 39 | TdyIndex string 40 | TmwIndex string 41 | ) 42 | 43 | // 创建今天和明天共两天的index 44 | func initEsIndex() { 45 | var ( 46 | err error 47 | t time.Time 48 | ) 49 | 50 | // 创建今天的index 51 | t = time.Now() 52 | TdyIndex = Kafka2EsConf.Es.Index + fmt.Sprintf(Kafka2EsConf.Es.IndexTimeSuffixFormat, t.Year(), t.Month(), t.Day()) 53 | err = EsClient.CreateEsIndexWithTimestamp( 54 | TdyIndex, 55 | Kafka2EsConf.Es.ShardNum, 56 | Kafka2EsConf.Es.ReplicaNum, 57 | Kafka2EsConf.Es.RefreshInterval, 58 | Kafka2EsConf.Es.Type, 59 | Kafka2EsConf.Es.KibanaTimeField, 60 | Kafka2EsConf.Es.KibanaTimeFormat, 61 | ) 62 | if err != nil { 63 | panic(err) 64 | } 65 | Log.Info("create today index:%s", TdyIndex) 66 | 67 | // 创建第二天的index 68 | t = time.Now().AddDate(0, 0, 1) 69 | TmwIndex = Kafka2EsConf.Es.Index + 
fmt.Sprintf(Kafka2EsConf.Es.IndexTimeSuffixFormat, t.Year(), t.Month(), t.Day()) 70 | err = EsClient.CreateEsIndexWithTimestamp( 71 | TmwIndex, 72 | Kafka2EsConf.Es.ShardNum, 73 | Kafka2EsConf.Es.ReplicaNum, 74 | Kafka2EsConf.Es.RefreshInterval, 75 | Kafka2EsConf.Es.Type, 76 | Kafka2EsConf.Es.KibanaTimeField, 77 | Kafka2EsConf.Es.KibanaTimeFormat, 78 | ) 79 | if err != nil { 80 | panic(err) 81 | } 82 | Log.Info("create tomorrrow index:%s", TmwIndex) 83 | } 84 | 85 | func updateLastDate() { 86 | var ( 87 | tmw time.Time 88 | index string 89 | flag bool 90 | err error 91 | ) 92 | 93 | tmw = time.Now().AddDate(0, 0, 1) 94 | index = Kafka2EsConf.Es.Index + fmt.Sprintf(Kafka2EsConf.Es.IndexTimeSuffixFormat, tmw.Year(), tmw.Month(), tmw.Day()) 95 | 96 | IndexLock.RLock() 97 | if TmwIndex != index { 98 | flag = true 99 | } 100 | IndexLock.RUnlock() 101 | 102 | if flag { 103 | err = EsClient.CreateEsIndexWithTimestamp( 104 | index, 105 | Kafka2EsConf.Es.ShardNum, 106 | Kafka2EsConf.Es.ReplicaNum, 107 | Kafka2EsConf.Es.RefreshInterval, 108 | Kafka2EsConf.Es.Type, 109 | Kafka2EsConf.Es.KibanaTimeField, 110 | Kafka2EsConf.Es.KibanaTimeFormat, 111 | ) 112 | Log.Info("CreateEsIndexWithTimestamp() = error:%#v", err) 113 | 114 | if err == nil { 115 | IndexLock.Lock() 116 | TdyIndex = TmwIndex 117 | TmwIndex = index 118 | IndexLock.Unlock() 119 | } 120 | } 121 | } 122 | 123 | func getIndex() string { 124 | IndexLock.RLock() 125 | defer IndexLock.RUnlock() 126 | return TdyIndex 127 | } 128 | -------------------------------------------------------------------------------- /app/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "flag" 5 | "fmt" 6 | "log" 7 | _ "net/http/pprof" 8 | "os" 9 | "os/signal" 10 | "path" 11 | "path/filepath" 12 | "strconv" 13 | "strings" 14 | "syscall" 15 | "time" 16 | ) 17 | 18 | import ( 19 | "github.com/AlexStocks/goext/database/elasticsearch" 20 | "github.com/AlexStocks/goext/log" 21 | "github.com/AlexStocks/goext/log/kafka" 22 | "github.com/AlexStocks/goext/net" 23 | "github.com/AlexStocks/goext/time" 24 | ) 25 | 26 | const ( 27 | APP_CONF_FILE string = "APP_CONF_FILE" 28 | APP_LOG_CONF_FILE string = "APP_LOG_CONF_FILE" 29 | APP_KAFKA_LOG_CONF_FILE string = "APP_KAFKA_LOG_CONF_FILE" 30 | ) 31 | 32 | const ( 33 | FailfastTimeout = 3 // in second 34 | KeepAliveTimeout = 60e9 35 | ) 36 | 37 | var ( 38 | pprofPath = "/debug/pprof/" 39 | 40 | usageStr = ` 41 | Usage: kafka-connect-elasticsearch [options] 42 | Server Options: 43 | -c, --config Configuration file path 44 | -k, --kafka_log Kafka Log configuration file 45 | -l, --log Log configuration file 46 | Common Options: 47 | -h, --help Show this message 48 | -v, --version Show version 49 | ` 50 | ) 51 | 52 | // usage will print out the flag options for the server. 
53 | func usage() { 54 | fmt.Printf("%s\n", usageStr) 55 | os.Exit(0) 56 | } 57 | 58 | func getHostInfo() { 59 | var ( 60 | err error 61 | ) 62 | 63 | LocalHost, err = os.Hostname() 64 | if err != nil { 65 | panic(fmt.Sprintf("os.Hostname() = %s", err)) 66 | } 67 | 68 | LocalIP, err = gxnet.GetLocalIP() 69 | if err != nil { 70 | panic("can not get local IP!") 71 | } 72 | 73 | ProcessID = fmt.Sprintf("%s@%s", LocalIP, LocalHost) 74 | } 75 | 76 | func createPIDFile() error { 77 | if !Kafka2EsConf.Core.PID.Enabled { 78 | return nil 79 | } 80 | 81 | pidPath := Kafka2EsConf.Core.PID.Path 82 | _, err := os.Stat(pidPath) 83 | if os.IsNotExist(err) || Kafka2EsConf.Core.PID.Override { 84 | currentPid := os.Getpid() 85 | if err := os.MkdirAll(filepath.Dir(pidPath), os.ModePerm); err != nil { 86 | return fmt.Errorf("Can't create PID folder on %v", err) 87 | } 88 | 89 | file, err := os.Create(pidPath) 90 | if err != nil { 91 | return fmt.Errorf("Can't create PID file: %v", err) 92 | } 93 | defer file.Close() 94 | if _, err := file.WriteString(fmt.Sprintf("%s-%s", ProcessID, strconv.FormatInt(int64(currentPid), 10))); err != nil { 95 | return fmt.Errorf("Can'write PID information on %s: %v", pidPath, err) 96 | } 97 | } else { 98 | return fmt.Errorf("%s already exists", pidPath) 99 | } 100 | return nil 101 | } 102 | 103 | // initLog use for initial log module 104 | func initLog(logConf string) { 105 | Log = gxlog.NewLoggerWithConfFile(logConf) 106 | Log.SetAsDefaultLogger() 107 | } 108 | 109 | // initLog use for initial log module 110 | func initKafkaLog(logConf string) { 111 | KafkaLog = gxlog.NewLoggerWithConfFile(logConf) 112 | } 113 | 114 | // initEsClient initialise EsClient 115 | func initEsClient() { 116 | var ( 117 | err error 118 | ) 119 | 120 | // Create a client 121 | EsClient, err = gxelasticsearch.CreateEsClient(Kafka2EsConf.Es.EsHosts) 122 | if err != nil { 123 | panic(err) 124 | } 125 | 126 | initEsIndex() 127 | } 128 | 129 | func initWorker() { 130 | Worker = NewEsWorker() 131 | Worker.Start(int64(Kafka2EsConf.Core.WorkerNum), int64(Kafka2EsConf.Core.QueueNum)) 132 | } 133 | 134 | func initKafkaConsumer() { 135 | var ( 136 | err error 137 | id string 138 | ) 139 | 140 | id = LocalIP + "-" + LocalHost + "-" + "kafka2es" 141 | KafkaConsumer, err = gxkafka.NewConsumer( 142 | id, 143 | strings.Split(Kafka2EsConf.Kafka.Brokers, ","), 144 | []string{Kafka2EsConf.Kafka.Topic}, 145 | Kafka2EsConf.Kafka.ConsumerGroup, 146 | Worker.enqueueKafkaMessage, 147 | kafkaConsumerErrorCallback, 148 | kafkaConsumerNotificationCallback, 149 | ) 150 | if err != nil { 151 | panic(fmt.Sprintf("Failed to initialize Kafka consumer: %v", err)) 152 | } 153 | 154 | err = KafkaConsumer.Start() 155 | if err != nil { 156 | panic(fmt.Sprintf("Failed to start Kafka consumer: %v", err)) 157 | } 158 | } 159 | 160 | func initSignal() { 161 | var ( 162 | // signal.Notify的ch信道是阻塞的(signal.Notify不会阻塞发送信号), 需要设置缓冲 163 | signals = make(chan os.Signal, 1) 164 | ticker = time.NewTicker(KeepAliveTimeout) 165 | ) 166 | // It is not possible to block SIGKILL or syscall.SIGSTOP 167 | signal.Notify(signals, os.Interrupt, os.Kill, syscall.SIGHUP, syscall.SIGQUIT, syscall.SIGTERM, syscall.SIGINT) 168 | for { 169 | select { 170 | case sig := <-signals: 171 | Log.Info("get signal %s", sig.String()) 172 | switch sig { 173 | case syscall.SIGHUP: 174 | // reload() 175 | default: 176 | go gxtime.Future(Kafka2EsConf.Core.FailFastTimeout, func() { 177 | Log.Warn("app exit now by force...") 178 | os.Exit(1) 179 | }) 180 | 181 | // 
要么fastFailTimeout时间内执行完毕下面的逻辑然后程序退出,要么执行上面的超时函数程序强行退出 182 | KafkaConsumer.Stop() 183 | KafkaLog.Close() 184 | ticker.Stop() 185 | Log.Warn("app exit now...") 186 | Log.Close() 187 | return 188 | } 189 | 190 | // case <-time.After(time.Duration(KeepAliveTimeout)): 191 | case <-ticker.C: 192 | updateLastDate() 193 | Log.Info(Worker.Info()) 194 | } 195 | } 196 | } 197 | 198 | func main() { 199 | var ( 200 | err error 201 | showVersion bool 202 | configFile string 203 | logConf string 204 | kafkaLogConf string 205 | ) 206 | 207 | ///////////////////////////////////////////////// 208 | // conf 209 | ///////////////////////////////////////////////// 210 | 211 | SetVersion(Version) 212 | 213 | flag.BoolVar(&showVersion, "v", false, "Print version information.") 214 | flag.BoolVar(&showVersion, "version", false, "Print version information.") 215 | flag.StringVar(&configFile, "c", "", "Configuration file path.") 216 | flag.StringVar(&configFile, "config", "", "Configuration file path.") 217 | flag.StringVar(&logConf, "l", "", "Logger configuration file.") 218 | flag.StringVar(&logConf, "log", "", "Logger configuration file.") 219 | flag.StringVar(&kafkaLogConf, "k", "", "Kafka logger configuration file.") 220 | flag.StringVar(&kafkaLogConf, "kafka_log", "", "Kafka logger configuration file.") 221 | 222 | flag.Usage = usage 223 | flag.Parse() 224 | 225 | // Show version and exit 226 | if showVersion { 227 | PrintVersion() 228 | os.Exit(0) 229 | } 230 | 231 | if configFile == "" { 232 | configFile = os.Getenv(APP_CONF_FILE) 233 | if configFile == "" { 234 | panic("can not load configFile") 235 | } 236 | } 237 | if path.Ext(configFile) != ".yml" { 238 | panic(fmt.Sprintf("application configure file name{%v} suffix must be .yml", configFile)) 239 | } 240 | Kafka2EsConf, err = LoadConfYaml(configFile) 241 | if err != nil { 242 | log.Printf("Load yaml config file error: '%v'", err) 243 | return 244 | } 245 | fmt.Printf("config: %+v\n", gxlog.PrettyString(Kafka2EsConf)) 246 | 247 | if logConf == "" { 248 | logConf = os.Getenv(APP_LOG_CONF_FILE) 249 | if logConf == "" { 250 | panic("can not load logConf") 251 | } 252 | } 253 | 254 | if kafkaLogConf == "" { 255 | kafkaLogConf = os.Getenv(APP_KAFKA_LOG_CONF_FILE) 256 | if kafkaLogConf == "" { 257 | panic("can not load kafkaLogConf") 258 | } 259 | } 260 | 261 | ///////////////////////////////////////////////// 262 | // worker 263 | ///////////////////////////////////////////////// 264 | if Kafka2EsConf.Core.FailFastTimeout == 0 { 265 | Kafka2EsConf.Core.FailFastTimeout = FailfastTimeout 266 | } 267 | 268 | getHostInfo() 269 | 270 | initLog(logConf) 271 | initKafkaLog(kafkaLogConf) 272 | 273 | if err = createPIDFile(); err != nil { 274 | Log.Critic(err) 275 | } 276 | 277 | initEsClient() 278 | initWorker() 279 | // kafka message receiver 280 | initKafkaConsumer() 281 | 282 | initSignal() 283 | } 284 | -------------------------------------------------------------------------------- /app/version.go: -------------------------------------------------------------------------------- 1 | /****************************************************** 2 | # DESC : version 3 | # MAINTAINER : Alex Stocks 4 | # LICENCE : Apache License 2.0 5 | # EMAIL : alexstocks@foxmail.com 6 | # MOD : 2017-04-09 11:33 7 | # FILE : version.go 8 | ******************************************************/ 9 | 10 | package main 11 | 12 | import ( 13 | "fmt" 14 | "runtime" 15 | ) 16 | 17 | var ( 18 | Version = "0.0.10" 19 | DATE = "2017/10/28" 20 | ) 21 | 22 | // SetVersion for setup Version string. 
23 | func SetVersion(ver string) { 24 | Version = ver 25 | } 26 | 27 | // PrintVersion provide print server engine 28 | func PrintVersion() { 29 | fmt.Printf(`kafka-connect-elasticsearch %s, Compiler: %s %s, Copyright (C) %s Alex Stocks.`, 30 | Version, 31 | runtime.Compiler, 32 | runtime.Version(), 33 | DATE, 34 | ) 35 | fmt.Println() 36 | } 37 | -------------------------------------------------------------------------------- /app/worker.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "sync" 6 | "sync/atomic" 7 | "time" 8 | ) 9 | 10 | import ( 11 | "github.com/AlexStocks/goext/runtime" 12 | "github.com/AlexStocks/goext/strings" 13 | "github.com/AlexStocks/goext/time" 14 | "github.com/Shopify/sarama" 15 | sc "github.com/bsm/sarama-cluster" 16 | ) 17 | 18 | ///////////////////////////////////////////////// 19 | // kafka messaeg -> es 20 | ///////////////////////////////////////////////// 21 | 22 | ///////////////////////////////////////////////// 23 | // worker 24 | ///////////////////////////////////////////////// 25 | 26 | type EsWorker struct { 27 | Q chan *sarama.ConsumerMessage 28 | lock sync.Mutex 29 | done chan empty 30 | wg sync.WaitGroup 31 | } 32 | 33 | func NewEsWorker() *EsWorker { 34 | return &EsWorker{ 35 | done: make(chan empty), 36 | } 37 | 38 | } 39 | 40 | // Start for initialize all workers. 41 | func (w *EsWorker) Start(workerNum int64, queueNum int64) { 42 | Log.Debug("worker number = %v, queue number is = %v", workerNum, queueNum) 43 | w.Q = make(chan *sarama.ConsumerMessage, queueNum) 44 | for i := int64(0); i < workerNum; i++ { 45 | w.wg.Add(1) 46 | go w.startEsWorker() 47 | } 48 | } 49 | 50 | var ( 51 | workerIndex uint64 52 | ) 53 | 54 | func (w *EsWorker) startEsWorker() { 55 | var ( 56 | flag bool 57 | id int 58 | index uint64 59 | err error 60 | message *sarama.ConsumerMessage 61 | docArray []interface{} 62 | ticker *time.Ticker 63 | ) 64 | 65 | id = gxruntime.GoID() 66 | index = atomic.AddUint64(&workerIndex, 1) 67 | Log.Info("worker{%d-%d} starts to work now.", index, id) 68 | ticker = time.NewTicker(gxtime.TimeSecondDuration(float64(Kafka2EsConf.Es.BulkTimeout))) 69 | defer ticker.Stop() 70 | 71 | LOOP: 72 | for { 73 | select { 74 | case message = <-w.Q: 75 | Log.Debug("dequeue{worker{%d-%d} , message{topic:%v, partition:%v, offset:%v, msg:%v}}}", 76 | index, id, message.Topic, message.Partition, message.Offset, string(message.Value)) 77 | KafkaLog.Info("consumer{worker{%d-%d} , message{topic:%v, partition:%v, offset:%v}}}", 78 | index, id, message.Topic, message.Partition, message.Offset) 79 | docArray = append(docArray, message.Value) 80 | if int(Kafka2EsConf.Es.BulkSize) <= len(docArray) { 81 | flag = true 82 | } 83 | 84 | // case <-time.After(gxtime.TimeSecondDuration(float64(Kafka2EsConf.Es.BulkTimeout))): 85 | case <-ticker.C: 86 | if 0 < len(docArray) { 87 | flag = true 88 | } 89 | 90 | case <-w.done: 91 | if 0 < len(docArray) { 92 | EsClient.BulkInsert(getIndex(), Kafka2EsConf.Es.Type, docArray) 93 | } 94 | w.wg.Done() 95 | Log.Info("worker{%d-%d} exits now.", index, id) 96 | break LOOP 97 | } 98 | 99 | if flag { 100 | err = EsClient.BulkInsert(getIndex(), Kafka2EsConf.Es.Type, docArray) 101 | if err != nil { 102 | // Log.Error("error:%#v, log:%s", err, (docArray[0].([]byte))) 103 | Log.Error("error:%s, log:%s", err, gxstrings.String(docArray[0].([]byte))) 104 | } else { 105 | Log.Info("successfully insert %d msgs into es", len(docArray)) 106 | } 107 | flag = false 108 
| docArray = docArray[:0] 109 | } 110 | } 111 | } 112 | 113 | func (w *EsWorker) Stop() { 114 | close(w.done) 115 | w.wg.Wait() 116 | } 117 | 118 | // check whether the worker has been closed. 119 | func (w *EsWorker) IsClosed() bool { 120 | select { 121 | case <-w.done: 122 | return true 123 | 124 | default: 125 | return false 126 | } 127 | } 128 | 129 | func kafkaConsumerErrorCallback(err error) { 130 | KafkaLog.Error("kafka consumer error:%+v", err) 131 | } 132 | 133 | func kafkaConsumerNotificationCallback(note *sc.Notification) { 134 | KafkaLog.Info("kafka consumer Rebalanced: %+v", note) 135 | } 136 | 137 | // queueNotification add kafka message to queue list. 138 | func (w *EsWorker) enqueueKafkaMessage(message *sarama.ConsumerMessage, preOffset int64) { 139 | // if w.IsClosed() { 140 | // return errors.New("worker has been closed!") 141 | // } 142 | defer KafkaConsumer.Commit(message) 143 | 144 | // 不重复消费kafka消息 145 | if preOffset != 0 && message.Offset <= preOffset { 146 | // 此处应对加上告警 147 | Log.Error("@preOffset{%d}, @message{topic:%v, partition:%v, offset:%v, msg:%v}", 148 | preOffset, message.Topic, message.Partition, message.Offset, string(message.Value)) 149 | return 150 | } 151 | 152 | Log.Debug("enqueue{message:%s}", string(message.Value)) 153 | w.Q <- message 154 | return 155 | } 156 | 157 | func (w *EsWorker) Info() string { 158 | return fmt.Sprintf("elasticsearch worker queue size %d", len(w.Q)) 159 | } 160 | -------------------------------------------------------------------------------- /assembly/bin/load.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : kafka-connect-elasticsearch devops script 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : LGPL V3 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-05-13 02:01 9 | # FILE : load.sh 10 | # ****************************************************** 11 | 12 | APP_NAME="APPLICATION_NAME" 13 | APP_ARGS="" 14 | SLEEP_INTERVAL=5 15 | MAX_LIFETIME=4000 16 | 17 | PROJECT_HOME="" 18 | OS_NAME=`uname` 19 | if [[ ${OS_NAME} != "Windows" ]]; then 20 | PROJECT_HOME=`pwd` 21 | PROJECT_HOME=${PROJECT_HOME}"/" 22 | fi 23 | 24 | export APP_CONF_FILE=${PROJECT_HOME}"TARGET_CONF_FILE" 25 | export APP_LOG_CONF_FILE=${PROJECT_HOME}"TARGET_LOG_CONF_FILE" 26 | export APP_KAFKA_LOG_CONF_FILE=${PROJECT_HOME}"TARGET_KAFKA_LOG_CONF_FILE" 27 | # export GOTRACEBACK=system 28 | # export GODEBUG=gctrace=1 29 | 30 | usage() { 31 | echo "Usage: $0 start" 32 | echo " $0 stop" 33 | echo " $0 term" 34 | echo " $0 restart" 35 | echo " $0 list" 36 | echo " $0 monitor" 37 | echo " $0 crontab" 38 | exit 39 | } 40 | 41 | start() { 42 | APP_LOG_PATH=${PROJECT_HOME}"logs/" 43 | mkdir -p ${APP_LOG_PATH} 44 | APP_BIN=${PROJECT_HOME}sbin/${APP_NAME} 45 | chmod u+x ${APP_BIN} 46 | CMD="nohup ${APP_BIN} ${APP_ARGS} >>${APP_NAME}.nohup.out 2>&1 &" 47 | # CMD="${APP_BIN}" 48 | eval ${CMD} 49 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $2}'` 50 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" ]]; then 51 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $1}'` 52 | fi 53 | CUR=`date +%FT%T` 54 | if [ "${PID}" != "" ]; then 55 | for p in ${PID} 56 | do 57 | echo "start ${APP_NAME} ( pid =" ${p} ") at " ${CUR} 58 | done 59 | fi 60 | } 61 | 62 | stop() { 63 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $2}'` 64 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" 
]]; then 65 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $1}'` 66 | fi 67 | if [ "${PID}" != "" ]; 68 | then 69 | for ps in ${PID} 70 | do 71 | echo "kill -SIGINT ${APP_NAME} ( pid =" ${ps} ")" 72 | kill -2 ${ps} 73 | done 74 | fi 75 | } 76 | 77 | term() { 78 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $2}'` 79 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" ]]; then 80 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $1}'` 81 | fi 82 | if [ "${PID}" != "" ]; 83 | then 84 | for ps in ${PID} 85 | do 86 | echo "kill -9 ${APP_NAME} ( pid =" ${ps} ")" 87 | kill -9 ${ps} 88 | done 89 | fi 90 | } 91 | 92 | list() { 93 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{printf("%s,%s,%s,%s\n", $1, $2, $9, $10)}'` 94 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" ]]; then 95 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{printf("%s,%s,%s,%s,%s\n", $1, $4, $6, $7, $8)}'` 96 | fi 97 | 98 | if [ "${PID}" != "" ]; then 99 | echo "list ${APP_NAME}" 100 | 101 | if [[ ${OS_NAME} == "Linux" || ${OS_NAME} == "Darwin" ]]; then 102 | echo "index: user, pid, start, duration" 103 | else 104 | echo "index: PID, WINPID, UID, STIME, COMMAND" 105 | fi 106 | idx=0 107 | for ps in ${PID} 108 | do 109 | echo "${idx}: ${ps}" 110 | ((idx ++)) 111 | done 112 | fi 113 | } 114 | 115 | monitor() { 116 | idx=0 117 | while true; do 118 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $2}'` 119 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" ]]; then 120 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $1}'` 121 | fi 122 | if [[ "${PID}" == "" ]]; then 123 | start 124 | idx=0 125 | fi 126 | 127 | ((LIFE=idx*${SLEEP_INTERVAL})) 128 | echo "${APP_NAME} ( pid = " ${PID} ") has been working in normal state for " $LIFE " seconds." 129 | ((idx ++)) 130 | sleep ${SLEEP_INTERVAL} 131 | done 132 | } 133 | 134 | crontab() { 135 | idx=0 136 | while true; do 137 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $2}'` 138 | if [[ ${OS_NAME} != "Linux" && ${OS_NAME} != "Darwin" ]]; then 139 | PID=`ps aux | grep -w ${APP_NAME} | grep -v grep | awk '{print $1}'` 140 | fi 141 | if [[ "${PID}" == "" ]]; then 142 | start 143 | idx=0 144 | fi 145 | 146 | ((LIFE=idx*${SLEEP_INTERVAL})) 147 | echo "${APP_NAME} ( pid = " ${PID} ") has been working in normal state for " $LIFE " seconds." 
148 | ((idx ++)) 149 | sleep ${SLEEP_INTERVAL} 150 | if [[ ${LIFE} -gt ${MAX_LIFETIME} ]]; then 151 | kill -9 ${PID} 152 | fi 153 | done 154 | } 155 | 156 | opt=$1 157 | case C"$opt" in 158 | Cstart) 159 | start 160 | ;; 161 | Cstop) 162 | stop 163 | ;; 164 | Cterm) 165 | term 166 | ;; 167 | Crestart) 168 | term 169 | start 170 | ;; 171 | Clist) 172 | list 173 | ;; 174 | Cmonitor) 175 | monitor 176 | ;; 177 | Ccrontab) 178 | crontab 179 | ;; 180 | C*) 181 | usage 182 | ;; 183 | esac 184 | 185 | -------------------------------------------------------------------------------- /assembly/common/app.properties: -------------------------------------------------------------------------------- 1 | # configure script 2 | # ****************************************************** 3 | # DESC : application environment variable 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:29 9 | # FILE : app.properties 10 | # ****************************************************** 11 | 12 | export TARGET_EXEC_NAME="kafka-connect-elasticsearch" 13 | export BUILD_PACKAGE="app" 14 | 15 | export TARGET_CONF_FILE="conf/config.yml" 16 | export TARGET_LOG_CONF_FILE="conf/log.xml" 17 | export TARGET_KAFKA_LOG_CONF_FILE="conf/kafka_log.xml" 18 | -------------------------------------------------------------------------------- /assembly/common/build.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : build script 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:28 9 | # FILE : build.sh 10 | # ****************************************************** 11 | 12 | rm -rf target/ 13 | 14 | PROJECT_HOME=`pwd` 15 | TARGET_FOLDER=${PROJECT_HOME}/target/${GOOS} 16 | 17 | TARGET_SBIN_NAME=${TARGET_EXEC_NAME} 18 | version=`cat app/version.go | grep Version | awk -F '=' '{print $2}' | awk -F '"' '{print $2}'` 19 | if [[ ${GOOS} == "windows" ]]; then 20 | TARGET_SBIN_NAME=${TARGET_SBIN_NAME}.exe 21 | fi 22 | TARGET_NAME=${TARGET_FOLDER}/${TARGET_SBIN_NAME} 23 | if [[ $PROFILE == "dev" || $PROFILE == "test" ]]; then 24 | # GFLAGS=-gcflags "-N -l" -race -x -v # -x会把go build的详细过程输出 25 | # GFLAGS=-gcflags "-N -l" -race -v 26 | # GFLAGS="-gcflags \"-N -l\" -v" 27 | cd ${BUILD_PACKAGE} && GOOS=$GOOS GOARCH=$GOARCH go build -gcflags "-N -l" -x -v -i -o ${TARGET_NAME} && cd - 28 | else 29 | # -s去掉符号表(然后panic时候的stack trace就没有任何文件名/行号信息了,这个等价于普通C/C++程序被strip的效果), 30 | # -w去掉DWARF调试信息,得到的程序就不能用gdb调试了。-s和-w也可以分开使用,一般来说如果不打算用gdb调试, 31 | # -w基本没啥损失。-s的损失就有点大了。 32 | cd ${BUILD_PACKAGE} && GOOS=$GOOS GOARCH=$GOARCH go build -ldflags "-w" -x -v -i -o ${TARGET_NAME} && cd - 33 | fi 34 | 35 | TAR_NAME=${TARGET_EXEC_NAME}-${version}-`date "+%Y%m%d-%H%M"`-${PROFILE} 36 | 37 | mkdir -p ${TARGET_FOLDER}/${TAR_NAME} 38 | 39 | SBIN_DIR=${TARGET_FOLDER}/${TAR_NAME}/sbin 40 | BIN_DIR=${TARGET_FOLDER}/${TAR_NAME} 41 | CONF_DIR=${TARGET_FOLDER}/${TAR_NAME}/conf 42 | 43 | mkdir -p ${SBIN_DIR} 44 | mkdir -p ${CONF_DIR} 45 | 46 | mv ${TARGET_NAME} ${SBIN_DIR} 47 | cp -r assembly/bin ${BIN_DIR} 48 | cd ${BIN_DIR}/bin/ && mv load.sh load_${TARGET_EXEC_NAME}.sh && cd - 49 | 50 | platform=$(uname) 51 | # modify APPLICATION_NAME 52 | if [ ${platform} == "Darwin" ]; then 53 | sed -i "" "s~APPLICATION_NAME~${TARGET_EXEC_NAME}~g" ${BIN_DIR}/bin/* 54 | else 55 | sed -i 
"s~APPLICATION_NAME~${TARGET_EXEC_NAME}~g" ${BIN_DIR}/bin/* 56 | fi 57 | 58 | # modify TARGET_CONF_FILE 59 | if [ ${platform} == "Darwin" ]; then 60 | sed -i "" "s~TARGET_CONF_FILE~${TARGET_CONF_FILE}~g" ${BIN_DIR}/bin/* 61 | else 62 | sed -i "s~TARGET_CONF_FILE~${TARGET_CONF_FILE}~g" ${BIN_DIR}/bin/* 63 | fi 64 | 65 | # modify TARGET_LOG_CONF_FILE 66 | if [ ${platform} == "Darwin" ]; then 67 | sed -i "" "s~TARGET_LOG_CONF_FILE~${TARGET_LOG_CONF_FILE}~g" ${BIN_DIR}/bin/* 68 | else 69 | sed -i "s~TARGET_LOG_CONF_FILE~${TARGET_LOG_CONF_FILE}~g" ${BIN_DIR}/bin/* 70 | fi 71 | 72 | # modify TARGET_KAFKA_LOG_CONF_FILE 73 | if [ ${platform} == "Darwin" ]; then 74 | sed -i "" "s~TARGET_KAFKA_LOG_CONF_FILE~${TARGET_KAFKA_LOG_CONF_FILE}~g" ${BIN_DIR}/bin/* 75 | else 76 | sed -i "s~TARGET_KAFKA_LOG_CONF_FILE~${TARGET_KAFKA_LOG_CONF_FILE}~g" ${BIN_DIR}/bin/* 77 | fi 78 | 79 | cp -r profiles/${PROFILE}/* ${CONF_DIR} 80 | 81 | cd ${TARGET_FOLDER} 82 | 83 | tar czf ${TAR_NAME}.tar.gz ${TAR_NAME}/* 84 | 85 | -------------------------------------------------------------------------------- /assembly/linux/release.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : build script for release env 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:34 9 | # FILE : test.sh 10 | # ****************************************************** 11 | 12 | 13 | set -e 14 | 15 | export GOOS=linux 16 | export GOARCH=amd64 17 | 18 | export PROFILE="release" 19 | export PROJECT_HOME=`pwd` 20 | 21 | if [ -f "${PROJECT_HOME}/assembly/common/app.properties" ]; then 22 | . ${PROJECT_HOME}/assembly/common/app.properties 23 | fi 24 | 25 | 26 | if [ -f "${PROJECT_HOME}/assembly/common/build.sh" ]; then 27 | sh ${PROJECT_HOME}/assembly/common/build.sh 28 | fi 29 | -------------------------------------------------------------------------------- /assembly/linux/test.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : build script for test env 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:34 9 | # FILE : test.sh 10 | # ****************************************************** 11 | 12 | 13 | set -e 14 | 15 | export GOOS=linux 16 | export GOARCH=amd64 17 | 18 | export PROFILE="test" 19 | export PROJECT_HOME=`pwd` 20 | 21 | if [ -f "${PROJECT_HOME}/assembly/common/app.properties" ]; then 22 | . 
${PROJECT_HOME}/assembly/common/app.properties 23 | fi 24 | 25 | 26 | if [ -f "${PROJECT_HOME}/assembly/common/build.sh" ]; then 27 | sh ${PROJECT_HOME}/assembly/common/build.sh 28 | fi 29 |
-------------------------------------------------------------------------------- /assembly/mac/release.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : build script for release env 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:34 9 | # FILE : release.sh 10 | # ****************************************************** 11 | 12 | 13 | set -e 14 | 15 | export GOOS=darwin 16 | export GOARCH=amd64 17 | 18 | export PROFILE="release" 19 | export PROJECT_HOME=`pwd` 20 | 21 | if [ -f "${PROJECT_HOME}/assembly/common/app.properties" ]; then 22 | . ${PROJECT_HOME}/assembly/common/app.properties 23 | fi 24 | 25 | 26 | if [ -f "${PROJECT_HOME}/assembly/common/build.sh" ]; then 27 | sh ${PROJECT_HOME}/assembly/common/build.sh 28 | fi 29 |
-------------------------------------------------------------------------------- /assembly/mac/test.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # ****************************************************** 3 | # DESC : build script for test env 4 | # AUTHOR : Alex Stocks 5 | # VERSION : 1.0 6 | # LICENCE : Apache License 2.0 7 | # EMAIL : alexstocks@foxmail.com 8 | # MOD : 2016-07-12 16:34 9 | # FILE : test.sh 10 | # ****************************************************** 11 | 12 | 13 | set -e 14 | 15 | export GOOS=darwin 16 | export GOARCH=amd64 17 | 18 | export PROFILE="test" 19 | export PROJECT_HOME=`pwd` 20 | 21 | if [ -f "${PROJECT_HOME}/assembly/common/app.properties" ]; then 22 | .
${PROJECT_HOME}/assembly/common/app.properties 23 | fi 24 | 25 | 26 | if [ -f "${PROJECT_HOME}/assembly/common/build.sh" ]; then 27 | sh ${PROJECT_HOME}/assembly/common/build.sh 28 | fi 29 | -------------------------------------------------------------------------------- /change_log.md: -------------------------------------------------------------------------------- 1 | ## develop history ## 2 | --- 3 | 4 | - 2017/11/04 5 | > impovement 6 | * add timestamp mappint-settings for kibana 7 | 8 | - 2017/05/05 9 | > impovement 10 | * use tiker instead of time.After 11 | 12 | - 2017/04/25 13 | > improvement 14 | * add bulk insert 15 | * version: 0.0.02 16 | 17 | - 2017/04/19 18 | > impovement 19 | * add hostinfo in pidfile 20 | * add worker queue size info log 21 | 22 | - 2017/04/09 23 | > init 24 | > version: 0.0.01 25 | 26 | -------------------------------------------------------------------------------- /profiles/release/config.yml: -------------------------------------------------------------------------------- 1 | core: 2 | worker_num: 1024 3 | queue_num: 8192 4 | fail_fast_timeout: 3 # 当程序收到signal时候,要保证在fail_fast_timeout(unit: second)时间段内退出 5 | pid: 6 | enabled: false 7 | path: "kafka2es.pid" 8 | override: true 9 | 10 | kafka: 11 | brokers: "10.33.80.155:9092,10.33.80.170:9092,10.33.80.171:9092" 12 | topic: "bc_log" 13 | consumer_group: "bc" 14 | 15 | es: 16 | es_hosts: 17 | - 18 | http://10.33.80.170:9200 19 | shard_num: 5 20 | replica_num: 0 21 | refresh_interval: 300 22 | 23 | index: bc 24 | index_time_suffix_format: -%d%02d%02d 25 | type: go 26 | kibana_time_filed: timestamp 27 | kibana_time_format: yyyyMMdd HH:mm:ss.SSSZ 28 | bulk_size: 5000 29 | bulk_timeout: 60 -------------------------------------------------------------------------------- /profiles/release/kafka_log.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | debug_file 4 | file 5 | DEBUG 6 | logs/kafka2es_kafka_debug.log 7 | false 8 | [%D %T] [%L] [%S] %M 9 | true 10 | 0M 11 | 0K 12 | 16 13 | true 14 | 15 | 16 | warn_file 17 | file 18 | WARNING 19 | logs/kafka2es_kafka_warn.log 20 | false 21 | [%D %T] [%L] [%S] %M 22 | true 23 | 0M 24 | 0K 25 | 16 26 | true 27 | 28 | 29 | -------------------------------------------------------------------------------- /profiles/release/log.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | stdout 4 | console 5 | 6 | DEBUG 7 | false 8 | [%D %T] [%L] (%S) %M 9 | 10 | 11 | debug_file 12 | file 13 | DEBUG 14 | logs/kafka2es_debug.log 15 | false 16 | [%D %T] [%L] [%S] %M 17 | true 18 | 0M 19 | 0K 20 | 16 21 | true 22 | 23 | 24 | info_file 25 | file 26 | INFO 27 | logs/kafka2es_info.log 28 | 39 | false 40 | [%D %T] [%L] [%S] %M 41 | true 42 | 0M 43 | 0K 44 | 16 45 | true 46 | 47 | 48 | warn_file 49 | file 50 | WARNING 51 | logs/kafka2es_warn.log 52 | false 53 | [%D %T] [%L] [%S] %M 54 | true 55 | 0M 56 | 0K 57 | 16 58 | true 59 | 60 | 61 | error_file 62 | file 63 | ERROR 64 | logs/kafka2es_error.log 65 | false 66 | [%D %T] [%L] [%S] %M 67 | true 68 | 0M 69 | 0K 70 | 16 71 | true 72 | 73 | 74 | -------------------------------------------------------------------------------- /profiles/test/config.yml: -------------------------------------------------------------------------------- 1 | core: 2 | worker_num: 2 3 | queue_num: 8192 4 | fail_fast_timeout: 3 # 当程序收到signal时候,要保证在fail_fast_timeout(unit: second)时间段内退出 5 | pid: 6 | enabled: false 7 | path: "kafka2es.pid" 8 | override: true 9 | 10 | kafka: 11 | 
brokers: "10.116.27.23:9292,10.116.27.24:9292,10.116.27.25:9292" 12 | topic: "tm_log" 13 | consumer_group: "es_stat" 14 | 15 | es: 16 | es_hosts: 17 | - 18 | http://119.81.218.90:5858 19 | shard_num: 5 20 | replica_num: 0 21 | refresh_interval: 300 22 | 23 | index: push 24 | index_time_suffix_format: -%d%02d%02d 25 | type: go 26 | kibana_time_filed: timestamp 27 | kibana_time_format: yyyyMMdd HH:mm:ss.SSSZ 28 | bulk_size: 5000 29 | bulk_timeout: 60 30 | -------------------------------------------------------------------------------- /profiles/test/kafka_log.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | debug_file 4 | file 5 | DEBUG 6 | logs/kafka2es_kafka_debug.log 7 | false 8 | [%D %T] [%L] [%S] %M 9 | true 10 | 0M 11 | 0K 12 | 16 13 | true 14 | 15 | 16 | warn_file 17 | file 18 | WARNING 19 | logs/kafka2es_kafka_warn.log 20 | false 21 | [%D %T] [%L] [%S] %M 22 | true 23 | 0M 24 | 0K 25 | 16 26 | true 27 | 28 | 29 | -------------------------------------------------------------------------------- /profiles/test/log.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | stdout 4 | console 5 | 6 | DEBUG 7 | false 8 | [%D %T] [%L] (%S) %M 9 | 10 | 11 | debug_file 12 | file 13 | DEBUG 14 | logs/kafka2es_debug.log 15 | false 16 | [%D %T] [%L] [%S] %M 17 | true 18 | 0M 19 | 0K 20 | 16 21 | true 22 | 23 | 24 | info_file 25 | file 26 | INFO 27 | logs/kafka2es_info.log 28 | 39 | [%D %T] [%L] [%S] %M 40 | true 41 | 0M 42 | 0K 43 | true 44 | 45 | 46 | warn_file 47 | file 48 | WARNING 49 | logs/kafka2es_warn.log 50 | false 51 | [%D %T] [%L] [%S] %M 52 | true 53 | 0M 54 | 0K 55 | 16 56 | true 57 | 58 | 59 | error_file 60 | file 61 | ERROR 62 | logs/kafka2es_error.log 63 | false 64 | [%D %T] [%L] [%S] %M 65 | true 66 | 0M 67 | 0K 68 | 16 69 | true 70 | 71 | 72 | --------------------------------------------------------------------------------