├── QQ群名片.png
├── README.md
├── README_ch.md
├── diss_config.png
├── diss_config_ch.png
└── src
    ├── adapter
    │   ├── adapter.go
    │   ├── common
    │   │   ├── commonadapter.go
    │   │   └── entity.go
    │   ├── datafile
    │   │   └── datafile.go
    │   ├── elasticsearch
    │   │   └── esadapter.go
    │   ├── kafka
    │   │   └── kafkaadapter.go
    │   ├── redis
    │   │   └── redisadapter.go
    │   └── rediscluster
    │       └── redisclusteradapter.go
    ├── binloghandler
    │   └── binloghandler.go
    ├── canalconfigs
    │   ├── database1.pos
    │   ├── database1.toml
    │   └── database2.toml
    ├── canalhandler
    │   └── canalhandler.go
    ├── client
    │   ├── esclient
    │   │   └── esclient.go
    │   └── kafkaclient
    │       └── kafkaclient.go
    ├── config
    │   ├── config.go
    │   ├── config.toml
    │   └── log.xml
    ├── main.go
    └── output
        └── output.go
/QQ群名片.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gitstliu/MysqlToAll/14ccf4434ac5a8a49b9bf159a2636315ebcfa54f/QQ群名片.png
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [中文](https://github.com/gitstliu/MysqlToAll/blob/master/README_ch.md)
2 |
3 | # MysqlToAll
4 | A tool that syncs MySQL data to file/elasticsearch/kafka/redis/redis-cluster via binlog.
5 |
6 | High Performance
7 |
8 | # How to configure?
9 |
10 | ## canalconfigs
11 | ```
12 | xxx.toml (canal config file)
13 | xxx.pos (binlog pos file)
14 | ```
15 |
16 | ## config
17 | ```
18 | config.toml (Output source config file)
19 | ```
20 |
21 |
22 | ## QQ Group
23 |
24 | 691944436
25 |
26 |
27 | The answer is zero
28 |
29 | # Performance Test Data
30 |
31 | Test Machine: 4C 8G
32 |
33 |
34 | #### Scene 1: Mixed channels: two MySQL masters' binlogs written to redis, redis-cluster, kafka, es, and a local file at the same time.
35 | ```
36 | Total Binlog Row Count: 1 Million
37 | Bulksize: 1000
38 | Resource utilization
39 | CPU: 250%
40 | Mem: 3%
41 | Duration: 178s
42 | ```
43 |
44 |
45 | #### Scene 2: Single channel: one MySQL master's binlog written to redis.
46 | ```
47 | Total Binlog Row Count: 1 Million
48 | Bulksize: 1000
49 | Resource utilization
50 | CPU: 230%
51 | Mem: 0.5%
52 | Duration: 27s
53 | ```
54 |
55 | #### Scene 3: Single channel: one MySQL master's binlog written to redis-cluster.
56 | ```
57 | Total Binlog Row Count: 1 Million
58 | Bulksize: 1000
59 | Resource utilization
60 | CPU: 230%
61 | Mem: 0.5%
62 | Duration: 21s
63 | ```
64 |
65 | #### Scene 4: Single channel: one MySQL master's binlog written to elasticsearch.
66 | ```
67 | Total Binlog Row Count: 1 Million
68 | Bulksize: 1000
69 | Resource utilization
70 | CPU: 90%
71 | Mem: 0.5%
72 | Duration: 62s
73 | ```
74 |
75 | #### Scene 5: Single channel: one MySQL master's binlog written to kafka.
76 | ```
77 | Total Binlog Row Count: 1 Million
78 | Bulksize: 1000
79 | Resource utilization
80 | CPU: 230%
81 | Mem: 1%
82 | Duration: 24s
83 | ```
84 |
85 |
86 | #### Scene 6: Single channel: one MySQL master's binlog written to a local file.
87 | ```
88 | Total Binlog Row Count: 1 Million
89 | Bulksize: 1000
90 | Resource utilization
91 | CPU: 280%
92 | Mem: 0.5%
93 | Duration: 38s
94 | ```
95 |
96 |
--------------------------------------------------------------------------------
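
A note on the canalconfigs section of the README above: the `xxx.pos` file records how far the binlog has been consumed so that a restarted process can resume. The actual on-disk format is not shown in this dump, so the following is only a minimal sketch of persisting a binlog position with the `github.com/siddontang/go-mysql` types the project already depends on; the JSON layout and file path here are assumptions, not the tool's real format.

```go
package main

import (
	"encoding/json"
	"log"
	"os"

	"github.com/siddontang/go-mysql/mysql"
)

// savePosition persists a binlog position (binlog file name + offset) so a
// restarted syncer can resume from it. The JSON layout is illustrative only.
func savePosition(path string, pos mysql.Position) error {
	data, err := json.Marshal(pos)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	pos := mysql.Position{Name: "mysql-bin.000003", Pos: 154}
	if err := savePosition("canalconfigs/database1.pos", pos); err != nil {
		log.Fatal(err)
	}
}
```
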
/README_ch.md:
--------------------------------------------------------------------------------
1 | # MysqlToAll
2 |
3 |
4 | ## Use cases:
5 | (Some features are not open-sourced)
6 | 1. MySQL to NoSQL, e.g. ES, Redis, MongoDB
7 | 2. MySQL data migration, e.g. to PostgreSQL
8 | 3. MySQL to data analysis platforms for aggregation, e.g. TiDB (to be tested)
9 | 4. Data auditing
10 |
11 | ## Features:
12 | 1. Supports multiple data sources and multiple receivers
13 | 2. With multiple receivers configured for a single data source, near-synchronized data across receivers is supported (if an error occurs, receivers may differ by at most the last batch)
14 | 3. redis and redis-cluster support configurable key formatting; values can be stored as JSON or as delimiter-joined strings, and multiple data structures are handled
15 | 4. MongoDB supports an audit mode (changes of specified tables can be stored into MongoDB) and a normal mode, plus a high-speed mode (upsert is not supported when it is on, and supported when it is off) that makes full use of MongoDB's performance
16 | 5. PostgreSQL does not support upsert
17 | 6. MySQL supports upsert (the MySQL output exists mainly to support TiDB)
18 | 7. High performance: for most middleware the sync rate is around 50k rows per second (it decreases as the number of table columns grows and as the number of sync targets grows)
19 |
20 | ## Notes:
21 | 1. Prefer usages that support upsert
22 | 2. Avoid connecting directly to the master; connect to a replica instead to reduce the load on the master
23 | 3. Do not configure too many receivers for one data source, or performance drops quickly (no more than 5)
24 |
25 | # How to configure
26 |
27 | ## canalconfigs
28 | ```
29 | xxx.toml (canal config file)
30 | xxx.pos (binlog position file)
31 | ```
32 |
33 | ## config
34 | ```
35 | config.toml (output source config file)
36 | ```
37 |
38 |
39 | ## QQ Group
40 |
41 | 691944436
42 |
43 |
44 | Answer: zero
45 |
46 |
47 | # Performance Test Data
48 |
49 | Test machine: 4C 8G
50 |
51 |
52 | #### Scene: Mixed channels: serial writes to redis, redis-cluster, kafka, es, and a local file at the same time. Dual-database sync (two MySQL masters syncing binlog simultaneously)
53 | ```
54 | Count: 1 million binlog rows
55 | Bulksize: 1000
56 | Resource utilization
57 | CPU: 250%
58 | Mem: 3%
59 | Start time: 55:48
60 | End time: 58:46
61 | Total duration: 178s
62 | ```
63 |
64 |
65 | #### Scene: Single channel: redis. Single-database sync
66 | ```
67 | Count: 1 million binlog rows
68 | Bulksize: 1000
69 | Resource utilization
70 | CPU: 230%
71 | Mem: 0.5%
72 | Start time: 12:04
73 | End time: 12:31
74 | Total duration: 27s
75 | ```
76 |
77 | #### Scene: Single channel: redis-cluster. Single-database sync
78 | ```
79 | Count: 1 million binlog rows
80 | Bulksize: 1000
81 | Resource utilization
82 | CPU: 230%
83 | Mem: 0.5%
84 | Start time: 27:37
85 | End time: 27:58
86 | Total duration: 21s
87 | ```
88 |
89 | #### Scene: Single channel: elasticsearch. Single-database sync
90 | ```
91 | Count: 1 million binlog rows
92 | Bulksize: 1000
93 | Resource utilization
94 | CPU: 90%
95 | Mem: 0.5%
96 | Start time: 33:42
97 | End time: 34:44
98 | Total duration: 62s
99 | ```
100 |
101 | #### Scene: Single channel: kafka. Single-database sync
102 | ```
103 | Count: 1 million binlog rows
104 | Bulksize: 1000
105 | Resource utilization
106 | CPU: 230%
107 | Mem: 1%
108 | Start time: 38:29
109 | End time: 38:53
110 | Total duration: 24s
111 | ```
112 |
113 |
114 | #### Scene: Single channel: local file. Single-database sync
115 | ```
116 | Count: 1 million binlog rows
117 | Bulksize: 1000
118 | Resource utilization
119 | CPU: 280%
120 | Mem: 0.5%
121 | Start time: 41:48
122 | End time: 42:26
123 | Total duration: 38s
124 | ```
125 |
--------------------------------------------------------------------------------
/diss_config.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gitstliu/MysqlToAll/14ccf4434ac5a8a49b9bf159a2636315ebcfa54f/diss_config.png
--------------------------------------------------------------------------------
/diss_config_ch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gitstliu/MysqlToAll/14ccf4434ac5a8a49b9bf159a2636315ebcfa54f/diss_config_ch.png
--------------------------------------------------------------------------------
/src/adapter/adapter.go:
--------------------------------------------------------------------------------
1 | package adapter
2 |
3 | import (
4 | "adapter/common"
5 | "adapter/datafile"
6 | "adapter/elasticsearch"
7 | "adapter/kafka"
8 | "adapter/redis"
9 | "adapter/rediscluster"
10 | "config"
11 | "errors"
12 |
13 | "github.com/gitstliu/log4go"
14 | )
15 |
16 | func CreateAdapterWithName(conf config.CommonConfig) (common.WriteAdapter, error) {
17 | if conf.GetConfigName() == "Redis" {
18 | return redisadapter.CreateAdapter(conf.(*config.RedisConfig)), nil
19 | } else if conf.GetConfigName() == "RedisCluster" {
20 | return redisclusteradapter.CreateAdapter(conf.(*config.RedisClusterConfig)), nil
21 | } else if conf.GetConfigName() == "Elasticsearch" {
22 | return elasticsearchadapter.CreateAdapter(conf.(*config.ElasticsearchConfig)), nil
23 | } else if conf.GetConfigName() == "Kafka" {
24 | return kafkaadapter.CreateAdapter(conf.(*config.KafkaConfig)), nil
25 | } else if conf.GetConfigName() == "Datafile" {
26 | return datafileadapter.CreateAdapter(conf.(*config.DatafileConfig)), nil
27 | }
28 |
29 | log4go.Error("Config Type %v is not support !!!!", conf.GetConfigName())
30 | return nil, errors.New("Config Type Error!")
31 | }
32 |
--------------------------------------------------------------------------------
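
A minimal wiring sketch (not part of the repository) of how this factory might be used to fan one batch of binlog entities out to every configured output. It assumes a `[]config.CommonConfig` slice is available from the configuration layer, which is not shown in this dump.

```go
package main

import (
	"adapter"
	"adapter/common"
	"config"

	"github.com/gitstliu/log4go"
)

// buildAdapters turns each configured output into a WriteAdapter via the factory above.
func buildAdapters(confs []config.CommonConfig) []common.WriteAdapter {
	adapters := []common.WriteAdapter{}
	for _, currConf := range confs {
		currAdapter, createErr := adapter.CreateAdapterWithName(currConf)
		if createErr != nil {
			log4go.Error(createErr)
			continue
		}
		adapters = append(adapters, currAdapter)
	}
	return adapters
}

// writeAll fans one batch of binlog entities out to every receiver, mirroring the
// "one source, many receivers" design described in the README.
func writeAll(adapters []common.WriteAdapter, batch []*common.RawLogEntity) {
	for _, currAdapter := range adapters {
		if writeErr := currAdapter.Write(batch); writeErr != nil {
			log4go.Error(writeErr)
		}
	}
}

func main() {
	// The concrete configs (RedisConfig, KafkaConfig, ...) would normally be decoded
	// from config.toml; an empty slice keeps this sketch self-contained.
	adapters := buildAdapters(nil)
	writeAll(adapters, nil)
	for _, currAdapter := range adapters {
		currAdapter.Close()
	}
}
```
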
/src/adapter/common/commonadapter.go:
--------------------------------------------------------------------------------
1 | package common
2 |
3 | type WriteAdapter interface {
4 | Write([]*RawLogEntity) error
5 | Close() error
6 | }
7 |
--------------------------------------------------------------------------------
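
Every output goes through this two-method interface, so adding a new sink only requires implementing it. Below is a minimal sketch of a hypothetical adapter that prints each batch to stdout; it is not part of the repository, just an illustration of the contract.

```go
package stdoutadapter

import (
	"adapter/common"
	"fmt"
)

// StdoutAdapter is a hypothetical WriteAdapter that dumps each batch to stdout.
type StdoutAdapter struct{}

func CreateAdapter() common.WriteAdapter {
	return &StdoutAdapter{}
}

// Write prints every entity using the ToString helper defined in entity.go.
func (this *StdoutAdapter) Write(entities []*common.RawLogEntity) error {
	for _, currEntity := range entities {
		fmt.Print(currEntity.ToString())
	}
	return nil
}

// Close has nothing to release for stdout.
func (this *StdoutAdapter) Close() error {
	return nil
}
```
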
/src/adapter/common/entity.go:
--------------------------------------------------------------------------------
1 | package common
2 |
3 | import (
4 | "fmt"
5 | "strings"
6 |
7 | "github.com/gitstliu/go-commonfunctions"
8 | )
9 |
10 | type RawLogEntity struct {
11 | TableName string
12 | Action string
13 | Rows [][]interface{}
14 | Header []string
15 | HeaderMap map[string]int
16 | ValueMap map[string]interface{}
17 | }
18 |
19 | func (this *RawLogEntity) ToString() string {
20 | meta := fmt.Sprintf("TableName:%v Action:%v Header:%v \r\n", this.TableName, this.Action, strings.Join(this.Header, "\t"))
21 | for _, currRow := range this.Rows {
22 | meta += fmt.Sprintf("%v \r\n", currRow)
23 | }
24 | meta += "************************** \r\n"
25 | return meta
26 | }
27 |
28 | func (this *RawLogEntity) ToJson() (string, error) {
29 | metaMap := map[string]interface{}{"TableName": this.TableName, "Action": this.Action, "Header": this.Header, "Rows": this.Rows}
30 | return commonfunctions.ObjectToJson(metaMap)
31 | }
32 |
--------------------------------------------------------------------------------
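
A small sketch (values are made up) of how a RawLogEntity is typically populated by the binlog handler and what the two serializers produce. Throughout the adapters, the last element of Rows is treated as the current ("after") row image.

```go
package main

import (
	"adapter/common"
	"fmt"
)

func main() {
	// One update event on a hypothetical "users" table.
	entity := &common.RawLogEntity{
		TableName: "users",
		Action:    "update",
		Header:    []string{"id", "name"},
		HeaderMap: map[string]int{"id": 0, "name": 1},
		Rows:      [][]interface{}{{42, "alice"}, {42, "alice2"}},
		ValueMap:  map[string]interface{}{"id": 42, "name": "alice2"},
	}

	// Tab-separated, human-readable form used by the datafile adapter.
	fmt.Print(entity.ToString())

	// JSON form used by the kafka adapter.
	if entityJson, err := entity.ToJson(); err == nil {
		fmt.Println(entityJson)
	}
}
```
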
/src/adapter/datafile/datafile.go:
--------------------------------------------------------------------------------
1 | package datafileadapter
2 |
3 | import (
4 | "adapter/common"
5 | "config"
6 | "os"
7 | "path"
8 |
9 | "github.com/gitstliu/log4go"
10 | )
11 |
12 | type DatafileAdapter struct {
13 | common.WriteAdapter
14 | Config *config.DatafileConfig
15 | file *os.File
16 | Filepath string
17 | Filename string
18 | }
19 |
20 | func CreateAdapter(conf *config.DatafileConfig) common.WriteAdapter {
21 | adapter := &DatafileAdapter{Config: conf}
22 | adapter.Filepath, adapter.Filename = path.Split(adapter.Config.Filename)
23 |
24 | mkdirErr := os.MkdirAll(adapter.Filepath, os.ModePerm)
25 | if mkdirErr != nil {
26 | log4go.Error(mkdirErr)
27 | panic(mkdirErr)
28 | }
29 |
30 | currFile, openFileErr := os.OpenFile(adapter.Config.Filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, os.ModePerm)
31 | if openFileErr != nil {
32 | log4go.Error(openFileErr)
33 | panic(openFileErr)
34 | }
35 | adapter.file = currFile
36 |
37 | return adapter
38 | }
39 |
40 | func (this *DatafileAdapter) Write(entities []*common.RawLogEntity) error {
41 |
42 | content := ""
43 | for _, currEntity := range entities {
44 | content += currEntity.ToString()
45 | }
46 | _, writeToFileErr := this.file.WriteString(content)
47 | if writeToFileErr != nil {
48 | log4go.Error(writeToFileErr)
49 | panic(writeToFileErr)
50 | return writeToFileErr
51 | }
52 | syncToDiskErr := this.file.Sync()
53 |
54 | if syncToDiskErr != nil {
55 | log4go.Error(syncToDiskErr)
56 | panic(syncToDiskErr)
57 | return syncToDiskErr
58 | }
59 |
60 | return nil
61 | }
62 |
63 | func (this *DatafileAdapter) Close() error {
64 | closeFileErr := this.file.Close()
65 | if closeFileErr != nil {
66 | log4go.Error(closeFileErr)
67 | return closeFileErr
68 | }
69 | return nil
70 | }
71 |
--------------------------------------------------------------------------------
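
A minimal usage sketch (not in the repository); it assumes config.DatafileConfig needs nothing beyond the Filename field referenced above. Note that Write calls Sync after every batch, so each batch is fsynced to disk before the next one is handled.

```go
package main

import (
	"adapter/common"
	"adapter/datafile"
	"config"
)

func main() {
	// CreateAdapter panics if the directory cannot be created or the file
	// cannot be opened, so there is no error to check here.
	conf := &config.DatafileConfig{Filename: "/tmp/mysqltoall/out.log"}
	fileAdapter := datafileadapter.CreateAdapter(conf)
	defer fileAdapter.Close()

	entity := &common.RawLogEntity{
		TableName: "users",
		Action:    "insert",
		Header:    []string{"id", "name"},
		Rows:      [][]interface{}{{1, "alice"}},
	}
	_ = fileAdapter.Write([]*common.RawLogEntity{entity})
}
```
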
/src/adapter/elasticsearch/esadapter.go:
--------------------------------------------------------------------------------
1 | package elasticsearchadapter
2 |
3 | import (
4 | "adapter/common"
5 | "client/esclient"
6 | "config"
7 | "errors"
8 | "strings"
9 |
10 | "github.com/gitstliu/go-commonfunctions"
11 | "github.com/gitstliu/log4go"
12 | )
13 |
14 | type ElasticsearchAdapter struct {
15 | common.WriteAdapter
16 | esClient *esclient.Client
17 | Config *config.ElasticsearchConfig
18 | TableActionMap map[string]map[string][]*config.ElasticsearchTableConfig
19 | TableKeyMap map[string]map[string][]*config.ElasticsearchTableConfig
20 | }
21 |
22 | func CreateAdapter(conf *config.ElasticsearchConfig) common.WriteAdapter {
23 |
24 | adapter := &ElasticsearchAdapter{Config: conf}
25 | adapter.TableActionMap, adapter.TableKeyMap = DecoderAdapterTableMessage(conf.Tables)
26 | clientConfig := &esclient.ClientConfig{}
27 | clientConfig.Addr = conf.Address
28 | clientConfig.HTTPS = conf.IsHttps
29 | clientConfig.User = conf.User
30 | clientConfig.Password = conf.Password
31 | adapter.esClient = esclient.NewClient(clientConfig)
32 |
33 | if adapter.esClient == nil {
34 | log4go.Error("Can not create es client !!")
35 | return nil
36 | }
37 |
38 | return adapter
39 | }
40 |
41 | func (this *ElasticsearchAdapter) Write(entities []*common.RawLogEntity) error {
42 |
43 | esRequest := []*esclient.BulkRequest{}
44 |
45 | for _, currEntity := range entities {
46 |
47 | currConfigs := this.GetTableActionConfigs(currEntity.TableName, currEntity.Action)
48 |
49 | log4go.Debug("currEntity.Action = %v", currEntity.Action)
50 | if currEntity.Action == "insert" || currEntity.Action == "update" {
51 | for _, currConfig := range currConfigs {
52 | keyMeta := []interface{}{}
53 | for _, currKey := range currConfig.Key {
54 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
55 | }
56 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
57 |
58 | currRequest := &esclient.BulkRequest{}
59 | currRequest.Action = "index"
60 | currRequest.Data = currEntity.ValueMap
61 | currRequest.Index = currConfig.Index
62 | currRequest.ID = keyValue
63 | currRequest.Type = currConfig.IndexType
64 |
65 | esRequest = append(esRequest, currRequest)
66 | }
67 | } else if currEntity.Action == "delete" {
68 | for _, currConfig := range currConfigs {
69 | keyMeta := []interface{}{}
70 | for _, currKey := range currConfig.Key {
71 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
72 | }
73 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
74 |
75 | currRequest := &esclient.BulkRequest{}
76 | currRequest.Action = "delete"
77 | currRequest.Index = currConfig.Index
78 | currRequest.ID = keyValue
79 | currRequest.Type = currConfig.IndexType
80 |
81 | esRequest = append(esRequest, currRequest)
82 | }
83 | }
84 | }
85 |
86 | log4go.Debug("esRequest = %v", esRequest)
87 | bulkErr := this.doBulk(this.Config.Address, esRequest)
88 | return bulkErr
89 | }
90 |
91 | func (this *ElasticsearchAdapter) Close() error {
92 | return nil
93 | }
94 |
95 | func (this *ElasticsearchAdapter) doBulk(url string, reqs []*esclient.BulkRequest) error {
96 | if len(reqs) == 0 {
97 | return nil
98 | }
99 |
100 | resp, err := this.esClient.DoBulk(url, reqs)
101 | if err != nil {
102 | log4go.Error("sync docs err %v", err)
103 | return err
104 | } else if resp.Code/100 == 2 || resp.Errors {
105 | for i := 0; i < len(resp.Items); i++ {
106 | for action, item := range resp.Items[i] {
107 | if len(item.Error) > 0 {
108 | log4go.Error("%v index: %v, type: %v, id: %v, status: %v, error: %v",
109 | action, item.Index, item.Type, item.ID, item.Status, item.Error)
110 | return errors.New(string(item.Error))
111 | }
112 | }
113 | }
114 | }
115 |
116 | return nil
117 | }
118 |
119 | func DecoderAdapterTableMessage(configs map[string]*config.ElasticsearchTableConfig) (map[string]map[string][]*config.ElasticsearchTableConfig, map[string]map[string][]*config.ElasticsearchTableConfig) {
120 | tableActionMap := map[string]map[string][]*config.ElasticsearchTableConfig{}
121 | tableKeyMap := map[string]map[string][]*config.ElasticsearchTableConfig{}
122 | for _, tableConfig := range configs {
123 | if len(tableConfig.Actions) > 0 {
124 | _, currActionMapExist := tableActionMap[tableConfig.Tablename]
125 | if !currActionMapExist {
126 | tableActionMap[tableConfig.Tablename] = map[string][]*config.ElasticsearchTableConfig{}
127 | }
128 | for _, currAction := range tableConfig.Actions {
129 | _, actionConfigListExist := tableActionMap[tableConfig.Tablename][currAction]
130 | if !actionConfigListExist {
131 | tableActionMap[tableConfig.Tablename][currAction] = []*config.ElasticsearchTableConfig{}
132 | }
133 | tableActionMap[tableConfig.Tablename][currAction] = append(tableActionMap[tableConfig.Tablename][currAction], tableConfig)
134 | }
135 | }
136 |
137 | if len(tableConfig.Key) > 0 {
138 | _, currKeyMapExist := tableKeyMap[tableConfig.Tablename]
139 | if !currKeyMapExist {
140 | tableKeyMap[tableConfig.Tablename] = map[string][]*config.ElasticsearchTableConfig{}
141 | }
142 | for _, currKey := range tableConfig.Key {
143 | _, actionConfigListExist := tableKeyMap[tableConfig.Tablename][currKey]
144 | if !actionConfigListExist {
145 | tableKeyMap[tableConfig.Tablename][currKey] = []*config.ElasticsearchTableConfig{}
146 | }
147 | tableKeyMap[tableConfig.Tablename][currKey] = append(tableKeyMap[tableConfig.Tablename][currKey], tableConfig)
148 | }
149 | }
150 | }
151 | return tableActionMap, tableKeyMap
152 | }
153 |
154 | func (this *ElasticsearchAdapter) GetTableActionConfigs(table, action string) []*config.ElasticsearchTableConfig {
155 | _, tableExist := this.TableActionMap[table]
156 | if !tableExist {
157 | return nil
158 | }
159 | actionConfigs, actionExist := this.TableActionMap[table][action]
160 | if !actionExist {
161 | return nil
162 | }
163 |
164 | return actionConfigs
165 | }
166 |
167 | func (this *ElasticsearchAdapter) GetTableKeyConfigs(table, key string) []*config.ElasticsearchTableConfig {
168 | _, tableExist := this.TableKeyMap[table]
169 | if !tableExist {
170 | return nil
171 | }
172 | keyConfigs, keyExist := this.TableKeyMap[table][key]
173 | if !keyExist {
174 | return nil
175 | }
176 |
177 | return keyConfigs
178 | }
179 |
--------------------------------------------------------------------------------
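
For reference, a standalone sketch (hypothetical config values) of how the bulk document ID above is assembled: the KeyPrefix, the key-column values joined with Keysplit, and the KeyPostfix are joined with ":".

```go
package main

import (
	"fmt"
	"strings"

	"github.com/gitstliu/go-commonfunctions"
)

func main() {
	// Hypothetical table config: KeyPrefix "user", Key ["id"], Keysplit "_", KeyPostfix "v1".
	keyPrefix, keysplit, keyPostfix := "user", "_", "v1"

	// Values of the key columns taken from the last row image, as in esadapter.Write.
	keyMeta := []interface{}{42}

	id := strings.Join([]string{
		keyPrefix,
		strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), keysplit),
		keyPostfix,
	}, ":")

	fmt.Println(id) // e.g. user:42:v1
}
```
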
/src/adapter/kafka/kafkaadapter.go:
--------------------------------------------------------------------------------
1 | package kafkaadapter
2 |
3 | import (
4 | "adapter/common"
5 | "client/kafkaclient"
6 | "config"
7 |
8 | "github.com/gitstliu/log4go"
9 | )
10 |
11 | type KafkaAdapter struct {
12 | common.WriteAdapter
13 | kafkaClient *kafkaclient.Client
14 | Config *config.KafkaConfig
15 | TableActionMap map[string]map[string][]*config.KafkaTableConfig
16 | }
17 |
18 | func CreateAdapter(conf *config.KafkaConfig) common.WriteAdapter {
19 |
20 | adapter := &KafkaAdapter{Config: conf}
21 | adapter.TableActionMap = DecoderAdapterTableMessage(conf.Tables)
22 | clientConfig := &kafkaclient.ClientConfig{}
23 | clientConfig.Address = conf.Address
24 |
25 | adapter.kafkaClient = kafkaclient.NewClient(clientConfig)
26 |
27 | if adapter.kafkaClient == nil {
28 | log4go.Error("Can not create kafka client !!")
29 | return nil
30 | }
31 |
32 | return adapter
33 | }
34 |
35 | func (this *KafkaAdapter) Write(entities []*common.RawLogEntity) error {
36 |
37 | // jsons := []string{}
38 | for _, currEntity := range entities {
39 | entityJson, entityJsonErr := currEntity.ToJson()
40 | if entityJsonErr != nil {
41 | log4go.Error(entityJsonErr)
42 | return entityJsonErr
43 | }
44 |
45 | currConfigs := this.GetTableActionConfigs(currEntity.TableName, currEntity.Action)
46 |
47 | for _, currConfig := range currConfigs {
48 | sendMessagesErr := this.kafkaClient.SendMessages([]string{entityJson}, currConfig.Topic)
49 | if sendMessagesErr != nil {
50 | log4go.Error(sendMessagesErr)
51 | return sendMessagesErr
52 | }
53 | }
54 | }
55 |
56 | return nil
57 | }
58 |
59 | func (this *KafkaAdapter) Close() error {
60 | this.kafkaClient.Close()
61 | return nil
62 | }
63 |
64 | func DecoderAdapterTableMessage(configs map[string]*config.KafkaTableConfig) map[string]map[string][]*config.KafkaTableConfig {
65 | tableActionMap := map[string]map[string][]*config.KafkaTableConfig{}
66 | for _, tableConfig := range configs {
67 | if len(tableConfig.Actions) > 0 {
68 | _, currActionMapExist := tableActionMap[tableConfig.Tablename]
69 | if !currActionMapExist {
70 | tableActionMap[tableConfig.Tablename] = map[string][]*config.KafkaTableConfig{}
71 | }
72 | for _, currAction := range tableConfig.Actions {
73 | _, actionConfigListExist := tableActionMap[tableConfig.Tablename][currAction]
74 | if !actionConfigListExist {
75 | tableActionMap[tableConfig.Tablename][currAction] = []*config.KafkaTableConfig{}
76 | }
77 | tableActionMap[tableConfig.Tablename][currAction] = append(tableActionMap[tableConfig.Tablename][currAction], tableConfig)
78 | }
79 | }
80 | }
81 | return tableActionMap
82 | }
83 |
84 | func (this *KafkaAdapter) GetTableActionConfigs(table, action string) []*config.KafkaTableConfig {
85 | _, tableExist := this.TableActionMap[table]
86 | if !tableExist {
87 | return nil
88 | }
89 | actionConfigs, actionExist := this.TableActionMap[table][action]
90 | if !actionExist {
91 | return nil
92 | }
93 |
94 | return actionConfigs
95 | }
96 |
--------------------------------------------------------------------------------
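
Write above serializes each entity and calls SendMessages once per entity per topic, even though the client accepts a slice of messages. A possible variation, sketched below and not part of the repository, groups payloads by topic and sends each group in a single call; it would have to live in package kafkaadapter to reach the unexported kafkaClient field.

```go
package kafkaadapter

import "adapter/common"

// writeBatched is an illustrative alternative to Write: it groups the JSON
// payloads by topic and calls SendMessages once per topic instead of once per
// entity, using the same client API the adapter already relies on.
func (this *KafkaAdapter) writeBatched(entities []*common.RawLogEntity) error {
	messagesByTopic := map[string][]string{}
	for _, currEntity := range entities {
		entityJson, entityJsonErr := currEntity.ToJson()
		if entityJsonErr != nil {
			return entityJsonErr
		}
		for _, currConfig := range this.GetTableActionConfigs(currEntity.TableName, currEntity.Action) {
			messagesByTopic[currConfig.Topic] = append(messagesByTopic[currConfig.Topic], entityJson)
		}
	}
	for topic, messages := range messagesByTopic {
		if sendErr := this.kafkaClient.SendMessages(messages, topic); sendErr != nil {
			return sendErr
		}
	}
	return nil
}
```
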
/src/adapter/redis/redisadapter.go:
--------------------------------------------------------------------------------
1 | package redisadapter
2 |
3 | import (
4 | "adapter/common"
5 | "config"
6 | "errors"
7 | "fmt"
8 | "strings"
9 |
10 | "github.com/garyburd/redigo/redis"
11 | "github.com/gitstliu/go-commonfunctions"
12 | "github.com/gitstliu/log4go"
13 | )
14 |
15 | type RedisPipelineCommand struct {
16 | CommandName string
17 | Key string
18 | Args []interface{}
19 | }
20 |
21 | type RedisAdapter struct {
22 | common.WriteAdapter
23 | redisClient redis.Conn
24 | Config *config.RedisConfig
25 | TableActionMap map[string]map[string][]*config.RedisTableConfig
26 | TableKeyMap map[string]map[string][]*config.RedisTableConfig
27 | }
28 |
29 | func CreateAdapter(conf *config.RedisConfig) common.WriteAdapter {
30 | adapter := &RedisAdapter{Config: conf}
31 | adapter.TableActionMap, adapter.TableKeyMap = DecoderAdapterTableMessage(conf.Tables)
32 | if conf.Password != "" {
33 | option := redis.DialPassword(conf.Password)
34 | currClient, redisConnErr := redis.Dial("tcp", conf.Address, option)
35 | if redisConnErr != nil {
36 | log4go.Error(redisConnErr)
37 | panic(redisConnErr)
38 | }
39 | adapter.redisClient = currClient
40 | } else {
41 | currClient, redisConnErr := redis.Dial("tcp", conf.Address)
42 | if redisConnErr != nil {
43 | log4go.Error(redisConnErr)
44 | panic(redisConnErr)
45 | }
46 | adapter.redisClient = currClient
47 | }
48 |
49 | adapter.redisClient.Do("SELECT", conf.DB)
50 | return adapter
51 | }
52 |
53 | func DecoderAdapterTableMessage(configs map[string]*config.RedisTableConfig) (map[string]map[string][]*config.RedisTableConfig, map[string]map[string][]*config.RedisTableConfig) {
54 | tableActionMap := map[string]map[string][]*config.RedisTableConfig{}
55 | tableKeyMap := map[string]map[string][]*config.RedisTableConfig{}
56 | for _, tableConfig := range configs {
57 | if len(tableConfig.Actions) > 0 {
58 | _, currActionMapExist := tableActionMap[tableConfig.Tablename]
59 | if !currActionMapExist {
60 | tableActionMap[tableConfig.Tablename] = map[string][]*config.RedisTableConfig{}
61 | }
62 | for _, currAction := range tableConfig.Actions {
63 | _, actionConfigListExist := tableActionMap[tableConfig.Tablename][currAction]
64 | if !actionConfigListExist {
65 | tableActionMap[tableConfig.Tablename][currAction] = []*config.RedisTableConfig{}
66 | }
67 | tableActionMap[tableConfig.Tablename][currAction] = append(tableActionMap[tableConfig.Tablename][currAction], tableConfig)
68 | }
69 | }
70 |
71 | if len(tableConfig.Key) > 0 {
72 | _, currKeyMapExist := tableKeyMap[tableConfig.Tablename]
73 | if !currKeyMapExist {
74 | tableKeyMap[tableConfig.Tablename] = map[string][]*config.RedisTableConfig{}
75 | }
76 | for _, currKey := range tableConfig.Key {
77 | _, actionConfigListExist := tableKeyMap[tableConfig.Tablename][currKey]
78 | if !actionConfigListExist {
79 | tableKeyMap[tableConfig.Tablename][currKey] = []*config.RedisTableConfig{}
80 | }
81 | tableKeyMap[tableConfig.Tablename][currKey] = append(tableKeyMap[tableConfig.Tablename][currKey], tableConfig)
82 | }
83 | }
84 | }
85 | return tableActionMap, tableKeyMap
86 | }
87 |
88 | func (this *RedisAdapter) GetTableActionConfigs(table, action string) []*config.RedisTableConfig {
89 | _, tableExist := this.TableActionMap[table]
90 | if !tableExist {
91 | return nil
92 | }
93 | actionConfigs, actionExist := this.TableActionMap[table][action]
94 | if !actionExist {
95 | return nil
96 | }
97 |
98 | return actionConfigs
99 | }
100 |
101 | func (this *RedisAdapter) GetTableKeyConfigs(table, key string) []*config.RedisTableConfig {
102 | _, tableExist := this.TableKeyMap[table]
103 | if !tableExist {
104 | return nil
105 | }
106 | keyConfigs, keyExist := this.TableKeyMap[table][key]
107 | if !keyExist {
108 | return nil
109 | }
110 |
111 | return keyConfigs
112 | }
113 |
114 | func (this *RedisAdapter) Write(entities []*common.RawLogEntity) error {
115 | commands := []*RedisPipelineCommand{}
116 |
117 | for _, currEntity := range entities {
118 | tableActionsConfig, tableConfigExist := this.TableActionMap[currEntity.TableName]
119 | if !tableConfigExist {
120 | continue
121 | }
122 |
123 | for currAction, currConfigs := range tableActionsConfig {
124 | if currAction == currEntity.Action {
125 | for _, currConfig := range currConfigs {
126 | var currCommand *RedisPipelineCommand
127 | var creatCommandErr error
128 | if currConfig.Struct == "string" {
129 | if currEntity.Action == "update" {
130 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringSet(currConfig, currEntity)
131 | log4go.Debug("currCommand = %v", currCommand)
132 | } else if currEntity.Action == "insert" {
133 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringSet(currConfig, currEntity)
134 | log4go.Debug("currCommand = %v", currCommand)
135 | } else if currEntity.Action == "delete" {
136 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringDel(currConfig, currEntity)
137 | log4go.Debug("currCommand = %v", currCommand)
138 | }
139 |
140 | } else if currConfig.Struct == "list" {
141 | if currEntity.Action == "update" {
142 | currCommand, creatCommandErr = CreateRedisPipelineCommandForListRPush(currConfig, currEntity)
143 | log4go.Debug("currCommand = %v", currCommand)
144 | } else if currEntity.Action == "insert" {
145 | currCommand, creatCommandErr = CreateRedisPipelineCommandForListRPush(currConfig, currEntity)
146 | log4go.Debug("currCommand = %v", currCommand)
147 | } else if currEntity.Action == "delete" {
148 | continue
149 | }
150 | } else if currConfig.Struct == "set" {
151 | if currEntity.Action == "update" {
152 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SADD")
153 | log4go.Debug("currCommand = %v", currCommand)
154 | } else if currEntity.Action == "insert" {
155 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SADD")
156 | log4go.Debug("currCommand = %v", currCommand)
157 | } else if currEntity.Action == "delete" {
158 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SREM")
159 | log4go.Debug("currCommand = %v", currCommand)
160 | }
161 | } else if currConfig.Struct == "hash" {
162 | if currEntity.Action == "update" {
163 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHSet(currConfig, currEntity)
164 | log4go.Debug("currCommand = %v", currCommand)
165 | } else if currEntity.Action == "insert" {
166 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHSet(currConfig, currEntity)
167 | log4go.Debug("currCommand = %v", currCommand)
168 | } else if currEntity.Action == "delete" {
169 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHDel(currConfig, currEntity)
170 | log4go.Debug("currCommand = %v", currCommand)
171 | }
172 | } else {
173 | log4go.Error("Redis data struct exception. struct is %v", currConfig.Struct)
174 | return fmt.Errorf("Redis data struct exception. struct is %v", currConfig.Struct)
175 | }
176 | if creatCommandErr != nil {
177 | log4go.Error(creatCommandErr)
178 | return creatCommandErr
179 | }
180 |
181 | if currCommand != nil {
182 | log4go.Debug(commonfunctions.ObjectToJson(currCommand))
183 | commands = append(commands, currCommand)
184 | }
185 | }
186 | }
187 | }
188 | }
189 | _, commandsSendErrors := this.SendPipelineCommands(commands)
190 | for _, currErr := range commandsSendErrors {
191 | if currErr != nil {
192 | return currErr
193 | }
194 | }
195 |
196 | return nil
197 | }
198 |
199 | func CreateRedisPipelineCommandForStringSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
200 |
201 | currCommand := &RedisPipelineCommand{}
202 | currCommand.CommandName = "SET"
203 | keyMeta := []interface{}{}
204 | for _, currKey := range currConfig.Key {
205 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
206 | }
207 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
208 |
209 | bodyValue := ""
210 | if currConfig.Valuetype == "json" {
211 | valueMeta := map[string]interface{}{}
212 | for columnIndex, columnName := range currEntity.Header {
213 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
214 | }
215 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
216 | if valueMetaJsonErr != nil {
217 | log4go.Error(valueMetaJsonErr)
218 | panic(valueMetaJsonErr)
219 | return nil, valueMetaJsonErr
220 | }
221 | bodyValue = valueMetaJson
222 | } else if currConfig.Valuetype == "splitstring" {
223 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
224 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
225 | } else {
226 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
227 | valueTypeErr := errors.New("Error valuetype")
228 | panic(valueTypeErr)
229 | return nil, valueTypeErr
230 | }
231 |
232 | currCommand.Key = keyValue
233 | currCommand.Args = []interface{}{bodyValue}
234 | return currCommand, nil
235 | }
236 |
237 | func CreateRedisPipelineCommandForStringDel(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
238 |
239 | currCommand := &RedisPipelineCommand{}
240 | currCommand.CommandName = "DEL"
241 | keyMeta := []interface{}{}
242 | for _, currKey := range currConfig.Key {
243 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
244 | }
245 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
246 |
247 | currCommand.Key = keyValue
248 | currCommand.Args = []interface{}{}
249 | return currCommand, nil
250 | }
251 |
252 | func CreateRedisPipelineCommandForListRPush(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
253 |
254 | currCommand := &RedisPipelineCommand{}
255 | currCommand.CommandName = "RPUSH"
256 | keyValue := currConfig.Reidskey
257 |
258 | bodyValue := ""
259 | if currConfig.Valuetype == "json" {
260 | valueMeta := map[string]interface{}{}
261 | for columnIndex, columnName := range currEntity.Header {
262 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
263 | }
264 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
265 | if valueMetaJsonErr != nil {
266 | log4go.Error(valueMetaJsonErr)
267 | panic(valueMetaJsonErr)
268 | return nil, valueMetaJsonErr
269 | }
270 | bodyValue = valueMetaJson
271 | } else if currConfig.Valuetype == "splitstring" {
272 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
273 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
274 | } else {
275 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
276 | valueTypeErr := errors.New("Error valuetype")
277 | panic(valueTypeErr)
278 | return nil, valueTypeErr
279 | }
280 |
281 | currCommand.Key = keyValue
282 | currCommand.Args = []interface{}{bodyValue}
283 | return currCommand, nil
284 | }
285 |
286 | func CreateRedisPipelineCommandForSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity, command string) (*RedisPipelineCommand, error) {
287 | currCommand := &RedisPipelineCommand{}
288 | currCommand.CommandName = command
289 | keyValue := currConfig.Reidskey
290 |
291 | bodyValue := ""
292 | if currConfig.Valuetype == "json" {
293 | valueMeta := map[string]interface{}{}
294 | for columnIndex, columnName := range currEntity.Header {
295 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
296 | }
297 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
298 | if valueMetaJsonErr != nil {
299 | log4go.Error(valueMetaJsonErr)
300 | panic(valueMetaJsonErr)
301 | return nil, valueMetaJsonErr
302 | }
303 | bodyValue = valueMetaJson
304 | } else if currConfig.Valuetype == "splitstring" {
305 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
306 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
307 | } else {
308 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
309 | valueTypeErr := errors.New("Error valuetype")
310 | panic(valueTypeErr)
311 | return nil, valueTypeErr
312 | }
313 |
314 | currCommand.Key = keyValue
315 | currCommand.Args = []interface{}{bodyValue}
316 | return currCommand, nil
317 | }
318 |
319 | func CreateRedisPipelineCommandForHashHSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
320 | currCommand := &RedisPipelineCommand{}
321 | currCommand.CommandName = "HSET"
322 |
323 | keyMeta := []interface{}{}
324 | for _, currKey := range currConfig.Key {
325 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
326 | }
327 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
328 |
329 | bodyValue := ""
330 | if currConfig.Valuetype == "json" {
331 | valueMeta := map[string]interface{}{}
332 | for columnIndex, columnName := range currEntity.Header {
333 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
334 | }
335 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
336 | if valueMetaJsonErr != nil {
337 | log4go.Error(valueMetaJsonErr)
338 | panic(valueMetaJsonErr)
339 | return nil, valueMetaJsonErr
340 | }
341 | bodyValue = valueMetaJson
342 | } else if currConfig.Valuetype == "splitstring" {
343 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
344 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
345 | } else {
346 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
347 | valueTypeErr := errors.New("Error valuetype")
348 | panic(valueTypeErr)
349 | return nil, valueTypeErr
350 | }
351 |
352 | currCommand.Key = currConfig.Reidskey
353 | currCommand.Args = []interface{}{keyValue, bodyValue}
354 | return currCommand, nil
355 | }
356 |
357 | func CreateRedisPipelineCommandForHashHDel(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
358 | currCommand := &RedisPipelineCommand{}
359 | currCommand.CommandName = "HDEL"
360 |
361 | keyMeta := []interface{}{}
362 | for _, currKey := range currConfig.Key {
363 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
364 | }
365 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
366 |
367 | currCommand.Key = currConfig.Reidskey
368 | currCommand.Args = []interface{}{keyValue}
369 | return currCommand, nil
370 | }
371 |
372 | func (this *RedisAdapter) Close() error {
373 | closeErr := this.redisClient.Close()
374 | if closeErr != nil {
375 | log4go.Error(closeErr)
376 | }
377 | return closeErr
378 | }
379 |
380 | func (this *RedisAdapter) SET(key, value string) (string, error) {
381 | log4go.Debug("key is %v", key)
382 | return redis.String(this.redisClient.Do("SET", key, value))
383 | }
384 |
385 | func (this *RedisAdapter) GET(key string) (string, error) {
386 | log4go.Debug("key is %v", key)
387 | return redis.String(this.redisClient.Do("GET", key))
388 | }
389 |
390 | func (this *RedisAdapter) KEYS(key string) ([]string, error) {
391 | log4go.Debug("key is %v", key)
392 | return redis.Strings(this.redisClient.Do("KEYS", key))
393 | }
394 |
395 | func (this *RedisAdapter) LPUSH(key string, value []interface{}) (interface{}, error) {
396 | log4go.Debug("key is %v, value is %v", key, value)
397 | return this.redisClient.Do("LPUSH", append([](interface{}){key}, value...)...)
398 | }
399 |
400 | func (this *RedisAdapter) RPUSH(key string, value []interface{}) (interface{}, error) {
401 | log4go.Debug("key is %v, value is %v", key, value)
402 | return this.redisClient.Do("RPUSH", append([](interface{}){key}, value...)...)
403 | }
404 |
405 | func (this *RedisAdapter) LPOP(key string) (string, error) {
406 | return redis.String(this.redisClient.Do("LPOP", key))
407 | }
408 |
409 | func (this *RedisAdapter) LRANGE(key string, index int, endIndex int) ([]string, error) {
410 | return redis.Strings(this.redisClient.Do("LRANGE", key, index, endIndex))
411 | }
412 |
413 | func (this *RedisAdapter) SendPipelineCommands(commands []*RedisPipelineCommand) ([]interface{}, []error) {
414 | // defer commonfunctions.PanicHandler()
415 | log4go.Debug("commands %v", commands)
416 | errorList := make([]error, 0, len(commands)+1)
417 |
418 | for index, value := range commands {
419 | log4go.Debug("Curr Commands index is %v value is %v", index, value)
420 | log4go.Debug("********************")
421 | log4go.Debug("%v", [](interface{}){value.Key})
422 |
423 | for in, v := range value.Args {
424 | log4go.Debug("===== %v %v", in, v)
425 | }
426 |
427 | log4go.Debug("%v", value.Args...)
428 | log4go.Debug("%v", append([](interface{}){value.Key}, value.Args...))
429 | log4go.Debug("%v", append([](interface{}){value.Key}, value.Args...)...)
430 | log4go.Debug("CommandName is %v", value.CommandName)
431 | currErr := this.redisClient.Send(value.CommandName, append([](interface{}){value.Key}, value.Args...)...)
432 |
433 | if currErr != nil {
434 | log4go.Error(currErr)
435 | errorList = append(errorList, currErr)
436 | log4go.Debug("command === %v", value)
437 | }
438 | }
439 |
440 | log4go.Debug("Send finished!!")
441 |
442 | flushErr := this.redisClient.Flush()
443 |
444 | if flushErr != nil {
445 | log4go.Error(flushErr)
446 | errorList = append(errorList, flushErr)
447 |
448 | return nil, errorList
449 | }
450 |
451 | replys := [](interface{}){}
452 |
453 | replysLength := len(commands)
454 |
455 | for i := 0; i < replysLength; i++ {
456 | reply, receiveErr := this.redisClient.Receive()
457 |
458 | if receiveErr != nil {
459 | log4go.Error(receiveErr)
460 | errorList = append(errorList, receiveErr)
461 | }
462 |
463 | replys = append(replys, reply)
464 | }
465 |
466 | log4go.Debug("Receive finished!!")
467 |
468 | if len(errorList) != 0 {
469 | return replys, errorList
470 | }
471 |
472 | return replys, nil
473 | }
474 |
--------------------------------------------------------------------------------
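
To make the command plumbing above concrete, here is a small in-package sketch (hypothetical keys and values, not part of the repository) of driving the pipeline directly: SendPipelineCommands issues Send(CommandName, Key, Args...) for each entry, flushes the connection, and then reads back one reply per command.

```go
package redisadapter

// examplePipeline builds two illustrative pipeline entries and sends them.
// On the wire they amount to:
//
//	SET user:42:v1 {"id":42}
//	HSET users 42 alice
func examplePipeline(adapter *RedisAdapter) ([]interface{}, []error) {
	commands := []*RedisPipelineCommand{
		{CommandName: "SET", Key: "user:42:v1", Args: []interface{}{`{"id":42}`}},
		{CommandName: "HSET", Key: "users", Args: []interface{}{"42", "alice"}},
	}
	return adapter.SendPipelineCommands(commands)
}
```
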
/src/adapter/rediscluster/redisclusteradapter.go:
--------------------------------------------------------------------------------
1 | package redisclusteradapter
2 |
3 | import (
4 | "adapter/common"
5 | "config"
6 | "errors"
7 | "fmt"
8 | "strings"
9 | "time"
10 |
11 | "github.com/gitstliu/go-commonfunctions"
12 | "github.com/gitstliu/go-redis-cluster"
13 | "github.com/gitstliu/log4go"
14 | )
15 |
16 | type RedisPipelineCommand struct {
17 | CommandName string
18 | Key string
19 | Args []interface{}
20 | }
21 |
22 | type RedisClusterAdapter struct {
23 | common.WriteAdapter
24 | redisClient *redis.Cluster
25 | Config *config.RedisClusterConfig
26 | TableActionMap map[string]map[string][]*config.RedisTableConfig
27 | TableKeyMap map[string]map[string][]*config.RedisTableConfig
28 | }
29 |
30 | func CreateAdapter(conf *config.RedisClusterConfig) common.WriteAdapter {
31 | adapter := &RedisClusterAdapter{Config: conf}
32 | adapter.TableActionMap, adapter.TableKeyMap = DecoderAdapterTableMessage(conf.Tables)
33 | cluster, err := redis.NewCluster(
34 | &redis.Options{
35 | StartNodes: conf.Address,
36 | ConnTimeout: time.Duration(conf.ConnTimeout) * time.Second,
37 | ReadTimeout: time.Duration(conf.ReadTimeout) * time.Second,
38 | WriteTimeout: time.Duration(conf.WriteTimeout) * time.Second,
39 | KeepAlive: conf.Keepalive,
40 | AliveTime: time.Duration(conf.AliveTime) * time.Second,
41 | })
42 |
43 | if err != nil {
44 | log4go.Error("Cluster Create Error: %v", err)
45 | return nil
46 | } else {
47 | adapter.redisClient = cluster
48 | }
49 |
50 | // adapter.redisClient.Do("SELECT", conf.DB)
51 | return adapter
52 | }
53 |
54 | /*
55 | cluster, err := redis.NewCluster(
56 | &redis.Options{
57 | StartNodes: hosts,
58 | ConnTimeout: connTimeout,
59 | ReadTimeout: readTimeout,
60 | WriteTimeout: writeTimeout,
61 | KeepAlive: keepAlive,
62 | AliveTime: aliveTime,
63 | })
64 |
65 | if err != nil {
66 | log4go.Error("Cluster Create Error: %v", err)
67 | } else {
68 | redisClusterClient = cluster
69 | }
70 | */
71 |
72 | func DecoderAdapterTableMessage(configs map[string]*config.RedisTableConfig) (map[string]map[string][]*config.RedisTableConfig, map[string]map[string][]*config.RedisTableConfig) {
73 | tableActionMap := map[string]map[string][]*config.RedisTableConfig{}
74 | tableKeyMap := map[string]map[string][]*config.RedisTableConfig{}
75 | for _, tableConfig := range configs {
76 | if len(tableConfig.Actions) > 0 {
77 | _, currActionMapExist := tableActionMap[tableConfig.Tablename]
78 | if !currActionMapExist {
79 | tableActionMap[tableConfig.Tablename] = map[string][]*config.RedisTableConfig{}
80 | }
81 | for _, currAction := range tableConfig.Actions {
82 | _, actionConfigListExist := tableActionMap[tableConfig.Tablename][currAction]
83 | if !actionConfigListExist {
84 | tableActionMap[tableConfig.Tablename][currAction] = []*config.RedisTableConfig{}
85 | }
86 | tableActionMap[tableConfig.Tablename][currAction] = append(tableActionMap[tableConfig.Tablename][currAction], tableConfig)
87 | }
88 | }
89 |
90 | if len(tableConfig.Key) > 0 {
91 | _, currKeyMapExist := tableKeyMap[tableConfig.Tablename]
92 | if !currKeyMapExist {
93 | tableKeyMap[tableConfig.Tablename] = map[string][]*config.RedisTableConfig{}
94 | }
95 | for _, currKey := range tableConfig.Key {
96 | _, actionConfigListExist := tableKeyMap[tableConfig.Tablename][currKey]
97 | if !actionConfigListExist {
98 | tableKeyMap[tableConfig.Tablename][currKey] = []*config.RedisTableConfig{}
99 | }
100 | tableKeyMap[tableConfig.Tablename][currKey] = append(tableKeyMap[tableConfig.Tablename][currKey], tableConfig)
101 | }
102 | }
103 | }
104 | return tableActionMap, tableKeyMap
105 | }
106 |
107 | func (this *RedisClusterAdapter) GetTableActionConfigs(table, action string) []*config.RedisTableConfig {
108 | _, tableExist := this.TableActionMap[table]
109 | if !tableExist {
110 | return nil
111 | }
112 | actionConfigs, actionExist := this.TableActionMap[table][action]
113 | if !actionExist {
114 | return nil
115 | }
116 |
117 | return actionConfigs
118 | }
119 |
120 | func (this *RedisClusterAdapter) GetTableKeyConfigs(table, key string) []*config.RedisTableConfig {
121 | _, tableExist := this.TableKeyMap[table]
122 | if !tableExist {
123 | return nil
124 | }
125 | keyConfigs, keyExist := this.TableKeyMap[table][key]
126 | if !keyExist {
127 | return nil
128 | }
129 |
130 | return keyConfigs
131 | }
132 |
133 | func (this *RedisClusterAdapter) Write(entities []*common.RawLogEntity) error {
134 | commands := []*RedisPipelineCommand{}
135 |
136 | for _, currEntity := range entities {
137 | tableActionsConfig, tableConfigExist := this.TableActionMap[currEntity.TableName]
138 | if !tableConfigExist {
139 | continue
140 | }
141 |
142 | for currAction, currConfigs := range tableActionsConfig {
143 | if currAction == currEntity.Action {
144 | for _, currConfig := range currConfigs {
145 | var currCommand *RedisPipelineCommand
146 | var creatCommandErr error
147 | if currConfig.Struct == "string" {
148 | if currEntity.Action == "update" {
149 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringSet(currConfig, currEntity)
150 | } else if currEntity.Action == "insert" {
151 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringSet(currConfig, currEntity)
152 | } else if currEntity.Action == "delete" {
153 | currCommand, creatCommandErr = CreateRedisPipelineCommandForStringDel(currConfig, currEntity)
154 | }
155 |
156 | } else if currConfig.Struct == "list" {
157 | if currEntity.Action == "update" {
158 | currCommand, creatCommandErr = CreateRedisPipelineCommandForListRPush(currConfig, currEntity)
159 | } else if currEntity.Action == "insert" {
160 | currCommand, creatCommandErr = CreateRedisPipelineCommandForListRPush(currConfig, currEntity)
161 | } else if currEntity.Action == "delete" {
162 | continue
163 | }
164 | } else if currConfig.Struct == "set" {
165 | if currEntity.Action == "update" {
166 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SADD")
167 | } else if currEntity.Action == "insert" {
168 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SADD")
169 | } else if currEntity.Action == "delete" {
170 | currCommand, creatCommandErr = CreateRedisPipelineCommandForSet(currConfig, currEntity, "SREM")
171 | }
172 | } else if currConfig.Struct == "hash" {
173 | if currEntity.Action == "update" {
174 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHSet(currConfig, currEntity)
175 | } else if currEntity.Action == "insert" {
176 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHSet(currConfig, currEntity)
177 | } else if currEntity.Action == "delete" {
178 | currCommand, creatCommandErr = CreateRedisPipelineCommandForHashHDel(currConfig, currEntity)
179 | }
180 | } else {
181 | log4go.Error("Redis-Cluster data struct exception. struct is %v", currConfig.Struct)
182 | return errors.New(fmt.Sprintf("Redis-Cluster data struct exception. struct is %v", currConfig.Struct))
183 | }
184 | if creatCommandErr != nil {
185 | log4go.Error(creatCommandErr)
186 | return creatCommandErr
187 | }
188 |
189 | if currCommand != nil {
190 | commands = append(commands, currCommand)
191 | }
192 | }
193 | }
194 | }
195 |
196 | // else if tableConfig.Struct == "zset" {
197 | // if currEntity.Action == "update" {
198 | // currCommand.CommandName = "ZADD"
199 | // } else if currEntity.Action == "insert" {
200 | // currCommand.CommandName = "ZADD"
201 | // } else if currEntity.Action == "delete" {
202 | // currCommand.CommandName = "ZDEL"
203 | // }
204 | // }
205 | // switch currCommand.CommandName {
206 | // case "SET", "DEL":
207 | // key := ""
208 | // for _, this.
209 | // case "RPUSH", "SADD", "SREM"
210 | // case "HSET", "HDEL":
211 | // }
212 | }
213 | _, commandsSendErrors := this.SendPipelineCommands(commands)
214 | for _, currErr := range commandsSendErrors {
215 | if currErr != nil {
216 | return currErr
217 | }
218 | }
219 |
220 | return nil
221 | }
222 |
223 | func CreateRedisPipelineCommandForStringSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
224 |
225 | currCommand := &RedisPipelineCommand{}
226 | currCommand.CommandName = "SET"
227 | keyMeta := []interface{}{}
228 | for _, currKey := range currConfig.Key {
229 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
230 | }
231 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
232 |
233 | bodyValue := ""
234 | if currConfig.Valuetype == "json" {
235 | valueMeta := map[string]interface{}{}
236 | for columnIndex, columnName := range currEntity.Header {
237 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
238 | }
239 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
240 | if valueMetaJsonErr != nil {
241 | log4go.Error(valueMetaJsonErr)
242 | panic(valueMetaJsonErr)
243 | return nil, valueMetaJsonErr
244 | }
245 | bodyValue = valueMetaJson
246 | } else if currConfig.Valuetype == "splitstring" {
247 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
248 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
249 | } else {
250 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
251 | valueTypeErr := errors.New("Error valuetype")
252 | panic(valueTypeErr)
253 | return nil, valueTypeErr
254 | }
255 |
256 | currCommand.Key = keyValue
257 | currCommand.Args = []interface{}{bodyValue}
258 | return currCommand, nil
259 | }
260 |
261 | func CreateRedisPipelineCommandForStringDel(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
262 |
263 | currCommand := &RedisPipelineCommand{}
264 | currCommand.CommandName = "DEL"
265 | keyMeta := []interface{}{}
266 | for _, currKey := range currConfig.Key {
267 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
268 | }
269 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
270 |
271 | currCommand.Key = keyValue
272 | currCommand.Args = []interface{}{}
273 | return currCommand, nil
274 | }
275 |
276 | func CreateRedisPipelineCommandForListRPush(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
277 |
278 | currCommand := &RedisPipelineCommand{}
279 | currCommand.CommandName = "RPUSH"
280 | keyValue := currConfig.Reidskey
281 |
282 | bodyValue := ""
283 | if currConfig.Valuetype == "json" {
284 | valueMeta := map[string]interface{}{}
285 | for columnIndex, columnName := range currEntity.Header {
286 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
287 | }
288 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
289 | if valueMetaJsonErr != nil {
290 | log4go.Error(valueMetaJsonErr)
291 | panic(valueMetaJsonErr)
292 | return nil, valueMetaJsonErr
293 | }
294 | bodyValue = valueMetaJson
295 | } else if currConfig.Valuetype == "splitstring" {
296 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
297 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
298 | } else {
299 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
300 | valueTypeErr := errors.New("Error valuetype")
301 | panic(valueTypeErr)
302 | return nil, valueTypeErr
303 | }
304 |
305 | currCommand.Key = keyValue
306 | currCommand.Args = []interface{}{bodyValue}
307 | return currCommand, nil
308 | }
309 |
310 | func CreateRedisPipelineCommandForSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity, command string) (*RedisPipelineCommand, error) {
311 | currCommand := &RedisPipelineCommand{}
312 | currCommand.CommandName = command
313 | keyValue := currConfig.Reidskey
314 |
315 | bodyValue := ""
316 | if currConfig.Valuetype == "json" {
317 | valueMeta := map[string]interface{}{}
318 | for columnIndex, columnName := range currEntity.Header {
319 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
320 | }
321 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
322 | if valueMetaJsonErr != nil {
323 | log4go.Error(valueMetaJsonErr)
324 | panic(valueMetaJsonErr)
325 | return nil, valueMetaJsonErr
326 | }
327 | bodyValue = valueMetaJson
328 | } else if currConfig.Valuetype == "splitstring" {
329 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
330 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
331 | } else {
332 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
333 | valueTypeErr := errors.New("Error valuetype")
334 | panic(valueTypeErr)
335 | return nil, valueTypeErr
336 | }
337 |
338 | currCommand.Key = keyValue
339 | currCommand.Args = []interface{}{bodyValue}
340 | return currCommand, nil
341 | }
342 |
343 | func CreateRedisPipelineCommandForHashHSet(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
344 | currCommand := &RedisPipelineCommand{}
345 | currCommand.CommandName = "HSET"
346 |
347 | keyMeta := []interface{}{}
348 | for _, currKey := range currConfig.Key {
349 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
350 | }
351 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
352 |
353 | bodyValue := ""
354 | if currConfig.Valuetype == "json" {
355 | valueMeta := map[string]interface{}{}
356 | for columnIndex, columnName := range currEntity.Header {
357 | valueMeta[columnName] = currEntity.Rows[len(currEntity.Rows)-1][columnIndex]
358 | }
359 | valueMetaJson, valueMetaJsonErr := commonfunctions.ObjectToJson(valueMeta)
360 | if valueMetaJsonErr != nil {
361 | log4go.Error(valueMetaJsonErr)
362 | panic(valueMetaJsonErr)
363 | return nil, valueMetaJsonErr
364 | }
365 | bodyValue = valueMetaJson
366 | } else if currConfig.Valuetype == "splitstring" {
367 | valueMeta := currEntity.Rows[len(currEntity.Rows)-1]
368 | bodyValue = strings.Join(commonfunctions.InterfacesToStringsConverter(valueMeta), currConfig.Valuesplit)
369 | } else {
370 | log4go.Error("Error valuetype %v", currConfig.Valuetype)
371 | valueTypeErr := errors.New("Error valuetype")
372 | panic(valueTypeErr)
373 | return nil, valueTypeErr
374 | }
375 |
376 | currCommand.Key = currConfig.Reidskey
377 | currCommand.Args = []interface{}{keyValue, bodyValue}
378 | return currCommand, nil
379 | }
380 |
381 | func CreateRedisPipelineCommandForHashHDel(currConfig *config.RedisTableConfig, currEntity *common.RawLogEntity) (*RedisPipelineCommand, error) {
382 | currCommand := &RedisPipelineCommand{}
383 | currCommand.CommandName = "HDEL"
384 |
385 | keyMeta := []interface{}{}
386 | for _, currKey := range currConfig.Key {
387 | keyMeta = append(keyMeta, currEntity.Rows[len(currEntity.Rows)-1][currEntity.HeaderMap[currKey]])
388 | }
389 | keyValue := strings.Join([]string{currConfig.KeyPrefix, strings.Join(commonfunctions.InterfacesToStringsConverter(keyMeta), currConfig.Keysplit), currConfig.KeyPostfix}, ":")
390 |
391 | currCommand.Key = currConfig.Reidskey
392 | currCommand.Args = []interface{}{keyValue}
393 | return currCommand, nil
394 | }
395 |
396 | func (this *RedisClusterAdapter) Close() error {
397 | this.redisClient.Close()
398 |
399 | return nil
400 | }
401 |
402 | func (this *RedisClusterAdapter) SET(key, value string) (string, error) {
403 | log4go.Debug("key is %v", key)
404 | return redis.String(this.redisClient.Do("SET", key, value))
405 | }
406 |
407 | func (this *RedisClusterAdapter) GET(key string) (string, error) {
408 | log4go.Debug("key is %v", key)
409 | return redis.String(this.redisClient.Do("GET", key))
410 | }
411 |
412 | func (this *RedisClusterAdapter) KEYS(key string) ([]string, error) {
413 | log4go.Debug("key is %v", key)
414 | return redis.Strings(this.redisClient.Do("KEYS", key))
415 | }
416 |
417 | func (this *RedisClusterAdapter) LPUSH(key string, value []interface{}) (interface{}, error) {
418 | log4go.Debug("key is %v, value is %v", key, value)
419 | return this.redisClient.Do("LPUSH", append([](interface{}){key}, value...)...)
420 | }
421 |
422 | func (this *RedisClusterAdapter) RPUSH(key string, value []interface{}) (interface{}, error) {
423 | log4go.Debug("key is %v, value is %v", key, value)
424 | return this.redisClient.Do("RPUSH", append([](interface{}){key}, value...)...)
425 | }
426 |
427 | func (this *RedisClusterAdapter) LPOP(key string) (string, error) {
428 | return redis.String(this.redisClient.Do("LPOP", key))
429 | }
430 |
431 | func (this *RedisClusterAdapter) LRANGE(key string, index int, endIndex int) ([]string, error) {
432 | return redis.Strings(this.redisClient.Do("LRANGE", key, index, endIndex))
433 | }
434 |
435 | func (this *RedisClusterAdapter) SendPipelineCommands(commands []*RedisPipelineCommand) ([]interface{}, []error) {
436 | log4go.Debug("commands %v", commands)
437 | // log4go.Info("len(commands) %v", len(commands))
438 | errorList := make([]error, 0, len(commands)+1)
439 |
440 | client := this.redisClient
441 | //defer client.Close()
442 |
443 | batch := client.NewBatch()
444 | for index, value := range commands {
445 | log4go.Debug("Curr Commands index is %v value is %v", index, value)
446 | 		// Queue the command into the batch: the redis key first, then the remaining arguments.
447 | 		currErr := batch.Put(value.CommandName, append([]interface{}{value.Key}, value.Args...)...)
448 |
470 | if currErr != nil {
471 | errorList = append(errorList, currErr)
472 | }
473 | }
474 |
475 | log4go.Debug("Send finished!!")
476 |
477 | reply, batchErr := client.RunBatch(batch)
478 | //fulshErr := client.Flush()
479 |
480 | 	if batchErr != nil {
481 | 		log4go.Error(batchErr)
482 | 		errorList = append(errorList, batchErr)
483 | 		// a failed batch run is treated as fatal for this pipeline
484 | 		panic(batchErr)
485 | 	}
486 |
487 | 	replys := []interface{}{}
488 |
489 | replysLength := len(commands)
490 |
491 | // resp := (interface{}){}
492 | var resp string
493 | for i := 0; i < replysLength; i++ {
494 | // log4go.Debug("Get respose %v", i)
495 | reply, receiveErr := redis.Scan(reply, &resp)
496 | // reply, receiveErr := redis.Scan(reply, resp)
497 |
498 | if receiveErr != nil {
499 | log4go.Error(receiveErr)
500 | log4go.Debug("%v", reply)
501 | errorList = append(errorList, receiveErr)
502 | }
503 |
504 | replys = append(replys, reply)
505 | }
506 |
507 | log4go.Debug("Receive finished!!")
508 |
509 | if len(errorList) != 0 {
510 | return replys, errorList
511 | }
512 |
513 | return replys, nil
514 | }
515 |
--------------------------------------------------------------------------------
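
The pipeline path above is driven entirely by `RedisPipelineCommand` values built from the per-table config. As a rough illustration only (not part of the source tree), the within-package sketch below assumes an already constructed `*RedisClusterAdapter`, a hypothetical helper name `hsetBatchSketch`, and placeholder keys and values; it shows how a couple of HSET commands would be queued and sent through `SendPipelineCommands`.

```
// Hypothetical within-package sketch: "adapter" is assumed to be an already
// initialised *RedisClusterAdapter; the hash key and field values are placeholders.
func hsetBatchSketch(adapter *RedisClusterAdapter) {
	commands := []*RedisPipelineCommand{
		{CommandName: "HSET", Key: "zztest1", Args: []interface{}{"t3:1-100:end", `{"id":1}`}},
		{CommandName: "HSET", Key: "zztest1", Args: []interface{}{"t3:2-100:end", `{"id":2}`}},
	}
	// Every command becomes one batch.Put; replies come back in command order.
	replies, errs := adapter.SendPipelineCommands(commands)
	if errs != nil {
		log4go.Error("pipeline finished with %v error(s)", len(errs))
		return
	}
	log4go.Debug("pipeline replies: %v", replies)
}
```
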
/src/binloghandler/binloghandler.go:
--------------------------------------------------------------------------------
1 | package handler
2 |
3 | import (
4 | "adapter/common"
5 | "output"
6 |
7 | "github.com/gitstliu/log4go"
8 | "github.com/siddontang/go-mysql/canal"
9 | "github.com/siddontang/go-mysql/mysql"
10 | "github.com/siddontang/go-mysql/replication"
11 | )
12 |
13 | type CommonEventHandler struct {
14 | canal.DummyEventHandler
15 | CurrOutput *output.Output
16 | // PosSync chan *mysql.Position
17 | }
18 |
19 | func (this *CommonEventHandler) OnRow(e *canal.RowsEvent) error {
20 | log4go.Debug("OnRow")
21 | entity := &common.RawLogEntity{}
22 | entity.Action = e.Action
23 | entity.Rows = e.Rows
24 | entity.TableName = e.Table.Name
25 | entity.Header = []string{}
26 | entity.HeaderMap = map[string]int{}
27 | entity.ValueMap = map[string]interface{}{}
28 |
29 | for columnIndex, currColumn := range e.Table.Columns {
30 | entity.Header = append(entity.Header, currColumn.Name)
31 | entity.HeaderMap[currColumn.Name] = columnIndex
32 | entity.ValueMap[currColumn.Name] = e.Rows[len(e.Rows)-1][columnIndex]
33 | }
34 | log4go.Debug(entity)
35 | this.CurrOutput.Write(entity)
36 |
37 | return nil
38 | }
39 |
40 | func (this *CommonEventHandler) String() string {
41 | return "MyEventHandler"
42 | }
43 |
44 | func (this *CommonEventHandler) OnRotate(e *replication.RotateEvent) error {
45 | this.CurrOutput.Write(&mysql.Position{Name: string(e.NextLogName), Pos: uint32(e.Position)})
46 | return nil
47 | }
48 |
49 | func (this *CommonEventHandler) OnTableChanged(schema string, table string) error {
50 | return nil
51 | }
52 |
53 | func (this *CommonEventHandler) OnDDL(nextPos mysql.Position, queryEvent *replication.QueryEvent) error {
54 | return nil
55 | }
56 |
57 | func (this *CommonEventHandler) OnXID(mysql.Position) error {
58 | return nil
59 | }
60 |
61 | func (this *CommonEventHandler) OnGTID(mysql.GTIDSet) error {
62 | return nil
63 | }
64 |
65 | func (this *CommonEventHandler) OnPosSynced(mysql.Position, bool) error {
66 | return nil
67 | }
68 |
--------------------------------------------------------------------------------
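
`OnRow` keeps the column order in `Header`, a name-to-index map in `HeaderMap`, and the last row's values by column name in `ValueMap`; the adapters rely on this when they read `Rows[len(Rows)-1][HeaderMap[col]]`. The standalone sketch below is only an illustration: the table and column names are placeholders, and the `RawLogEntity` field types are assumed to match how this handler populates them.

```
package main

import (
	"adapter/common"
	"fmt"
)

func main() {
	// Hedged illustration of the structure OnRow builds for an update event;
	// "ztest", "id" and "location_id" are placeholder names.
	entity := &common.RawLogEntity{
		Action:    "update",
		TableName: "ztest",
		Header:    []string{"id", "location_id"},
		HeaderMap: map[string]int{"id": 0, "location_id": 1},
		// an update event carries before and after images; adapters use the last row
		Rows:     [][]interface{}{{1, 99}, {1, 100}},
		ValueMap: map[string]interface{}{"id": 1, "location_id": 100},
	}
	fmt.Println(entity.Rows[len(entity.Rows)-1][entity.HeaderMap["location_id"]]) // prints 100
}
```
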
/src/canalconfigs/database1.pos:
--------------------------------------------------------------------------------
1 | bin_name = "mysql-bin.000010"
2 | bin_pos = 722
3 |
--------------------------------------------------------------------------------
/src/canalconfigs/database1.toml:
--------------------------------------------------------------------------------
1 | addr = "192.168.17.133:3306"
2 | user = "root"
3 | password = "123456"
4 | charset = "utf8"
5 | server_id = 2001
6 | flavor = "mysql"
7 | heartbeat_period = 0
8 | read_timeout = 0
9 | include_table_regex = []
10 | exclude_table_regex = []
11 | discard_no_meta_row_event = false
12 | use_decimal = false
13 | parse_time = false
14 | semi_sync_enabled = false
15 | max_reconnect_attempts = 0
16 | [dump]
17 | mysqldump = "mysqldump"
18 | tables = ["ztest"]
19 | table_db = "diss2_test"
20 | dbs = []
21 | ignore_tables = []
22 | where = ""
23 | discard_err = true
24 | skip_master_data = false
25 | max_allowed_packet_mb = 0
26 | protocol = ""
27 |
28 |
--------------------------------------------------------------------------------
/src/canalconfigs/database2.toml:
--------------------------------------------------------------------------------
1 | addr = "10.0.90.186:3306"
2 | user = "user_write_read"
3 | password = "Yonghui123"
4 | charset = "utf8"
5 | server_id = 1002
6 | flavor = "mysql"
7 | heartbeat_period = 0
8 | read_timeout = 0
9 | include_table_regex = []
10 | exclude_table_regex = []
11 | discard_no_meta_row_event = false
12 | use_decimal = false
13 | parse_time = false
14 | semi_sync_enabled = false
15 | max_reconnect_attempts = 0
16 | [dump]
17 | mysqldump = "mysqldump"
18 | tables = ["t1"]
19 | table_db = "test"
20 | dbs = []
21 | ignore_tables = []
22 | where = ""
23 | discard_err = true
24 | skip_master_data = false
25 | max_allowed_packet_mb = 0
26 | protocol = ""
27 |
28 |
--------------------------------------------------------------------------------
/src/canalhandler/canalhandler.go:
--------------------------------------------------------------------------------
1 | package canalhandler
2 |
3 | import (
4 | "binloghandler"
5 | "config"
6 | "output"
7 |
8 | "github.com/gitstliu/log4go"
9 | "github.com/siddontang/go-mysql/canal"
10 | "github.com/siddontang/go-mysql/mysql"
11 | )
12 |
13 | type CommonCanalMeta struct {
14 | Name string
15 | ConfigFilePath string
16 | Config *canal.Config
17 | Canal *canal.Canal
18 | CurrOutput *output.Output
19 | BinData chan interface{}
20 | }
21 |
22 | //func (this *CommonCanalMeta) RunWithConfig(filePath string, name string, out *output.Output, pos *config.Pos) {
23 | func (this *CommonCanalMeta) RunWithConfig(name string, conf *config.CanalConfig) {
24 |
25 | // log4go.Info("Start bin log at %v %v", conf.LogPos.Name, conf.LogPos.Pos)
26 | currOutput, createOutputErr := output.CreateByName(name)
27 |
28 | 	if createOutputErr != nil {
29 | 		log4go.Error(createOutputErr)
30 | 		// failing to build the output chain is fatal for this canal
31 | 		panic(createOutputErr)
32 | 	}
33 |
34 | this.BinData = make(chan interface{}, 4096)
35 |
36 | this.Name = name
37 | this.CurrOutput = currOutput
38 | cfg, loadConfigErr := canal.NewConfigWithFile(conf.Cancalconfigpath)
39 | if loadConfigErr != nil {
40 | log4go.Error(loadConfigErr)
41 | panic(loadConfigErr)
42 | }
43 | currCanal, createCanalErr := canal.NewCanal(cfg)
44 |
45 | if createCanalErr != nil {
46 | log4go.Error(createCanalErr)
47 | panic(createCanalErr)
48 | }
49 |
50 | go this.CurrOutput.Run()
51 | currCanal.SetEventHandler(&handler.CommonEventHandler{CurrOutput: this.CurrOutput})
52 | if conf.LogPos != nil {
53 | startPos := mysql.Position{Name: conf.LogPos.Name, Pos: conf.LogPos.Pos}
54 | log4go.Info("Run with pos")
55 | 		if runErr := currCanal.RunFrom(startPos); runErr != nil { log4go.Error(runErr) }
56 | } else {
57 | log4go.Info("Run without pos")
58 | 		if runErr := currCanal.Run(); runErr != nil { log4go.Error(runErr) }
59 | }
60 |
61 | }
62 |
63 | //func (this *CommonCanalMeta) RunSync() {
64 | // for true {
65 | // currData := <-this.BinData
66 | // }
67 | //}
68 |
--------------------------------------------------------------------------------
/src/client/esclient/esclient.go:
--------------------------------------------------------------------------------
1 | package esclient
2 |
3 | import (
4 | "bytes"
5 | "crypto/tls"
6 | "encoding/json"
7 | "fmt"
8 | "io/ioutil"
9 | "net/http"
10 | "net/url"
11 |
12 | "github.com/gitstliu/log4go"
13 | "github.com/juju/errors"
14 | )
15 |
16 | type Client struct {
17 | Protocol string
18 | Addr string
19 | User string
20 | Password string
21 |
22 | c *http.Client
23 | }
24 |
25 | // ClientConfig is the configuration for the client.
26 | type ClientConfig struct {
27 | HTTPS bool
28 | Addr string
29 | User string
30 | Password string
31 | }
32 |
33 | // NewClient creates the Client with the given configuration.
34 | func NewClient(conf *ClientConfig) *Client {
35 | c := new(Client)
36 |
37 | c.Addr = conf.Addr
38 | c.User = conf.User
39 | c.Password = conf.Password
40 |
41 | if conf.HTTPS {
42 | c.Protocol = "https"
43 | tr := &http.Transport{
44 | TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
45 | }
46 | c.c = &http.Client{Transport: tr}
47 | } else {
48 | c.Protocol = "http"
49 | c.c = &http.Client{}
50 | }
51 |
52 | return c
53 | }
54 |
55 | // ResponseItem is the ES item in the response.
56 | type ResponseItem struct {
57 | ID string `json:"_id"`
58 | Index string `json:"_index"`
59 | Type string `json:"_type"`
60 | Version int `json:"_version"`
61 | Found bool `json:"found"`
62 | Source map[string]interface{} `json:"_source"`
63 | }
64 |
65 | // Response is the ES response
66 | type Response struct {
67 | Code int
68 | ResponseItem
69 | }
70 |
71 | // See http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/bulk.html
72 | const (
73 | ActionCreate = "create"
74 | ActionUpdate = "update"
75 | ActionDelete = "delete"
76 | ActionIndex = "index"
77 | )
78 |
79 | // BulkRequest is used to send multi request in batch.
80 | type BulkRequest struct {
81 | Action string
82 | Index string
83 | Type string
84 | ID string
85 | Parent string
86 | Pipeline string
87 |
88 | Data map[string]interface{}
89 | }
90 |
91 | func (r *BulkRequest) bulk(buf *bytes.Buffer) error {
92 | meta := make(map[string]map[string]string)
93 | metaData := make(map[string]string)
94 | if len(r.Index) > 0 {
95 | metaData["_index"] = r.Index
96 | }
97 | if len(r.Type) > 0 {
98 | metaData["_type"] = r.Type
99 | }
100 |
101 | if len(r.ID) > 0 {
102 | metaData["_id"] = r.ID
103 | }
104 | if len(r.Parent) > 0 {
105 | metaData["_parent"] = r.Parent
106 | }
107 | if len(r.Pipeline) > 0 {
108 | metaData["pipeline"] = r.Pipeline
109 | }
110 |
111 | meta[r.Action] = metaData
112 |
113 | data, err := json.Marshal(meta)
114 | if err != nil {
115 | return errors.Trace(err)
116 | }
117 |
118 | buf.Write(data)
119 | buf.WriteByte('\n')
120 |
121 | switch r.Action {
122 | case ActionDelete:
123 | //nothing to do
124 | case ActionUpdate:
125 | doc := map[string]interface{}{
126 | "doc": r.Data,
127 | }
128 | data, err = json.Marshal(doc)
129 | if err != nil {
130 | return errors.Trace(err)
131 | }
132 |
133 | buf.Write(data)
134 | buf.WriteByte('\n')
135 | default:
136 | //for create and index
137 | data, err = json.Marshal(r.Data)
138 | if err != nil {
139 | return errors.Trace(err)
140 | }
141 |
142 | buf.Write(data)
143 | buf.WriteByte('\n')
144 | }
145 |
146 | return nil
147 | }
148 |
149 | // BulkResponse is the response for the bulk request.
150 | type BulkResponse struct {
151 | Code int
152 | Took int `json:"took"`
153 | Errors bool `json:"errors"`
154 |
155 | Items []map[string]*BulkResponseItem `json:"items"`
156 | }
157 |
158 | // BulkResponseItem is the item in the bulk response.
159 | type BulkResponseItem struct {
160 | Index string `json:"_index"`
161 | Type string `json:"_type"`
162 | ID string `json:"_id"`
163 | Version int `json:"_version"`
164 | Status int `json:"status"`
165 | Error json.RawMessage `json:"error"`
166 | Found bool `json:"found"`
167 | }
168 |
169 | // MappingResponse is the response for the mapping request.
170 | type MappingResponse struct {
171 | Code int
172 | Mapping Mapping
173 | }
174 |
175 | // Mapping represents ES mapping.
176 | type Mapping map[string]struct {
177 | Mappings map[string]struct {
178 | Properties map[string]struct {
179 | Type string `json:"type"`
180 | Fields interface{} `json:"fields"`
181 | } `json:"properties"`
182 | } `json:"mappings"`
183 | }
184 |
185 | // DoRequest sends a request with body to ES.
186 | func (c *Client) DoRequest(method string, url string, body *bytes.Buffer) (*http.Response, error) {
187 | log4go.Debug("url = %v", url)
188 | req, err := http.NewRequest(method, url, body)
189 | if err != nil {
190 | return nil, errors.Trace(err)
191 | }
192 |
193 | req.Header.Add("Content-Type", "application/json")
194 |
195 | if len(c.User) > 0 && len(c.Password) > 0 {
196 | req.SetBasicAuth(c.User, c.Password)
197 | }
198 | 	log4go.Debug("req Header %v", req.Header)
199 | resp, err := c.c.Do(req)
200 |
201 | return resp, err
202 | }
203 |
204 | // Do sends the request with body to ES.
205 | func (c *Client) Do(method string, url string, body map[string]interface{}) (*Response, error) {
206 | bodyData, err := json.Marshal(body)
207 | if err != nil {
208 | return nil, errors.Trace(err)
209 | }
210 |
211 | buf := bytes.NewBuffer(bodyData)
212 | if body == nil {
213 | buf = bytes.NewBuffer(nil)
214 | }
215 |
216 | resp, err := c.DoRequest(method, url, buf)
217 | if err != nil {
218 | return nil, errors.Trace(err)
219 | }
220 |
221 | defer resp.Body.Close()
222 |
223 | ret := new(Response)
224 | ret.Code = resp.StatusCode
225 |
226 | data, err := ioutil.ReadAll(resp.Body)
227 | if err != nil {
228 | return nil, errors.Trace(err)
229 | }
230 |
231 | if len(data) > 0 {
232 | err = json.Unmarshal(data, &ret.ResponseItem)
233 | }
234 |
235 | return ret, errors.Trace(err)
236 | }
237 |
238 | // DoBulk sends the bulk request to the ES.
239 | func (c *Client) DoBulk(url string, items []*BulkRequest) (*BulkResponse, error) {
240 | var buf bytes.Buffer
241 |
242 | for _, item := range items {
243 | if err := item.bulk(&buf); err != nil {
244 | log4go.Error(err)
245 | return nil, errors.Trace(err)
246 | }
247 | }
248 |
249 | log4go.Debug(string(buf.Bytes()))
250 |
251 | resp, err := c.DoRequest("POST", url+"/_bulk", &buf)
252 | log4go.Debug("Do es request finished")
253 | log4go.Debug("Do es request err %v", err)
254 | if err != nil {
255 | log4go.Error(err)
256 | return nil, errors.Trace(err)
257 | }
258 |
259 | defer resp.Body.Close()
260 |
261 | ret := new(BulkResponse)
262 | ret.Code = resp.StatusCode
263 |
264 | data, err := ioutil.ReadAll(resp.Body)
265 | log4go.Debug("Read response err %v", err)
266 | if err != nil {
267 | log4go.Error(err)
268 | return nil, errors.Trace(err)
269 | }
270 |
271 | if len(data) > 0 {
272 | log4go.Debug("data %v", string(data))
273 | err = json.Unmarshal(data, &ret)
274 | log4go.Debug("Read response Unmarshal err %v", err)
275 | }
276 |
277 | log4go.Debug("DoBulk finished")
278 | return ret, errors.Trace(err)
279 | }
280 |
281 | // CreateMapping creates a ES mapping.
282 | func (c *Client) CreateMapping(index string, docType string, mapping map[string]interface{}) error {
283 | reqURL := fmt.Sprintf("%s://%s/%s", c.Protocol, c.Addr,
284 | url.QueryEscape(index))
285 |
286 | r, err := c.Do("HEAD", reqURL, nil)
287 | if err != nil {
288 | return errors.Trace(err)
289 | }
290 |
291 | // if index doesn't exist, will get 404 not found, create index first
292 | if r.Code == http.StatusNotFound {
293 | _, err = c.Do("PUT", reqURL, nil)
294 |
295 | if err != nil {
296 | return errors.Trace(err)
297 | }
298 | } else if r.Code != http.StatusOK {
299 | return errors.Errorf("Error: %s, code: %d", http.StatusText(r.Code), r.Code)
300 | }
301 |
302 | reqURL = fmt.Sprintf("%s://%s/%s/%s/_mapping", c.Protocol, c.Addr,
303 | url.QueryEscape(index),
304 | url.QueryEscape(docType))
305 |
306 | _, err = c.Do("POST", reqURL, mapping)
307 | return errors.Trace(err)
308 | }
309 |
310 | // GetMapping gets the mapping.
311 | func (c *Client) GetMapping(index string, docType string) (*MappingResponse, error) {
312 | reqURL := fmt.Sprintf("%s://%s/%s/%s/_mapping", c.Protocol, c.Addr,
313 | url.QueryEscape(index),
314 | url.QueryEscape(docType))
315 | buf := bytes.NewBuffer(nil)
316 | resp, err := c.DoRequest("GET", reqURL, buf)
317 |
318 | if err != nil {
319 | return nil, errors.Trace(err)
320 | }
321 |
322 | defer resp.Body.Close()
323 |
324 | data, err := ioutil.ReadAll(resp.Body)
325 | if err != nil {
326 | return nil, errors.Trace(err)
327 | }
328 |
329 | ret := new(MappingResponse)
330 | err = json.Unmarshal(data, &ret.Mapping)
331 | if err != nil {
332 | return nil, errors.Trace(err)
333 | }
334 |
335 | ret.Code = resp.StatusCode
336 | return ret, errors.Trace(err)
337 | }
338 |
339 | // DeleteIndex deletes the index.
340 | func (c *Client) DeleteIndex(index string) error {
341 | reqURL := fmt.Sprintf("%s://%s/%s", c.Protocol, c.Addr,
342 | url.QueryEscape(index))
343 |
344 | r, err := c.Do("DELETE", reqURL, nil)
345 | if err != nil {
346 | return errors.Trace(err)
347 | }
348 |
349 | if r.Code == http.StatusOK || r.Code == http.StatusNotFound {
350 | return nil
351 | }
352 |
353 | return errors.Errorf("Error: %s, code: %d", http.StatusText(r.Code), r.Code)
354 | }
355 |
356 | // Get gets the item by id.
357 | func (c *Client) Get(index string, docType string, id string) (*Response, error) {
358 | reqURL := fmt.Sprintf("%s://%s/%s/%s/%s", c.Protocol, c.Addr,
359 | url.QueryEscape(index),
360 | url.QueryEscape(docType),
361 | url.QueryEscape(id))
362 |
363 | return c.Do("GET", reqURL, nil)
364 | }
365 |
366 | // Update creates or updates the data
367 | func (c *Client) Update(index string, docType string, id string, data map[string]interface{}) error {
368 | reqURL := fmt.Sprintf("%s://%s/%s/%s/%s", c.Protocol, c.Addr,
369 | url.QueryEscape(index),
370 | url.QueryEscape(docType),
371 | url.QueryEscape(id))
372 |
373 | r, err := c.Do("PUT", reqURL, data)
374 | if err != nil {
375 | return errors.Trace(err)
376 | }
377 |
378 | if r.Code == http.StatusOK || r.Code == http.StatusCreated {
379 | return nil
380 | }
381 |
382 | return errors.Errorf("Error: %s, code: %d", http.StatusText(r.Code), r.Code)
383 | }
384 |
385 | // Exists checks whether id exists or not.
386 | func (c *Client) Exists(index string, docType string, id string) (bool, error) {
387 | reqURL := fmt.Sprintf("%s://%s/%s/%s/%s", c.Protocol, c.Addr,
388 | url.QueryEscape(index),
389 | url.QueryEscape(docType),
390 | url.QueryEscape(id))
391 |
392 | r, err := c.Do("HEAD", reqURL, nil)
393 | if err != nil {
394 | return false, err
395 | }
396 |
397 | return r.Code == http.StatusOK, nil
398 | }
399 |
400 | // Delete deletes the item by id.
401 | func (c *Client) Delete(index string, docType string, id string) error {
402 | reqURL := fmt.Sprintf("%s://%s/%s/%s/%s", c.Protocol, c.Addr,
403 | url.QueryEscape(index),
404 | url.QueryEscape(docType),
405 | url.QueryEscape(id))
406 |
407 | r, err := c.Do("DELETE", reqURL, nil)
408 | if err != nil {
409 | return errors.Trace(err)
410 | }
411 |
412 | if r.Code == http.StatusOK || r.Code == http.StatusNotFound {
413 | return nil
414 | }
415 |
416 | return errors.Errorf("Error: %s, code: %d", http.StatusText(r.Code), r.Code)
417 | }
418 |
419 | // Bulk sends the bulk request.
420 | // Only the Bulk-related APIs support the Parent field.
421 | func (c *Client) Bulk(items []*BulkRequest) (*BulkResponse, error) {
422 | reqURL := fmt.Sprintf("%s://%s/_bulk", c.Protocol, c.Addr)
423 |
424 | return c.DoBulk(reqURL, items)
425 | }
426 |
427 | // IndexBulk sends the bulk request for index.
428 | func (c *Client) IndexBulk(index string, items []*BulkRequest) (*BulkResponse, error) {
429 | reqURL := fmt.Sprintf("%s://%s/%s/_bulk", c.Protocol, c.Addr,
430 | url.QueryEscape(index))
431 |
432 | return c.DoBulk(reqURL, items)
433 | }
434 |
435 | // IndexTypeBulk sends the bulk request for index and doc type.
436 | func (c *Client) IndexTypeBulk(index string, docType string, items []*BulkRequest) (*BulkResponse, error) {
437 | reqURL := fmt.Sprintf("%s://%s/%s/%s/_bulk", c.Protocol, c.Addr,
438 | url.QueryEscape(index),
439 | url.QueryEscape(docType))
440 |
441 | return c.DoBulk(reqURL, items)
442 | }
443 |
--------------------------------------------------------------------------------
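
This client is consumed by the Elasticsearch adapter, but it can also be exercised on its own. The sketch below is an illustration, not part of the project: it assumes the repository's GOPATH-style import path `esclient`, a reachable ES node at a placeholder address, and placeholder index/type names, and sends one index and one delete action through `Bulk`.

```
package main

import (
	"esclient"

	"github.com/gitstliu/log4go"
)

func main() {
	// Placeholder address; HTTPS, user and password are left empty here.
	c := esclient.NewClient(&esclient.ClientConfig{Addr: "127.0.0.1:9200"})

	reqs := []*esclient.BulkRequest{
		{Action: esclient.ActionIndex, Index: "demo_index", Type: "type1", ID: "1",
			Data: map[string]interface{}{"id": 1, "name": "demo"}},
		{Action: esclient.ActionDelete, Index: "demo_index", Type: "type1", ID: "2"},
	}

	resp, err := c.Bulk(reqs)
	if err != nil {
		log4go.Error(err)
		return
	}
	log4go.Info("bulk took %vms, errors=%v", resp.Took, resp.Errors)
}
```
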
/src/client/kafkaclient/kafkaclient.go:
--------------------------------------------------------------------------------
1 | package kafkaclient
2 |
3 | import (
4 | "github.com/confluentinc/confluent-kafka-go/kafka"
5 | "github.com/gitstliu/log4go"
6 | )
7 |
8 | type Client struct {
9 | Producer *kafka.Producer
10 | Config *ClientConfig
11 | }
12 |
13 | type ClientConfig struct {
14 | Address string
15 | }
16 |
17 | func NewClient(conf *ClientConfig) *Client {
18 | client := &Client{Config: conf}
19 | p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": conf.Address})
20 | if err != nil {
21 | log4go.Error(err)
22 | return nil
23 | }
24 |
25 | go func() {
26 | for e := range p.Events() {
27 | switch ev := e.(type) {
28 | case *kafka.Message:
29 | if ev.TopicPartition.Error != nil {
30 | log4go.Error("Delivery failed: %v\n", ev.TopicPartition)
31 | } else {
32 | log4go.Debug("Delivered message to %v\n", ev.TopicPartition)
33 | }
34 | }
35 | }
36 | }()
37 |
38 | client.Producer = p
39 |
40 | return client
41 | }
42 |
43 | func (this *Client) Close() {
44 | this.Producer.Close()
45 | }
46 |
47 | func (this *Client) SendMessages(messages []string, topics []string) error {
48 |
49 | var err error = nil
50 | for _, currMessage := range messages {
51 | 		for _, currTopic := range topics {
52 | 			topic := currTopic // copy the loop variable before taking its address
53 | 			produceErr := this.Producer.Produce(&kafka.Message{
54 | 				TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
55 | 				Value:          []byte(currMessage)}, nil)
56 | if produceErr != nil {
57 | log4go.Error(produceErr)
58 | err = produceErr
59 | }
60 | }
61 | }
62 |
63 | return err
64 | }
65 |
66 | func (this *Client) FlushAll() {
67 | for true {
68 | if this.Producer.Flush(100) == 0 {
69 | log4go.Debug("Flush message to kafka finished")
70 | break
71 | }
72 | }
73 | }
74 |
--------------------------------------------------------------------------------
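
A minimal standalone sketch of the producer side (illustration only; the broker address, topic and payload are placeholders). It mirrors how the Kafka adapter is expected to drive this client: send, then `FlushAll` before shutting down.

```
package main

import (
	"kafkaclient"

	"github.com/gitstliu/log4go"
)

func main() {
	// Placeholder broker; NewClient logs the error and returns nil on failure.
	c := kafkaclient.NewClient(&kafkaclient.ClientConfig{Address: "127.0.0.1:9092"})
	if c == nil {
		return
	}
	defer c.Close()

	if err := c.SendMessages([]string{`{"id":1}`}, []string{"demo1"}); err != nil {
		log4go.Error(err)
	}
	// Block until the producer's internal queue is drained.
	c.FlushAll()
}
```
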
/src/config/config.go:
--------------------------------------------------------------------------------
1 | package config
2 |
3 | import (
4 | "io/ioutil"
5 | "strings"
6 |
7 | "github.com/BurntSushi/toml"
8 | "github.com/gitstliu/log4go"
9 | )
10 |
11 | type CanalConfig struct {
12 | Cancalconfigpath string `toml:"cancalconfigpath"`
13 | Posconfigfile string `toml:"posconfigfile"`
14 | Bulksize int `toml:"bulksize"`
15 | Flushbulktime int64 `toml:"flushbulktime"`
16 | CacheSize int64 `toml:"cachesize"`
17 | Redis *RedisConfig `toml:"redis"`
18 | RedisCluster *RedisClusterConfig `toml:"redis-cluster"`
19 | Elasticsearch *ElasticsearchConfig `toml:"elasticsearch"`
20 | Kafka *KafkaConfig `toml:"kafka"`
21 | Datafile *DatafileConfig `toml:"datafile"`
22 | LogPos *Pos
23 | // Target map[string]string `toml:"target"`
24 | }
25 |
26 | type Pos struct {
27 | Name string `toml:"bin_name"`
28 | Pos uint32 `toml:"bin_pos"`
29 | }
30 |
31 | type CommonConfig interface {
32 | GetConfigName() string
33 | }
34 |
35 | type RedisConfig struct {
36 | CommonConfig
37 | Address string `toml:"address"`
38 | Password string `toml:"password"`
39 | DB int `toml:"db"`
40 | Tables map[string]*RedisTableConfig `toml:"tables"`
41 | }
42 |
43 | func (this *RedisConfig) GetConfigName() string {
44 | return "Redis"
45 | }
46 |
47 | type RedisTableConfig struct {
48 | Tablename string `toml:"tablename"`
49 | Actions []string `toml:"actions"`
50 | Struct string `toml:"struct"`
51 | Key []string `toml:"key"`
52 | Keysplit string `toml:"keysplit"`
53 | Valuetype string `toml:"valuetype"`
54 | Valuesplit string `toml:"valuesplit"`
55 | KeyPrefix string `toml:"keyprefix"`
56 | KeyPostfix string `toml:"keypostfix"`
57 | Reidskey string `toml:"reidskey"`
58 | }
59 |
60 | type RedisClusterConfig struct {
61 | CommonConfig
62 | Address []string `toml:"address"`
63 | ReadTimeout int64 `toml:"readtimeout"`
64 | ConnTimeout int64 `toml:"conntimeout"`
65 | WriteTimeout int64 `toml:"writetimeout"`
66 | AliveTime int64 `toml:"alivetime"`
67 | Keepalive int `toml:"keepalive"`
68 | Tables map[string]*RedisTableConfig `toml:"tables"`
69 | }
70 |
71 | func (this *RedisClusterConfig) GetConfigName() string {
72 | return "RedisCluster"
73 | }
74 |
75 | type ElasticsearchConfig struct {
76 | CommonConfig
77 | Address string `toml:"address"`
78 | User string `toml:"user"`
79 | Password string `toml:"password"`
80 | IsHttps bool `toml:"ishttps"`
81 | Tables map[string]*ElasticsearchTableConfig `toml:"tables"`
82 | }
83 |
84 | type ElasticsearchTableConfig struct {
85 | Tablename string `toml:"tablename"`
86 | Actions []string `toml:"actions"`
87 | Index string `toml:"index"`
88 | IndexType string `toml:"indextype"`
89 | Key []string `toml:"key"`
90 | Keysplit string `toml:"keysplit"`
91 | KeyPrefix string `toml:"keyprefix"`
92 | KeyPostfix string `toml:"keypostfix"`
93 | }
94 |
95 | func (this *ElasticsearchConfig) GetConfigName() string {
96 | return "Elasticsearch"
97 | }
98 |
99 | type KafkaConfig struct {
100 | CommonConfig
101 | Address string `toml:"address"`
102 | Tables map[string]*KafkaTableConfig `toml:"tables"`
103 | }
104 |
105 | type KafkaTableConfig struct {
106 | Tablename string `toml:"tablename"`
107 | Actions []string `toml:"actions"`
108 | Topic []string `toml:"topic"`
109 | }
110 |
111 | func (this *KafkaConfig) GetConfigName() string {
112 | return "Kafka"
113 | }
114 |
115 | type DatafileConfig struct {
116 | CommonConfig
117 | Filename string `toml:"filename"`
118 | }
119 |
120 | func (this *DatafileConfig) GetConfigName() string {
121 | return "Datafile"
122 | }
123 |
124 | type Configure struct {
125 | // Datafile DatafileConfig `toml:"datafile"`
126 | CanalConfigs map[string]*CanalConfig `toml:"canal"`
127 | }
128 |
129 | var configure *Configure
130 |
131 | func LoadConfigWithFile(name string) error {
132 | data, readFileErr := ioutil.ReadFile(name)
133 | if readFileErr != nil {
134 | log4go.Error(readFileErr)
135 | return readFileErr
136 | }
137 |
138 | conf := &Configure{}
139 | _, decodeTomlErr := toml.Decode(string(data), &conf)
140 | if decodeTomlErr != nil {
141 | log4go.Error(decodeTomlErr)
142 | return decodeTomlErr
143 | }
144 |
145 | for _, currCanalConfig := range conf.CanalConfigs {
146 | if strings.Trim(currCanalConfig.Posconfigfile, " ") != "" {
147 | currPos, readPosErr := ioutil.ReadFile(currCanalConfig.Posconfigfile)
148 | if readPosErr != nil {
149 | log4go.Error(readPosErr)
150 | return readPosErr
151 | }
152 | pos := &Pos{}
153 | _, decodePosErr := toml.Decode(string(currPos), &pos)
154 |
155 | if decodePosErr != nil {
156 | log4go.Error(decodePosErr)
157 | return decodePosErr
158 | }
159 |
160 | if pos.Name != "" {
161 | currCanalConfig.LogPos = pos
162 | }
163 | }
164 |
165 | }
166 |
167 | configure = conf
168 |
169 | return nil
170 | }
171 |
172 | func GetConfigure() *Configure {
173 | return configure
174 | }
175 |
--------------------------------------------------------------------------------
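
The loader reads `config.toml`, and for every canal entry that names a `posconfigfile` it also parses the saved binlog position into `LogPos`. The standalone sketch below is an illustration only (paths as in main.go): it loads the configuration and prints the fields that drive batching and resume behaviour.

```
package main

import (
	"config"

	"github.com/gitstliu/log4go"
)

func main() {
	if err := config.LoadConfigWithFile("config/config.toml"); err != nil {
		log4go.Error(err)
		return
	}

	for name, canalConf := range config.GetConfigure().CanalConfigs {
		log4go.Info("canal %v: bulksize=%v flushbulktime=%vms cachesize=%v",
			name, canalConf.Bulksize, canalConf.Flushbulktime, canalConf.CacheSize)
		if canalConf.LogPos != nil {
			log4go.Info("canal %v resumes from %v:%v", name, canalConf.LogPos.Name, canalConf.LogPos.Pos)
		}
	}
}
```
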
/src/config/config.toml:
--------------------------------------------------------------------------------
1 | [canal]
2 | [canal.database1]
3 | cancalconfigpath = "canalconfigs/database1.toml"
4 | posconfigfile = "canalconfigs/database1.pos"
5 | bulksize = 1000
6 | flushbulktime = 5000
7 | cachesize = 1000
8 | # [canal.database1.redis]
9 | # address = "10.0.90.164:6379"
10 | # password = ""
11 | # db = 0
12 | # [canal.database1.redis.tables.t1]
13 | # tablename = "t1"
14 | # actions = ["delete","update","insert"]
15 | # struct = "hash" # string, list, set, hash
16 | # key = ["id","location_id"]
17 | # keysplit = "-"
18 | # keyprefix = "t3"
19 | # keypostfix = "end"
20 | # valuetype = "json" # json, splitstring
21 | # valuesplit = "-"
22 | # splitcolumns = [""]
23 | # reidskey = "zztest1" #list, set, hash
24 | # [canal.database1.redis-cluster]
25 | # address = ["10.0.71.50:7001","10.0.71.50:7006","10.0.71.52:7003","10.0.71.52:7005","10.0.71.51:7002","10.0.71.51:7004"]
26 | # readtimeout = 2 #s
27 | # conntimeout = 2 #s
28 | # writetimeout = 2 #s
29 | # alivetime = 60 #s
30 | # keepalive = 100
31 | # [canal.database1.redis-cluster.tables.t1]
32 | # tablename = "t1"
33 | # actions = ["delete","update","insert"]
34 | # struct = "hash" # string, list, set, hash
35 | # key = ["id","location_id"]
36 | # keysplit = "-"
37 | # keyprefix = "t3"
38 | # keypostfix = "end"
39 | # valuetype = "json" # json, splitstring
40 | # valuesplit = "-"
41 | # splitcolumns = [""]
42 | # reidskey = "zztest1" #list, set, hash
43 | [canal.database1.elasticsearch]
44 | #address = "http://10.0.91.125:9200"
45 | #address = "http://es-cn-4590ox5kl000avgam.elasticsearch.aliyuncs.com:9200"
46 | address = "http://172.19.1.205:9200"
47 | user = "elastic"
48 | password = "AkcTest2018"
49 | ishttps = false
50 | [canal.database1.elasticsearch.tables.t1]
51 | tablename = "ztest"
52 | actions = ["delete","update","insert"]
53 | index = "dadacang_t1"
54 | indextype = "type1"
55 | key = ["id","location_id"]
56 | keysplit = "-"
57 | keyprefix = "t3"
58 | keypostfix = "end"
59 | # [canal.database1.kafka]
60 | ### address = ["10.0.91.85:9092","10.0.91.114:9092","10.0.91.150:9092"]
61 | # address = "10.0.91.85:9092"
62 | # [canal.database1.kafka.tables.t1]
63 | # tablename = "t1"
64 | # actions = ["delete","update","insert"]
65 | # topic = ["demo1"]
66 | # [canal.database1.datafile]
67 | # filename = "datafile/1.data"
68 |
69 |
70 | # [canal.database2]
71 | # cancalconfigpath = "canalconfigs/database2.toml"
72 | # posconfigfile = "canalconfigs/database2.pos"
73 | # bulksize = 1000
74 | # flushbulktime = 5000
75 | # cachesize = 1000
76 | # [canal.database2.redis]
77 | # address = "10.0.90.164:6379"
78 | # password = ""
79 | # db = 1
80 | # [canal.database2.redis.tables.t1]
81 | # tablename = "t1"
82 | # actions = ["delete","update","insert"]
83 | # struct = "hash" # string, list, set, hash, zset
84 | # key = ["id","location_id"]
85 | # keysplit = "-"
86 | # keyprefix = "t3"
87 | # keypostfix = "end"
88 | # valuetype = "json" # json, splitstring
89 | # valuesplit = "-"
90 | # splitcolumns = [""]
91 | # reidskey = "zztest2" #list, set, hash, zset
92 | # [canal.database2.redis-cluster]
93 | # address = ["10.0.71.50:7001","10.0.71.50:7006","10.0.71.52:7003","10.0.71.52:7005","10.0.71.51:7002","10.0.71.51:7004"]
94 | # readtimeout = 2 #s
95 | # conntimeout = 2 #s
96 | # writetimeout = 2 #s
97 | # alivetime = 60 #s
98 | # keepalive = 100
99 | # [canal.database2.redis-cluster.tables.t1]
100 | # tablename = "t1"
101 | # actions = ["delete","update","insert"]
102 | # struct = "hash" # string, list, set, hash, zset
103 | # key = ["id","location_id"]
104 | # keysplit = "-"
105 | # keyprefix = "t3"
106 | # keypostfix = "end"
107 | # valuetype = "json" # json, splitstring
108 | # valuesplit = "-"
109 | # splitcolumns = [""]
110 | # reidskey = "zztest2" #list, set, hash, zset
111 | # [canal.database2.elasticsearch]
112 | # address = "http://10.0.91.125:9200"
113 | # user = ""
114 | # password = ""
115 | # ishttps = false
116 | # [canal.database2.elasticsearch.tables.t1]
117 | # tablename = "t1"
118 | # actions = ["delete","update","insert"]
119 | # index = "t2_index"
120 | # indextype = "type1"
121 | # key = ["id","location_id"]
122 | # keysplit = "-"
123 | # keyprefix = "t3"
124 | # keypostfix = "end"
125 | # [canal.database2.kafka]
126 | ## address = ["10.0.91.85:9092","10.0.91.114:9092","10.0.91.150:9092"]
127 | # address = "10.0.91.85:9092"
128 | # [canal.database2.kafka.tables.t1]
129 | # tablename = "t1"
130 | # actions = ["delete","update","insert"]
131 | # topic = ["demo2"]
132 | # [canal.database2.datafile]
133 | # filename = "datafile/2.data"
134 |
135 | # [canal.database2]
136 | # cancalconfigpath = "canalconfigs/database2.toml"
137 | # bulksize = 200
138 | # flushbulktime = 5000
139 | # cachesize = 1000
140 | # [canal.database2.redis]
141 | # address = ["10.0.91.85:9092"]
--------------------------------------------------------------------------------
/src/config/log.xml:
--------------------------------------------------------------------------------
1 | <logging>
2 |   <filter enabled="true">
3 |     <tag>stdout</tag>
4 |     <type>console</type>
5 |     <level>INFO</level>
6 |   </filter>
7 |   <filter enabled="true">
8 |     <tag>file</tag>
9 |     <type>file</type>
10 |     <level>INFO</level>
11 |     <property name="filename">logs/log/monitor.log</property>
12 |     <property name="format">[%D %T] [%L] %M</property>
13 |     <property name="rotate">false</property>
14 |     <property name="maxsize">0M</property>
15 |     <property name="maxlines">0K</property>
16 |     <property name="daily">true</property>
17 |   </filter>
18 | </logging>
--------------------------------------------------------------------------------
/src/main.go:
--------------------------------------------------------------------------------
1 | package main
2 |
3 | import (
4 | "canalhandler"
5 | "config"
6 | "time"
7 |
8 | "github.com/gitstliu/go-commonfunctions"
9 | "github.com/gitstliu/log4go"
10 | )
11 |
12 | func main() {
13 |
14 | log4go.LoadConfiguration("config/log.xml")
15 |
16 | defer log4go.Close()
17 |
18 | 	if loadConfigErr := config.LoadConfigWithFile("config/config.toml"); loadConfigErr != nil { panic(loadConfigErr) }
19 | meta, _ := commonfunctions.ObjectToJson(config.GetConfigure())
20 | log4go.Debug(meta)
21 |
22 | log4go.Debug("********************************************")
23 |
24 | for name, currConfig := range config.GetConfigure().CanalConfigs {
25 |
26 | currCancal := &canalhandler.CommonCanalMeta{}
27 | // go currCancal.RunWithConfig(currConfig.Cancalconfigpath, name, currOutput)
28 | go currCancal.RunWithConfig(name, currConfig)
29 | log4go.Info("Started %v", name)
30 | }
31 |
32 | for true {
33 | time.Sleep(10 * time.Second)
34 | }
35 |
36 | }
37 |
--------------------------------------------------------------------------------
/src/output/output.go:
--------------------------------------------------------------------------------
1 | package output
2 |
3 | import (
4 | "adapter"
5 | "adapter/common"
6 | "config"
7 | "errors"
8 | "fmt"
9 | "os"
10 | "path"
11 | "time"
12 |
13 | "sync"
14 |
15 | "github.com/siddontang/go/ioutil2"
16 |
17 | "github.com/gitstliu/log4go"
18 | "github.com/siddontang/go-mysql/mysql"
19 | )
20 |
21 | type Output struct {
22 | Config *config.CanalConfig
23 | // PosFile *os.File
24 | Adapters map[string]common.WriteAdapter
25 | DataChannel chan interface{}
26 | Datas []interface{}
27 | lastWriteTime time.Time
28 | writeLock *sync.Mutex
29 | writeDataLength int64
30 | }
31 |
32 | func CreateByName(name string) (*Output, error) {
33 |
34 | currConfig, isConfigExist := config.GetConfigure().CanalConfigs[name]
35 | if !isConfigExist {
36 | 		return nil, errors.New(fmt.Sprintf("Output config does not exist for name %s", name))
37 | }
38 | currOutput := &Output{}
39 | currOutput.Config = currConfig
40 | currOutput.Adapters = map[string]common.WriteAdapter{}
41 | // currOutput.DataChannel = make(chan *common.RawLogEntity, currConfig.CacheSize)
42 | currOutput.DataChannel = make(chan interface{}, currConfig.CacheSize)
43 | currOutput.Datas = []interface{}{}
44 | currOutput.lastWriteTime = time.Now()
45 | currOutput.writeLock = &sync.Mutex{}
46 | posPath := path.Dir(currOutput.Config.Posconfigfile)
47 | makePosPathErr := os.MkdirAll(posPath, os.ModePerm)
48 |
49 | if makePosPathErr != nil {
50 | log4go.Error(makePosPathErr)
51 | panic(makePosPathErr)
52 | }
53 |
54 | // currFile, openFileErr := os.OpenFile(currOutput.Config.Posconfigfile, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, os.ModePerm)
55 |
56 | // if openFileErr != nil {
57 | // log4go.Error(openFileErr)
58 | // panic(openFileErr)
59 | // }
60 |
61 | // currOutput.PosFile = currFile
62 |
63 | if currConfig.Redis != nil {
64 | currOutput.Adapters[currConfig.Redis.GetConfigName()] = createAdapter(currConfig.Redis)
65 | }
66 |
67 | if currConfig.RedisCluster != nil {
68 | currOutput.Adapters[currConfig.RedisCluster.GetConfigName()] = createAdapter(currConfig.RedisCluster)
69 | }
70 |
71 | if currConfig.Elasticsearch != nil {
72 | currOutput.Adapters[currConfig.Elasticsearch.GetConfigName()] = createAdapter(currConfig.Elasticsearch)
73 | }
74 |
75 | if currConfig.Kafka != nil {
76 | currOutput.Adapters[currConfig.Kafka.GetConfigName()] = createAdapter(currConfig.Kafka)
77 | }
78 |
79 | if currConfig.Datafile != nil {
80 | currOutput.Adapters[currConfig.Datafile.GetConfigName()] = createAdapter(currConfig.Datafile)
81 | }
82 |
83 | return currOutput, nil
84 | }
85 |
86 | func createAdapter(conf config.CommonConfig) common.WriteAdapter {
87 | currAdapter, createAdapterErr := adapter.CreateAdapterWithName(conf)
88 | 	if createAdapterErr != nil {
89 | 		log4go.Error(createAdapterErr)
90 | 		// a broken adapter configuration is fatal at start-up
91 | 		panic(createAdapterErr)
92 | 	}
93 | return currAdapter
94 | }
95 |
96 | func (this *Output) Run() {
97 | this.lastWriteTime = time.Now()
98 | go this.writeTimeProcess()
99 | for true {
100 | currData := <-this.DataChannel
101 | this.Datas = append(this.Datas, currData)
102 | dataLength := len(this.Datas)
103 | if dataLength >= this.Config.Bulksize {
104 | log4go.Info("Bulksize write")
105 | this.writeDataToAdapter()
106 | }
107 | }
108 | }
109 |
110 | func (this *Output) writeDataToAdapter() {
111 | log4go.Debug("Output write!!")
112 | this.writeLock.Lock()
113 | defer this.writeLock.Unlock()
114 | dataLength := len(this.Datas)
115 | if dataLength > 0 {
116 | mainData := []*common.RawLogEntity{}
117 | var posData *mysql.Position = nil
118 | for _, currData := range this.Datas {
119 | switch v := currData.(type) {
120 | case *common.RawLogEntity:
121 | mainData = append(mainData, v)
122 | case *mysql.Position:
123 | posData = v
124 | }
125 | }
126 | for adapterName, currAdapter := range this.Adapters {
127 | adapterWriteErr := currAdapter.Write(mainData)
128 | log4go.Debug("CanalConfig %v Adapter %v write data length %v", this.Config.Cancalconfigpath, adapterName, dataLength)
129 | // log4go.Debug("Adapter is %v", currAdapter.(type))
130 | 			if adapterWriteErr != nil {
131 | 				log4go.Error(adapterWriteErr)
132 | 				// A failed adapter write is treated as fatal: panicking here stops the
133 | 				// process before the binlog position can be advanced past data that
134 | 				// was never written to this adapter.
135 | 				panic(adapterWriteErr)
136 | 			}
137 | }
138 |
139 | if posData != nil {
140 | log4go.Info("Write Pos Data")
141 | binFileName := fmt.Sprintf("bin_name = \"%v\" \r\n", posData.Name)
142 | binFilePos := fmt.Sprintf("bin_pos = %v \r\n", posData.Pos)
143 | content := binFileName + binFilePos
144 | if err := ioutil2.WriteFileAtomic(this.Config.Posconfigfile, []byte(content), 0644); err != nil {
145 | log4go.Error("canal save master info to file %s err %v", this.Config.Posconfigfile, err)
146 | }
147 | // truncateErr := this.PosFile.Truncate(0)
148 | // if truncateErr != nil {
149 | // log4go.Error(truncateErr)
150 | // panic(truncateErr)
151 | // }
152 | // // _, writePosErr := this.PosFile.WriteString(content)
153 | // if writePosErr != nil {
154 | // log4go.Error(writePosErr)
155 | // panic(writePosErr)
156 | // }
157 | // // syncPosErr := this.PosFile.Sync()
158 | // if syncPosErr != nil {
159 | // log4go.Error(syncPosErr)
160 | // panic(syncPosErr)
161 | // }
162 | }
163 |
164 | this.Datas = []interface{}{}
165 | this.lastWriteTime = time.Now()
166 | this.writeDataLength = this.writeDataLength + int64(dataLength)
167 | 		log4go.Info("Written data length = %v", this.writeDataLength)
168 | }
169 | }
170 |
171 | func (this *Output) writeTimeProcess() {
172 | for true {
173 | currTimeDuration := time.Now().UnixNano() - this.lastWriteTime.UnixNano()
174 | configTimeDuration := this.Config.Flushbulktime * int64(time.Millisecond)
175 | if currTimeDuration >= configTimeDuration {
176 | log4go.Info("Time write")
177 | this.writeDataToAdapter()
178 | time.Sleep(time.Duration(configTimeDuration))
179 | } else {
180 | time.Sleep(time.Duration(configTimeDuration - currTimeDuration))
181 | }
182 | }
183 | }
184 |
185 | func (this *Output) Write(data interface{}) {
186 | this.DataChannel <- data
187 | }
188 |
--------------------------------------------------------------------------------
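
Outside of the canal handler, the write path can be illustrated in isolation: create the Output for one canal entry, start its flush loop, and push a synthetic `RawLogEntity`. The sketch below is an illustration only; it assumes the `database1` entry from config.toml, uses placeholder column values, and takes the entity field types as they are used in binloghandler.go.

```
package main

import (
	"adapter/common"
	"config"
	"output"
	"time"

	"github.com/gitstliu/log4go"
)

func main() {
	if err := config.LoadConfigWithFile("config/config.toml"); err != nil {
		log4go.Error(err)
		return
	}

	out, err := output.CreateByName("database1")
	if err != nil {
		log4go.Error(err)
		return
	}
	go out.Run() // flushes by Bulksize or, via writeTimeProcess, by Flushbulktime

	entity := &common.RawLogEntity{
		Action:    "insert",
		TableName: "ztest",
		Header:    []string{"id", "location_id"},
		HeaderMap: map[string]int{"id": 0, "location_id": 1},
		Rows:      [][]interface{}{{1, 100}},
		ValueMap:  map[string]interface{}{"id": 1, "location_id": 100},
	}
	out.Write(entity) // queued on DataChannel, picked up by Run

	time.Sleep(10 * time.Second) // give the timed flush a chance to run before exit
}
```
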