├── README.md
├── okmq-api
│   ├── pom.xml
│   └── src
│       └── main
│           └── java
│               └── com
│                   └── sayhiai
│                       └── arch
│                           └── okmq
│                               └── api
│                                   ├── Packet.java
│                                   └── producer
│                                       ├── AbstractProducer.java
│                                       ├── Callback.java
│                                       ├── HaException.java
│                                       ├── SendResult.java
│                                       └── ha
│                                           ├── HA.java
│                                           ├── Ha2Db.java
│                                           ├── Ha2Redis.java
│                                           └── Ha2SimpleLog.java
├── okmq-kafka
│   ├── pom.xml
│   └── src
│       ├── main
│       │   └── java
│       │       └── com
│       │           └── sayhiai
│       │               └── arch
│       │                   └── okmq
│       │                       └── kafka
│       │                           ├── KafkaProducer.java
│       │                           └── ProducerBuilder.java
│       └── test
│           ├── java
│           │   └── com
│           │       └── sayhiai
│           │           └── arch
│           │               └── okmq
│           │                   └── kafka
│           │                       ├── HaH2Test.java
│           │                       └── HaRedisTest.java
│           └── resources
│               └── logback.xml
└── pom.xml
/README.md:
--------------------------------------------------------------------------------
1 | >The core idea of this tool is a bet: only when both underlying components die at the same time does delivery take a `severe hit`. Well, power outages excepted.
2 |
3 | An mq is a fine thing, and we all use one. That is exactly why an mq ought to be highly, highly available. A certain large company has had several production incidents because of this very component.
4 |
5 | Most business systems require `at least once` message semantics: there will be duplicate messages, but none will be lost. Even so, many problems remain:
6 |
7 | **1. mq availability cannot be guaranteed.** An unexpected mq death makes sends fail on the producer side. Many messages then have to be replayed by scraping logs, which is costly and slow.
8 |
9 | **2. The mq can block normal business flow.** An mq stall or a network problem leaves business threads stuck in the mq send call; normal work cannot proceed, with disastrous consequences.
10 |
11 | **3. Message latency.** If the mq is dead there is nothing to discuss: messages die before they are even born. Otherwise, latency mostly comes from weak client-side consumption capacity, or from having only a single consumption channel.
12 |
13 | **Using a combination of storage backends to guarantee reliable message delivery: that is `okmq`.**
14 |
15 | >Note: okmq focuses on reliability. Ordering, transactions and other concerns are out of scope. Speed, of course, is a must.
16 |
17 |
18 | # Design rationale
19 |
20 | Even if I used two redis instances to emulate some mq operations, it would beat some of the existing solutions. But that is certainly not what we need: redis has very limited buffering capacity, and watching memory usage climb straight up does not feel great.
21 |
22 | But we can use redis as an additional send-confirmation mechanism. This idea came up in [《使用多线程增加kafka消费能力》](https://mp.weixin.qq.com/s/FhLuPSwJFCdQ1oCWXf-V2Q), and now it is time to implement it.
23 | ## A first look at the API
24 |
25 | ```
26 | OkmqKafkaProducer producer = new ProducerBuilder()
27 | .defaultSerializer()
28 | .eanbleHa("redis")
29 | .any("okmq.redis.mode", "single")
30 | .any("okmq.redis.endpoint", "127.0.0.1:6379")
31 | .any("okmq.redis.poolConfig.maxTotal", 100)
32 | .servers("localhost:9092")
33 | .clientID("okMQProducerTest")
34 | .build();
35 |
36 | Packet packet = new Packet();
37 | packet.setTopic("okmq-test-topic");
38 | packet.setContent("i will send you a msg");
39 | producer.sendAsync(packet, null);
40 | producer.shutdown();
41 | ```
42 | ## Example: redis
43 | 
44 | Walking through the numbered steps:
45 |
46 | **1.** Before a message is sent to kafka, it is first stored in redis. Since the later callback needs a unique identifier, a uuid is added to the packet.
47 |
48 | **2.** The underlying api is then called to perform the actual delivery.
49 |
50 | **3.** kafka's callback is used to delete the corresponding key from redis; this also yields the exact ack time of each message. Any key left undeleted for too long is counted as a failed delivery.
51 |
52 | **4.** A background thread iterates over these failed messages and re-delivers them. We call this recovery, and it is the most complex part. For redis, it first competes for a lock held for 5 minutes, then walks the relevant hash keys.
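The key layout behind step 4 can be made concrete. Under the default `okmq.redis.splitMillis` of 5 minutes, every packet timestamp maps into a time bucket, and all packets of one bucket share a single redis hash key, so recovery only has to walk a handful of keys. A minimal standalone sketch of that arithmetic (`KeyBucket` is a hypothetical name; in this repo the same computation lives in `Ha2Redis#getInnerIdxHashKeyByDate`):

```java
// Standalone sketch of the okmq redis key bucketing (hypothetical helper class).
class KeyBucket {
    static final String PREFIX = "okmq:";
    static final long SPLIT_MILLIS = 5 * 60 * 1000L; // okmq.redis.splitMillis default: 5min

    // All packets created within the same 5-minute window share one hash key.
    static String bucketKey(long timestampMillis) {
        return PREFIX + (timestampMillis / SPLIT_MILLIS);
    }

    public static void main(String[] args) {
        // The packet timestamp from the redis command trace in this README.
        System.out.println(bucketKey(1559206423318L)); // okmq:5197354
    }
}
```

This is why the trace below addresses the hash `okmq:5197354`: the packet's timestamp `1559206423318` divided by 300000 ms lands in bucket 5197354.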
53 |
54 | So, for the code above, redis receives the following commands:
55 | ```
56 | 1559206423.395597 [0 127.0.0.1:62858] "HEXISTS" "okmq:indexhash" "okmq:5197354"
57 | 1559206423.396670 [0 127.0.0.1:62858] "HSET" "okmq:indexhash" "okmq:5197354" ""
58 | 1559206423.397300 [0 127.0.0.1:62858] "HSET" "okmq:5197354" "okmq::2b9b33fd-95fd-4cd6-8815-4c572f13f76e" "{\"content\":\"i will send you a msg104736623015238\",\"topic\":\"okmq-test-topic\",\"identify\":\"2b9b33fd-95fd-4cd6-8815-4c572f13f76e\",\"timestamp\":1559206423318}"
59 | 1559206423.676212 [0 127.0.0.1:62858] "HDEL" "okmq:5197354" "okmq::2b9b33fd-95fd-4cd6-8815-4c572f13f76e"
60 | 1559206428.327788 [0 127.0.0.1:62861] "SET" "okmq:recovery:lock" "01fb85a9-0670-40c3-8386-b2b7178d4faf" "px" "300000"
61 | 1559206428.337930 [0 127.0.0.1:62858] "HGETALL" "okmq:indexhash"
62 | 1559206428.341365 [0 127.0.0.1:62858] "HSCAN" "okmq:5197354" "0"
63 | 1559206428.342446 [0 127.0.0.1:62858] "HDEL" "okmq:indexhash" "okmq:5197354"
64 | 1559206428.342788 [0 127.0.0.1:62861] "GET" "okmq:recovery:lock"
65 | 1559206428.343119 [0 127.0.0.1:62861] "DEL" "okmq:recovery:lock"
66 | ```
67 |
68 | ## Answers to the problems above
69 |
70 | #### The three problems listed at the top are answered as follows:
71 |
72 | **1. mq availability cannot be guaranteed.**
73 |
74 | Why recover after the fact? Isn't it better to carry a recovery mechanism along? By iterating over the messages that never received an ack, the whole process can be automated.
75 |
76 |
77 | **2. The mq can block normal business flow.**
78 |
79 | By setting kafka's MAX_BLOCK_MS_CONFIG parameter, business threads can in fact avoid blocking, but messages will be dropped. Another storage backend can then guarantee that those dropped messages are resent.
81 |
82 | **3. Message latency.**
83 |
84 | Even if the mq dies, the backup channel keeps serving normally. Some teams guarantee this with double-write and double-consume across two mqs, clearly out of desperation :). If kafka dies, the business switches over to the backup channel for consumption.
85 |
86 | ## Extending with your own HA
87 |
88 | If you don't want to use redis, say you'd rather use hbase, that is simple too.
89 | You just need to implement the HA interface.
90 | ```
91 | public interface HA {
92 | void close();
93 |
94 | void configure(Properties properties);
95 |
96 | void preSend(Packet packet) throws HaException;
97 |
98 | void postSend(Packet packet) throws HaException;
99 |
100 | void doRecovery(AbstractProducer producer) throws HaException;
101 | }
102 | ```
103 | Before using it, you also need to register your plugin.
104 |
105 | ```
106 | AbstractProducer.register("log", "com.sayhiai.arch.okmq.api.producer.ha.Ha2SimpleLog");
107 | ```
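To sketch what such a plugin might look like, here is a toy in-memory implementation. The `MiniPacket`/`MiniHA` types below are simplified stand-ins for the repo's real `Packet`/`HA` classes (so the sketch compiles on its own), and `Ha2Memory` is a hypothetical name:

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-ins for the repo's Packet/HA types, so this sketch compiles alone.
class MiniPacket {
    String identify;
    String content;
}

interface MiniHA {
    void configure(Properties properties);
    void preSend(MiniPacket packet);   // called before the real mq send
    void postSend(MiniPacket packet);  // called from the mq ack callback
}

// A toy backup channel: pending packets sit in a map until their ack arrives.
// A real plugin would persist them (hbase, a local kv store, ...) instead.
class Ha2Memory implements MiniHA {
    private final Map<String, MiniPacket> pending = new ConcurrentHashMap<>();

    @Override
    public void configure(Properties properties) {
        // a real plugin would read its okmq.* keys here
    }

    @Override
    public void preSend(MiniPacket packet) {
        pending.put(packet.identify, packet); // step 1: store before the send
    }

    @Override
    public void postSend(MiniPacket packet) {
        pending.remove(packet.identify);      // step 3: ack received, forget it
    }

    // whatever is still here after a while is what recovery must replay
    int pendingCount() {
        return pending.size();
    }

    public static void main(String[] args) {
        Ha2Memory ha = new Ha2Memory();
        MiniPacket p = new MiniPacket();
        p.identify = "demo-id";
        ha.preSend(p);
        ha.postSend(p);
        System.out.println(ha.pendingCount()); // 0
    }
}
```

A real plugin would implement the repo's actual `HA` interface (including `doRecovery`) and be registered by name exactly as in the `register` call above.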
108 |
109 | ## Key parameters
110 |
111 | ```
112 | okmq.ha.recoveryPeriod        check period of the recovery thread, default 5 seconds
113 |
114 | okmq.redis.mode               redis deployment mode, one of: single, sentinel, cluster
115 | okmq.redis.endpoint           address(es); separate multiple addresses with commas
116 | okmq.redis.connectionTimeout  connection timeout
117 | okmq.redis.soTimeout          socket timeout
118 | okmq.redis.lockPx             hold time of the distributed lock, default 5min
119 | okmq.redis.splitMillis        interval after which redis rotates to a new key, default 5min
120 | okmq.redis.poolConfig.*       all jedis pool parameters are accepted
121 | ```
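For the sentinel and cluster modes, `okmq.redis.endpoint` holds a comma-separated list of `host:port` pairs. A small sketch of that parsing (`EndpointParser` is a hypothetical helper; `Ha2Redis.configure` does the equivalent splitting internally):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical helper mirroring how Ha2Redis splits okmq.redis.endpoint
// into host:port pairs for the sentinel/cluster modes.
class EndpointParser {
    static List<String[]> parse(String endpoint) {
        return Arrays.stream(endpoint.split(","))
                .map(item -> item.split(":"))   // each entry becomes {host, port}
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        for (String[] hp : parse("127.0.0.1:6379,127.0.0.1:6380")) {
            System.out.println(hp[0] + " -> " + hp[1]);
        }
    }
}
```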
122 | # Features in version 1.0.0
123 |
124 | 1. Abstracted producer-side high availability and implemented a kafka example.
125 |
126 | 2. Added the SimpleLog implementation with ping/pong logging.
127 |
128 | 3. Added the Redis producer-side backup channel, covering the single, cluster and sentinel modes.
129 |
130 | 4. Custom backup channels can be plugged in.
131 |
132 | 5. Compatible with all kafka parameter settings.
133 |
134 | ## Roadmap
135 |
136 | ### 2.0.0
137 |
138 | 1. ActiveMQ integration.
139 |
140 | 2. Backup-channel integration on the consumer side.
141 |
142 | 3. Producer integration with an embedded kv store.
143 |
144 | 4. Finer-grained control over system behavior.
145 |
146 | 5. Switches and warm-up, so a freshly restarted mq is not immediately overwhelmed.
147 |
148 | 6. Redis sharding, for very large systems.
149 |
150 | ### 3.0.0
151 |
152 | 1. Monitoring.
153 |
154 | 2. A rest interface.
155 |
156 | # Usage restrictions
157 |
158 | When you set the ha parameter to true, you accept the usage restrictions below. Otherwise the system behaves exactly like the native client.
159 |
160 | **Restrictions:**
161 | This tool only suits ordinary message delivery that is neither ordered nor transactional, with idempotency already handled on the client side. Order systems, message notifications and similar businesses are a very good fit. If you need other guarantees, please leave this page.
162 |
163 | If kafka dies, or redis alone dies, every message is still eventually delivered. Only when kafka and redis die at the same time does sending fail, and the messages are then recorded in a log file.
164 |
165 | Under normal conditions redis uses only a tiny fraction of its capacity. In a failure scenario its limited capacity fills up fast: redis's remaining headroom is your `StopWatch`. You must restore your messaging system within that window, so hold on tight.
166 |
167 | # End
168 |
169 | The system is currently at version 1.0.0 and is in small-scale trial in production. The tool is niche, yet it fits most application scenarios. If you are looking for this kind of solution, you are welcome to help improve the code.
170 |
171 |
172 |
173 | GitHub address:
174 | ```
175 | https://github.com/sayhiai/okmq
176 | ```
177 |
178 | You are also welcome to follow the WeChat public account 《小姐姐味道》 to get in touch.
179 |
180 | 
181 |
--------------------------------------------------------------------------------
/okmq-api/pom.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
3 |     <parent>
4 |         <artifactId>okmq</artifactId>
5 |         <groupId>com.sayhiai.arch</groupId>
6 |         <version>1.0.3</version>
7 |     </parent>
8 |     <modelVersion>4.0.0</modelVersion>
9 |
10 |     <artifactId>okmq-api</artifactId>
11 |     <version>1.0.3</version>
12 |
13 |     <dependencies>
14 |         <dependency>
15 |             <groupId>redis.clients</groupId>
16 |             <artifactId>jedis</artifactId>
17 |             <version>3.0.1</version>
18 |             <scope>provided</scope>
19 |         </dependency>
20 |
21 |         <dependency>
22 |             <groupId>com.h2database</groupId>
23 |             <artifactId>h2</artifactId>
24 |             <version>1.4.199</version>
25 |             <scope>provided</scope>
26 |         </dependency>
27 |
28 |         <dependency>
29 |             <groupId>com.alibaba</groupId>
30 |             <artifactId>druid</artifactId>
31 |             <version>1.1.17</version>
32 |             <scope>provided</scope>
33 |         </dependency>
34 |
35 |         <dependency>
36 |             <groupId>com.fasterxml.jackson.core</groupId>
37 |             <artifactId>jackson-databind</artifactId>
38 |             <version>2.9.8</version>
39 |         </dependency>
40 |     </dependencies>
41 |
42 |     <build>
43 |         <plugins>
44 |             <plugin>
45 |                 <groupId>org.apache.maven.plugins</groupId>
46 |                 <artifactId>maven-compiler-plugin</artifactId>
47 |                 <configuration>
48 |                     <source>1.8</source>
49 |                     <target>1.8</target>
50 |                     <encoding>UTF-8</encoding>
51 |                 </configuration>
52 |             </plugin>
53 |         </plugins>
54 |     </build>
55 | </project>
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/Packet.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api;
2 |
3 | import lombok.Data;
4 |
5 | import java.util.UUID;
6 |
7 | @Data
8 | public class Packet {
9 | private String content;
10 | private String topic;
11 | private String identify = UUID.randomUUID().toString();
12 |
13 |
14 | private long timestamp = System.currentTimeMillis();
15 | }
16 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/AbstractProducer.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer;
2 |
3 |
4 | import com.sayhiai.arch.okmq.api.Packet;
5 | import com.sayhiai.arch.okmq.api.producer.ha.HA;
6 | import com.sayhiai.arch.okmq.api.producer.ha.Ha2SimpleLog;
7 | import lombok.extern.slf4j.Slf4j;
8 |
9 | import java.util.HashMap;
10 | import java.util.Properties;
11 | import java.util.concurrent.ScheduledExecutorService;
12 | import java.util.concurrent.ScheduledThreadPoolExecutor;
13 | import java.util.concurrent.TimeUnit;
14 |
15 | @Slf4j
16 | public abstract class AbstractProducer {
17 | private AbstractProducer() {
18 | }
19 |
20 | public AbstractProducer(boolean ha, String haMode) {
21 | this.config = new HA.Config();
22 | config.setHa(ha);
23 | config.setHaType(haMode);
24 | }
25 |
26 |
27 | protected HA ha;
28 | protected HA.Config config;
29 | protected Properties properties;
30 |
31 | private ScheduledExecutorService recoveryExecutor =
32 | new ScheduledThreadPoolExecutor(1);
33 |
34 |
35 | private static HashMap<String, String> registers = new HashMap<>();
36 |
37 | static {
38 | register("log", "com.sayhiai.arch.okmq.api.producer.ha.Ha2SimpleLog");
39 | register("redis", "com.sayhiai.arch.okmq.api.producer.ha.Ha2Redis");
40 | register("h2", "com.sayhiai.arch.okmq.api.producer.ha.Ha2Db");
41 | }
42 |
43 | public static final void register(String name, String haClass) {
44 | registers.putIfAbsent(name, haClass);
45 | }
46 |
47 |
48 | public void init(final Properties properties) {
49 | this.properties = properties;
50 |
51 | if (config.isHa()) {
52 |
53 | //recovery period
54 | Object recoveryPeriodVal = properties.get("okmq.ha.recoveryPeriod");
55 | if (null != recoveryPeriodVal) {
56 | long value = Long.valueOf(recoveryPeriodVal + "");
57 | if (value > 5000L) {//magic
58 | config.setRecoveryPeriod(value);
59 | }
60 | }
61 |
62 | final String haClass = registers.get(config.getHaType());
63 | try {
64 | ha = (HA) Class.forName(haClass).getDeclaredConstructors()[0].newInstance();
65 | } catch (Exception ex) {
66 | log.error("okmq:ha invoke error", ex);
67 | }
68 | if (null == ha) {
69 | log.warn("NO HA instance supplied! use the Ha2SimpleLog default!");
70 | ha = new Ha2SimpleLog();
71 | }
72 | ha.configure(properties);
73 | }
74 | doInit();
75 |
76 | //Begin do recovery!
77 | doRecovery();
78 |
79 | }
80 |
81 | protected abstract void doInit();
82 |
83 | public abstract void doShutdown();
84 |
85 | public void shutdown() {
86 | this.doShutdown();
87 | if (config.isHa()) {
88 | ha.close();
89 | }
90 | recoveryExecutor.shutdown();
91 | }
92 |
93 | public abstract SendResult sendSerial(Packet packet, long timeoutMillis);
94 |
95 | public SendResult sendSerial(Packet packet) {
96 | return sendSerial(packet, -1);
97 | }
98 |
99 | public abstract SendResult sendAsync(Packet packet);
100 |
101 | public abstract SendResult sendAsync(Packet packet, Callback callback);
102 |
103 | public abstract <T> T getBackend();
104 |
105 |
106 | private long preRecovery = System.currentTimeMillis();
107 |
108 | private void doRecovery() {
109 | if (config.isHa()) {
110 | final long rVal = config.getRecoveryPeriod();
111 | recoveryExecutor.scheduleAtFixedRate(new Runnable() {
112 | @Override
113 | public void run() {
114 | long now = System.currentTimeMillis();
115 | if (now - preRecovery > rVal) {
116 | try {
117 | ha.doRecovery(AbstractProducer.this);
118 | } catch (HaException e) {
119 | log.error("okmq:recovery", e);
120 | }
121 | preRecovery = System.currentTimeMillis();
122 | } else {
123 | log.info("recovery request ignored! {}ms passed.", now - preRecovery);
124 | }
125 | }
126 | }, rVal, rVal, TimeUnit.MILLISECONDS);
127 | }
128 | }
129 |
130 | }
131 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/Callback.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 |
5 | public interface Callback<T> {
6 | void callback(Packet packet, T orgiMeta, Exception exception);
7 | }
8 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/HaException.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer;
2 |
3 | public class HaException extends Exception {
4 | public HaException(String msg) {
5 | super(msg);
6 | }
7 | }
8 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/SendResult.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer;
2 |
3 |
4 | import lombok.Data;
5 |
6 | @Data
7 | public class SendResult {
8 | private int code;
9 | private String msg;
10 |
11 |
12 | public static final int OK = 200;
13 | public static final int ERROR = 500;
14 | }
15 |
16 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/ha/HA.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer.ha;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 | import com.sayhiai.arch.okmq.api.producer.AbstractProducer;
5 | import com.sayhiai.arch.okmq.api.producer.HaException;
6 | import lombok.Getter;
7 | import lombok.Setter;
8 |
9 | import java.util.Properties;
10 |
11 | /**
12 | * like
13 | * org.apache.kafka.clients.producer.ProducerInterceptor
14 | */
15 | public interface HA {
16 | void close();
17 |
18 | void configure(Properties properties);
19 |
20 | void preSend(Packet packet) throws HaException;
21 |
22 | void postSend(Packet packet) throws HaException;
23 |
24 | void doRecovery(AbstractProducer producer) throws HaException;
25 |
26 | @Getter
27 | @Setter
28 | final class Config {
29 | private boolean ha = true;
30 | private String haType = "log";
31 | private long recoveryPeriod = 5000L;
32 | }
33 | }
34 |
35 |
36 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/ha/Ha2Db.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer.ha;
2 |
3 | import com.alibaba.druid.pool.DruidDataSource;
4 | import com.sayhiai.arch.okmq.api.Packet;
5 | import com.sayhiai.arch.okmq.api.producer.AbstractProducer;
6 | import com.sayhiai.arch.okmq.api.producer.HaException;
7 | import lombok.Getter;
8 | import lombok.Setter;
9 | import lombok.extern.slf4j.Slf4j;
10 | import org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException;
11 |
12 | import java.beans.BeanInfo;
13 | import java.beans.Introspector;
14 | import java.beans.PropertyDescriptor;
15 | import java.lang.reflect.Method;
16 | import java.sql.Connection;
17 | import java.sql.ResultSet;
18 | import java.sql.SQLException;
19 | import java.sql.Statement;
20 | import java.util.ArrayList;
21 | import java.util.List;
22 | import java.util.Properties;
23 |
24 | @Slf4j
25 | public class Ha2Db implements HA {
26 |
27 | private DBConnection dbConnection;
28 |
29 | private final static String Props_Prefix = "okmq.h2.";
30 |
31 | private final static int DEFAULT_LENGTH = 32768;
32 |
33 | private static final String INIT_DB = "CREATE TABLE okmq_msg (id VARCHAR(255) PRIMARY KEY NOT NULL,topic VARCHAR(255) NOT NULL,content VARCHAR(%s) NOT NULL,createtime bigint(20) NOT NULL);CREATE INDEX IDX_OKMQ_CREATETIME ON okmq_msg(createtime);";
34 |
35 | private static final String SQL_INSERT = "INSERT INTO okmq_msg VALUES('%s','%s','%s',%s)";
36 |
37 | private static final String SQL_DELETE = "DELETE FROM okmq_msg WHERE id = '%s'";
38 |
39 | private static final String SQL_QUERY = "SELECT TOP 1000 id,topic,content,createtime FROM okmq_msg ORDER BY createtime ASC";
40 |
41 | @Override
42 | public void close() {
43 | dbConnection.close();
44 | org.h2.Driver.unload();
45 | }
46 |
47 |
48 |
49 | @Override
50 | public void configure(Properties properties) {
51 | dbConnection = new DBConnection();
52 |
53 | try {
54 | BeanInfo beanInfo = Introspector.getBeanInfo(DBConnection.class);
55 | for (PropertyDescriptor desc : beanInfo.getPropertyDescriptors()) {
56 | final String name = desc.getName();
57 | final Object value = properties.get(Props_Prefix + name);
58 | if (null != value && !"".equals(value)) {
59 | Method method = desc.getWriteMethod();
60 | method.invoke(dbConnection, properties.get(Props_Prefix + name));
61 | }
62 | }
63 | } catch (Exception e) {
64 | log.error("error while configuring h2, please check it!", e);
65 | }
66 |
67 | org.h2.Driver.load();
68 | Connection conn = dbConnection.connect();
69 | try (Statement stat = conn.createStatement()) {
70 | stat.execute(String.format(INIT_DB, dbConnection.getDataLength()>0?dbConnection.getDataLength():DEFAULT_LENGTH));
71 | } catch (SQLException e) {
72 | log.warn("h2 table already exist");
73 | }
74 |
75 | }
76 |
77 | @Override
78 | public void preSend(Packet packet) throws HaException {
79 |
80 | this.savePacket(packet);
81 | }
82 |
83 | @Override
84 | public void postSend(Packet packet) throws HaException {
85 |
86 | this.deletePacket(packet.getIdentify());
87 | }
88 |
89 | @Override
90 | public void doRecovery(AbstractProducer producer) throws HaException {
91 | try (Statement stat = dbConnection.getConnection().createStatement()) {
92 | ResultSet rs = stat.executeQuery(SQL_QUERY);
93 | List<Packet> result = convertPacket(rs);
94 | while (result.size()>0){
95 | for (Packet packet: result) {
96 | producer.sendAsync(packet);
97 | deletePacket(packet.getIdentify());
98 | }
99 | rs = stat.executeQuery(SQL_QUERY);
100 | result = convertPacket(rs);
101 | }
102 | } catch (Exception e) {
103 | throw new HaException("okmq:h2:" + e.getMessage());
104 | }
105 |
106 | }
107 |
108 | private void savePacket(Packet packet) throws HaException {
109 | String sql = String.format(SQL_INSERT, packet.getIdentify(), packet.getTopic(), packet.getContent(), packet.getTimestamp());
110 | try (Statement stat = dbConnection.getConnection().createStatement()) {
111 | stat.execute(sql);
112 | }catch(JdbcSQLIntegrityConstraintViolationException e){
113 | log.error("duplicate message during recovery, please check kafka status and the local network!");
114 | }catch (SQLException e) {
115 | throw new HaException("okmq:h2:" + e.getMessage());
116 | }
117 | }
118 |
119 | private void deletePacket(String id) throws HaException {
120 | String sql = String.format(SQL_DELETE, id);
121 | try (Statement stat = dbConnection.getConnection().createStatement()) {
122 | stat.execute(sql);
123 | } catch (SQLException e) {
124 | throw new HaException("okmq:h2:" + e.getMessage());
125 | }
126 | }
127 |
128 | private List<Packet> convertPacket(ResultSet rs) throws SQLException {
129 | List<Packet> result = new ArrayList<>(1000);
130 | Packet packet;
131 | while (rs.next()) {
132 | packet = new Packet();
133 | packet.setIdentify(rs.getString(1));
134 | packet.setTopic(rs.getString(2));
135 | packet.setContent(rs.getString(3));
136 | packet.setTimestamp(rs.getLong(4));
137 | result.add(packet);
138 | }
139 | return result;
140 | }
141 |
142 | public class DBConnection {
143 |
144 | @Getter
145 | @Setter
146 | private String url;
147 | @Getter
148 | @Setter
149 | private String user;
150 | @Getter
151 | @Setter
152 | private String passwd;
153 | @Getter
154 | @Setter
155 | private int dataLength;
156 | private String DRIVER_CLASS = "org.h2.Driver";
157 |
158 | private Connection connection;
159 |
160 | private DruidDataSource dataSource;
161 |
162 | public DBConnection() {
163 | }
164 |
165 | public DBConnection(String url, String user, String passwd) {
166 | this.url = url;
167 | this.user = user;
168 | this.passwd = passwd;
169 | }
170 |
171 | public Connection connect() {
172 |
173 | DruidDataSource dataSource = new DruidDataSource();
174 | dataSource.setDriverClassName(DRIVER_CLASS);
175 | dataSource.setUrl(this.url);
176 | dataSource.setUsername(this.user);
177 | dataSource.setPassword(this.passwd);
178 | dataSource.setMaxActive(10);
179 | dataSource.setInitialSize(2);
180 | dataSource.setMaxWait(60000);
181 | dataSource.setMinIdle(2);
182 | dataSource.setTimeBetweenEvictionRunsMillis(60000);
183 | dataSource.setMinEvictableIdleTimeMillis(300000);
184 | dataSource.setValidationQuery("select 1");
185 | dataSource.setTestWhileIdle(false);
186 | dataSource.setTestOnBorrow(false);
187 | dataSource.setTestOnReturn(false);
188 | dataSource.setRemoveAbandoned(false);
189 | dataSource.setRemoveAbandonedTimeout(1800);
190 | dataSource.setLogAbandoned(true);
191 | dataSource.setPoolPreparedStatements(true);
192 | dataSource.setMaxOpenPreparedStatements(30);
193 |
194 | try {
195 | connection = dataSource.getConnection();
196 | this.dataSource = dataSource;
197 | return this.connection;
198 | } catch (SQLException e) {
199 | log.error("h2 connection error", e);
200 | }
201 | return null;
202 | }
203 |
204 | public Connection getConnection() {
205 | if(this.connection == null){
206 | try {
207 | this.connection = this.dataSource.getConnection();
208 | }catch (SQLException e) {
209 | log.error("h2 connection error", e);
210 | }
211 | }
212 | return this.connection;
213 | }
214 |
215 | public void close() {
216 | try {
217 | this.connection.close();
218 | } catch (SQLException e) {
219 | log.error("close h2 connection error", e);
220 | }
221 | }
222 |
223 | }
224 |
225 |
226 | }
227 |
--------------------------------------------------------------------------------
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/ha/Ha2Redis.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer.ha;
2 |
3 | import com.fasterxml.jackson.core.JsonProcessingException;
4 | import com.fasterxml.jackson.databind.ObjectMapper;
5 | import com.sayhiai.arch.okmq.api.Packet;
6 | import com.sayhiai.arch.okmq.api.producer.AbstractProducer;
7 | import com.sayhiai.arch.okmq.api.producer.HaException;
8 | import lombok.Getter;
9 | import lombok.Setter;
10 | import lombok.extern.slf4j.Slf4j;
11 | import redis.clients.jedis.*;
12 | import redis.clients.jedis.params.SetParams;
13 |
14 | import java.beans.BeanInfo;
15 | import java.beans.Introspector;
16 | import java.beans.PropertyDescriptor;
17 | import java.lang.reflect.Method;
18 | import java.util.*;
19 | import java.util.concurrent.TimeUnit;
20 | import java.util.stream.Collectors;
21 |
22 | @Slf4j
23 | public class Ha2Redis implements HA {
24 | JedisPoolAbstract pool;
25 | JedisCluster cluster;
26 | boolean isCluster;
27 |
28 | final static String RedisKeyPrefix = "okmq:";
29 |
30 | final static String RedisRecoveryLockKey = RedisKeyPrefix + "recovery:lock";
31 | final static String LockIdentify = UUID.randomUUID().toString();
32 |
33 | final static String IdxHashKey = RedisKeyPrefix + "indexhash";
34 |
35 | private final static String Props_Prefix = "okmq.redis.";
36 |
37 | private final ObjectMapper mapper = new ObjectMapper();
38 |
39 | private String getInnerIdxHashKeyByDate(long millis) {
40 | //5min every key
41 | final int time = (int) (millis / config.getSplitMillis());
42 | return RedisKeyPrefix + time;
43 | }
44 |
45 |
46 | @Override
47 | public void close() {
48 | if (isCluster) {
49 | this.cluster.close();
50 | } else {
51 | pool.close();
52 | }
53 | }
54 |
55 | final Ha2RedisConfig config = new Ha2RedisConfig();
56 |
57 | @Override
58 | public void configure(Properties properties) {
59 | final JedisPoolConfig poolConfig = new JedisPoolConfig();
60 | config.setPoolConfig(poolConfig);
61 |
62 | try {
63 | BeanInfo beanInfo = Introspector.getBeanInfo(Ha2RedisConfig.class);
64 | for (PropertyDescriptor desc : beanInfo.getPropertyDescriptors()) {
65 | if (desc.getPropertyType().equals(JedisPoolConfig.class)) {
66 |
67 | BeanInfo beanInfoLvl2 = Introspector.getBeanInfo(JedisPoolConfig.class);
68 | for (PropertyDescriptor descLvl2 : beanInfoLvl2.getPropertyDescriptors()) {
69 | final String name = descLvl2.getName();
70 | final Object value = properties.get(Props_Prefix + "poolConfig." + name);
71 | if (null != value && !"".equals(value)) {
72 | Method method = descLvl2.getWriteMethod();
73 | method.invoke(poolConfig, value);
74 | }
75 | }
76 | } else {
77 | final String name = desc.getName();
78 | final Object value = properties.get(Props_Prefix + name);
79 | if (null != value && !"".equals(value)) {
80 | Method method = desc.getWriteMethod();
81 | method.invoke(config, properties.get(Props_Prefix + name));
82 | }
83 | }
84 | }
85 | } catch (Exception e) {
86 | log.error("error while configuring redis, please check it!", e);
87 | }
88 |
89 |
90 | isCluster = false;
91 |
92 | switch (config.getMode().toLowerCase()) {
93 | case Ha2RedisConfig.REDIS_MODE_SINGLE: {
94 | final String[] parts = config.getEndpoint().split(":");
95 | if (parts.length != 2) {
96 | throw new RuntimeException("okmq:redis:config error: ex: 127.0.0.1:6379");
97 | }
98 |
99 | final String host = parts[0];
100 | final int port = Integer.valueOf(parts[1]);
101 |
102 | pool = new JedisPool(config.getPoolConfig(), host, port, config.getConnectionTimeout());
103 | }
104 | break;
105 | case Ha2RedisConfig.REDIS_MODE_SENTINEL: {
106 | final String[] parts = config.getEndpoint().split(",");
107 |
108 | Set<String> hostAndPorts = Arrays.stream(parts)
109 | .map(item -> item.split(":"))
110 | .map(item -> new HostAndPort(item[0], Integer.valueOf(item[1])).toString())
111 | .collect(Collectors.toSet());
112 |
113 | pool = new JedisSentinelPool(config.getMasterName(), hostAndPorts, config.getPoolConfig());
114 | }
115 | break;
116 | case Ha2RedisConfig.REDIS_MODE_CLUSTER: {
117 | final String[] parts = config.getEndpoint().split(",");
118 | Set<HostAndPort> hostAndPorts = Arrays.stream(parts)
119 | .map(item -> item.split(":"))
120 | .map(item -> new HostAndPort(item[0], Integer.valueOf(item[1])))
121 | .collect(Collectors.toSet());
122 |
123 | cluster = new JedisCluster(hostAndPorts, config.getPoolConfig());
124 | isCluster = true;
125 | }
126 | break;
127 | default:
128 | throw new RuntimeException("okmq:redis:no redis mode supply. ex: single|sentinel|cluster");
129 | }
130 | }
131 |
132 | @Override
133 | public void preSend(Packet packet) throws HaException {
134 |
135 | final String idxKey = getInnerIdxHashKeyByDate(packet.getTimestamp());
136 | final String key = String.format("%s:%s", RedisKeyPrefix, packet.getIdentify());
137 |
138 |
139 | String value = "";
140 | try {
141 | value = mapper.writeValueAsString(packet);
142 | } catch (JsonProcessingException e) {
143 | throw new HaException("okmq:redis:json error:" + packet);
144 | }
145 |
146 | if (isCluster) {
147 | if (!cluster.hexists(IdxHashKey, idxKey)) {
148 | cluster.hset(IdxHashKey, idxKey, "");
149 | }
150 | cluster.hset(idxKey, key, value);
151 | } else {
152 | try (Jedis jedis = pool.getResource()) {
153 | if (!jedis.hexists(IdxHashKey, idxKey)) {
154 | jedis.hset(IdxHashKey, idxKey, "");
155 | }
156 | jedis.hset(idxKey, key, value);
157 | } catch (Exception ex) {
158 | throw new HaException("okmq:redis:" + ex.getMessage());
159 | }
160 | }
161 | }
162 |
163 | @Override
164 | public void postSend(Packet packet) throws HaException {
165 | final String key = String.format("%s:%s", RedisKeyPrefix, packet.getIdentify());
166 | final String idxKey = getInnerIdxHashKeyByDate(packet.getTimestamp());
167 |
168 | if (isCluster) {
169 | cluster.hdel(idxKey, key);
170 | } else {
171 | try (Jedis jedis = pool.getResource()) {
172 | jedis.hdel(idxKey, key);
173 | } catch (Exception ex) {
174 | throw new HaException("okmq:redis:" + ex.getMessage());
175 | }
176 | }
177 | }
178 |
179 | @Override
180 | public void doRecovery(AbstractProducer producer) throws HaException {
181 | if (isCluster) {
182 | this.doRecoveryClusterMode(producer);
183 | } else {
184 | this.doRecoveryJedisMode(producer);
185 | }
186 | }
187 |
188 | private boolean lock() {
189 | SetParams setParams = new SetParams().px(config.getLockPx()); //expire px millis
190 |
191 | String result = "";
192 | if (isCluster) {
193 | result = cluster.set(RedisRecoveryLockKey, LockIdentify, setParams);
194 | } else {
195 | try (Jedis jedis = pool.getResource()) {
196 | result = jedis.set(RedisRecoveryLockKey, LockIdentify, setParams);
197 | } catch (Exception ex) {
198 | log.error("okmq:redis:recovery: " + ex.getMessage());
199 | return false;
200 | }
201 | }
202 | if ("OK".equals(result)) {
203 | return true;
204 | } else {
205 | log.info("okmq:redis:recovery:can not get the lock! ");
206 | }
207 | return false;
208 | }
209 |
210 | private boolean unlock() {
211 | if (isCluster) {
212 | String id = cluster.get(RedisRecoveryLockKey);
213 | if (null == id || "".equals(id)) {
214 | return true;
215 | } else {
216 | if (id.equals(LockIdentify)) {
217 | cluster.del(RedisRecoveryLockKey);
218 | return true;
219 | }
220 | }
221 | } else {
222 | try (Jedis jedis = pool.getResource()) {
223 | String id = jedis.get(RedisRecoveryLockKey);
224 | if (null == id || "".equals(id)) {
225 | return true;
226 | } else {
227 | if (id.equals(LockIdentify)) {
228 | jedis.del(RedisRecoveryLockKey);
229 | return true;
230 | }
231 | }
232 | } catch (Exception ex) {
233 | log.error("okmq:redis:recovery: " + ex.getMessage());
234 | return true;
235 | }
236 |
237 | }
238 | return false;
239 | }
240 |
241 | private void doRecoveryClusterMode(AbstractProducer producer) throws HaException {
242 | final long begin = System.currentTimeMillis();
243 |
244 | if (!lock()) return;
245 |
246 | List<String> keys = new ArrayList<>(cluster.hgetAll(IdxHashKey).keySet());
247 | Collections.sort(keys);
248 |
249 | for (String key : keys) {
250 | String cur = ScanParams.SCAN_POINTER_START;
251 | for (; ; ) {
252 | ScanResult<Map.Entry<String, String>> result = cluster.hscan(key, cur);
253 |
254 | for (Map.Entry<String, String> kv : result.getResult()) {
255 | if (System.currentTimeMillis() - begin >= config.getLockPx()) {
256 | //bala bala
257 | if (unlock()) return;
258 | }
259 |
260 | final String k = kv.getKey();
261 | final String v = kv.getValue();
262 |
263 | try {
264 | final Packet p = mapper.readValue(v, Packet.class);
265 | if (System.currentTimeMillis() - p.getTimestamp() > 10 * 1000L) {
266 | producer.sendAsync(p);
267 | cluster.hdel(key, k);
268 | }
269 | } catch (JsonProcessingException e) {
270 | throw new HaException("okmq:redis:json error:" + v);
271 | }
268 | }
269 |
270 | cur = result.getCursor();
271 | if ("0".equals(cur)) {
272 | cluster.hdel(IdxHashKey, key);
273 | break;
274 | }
275 | }
276 | }
277 |
278 | if (System.currentTimeMillis() - begin < config.getLockPx()) {
279 | //bala bala
280 | unlock();
281 | }
282 | }
283 |
284 | private void doRecoveryJedisMode(AbstractProducer producer) throws HaException {
285 | try (Jedis jedis = pool.getResource()) {
286 | final long begin = System.currentTimeMillis();
287 |
288 | if (!lock()) return;
289 |
290 | List<String> keys = new ArrayList<>(jedis.hgetAll(IdxHashKey).keySet());
291 | Collections.sort(keys);
292 |
293 | for (String key : keys) {
294 | String cur = ScanParams.SCAN_POINTER_START;
295 | for (; ; ) {
296 | ScanResult<Map.Entry<String, String>> result = jedis.hscan(key, cur);
297 |
298 | for (Map.Entry<String, String> kv : result.getResult()) {
299 | if (System.currentTimeMillis() - begin >= config.getLockPx()) {
300 | //bala bala
301 | if (unlock()) return;
302 | }
303 |
304 | final String k = kv.getKey();
305 | final String v = kv.getValue();
306 |
307 | final Packet p = mapper.readValue(v, Packet.class);
308 | if (System.currentTimeMillis() - p.getTimestamp() > 10 * 1000L) {
309 | producer.sendAsync(p);
310 | jedis.hdel(key, k);
311 | }
312 | }
313 |
314 | cur = result.getCursor();
315 | if ("0".equals(cur)) {
316 | jedis.hdel(IdxHashKey, key);
317 | break;
318 | }
319 | }
320 | }
321 |
322 | if (System.currentTimeMillis() - begin < config.getLockPx()) {
323 | //bala bala
324 | unlock();
325 | }
326 | } catch (Exception ex) {
327 | throw new HaException("okmq:redis:" + ex.getMessage());
328 | }
329 | }
330 |
331 |
332 | }
333 |
334 | @Setter
335 | @Getter
336 | class Ha2RedisConfig {
337 | public static final String REDIS_MODE_SINGLE = "single";
338 | public static final String REDIS_MODE_SENTINEL = "sentinel";
339 | public static final String REDIS_MODE_CLUSTER = "cluster";
340 |
341 | private String mode = "";
342 | private String endpoint = "";
343 | private int connectionTimeout = 2000;
344 | private int soTimeout = 2000;
345 | private long lockPx = 1000L * 60 * 5;
346 | private long splitMillis = 1000L * 60 * 5;
347 |
348 |
349 | private JedisPoolConfig poolConfig;
350 |
351 |
352 | /**
353 | * Only used by sentinel mode
354 | */
355 | private String masterName = "";
356 |
357 | }
358 |
--------------------------------------------------------------------------------
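The recovery pass above drains packets that never received a `postSend` acknowledgement, resending anything older than ten seconds and deleting exhausted hash keys from the index. The sorted hash keys are time buckets; assuming each bucket key is derived by flooring the packet timestamp to a `splitMillis`-wide window (the derivation itself is outside this excerpt, so the helper below is hypothetical), the bucketing idea can be sketched as:

```java
public class BucketSketch {
    // Hypothetical helper: map a millisecond timestamp onto its splitMillis-wide bucket.
    // Packets in the same window would share one Redis hash, so recovery can scan
    // and discard whole windows at a time instead of touching individual entries.
    static long bucketOf(long timestampMillis, long splitMillis) {
        return timestampMillis - (timestampMillis % splitMillis);
    }

    public static void main(String[] args) {
        long split = 1000L * 60 * 5; // matches the default splitMillis above
        long t1 = 1_000_000_000L;
        long t2 = t1 + 1000;         // one second later: lands in the same five-minute bucket
        System.out.println(bucketOf(t1, split) == bucketOf(t2, split)); // prints true
    }
}
```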
/okmq-api/src/main/java/com/sayhiai/arch/okmq/api/producer/ha/Ha2SimpleLog.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.api.producer.ha;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 | import com.sayhiai.arch.okmq.api.producer.AbstractProducer;
5 | import com.sayhiai.arch.okmq.api.producer.HaException;
6 | import lombok.extern.slf4j.Slf4j;
7 |
8 | import java.util.Properties;
9 |
10 | @Slf4j
11 | public class Ha2SimpleLog implements HA {
12 | @Override
13 | public void close() {
14 | //pass
15 | }
16 |
17 | @Override
18 | public void configure(Properties properties) {
19 | }
20 |
21 | @Override
22 | public void preSend(Packet packet) throws HaException {
23 | log.info("ping:okmq:{}|{}|{}|{}", packet.getTimestamp(), packet.getIdentify(), packet.getTopic(), packet.getContent());
24 | }
25 |
26 | @Override
27 | public void postSend(Packet packet) throws HaException {
28 | log.info("pong:okmq:{}|{}", packet.getIdentify(), packet.getTopic());
29 | }
30 |
31 | @Override
32 | public void doRecovery(AbstractProducer producer) throws HaException {
33 |         log.info("This HA mode does not support recovery yet!");
34 | }
35 | }
36 |
--------------------------------------------------------------------------------
/okmq-kafka/pom.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project xmlns="http://maven.apache.org/POM/4.0.0"
3 |          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4 |          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
5 |     <parent>
6 |         <artifactId>okmq</artifactId>
7 |         <groupId>com.sayhiai.arch</groupId>
8 |         <version>1.0.3</version>
9 |     </parent>
10 |     <modelVersion>4.0.0</modelVersion>
11 |
12 |     <artifactId>okmq-kafka</artifactId>
13 |     <version>1.0.3</version>
14 |
15 |     <dependencies>
16 |         <dependency>
17 |             <groupId>org.apache.kafka</groupId>
18 |             <artifactId>kafka-clients</artifactId>
19 |             <version>2.2.0</version>
20 |         </dependency>
21 |         <dependency>
22 |             <groupId>com.sayhiai.arch</groupId>
23 |             <artifactId>okmq-api</artifactId>
24 |             <version>1.0.3</version>
25 |             <scope>compile</scope>
26 |         </dependency>
27 |
28 |         <dependency>
29 |             <groupId>ch.qos.logback</groupId>
30 |             <artifactId>logback-access</artifactId>
31 |             <version>1.2.3</version>
32 |         </dependency>
33 |         <dependency>
34 |             <groupId>ch.qos.logback</groupId>
35 |             <artifactId>logback-classic</artifactId>
36 |             <version>1.2.3</version>
37 |         </dependency>
38 |         <dependency>
39 |             <groupId>ch.qos.logback</groupId>
40 |             <artifactId>logback-core</artifactId>
41 |             <version>1.2.3</version>
42 |         </dependency>
43 |
44 |         <dependency>
45 |             <groupId>org.junit.jupiter</groupId>
46 |             <artifactId>junit-jupiter-api</artifactId>
47 |             <version>RELEASE</version>
48 |             <scope>test</scope>
49 |         </dependency>
50 |         <dependency>
51 |             <groupId>redis.clients</groupId>
52 |             <artifactId>jedis</artifactId>
53 |             <version>3.0.1</version>
54 |             <scope>test</scope>
55 |         </dependency>
56 |         <dependency>
57 |             <groupId>com.h2database</groupId>
58 |             <artifactId>h2</artifactId>
59 |             <version>1.4.199</version>
60 |             <scope>test</scope>
61 |         </dependency>
62 |         <dependency>
63 |             <groupId>com.alibaba</groupId>
64 |             <artifactId>druid</artifactId>
65 |             <version>1.1.17</version>
66 |             <scope>test</scope>
67 |         </dependency>
68 |     </dependencies>
69 | </project>
--------------------------------------------------------------------------------
/okmq-kafka/src/main/java/com/sayhiai/arch/okmq/kafka/KafkaProducer.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.kafka;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 | import com.sayhiai.arch.okmq.api.producer.AbstractProducer;
5 | import com.sayhiai.arch.okmq.api.producer.Callback;
6 | import com.sayhiai.arch.okmq.api.producer.HaException;
7 | import com.sayhiai.arch.okmq.api.producer.SendResult;
8 | import lombok.Getter;
9 | import lombok.extern.slf4j.Slf4j;
10 | import org.apache.kafka.clients.producer.ProducerRecord;
11 | import org.apache.kafka.clients.producer.RecordMetadata;
12 |
13 | import java.util.concurrent.Future;
14 | import java.util.concurrent.TimeUnit;
15 |
16 | @Slf4j
17 | public class KafkaProducer extends AbstractProducer {
18 |
19 | /**
20 | * @see KafkaProducer#getBackend()
21 | */
22 | @Getter
23 |     org.apache.kafka.clients.producer.KafkaProducer<String, String> producer;
24 |
25 | public KafkaProducer(boolean ha, String haMode) {
26 | super(ha, haMode);
27 | }
28 |
29 | public void doInit() {
30 |         producer = new org.apache.kafka.clients.producer.KafkaProducer<>(properties);
31 | }
32 |
33 | @Override
34 | public void doShutdown() {
35 | producer.flush();
36 | producer.close();
37 | }
38 |
39 |     /**
40 |      * The ha pre/post hooks wrap the real send much like an interceptor.
41 |      * @see org.apache.kafka.clients.producer.ProducerInterceptor
42 |      */
43 |     @Override
44 |     public SendResult sendSerial(Packet packet, long timeoutMillis) {
45 | ///////PRE/////////
46 | if (config.isHa()) {
47 | try {
48 | ha.preSend(packet);
49 | } catch (HaException ex) {
50 | log.error(ex.getMessage());
51 | }
52 | }
53 |
54 | SendResult result = new SendResult();
55 |         ProducerRecord<String, String> record = new ProducerRecord<>(packet.getTopic(), packet.getContent());
56 |         Future<RecordMetadata> future = producer.send(record);
57 |
58 | try {
59 | RecordMetadata metadata = future.get(timeoutMillis, TimeUnit.MILLISECONDS);
60 |
61 | log.info("okmq:kafka:{}|{}|cost:{}", metadata.offset(), metadata.partition(), System.currentTimeMillis() - packet.getTimestamp());
62 |
63 | result.setCode(SendResult.OK);
64 | } catch (Exception e) {
65 | log.error("okmq:sendSerial:" + e.getMessage() + packet);
66 |
67 | result.setCode(SendResult.ERROR);
68 | result.setMsg(e.getMessage());
69 | }
70 |
71 | ///////END/////////
72 | if (config.isHa()) {
73 | try {
74 | ha.postSend(packet);
75 | } catch (HaException ex) {
76 | log.error(ex.getMessage());
77 | }
78 | }
79 | return result;
80 | }
81 |
82 | @Override
83 | public SendResult sendAsync(Packet packet) {
84 | return
85 | this.sendAsync(packet, null);
86 | }
87 |
88 | @Override
89 | public SendResult sendAsync(final Packet packet, final Callback callback) {
90 | if (config.isHa()) {
91 | try {
92 | ha.preSend(packet);
93 | } catch (HaException ex) {
94 | log.error(ex.getMessage());
95 | }
96 | }
97 |
98 |
99 | SendResult result = new SendResult();
100 | result.setCode(SendResult.OK);
101 |
102 |         ProducerRecord<String, String> record = new ProducerRecord<>(packet.getTopic(), packet.getContent());
103 | producer.send(record, new org.apache.kafka.clients.producer.Callback() {
104 | @Override
105 | public void onCompletion(final RecordMetadata metadata, final Exception exception) {
106 | if (null == exception) {
107 | log.info("okmq:kafka:{}|{}|cost:{}", metadata.offset(), metadata.partition(), System.currentTimeMillis() - packet.getTimestamp());
108 | if (config.isHa()) {
109 | try {
110 | ha.postSend(packet);
111 | } catch (HaException ex) {
112 | log.error(ex.getMessage());
113 | }
114 | }
115 | } else {
116 |                     log.error("okmq:kafka:" + exception.getMessage(), exception);
117 | }
118 | if (null != callback) {
119 | callback.callback(packet, metadata, exception);
120 | }
121 | }
122 | });
123 | return result;
124 | }
125 |
126 | @Override
127 |     public <T> T getBackend() {
128 | return (T) this.producer;
129 | }
130 |
131 | }
132 |
--------------------------------------------------------------------------------
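`sendSerial` above bounds the broker wait with `Future.get(timeout, unit)`, so a stalled or dead broker can make a single send fail but cannot hang the calling business thread forever. The same pattern, isolated from Kafka with a plain executor (the class and method names here are illustrative, not part of okmq):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitSketch {
    // Wait at most timeoutMillis for a result, exactly as sendSerial does with
    // the Future returned by producer.send(record).
    static boolean completedWithin(Future<?> f, long timeoutMillis) {
        try {
            f.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false; // the "broker" is stuck; give up, caller's thread stays alive
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        // Stands in for a send to a dead broker: never completes.
        Future<Integer> stuck = pool.submit(() -> {
            Thread.sleep(Long.MAX_VALUE);
            return 0;
        });
        System.out.println(completedWithin(stuck, 200)); // prints false
        pool.shutdownNow();
    }
}
```

This is the design answer to failure mode two in the README: the send may fail fast, and the HA store then owns redelivery.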
/okmq-kafka/src/main/java/com/sayhiai/arch/okmq/kafka/ProducerBuilder.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.kafka;
2 |
3 | import com.sayhiai.arch.okmq.api.producer.ha.HA;
4 | import org.apache.kafka.clients.producer.ProducerConfig;
5 | import org.apache.kafka.common.serialization.StringSerializer;
6 |
7 | import java.util.Properties;
8 |
9 | public class ProducerBuilder {
10 | Properties props = new Properties();
11 | HA.Config haConfig = new HA.Config();
12 |
13 | public ProducerBuilder() {
14 | this.any(ProducerConfig.ACKS_CONFIG, "1");
15 | this.any(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
16 | this.any(ProducerConfig.RETRIES_CONFIG, "1");
17 | }
18 |
19 | public ProducerBuilder defaultSerializer() {
20 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
21 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
22 | return this;
23 | }
24 |
25 |
26 | public ProducerBuilder any(final Object key, final Object value) {
27 | props.put(key, value);
28 | return this;
29 | }
30 |
31 | public ProducerBuilder maxBlockMs(long ms) {
32 | this.any(ProducerConfig.MAX_BLOCK_MS_CONFIG, ms);
33 | return this;
34 | }
35 |
36 | public ProducerBuilder eanbleHa(String mode) {
37 | haConfig.setHa(true);
38 | haConfig.setHaType(mode);
39 | return this;
40 | }
41 |
42 | public ProducerBuilder servers(final String servers) {
43 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
44 | return this;
45 | }
46 |
47 | public ProducerBuilder clientID(final String name) {
48 | props.put(ProducerConfig.CLIENT_ID_CONFIG, name);
49 | return this;
50 | }
51 |
52 |
53 | public KafkaProducer build() {
54 | KafkaProducer producer = new KafkaProducer(this.haConfig.isHa(), this.haConfig.getHaType());
55 | producer.init(this.props);
56 | return producer;
57 | }
58 |
59 | }
60 |
--------------------------------------------------------------------------------
/okmq-kafka/src/test/java/com/sayhiai/arch/okmq/kafka/HaH2Test.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.kafka;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 | import org.junit.jupiter.api.Test;
5 |
6 | import java.util.concurrent.TimeUnit;
7 |
8 | public class HaH2Test {
9 |
10 | @Test
11 | public void testH2SendAsync() {
12 | KafkaProducer producer = new ProducerBuilder()
13 | .defaultSerializer()
14 | .eanbleHa("h2")
15 | .any("okmq.h2.url", "jdbc:h2:file:D:/testh2/db;AUTO_SERVER=TRUE")
16 | .any("okmq.h2.user", "sa")
17 | .any("okmq.h2.passwd", "sa")
18 | // .any("okmq.h2.dataLength", 2097152)
19 | //.servers("10.30.94.8:9092")
20 | .servers("10.30.94.8:9092")
21 | .clientID("okMQProducerTest")
22 | .build();
23 |
24 |
25 | for (int i = 0; i < 999999; i++) {
26 | Packet packet = new Packet();
27 | packet.setTopic("okmq-test-topic");
28 | packet.setContent("i will send you a msg" + System.nanoTime());
29 | producer.sendAsync(packet, null);
30 | try {
31 | TimeUnit.SECONDS.sleep(1);
32 | } catch (InterruptedException e) {
33 | e.printStackTrace();
34 | }
35 | // producer.sendSerial(packet, 61000);
36 | }
37 |
38 | System.out.println("We'll be blue");
39 |
40 | try {
41 | Thread.sleep(10000);
42 | } catch (InterruptedException e) {
43 | e.printStackTrace();
44 | }
45 |
46 |
47 | producer.shutdown();
48 | }
49 | }
50 |
--------------------------------------------------------------------------------
/okmq-kafka/src/test/java/com/sayhiai/arch/okmq/kafka/HaRedisTest.java:
--------------------------------------------------------------------------------
1 | package com.sayhiai.arch.okmq.kafka;
2 |
3 | import com.sayhiai.arch.okmq.api.Packet;
4 | import org.junit.jupiter.api.Test;
5 |
6 | public class HaRedisTest {
7 |
8 | @Test
9 | public void testRedisSendAsync() {
10 | KafkaProducer producer = new ProducerBuilder()
11 | .defaultSerializer()
12 | .eanbleHa("redis")
13 | .any("okmq.redis.mode", "single")
14 | .any("okmq.redis.endpoint", "127.0.0.1:6379")
15 | .any("okmq.redis.poolConfig.maxTotal", 100)
16 | //.servers("10.30.94.8:9092")
17 | .servers("localhost:9092")
18 | .clientID("okMQProducerTest")
19 | .build();
20 |
21 |
22 | for (int i = 0; i < 1; i++) {
23 | Packet packet = new Packet();
24 | packet.setTopic("okmq-test-topic");
25 | packet.setContent("i will send you a msg" + System.nanoTime());
26 | producer.sendAsync(packet, null);
27 | // producer.sendSerial(packet, 1000);
28 | }
29 |
30 | System.out.println("We'll be blue");
31 |
32 | try {
33 | Thread.sleep(10000);
34 | } catch (InterruptedException e) {
35 | e.printStackTrace();
36 | }
37 |
38 |
39 | producer.shutdown();
40 | }
41 |
42 | }
43 |
--------------------------------------------------------------------------------
/okmq-kafka/src/test/resources/logback.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <configuration>
3 |     <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
4 |         <encoder>
5 |             <pattern>[%-5level] %d{HH:mm:ss.SSS} [%thread] [%X{X-B3-TraceId}] %logger{36} - %msg%n</pattern>
6 |         </encoder>
7 |     </appender>
8 |
9 |     <root level="info">
10 |         <appender-ref ref="console"/>
11 |     </root>
12 | </configuration>
--------------------------------------------------------------------------------
/pom.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project xmlns="http://maven.apache.org/POM/4.0.0"
3 |          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
4 |          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
5 |     <modelVersion>4.0.0</modelVersion>
6 |
7 |     <groupId>com.sayhiai.arch</groupId>
8 |     <artifactId>okmq</artifactId>
9 |     <packaging>pom</packaging>
10 |     <version>1.0.3</version>
11 |     <modules>
12 |         <module>okmq-api</module>
13 |         <module>okmq-kafka</module>
14 |     </modules>
15 |
16 |     <dependencies>
17 |         <dependency>
18 |             <groupId>org.projectlombok</groupId>
19 |             <artifactId>lombok</artifactId>
20 |             <version>1.16.22</version>
21 |         </dependency>
22 |     </dependencies>
23 | </project>
--------------------------------------------------------------------------------