{
25 | /**
26 | * Returns the value to which the specified key is mapped,
27 | * or {@code null} if this map contains no mapping for the key.
28 | *
29 | * More formally, if this map contains a mapping from a key
30 | * {@code k} to a value {@code v} such that {@code key.equals(k)},
31 | * then this method returns {@code v}; otherwise it returns
32 | * {@code null}. (There can be at most one such mapping.)
33 | *
34 | * @throws NullPointerException if the specified key is null
35 | */
36 | V get(K key);
37 |
38 |
39 | /**
40 | * Maps the specified key to the specified value in this table.
41 | * Neither the key nor the value can be null.
42 | * 
43 | * The value can be retrieved by calling the {@code get} method
44 | * with a key that is equal to the original key.
45 | *
46 | * @param key key with which the specified value is to be associated
47 | * @param value value to be associated with the specified key
48 | * @return the previous value associated with {@code key}, or
49 | * {@code null} if there was no mapping for {@code key}
50 | * @throws NullPointerException if the specified key or value is null
51 | */
52 | V put(K key, V value);
53 |
54 |
55 |
56 | /**
57 | * Removes all of the mappings from this map (optional operation).
58 | * The map will be empty after this call returns.
59 | *
60 | * @throws UnsupportedOperationException if the clear operation
61 | * is not supported by this map
62 | */
63 | void clear();
64 |
65 |
66 | /**
67 | * Removes the key (and its corresponding value) from this map.
68 | * This method does nothing if the key is not in the map.
69 | *
70 | * @param key the key that needs to be removed
71 | * @return the previous value associated with {@code key}, or
72 | * {@code null} if there was no mapping for {@code key}
73 | * @throws NullPointerException if the specified key is null
74 | */
75 | V remove(K key);
76 |
77 |
78 | /**
79 | * Returns the maximum memory size of the cache.
80 | *
81 | * @return the maximum memory size
82 | */
83 | int getMaxMemorySize();
84 |
85 | /**
86 | * Returns the current memory size of the cache.
87 | *
88 | * @return current memory size.
89 | */
90 | int getMemorySize();
91 | }
92 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # LXU-Cache
2 |
3 | Chinese|[English](https://github.com/65487123/LXU-Cache/blob/master/README-EN.md)
4 | # Features
5 | A distributed cache middleware similar to Redis. Most of Redis's main features
6 | are implemented, including its five main data types (string, list, hash, set,
7 | zset), cache eviction, key expiration, persistence, clustering, and more.
8 | Batch-operation APIs are also provided to cut down the number of network round
9 | trips and so improve cache-write throughput.
10 | # Why this project
11 | The main goal is learning:
12 | 1. Get familiar with Redis features: to build a Redis-like distributed cache
13 | middleware, you first have to know what Redis can do.
14 | 2. Get familiar with Redis's design ideas and implementation: building this
15 | project borrows from Redis's implementation, and verifies in practice why Redis
16 | is designed the way it is (e.g. why the server's main thread is single-threaded).
17 | 3. Get familiar with the JDK class library and Netty, and gain hands-on network
18 | and concurrent programming experience: communication is built on Netty (custom protocol, long-connection heartbeats, cluster master re-election via event listeners, and so on).
19 |
20 | # Implementation
21 | ## Thread model
22 | Since Netty is used, the overall threading model is the main/sub Reactor model:
23 | the main Reactor accepts connections and the sub Reactors receive business
24 | requests. A sub Reactor drops each message into a blocking queue that is consumed
25 | by a single thread; after handling a message, the result is wrapped into a task
26 | handed back to the IO thread (Netty does this internally when a channelHandler's
27 | write method is called from a non-IO thread). Protobuf is used for serialization, with a custom codec that simplifies encoding/decoding and removes unnecessary work.
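The request path above (IO threads produce, a single worker consumes) can be sketched minimally as follows; a plain `ArrayBlockingQueue` stands in for the project's custom queue, and the class and field names are illustrative only:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the single-consumer model: producers (the "IO threads")
// enqueue requests, and exactly one worker drains the queue, so the cache
// state itself never needs locking.
public class SingleConsumerSketch {
    static final BlockingQueue<String> QUEUE = new ArrayBlockingQueue<>(1024);
    static final StringBuilder PROCESSED = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    PROCESSED.append(QUEUE.take()); // only this thread mutates state
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();
        QUEUE.put("a");
        QUEUE.put("b");
        QUEUE.put("c");
        worker.join(); // join() gives a happens-before edge before reading PROCESSED
        System.out.println(PROCESSED); // abc
    }
}
```

Because the queue is FIFO and there is exactly one consumer, requests are applied in arrival order without any per-key locking.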
28 | ## Queue implementation
29 | Two high-performance blocking queues were implemented from scratch.
30 | 1. Many-to-one: lock-free, backed by an array; each producer thread writes into
31 | its own block, and memory padding between the head and tail pointers eliminates
32 | false sharing. 2. One-to-one: also lock-free, with the same false-sharing fix.
33 | In single-machine mode every request goes through the many-to-one queue and is
34 | consumed by one dedicated thread. Since each array block is owned by exactly one
35 | producer thread, no CAS is needed on enqueue, which improves performance further. In cluster mode, master nodes use the many-to-one queue and slave nodes the one-to-one queue.
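The per-producer indexing described above can be illustrated with a stripped-down, single-threaded sketch (names here are made up; the real logic lives in `NoLockBlockingQueue.put`):

```java
// Sketch of the many-to-one indexing: each producer owns one array block and
// one head counter, so enqueueing needs no CAS; counters are spaced 16 ints
// (64 bytes) apart so they land on different cache lines (manual padding
// against false sharing).
public class SlotIndexSketch {
    static final int PRODUCERS = 2;
    static final int CAPACITY_PER_SLOT = 8;      // must be a power of two
    static final int MASK = CAPACITY_PER_SLOT - 1;
    static Object[][] array = new Object[PRODUCERS][CAPACITY_PER_SLOT];
    static int[] head = new int[16 * PRODUCERS]; // one padded counter per producer

    static void put(Object obj, int threadId) {
        int p = head[16 * threadId]++ & MASK;    // bitmask wrap instead of %
        array[threadId][p] = obj;                // block owned by one thread: no CAS
    }
}
```

The bitmask wrap is why the per-producer capacity must be a power of two.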
36 |
37 |
38 | ## Persistence
39 | There are two persistence files: a memory snapshot, snapshot.ser, and a write
40 | journal, journal.txt, both written by one dedicated thread. Whenever the
41 | request-consuming thread takes a write request off the queue, it appends the
42 | request to journal.txt; once the number of journaled writes reaches a
43 | configurable threshold, journal.txt is cleared and the snapshot file is
44 | rewritten (the rewrite is itself a task submitted to the persistence thread;
45 | the snapshot version number is bumped before the task is submitted, so a
46 | pending journal-append task that sees a version mismatch simply returns
47 | without writing). On restart the system reads both files: it first restores
48 | the cache from the snapshot, then replays the journal line by line; once
49 | recovery completes, both files are cleared and a fresh snapshot is generated.
50 | Because key expiration times live in a separate container, a second pair of
51 | persistence files is kept in another directory, i.e. four persistence files in total, all written by one dedicated thread.
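The journal/snapshot interplay can be sketched with in-memory stand-ins (the real system writes journal.txt and snapshot.ser on a dedicated thread and versions the snapshot; `THRESHOLD` here is an arbitrary illustrative value):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: every write is journaled first; once the journal reaches the
// threshold, the snapshot is rewritten from current state and the journal
// is cleared (in-memory stand-ins for journal.txt / snapshot.ser).
public class PersistenceSketch {
    static final int THRESHOLD = 3; // configurable in the real system
    static Map<String, String> cache = new HashMap<>();
    static List<String> journal = new ArrayList<>();
    static Map<String, String> snapshot = new HashMap<>();

    static void write(String key, String value) {
        cache.put(key, value);
        journal.add("put " + key + " " + value); // append to the journal first
        if (journal.size() >= THRESHOLD) {       // rewrite snapshot, reset journal
            snapshot = new HashMap<>(cache);
            journal.clear();
        }
    }
}
```

On recovery, state would be rebuilt as snapshot plus a replay of the remaining journal entries.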
52 |
53 | ## Cluster design
54 | client ▉
55 | /| \
56 | / | \
57 | / | \
58 | ↙ ↓ ↘ master
59 | master ▉ ▉ ▉ ➜➜➜ ▉slave
60 | ↙ ↓ master ↘
61 | ▉ ▉ ↓ ↘ ▉slave
62 | slave slave ▉ ▉
63 | slave slave
64 | The client maps different keys to different master nodes. After a master applies
65 | a write it asynchronously forwards the request to all of its slaves, then returns
66 | the result. If a master dies, the servers automatically elect a new one and clients reconnect to it, transparently to the user.
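The key-to-master routing can be sketched as below; the hash scheme and the node addresses are illustrative assumptions, not the actual CacheClusterClient implementation:

```java
// Sketch of client-side routing: hash each key onto one of the master
// nodes (the addresses are made-up examples).
public class RoutingSketch {
    static final String[] MASTERS = {"10.0.0.1:4445", "10.0.0.2:4445", "10.0.0.3:4445"};

    static String masterFor(String key) {
        // floorMod keeps the index non-negative even for negative hash codes
        return MASTERS[Math.floorMod(key.hashCode(), MASTERS.length)];
    }
}
```

Any deterministic hash works as long as every client maps the same key to the same master.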
67 |
68 | # Usage
69 | ## Single-machine mode
70 | 1. Set cluster-enabled to no in config.properties.
71 | 2. Edit config.properties to configure the port, the cache eviction policy, the
72 | maximum number of cache entries, the key-expiration policy, the number of
73 | journal entries that triggers a snapshot, and so on.
74 | 3. Build the project to produce the jar.
75 | 4. Copy it to the server and run nohup java -jar lxucache-server-1.x.x.jar & (add other JVM startup options as needed).
76 |
77 | ## Cluster mode
78 | 1. Set cluster-enabled to yes in config.properties.
79 | 2. On a master node, set isMaster to yes in the config file; on a slave node, set
80 | masterIpAndPort to the master's ip and port, e.g. masterIpAndPort=127.0.0.1:4445.
81 | 3. Start the master node first; if persistence files exist in its directory, the
82 | data is restored on startup.
83 | 4. Then start the slave nodes; each one automatically syncs the master's data.
84 | Multiple slaves can be started. Example: nohup java -jar lxucache-server-1.x.x.jar &
85 | (add other JVM startup options as needed).
86 | 5. Clients connect to the cluster through CacheClusterClient: pass all node addresses
87 | to the client constructor; it finds the masters automatically and load-balances across them. If a master fails, the client finds the new master transparently. Adding or removing master nodes at runtime is not supported yet (slave nodes can be added or removed).
88 |
89 | ## Release notes
90 | 1.0.1: the response used to be a protobuf-serialized Response object; 1.0.1 changes the return value to a single string and drops protobuf serialization for responses.
91 | 1.0.1-sr1: changed the queue used by slave nodes; false sharing is now solved by manual memory padding instead of the official annotation. Benefits: 1. lower memory use 2. no extra JVM startup flag required.
92 | [Client source code](https://github.com/65487123/LxuCache-Client)
93 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/datastructure/queue/NoLockBlockingQueue.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.datastructure.queue;
17 |
18 |
19 | import com.lzp.lxucache.common.util.HashUtil;
20 |
21 | /**
22 | * Description: high-performance blocking queue for many producer threads and a single consumer thread; lock-free design with false sharing eliminated.
23 | * Usage note: the number of producer threads must be a power of two, otherwise performance is worse than the JDK's built-in queues.
24 | *
25 | * @author: Lu ZePing
26 | * @date: 2019/7/20 12:19
27 | */
28 | public class NoLockBlockingQueue<E> extends BlockingQueueAdapter<E> {
29 | /**
30 | * 4 bytes after pointer compression.
31 | *
32 | * Visibility problems are possible here, but they can never produce a wrong result:
33 | *
34 | * A producer first obtains the slot it will write to, then checks whether that slot's
35 | * element has been consumed (i.e. is null) before storing into it. With a visibility
36 | * problem, a slot may already be null while the producer still sees a stale value; the
37 | * only cost is a few empty spins. (Strictly, this is programming against common
38 | * implementations rather than the Java spec: without volatile, the spec allows a write
39 | * to stay invisible forever; Thread.yield() and Thread.sleep(1) do not guarantee that
40 | * registers are refreshed when the CPU is reacquired, and not every CPU guarantees
41 | * cache coherence. On the vast majority of real hardware, though, permanent
42 | * invisibility does not occur.)
43 | *
44 | * Likewise for the consumer: at worst a slot already holds an element the consumer has
45 | * not yet seen, which again costs only a few empty polls. Since each array block is
46 | * operated on by a single thread, stores need no CAS either (two writer threads can never wait on the same slot).
47 | */
48 | private E[][] array;
49 | /**
50 | * 4 bytes; with the 12-byte object header that makes 20 bytes, so 44 more bytes are needed to fill a 64-byte cache line
51 | */
52 | private final int m;
53 |
54 | private long padding1, padding2, padding3, padding4, padding5;
55 | private int padding6;
56 | /**
57 | * Annotating the array with @sun.misc.Contended would not help: the array's elements could still share cache lines, so the padding is done by hand instead
58 | */
59 |
60 | private int[] head;
61 | private int[] tail;
62 |
63 |
64 | public NoLockBlockingQueue(int preferCapacity, int threadSum) {
65 | int capacity = HashUtil.tableSizeFor(preferCapacity);
66 | int capacityPerSlot = capacity / threadSum;
67 | array = (E[][]) new Object[threadSum][capacityPerSlot];
68 | // an int is 4 bytes, so a stride of 16 ints keeps the counters 64 bytes apart
69 | head = new int[16 * threadSum];
70 | tail = new int[16 * threadSum];
71 | m = capacityPerSlot - 1;
72 | }
73 |
74 | @Override
75 | public void put(E obj, int threadId) throws InterruptedException {
76 | int p = head[16 * threadId]++ & m;
77 | while (array[threadId][p] != null) {
78 | Thread.yield();
79 | }
80 | array[threadId][p] = obj;
81 | }
82 |
83 |
84 | @Override
85 | public E take() throws InterruptedException {
86 | E r;
87 | while (true) {
88 | for (int i = 0; i < tail.length; i += 16) {
89 | int p = tail[i] & this.m;
90 | if ((r = array[i / 16][p]) != null) {
91 | array[i / 16][p] = null;
92 | tail[i]++;
93 | return r;
94 | }
95 | }
96 | Thread.yield();
97 | }
98 | }
99 |
100 | }
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/datastructure/queue/OneToOneBlockingQueue.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.datastructure.queue;
17 |
18 |
19 | import com.lzp.lxucache.common.util.HashUtil;
20 |
21 | import java.util.concurrent.TimeUnit;
22 |
23 |
24 | /**
25 | * Description: high-performance blocking queue for one producer and one consumer thread; lock-free design with false sharing eliminated.
26 | *
27 | * @author: Lu ZePing
28 | * @date: 2019/7/20 12:19
29 | */
30 | public class OneToOneBlockingQueue<E> extends BlockingQueueAdapter<E> {
31 |
32 | /**
33 | * 4 bytes after pointer compression.
34 | *
35 | * Visibility problems are possible here, but they can never produce a wrong result:
36 | *
37 | * A producer first obtains the slot it will write to, then checks whether that slot's
38 | * element has been consumed (i.e. is null) before storing into it. With a visibility
39 | * problem, a slot may already be null while the producer still sees a stale value; the
40 | * only cost is a few empty spins. (Strictly, this is programming against common
41 | * implementations rather than the Java spec: without volatile, the spec allows a write
42 | * to stay invisible forever; Thread.yield() and Thread.sleep(1) do not guarantee that
43 | * registers are refreshed when the CPU is reacquired, and not every CPU guarantees
44 | * cache coherence. On the vast majority of real hardware, though, permanent
45 | * invisibility does not occur.)
46 | *
47 | * Likewise for the consumer: at worst a slot already holds an element the consumer has
48 | * not yet seen, which again costs only a few empty polls. Since each array block is
49 | * operated on by a single thread, stores need no CAS either (two writer threads can never wait on the same slot).
50 | */
51 | private final E[] ARRAY;
52 | /**
53 | * 4 bytes; with the 12-byte object header that makes 20 bytes, so 44 more bytes are needed to fill a 64-byte cache line
54 | */
55 | private final int m;
56 |
57 | private final int[] HEAD = new int[27];
58 |
59 | private final int[] TAIL = new int[16];
60 |
61 |
62 | public OneToOneBlockingQueue(int preferCapacity) {
63 | int capacity = HashUtil.tableSizeFor(preferCapacity);
64 | ARRAY = (E[]) new Object[capacity];
65 | m = capacity - 1;
66 | }
67 |
68 | @Override
69 | public void put(E obj) throws InterruptedException {
70 |
71 | int p = HEAD[11]++ & m;
72 | while (ARRAY[p] != null) {
73 | Thread.yield();
74 | }
75 | ARRAY[p] = obj;
76 | }
77 |
78 |
79 | @Override
80 | public E take() throws InterruptedException {
81 | E e;
82 | int p = TAIL[0]++ & m;
83 | while ((e = ARRAY[p]) == null) {
84 | Thread.yield();
85 | }
86 | ARRAY[p] = null;
87 | return e;
88 | }
89 |
90 | @Override
91 | public E poll(long timeout, TimeUnit unit) throws InterruptedException {
92 | long now = 0;
93 | long time = unit.toMillis(timeout);
94 | E e;
95 | int p = TAIL[0]++ & m;
96 | while ((e = ARRAY[p]) == null) {
97 | if (now == 0) {
98 | now = System.currentTimeMillis();
99 | } else if (System.currentTimeMillis() - now > time) {
100 | TAIL[0]--; // undo the reservation of this slot
101 | return null; // timed out: return null, as the BlockingQueue contract specifies
102 | } else {
103 | Thread.yield();
104 | }
105 | }
106 | ARRAY[p] = null;
107 | return e;
108 | }
109 | }
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/datastructure/queue/BlockingQueueAdapter.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.datastructure.queue;
17 |
18 | import java.util.Collection;
19 | import java.util.Iterator;
20 | import java.util.concurrent.BlockingQueue;
21 | import java.util.concurrent.TimeUnit;
22 |
23 | /**
24 | * Description: adapter for TakeNolockQueue, so that subclasses only need to implement the methods they actually use
25 | *
26 | * @author: Lu ZePing
27 | * @date: 2019/7/20 12:20
28 | */
29 | public abstract class BlockingQueueAdapter<E> implements BlockingQueue<E> {
30 |
31 | @Override
32 | public void put(E e) throws InterruptedException {
33 |
34 | }
35 |
36 | public void put(E e, int threadId) throws InterruptedException{
37 |
38 | }
39 |
40 | @Override
41 | public E poll(long timeout, TimeUnit unit) throws InterruptedException {
42 | return null;
43 | }
44 |
45 | @Override
46 | public boolean add(E e) {
47 | return false;
48 | }
49 |
50 | @Override
51 | public boolean offer(E e) {
52 | return false;
53 | }
54 |
55 | @Override
56 | public E remove() {
57 | return null;
58 | }
59 |
60 | @Override
61 | public E poll() {
62 | return null;
63 | }
64 |
65 | @Override
66 | public E element() {
67 | return null;
68 | }
69 |
70 | @Override
71 | public E peek() {
72 | return null;
73 | }
74 |
75 |
76 | @Override
77 | public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
78 | return false;
79 | }
80 |
81 |
82 | @Override
83 | public int remainingCapacity() {
84 | return 0;
85 | }
86 |
87 | @Override
88 | public boolean remove(Object o) {
89 | return false;
90 | }
91 |
92 | @Override
93 | public boolean containsAll(Collection<?> c) {
94 | return false;
95 | }
96 |
97 | @Override
98 | public boolean addAll(Collection<? extends E> c) {
99 | return false;
100 | }
101 |
102 | @Override
103 | public boolean removeAll(Collection<?> c) {
104 | return false;
105 | }
106 |
107 | @Override
108 | public boolean retainAll(Collection<?> c) {
109 | return false;
110 | }
111 |
112 | @Override
113 | public void clear() {
114 |
115 | }
116 |
117 | @Override
118 | public int size() {
119 | return 0;
120 | }
121 |
122 | @Override
123 | public boolean isEmpty() {
124 | return false;
125 | }
126 |
127 | @Override
128 | public boolean contains(Object o) {
129 | return false;
130 | }
131 |
132 | @Override
133 | public Iterator<E> iterator() {
134 | return null;
135 | }
136 |
137 | @Override
138 | public Object[] toArray() {
139 | return new Object[0];
140 | }
141 |
142 | @Override
143 | public <T> T[] toArray(T[] a) {
144 | return null;
145 | }
146 |
147 | @Override
148 | public int drainTo(Collection<? super E> c) {
149 | return 0;
150 | }
151 |
152 | @Override
153 | public int drainTo(Collection<? super E> c, int maxElements) {
154 | return 0;
155 | }
156 | }
157 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/util/SerialUtil.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.util;
17 |
18 | import org.slf4j.Logger;
19 | import org.slf4j.LoggerFactory;
20 | import org.springframework.util.StringUtils;
21 |
22 | import java.util.*;
23 |
24 | /**
25 | * Description: serialization utility tailored to the custom protocol, aiming for maximum efficiency and minimal bytes over the network.
26 | * @author: Lu ZePing
27 | * @date: 2019/7/10 9:46
28 | */
29 | public class SerialUtil {
30 | private static final Logger logger = LoggerFactory.getLogger(SerialUtil.class);
31 |
32 |
33 | public static byte[] toByteArray(String hexString) {
34 | if (StringUtils.isEmpty(hexString)) {
35 | return new byte[0];
36 | }
37 |
38 | hexString = hexString.toLowerCase();
39 | final byte[] byteArray = new byte[hexString.length() / 2];
40 | int k = 0;
41 | for (int i = 0; i < byteArray.length; i++) {
42 | // each hex character encodes 4 bits; one byte takes two hex characters, high nibble first
43 | byte high = (byte) (Character.digit(hexString.charAt(k), 16) & 0xff);
44 | byte low = (byte) (Character.digit(hexString.charAt(k + 1), 16) & 0xff);
45 | byteArray[i] = (byte) (high << 4 | low);
46 | k += 2;
47 | }
48 | return byteArray;
49 | }
50 |
51 | /**
52 | * Convert a byte array to its hex-string representation
53 | *
54 | * @param byteArray the byte array to convert
55 | * @return the hex-string representation
56 | **/
57 | public static String toHexString(byte[] byteArray) {
58 | if (byteArray == null || byteArray.length < 1) {
59 | return "";
60 | }
61 | final StringBuilder hexString = new StringBuilder();
62 | for (int i = 0; i < byteArray.length; i++) {
63 | if ((byteArray[i] & 0xff) < 0x10) {
64 | hexString.append("0");
65 | }
66 | hexString.append(Integer.toHexString(0xFF & byteArray[i]));
67 | }
68 | return hexString.toString().toLowerCase();
69 | }
70 |
71 |
72 |
73 | /**
74 | * Convert a string to a map
75 | *
76 | **/
77 | public static Map<String, String> stringToMap(String string) {
78 | String[] strings = string.split("È");
79 | Map<String, String> map = new HashMap<>(16);
80 | for (int i = 0; i < strings.length; i++) {
81 | String[] keyValue = strings[i].split("©");
82 | map.put(keyValue[0].intern(),keyValue[1].intern());
83 | }
84 | return map;
85 | }
86 |
87 |
88 | /**
89 | * Convert a collection to a string
90 | *
91 | **/
92 | public static String collectionToString(Collection<String> strings) {
93 | StringBuilder stringBuilder = new StringBuilder();
94 | for (String string : strings) {
95 | stringBuilder.append(string).append("È");
96 | }
97 | stringBuilder.deleteCharAt(stringBuilder.length() - 1);
98 | return stringBuilder.toString();
99 | }
100 | /**
101 | * Convert a string to a List
102 |
103 | **/
104 | public static List<String> stringToList(String listString) {
105 | List<String> list = new ArrayList<>();
106 | for (String string : listString.split("È")) {
107 | list.add(string.intern());
108 | }
109 | return list;
110 | }
111 | /**
112 | * Convert a string to a Set
113 | *
116 | **/
117 | public static Set<String> stringToSet(String listString) {
118 | Set<String> set = new HashSet<>();
119 | for (String string : listString.split("È")) {
120 | set.add(string);
121 | }
122 | return set;
123 | }
124 |
125 |
126 | }
127 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/singlemachine/service/ExpireService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 |
17 | package com.lzp.lxucache.singlemachine.service;
18 |
19 | import com.lzp.lxucache.common.constant.Const;
20 | import com.lzp.lxucache.common.protocol.CommandDTO;
21 | import com.lzp.lxucache.common.service.PersistenceService;
22 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
23 | import com.lzp.lxucache.common.util.FileUtil;
24 | import org.slf4j.Logger;
25 | import org.slf4j.LoggerFactory;
26 |
27 | import java.io.*;
28 | import java.time.Instant;
29 | import java.util.Iterator;
30 | import java.util.Map;
31 | import java.util.concurrent.*;
32 |
33 | /**
34 | * Description: service that handles key expiration, running on its own thread.
35 | *
36 | * There are three main ways to handle key expiration:
37 | * 1. On the next access of a key, check whether it has a timeout and has expired; if so, delete it and return null.
38 | * 2. On each set-timeout request, spawn a thread that sleeps for the timeout and then deletes the key.
39 | * 3. Keep keys with timeouts in a dedicated map; a separate thread periodically scans the map and removes expired keys from the cache.
40 | *
41 | * The first approach slows down cache reads, so it is ruled out.
42 | * The second may create too many threads, which also hurts performance; moreover, a key's timeout could then only be set once, never overwritten.
43 | * So the third approach is used here; spending a modest share of a single CPU is acceptable.
44 | *
45 | * @author: Lu ZePing
46 | * @date: 2019/7/14 11:07
47 | */
48 | public class ExpireService {
49 | /**
50 | * Maps each key that has an expiration time to the instant (epoch millis) it expires
51 | */
52 | public static Map<String, Long> keyTimeMap;
53 |
54 | /**
55 | * Thread that removes expired keys
56 | */
57 | private static final ExecutorService THREAD_POOL = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1), new ThreadFactoryImpl("expire handler"));
58 |
59 | private static final long POLLING_INTERVAL;
60 |
61 | private static final Logger logger = LoggerFactory.getLogger(ExpireService.class);
62 |
63 |
64 | static {
65 | File file = new File(Const.EXPIRE_SNAPSHOT_PATH);
66 | if (!file.exists()) {
67 | keyTimeMap = new ConcurrentHashMap<>(16);
68 | } else {
69 | ObjectInputStream objectInputStream = null;
70 | try {
71 | objectInputStream = new ObjectInputStream(new FileInputStream(file));
72 | keyTimeMap = (ConcurrentHashMap<String, Long>) objectInputStream.readObject();
73 | } catch (IOException | ClassNotFoundException e) {
74 | logger.error(e.getMessage(), e);
75 | throw new RuntimeException();
76 | } finally {
77 | FileUtil.closeResource(objectInputStream);
78 | }
79 | BufferedReader bufferedReader = null;
80 | try {
81 | bufferedReader = new BufferedReader(new FileReader(new File(Const.EXPIRE_JOURNAL_PATH)));
82 | String cmd;
83 | bufferedReader.readLine();
84 | while ((cmd = bufferedReader.readLine()) != null) {
85 | restoreData(cmd.split("ÈÈ"));
86 | }
87 | } catch (IOException e) {
88 | logger.error(e.getMessage(), e);
89 | throw new RuntimeException();
90 | } finally {
91 | FileUtil.closeResource(bufferedReader);
92 | }
93 | }
94 | // clear the persistence files and generate a fresh snapshot
95 | PersistenceService.generateExpireSnapshot(keyTimeMap);
96 | POLLING_INTERVAL = Long.parseLong(FileUtil.getProperty("pollingInterval"));
97 | THREAD_POOL.execute(ExpireService::detectExpiredAndRemove);
98 | }
99 |
100 |
101 | private static void detectExpiredAndRemove() {
102 | while (true) {
103 | // assume scanning the map is fast, so the current time is not recomputed per key; any key missed this round is removed on the next scan
104 | long now = Instant.now().toEpochMilli();
105 | Iterator<Map.Entry<String, Long>> iterator = keyTimeMap.entrySet().iterator();
106 | while (iterator.hasNext()) {
107 | Map.Entry<String, Long> entry = iterator.next();
108 | if (now > entry.getValue()) {
109 | ConsMesService.addMessage(new ConsMesService.Message(CommandDTO.Command.newBuilder().setKey(entry.getKey()).setType("remove").build(), null), 0);
110 | PersistenceService.writeExpireJournal(entry.getKey());
111 | iterator.remove();
112 | }
113 | }
114 | try {
115 | Thread.sleep(POLLING_INTERVAL);
116 | } catch (InterruptedException e) {
117 | logger.error(e.getMessage(), e);
118 | }
119 | }
120 | }
121 |
122 | private static void restoreData(String[] strings) {
123 | if (strings.length == 2) {
124 | keyTimeMap.put(strings[0], Long.parseLong(strings[1]));
125 | } else {
126 | keyTimeMap.remove(strings[0]);
127 | }
128 | }
129 |
130 | public static void setKeyAndTime(String key, Long expireTime) {
131 | keyTimeMap.put(key, expireTime);
132 | }
133 |
134 | }
135 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/cluster/service/SlaveExpireService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.cluster.service;
17 |
18 | import com.lzp.lxucache.common.constant.Const;
19 | import com.lzp.lxucache.common.protocol.CommandDTO;
20 | import com.lzp.lxucache.common.service.PersistenceService;
21 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
22 | import com.lzp.lxucache.common.util.FileUtil;
23 | import org.slf4j.Logger;
24 | import org.slf4j.LoggerFactory;
25 |
26 | import java.io.*;
27 | import java.time.Instant;
28 | import java.util.Iterator;
29 | import java.util.Map;
30 | import java.util.concurrent.*;
31 |
32 | /**
33 | * Description: service that handles key expiration, running on its own thread.
34 | *
35 | * There are three main ways to handle key expiration:
36 | * 1. On the next access of a key, check whether it has a timeout and has expired; if so, delete it and return null.
37 | * 2. On each set-timeout request, spawn a thread that sleeps for the timeout and then deletes the key.
38 | * 3. Keep keys with timeouts in a dedicated map; a separate thread periodically scans the map and removes expired keys from the cache.
39 | *
40 | * The first approach slows down cache reads, so it is ruled out.
41 | * The second may create too many threads, which also hurts performance; moreover, a key's timeout could then only be set once, never overwritten.
42 | * So the third approach is used here; spending a modest share of a single CPU is acceptable.
43 | *
44 | * @author: Lu ZePing
45 | * @date: 2019/7/14 11:07
46 | */
47 | public class SlaveExpireService {
50 | /**
51 | * Maps each key that has an expiration time to the instant (epoch millis) it expires
52 | */
53 | public static Map<String, Long> keyTimeMap;
54 |
55 | /**
56 | * Thread that removes expired keys
57 | */
58 | private static ExecutorService threadPool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1), new ThreadFactoryImpl("expire handler"));
59 |
60 | private static final long pollingInterval;
61 |
62 | private static Logger logger = LoggerFactory.getLogger(SlaveExpireService.class);
63 |
64 | static {
65 | File file = new File(Const.EXPIRE_SNAPSHOT_PATH);
66 | if (!file.exists()) {
67 | keyTimeMap = new ConcurrentHashMap<>(16);
68 | } else {
69 | ObjectInputStream objectInputStream = null;
70 | try {
71 | objectInputStream = new ObjectInputStream(new FileInputStream(file));
72 | keyTimeMap = (ConcurrentHashMap<String, Long>) objectInputStream.readObject();
73 | } catch (IOException | ClassNotFoundException e) {
74 | logger.error(e.getMessage(), e);
75 | throw new RuntimeException();
76 | } finally {
77 | FileUtil.closeResource(objectInputStream);
78 | }
79 | BufferedReader bufferedReader = null;
80 | try {
81 | bufferedReader = new BufferedReader(new InputStreamReader(new FileInputStream(Const.EXPIRE_JOURNAL_PATH), "UTF-8"));
82 | String cmd;
83 | bufferedReader.readLine();
84 | while ((cmd = bufferedReader.readLine()) != null) {
85 | restoreData(cmd.split("ÈÈ"));
86 | }
87 | } catch (IOException e) {
88 | logger.error(e.getMessage(), e);
89 | throw new RuntimeException();
90 | } finally {
91 | FileUtil.closeResource(bufferedReader);
92 | }
93 | }
94 | // clear the persistence files and generate a fresh snapshot
95 | PersistenceService.generateExpireSnapshot(keyTimeMap);
96 | pollingInterval = Long.parseLong(FileUtil.getProperty("pollingInterval"));
97 | threadPool.execute(SlaveExpireService::detectExpiredAndRemove);
98 | }
99 |
100 | private static void detectExpiredAndRemove() {
101 | while (true) {
102 | // assume scanning the map is fast, so the current time is not recomputed per key; any key missed this round is removed on the next scan
103 | long now = Instant.now().toEpochMilli();
104 | Iterator<Map.Entry<String, Long>> iterator = keyTimeMap.entrySet().iterator();
105 | while (iterator.hasNext()) {
106 | Map.Entry<String, Long> entry = iterator.next();
107 | if (now > entry.getValue()) {
108 | SlaveConsMesService.addMessage(new SlaveConsMesService.Message(CommandDTO.Command.newBuilder().setKey(entry.getKey()).setType("remove").build(), null));
109 | PersistenceService.writeExpireJournal(entry.getKey());
110 | iterator.remove();
111 | }
112 | }
113 | try {
114 | Thread.sleep(pollingInterval);
115 | } catch (InterruptedException e) {
116 | logger.error(e.getMessage(), e);
117 | }
118 | }
119 | }
120 |
121 | private static void restoreData(String[] strings) {
122 | if (strings.length == 2) {
123 | keyTimeMap.put(strings[0], Long.parseLong(strings[1]));
124 | } else {
125 | keyTimeMap.remove(strings[0]);
126 | }
127 | }
128 |
129 | public static void setKeyAndTime(String key, Long expireTime) {
130 | keyTimeMap.put(key, expireTime);
131 | }
132 |
133 | public static void close() {
134 | threadPool.shutdownNow();
135 | threadPool = null;
136 | logger = null;
137 | keyTimeMap = null;
138 | }
139 | }
140 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/cluster/service/MasterExpireService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.cluster.service;
17 |
18 | import com.lzp.lxucache.common.constant.Const;
19 | import com.lzp.lxucache.common.protocol.CommandDTO;
20 | import com.lzp.lxucache.common.service.PersistenceService;
21 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
22 | import com.lzp.lxucache.common.util.FileUtil;
23 | import org.slf4j.Logger;
24 | import org.slf4j.LoggerFactory;
25 |
26 | import java.io.*;
27 | import java.time.Instant;
28 | import java.util.Iterator;
29 | import java.util.Map;
30 | import java.util.concurrent.*;
31 |
32 | /**
33 | * Description: service that handles key expiration, running on its own thread.
34 | *
35 | * There are three main ways to handle key expiration:
36 | * 1. On the next access of a key, check whether it has a timeout and has expired; if so, delete it and return null.
37 | * 2. On each set-timeout request, spawn a thread that sleeps for the timeout and then deletes the key.
38 | * 3. Keep keys with timeouts in a dedicated map; a separate thread periodically scans the map and removes expired keys from the cache.
39 | *
40 | * The first approach slows down cache reads, so it is ruled out.
41 | * The second may create too many threads, which also hurts performance; moreover, a key's timeout could then only be set once, never overwritten.
42 | * So the third approach is used here; spending a modest share of a single CPU is acceptable.
43 | *
44 | * @author: Lu ZePing
45 | * @date: 2019/7/14 11:07
46 | */
47 | public class MasterExpireService {
48 | /**
49 | * Maps each key that has an expiration time to the instant (epoch millis) it expires
50 | */
51 | public static Map<String, Long> keyTimeMap;
52 |
53 | /**
54 | * Thread that removes expired keys
55 | */
56 | private static final ExecutorService threadPool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1), new ThreadFactoryImpl("expire handler"));
57 |
58 | private static final long pollingInterval;
59 |
60 | private static final Logger logger = LoggerFactory.getLogger(MasterExpireService.class);
61 |
62 | private static Boolean isMaster = !Const.YES.equals(FileUtil.getProperty("cluster-enabled")) ? null : Const.YES.equals(FileUtil.getProperty("isMaster"));
63 |
64 | static {
65 | File file = new File(Const.EXPIRE_SNAPSHOT_PATH);
66 | if (!file.exists()) {
67 | keyTimeMap = new ConcurrentHashMap<>(16);
68 | } else {
69 | ObjectInputStream objectInputStream = null;
70 | try {
71 | objectInputStream = new ObjectInputStream(new FileInputStream(file));
72 | keyTimeMap = (ConcurrentHashMap<String, Long>) objectInputStream.readObject();
73 | } catch (IOException | ClassNotFoundException e) {
74 | logger.error(e.getMessage(), e);
75 | throw new RuntimeException();
76 | } finally {
77 | FileUtil.closeResource(objectInputStream);
78 | }
79 | BufferedReader bufferedReader = null;
80 | try {
81 | bufferedReader = new BufferedReader(new InputStreamReader(new FileInputStream(Const.EXPIRE_JOURNAL_PATH), "UTF-8"));
82 | String cmd;
83 | bufferedReader.readLine();
84 | while ((cmd = bufferedReader.readLine()) != null) {
85 | restoreData(cmd.split("ÈÈ"));
86 | }
87 | } catch (IOException e) {
88 | logger.error(e.getMessage(), e);
89 | throw new RuntimeException();
90 | } finally {
91 | FileUtil.closeResource(bufferedReader);
92 | }
93 | }
94 |         //clear the persistence files and generate a fresh snapshot
95 | PersistenceService.generateExpireSnapshot(keyTimeMap);
96 | pollingInterval = Long.parseLong(FileUtil.getProperty("pollingInterval"));
97 | threadPool.execute(MasterExpireService::detecExpiredAndRem);
98 | }
99 |
100 | private static void detecExpiredAndRem(){
101 | while (true) {
102 |             //Assume that iterating the whole map is fast, so the current time is not recomputed for each key; any key that slips through will be caught and removed on the next polling round
103 |             Long now = Instant.now().toEpochMilli();
104 |             Iterator<Map.Entry<String, Long>> iterator = keyTimeMap.entrySet().iterator();
105 |             while (iterator.hasNext()) {
106 |                 Map.Entry<String, Long> entry = iterator.next();
107 | if (now > entry.getValue()) {
108 | MasterConsMesService.addMessage(new MasterConsMesService.Message(CommandDTO.Command.newBuilder().setKey(entry.getKey()).setType("remove").build(), null), 0);
109 | PersistenceService.writeExpireJournal(entry.getKey());
110 | iterator.remove();
111 | }
112 | }
113 | try {
114 | Thread.sleep(pollingInterval);
115 | } catch (InterruptedException e) {
116 | logger.error(e.getMessage(), e);
117 | }
118 | }
119 | }
120 |
121 | private static void restoreData(String[] strings) {
122 | if (strings.length == Const.TWO) {
123 | keyTimeMap.put(strings[0], Long.parseLong(strings[1]));
124 | } else {
125 | keyTimeMap.remove(strings[0]);
126 | }
127 | }
128 |
129 | public static void setKeyAndTime(String key, Long expireTime) {
130 | keyTimeMap.put(key, expireTime);
131 | }
132 |
133 |
134 | public static void upgrade() {
135 | isMaster = true;
136 | }
137 |
138 |
139 | }
140 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/datastructure/set/Zset.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.datastructure.set;
17 |
18 |
19 | import java.io.Serializable;
20 | import java.util.*;
21 |
22 | /**
23 |  * Description: Sorted set, ordered by score
24 |  *
25 |  * @author: Lu ZePing
26 |  * @date: 2019/7/20 12:19
27 |  */
28 | public class Zset implements Serializable {
29 |     private final Set<Node> TREESET = new TreeSet<>();
30 |     private final Map<String, Double> MAP = new HashMap<>();
31 |     private static final long serialVersionUID = 3L;
32 |
33 |     private static class Node implements Comparable<Node>, Serializable {
34 | private final double SCORE;
35 | private String e;
36 |
37 | public Node(double score, String e) {
38 | this.SCORE = score;
39 | this.e = e;
40 | }
41 |
42 | @Override
43 | public int compareTo(Node node) {
44 | if (this.SCORE == node.SCORE) {
45 |                 return this.e.compareTo(node.e); // consistent ordering (sgn-symmetric), so TreeSet.remove can find equal-score nodes
46 | } else {
47 | return this.SCORE - node.SCORE > 0 ? 1 : -1;
48 | }
49 | }
50 |
51 | @Override
52 | public boolean equals(Object node) {
53 | if (node instanceof Node) {
54 | Node node1 = (Node) node;
55 | return this.e.equals(node1.e);
56 | }
57 | return false;
58 | }
59 |
60 | @Override
61 | public int hashCode() {
62 | return this.e.hashCode();
63 | }
64 |
65 | @Override
66 | public String toString() {
67 | return "{member=" + e + ",score=" + SCORE + "}";
68 | }
69 | }
70 |
71 |     public void zadd(double score, String e) {
72 | Double preScore;
73 | if ((preScore = MAP.put(e, score)) != null) {
74 | TREESET.remove(new Node(preScore, e));
75 | }
76 | TREESET.add(new Node(score, e));
77 | }
78 |
79 |
80 | public String zrange(long start, long end) {
81 |         Iterator<Node> iterator = TREESET.iterator();
82 | StringBuilder setString = new StringBuilder();
83 | for (long i = 0; i < start; i++) {
84 | if (!iterator.hasNext()) {
85 | break;
86 | }
87 | iterator.next();
88 | }
89 | for (long i = start; i <= end; i++) {
90 | if (!iterator.hasNext()) {
91 | break;
92 | }
93 | setString.append(iterator.next().e).append("È");
94 | }
95 |         if (setString.length() > 0) setString.deleteCharAt(setString.length() - 1);
96 | return setString.toString();
97 | }
98 |
99 | public void zrem(String... member) {
100 | for (String men : member) {
101 |             Double score = MAP.remove(men); // null if the member is absent
102 |             if (score != null) TREESET.remove(new Node(score, men));
102 | }
103 | }
104 |
105 |     Double zincrby(double score, String member) throws NullPointerException {
106 |         double preScore = MAP.get(member);
107 |         TREESET.remove(new Node(preScore, member));
108 |         double afterScore = preScore + score;
109 |         MAP.put(member, afterScore);
110 |         TREESET.add(new Node(afterScore, member));
111 |         return afterScore;
112 |     }
113 |
114 | Long zrank(String member) {
115 | long i = -1;
116 |         Iterator<Node> iterator = TREESET.iterator();
117 | while (iterator.hasNext()) {
118 | i++;
119 | Node node = iterator.next();
120 | if (node.e.equals(member)) {
121 | return i;
122 | }
123 | }
124 | return i;
125 | }
126 |
127 | Long zrevrank(String key, String member) {
128 |         ListIterator<Node> listIterator = new ArrayList<>(TREESET).listIterator(TREESET.size());
129 | long rank = -1;
130 | while (listIterator.hasPrevious()) {
131 | rank++;
132 | if (listIterator.previous().e.equals(member)) {
133 | return rank;
134 | }
135 | }
136 | return rank;
137 | }
138 |
139 | String zrevrange(String key, long start, long end) {
140 |         ListIterator<Node> listIterator = new ArrayList<>(TREESET).listIterator(TREESET.size());
141 | StringBuilder setString = new StringBuilder();
142 | for (long i = 0; i < start; i++) {
143 | if (!listIterator.hasPrevious()) {
144 | break;
145 | }
146 | listIterator.previous();
147 | }
148 | for (long i = start; i <= end; i++) {
149 | if (!listIterator.hasPrevious()) {
150 | break;
151 | }
152 | setString.append(listIterator.previous().e).append("È");
153 | }
154 |         if (setString.length() > 0) setString.deleteCharAt(setString.length() - 1);
155 | return setString.toString();
156 | }
157 |
158 | public long size() {
159 | return TREESET.size();
160 | }
161 |
162 | Double zscore(String member) {
163 | return MAP.get(member);
164 | }
165 |
166 | Long zcount(double min, double max) {
167 | long count = 0;
168 | for (Node node : TREESET) {
169 | if (node.SCORE > min) {
170 | if (node.SCORE > max) {
171 | return count;
172 | }
173 | count++;
174 | }
175 | }
176 | return count;
177 | }
178 |
179 | String zrangeByScore(double min, double max) {
180 | StringBuilder setString = new StringBuilder();
181 | for (Node node : TREESET) {
182 | if (node.SCORE > min) {
183 | if (node.SCORE > max) {
184 | break;
185 | }
186 | setString.append(node.e).append("È");
187 | }
188 | }
189 |         if (setString.length() > 0) setString.deleteCharAt(setString.length() - 1);
190 | return setString.toString();
191 | }
192 |
193 |
194 | @Override
195 | public String toString() {
196 | return TREESET.toString();
197 | }
198 | }
199 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/Server.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 |
17 | package com.lzp.lxucache;
18 |
19 | import com.lzp.lxucache.cluster.client.ClientService;
20 | import com.lzp.lxucache.cluster.handler.MasterChannelInitializer;
21 | import com.lzp.lxucache.cluster.handler.SlaveChannelInitializer;
22 | import com.lzp.lxucache.cluster.service.MasterConsMesService;
23 | import com.lzp.lxucache.cluster.service.SlaveConsMesService;
24 | import com.lzp.lxucache.cluster.service.SlaveExpireService;
25 | import com.lzp.lxucache.common.constant.Const;
26 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
27 | import com.lzp.lxucache.common.util.FileUtil;
28 | import com.lzp.lxucache.singlemachine.handler.SocketChannelInitializer;
29 | import com.lzp.lxucache.singlemachine.service.ConsMesService;
30 | import io.netty.bootstrap.ServerBootstrap;
31 | import io.netty.channel.Channel;
32 | import io.netty.channel.ChannelOption;
33 | import io.netty.channel.EventLoopGroup;
34 | import io.netty.channel.nio.NioEventLoopGroup;
35 | import io.netty.channel.socket.nio.NioServerSocketChannel;
36 | import org.slf4j.Logger;
37 | import org.slf4j.LoggerFactory;
38 | import org.springframework.boot.autoconfigure.SpringBootApplication;
39 |
40 | import java.util.List;
41 | import java.util.concurrent.LinkedBlockingQueue;
42 | import java.util.concurrent.ThreadPoolExecutor;
43 | import java.util.concurrent.TimeUnit;
44 |
45 | /**
46 | * @Author:luzeping
47 | * @Date: 2019/1/6 20:23
48 | */
49 | @SpringBootApplication
50 | public class Server {
51 | private static final Logger logger = LoggerFactory.getLogger(Server.class);
52 | private static final int PORT;
53 | private static EventLoopGroup bossGroup = new NioEventLoopGroup(1);
54 | private static EventLoopGroup workerGroup;
55 | private static ServerBootstrap serverBootstrap = new ServerBootstrap();
56 |
57 | private static Channel serverChannel;
58 |
59 |     /**
60 |      * Startup order: first initialize com.lzp.singlemachine.service.ConsMesService; during initialization the persisted data is restored into memory, and once
61 |      * restoration finishes the files are cleared and a fresh snapshot file is generated. Then com.lzp.service.ExpireService is initialized, restoring the
62 |      * persistence file that records key expiration times; likewise, once restoration finishes the file is deleted and a new snapshot is generated. (Stand-alone mode, or the master node in cluster mode.)
63 |      */
64 | static {
65 | PORT = Integer.parseInt(FileUtil.getProperty("port"));
66 |         //stand-alone mode
67 | if (!Const.YES.equals(FileUtil.getProperty("cluster-enabled"))) {
68 | try {
69 | Class.forName("com.lzp.lxucache.singlemachine.service.ConsMesService");
70 | Class.forName("com.lzp.lxucache.singlemachine.service.ExpireService");
71 | workerGroup = new NioEventLoopGroup(ConsMesService.THREAD_NUM);
72 | serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 1000)
73 | .childHandler(new SocketChannelInitializer());
74 | } catch (ClassNotFoundException e) {
75 | logger.error(e.getMessage(), e);
76 | }
77 | } else if (Const.YES.equals(FileUtil.getProperty("isMaster"))) {
78 |             //master node in cluster mode
79 | try {
80 | Class.forName("com.lzp.lxucache.cluster.service.MasterConsMesService");
81 | Class.forName("com.lzp.lxucache.cluster.service.MasterExpireService");
82 | workerGroup = new NioEventLoopGroup(MasterConsMesService.THREAD_NUM);
83 | serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 1000)
84 | .childHandler(new MasterChannelInitializer());
85 | } catch (ClassNotFoundException e) {
86 | logger.error(e.getMessage(), e);
87 | }
88 | } else {
89 |             //slave node in cluster mode
90 | workerGroup = new NioEventLoopGroup(1);
91 | serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
92 | .childHandler(new SlaveChannelInitializer());
93 | }
94 | }
95 |
96 | public static void main(String[] args) {
97 | try {
98 | serverChannel = serverBootstrap.bind(PORT).sync().channel();
99 | if (Const.YES.equals(FileUtil.getProperty("cluster-enabled")) && (!"yes".equals(FileUtil.getProperty("isMaster")))) {
100 | String[] masterIpAndPort = FileUtil.getProperty("masterIpAndPort").split(":");
101 | ClientService.sentFullSyncReq(masterIpAndPort[0], Integer.parseInt(masterIpAndPort[1]));
102 | }
103 | logger.info("start server successfully");
104 | serverChannel.closeFuture().sync();
105 | } catch (InterruptedException e) {
106 | logger.error(e.getMessage(), e);
107 | } finally {
108 | bossGroup.shutdownGracefully();
109 | workerGroup.shutdownGracefully();
110 | }
111 | }
112 |
113 | private static void startMasterServer(List slaves) {
114 |         new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new ThreadFactoryImpl("startMasterServer")).execute(() -> {
115 | try {
116 |                 //release resources and stop accepting messages
117 |                 SlaveExpireService.close();
118 |                 SlaveConsMesService.close();
119 |                 //release the listening port; once it is closed, the main method is notified and calls shutdownGracefully.
120 |                 serverChannel.close().sync();
121 |                 //upgrade to master: initialize the message-queue service, restore the persistence files, and pass in the connections previously established with the other slave nodes
122 |                 MasterConsMesService.setSlaves(slaves);
123 | try {
124 | bossGroup = new NioEventLoopGroup(1);
125 | workerGroup = new NioEventLoopGroup(MasterConsMesService.THREAD_NUM);
126 | serverBootstrap = new ServerBootstrap();
127 | serverBootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class).option(ChannelOption.SO_BACKLOG, 1000)
128 | .childOption(ChannelOption.SO_KEEPALIVE, true).childHandler(new MasterChannelInitializer());
129 | serverChannel = serverBootstrap.bind(PORT).sync().channel();
130 | serverChannel.closeFuture().sync();
131 | } catch (Exception e) {
132 | logger.error(e.getMessage(), e);
133 | } finally {
134 | bossGroup.shutdownGracefully();
135 | workerGroup.shutdownGracefully();
136 | }
137 | } catch (InterruptedException e) {
138 | logger.error(e.getMessage(), e);
139 | }
140 | });
141 | }
142 |
143 |     /**
144 |      * Called at most once: when the master goes down and this node is elected as the new master
145 |      *
146 |      * @param slaves the other slave nodes
147 |      */
148 | public static void upgradeTomasterNode(List slaves) {
149 | startMasterServer(slaves);
150 | }
151 |
152 | }
153 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/service/PersistenceService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.common.service;
17 |
18 | import com.lzp.lxucache.common.constant.Const;
19 | import com.lzp.lxucache.common.protocol.CommandDTO;
20 | import com.lzp.lxucache.common.util.FileUtil;
21 | import com.lzp.lxucache.singlemachine.service.ExpireService;
22 | import org.slf4j.Logger;
23 | import org.slf4j.LoggerFactory;
24 |
25 | import java.io.*;
26 | import java.util.concurrent.*;
27 | import java.util.concurrent.atomic.AtomicInteger;
28 |
29 | /**
30 |  * Description: Persistence service. Generating a snapshot blocks the calling thread; journal writes are performed asynchronously on a separate thread
31 |  *
32 |  * @author: Lu ZePing
33 |  * @date: 2019/7/24 14:58
34 |  */
35 | public class PersistenceService {
36 | private static ObjectOutputStream objectOutputStream;
37 |
38 | private static BufferedWriter bufferedWriter;
39 |
40 | private static volatile int snapshotVersion = 0;
41 |
42 |     private static final ExecutorService JOURNAL_THREAD_POOL = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new ThreadFactoryImpl("journal handler"));
43 |
44 | private static final Logger logger = LoggerFactory.getLogger(PersistenceService.class);
45 |
46 | private static ObjectOutputStream expireObjectOutputStream;
47 |
48 | private static BufferedWriter expireBufferedWriter;
49 |
50 | private static volatile int expireSnapshotVersion = 0;
51 |
52 | private static AtomicInteger journalNum = new AtomicInteger();
53 |
54 |     /**Hard-coded to 16384 for now*/
55 |     private final static int EX_SNAPSHOT_BATCH_COUNT_D1 = 16383;
56 |
57 | static {
58 |         //create the persistence directories
59 | FileUtil.generateFileIfNotExist(new File("./persistence/corecache"));
60 | FileUtil.generateFileIfNotExist(new File("./persistence/expire"));
61 | }
62 |
63 |
64 | public static void generateSnapshot(Object object) {
65 | snapshotVersion++;
66 | JOURNAL_THREAD_POOL.execute(() -> {
67 |             //If the process dies here, this version's snapshot is never generated, only the previous snapshot survives, and journal entries tagged with the bumped version number are never written to the file, so those operations are lost
68 |             clearJournal();
69 |             //If the process dies here, this version's snapshot is missing and the journal has already been cleared, leaving only the previous snapshot (if any); dying at this point loses the most data
70 | generateSnapshot0(object);
71 | });
72 | }
73 |
74 |     /**
75 |      * Rewrites the snapshot file. The current approach simply truncates the snapshot file and writes the new object into it, which has one problem: if the
76 |      * process dies between the truncation and the write, the previous snapshot is lost. The fix is straightforward: instead of rewriting the snapshot file in
77 |      * place, generate a new snapshot file and, once it is complete, copy it over the old one, with a companion file recording the state (0 = generating the new
78 |      * snapshot, 1 = new snapshot complete, 2 = snapshot file successfully replaced); both temporary files are deleted once the replacement finishes. On the next
79 |      * restart, first check whether the state file exists: if not, restore directly; otherwise use the state to decide which snapshot to read.
80 |      * Since crashing during snapshot generation is a low-probability event (and nobody uses this software anyway), the simple approach stays for now and may be improved later.
81 |      *
82 |      * @param object the object to snapshot
83 |      */
83 | private static void generateSnapshot0(Object object) {
84 | try {
85 |             //even after reset(), the stream still appends; it cannot overwrite the file
86 | if (objectOutputStream!=null) {
87 | objectOutputStream.close();
88 | }
89 | objectOutputStream = new ObjectOutputStream(new FileOutputStream(new File(Const.SNAPSHOT_PATH)));
90 | objectOutputStream.writeObject(object);
91 | objectOutputStream.flush();
92 | } catch (IOException e) {
93 | logger.error(e.getMessage(),e);
94 | }
95 | }
96 |
97 | private static void generateExpireSnapshot0(Object object) {
98 | try {
99 | if (expireObjectOutputStream!=null) {
100 | expireObjectOutputStream.close();
101 | }
102 | expireObjectOutputStream = new ObjectOutputStream(new FileOutputStream(new File(Const.EXPIRE_SNAPSHOT_PATH)));
103 | expireObjectOutputStream.writeObject(object);
104 | expireObjectOutputStream.flush();
105 | } catch (IOException e) {
106 | logger.error(e.getMessage(),e);
107 | }
108 | }
109 |
110 |     /**
111 |      * Clears the persistence journal file
112 |      */
115 | private static void clearJournal() {
116 | try {
117 | if (bufferedWriter!=null) {
118 | bufferedWriter.close();
119 | }
120 | bufferedWriter = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(Const.JOURNAL_PATH),"UTF-8"));
121 | } catch (IOException e) {
122 | logger.error(e.getMessage(), e);
123 | }
124 | }
125 |
126 | private static void clearExpireJournal() {
127 | try {
128 | if (expireBufferedWriter != null) {
129 | expireBufferedWriter.close();
130 | }
131 | expireBufferedWriter = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(Const.EXPIRE_JOURNAL_PATH),"UTF-8"));
132 | } catch (IOException e) {
133 | logger.error(e.getMessage(), e);
134 | }
135 | }
136 |
137 | public static void generateExpireSnapshot(Object object) {
138 | expireSnapshotVersion++;
139 | JOURNAL_THREAD_POOL.execute(() -> {
140 |             //If the process dies here, this version's snapshot is never generated, only the previous snapshot survives, and journal entries tagged with the bumped version number are never written to the file, so those operations are lost
141 |             clearExpireJournal();
142 |             //If the process dies here, this version's snapshot is missing and the journal has already been cleared, leaving only the previous snapshot (if any); dying at this point loses the most data
143 | generateExpireSnapshot0(object);
144 | });
145 | }
146 |
147 | public static void writeJournal(CommandDTO.Command command) {
148 | final int snapshotVersion = PersistenceService.snapshotVersion;
149 | JOURNAL_THREAD_POOL.execute(() -> {
150 | try {
151 | if (snapshotVersion != PersistenceService.snapshotVersion) {
152 | return;
153 | }
154 | bufferedWriter.newLine();
155 | bufferedWriter.write(commandToString(command));
156 | bufferedWriter.flush();
157 | } catch (IOException e) {
158 | logger.error(e.getMessage(), e);
159 | }
160 | });
161 | }
162 |
163 | private static String commandToString(CommandDTO.Command command){
164 | StringBuilder stringBuilder = new StringBuilder(256);
165 | return stringBuilder.append(command.getType()).append("ÈÈ").append(command.getKey()).append("ÈÈ")
166 | .append(command.getValue()).toString();
167 | }
168 |
169 |
170 | public static void writeExpireJournal(String command) {
171 | if (((journalNum.incrementAndGet()) & EX_SNAPSHOT_BATCH_COUNT_D1) == 0) {
172 | PersistenceService.generateExpireSnapshot(ExpireService.keyTimeMap);
173 | }
174 | final int expireSnapshotVersion = PersistenceService.expireSnapshotVersion;
175 | JOURNAL_THREAD_POOL.execute(() -> {
176 | try {
177 | if (expireSnapshotVersion != PersistenceService.expireSnapshotVersion) {
178 | return;
179 | }
180 | expireBufferedWriter.newLine();
181 | expireBufferedWriter.write(command);
182 | expireBufferedWriter.flush();
183 | } catch (IOException e) {
184 | logger.error(e.getMessage(), e);
185 | }
186 | });
187 | }
188 |
189 |     public static Future<?> submitTask(Runnable runnable) {
190 | return JOURNAL_THREAD_POOL.submit(runnable);
191 | }
192 | }
193 |
--------------------------------------------------------------------------------
/README-EN.md:
--------------------------------------------------------------------------------
1 | # LXU-Cache
2 | [中文](https://github.com/65487123/LXU-Cache/blob/master/README.md)|English
3 | # Features
4 | A distributed cache middleware similar to Redis. It implements most of
5 | Redis's main features, including its five core data structures (string,
6 | list, hash, set, zset), cache eviction, key expiration, persistence,
7 | and clustering.
8 | It also provides various batch-operation APIs that reduce the number of
9 | network IOs and thereby improve write-cache efficiency.
11 |
12 | # The purpose of writing this project
13 | The main purpose is to learn:
14 | 1. Become familiar with the various features of Redis:
15 |    to write a distributed cache middleware similar to Redis, one has to
16 |    know what Redis can do.
17 | 2. Become familiar with Redis's design ideas and implementation principles:
18 |    in the course of completing this project, I would inevitably learn from
19 |    how Redis is implemented, and verify through practice why Redis is
20 |    designed the way it is (for example, why the server's main processing
21 |    thread is single-threaded).
22 | 3. Become familiar with the JDK library and the netty framework, and gain practical experience in network and concurrent programming:
23 |    network communication is implemented with netty, so this project builds
24 |    hands-on netty experience (custom protocol, long-connection heartbeat
25 |    mechanism, event-driven handling of cluster master re-election, etc.).
26 |
27 | # Main implementation principles
28 | ## Thread model
29 | Because netty is used, the thread model is the master-slave Reactor model: the
30 | master Reactor accepts connection events, and the slave Reactors receive business
31 | requests. When a slave Reactor receives a business request, the message is put into
32 | a blocking queue; a single thread consumes the messages in that queue and, after
33 | handling each one, wraps the result into a task and hands it to the IO thread to
34 | write out (calling a channelHandler's write from a non-IO thread makes netty's
35 | internals do this). The serialization protocol is Protobuf, and a codec I implemented
36 | myself simplifies serialization and deserialization and avoids unnecessary work.
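The consume path described above can be sketched with a plain `BlockingQueue` (class and variable names here are illustrative, not the project's actual API; the real queue is the custom lock-free one described in the next section):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the model: IO threads enqueue requests, a single
// worker thread drains the queue and hands each result back for writing.
public class ConsumeLoopSketch {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    String req = queue.take();            // blocks until a request arrives
                    System.out.println("handled " + req); // real code wraps the reply into a task for the IO thread
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        // in the real server these puts happen on netty's worker (slave Reactor) threads
        queue.put("get k1");
        queue.put("put k2 v2");
        queue.put("remove k1");
        consumer.join();
    }
}
```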
37 | ## Queue implementation
38 | I implemented two high-performance blocking queues:
39 | 1. Many-to-one: a lock-free design backed by an array. Messages from different
40 | producer threads are placed into different blocks of the array, and memory padding
41 | around the head and tail pointers solves the false-sharing problem.
42 | 2. One-to-one: likewise lock-free, with the same false-sharing fix.
43 | In stand-alone mode, all requests are put into the many-to-one blocking queue and
44 | consumed by a single independent thread.
45 | Since each array block is occupied by only one producer thread, no CAS is needed
46 | when placing an element, which further improves performance.
47 | In cluster mode, master nodes use the many-to-one queue and slave nodes use the one-to-one queue.
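The manual memory padding mentioned above can be sketched as follows (a Disruptor-style layout; the class and field names are illustrative, and the JVM spec does not strictly guarantee field ordering, so this is a best-effort technique):

```java
// Sketch of cache-line padding around queue indexes to avoid false sharing.
public class PaddedIndexSketch {
    // 7 longs on each side aim to keep the hot 'value' field on its own
    // 64-byte cache line, so a producer advancing tail does not invalidate
    // the cache line holding the consumer's head index (and vice versa).
    static class PaddedLong {
        long p1, p2, p3, p4, p5, p6, p7; // left padding
        volatile long value;
        long q1, q2, q3, q4, q5, q6, q7; // right padding
    }

    static final PaddedLong head = new PaddedLong();
    static final PaddedLong tail = new PaddedLong();

    public static void main(String[] args) {
        tail.value = 42; // producer advances the tail index
        head.value = 10; // consumer advances the head index independently
        System.out.println(tail.value - head.value); // number of pending elements
    }
}
```

Without the padding, `head` and `tail` would likely share one cache line, and every advance by one thread would force a cache-coherency miss on the other.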
48 |
49 |
50 | ## Persistence implementation
51 | There are two persistence files: the memory snapshot file snapshot.ser and the
52 | write-log file journal.txt. Both are written by a separate, single thread.
53 | Whenever the thread that consumes the request queue takes a write request out of
54 | the queue, it appends the request to journal.txt. Once the number of write
55 | requests reaches a configurable threshold, journal.txt is cleared and the snapshot
56 | file is overwritten (this, too, is a task executed on the dedicated persistence
57 | thread; the snapshot version number is bumped before the task is enqueued, and
58 | each journal task compares the version number captured at enqueue time with the
59 | current one, returning without writing if they differ). On restart, the system
60 | reads both files: it first restores the cache from the snapshot, then replays the
61 | journal line by line. After restoration completes, both files are cleared and a
62 | new snapshot file is generated.
63 | Since key expiration times are recorded in a separate container, the system keeps a second pair of persistence files in a separate directory; in other words, all four persistence files are written by one independent, single thread.
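The version-number check described above can be distilled into a small sketch (names are illustrative; the real logic lives in PersistenceService):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the snapshot-version check: a journal task captures the version
// when it is created; if a snapshot was taken before the task runs, the
// stale journal entry is silently dropped.
public class VersionCheckSketch {
    static volatile int snapshotVersion = 0;
    static final List<String> journal = new ArrayList<>();

    static Runnable journalTask(String entry) {
        final int versionAtEnqueue = snapshotVersion; // captured at enqueue time
        return () -> {
            if (versionAtEnqueue != snapshotVersion) {
                return; // a newer snapshot already covers this write; skip the journal line
            }
            journal.add(entry);
        };
    }

    public static void main(String[] args) {
        Runnable stale = journalTask("put k1 v1");
        snapshotVersion++;              // a snapshot is generated before the task runs
        stale.run();                    // dropped: version mismatch
        journalTask("put k2 v2").run(); // written: versions match
        System.out.println(journal);    // [put k2 v2]
    }
}
```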
64 |
65 |
66 | ## Cluster principle
67 | client ▉
68 | /| \
69 | / | \
70 | / | \
71 | ↙ ↓ ↘ master
72 | master ▉ ▉ ▉ ➜➜➜ ▉slave
73 | ↙ ↓ master ↘
74 | ▉ ▉ ↓ ↘ ▉slave
75 | slave slave ▉ ▉
76 | slave slave
77 | The client receives requests and maps different keys to different master nodes.
78 | After a write, the master node asynchronously forwards the request to all of its
79 | slave nodes and then returns the result. When a master goes down, the servers
80 | automatically elect a new master and the client establishes a connection with the
81 | new master, so users notice nothing.
82 |
83 | # How to use
84 | ## Stand-alone mode
85 | 1. Set cluster-enabled to no in config.properties.
86 | 2. In config.properties, configure the port the cache listens on, the cache
87 | eviction strategy, the maximum number of cache entries, the key expiration
88 | strategy, and the number of journal entries between snapshots.
89 | 3. Compile the project and build the jar.
90 | 4. Copy it to the server and run: nohup java -jar lxucache-server-1.x.x.jar &
91 | You can add other JVM startup parameters as needed.
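A minimal config.properties for stand-alone mode might look like the fragment below; the key names appear in the server source (`FileUtil.getProperty` calls), but the values are illustrative examples:

```properties
# stand-alone mode
cluster-enabled=no
# port the cache server listens on
port=4445
# interval (in ms) between expired-key polling rounds
pollingInterval=1000
```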
92 |
93 | ## Cluster mode
94 | 1. Set cluster-enabled to yes in config.properties.
95 | 2. On the master node, set the isMaster property to yes; on a slave node, set
96 | masterIpAndPort to the master's ip and port, e.g. masterIpAndPort=127.0.0.1:4445
97 | 3. Start the master node first. If there are persistence files in the master node's directory, its data is restored after startup.
98 | 4. Start the slave nodes; after startup each one automatically syncs the master node's data.
99 | You can start multiple slave nodes.
100 | Example: nohup java -jar lxucache-server-1.x.x.jar &
101 | You can add other JVM startup parameters as needed.
102 | 5. The client connects to the cluster through CacheClusterClient: pass all nodes into the client's constructor,
103 | and the client automatically finds the master nodes and performs load balancing. When a master goes down, the
104 | new master is found automatically, with no user-visible impact. Adding a new master node or removing an existing
105 | master node at runtime is not yet supported (slave nodes can be added or removed).
106 |
107 | ## Major changes in new versions
108 |
109 | 1.0.1: The return value used to be a Response object serialized with protobuf; version 1.0.1 changed the return value
110 | to a plain string, dropping protobuf serialization.
111 | 1.0.1-sr1: Changed the queue used by slave nodes; the official false-sharing fix was replaced with manual memory padding.
112 | Benefits: 1. lower memory footprint 2. no extra startup parameters needed
113 | [Source code of client](https://github.com/65487123/LxuCache-Client)
114 |
115 |
116 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/cluster/service/SlaveConsMesService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 | package com.lzp.lxucache.cluster.service;
17 |
18 | import com.lzp.lxucache.cluster.client.ClientService;
19 | import com.lzp.lxucache.common.cache.AutoDeleteMap;
20 | import com.lzp.lxucache.common.cache.Cache;
21 | import com.lzp.lxucache.common.constant.Const;
22 | import com.lzp.lxucache.common.constant.ReqName;
23 | import com.lzp.lxucache.common.datastructure.queue.OneToOneBlockingQueue;
24 | import com.lzp.lxucache.common.datastructure.set.Zset;
25 | import com.lzp.lxucache.common.protocol.CommandDTO;
26 | import com.lzp.lxucache.common.util.HashUtil;
27 | import com.lzp.lxucache.common.service.PersistenceService;
28 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
29 | import com.lzp.lxucache.common.util.FileUtil;
30 | import com.lzp.lxucache.common.util.SerialUtil;
31 | import io.netty.channel.Channel;
32 | import io.netty.channel.ChannelHandlerContext;
33 | import org.slf4j.Logger;
34 | import org.slf4j.LoggerFactory;
35 |
36 | import java.io.*;
37 | import java.net.InetSocketAddress;
38 | import java.nio.charset.StandardCharsets;
39 | import java.time.Instant;
40 | import java.util.ArrayList;
41 | import java.util.List;
42 | import java.util.Map;
43 | import java.util.Set;
44 | import java.util.concurrent.ExecutorService;
45 | import java.util.concurrent.LinkedBlockingQueue;
46 | import java.util.concurrent.ThreadPoolExecutor;
47 | import java.util.concurrent.TimeUnit;
48 |
49 | /**
50 | * Description: a cache service with a single message queue, consumed by a single message-consuming thread.
51 | * Used by slave nodes in a cluster environment.
52 | *
53 | * @author: Lu ZePing
54 | * @date: 2019/7/1 18:13
55 | */
56 | public class SlaveConsMesService {
57 |     private static OneToOneBlockingQueue<Message> queue;
58 |
59 | private static Cache cache;
60 |
61 | private static Logger logger = LoggerFactory.getLogger(SlaveConsMesService.class);
62 |
63 | private final static int SNAPSHOT_BATCH_COUNT_D1;
64 |
65 | private static int journalNum = 0;
66 |
67 | public static final int THREAD_NUM;
68 |
69 | private static ExecutorService threadPool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new ThreadFactoryImpl("operCache"));
70 |
71 | private static ThreadPoolExecutor heartBeatThreadPool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>(), new ThreadFactoryImpl("heartBeat"));
72 |
73 |
74 |     public static List<Channel> laterSlaves = new ArrayList<>();
75 |
76 | static {
77 | int approHalfCpuCore;
78 | THREAD_NUM = (approHalfCpuCore = HashUtil.tableSizeFor(Runtime.getRuntime().availableProcessors()) / 2) < 1 ? 1 : approHalfCpuCore;
79 | SNAPSHOT_BATCH_COUNT_D1 = Integer.parseInt(FileUtil.getProperty("snapshot-batch-count")) - 1;
80 | queue = new OneToOneBlockingQueue<>(Integer.parseInt(FileUtil.getProperty("queueSize")));
81 | threadPool.execute(SlaveConsMesService::operCache);
82 | heartBeatThreadPool.execute(SlaveConsMesService::heartBeat);
83 | }
84 |
85 |
86 | public static class Message{
87 | CommandDTO.Command command;
88 | ChannelHandlerContext channelHandlerContext;
89 |
90 | public Message(CommandDTO.Command command,ChannelHandlerContext channelHandlerContext) {
91 | this.command = command;
92 | this.channelHandlerContext = channelHandlerContext;
93 | }
94 |
95 | }
96 |
97 | private static void heartBeat(){
98 | while (true) {
99 | for (Channel channel : laterSlaves) {
100 | channel.writeAndFlush(CommandDTO.Command.newBuilder().build());
101 | }
102 | try {
103 | Thread.sleep(4000);
104 | } catch (InterruptedException e) {
105 | logger.error(e.getMessage(),e);
106 | }
107 | }
108 | }
109 |
110 | private static void restoreData(String[] strings){
111 | switch (strings[0]){
112 | case ReqName.PUT: {
113 | cache.put(strings[1], strings[2]);
114 | break;
115 | }
116 | case ReqName.INCR: {
117 | String afterValue;
118 | try {
119 | afterValue = String.valueOf(Integer.parseInt((String) cache.get(strings[1])) + 1);
120 | cache.put(strings[1], afterValue);
121 | } catch (Exception e) {
122 | break;
123 | }
124 | break;
125 | }
126 | case ReqName.DECR: {
127 |                 String afterValue;
128 | try {
129 | afterValue = String.valueOf(Integer.parseInt((String) cache.get(strings[1])) - 1);
130 | cache.put(strings[1], afterValue);
131 | } catch (Exception e) {
132 | break;
133 | }
134 | break;
135 | }
136 | case ReqName.HPUT: {
137 | Object value;
138 | if ((value = cache.get(strings[1])) !=null && !(value instanceof Map)){
139 | break;
140 | }
141 | Map values = SerialUtil.stringToMap(strings[2]);
142 | cache.put(strings[1],values);
143 | break;
144 | }
145 | case ReqName.HMERGE: {
146 | Object value;
147 | if ((value = cache.get(strings[1])) == null) {
148 | Map values = SerialUtil.stringToMap(strings[2]);
149 | cache.put(strings[1], values);
150 | } else if (!(value instanceof Map)) {
151 | break;
152 | } else {
153 | Map mapValue = (Map) value;
154 | Map values = SerialUtil.stringToMap(strings[2]);
155 | for (Map.Entry entry : values.entrySet()) {
156 | mapValue.put(entry.getKey(), entry.getValue());
157 | }
158 | }
159 | break;
160 | }
161 | case ReqName.LPUSH: {
162 | Object value;
163 | if ((value = cache.get(strings[1])) == null) {
164 |                     //Not written as values.addAll(Arrays.asList(message.command.getValue().split(","))) because that also calls addAll under the hood, so there is no difference,
165 |                     //and it adds an extra new java.util.Arrays.ArrayList() step. The JVM may well compile both to the same code, but this form is more direct. Same below.
166 | cache.put(strings[1], SerialUtil.stringToList(strings[2]));
167 | } else if (!(value instanceof List)) {
168 | break;
169 | } else {
170 | List listValue = (List) value;
171 | listValue.addAll(SerialUtil.stringToList(strings[2]));
172 | }
173 | break;
174 | }
175 | case ReqName.SADD: {
176 | Object value;
177 | if ((value = cache.get(strings[1])) == null) {
178 | cache.put(strings[1], SerialUtil.stringToSet(strings[2]));
179 |                 } else if (!(value instanceof Set)) {
180 | break;
181 | } else {
182 | Set setValue = (Set) value;
183 | setValue.addAll(SerialUtil.stringToList(strings[2]));
184 | }
185 | break;
186 | }
187 | case ReqName.ZADD: {
188 | try {
189 | Zset zset = (Zset) cache.get(strings[1]);
190 | String[] strings1 = (strings[2].split("È"));
191 | for (String e : strings1) {
192 | String[] scoreMem = e.split("©");
193 | zset.zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
194 | }
195 | } catch (Exception e) {
196 | break;
197 | }
198 | break;
199 | }
200 | case ReqName.REMOVE: {
201 | cache.remove(strings[1]);
202 | break;
203 | }
204 | default:
205 | }
206 | }
207 |
208 | private static void operCache() {
209 | while (true) {
210 | try {
211 | SlaveConsMesService.Message message = queue.poll(1, TimeUnit.SECONDS);
212 | switch (message.command.getType()) {
213 | case ReqName.PUT: {
214 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
215 | PersistenceService.generateSnapshot(cache);
216 | }
217 | PersistenceService.writeJournal(message.command);
218 | String key = message.command.getKey();
219 | Object preValue;
220 | if ((preValue = cache.get(key)) instanceof String || preValue == null) {
221 | cache.put(key, message.command.getValue());
222 | } else {
223 | }
224 | break;
225 | }
226 | case ReqName.INCR: {
227 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
228 | PersistenceService.generateSnapshot(cache);
229 | }
230 | PersistenceService.writeJournal(message.command);
231 | String key = message.command.getKey();
232 | String afterValue;
233 | try {
234 | afterValue = String.valueOf(Integer.parseInt((String) cache.get(message.command.getKey())) + 1);
235 | cache.put(key, afterValue);
236 | } catch (Exception e) {
237 | break;
238 | }
239 | break;
240 | }
241 | case ReqName.DECR: {
242 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
243 | PersistenceService.generateSnapshot(cache);
244 | }
245 | PersistenceService.writeJournal(message.command);
246 | String key = message.command.getKey();
247 | String afterValue;
248 | try {
249 | afterValue = String.valueOf(Integer.parseInt((String) cache.get(message.command.getKey())) - 1);
250 | cache.put(key, afterValue);
251 | } catch (Exception e) {
252 | break;
253 | }
254 | break;
255 | }
256 | case ReqName.HPUT: {
257 |                         //write the persistence journal
258 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
259 | PersistenceService.generateSnapshot(cache);
260 | }
261 | PersistenceService.writeJournal(message.command);
262 | String key = message.command.getKey();
263 | Object value;
264 | if ((value = cache.get(key)) != null && !(value instanceof Map)) {
265 | break;
266 | }
267 | Map values = SerialUtil.stringToMap(message.command.getValue());
268 | cache.put(key, values);
269 | break;
270 | }
271 | case ReqName.HMERGE: {
272 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
273 | PersistenceService.generateSnapshot(cache);
274 | }
275 | PersistenceService.writeJournal(message.command);
276 | String key = message.command.getKey();
277 | Object value;
278 | if ((value = cache.get(key)) == null) {
279 | Map values = SerialUtil.stringToMap(message.command.getValue());
280 | cache.put(key, values);
281 | } else if (!(value instanceof Map)) {
282 | break;
283 | } else {
284 | Map mapValue = (Map) value;
285 | Map values = SerialUtil.stringToMap(message.command.getValue());
286 | for (Map.Entry entry : values.entrySet()) {
287 | mapValue.put(entry.getKey(), entry.getValue());
288 | }
289 | }
290 | break;
291 | }
292 | case ReqName.LPUSH: {
293 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
294 | PersistenceService.generateSnapshot(cache);
295 | }
296 | PersistenceService.writeJournal(message.command);
297 | String key = message.command.getKey();
298 | Object value;
299 | if ((value = cache.get(key)) == null) {
300 | cache.put(key, SerialUtil.stringToList(message.command.getValue()));
301 | } else if (!(value instanceof List)) {
302 | break;
303 | } else {
304 | List listValue = (List) value;
305 | listValue.addAll(SerialUtil.stringToList(message.command.getValue()));
306 | }
307 | break;
308 | }
309 | case ReqName.SADD: {
310 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
311 | PersistenceService.generateSnapshot(cache);
312 | }
313 | PersistenceService.writeJournal(message.command);
314 | String key = message.command.getKey();
315 | Object value;
316 | if ((value = cache.get(key)) == null) {
317 | cache.put(key, SerialUtil.stringToSet(message.command.getValue()));
318 | } else if (!(value instanceof Set)) {
319 | break;
320 | } else {
321 | Set setValue = (Set) value;
322 | setValue.addAll(SerialUtil.stringToList(message.command.getValue()));
323 | }
324 | break;
325 | }
326 | case ReqName.ZADD: {
327 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
328 | PersistenceService.generateSnapshot(cache);
329 | }
330 | PersistenceService.writeJournal(message.command);
331 | String key = message.command.getKey();
332 | Object value = cache.get(key);
333 | if (value == null) {
334 | value = new Zset();
335 | String[] strings = message.command.getValue().split("È");
336 | for (String e : strings) {
337 | String[] scoreMem = e.split("©");
338 | ((Zset) value).zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
339 | }
340 | cache.put(key, value);
341 | } else if (value instanceof Zset) {
342 | String[] strings = message.command.getValue().split("È");
343 | for (String e : strings) {
344 | String[] scoreMem = e.split("©");
345 | ((Zset) value).zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
346 | }
347 | }
348 | break;
349 | }
350 | case ReqName.HSET: {
351 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
352 | PersistenceService.generateSnapshot(cache);
353 | }
354 | PersistenceService.writeJournal(message.command);
355 | String key = message.command.getKey();
356 | Object value;
357 | if ((value = cache.get(key)) == null) {
358 | Map values = SerialUtil.stringToMap(message.command.getValue());
359 | cache.put(key, values);
360 | } else if (!(value instanceof Map)) {
361 | break;
362 | } else {
363 | Map mapValue = (Map) value;
364 | String[] keyValue = message.command.getValue().split("©");
365 | mapValue.put(keyValue[0], keyValue[1]);
366 | }
367 | break;
368 | }
369 |
370 | case ReqName.EXPIRE: {
371 | String key = message.command.getKey();
372 | if (cache.get(key) != null) {
373 | long expireTime = Instant.now().toEpochMilli() + (Long.parseLong(message.command.getValue()) * 1000);
374 | SlaveExpireService.setKeyAndTime(key, expireTime);
375 | PersistenceService.writeExpireJournal(key +
376 | "ÈÈ" + expireTime);
377 | }
378 | break;
379 | }
380 | case ReqName.REMOVE: {
381 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
382 | PersistenceService.generateSnapshot(cache);
383 | }
384 | PersistenceService.writeJournal(message.command);
385 | cache.remove(message.command.getKey());
386 | break;
387 | }
388 |
389 | case ReqName.ZREM: {
390 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
391 | break;
392 | }
393 | case ReqName.ZINCRBY: {
394 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
395 | break;
396 | }
397 | case ReqName.ZRANK: {
398 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
399 | break;
400 | }
401 | case ReqName.ZREVRANK: {
402 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
403 | break;
404 | }
405 | case ReqName.ZREVRANGE: {
406 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
407 | break;
408 | }
409 | case ReqName.ZCARD: {
410 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
411 | break;
412 | }
413 | case ReqName.ZSCORE: {
414 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
415 | break;
416 | }
417 | case ReqName.ZCOUNT: {
418 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
419 | break;
420 | }
421 | case ReqName.ZRANGEBYSCORE: {
422 |                         //todo implemented for local Zset calls; the RPC part is not written yet, to be added later
423 | break;
424 | }
425 |                     //the first request after startup is always this one
426 | case ReqName.FULL_SYNC: {
427 | recoverData(message);
428 | break;
429 | }
430 | case ReqName.NOTICE: {
431 | laterSlaves.add(ClientService.getConnection(message.command.getKey(), Integer.parseInt(message.command.getValue())));
432 | break;
433 | }
434 | case ReqName.REFRESH_MASTER: {
435 | InetSocketAddress inetSocketAddress = (InetSocketAddress) message.channelHandlerContext.channel().remoteAddress();
436 | String newMaster = inetSocketAddress.getHostString() + ":" + message.command.getKey();
437 | FileUtil.setProperty("masterIpAndPort", newMaster);
438 | break;
439 | }
440 | case ReqName.GET_MASTER: {
441 | message.channelHandlerContext.writeAndFlush(FileUtil.getProperty("masterIpAndPort").getBytes(StandardCharsets.UTF_8));
442 | break;
443 | }
444 | default:
445 | throw new IllegalStateException("Unexpected value: " + message.command.getType());
446 | }
447 | } catch (InterruptedException e) {
448 |                 //no request arrived within the last second
449 | } catch (NullPointerException e) {
450 |                 logger.info("Shutting down the slave node consume-queue service");
451 | break;
452 | }
453 | }
454 | }
455 |
456 | /**
457 |      * Description: reads the persistence files sent by the master node and restores the persisted data
458 | *
459 | * @Return
460 | **/
461 | private static void recoverData(Message message) {
462 | String[] snaps = message.command.getKey().split("■■■■■");
463 | String[] jours = message.command.getValue().split("■■■■■");
464 | String snapshots = snaps[0];
465 | String journal = jours.length > 0 ? jours[0] : "";
466 | String expireSnap = snaps[1];
467 | String expireJour = jours.length == 2 ? jours[1] : "";
468 | FileUtil.generateFileIfNotExist(new File("./persistence/corecache"));
469 | FileUtil.generateFileIfNotExist(new File("./persistence/expire"));
470 | File journalFile = new File(Const.JOURNAL_PATH);
471 | byte[] journalBytes = SerialUtil.toByteArray(journal);
472 | FileOutputStream jourfileOutputStream = null;
473 | FileOutputStream expJourFileOutputStream = null;
474 | FileOutputStream expSnapFileOutputStream = null;
475 | try {
476 | jourfileOutputStream = new FileOutputStream(journalFile);
477 | jourfileOutputStream.write(journalBytes);
478 | jourfileOutputStream.flush();
479 | expJourFileOutputStream = new FileOutputStream(Const.EXPIRE_JOURNAL_PATH);
480 | expJourFileOutputStream.write(SerialUtil.toByteArray(expireJour));
481 | expJourFileOutputStream.flush();
482 | expSnapFileOutputStream = new FileOutputStream(Const.EXPIRE_SNAPSHOT_PATH);
483 | expSnapFileOutputStream.write(SerialUtil.toByteArray(expireSnap));
484 | expSnapFileOutputStream.flush();
485 | } catch (IOException e) {
486 | logger.error(e.getMessage(), e);
487 | } finally {
488 | FileUtil.closeResource(jourfileOutputStream, expJourFileOutputStream, expSnapFileOutputStream);
489 | }
490 | if ("LRU".equals(FileUtil.getProperty("strategy"))) {
491 | recoverCacheOfLRU(snapshots, journalFile);
492 | } else {
493 |             //todo same logic as the LRU cache branch
494 | }
495 |         //clear the persistence files and generate a snapshot
496 | PersistenceService.generateSnapshot(cache);
497 | try {
498 | Class.forName("com.lzp.lxucache.cluster.service.SlaveExpireService");
499 | } catch (ClassNotFoundException e) {
500 | logger.error(e.getMessage(),e);
501 | }
502 | }
503 |
504 | private static void recoverCacheOfLRU(String snapshots, File journalFile) {
505 | ObjectInputStream objectInputStream = null;
506 | try {
507 | objectInputStream = new ObjectInputStream(new ByteArrayInputStream(SerialUtil.toByteArray(snapshots)));
508 | cache = (AutoDeleteMap) objectInputStream.readObject();
509 | } catch (IOException | ClassNotFoundException e) {
510 | logger.error(e.getMessage(), e);
511 | throw new RuntimeException();
512 | } catch (ClassCastException e) {
513 |             logger.error("The eviction strategy of the persisted cache file does not match the configuration file");
514 | throw e;
515 | } finally {
516 | FileUtil.closeResource(objectInputStream);
517 | }
518 | BufferedReader bufferedReader = null;
519 | try {
520 | bufferedReader = new BufferedReader(new FileReader(journalFile));
521 | String cmd;
522 | bufferedReader.readLine();
523 | while ((cmd = bufferedReader.readLine()) != null) {
524 | restoreData(cmd.split("ÈÈ"));
525 | }
526 | } catch (IOException e) {
527 | logger.error(e.getMessage(), e);
528 | throw new RuntimeException();
529 | } finally {
530 | FileUtil.closeResource(bufferedReader);
531 | }
532 | }
533 |
534 |
535 | public static void addMessage(SlaveConsMesService.Message message) {
536 | try {
537 | queue.put(message);
538 | } catch (InterruptedException e) {
539 | logger.error(e.getMessage(),e);
540 | }
541 | }
542 |
543 | /**
544 |      * Description: releases resources after this node is promoted to master.
545 | *
546 | * @Return
547 | **/
548 | public static void close() {
549 | queue = null;
550 | threadPool.shutdown();
551 | threadPool = null;
552 | cache = null;
553 | logger = null;
554 | }
555 | }
556 |
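
Note on the snapshot batching in both consumer services: a snapshot is triggered by the bitmask test `((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0`. Since `SNAPSHOT_BATCH_COUNT_D1` is `snapshot-batch-count - 1`, this is equivalent to `journalNum % snapshot-batch-count == 0` only when the configured count is a power of two. A minimal sketch of the trick (class and variable names here are hypothetical, not part of the project):

```java
public class SnapshotMaskDemo {
    public static void main(String[] args) {
        int batchCount = 8;            // must be a power of two for the mask to work
        int mask = batchCount - 1;     // plays the role of SNAPSHOT_BATCH_COUNT_D1
        int journalNum = 0;
        int snapshots = 0;
        for (int op = 0; op < 32; op++) {
            if (((++journalNum) & mask) == 0) {
                snapshots++;           // fires once every batchCount operations
            }
        }
        // 32 operations with a batch size of 8 -> 4 snapshots
        System.out.println(snapshots); // prints 4
    }
}
```

Nothing in the code above appears to validate the configured value, so `snapshot-batch-count` presumably needs to be kept a power of two in the properties file; with any other value the mask fires at irregular intervals rather than once per batch.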
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/singlemachine/service/ConsMesService.java:
--------------------------------------------------------------------------------
1 | /* Copyright zeping lu
2 | *
3 | * Licensed under the Apache License, Version 2.0 (the "License");
4 | * you may not use this file except in compliance with the License.
5 | * You may obtain a copy of the License at
6 | *
7 | * http://www.apache.org/licenses/LICENSE-2.0
8 | *
9 | * Unless required by applicable law or agreed to in writing, software
10 | * distributed under the License is distributed on an "AS IS" BASIS,
11 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | * See the License for the specific language governing permissions and
13 | * limitations under the License.
14 | */
15 |
16 |
17 | package com.lzp.lxucache.singlemachine.service;
18 |
19 | import com.lzp.lxucache.common.cache.AutoDeleteMap;
20 | import com.lzp.lxucache.common.cache.Cache;
21 | import com.lzp.lxucache.common.cache.LfuCache;
22 | import com.lzp.lxucache.common.constant.ReqName;
23 | import com.lzp.lxucache.common.constant.Const;
24 | import com.lzp.lxucache.common.datastructure.queue.NoLockBlockingQueue;
25 | import com.lzp.lxucache.common.datastructure.set.Zset;
26 | import com.lzp.lxucache.common.protocol.CommandDTO;
27 | import com.lzp.lxucache.common.service.PersistenceService;
28 | import com.lzp.lxucache.common.service.ThreadFactoryImpl;
29 | import com.lzp.lxucache.common.util.FileUtil;
30 | import com.lzp.lxucache.common.util.HashUtil;
31 | import com.lzp.lxucache.common.util.SerialUtil;
32 | import io.netty.channel.ChannelHandlerContext;
33 | import org.slf4j.Logger;
34 | import org.slf4j.LoggerFactory;
35 |
36 | import java.io.*;
37 | import java.nio.charset.StandardCharsets;
38 | import java.time.Instant;
39 | import java.util.*;
40 | import java.util.concurrent.*;
41 |
42 | /**
43 |  * Description: a cache service with a single message queue, consumed by a single message-consuming thread.
44 | *
45 | * @author: Lu ZePing
46 | * @date: 2019/7/1 18:13
47 | */
48 | public class ConsMesService {
49 |
50 |     private static final NoLockBlockingQueue<Message> QUEUE;
51 |
52 | private static final Cache CACHE;
53 |
54 | private static final Logger logger = LoggerFactory.getLogger(ConsMesService.class);
55 |
56 | private final static int SNAPSHOT_BATCH_COUNT_D1;
57 |
58 | private static int journalNum = 0;
59 |
60 | public static final int THREAD_NUM;
61 |
62 | static {
63 | int approHalfCpuCore;
64 | THREAD_NUM = (approHalfCpuCore = HashUtil.tableSizeFor(Runtime.getRuntime().availableProcessors()) / 2) < 1 ? 1 : approHalfCpuCore;
65 | int maxSize = Integer.parseInt(FileUtil.getProperty("lruCacheMaxSize"));
66 | SNAPSHOT_BATCH_COUNT_D1 = Integer.parseInt(FileUtil.getProperty("snapshot-batch-count")) - 1;
67 | ExecutorService threadPool = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>(1), new ThreadFactoryImpl("operCache"));
68 |         //restore data from the persistence file if it exists, otherwise initialize an empty cache
69 | if (Const.LRU.equals(FileUtil.getProperty("strategy"))) {
70 | File file = new File(Const.SNAPSHOT_PATH);
71 | if (!file.exists()) {
72 | CACHE = new AutoDeleteMap<>(maxSize);
73 | } else {
74 | ObjectInputStream objectInputStream = null;
75 | try {
76 | objectInputStream = new ObjectInputStream(new FileInputStream(file));
77 | CACHE = (AutoDeleteMap) objectInputStream.readObject();
78 | } catch (IOException | ClassNotFoundException e) {
79 | logger.error(e.getMessage(), e);
80 | throw new RuntimeException();
81 | } catch (ClassCastException e) {
82 |                     logger.error("The eviction strategy of the persisted cache file does not match the configuration file");
83 | throw e;
84 | } finally {
85 | FileUtil.closeResource(objectInputStream);
86 | }
87 | BufferedReader bufferedReader = null;
88 | try {
89 | bufferedReader = new BufferedReader(new InputStreamReader(new FileInputStream(Const.JOURNAL_PATH), StandardCharsets.UTF_8));
90 | String cmd;
91 | bufferedReader.readLine();
92 | while ((cmd = bufferedReader.readLine()) != null) {
93 | restoreData(cmd.split("ÈÈ"));
94 | }
95 | } catch (IOException e) {
96 | logger.error(e.getMessage(), e);
97 | throw new RuntimeException();
98 | } finally {
99 | FileUtil.closeResource(bufferedReader);
100 | }
101 |
102 | }
103 | } else {
104 | CACHE = new LfuCache(maxSize);
105 | }
106 | QUEUE = new NoLockBlockingQueue<>(Integer.parseInt(FileUtil.getProperty("queueSize")), THREAD_NUM);
107 | threadPool.execute(() -> operCache());
108 | // Clear the persistence files and generate a fresh snapshot
109 | PersistenceService.generateSnapshot(CACHE);
110 | }
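The `(++journalNum) & SNAPSHOT_BATCH_COUNT_D1` checks in `operCache()` behave like `journalNum % snapshot-batch-count == 0` only when `snapshot-batch-count` is a power of two. A defensive check along these lines (a hypothetical helper, not part of the original class) could make that assumption explicit:

```java
    // Hypothetical sketch: fail fast if snapshot-batch-count is mis-configured.
    // A positive int is a power of two exactly when it has a single set bit.
    private static int requirePowerOfTwo(int n) {
        if (n < 1 || (n & (n - 1)) != 0) {
            throw new IllegalArgumentException("snapshot-batch-count must be a power of two, got: " + n);
        }
        return n;
    }
```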
111 |
112 | public static class Message {
113 | CommandDTO.Command command;
114 | ChannelHandlerContext channelHandlerContext;
115 |
116 | public Message(CommandDTO.Command command, ChannelHandlerContext channelHandlerContext) {
117 | this.command = command;
118 | this.channelHandlerContext = channelHandlerContext;
119 | }
120 |
121 | }
122 | private static void restoreData(String[] strings) {
123 | switch (strings[0]) {
124 | case ReqName.PUT: {
125 | CACHE.put(strings[1], strings[2]);
126 | break;
127 | }
128 | case ReqName.INCR: {
129 | String afterValue;
130 | try {
131 | afterValue = String.valueOf(Integer.parseInt((String) CACHE.get(strings[1])) + 1);
132 | CACHE.put(strings[1], afterValue);
133 | } catch (Exception e) {
134 | break;
135 | }
136 | break;
137 | }
138 | case ReqName.DECR: {
139 | String afterValue;
140 | try {
141 | afterValue = String.valueOf(Integer.parseInt((String) CACHE.get(strings[1])) - 1);
142 | CACHE.put(strings[1], afterValue);
143 | } catch (Exception e) {
144 | break;
145 | }
146 | break;
147 | }
148 | case ReqName.HPUT: {
149 | Object value;
150 | if ((value = CACHE.get(strings[1])) !=null && !(value instanceof Map)){
151 | break;
152 | }
153 | Map values = SerialUtil.stringToMap(strings[2]);
154 | CACHE.put(strings[1], values);
155 | break;
156 | }
157 | case ReqName.HMERGE: {
158 | Object value;
159 | if ((value = CACHE.get(strings[1])) == null) {
160 | Map values = SerialUtil.stringToMap(strings[2]);
161 | CACHE.put(strings[1], values);
162 | } else if (!(value instanceof Map)) {
163 | break;
164 | } else {
165 | Map mapValue = (Map) value;
166 | Map values = SerialUtil.stringToMap(strings[2]);
167 | for (Map.Entry entry : values.entrySet()) {
168 | mapValue.put(entry.getKey(), entry.getValue());
169 | }
170 | }
171 | break;
172 | }
173 | case ReqName.LPUSH: {
174 | Object value;
175 | if ((value = CACHE.get(strings[1])) == null) {
176 | CACHE.put(strings[1], SerialUtil.stringToList(strings[2]));
177 | } else if (!(value instanceof List)) {
178 | break;
179 | } else {
180 | List listValue = (List) value;
181 | listValue.addAll(SerialUtil.stringToList(strings[2]));
182 | }
183 | break;
184 | }
185 | case ReqName.SADD: {
186 | Object value;
187 | if ((value = CACHE.get(strings[1])) == null) {
188 | CACHE.put(strings[1], SerialUtil.stringToSet(strings[2]));
189 | } else if (!(value instanceof Set)) {
190 | break;
191 | } else {
192 | Set setValue = (Set) value;
193 | setValue.addAll(SerialUtil.stringToList(strings[2]));
194 | }
195 | break;
196 | }
197 | case ReqName.ZADD: {
198 | try {
199 | Zset zset = (Zset) CACHE.get(strings[1]);
200 | String[] strings1 = (strings[2].split("È"));
201 | for (String e : strings1) {
202 | String[] scoreMem = e.split("©");
203 | zset.zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
204 | }
205 | } catch (Exception e) {
206 | break;
207 | }
208 | break;
209 | }
210 | case ReqName.REMOVE: {
211 | CACHE.remove(strings[1]);
212 | break;
213 | }
214 | default:
215 | }
216 | }
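For readability, the `ÈÈ`-delimited journal format replayed above could be named by a small helper (hypothetical, mirroring the `cmd.split("ÈÈ")` call in the recovery loop):

```java
    // Hypothetical sketch: a journal line is the operation name, key and value
    // joined by the "ÈÈ" delimiter, as consumed by restoreData above.
    private static String[] splitJournalLine(String line) {
        return line.split("ÈÈ");
    }
```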
217 |
218 | private static void operCache() {
219 | try {
220 | while (true) {
221 | ConsMesService.Message message = QUEUE.take();
222 | switch (message.command.getType()) {
223 | case ReqName.GET: {
224 | Object ret = CACHE.get(message.command.getKey());
225 | String result = ret == null ? "null" : ret.toString();
226 | message.channelHandlerContext.writeAndFlush(result.getBytes(StandardCharsets.UTF_8));
227 | break;
228 | }
229 | case ReqName.PUT: {
230 | // snapshot every snapshot-batch-count writes; the bitmask assumes the count is a power of two
230 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
231 | PersistenceService.generateSnapshot(CACHE);
232 | }
233 | PersistenceService.writeJournal(message.command);
234 | String key = message.command.getKey();
235 | Object preValue;
236 | if ((preValue = CACHE.get(key)) instanceof String || preValue == null) {
237 | CACHE.put(key, message.command.getValue());
238 | message.channelHandlerContext.writeAndFlush(new byte[1]);
239 | } else {
240 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
241 | }
242 | break;
243 | }
244 | case ReqName.INCR: {
245 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
246 | PersistenceService.generateSnapshot(CACHE);
247 | }
248 | PersistenceService.writeJournal(message.command);
249 | String key = message.command.getKey();
250 | String afterValue;
251 | try {
252 | afterValue = String.valueOf(Integer.parseInt((String) CACHE.get(message.command.getKey())) + 1);
253 | CACHE.put(key, afterValue);
254 | } catch (Exception e) {
255 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
256 | break;
257 | }
258 | message.channelHandlerContext.writeAndFlush(afterValue.getBytes(StandardCharsets.UTF_8));
259 | break;
260 | }
261 | case ReqName.DECR: {
262 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
263 | PersistenceService.generateSnapshot(CACHE);
264 | }
265 | PersistenceService.writeJournal(message.command);
266 | String key = message.command.getKey();
267 | String afterValue;
268 | try {
269 | afterValue = String.valueOf(Integer.parseInt((String) CACHE.get(message.command.getKey())) - 1);
270 | CACHE.put(key, afterValue);
271 | } catch (Exception e) {
272 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
273 | break;
274 | }
275 | message.channelHandlerContext.writeAndFlush(afterValue.getBytes(StandardCharsets.UTF_8));
276 | break;
277 | }
278 | case ReqName.HPUT: {
279 | // write the persistence journal
280 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
281 | PersistenceService.generateSnapshot(CACHE);
282 | }
283 | PersistenceService.writeJournal(message.command);
284 | String key = message.command.getKey();
285 | Object value;
286 | if ((value = CACHE.get(key)) !=null && !(value instanceof Map)){
287 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
288 | break;
289 | }
290 | Map values = SerialUtil.stringToMap(message.command.getValue());
291 | CACHE.put(key, values);
292 | message.channelHandlerContext.writeAndFlush(new byte[1]);
293 | break;
294 | }
295 | case ReqName.HMERGE: {
296 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
297 | PersistenceService.generateSnapshot(CACHE);
298 | }
299 | PersistenceService.writeJournal(message.command);
300 | String key = message.command.getKey();
301 | Object value;
302 | if ((value = CACHE.get(key)) == null) {
303 | Map values = SerialUtil.stringToMap(message.command.getValue());
304 | CACHE.put(key, values);
305 | } else if (!(value instanceof Map)) {
306 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
307 | break;
308 | } else {
309 | Map mapValue = (Map) value;
310 | Map values = SerialUtil.stringToMap(message.command.getValue());
311 | for (Map.Entry entry : values.entrySet()) {
312 | mapValue.put(entry.getKey(), entry.getValue());
313 | }
314 | }
315 | message.channelHandlerContext.writeAndFlush(new byte[1]);
316 | break;
317 | }
318 | case ReqName.LPUSH: {
319 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
320 | PersistenceService.generateSnapshot(CACHE);
321 | }
322 | PersistenceService.writeJournal(message.command);
323 | String key = message.command.getKey();
324 | Object value;
325 | if ((value = CACHE.get(key)) == null) {
326 | CACHE.put(key, SerialUtil.stringToList(message.command.getValue()));
327 | } else if (!(value instanceof List)) {
328 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
329 | break;
330 | } else {
331 | List listValue = (List) value;
332 | listValue.addAll(SerialUtil.stringToList(message.command.getValue()));
333 | }
334 | message.channelHandlerContext.writeAndFlush(new byte[1]);
335 | break;
336 | }
337 | case ReqName.SADD: {
338 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
339 | PersistenceService.generateSnapshot(CACHE);
340 | }
341 | PersistenceService.writeJournal(message.command);
342 | String key = message.command.getKey();
343 | Object value;
344 | if ((value = CACHE.get(key)) == null) {
345 | CACHE.put(key, SerialUtil.stringToSet(message.command.getValue()));
346 | } else if (!(value instanceof Set)) {
347 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
348 | break;
349 | } else {
350 | Set setValue = (Set) value;
351 | setValue.addAll(SerialUtil.stringToList(message.command.getValue()));
352 | }
353 | message.channelHandlerContext.writeAndFlush(new byte[1]);
354 | break;
355 | }
356 | case ReqName.ZADD: {
357 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
358 | PersistenceService.generateSnapshot(CACHE);
359 | }
360 | PersistenceService.writeJournal(message.command);
361 | String key = message.command.getKey();
362 | Object value = CACHE.get(key);
363 | if (value == null) {
364 | value = new Zset();
365 | String[] strings = message.command.getValue().split("È");
366 | for (String e : strings) {
367 | String[] scoreMem = e.split("©");
368 | ((Zset) value).zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
369 | }
370 | CACHE.put(key, value);
371 | } else if (value instanceof Zset) {
372 | String[] strings = message.command.getValue().split("È");
373 | for (String e : strings) {
374 | String[] scoreMem = e.split("©");
375 | ((Zset) value).zadd(Double.parseDouble(scoreMem[0]), scoreMem[1]);
376 | }
377 | } else {
378 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
379 | break; // type mismatch: do not also send the success byte below
379 | }
380 | message.channelHandlerContext.writeAndFlush(new byte[1]);
381 | break;
382 | }
383 | case ReqName.HSET: {
384 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
385 | PersistenceService.generateSnapshot(CACHE);
386 | }
387 | PersistenceService.writeJournal(message.command);
388 | String key = message.command.getKey();
389 | Object value;
390 | if ((value = CACHE.get(key)) == null) {
391 | Map values = SerialUtil.stringToMap(message.command.getValue());
392 | CACHE.put(key, values);
393 | } else if (!(value instanceof Map)) {
394 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
395 | break;
396 | } else {
397 | Map mapValue = (Map) value;
398 | String[] keyValue = message.command.getValue().split("©");
399 | mapValue.put(keyValue[0], keyValue[1]);
400 | }
401 | message.channelHandlerContext.writeAndFlush(new byte[1]);
402 | break;
403 | }
404 | case ReqName.HGET: {
405 | try {
406 | Map values = (Map) CACHE.get(message.command.getKey());
407 | if (values == null) {
408 | message.channelHandlerContext.writeAndFlush("null".getBytes(StandardCharsets.UTF_8));
409 | } else {
410 | Object result;
411 | message.channelHandlerContext.writeAndFlush((result = values.get(message.command.getValue())) == null ? "null".getBytes(StandardCharsets.UTF_8) : result.toString().getBytes(StandardCharsets.UTF_8));
412 | }
413 | } catch (Exception e) {
414 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
415 | }
416 | break;
417 | }
418 | case ReqName.GET_LIST: {
419 | try {
420 | List values = (List) CACHE.get(message.command.getKey());
421 | message.channelHandlerContext.writeAndFlush(values == null ? "null".getBytes(StandardCharsets.UTF_8) : SerialUtil.collectionToString(values).getBytes(StandardCharsets.UTF_8));
422 | } catch (Exception e) {
423 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
424 | }
425 | break;
426 | }
427 | case ReqName.GET_SET: {
428 | try {
429 | Set values = (Set) CACHE.get(message.command.getKey());
430 | message.channelHandlerContext.writeAndFlush(values == null ? "null".getBytes(StandardCharsets.UTF_8) : SerialUtil.collectionToString(values).getBytes(StandardCharsets.UTF_8));
431 | } catch (Exception e) {
432 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
433 | }
434 | break;
435 | }
436 | case ReqName.SCONTAIN: {
437 | try {
438 | Set values = (Set) CACHE.get(message.command.getKey());
439 | message.channelHandlerContext.writeAndFlush(String.valueOf(values.contains(message.command.getValue())).getBytes(StandardCharsets.UTF_8));
440 | } catch (Exception e) {
441 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
442 | }
443 | break;
444 | }
445 | case ReqName.EXPIRE: {
446 | String key = message.command.getKey();
447 | if (CACHE.get(key) == null) {
448 | message.channelHandlerContext.writeAndFlush("0".getBytes(StandardCharsets.UTF_8));
449 | } else {
450 | long expireTime = Instant.now().toEpochMilli() + (Long.parseLong(message.command.getValue()) * 1000);
451 | ExpireService.setKeyAndTime(key, expireTime);
452 | PersistenceService.writeExpireJournal(key +
453 | "ÈÈ" + expireTime);
454 | message.channelHandlerContext.writeAndFlush("1".getBytes(StandardCharsets.UTF_8));
455 | }
456 | break;
457 | }
458 | case ReqName.REMOVE: {
459 | if (((++journalNum) & SNAPSHOT_BATCH_COUNT_D1) == 0) {
460 | PersistenceService.generateSnapshot(CACHE);
461 | }
462 | PersistenceService.writeJournal(message.command);
463 | CACHE.remove(message.command.getKey());
464 | if (message.channelHandlerContext != null) {
465 | message.channelHandlerContext.writeAndFlush(new byte[1]);
466 | }
467 | break;
468 | }
469 | case ReqName.ZRANGE: {
470 | try {
471 | Zset zset = (Zset) CACHE.get(message.command.getKey());
472 | String[] startAndEnd = message.command.getValue().split("©");
473 | message.channelHandlerContext.writeAndFlush(zset.zrange(Long.parseLong(startAndEnd[0]), Long.parseLong(startAndEnd[1])).getBytes(StandardCharsets.UTF_8));
474 | } catch (Exception e) {
475 | message.channelHandlerContext.writeAndFlush("e".getBytes(StandardCharsets.UTF_8));
476 | }
477 | break;
478 | }
479 | case ReqName.ZREM: {
480 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
481 | break;
482 | }
483 | case ReqName.ZINCRBY: {
484 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
485 | break;
486 | }
487 | case ReqName.ZRANK: {
488 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
489 | break;
490 | }
491 | case ReqName.ZREVRANK: {
492 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
493 | break;
494 | }
495 | case ReqName.ZREVRANGE: {
496 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
497 | break;
498 | }
499 | case ReqName.ZCARD: {
500 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
501 | break;
502 | }
503 | case ReqName.ZSCORE: {
504 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
505 | break;
506 | }
507 | case ReqName.ZCOUNT: {
508 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
509 | break;
510 | }
511 | case ReqName.ZRANGEBYSCORE: {
512 | //todo implemented locally in Zset; the RPC path is not written yet, to be added later
513 | break;
514 | }
515 | default:
516 | throw new IllegalStateException("Unexpected value: " + message.command.getType());
517 | }
518 | }
519 | } catch (InterruptedException e) {
520 | logger.error(e.getMessage(), e);
520 | Thread.currentThread().interrupt(); // restore the interrupt flag
521 | }
522 | }
523 |
524 |
525 | public static void addMessage(ConsMesService.Message message, int threadId) {
526 | try {
527 | QUEUE.put(message, threadId);
528 | } catch (InterruptedException e) {
529 | logger.error(e.getMessage(), e);
529 | Thread.currentThread().interrupt(); // restore the interrupt flag
530 | }
531 | }
532 |
533 |
534 |
535 | }
536 |
--------------------------------------------------------------------------------
/LXU-Cache/src/main/java/com/lzp/lxucache/common/protocol/CommandDTO.java:
--------------------------------------------------------------------------------
1 | // Generated by the protocol buffer compiler. DO NOT EDIT!
2 | // source: command.proto
3 |
4 | package com.lzp.lxucache.common.protocol;
5 |
6 | public final class CommandDTO {
7 | private CommandDTO() {}
8 | public static void registerAllExtensions(
9 | com.google.protobuf.ExtensionRegistryLite registry) {
10 | }
11 |
12 | public static void registerAllExtensions(
13 | com.google.protobuf.ExtensionRegistry registry) {
14 | registerAllExtensions(
15 | (com.google.protobuf.ExtensionRegistryLite) registry);
16 | }
17 | public interface CommandOrBuilder extends
18 | // @@protoc_insertion_point(interface_extends:Command)
19 | com.google.protobuf.MessageOrBuilder {
20 |
21 | /**
22 | * string type = 1;
23 | */
24 | String getType();
25 | /**
26 | * string type = 1;
27 | */
28 | com.google.protobuf.ByteString
29 | getTypeBytes();
30 |
31 | /**
32 | * string key = 2;
33 | */
34 | String getKey();
35 | /**
36 | * string key = 2;
37 | */
38 | com.google.protobuf.ByteString
39 | getKeyBytes();
40 |
41 | /**
42 | * string value = 3;
43 | */
44 | String getValue();
45 | /**
46 | * string value = 3;
47 | */
48 | com.google.protobuf.ByteString
49 | getValueBytes();
50 | }
51 | /**
52 | * Protobuf type {@code Command}
53 | */
54 | public static final class Command extends
55 | com.google.protobuf.GeneratedMessageV3 implements
56 | // @@protoc_insertion_point(message_implements:Command)
57 | CommandOrBuilder {
58 | private static final long serialVersionUID = 0L;
59 | // Use Command.newBuilder() to construct.
60 | private Command(com.google.protobuf.GeneratedMessageV3.Builder<?> builder) {
61 | super(builder);
62 | }
63 | private Command() {
64 | type_ = "";
65 | key_ = "";
66 | value_ = "";
67 | }
68 |
69 | @Override
70 | public final com.google.protobuf.UnknownFieldSet
71 | getUnknownFields() {
72 | return this.unknownFields;
73 | }
74 | private Command(
75 | com.google.protobuf.CodedInputStream input,
76 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
77 | throws com.google.protobuf.InvalidProtocolBufferException {
78 | this();
79 | if (extensionRegistry == null) {
80 | throw new NullPointerException();
81 | }
82 | int mutable_bitField0_ = 0;
83 | com.google.protobuf.UnknownFieldSet.Builder unknownFields =
84 | com.google.protobuf.UnknownFieldSet.newBuilder();
85 | try {
86 | boolean done = false;
87 | while (!done) {
88 | int tag = input.readTag();
89 | switch (tag) {
90 | case 0:
91 | done = true;
92 | break;
93 | case 10: {
94 | String s = input.readStringRequireUtf8();
95 |
96 | type_ = s;
97 | break;
98 | }
99 | case 18: {
100 | String s = input.readStringRequireUtf8();
101 |
102 | key_ = s;
103 | break;
104 | }
105 | case 26: {
106 | String s = input.readStringRequireUtf8();
107 |
108 | value_ = s;
109 | break;
110 | }
111 | default: {
112 | if (!parseUnknownFieldProto3(
113 | input, unknownFields, extensionRegistry, tag)) {
114 | done = true;
115 | }
116 | break;
117 | }
118 | }
119 | }
120 | } catch (com.google.protobuf.InvalidProtocolBufferException e) {
121 | throw e.setUnfinishedMessage(this);
122 | } catch (java.io.IOException e) {
123 | throw new com.google.protobuf.InvalidProtocolBufferException(
124 | e).setUnfinishedMessage(this);
125 | } finally {
126 | this.unknownFields = unknownFields.build();
127 | makeExtensionsImmutable();
128 | }
129 | }
130 | public static final com.google.protobuf.Descriptors.Descriptor
131 | getDescriptor() {
132 | return CommandDTO.internal_static_Command_descriptor;
133 | }
134 |
135 | @Override
136 | protected FieldAccessorTable
137 | internalGetFieldAccessorTable() {
138 | return CommandDTO.internal_static_Command_fieldAccessorTable
139 | .ensureFieldAccessorsInitialized(
140 | CommandDTO.Command.class, CommandDTO.Command.Builder.class);
141 | }
142 |
143 | public static final int TYPE_FIELD_NUMBER = 1;
144 | private volatile Object type_;
145 | /**
146 | * string type = 1;
147 | */
148 | public String getType() {
149 | Object ref = type_;
150 | if (ref instanceof String) {
151 | return (String) ref;
152 | } else {
153 | com.google.protobuf.ByteString bs =
154 | (com.google.protobuf.ByteString) ref;
155 | String s = bs.toStringUtf8();
156 | type_ = s;
157 | return s;
158 | }
159 | }
160 | /**
161 | * string type = 1;
162 | */
163 | public com.google.protobuf.ByteString
164 | getTypeBytes() {
165 | Object ref = type_;
166 | if (ref instanceof String) {
167 | com.google.protobuf.ByteString b =
168 | com.google.protobuf.ByteString.copyFromUtf8(
169 | (String) ref);
170 | type_ = b;
171 | return b;
172 | } else {
173 | return (com.google.protobuf.ByteString) ref;
174 | }
175 | }
176 |
177 | public static final int KEY_FIELD_NUMBER = 2;
178 | private volatile Object key_;
179 | /**
180 | * string key = 2;
181 | */
182 | public String getKey() {
183 | Object ref = key_;
184 | if (ref instanceof String) {
185 | return (String) ref;
186 | } else {
187 | com.google.protobuf.ByteString bs =
188 | (com.google.protobuf.ByteString) ref;
189 | String s = bs.toStringUtf8();
190 | key_ = s;
191 | return s;
192 | }
193 | }
194 | /**
195 | * string key = 2;
196 | */
197 | public com.google.protobuf.ByteString
198 | getKeyBytes() {
199 | Object ref = key_;
200 | if (ref instanceof String) {
201 | com.google.protobuf.ByteString b =
202 | com.google.protobuf.ByteString.copyFromUtf8(
203 | (String) ref);
204 | key_ = b;
205 | return b;
206 | } else {
207 | return (com.google.protobuf.ByteString) ref;
208 | }
209 | }
210 |
211 | public static final int VALUE_FIELD_NUMBER = 3;
212 | private volatile Object value_;
213 | /**
214 | * string value = 3;
215 | */
216 | public String getValue() {
217 | Object ref = value_;
218 | if (ref instanceof String) {
219 | return (String) ref;
220 | } else {
221 | com.google.protobuf.ByteString bs =
222 | (com.google.protobuf.ByteString) ref;
223 | String s = bs.toStringUtf8();
224 | value_ = s;
225 | return s;
226 | }
227 | }
228 | /**
229 | * string value = 3;
230 | */
231 | public com.google.protobuf.ByteString
232 | getValueBytes() {
233 | Object ref = value_;
234 | if (ref instanceof String) {
235 | com.google.protobuf.ByteString b =
236 | com.google.protobuf.ByteString.copyFromUtf8(
237 | (String) ref);
238 | value_ = b;
239 | return b;
240 | } else {
241 | return (com.google.protobuf.ByteString) ref;
242 | }
243 | }
244 |
245 | private byte memoizedIsInitialized = -1;
246 | @Override
247 | public final boolean isInitialized() {
248 | byte isInitialized = memoizedIsInitialized;
249 | if (isInitialized == 1) return true;
250 | if (isInitialized == 0) return false;
251 |
252 | memoizedIsInitialized = 1;
253 | return true;
254 | }
255 |
256 | @Override
257 | public void writeTo(com.google.protobuf.CodedOutputStream output)
258 | throws java.io.IOException {
259 | if (!getTypeBytes().isEmpty()) {
260 | com.google.protobuf.GeneratedMessageV3.writeString(output, 1, type_);
261 | }
262 | if (!getKeyBytes().isEmpty()) {
263 | com.google.protobuf.GeneratedMessageV3.writeString(output, 2, key_);
264 | }
265 | if (!getValueBytes().isEmpty()) {
266 | com.google.protobuf.GeneratedMessageV3.writeString(output, 3, value_);
267 | }
268 | unknownFields.writeTo(output);
269 | }
270 |
271 | @Override
272 | public int getSerializedSize() {
273 | int size = memoizedSize;
274 | if (size != -1) return size;
275 |
276 | size = 0;
277 | if (!getTypeBytes().isEmpty()) {
278 | size += com.google.protobuf.GeneratedMessageV3.computeStringSize(1, type_);
279 | }
280 | if (!getKeyBytes().isEmpty()) {
281 | size += com.google.protobuf.GeneratedMessageV3.computeStringSize(2, key_);
282 | }
283 | if (!getValueBytes().isEmpty()) {
284 | size += com.google.protobuf.GeneratedMessageV3.computeStringSize(3, value_);
285 | }
286 | size += unknownFields.getSerializedSize();
287 | memoizedSize = size;
288 | return size;
289 | }
290 |
291 | @Override
292 | public boolean equals(final Object obj) {
293 | if (obj == this) {
294 | return true;
295 | }
296 | if (!(obj instanceof CommandDTO.Command)) {
297 | return super.equals(obj);
298 | }
299 | CommandDTO.Command other = (CommandDTO.Command) obj;
300 |
301 | boolean result = true;
302 | result = result && getType()
303 | .equals(other.getType());
304 | result = result && getKey()
305 | .equals(other.getKey());
306 | result = result && getValue()
307 | .equals(other.getValue());
308 | result = result && unknownFields.equals(other.unknownFields);
309 | return result;
310 | }
311 |
312 | @Override
313 | public int hashCode() {
314 | if (memoizedHashCode != 0) {
315 | return memoizedHashCode;
316 | }
317 | int hash = 41;
318 | hash = (19 * hash) + getDescriptor().hashCode();
319 | hash = (37 * hash) + TYPE_FIELD_NUMBER;
320 | hash = (53 * hash) + getType().hashCode();
321 | hash = (37 * hash) + KEY_FIELD_NUMBER;
322 | hash = (53 * hash) + getKey().hashCode();
323 | hash = (37 * hash) + VALUE_FIELD_NUMBER;
324 | hash = (53 * hash) + getValue().hashCode();
325 | hash = (29 * hash) + unknownFields.hashCode();
326 | memoizedHashCode = hash;
327 | return hash;
328 | }
329 |
330 | public static CommandDTO.Command parseFrom(
331 | java.nio.ByteBuffer data)
332 | throws com.google.protobuf.InvalidProtocolBufferException {
333 | return PARSER.parseFrom(data);
334 | }
335 | public static CommandDTO.Command parseFrom(
336 | java.nio.ByteBuffer data,
337 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
338 | throws com.google.protobuf.InvalidProtocolBufferException {
339 | return PARSER.parseFrom(data, extensionRegistry);
340 | }
341 | public static CommandDTO.Command parseFrom(
342 | com.google.protobuf.ByteString data)
343 | throws com.google.protobuf.InvalidProtocolBufferException {
344 | return PARSER.parseFrom(data);
345 | }
346 | public static CommandDTO.Command parseFrom(
347 | com.google.protobuf.ByteString data,
348 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
349 | throws com.google.protobuf.InvalidProtocolBufferException {
350 | return PARSER.parseFrom(data, extensionRegistry);
351 | }
352 | public static CommandDTO.Command parseFrom(byte[] data)
353 | throws com.google.protobuf.InvalidProtocolBufferException {
354 | return PARSER.parseFrom(data);
355 | }
356 | public static CommandDTO.Command parseFrom(
357 | byte[] data,
358 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
359 | throws com.google.protobuf.InvalidProtocolBufferException {
360 | return PARSER.parseFrom(data, extensionRegistry);
361 | }
362 | public static CommandDTO.Command parseFrom(java.io.InputStream input)
363 | throws java.io.IOException {
364 | return com.google.protobuf.GeneratedMessageV3
365 | .parseWithIOException(PARSER, input);
366 | }
367 | public static CommandDTO.Command parseFrom(
368 | java.io.InputStream input,
369 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
370 | throws java.io.IOException {
371 | return com.google.protobuf.GeneratedMessageV3
372 | .parseWithIOException(PARSER, input, extensionRegistry);
373 | }
374 | public static CommandDTO.Command parseDelimitedFrom(java.io.InputStream input)
375 | throws java.io.IOException {
376 | return com.google.protobuf.GeneratedMessageV3
377 | .parseDelimitedWithIOException(PARSER, input);
378 | }
379 | public static CommandDTO.Command parseDelimitedFrom(
380 | java.io.InputStream input,
381 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
382 | throws java.io.IOException {
383 | return com.google.protobuf.GeneratedMessageV3
384 | .parseDelimitedWithIOException(PARSER, input, extensionRegistry);
385 | }
386 | public static CommandDTO.Command parseFrom(
387 | com.google.protobuf.CodedInputStream input)
388 | throws java.io.IOException {
389 | return com.google.protobuf.GeneratedMessageV3
390 | .parseWithIOException(PARSER, input);
391 | }
392 | public static CommandDTO.Command parseFrom(
393 | com.google.protobuf.CodedInputStream input,
394 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
395 | throws java.io.IOException {
396 | return com.google.protobuf.GeneratedMessageV3
397 | .parseWithIOException(PARSER, input, extensionRegistry);
398 | }
399 |
400 | @Override
401 | public Builder newBuilderForType() { return newBuilder(); }
402 | public static Builder newBuilder() {
403 | return DEFAULT_INSTANCE.toBuilder();
404 | }
405 | public static Builder newBuilder(CommandDTO.Command prototype) {
406 | return DEFAULT_INSTANCE.toBuilder().mergeFrom(prototype);
407 | }
408 | @Override
409 | public Builder toBuilder() {
410 | return this == DEFAULT_INSTANCE
411 | ? new Builder() : new Builder().mergeFrom(this);
412 | }
413 |
414 | @Override
415 | protected Builder newBuilderForType(
416 | BuilderParent parent) {
417 | Builder builder = new Builder(parent);
418 | return builder;
419 | }
420 | /**
421 | * Protobuf type {@code Command}
422 | */
423 | public static final class Builder extends
424 | com.google.protobuf.GeneratedMessageV3.Builder<Builder> implements
425 | // @@protoc_insertion_point(builder_implements:Command)
426 | CommandDTO.CommandOrBuilder {
427 | public static final com.google.protobuf.Descriptors.Descriptor
428 | getDescriptor() {
429 | return CommandDTO.internal_static_Command_descriptor;
430 | }
431 |
432 | @Override
433 | protected FieldAccessorTable
434 | internalGetFieldAccessorTable() {
435 | return CommandDTO.internal_static_Command_fieldAccessorTable
436 | .ensureFieldAccessorsInitialized(
437 | CommandDTO.Command.class, CommandDTO.Command.Builder.class);
438 | }
439 |
440 | // Construct using com.lzp.common.protocol.CommandDTO.Command.newBuilder()
441 | private Builder() {
442 | maybeForceBuilderInitialization();
443 | }
444 |
445 | private Builder(
446 | BuilderParent parent) {
447 | super(parent);
448 | maybeForceBuilderInitialization();
449 | }
450 | private void maybeForceBuilderInitialization() {
451 | if (com.google.protobuf.GeneratedMessageV3
452 | .alwaysUseFieldBuilders) {
453 | }
454 | }
455 | @Override
456 | public Builder clear() {
457 | super.clear();
458 | type_ = "";
459 |
460 | key_ = "";
461 |
462 | value_ = "";
463 |
464 | return this;
465 | }
466 |
467 | @Override
468 | public com.google.protobuf.Descriptors.Descriptor
469 | getDescriptorForType() {
470 | return CommandDTO.internal_static_Command_descriptor;
471 | }
472 |
473 | @Override
474 | public CommandDTO.Command getDefaultInstanceForType() {
475 | return CommandDTO.Command.getDefaultInstance();
476 | }
477 |
478 | @Override
479 | public CommandDTO.Command build() {
480 | CommandDTO.Command result = buildPartial();
481 | if (!result.isInitialized()) {
482 | throw newUninitializedMessageException(result);
483 | }
484 | return result;
485 | }
486 |
487 | @Override
488 | public CommandDTO.Command buildPartial() {
489 | CommandDTO.Command result = new CommandDTO.Command(this);
490 | result.type_ = type_;
491 | result.key_ = key_;
492 | result.value_ = value_;
493 | onBuilt();
494 | return result;
495 | }
496 |
497 | @Override
498 | public Builder clone() {
499 | return (Builder) super.clone();
500 | }
501 | @Override
502 | public Builder setField(
503 | com.google.protobuf.Descriptors.FieldDescriptor field,
504 | Object value) {
505 | return (Builder) super.setField(field, value);
506 | }
507 | @Override
508 | public Builder clearField(
509 | com.google.protobuf.Descriptors.FieldDescriptor field) {
510 | return (Builder) super.clearField(field);
511 | }
512 | @Override
513 | public Builder clearOneof(
514 | com.google.protobuf.Descriptors.OneofDescriptor oneof) {
515 | return (Builder) super.clearOneof(oneof);
516 | }
517 | @Override
518 | public Builder setRepeatedField(
519 | com.google.protobuf.Descriptors.FieldDescriptor field,
520 | int index, Object value) {
521 | return (Builder) super.setRepeatedField(field, index, value);
522 | }
523 | @Override
524 | public Builder addRepeatedField(
525 | com.google.protobuf.Descriptors.FieldDescriptor field,
526 | Object value) {
527 | return (Builder) super.addRepeatedField(field, value);
528 | }
529 | @Override
530 | public Builder mergeFrom(com.google.protobuf.Message other) {
531 | if (other instanceof CommandDTO.Command) {
532 | return mergeFrom((CommandDTO.Command)other);
533 | } else {
534 | super.mergeFrom(other);
535 | return this;
536 | }
537 | }
538 |
539 | public Builder mergeFrom(CommandDTO.Command other) {
540 | if (other == CommandDTO.Command.getDefaultInstance()) return this;
541 | if (!other.getType().isEmpty()) {
542 | type_ = other.type_;
543 | onChanged();
544 | }
545 | if (!other.getKey().isEmpty()) {
546 | key_ = other.key_;
547 | onChanged();
548 | }
549 | if (!other.getValue().isEmpty()) {
550 | value_ = other.value_;
551 | onChanged();
552 | }
553 | this.mergeUnknownFields(other.unknownFields);
554 | onChanged();
555 | return this;
556 | }
557 |
558 | @Override
559 | public final boolean isInitialized() {
560 | return true;
561 | }
562 |
563 | @Override
564 | public Builder mergeFrom(
565 | com.google.protobuf.CodedInputStream input,
566 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
567 | throws java.io.IOException {
568 | CommandDTO.Command parsedMessage = null;
569 | try {
570 | parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry);
571 | } catch (com.google.protobuf.InvalidProtocolBufferException e) {
572 | parsedMessage = (CommandDTO.Command) e.getUnfinishedMessage();
573 | throw e.unwrapIOException();
574 | } finally {
575 | if (parsedMessage != null) {
576 | mergeFrom(parsedMessage);
577 | }
578 | }
579 | return this;
580 | }
581 |
582 | private Object type_ = "";
583 | /**
584 |      * <code>string type = 1;</code>
585 | */
586 | public String getType() {
587 | Object ref = type_;
588 | if (!(ref instanceof String)) {
589 | com.google.protobuf.ByteString bs =
590 | (com.google.protobuf.ByteString) ref;
591 | String s = bs.toStringUtf8();
592 | type_ = s;
593 | return s;
594 | } else {
595 | return (String) ref;
596 | }
597 | }
598 | /**
599 |      * <code>string type = 1;</code>
600 | */
601 | public com.google.protobuf.ByteString
602 | getTypeBytes() {
603 | Object ref = type_;
604 | if (ref instanceof String) {
605 | com.google.protobuf.ByteString b =
606 | com.google.protobuf.ByteString.copyFromUtf8(
607 | (String) ref);
608 | type_ = b;
609 | return b;
610 | } else {
611 | return (com.google.protobuf.ByteString) ref;
612 | }
613 | }
614 | /**
615 |      * <code>string type = 1;</code>
616 | */
617 | public Builder setType(
618 | String value) {
619 | if (value == null) {
620 | throw new NullPointerException();
621 | }
622 |
623 | type_ = value;
624 | onChanged();
625 | return this;
626 | }
627 | /**
628 |      * <code>string type = 1;</code>
629 | */
630 | public Builder clearType() {
631 |
632 | type_ = getDefaultInstance().getType();
633 | onChanged();
634 | return this;
635 | }
636 | /**
637 |      * <code>string type = 1;</code>
638 | */
639 | public Builder setTypeBytes(
640 | com.google.protobuf.ByteString value) {
641 | if (value == null) {
642 | throw new NullPointerException();
643 | }
644 | checkByteStringIsUtf8(value);
645 |
646 | type_ = value;
647 | onChanged();
648 | return this;
649 | }
650 |
651 | private Object key_ = "";
652 | /**
653 |      * <code>string key = 2;</code>
654 | */
655 | public String getKey() {
656 | Object ref = key_;
657 | if (!(ref instanceof String)) {
658 | com.google.protobuf.ByteString bs =
659 | (com.google.protobuf.ByteString) ref;
660 | String s = bs.toStringUtf8();
661 | key_ = s;
662 | return s;
663 | } else {
664 | return (String) ref;
665 | }
666 | }
667 | /**
668 |      * <code>string key = 2;</code>
669 | */
670 | public com.google.protobuf.ByteString
671 | getKeyBytes() {
672 | Object ref = key_;
673 | if (ref instanceof String) {
674 | com.google.protobuf.ByteString b =
675 | com.google.protobuf.ByteString.copyFromUtf8(
676 | (String) ref);
677 | key_ = b;
678 | return b;
679 | } else {
680 | return (com.google.protobuf.ByteString) ref;
681 | }
682 | }
683 | /**
684 |      * <code>string key = 2;</code>
685 | */
686 | public Builder setKey(
687 | String value) {
688 | if (value == null) {
689 | throw new NullPointerException();
690 | }
691 |
692 | key_ = value;
693 | onChanged();
694 | return this;
695 | }
696 | /**
697 |      * <code>string key = 2;</code>
698 | */
699 | public Builder clearKey() {
700 |
701 | key_ = getDefaultInstance().getKey();
702 | onChanged();
703 | return this;
704 | }
705 | /**
706 |      * <code>string key = 2;</code>
707 | */
708 | public Builder setKeyBytes(
709 | com.google.protobuf.ByteString value) {
710 | if (value == null) {
711 | throw new NullPointerException();
712 | }
713 | checkByteStringIsUtf8(value);
714 |
715 | key_ = value;
716 | onChanged();
717 | return this;
718 | }
719 |
720 | private Object value_ = "";
721 | /**
722 |      * <code>string value = 3;</code>
723 | */
724 | public String getValue() {
725 | Object ref = value_;
726 | if (!(ref instanceof String)) {
727 | com.google.protobuf.ByteString bs =
728 | (com.google.protobuf.ByteString) ref;
729 | String s = bs.toStringUtf8();
730 | value_ = s;
731 | return s;
732 | } else {
733 | return (String) ref;
734 | }
735 | }
736 | /**
737 |      * <code>string value = 3;</code>
738 | */
739 | public com.google.protobuf.ByteString
740 | getValueBytes() {
741 | Object ref = value_;
742 | if (ref instanceof String) {
743 | com.google.protobuf.ByteString b =
744 | com.google.protobuf.ByteString.copyFromUtf8(
745 | (String) ref);
746 | value_ = b;
747 | return b;
748 | } else {
749 | return (com.google.protobuf.ByteString) ref;
750 | }
751 | }
752 | /**
752 |      * <code>string value = 3;</code>
754 | */
755 | public Builder setValue(
756 | String value) {
757 | if (value == null) {
758 | throw new NullPointerException();
759 | }
760 |
761 | value_ = value;
762 | onChanged();
763 | return this;
764 | }
765 | /**
765 |      * <code>string value = 3;</code>
767 | */
768 | public Builder clearValue() {
769 |
770 | value_ = getDefaultInstance().getValue();
771 | onChanged();
772 | return this;
773 | }
774 | /**
775 |      * <code>string value = 3;</code>
776 | */
777 | public Builder setValueBytes(
778 | com.google.protobuf.ByteString value) {
779 | if (value == null) {
780 | throw new NullPointerException();
781 | }
782 | checkByteStringIsUtf8(value);
783 |
784 | value_ = value;
785 | onChanged();
786 | return this;
787 | }
788 | @Override
789 | public final Builder setUnknownFields(
790 | final com.google.protobuf.UnknownFieldSet unknownFields) {
791 | return super.setUnknownFieldsProto3(unknownFields);
792 | }
793 |
794 | @Override
795 | public final Builder mergeUnknownFields(
796 | final com.google.protobuf.UnknownFieldSet unknownFields) {
797 | return super.mergeUnknownFields(unknownFields);
798 | }
799 |
800 |
801 | // @@protoc_insertion_point(builder_scope:Command)
802 | }
803 |
804 | // @@protoc_insertion_point(class_scope:Command)
805 | private static final CommandDTO.Command DEFAULT_INSTANCE;
806 | static {
807 | DEFAULT_INSTANCE = new CommandDTO.Command();
808 | }
809 |
810 | public static CommandDTO.Command getDefaultInstance() {
811 | return DEFAULT_INSTANCE;
812 | }
813 |
814 | private static final com.google.protobuf.Parser<Command>
815 | PARSER = new com.google.protobuf.AbstractParser<Command>() {
816 | @Override
817 | public Command parsePartialFrom(
818 | com.google.protobuf.CodedInputStream input,
819 | com.google.protobuf.ExtensionRegistryLite extensionRegistry)
820 | throws com.google.protobuf.InvalidProtocolBufferException {
821 | return new Command(input, extensionRegistry);
822 | }
823 | };
824 |
825 | public static com.google.protobuf.Parser<Command> parser() {
826 | return PARSER;
827 | }
828 |
829 | @Override
830 | public com.google.protobuf.Parser<Command> getParserForType() {
831 | return PARSER;
832 | }
833 |
834 | @Override
835 | public CommandDTO.Command getDefaultInstanceForType() {
836 | return DEFAULT_INSTANCE;
837 | }
838 |
839 | }
840 |
841 | private static final com.google.protobuf.Descriptors.Descriptor
842 | internal_static_Command_descriptor;
843 | private static final
844 | com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
845 | internal_static_Command_fieldAccessorTable;
846 |
847 | public static com.google.protobuf.Descriptors.FileDescriptor
848 | getDescriptor() {
849 | return descriptor;
850 | }
851 | private static com.google.protobuf.Descriptors.FileDescriptor
852 | descriptor;
853 | static {
854 | String[] descriptorData = {
855 | "\n\rcommand.proto\"3\n\007Command\022\014\n\004type\030\001 \001(\t" +
856 | "\022\013\n\003key\030\002 \001(\t\022\r\n\005value\030\003 \001(\tB \n\020com.lzp." +
857 | "protocolB\nCommandDTOH\001b\006proto3"
858 | };
859 | com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
860 | new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
861 | public com.google.protobuf.ExtensionRegistry assignDescriptors(
862 | com.google.protobuf.Descriptors.FileDescriptor root) {
863 | descriptor = root;
864 | return null;
865 | }
866 | };
867 | com.google.protobuf.Descriptors.FileDescriptor
868 | .internalBuildGeneratedFileFrom(descriptorData,
869 | new com.google.protobuf.Descriptors.FileDescriptor[] {
870 | }, assigner);
871 | internal_static_Command_descriptor =
872 | getDescriptor().getMessageTypes().get(0);
873 | internal_static_Command_fieldAccessorTable = new
874 | com.google.protobuf.GeneratedMessageV3.FieldAccessorTable(
875 | internal_static_Command_descriptor,
876 | new String[] { "Type", "Key", "Value", });
877 | }
878 |
879 | // @@protoc_insertion_point(outer_class_scope)
880 | }
881 |