├── images
│   ├── README.md
│   └── recloud.png
├── _config.yml
├── advancedapi
│   ├── pubsub
│   │   ├── pubsub.png
│   │   └── README.md
│   ├── geospatial
│   │   └── README.md
│   ├── luascripts
│   │   └── README.md
│   ├── bitfield
│   │   └── README.md
│   ├── HyperLogLog
│   │   └── README.md
│   ├── modules
│   │   └── README.md
│   ├── transactions
│   │   └── README.md
│   ├── bitmap
│   │   └── README.md
│   └── README.md
├── enterprise
│   └── modules
│       ├── redisgraph
│       │   ├── redisgraph1.png
│       │   ├── Screen Shot 2020-03-08 at 8.16.31 PM.png
│       │   ├── Screen Shot 2020-03-08 at 8.24.09 PM.png
│       │   ├── Screen Shot 2020-03-08 at 8.26.38 PM.png
│       │   ├── Screen Shot 2020-03-08 at 8.50.26 PM.png
│       │   ├── Screen Shot 2020-03-08 at 10.29.11 PM.png
│       │   └── README.md
│       ├── redisai
│       │   ├── install
│       │   │   ├── run.sh
│       │   │   ├── test.sh
│       │   │   ├── install.sh
│       │   │   └── README.md
│       │   └── README.md
│       └── README.md
├── oss
│   ├── Screen Shot 2020-03-09 at 3.14.14 PM.png
│   ├── Screen Shot 2020-03-09 at 3.14.20 PM.png
│   ├── feature
│   │   └── in-memory
│   │       └── README.md
│   └── README.md
├── python
│   ├── push.py
│   └── push-catalog.py
├── kubernetes
│   ├── redis-ruby-k8s
│   │   ├── docker-images-redis-ruby
│   │   │   ├── redis.conf
│   │   │   └── Dockerfile
│   │   ├── start-cluster.sh
│   │   ├── redis-service.yaml
│   │   ├── redis-deployment.yaml
│   │   └── README.md
│   ├── gke
│   │   ├── redis-svc.yaml
│   │   ├── app-deployment.yaml
│   │   ├── redis-statefullset.yaml
│   │   └── README.md
│   ├── Okteto_Cloud
│   │   ├── README.md
│   │   ├── redis-cluster.yml
│   │   └── okteto-redis-cluster.md
│   └── install
│       └── README.md
├── sentinel
│   └── swarm
│       ├── collabnix-redis-sentinel-Dockercompose.yml
│       ├── collabnix-redis-sentinel-Dockerfile.yml
│       ├── collabnix-redis-sentinel-entrypoint.sh
│       ├── README.md
│       ├── redis-stack.yaml
│       └── collabnix-redis-sentinel.conf
├── ee
│   ├── active-active
│   │   ├── windows
│   │   │   ├── network-setup.sh
│   │   │   ├── README.md
│   │   │   ├── rs-setup.sh
│   │   │   └── connect-networks.sh
│   │   ├── network-setup.sh
│   │   ├── crdb.sh
│   │   ├── cluster-setup.sh
│   │   ├── rs-setup.sh
│   │   ├── connect-networks.sh
│   │   ├── README.md
│   │   └── gke
│   │       └── README.md
│   └── install
│       ├── multinode
│       │   └── README.md
│       └── README.md
├── sample
│   └── README.md
├── datastore
│   ├── intro.md
│   ├── scan
│   │   └── README.md
│   ├── set
│   │   └── README.md
│   ├── sorted-sets
│   │   └── README.md
│   ├── hashes
│   │   └── README.md
│   ├── strings
│   │   └── README.md
│   └── lists
│       └── README.md
├── aws
│   └── ElastiCache
│       └── README.md
├── sentine
│   └── swarm
│       ├── redis-master.yaml
│       └── redis-slave.yaml
├── docker
│   └── Getting_Started_with_Docker-Redis-Go.md
├── mac
│   └── README.md
├── _layouts
│   └── default.html
├── Redis-conf
│   ├── README.md
│   └── redis-as-lru-cache
│       └── README.md
├── install
│   └── ubuntu
│       └── 18.04
│           └── README.md
└── README.md

/images/README.md:
--------------------------------------------------------------------------------
1 | # Images
2 |
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/images/recloud.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/images/recloud.png
--------------------------------------------------------------------------------
/advancedapi/pubsub/pubsub.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/advancedapi/pubsub/pubsub.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/redisgraph1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/redisgraph1.png
--------------------------------------------------------------------------------
/oss/Screen Shot 2020-03-09 at 3.14.14 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/oss/Screen Shot 2020-03-09 at 3.14.14 PM.png
--------------------------------------------------------------------------------
/oss/Screen Shot 2020-03-09 at 3.14.20 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/oss/Screen Shot 2020-03-09 at 3.14.20 PM.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.16.31 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.16.31 PM.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.24.09 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.24.09 PM.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.26.38 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.26.38 PM.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.50.26 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 8.50.26 PM.png
--------------------------------------------------------------------------------
/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 10.29.11 PM.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/redisplanet/HEAD/enterprise/modules/redisgraph/Screen Shot 2020-03-08 at 10.29.11 PM.png
--------------------------------------------------------------------------------
/python/push.py:
--------------------------------------------------------------------------------
1 | import redis
2 | # Create a connection object
3 | r = redis.Redis(host='', port=6379)
4 | # Set a value for the key 'foo'
5 | r.set('foo', 'bar')
6 | # Retrieve and print the value for the key 'foo'
7 | print(r.get('foo'))
--------------------------------------------------------------------------------
/kubernetes/redis-ruby-k8s/docker-images-redis-ruby/redis.conf:
--------------------------------------------------------------------------------
1 | port 7000
2 | cluster-enabled yes
3 | cluster-config-file /data/nodes.conf
4 | cluster-node-timeout 5000
5 | appendonly yes
6 | appendfilename appendonly.aof
7 | dbfilename dump.rdb
8 | logfile /data/log.log
--------------------------------------------------------------------------------
/enterprise/modules/redisai/install/run.sh:
--------------------------------------------------------------------------------
1 | # Put this script inside the RedisAI main folder.
2 | # Run with 'bash run_redisai_torch.sh'.
3 | # Before running, check that the script is executable: 'chmod 755 run_redisai_torch.sh'
4 |
5 | redis-server --loadmodule install-cpu/redisai.so
--------------------------------------------------------------------------------
/sentinel/swarm/collabnix-redis-sentinel-Dockercompose.yml:
--------------------------------------------------------------------------------
1 | version: "3.3"
2 | services:
3 |   agile-redis:
4 |     image: 127.0.0.1:5000/agile-redis-sentinel
5 |     ports:
6 |       - '26379:26379'
7 |     build:
8 |       context: ./
9 |       dockerfile: ./agile-redis-sentinel-Dockerfile.yml
--------------------------------------------------------------------------------
/ee/active-active/windows/network-setup.sh:
--------------------------------------------------------------------------------
1 | echo "Script to create nat networks"
2 | docker network create -d nat redisnet1 --subnet=172.29.0.0/16 --gateway=172.29.0.1
3 | docker network create -d nat redisnet2 --subnet=172.30.0.0/16 --gateway=172.30.0.1
4 | docker network create -d nat redisnet3 --subnet=172.31.0.0/16 --gateway=172.31.0.1
--------------------------------------------------------------------------------
/kubernetes/gke/redis-svc.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 |   name: redis-cluster
6 | spec:
7 |   type: ClusterIP
8 |   ports:
9 |     - port: 6379
10 |       targetPort: 6379
11 |       name: client
12 |     - port: 16379
13 |       targetPort: 16379
14 |       name: gossip
15 |   selector:
16 |     app: redis-cluster
--------------------------------------------------------------------------------
/ee/active-active/network-setup.sh:
--------------------------------------------------------------------------------
1 |
2 | echo "Script to create bridge networks"
3 | docker network create -d bridge redisnet1 --subnet=172.18.0.0/16 --gateway=172.18.0.1
4 | docker network create -d bridge redisnet2 --subnet=172.19.0.0/16 --gateway=172.19.0.1
5 | docker network create -d bridge redisnet3 --subnet=172.20.0.0/16 --gateway=172.20.0.1
--------------------------------------------------------------------------------
/advancedapi/geospatial/README.md:
--------------------------------------------------------------------------------
1 | # Geospatial Data and Indexes
2 |
3 | Redis data structures include geo-specific commands that allow extremely simple, low-latency, and efficient processing of location data in memory.
4 |
5 | ## Related commands
6 |
7 | - GEOADD
8 | - GEODIST
9 | - GEOHASH
10 | - GEOPOS
11 | - GEORADIUS
12 | - GEORADIUSBYMEMBER
--------------------------------------------------------------------------------
/advancedapi/luascripts/README.md:
--------------------------------------------------------------------------------
1 | # Lua Scripts
2 |
3 | - It is possible to run Lua scripts server-side to save latency and bandwidth
4 | - EVAL and EVALSHA are used to evaluate scripts using the Lua interpreter
5 | - Can speed up data processing
6 | - Fewer round trips are required to complete a complex task
7 | - Brings the complex operation to the data
--------------------------------------------------------------------------------
/advancedapi/pubsub/README.md:
--------------------------------------------------------------------------------
1 | # Publish Subscribe
2 |
3 | - Redis is also a Pub/Sub server
4 |
5 | ## Example
6 |
7 | - Much like a topic subscription
8 | - Keyspace notifications allow clients to subscribe to Pub/Sub channels and receive events affecting the Redis data set in some way
9 |
10 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/advancedapi/pubsub/pubsub.png)
--------------------------------------------------------------------------------
/kubernetes/redis-ruby-k8s/start-cluster.sh:
--------------------------------------------------------------------------------
1 | echo 'yes' | /root/redis-trib.rb create --replicas 1 \
2 |   `getent hosts redis-0 | awk '{ print $1 }'`:7000 \
3 |   `getent hosts redis-1 | awk '{ print $1 }'`:7000 \
4 |   `getent hosts redis-2 | awk '{ print $1 }'`:7000 \
5 |   `getent hosts redis-3 | awk '{ print $1 }'`:7000 \
6 |   `getent hosts redis-4 | awk '{ print $1 }'`:7000 \
7 |   `getent hosts redis-5 | awk '{ print $1 }'`:7000
8 |
--------------------------------------------------------------------------------
/enterprise/modules/redisai/install/test.sh:
--------------------------------------------------------------------------------
1 | # Put this script inside the RedisAI main folder.
2 | # Run only after you have started a RedisAI server (see run_redisai_torch.sh).
3 |
4 | set -x
5 |
6 | redis-cli -x AI.MODELSET m TORCH GPU < ./test/test_data/pt-minimal.pt
7 |
8 | redis-cli AI.TENSORSET a FLOAT 2 2 VALUES 2 3 2 3
9 | redis-cli AI.TENSORSET b FLOAT 2 2 VALUES 2 3 2 3
10 |
11 | redis-cli AI.MODELRUN m INPUTS a b OUTPUTS c
12 |
13 | redis-cli AI.TENSORGET c VALUES
--------------------------------------------------------------------------------
/ee/active-active/crdb.sh:
--------------------------------------------------------------------------------
1 | # Create the CRDB
2 | echo ""
3 | echo "Creating a CRDB"
4 | docker exec -it redis-node-01 /opt/redislabs/bin/crdb-cli crdb create --name mycrdb --memory-size 5048mb --port 12000 --replication false --shards-count 1 --instance fqdn=cluster1.collabnix.com,username=ajeetraina@gmail.com,password=collab123 --instance fqdn=cluster2.collabnix.com,username=ajeetraina@gmail.com,password=collab123 --instance fqdn=cluster3.collabnix.com,username=ajeetraina@gmail.com,password=collab123
--------------------------------------------------------------------------------
/sample/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ```
4 | cat data.csv
5 | id1,1528961481,45.0,45.0
6 | id1,1528961482,45.1,45.1
7 | id2,1528961483,50.0,50.0
8 | id2,1528961484,50.1,50.0
9 | ```
10 |
11 |
12 | ```
13 | cat data.csv | awk -F "," '{print $1" "$2" "$3" "$4}' | xargs -n4 sh -c 'redis-cli -h 127.0.0.1 -p 6379 zadd $1 $2 "$3,$4"' sh
14 | ```
15 |
16 |
17 | # Sample Beer Dataset
18 |
19 | ```
20 | https://raw.githubusercontent.com/nickhould/craft-beers-dataset/master/data/processed/beers.csv
21 | ```
--------------------------------------------------------------------------------
/ee/active-active/windows/README.md:
--------------------------------------------------------------------------------
1 | # Demonstrating Geo-Distributed Active-Active Redis Enterprise Using Docker Desktop for Windows
2 |
3 | ## Pre-requisite
4 |
5 | - Docker Desktop for Windows
6 |
7 | ## Steps
8 |
9 | ## Clone the Repository
10 |
11 | ```
12 | git clone https://github.com/collabnix/redisplanet
13 | cd redisplanet/ee/active-active/
14 | ```
15 |
16 | ## Setting up the Network
17 |
18 | ```
19 | ./network-setup.sh
20 | ```
21 |
22 | ## Setting up Redis Enterprise
23 |
24 | ```
25 | ./rs-setup.sh
26 | ```
--------------------------------------------------------------------------------
/ee/active-active/cluster-setup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | echo "Creating clusters"
4 | docker exec -it redis-node-01 /opt/redislabs/bin/rladmin cluster create name cluster1.collabnix.com username ajeetraina@gmail.com password collab123
5 | docker exec -it redis-node-02 /opt/redislabs/bin/rladmin cluster create name cluster2.collabnix.com username ajeetraina@gmail.com password collab123
6 | docker exec -it redis-node-03 /opt/redislabs/bin/rladmin cluster create name cluster3.collabnix.com username ajeetraina@gmail.com password collab123
--------------------------------------------------------------------------------
/ee/active-active/rs-setup.sh:
--------------------------------------------------------------------------------
1 | docker run -d --cap-add sys_resource --network redisnet1 --name redis-node-01 -h redis-node-01 -p 8443:8443 -p 9443:9443 -p 12000:12000 --ip 172.18.0.2 redislabs/redis
2 | docker run -d --cap-add sys_resource --network redisnet2 --name redis-node-02 -h redis-node-02 -p 8444:8443 -p 9444:9443 -p 12001:12000 --ip 172.19.0.2 redislabs/redis
3 | docker run -d --cap-add sys_resource --network redisnet3 --name redis-node-03 -h redis-node-03 -p 8445:8443 -p 9445:9443 -p 12002:12000 --ip 172.20.0.2 redislabs/redis
--------------------------------------------------------------------------------
/ee/active-active/windows/rs-setup.sh:
--------------------------------------------------------------------------------
1 | docker run -d --cap-add sys_resource --network redisnet1 --name redis-node-01 -h redis-node-01 -p 8443:8443 -p 9443:9443 -p 12000:12000 --ip 172.29.0.2 redislabs/redis
2 | docker run -d --cap-add sys_resource --network redisnet2 --name redis-node-02 -h redis-node-02 -p 8444:8443 -p 9444:9443 -p 12001:12000 --ip 172.30.0.2 redislabs/redis
3 | docker run -d --cap-add sys_resource --network redisnet3 --name redis-node-03 -h redis-node-03 -p 8445:8443 -p 9445:9443 -p 12002:12000 --ip 172.31.0.2 redislabs/redis
--------------------------------------------------------------------------------
/ee/active-active/connect-networks.sh:
--------------------------------------------------------------------------------
1 | echo "connecting networks to containers"
2 |
3 | # connecting redisnet1 to node-02 and node-03
4 | docker network connect redisnet1 redis-node-02
5 | docker network connect redisnet1 redis-node-03
6 |
7 | # connecting redisnet2 to node-01 and node-03
8 | docker network connect redisnet2 redis-node-01
9 | docker network connect redisnet2 redis-node-03
10 |
11 | # connecting redisnet3 to node-01 and node-02
12 | docker network connect redisnet3 redis-node-01
13 | docker network connect redisnet3 redis-node-02
--------------------------------------------------------------------------------
/ee/active-active/windows/connect-networks.sh:
--------------------------------------------------------------------------------
1 | echo "connecting networks to containers"
2 |
3 | # connecting redisnet1 to node-02 and node-03
4 | docker network connect redisnet1 redis-node-02
5 | docker network connect redisnet1 redis-node-03
6 |
7 | # connecting redisnet2 to node-01 and node-03
8 | docker network connect redisnet2 redis-node-01
9 | docker network connect redisnet2 redis-node-03
10 |
11 | # connecting redisnet3 to node-01 and node-02
12 | docker network connect redisnet3 redis-node-01
13 | docker network connect redisnet3 redis-node-02
--------------------------------------------------------------------------------
/advancedapi/bitfield/README.md:
--------------------------------------------------------------------------------
1 |
2 | # BITFIELD
3 |
4 | - BITFIELD treats a Redis string as an array of bits
5 | - It can address specific integer fields of varying bit widths at arbitrary, not necessarily aligned, offsets
6 | - Bitfields are not an actual data type
7 |
8 | The command also handles increments and decrements of the specified integers
9 |
10 | ```
11 | > BITFIELD mykey incrby i5 100 1 get u4 0
12 | 1) (integer) 1
13 | 2) (integer) 0
14 | ```
15 |
16 | - More on BITFIELD operations: http://redis.io/commands/BITFIELD
--------------------------------------------------------------------------------
/ee/active-active/README.md:
--------------------------------------------------------------------------------
1 | # Building a 3-Node Active-Active Redis Enterprise Cluster for Developers Using Docker Desktop for Mac
2 |
3 |
4 | ## Prerequisite
5 |
6 | ### Clone the Repository
7 |
8 | ```
9 | git clone https://github.com/collabnix/redis
10 | cd redis/ee/active-active/
11 | ```
12 |
13 |
14 | ## Sequence
15 |
16 | ```
17 | sh network-setup.sh
18 | ```
19 |
20 | ```
21 | sh rs-setup.sh
22 | ```
23 |
24 | ```
25 | sh connect-networks.sh
26 | ```
27 |
28 | ```
29 | sh cluster-setup.sh
30 | ```
31 |
32 | ```
33 | sh crdb.sh
34 | ```
--------------------------------------------------------------------------------
/advancedapi/HyperLogLog/README.md:
--------------------------------------------------------------------------------
1 | # HyperLogLog – Estimating the Cardinality of a Set
2 |
3 | - A probabilistic data structure used to count unique things
4 | - Uses an algorithm that trades memory for precision
5 | - Does not actually store the values
6 |
7 | An HLL in Redis is encoded as a Redis string
8 |
9 | ## Commands
10 |
11 | - PFADD: add an element to the count
12 | - PFCOUNT: retrieve the current approximation of the number of unique elements added with PFADD so far
13 |
14 | ```
15 | > PFADD hll a b c d
16 | (integer) 1
17 | > PFCOUNT hll
18 | (integer) 4
19 | ```
--------------------------------------------------------------------------------
/advancedapi/modules/README.md:
--------------------------------------------------------------------------------
1 | # Modules
2 |
3 | - Refer to [this](https://redismodules.com) link
4 |
5 | - Redis Modules are add-ons to Redis which extend Redis to cover most of the popular use cases for any industry
6 | - It is possible to extend Redis functionality using Redis Modules
7 | - They seamlessly plug into open source Redis or enterprise-class Redis
8 | - Modules can be created by anyone
9 | - The Module Hub marketplace includes Modules created by Redis Labs as well as by others
10 |
11 | ## Examples: image processing, full-text search, secure password DB
12 |
13 | - For a list of available modules, go to [Redis Modules](https://redismodules.com/)
--------------------------------------------------------------------------------
/advancedapi/transactions/README.md:
--------------------------------------------------------------------------------
1 | # Transactions
2 |
3 | A Redis transaction is entered using the MULTI command
4 |
5 | - At this point the user can issue multiple commands
6 | - Instead of executing these commands, Redis will queue them
7 | - All the commands are executed once EXEC is called; the queued commands run as a single isolated operation
8 | - Calling DISCARD instead will flush the transaction queue and exit the transaction
9 |
10 | ```
11 | 127.0.0.1:6379> multi
12 | OK
13 | 127.0.0.1:6379> incr foo
14 | QUEUED
15 | 127.0.0.1:6379> incr bar
16 | QUEUED
17 | 127.0.0.1:6379> exec
18 | 1) (integer) 1
19 | 2) (integer) 1
20 | 127.0.0.1:6379>
21 | ```
--------------------------------------------------------------------------------
/kubernetes/gke/app-deployment.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 |   name: hit-counter-lb
6 | spec:
7 |   type: LoadBalancer
8 |   ports:
9 |     - port: 80
10 |       protocol: TCP
11 |       targetPort: 5000
12 |   selector:
13 |     app: myapp
14 | ---
15 | apiVersion: apps/v1
16 | kind: Deployment
17 | metadata:
18 |   name: hit-counter-app
19 | spec:
20 |   replicas: 1
21 |   selector:
22 |     matchLabels:
23 |       app: myapp
24 |   template:
25 |     metadata:
26 |       labels:
27 |         app: myapp
28 |     spec:
29 |       containers:
30 |         - name: myapp
31 |           image: calinrus/api-redis-ha:1.0
32 |           ports:
33 |             - containerPort: 5000
--------------------------------------------------------------------------------
/sentinel/swarm/collabnix-redis-sentinel-Dockerfile.yml:
--------------------------------------------------------------------------------
1 | FROM redis:5
2 |
3 | EXPOSE 26379
4 |
5 | ADD agile-redis-sentinel.conf /etc/redis/sentinel.conf
6 |
7 | RUN chown redis:redis /etc/redis/sentinel.conf
8 |
9 | ENV REDIS_MASTER_NAME=mymaster \
10 |     REDIS_MASTER_HOST=redis-master \
11 |     REDIS_MASTER_PORT=6379 \
12 |     SENTINEL_QUORUM=2 \
13 |     SENTINEL_DOWN_AFTER=30000 \
14 |     SENTINEL_PARALLEL_SYNC=1 \
15 |     SENTINEL_FAILOVER_TIMEOUT=180000
16 |
17 | COPY agile-redis-sentinel-entrypoint.sh /usr/local/bin/
18 |
19 | RUN chmod +x /usr/local/bin/agile-redis-sentinel-entrypoint.sh
20 |
21 | VOLUME /etc/redis
22 |
23 | ENTRYPOINT ["agile-redis-sentinel-entrypoint.sh"]
--------------------------------------------------------------------------------
/sentinel/swarm/collabnix-redis-sentinel-entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | sed -i "s/\$REDIS_MASTER_NAME/$REDIS_MASTER_NAME/g" /etc/redis/sentinel.conf
3 | sed -i "s/\$REDIS_MASTER_HOST/$REDIS_MASTER_HOST/g" /etc/redis/sentinel.conf
4 | sed -i "s/\$REDIS_MASTER_PORT/$REDIS_MASTER_PORT/g" /etc/redis/sentinel.conf
5 | sed -i "s/\$SENTINEL_QUORUM/$SENTINEL_QUORUM/g" /etc/redis/sentinel.conf
6 | sed -i "s/\$SENTINEL_DOWN_AFTER/$SENTINEL_DOWN_AFTER/g" /etc/redis/sentinel.conf
7 | sed -i "s/\$SENTINEL_PARALLEL_SYNC/$SENTINEL_PARALLEL_SYNC/g" /etc/redis/sentinel.conf
8 | sed -i "s/\$SENTINEL_FAILOVER_TIMEOUT/$SENTINEL_FAILOVER_TIMEOUT/g" /etc/redis/sentinel.conf
9 | exec docker-entrypoint.sh redis-server /etc/redis/sentinel.conf --sentinel
--------------------------------------------------------------------------------
/enterprise/modules/README.md:
--------------------------------------------------------------------------------
1 | # Redis Enterprise Modules
2 |
3 | Redis Labs developed and certified these modules for use with Redis Enterprise Software (RS):
4 |
5 | ## RedisBloom
6 | Enables RS to have a scalable Bloom filter as a data type. Bloom filters are probabilistic data structures that do a very good job of quickly determining whether something is contained within a set.
7 |
8 | ## RedisGraph
9 | RedisGraph is the first queryable property graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph. RedisGraph uses Cypher as its query language.
10 |
11 | ## RedisJSON
12 | Provides the convenience of JSON as a built-in data type and makes it easy to address nested data via a path.
13 |
14 | ## RediSearch
15 | This module turns RS into a supercharged distributed in-memory full-text indexing and search beast.
--------------------------------------------------------------------------------
/datastore/intro.md:
--------------------------------------------------------------------------------
1 |
2 | # Data Store
3 |
4 | - A single very large data store
5 | - A key can be any string/byte array
6 | - A value can be any of the following types:
7 |   - Strings (byte arrays)
8 |   - Lists
9 |   - Sets
10 |   - ZSET (Sorted Sets)
11 |   - Hashes
12 |   - Bitmaps and HyperLogLogs
13 |
14 |
15 | ## Key & Value
16 |
17 | - Both key and value can be at most 512 megabytes in length
18 | - Within a complex data structure, every value in the data structure can be at most 512 megabytes
19 | - Key/value basic command families:
20 |   - Setters – associate a key with a value
21 |     - A setter command name depends on the data structure you use
22 |   - Getters – take a key and return the value
23 |     - A getter command name depends on the data structure you use
24 | - The EXISTS command returns 1 or 0 to signal whether a given key exists in the database
25 | - The DEL command deletes a key and its associated value
--------------------------------------------------------------------------------
/enterprise/modules/redisai/install/install.sh:
--------------------------------------------------------------------------------
1 | sudo apt update
2 | sudo apt install -y git build-essential ninja-build cmake python3-pip python3-cffi redis unzip wget
3 |
4 | git clone https://github.com/RedisAI/RedisAI.git
5 |
6 | cd RedisAI
7 |
8 | mkdir build
9 |
10 | WITH_PT=0 WITH_TF=0 WITH_TFLITE=0 WITH_ORT=0 bash get_deps.sh
11 |
12 | mv deps/linux-arm64v8-cpu deps/linux-x64-cpu
13 |
14 | mkdir deps/linux-x64-cpu/libtorch
15 |
16 | cd deps/linux-x64-cpu/libtorch
17 |
18 |
19 | wget https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
20 | sudo apt install -y libopenblas-base
21 |
22 | unzip torch-1.5.0-cp36-cp36m-linux_aarch64.whl
23 | mv torch/* .
24 | 25 | cd - 26 | 27 | cd build 28 | 29 | cmake -DBUILD_TF=OFF -DBUILD_TFLITE=OFF -DBUILD_TORCH=ON -DBUILD_ORT=OFF -DCMAKE_BUILD_TYPE=Release ../ 30 | 31 | make -j4 && make install 32 | -------------------------------------------------------------------------------- /ee/install/multinode/README.md: -------------------------------------------------------------------------------- 1 | # Setting up Multi-Node Redis Enterprise Cluster 2 | 3 | 4 | ## Pre-requisite 5 | 6 | - Docker Desktop for Mac 7 | - Ensure that atleast 3+ GB Memory is allocated 8 | 9 | 10 | ## Steps 11 | 12 | 13 | ``` 14 | docker run -d --cap-add sys_resource --name redis-sys1 -p 8443:8443 -p 9443:9443 -p 12000:12000 redislabs/redis 15 | a9d2cb8c34c8c2237337d015dff5b4929b0928cbd7004c2d4f718c95ceb5056d 16 | [Captains-Bay]🚩 > 17 | ``` 18 | 19 | ``` 20 | [Captains-Bay]🚩 > docker run -d --cap-add sys_resource --name redis-sys2 -p 8444:8443 -p 9444:9443 -p 12001:12000 redislabs/redis 21 | f587860d3d107325e0d74217c22aef6984db9bf04510421410c98b2cb51a8473 22 | [Captains-Bay]🚩 > docker run -d --cap-add sys_resource --name redis-sys3 -p 8445:8443 -p 9445:9443 -p 12002:12000 redislabs/redis 23 | 568d677498755fc94048d978f59dcee4a00580b972703d0f44af462a601c24a6 24 | ``` 25 | 26 | 27 | 28 | 29 | 30 | 31 | -------------------------------------------------------------------------------- /aws/ElastiCache/README.md: -------------------------------------------------------------------------------- 1 | # Elasticache 2 | 3 | - Fully Managed Redis or Memcache compatible 4 | - Low Latency 5 | - In-memory data store 6 | 7 | 8 | # Features 9 | 10 | - In Memory data store and cache for sub millisecond response times 11 | - AWS Manages all HW and software 12 | - Easily Scalable - Read Scaling with Replicas, Write and memory scaling with sharding 13 | 14 | 15 | # Elasticcache Redis 16 | 17 | - Cluster upto 6.1 TB of in-memory data 18 | - Scale out and Scale in 19 | - HA - Read Replicas, Multiple Primaries, Multi-AZ with 
automatic failover 20 | - Used as a database, cache, message broker & queue 21 | 22 | # Did you know? 23 | 24 | - A shard is a partition of your data and is comprised of one primary and up to five read replicas. 25 | - Elasticache cluster consists of 1 to 15 shards 26 | - Each shard has primary node and upto 5 replica nodes across muliple AZs for read scaling 27 | - Increase writes by adding shards 28 | 29 | 30 | 31 | -------------------------------------------------------------------------------- /advancedapi/bitmap/README.md: -------------------------------------------------------------------------------- 1 | # BITMAP 2 | 3 | - Bitmaps are a set of bit-oriented operations defined on the String type 4 | - Are not an actual data type 5 | - Since strings are binary safe blobs and their maximum length is 512 MB, Bits are set and retrieved using 6 | - SETBIT and  7 | - GETBIT commands 8 | 9 | 10 | ``` 11 | 127.0.0.1:6379> SETBIT key 10 1 12 | (integer) 0 13 | 14 | 127.0.0.1:6379> getbit key 10 15 | (integer) 1 16 | 127.0.0.1:6379> getbit key 11 17 | (integer) 0 18 | 127.0.0.1:6379> 19 | ``` 20 | 21 | # BITMAP BITOP Command 22 | 23 | - The BITOP command supports four bitwise operations:  24 | - AND,  25 | - OR,  26 | - XOR and 27 | - NOT 28 | 29 | Thus the valid forms to call the command are: 30 | 31 | ``` 32 | >BITOP AND destkey srckey1 srckey2 srckey3 ... srckeyN 33 | >BITOP OR destkey srckey1 srckey2 srckey3 ... srckeyN 34 | >BITOP XOR destkey srckey1 srckey2 srckey3 ... 
srckeyN 35 | >BITOP NOT destkey srckey 36 | ``` 37 | -------------------------------------------------------------------------------- /sentinel/swarm/README.md: -------------------------------------------------------------------------------- 1 | # High Availability with Redis Open Source using Redis Sentinel 2 | 3 | 4 | ## Pre-requisite 5 | 6 | - Play with Docker Platform 7 | - Setting up Local Private Registry 8 | 9 | ``` 10 | docker service create --name registry --publish published=5000,target=5000 registry:2 11 | ``` 12 | 13 | ``` 14 | $ docker service ls 15 | ID NAME MODE REPLICAS IMAGE PORTS 16 | y2jtw5wom7lo registry replicated 1/1 registry:2 *:5000->5000/tcp 17 | ``` 18 | 19 | 20 | ## Our Setup 21 | 22 | 23 | ``` 24 | For Redis Sentinel: 25 | 26 | image: 127.0.0.1:5000/collabnix-redis-sentinel 27 | 28 | For Redis Master and slaves: 29 | image: 127.0.0.1:5000/collabnix-redis 30 | 31 | For our python application: 32 | image: 127.0.0.1:5000/collabnix-python-app 33 | ``` 34 | 35 | 36 | ## Create a directory 37 | 38 | ``` 39 | $ mkdir stackredis 40 | $ cd stackredis 41 | ``` 42 | 43 | 44 | 45 | 46 | 47 | -------------------------------------------------------------------------------- /python/push-catalog.py: -------------------------------------------------------------------------------- 1 | # import the Redis client 2 | 3 | import redis 4 | 5 | 6 | 7 | # Create a redis client 8 | 9 | redisClient = redis.StrictRedis(host='', 10 | 11 | port=6379, 12 | 13 | db=0) 14 | 15 | 16 | 17 | # Add key value pairs to the Redis hash 18 | 19 | redisClient.hset("users:linus", "name", "Linus" ) 20 | 21 | redisClient.hset("users:linus", "email", "linus@linuxfoundation.com") 22 | 23 | redisClient.hset("users:linus", "phone", "+1234567891") 24 | 25 | 26 | 27 | # Retrieve the value for a specific key 28 | 29 | print("Value for the key 3 is") 30 | 31 | print(redisClient.hget("users:linus", "phone")) 32 | 33 | 34 | 35 | print("The keys present in the Redis hash:"); 36 | 37 | 
print(redisClient.hkeys("users:linus")) 38 | 39 | 40 | 41 | print("The values present in the Redis hash:") 42 | 43 | print(redisClient.hvals("users:linus")) 44 | 45 | 46 | 47 | print("The keys and values present in the Redis hash are:") 48 | 49 | print(redisClient.hgetall("users:linus")) 50 | -------------------------------------------------------------------------------- /datastore/scan/README.md: -------------------------------------------------------------------------------- 1 | # SCAN 2 | 3 | - Getting a list of keys 4 | - The SCAN command and the closely related commands SSCAN, HSCAN and ZSCAN are used in order to incrementally iterate over a collection of elements. 5 | - You can use it as a cursor for iterating over key-value pairs 6 | 7 | Example using SCAN in command line mode: 8 | 9 | ``` 10 | ubuntu@ip-172-31-25-81:~$ redis-cli 11 | 127.0.0.1:6379> set a1 200 12 | OK 13 | 127.0.0.1:6379> set b1 3000 14 | OK 15 | 127.0.0.1:6379> set c1 4000 16 | OK 17 | 127.0.0.1:6379> exit 18 | ubuntu@ip-172-31-25-81:~$ redis-cli --scan | head -n 20 19 | a1 20 | c1 21 | b1 22 | ubuntu@ip-172-31-25-81:~$ 23 | ``` 24 | 25 | ``` 26 | ubuntu@ip-172-31-25-81:~$ redis-cli --scan --pattern "*1" 27 | a1 28 | c1 29 | b1 30 | ubuntu@ip-172-31-25-81:~$ 31 | ``` 32 | 33 | 34 | ## More on SCAN 35 | 36 | - SCAN returns 2 values 37 | - The next cursor to use in the next scan 38 | - An array of keys 39 | - Use the next cursor in the next SCAN command 40 | 41 | ``` 42 | ubuntu@ip-172-31-25-81:~$ redis-cli 43 | 127.0.0.1:6379> scan 0 44 | 1) "0" 45 | 2) 1) "a1" 46 | 2) "c1" 47 | 3) "b1" 48 | 127.0.0.1:6379> 49 | ``` 50 | -------------------------------------------------------------------------------- /sentine/swarm/redis-master.yaml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | services: 3 | redismaster: 4 | image: redis:5.0.5 5 | command: | 6 | sh -c '\ 7 | redis-server \ 8 | --dbfilename redis.rdb \ 9 | --dir /opt/scripts/ \ 10 | '; 11 | logging: 12 |
driver: "fluentd" 13 | options: 14 | tag: redismaster 15 | ports: 16 | - "6379:6379" 17 | stop_grace_period: 1m30s 18 | volumes: 19 | - "/var/run/docker.sock:/var/run/docker.sock" 20 | - "/opt/scripts/:/opt/scripts/" 21 | deploy: 22 | replicas: 1 23 | update_config: 24 | parallelism: 1 25 | delay: 10s 26 | order: stop-first 27 | failure_action: rollback 28 | rollback_config: 29 | parallelism: 1 30 | delay: 10s 31 | order: stop-first 32 | restart_policy: 33 | condition: on-failure 34 | delay: 10s 35 | max_attempts: 5 36 | window: 180s 37 | placement: 38 | constraints: [node.role == manager] 39 | labels: [APP=Redis Master API] 40 | networks: 41 | - cp1 42 | networks: 43 | cp1: 44 | -------------------------------------------------------------------------------- /kubernetes/redis-ruby-k8s/docker-images-redis-ruby/Dockerfile: -------------------------------------------------------------------------------- 1 | # Redis base image with Ruby support added 2 | FROM redis:latest 3 | 4 | # Copy redis-trib.rb 5 | COPY redis-trib.rb /root/redis-trib.rb 6 | 7 | # Copy redis.conf, port=7000, datadir=/data/ 8 | RUN mkdir -p /redis-conf 9 | COPY redis.conf /redis-conf/redis.conf 10 | 11 | # Update apt repo 12 | RUN echo 'deb http://mirrors.aliyun.com/debian wheezy main contrib non-free \n\ 13 | deb-src http://mirrors.aliyun.com/debian wheezy main contrib non-free \n\ 14 | deb http://mirrors.aliyun.com/debian wheezy-updates main contrib non-free \n\ 15 | deb-src http://mirrors.aliyun.com/debian wheezy-updates main contrib non-free \n\ 16 | deb http://mirrors.aliyun.com/debian-security wheezy/updates main contrib non-free \n\ 17 | deb-src http://mirrors.aliyun.com/debian-security wheezy/updates main contrib non-free' > /etc/apt/sources.list 18 | 19 | RUN apt-get update 20 | 21 | # Install rubygems 22 | RUN apt-get install -y rubygems 23 | 24 | # Install the redis gem 25 | RUN gem install --no-rdoc redis 26 | 27 | # Run the command below to build the image 28 | # docker build
docker-images-redis-ruby/ -t redis:ruby 29 | # Run the command below to run the container 30 | # docker run -ti --rm redis:ruby /bin/bash 31 | -------------------------------------------------------------------------------- /sentine/swarm/redis-slave.yaml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | services: 3 | redisslave: 4 | image: redis:5.0.5 5 | logging: 6 | driver: "fluentd" 7 | options: 8 | tag: redisslave 9 | ports: 10 | - "6380:6380" 11 | stop_grace_period: 1m30s 12 | volumes: 13 | - "/var/run/docker.sock:/var/run/docker.sock" 14 | - "/opt/scripts/:/opt/scripts/" 15 | command: | 16 | sh -c '\ 17 | redis-server \ 18 | --slave-announce-ip $$(hostname -i) \ 19 | --port 6380 \ 20 | --slaveof redismaster 6379 \ 21 | --save 60 2 \ 22 | --dbfilename redis.rdb \ 23 | --dir /opt/scripts/ \ 24 | '; 25 | deploy: 26 | mode: global 27 | update_config: 28 | parallelism: 1 29 | delay: 10s 30 | order: stop-first 31 | failure_action: rollback 32 | rollback_config: 33 | parallelism: 1 34 | delay: 10s 35 | order: stop-first 36 | restart_policy: 37 | condition: on-failure 38 | delay: 10s 39 | max_attempts: 5 40 | window: 180s 41 | placement: 42 | constraints: [node.role == manager] 43 | labels: [APP=Redis Slave API] 44 | networks: 45 | - cp1 46 | networks: 47 | cp1: 48 | -------------------------------------------------------------------------------- /ee/active-active/gke/README.md: -------------------------------------------------------------------------------- 1 | # Setting up a Geo-Distributed Active-Active Redis Enterprise Cluster on Google Cloud Platform 2 | 3 | ## Pre-requisite 4 | 5 | - Create 3 Ubuntu 18.04 LTS instances, each with 6GB of memory, on GCP 6 | - Choose Asia for the 1st, US for the 2nd & Europe for the 3rd instance 7 | - Install Docker on each of these nodes 8 | 9 | ## Cloning the Repository 10 | 11 | ``` 12 | git clone https://github.com/collabnix/redisplanet 13 | cd redisplanet/ee/active-active 14 | ``` 15 | 16 | ``` 17 | $
sudo docker run -d --cap-add sys_resource --name redis-node-01 -p 8443:8443 -p 9443:9443 -p 12000:12000 redislabs/redis 18 | ``` 19 | 20 | ``` 21 | sudo docker ps 22 | CONTAINER ID IMAGE COMMAND CREATED 23 | STATUS PORTS 24 | 25 | NAMES 26 | 36f10d5e16ec redislabs/redis "/opt/start.sh" 19 seconds ago 27 | Up 11 seconds 53/tcp, 5353/tcp, 8001/tcp, 8070/tcp, 0.0.0.0:8443 28 | ->8443/tcp, 8080/tcp, 0.0.0.0:9443->9443/tcp, 10000-11999/tcp, 12001-19999/ 29 | tcp, 0.0.0.0:12000->12000/tcp redis-node-01 30 | ``` 31 | 32 | Wait about a minute for Redis Enterprise to come up. 33 | 34 | -------------------------------------------------------------------------------- /datastore/set/README.md: -------------------------------------------------------------------------------- 1 | # Set 2 | 3 | - Unordered collections of strings 4 | - SADD adds new elements to a set 5 | - It's also possible to do a number of other operations against sets like: 6 | - testing if a given element already exists 7 | - performing the intersection, union or
difference between multiple sets, and so forth 8 | 9 | 10 | ``` 11 | 127.0.0.1:6379> sadd myset 1 2 3 12 | (integer) 3 13 | 127.0.0.1:6379> smembers myset 14 | 1) "1" 15 | 2) "2" 16 | 3) "3" 17 | 127.0.0.1:6379> sismember myset 3 18 | (integer) 1 19 | 127.0.0.1:6379> 20 | ``` 21 | 22 | ## Set operations UNION etc… 23 | 24 | 25 | ## SUNION 26 | Returns the members of the set resulting 
from the union of all the given sets 27 | 28 | ## SINTER 29 | Returns the members of the set resulting 
from the intersection of all the given sets 30 | 31 | 32 | ## SDIFF 33 | Returns the members of the set resulting 
from the difference between the first set 
and all the successive sets 34 | 35 | 36 | ``` 37 | 127.0.0.1:6379> sadd key1 "a" "b" "c" 38 | (integer) 3 39 | 127.0.0.1:6379> sadd key2 "c" "d" "e" 40 | (integer) 3 41 | 127.0.0.1:6379> sunion key1 key2 42 | 1) "a" 43 | 2) "b" 44 | 3) "c" 45 | 4) "e" 46 | 5) "d" 47 | 127.0.0.1:6379> 48 | ``` 49 | 50 | ``` 51 | 127.0.0.1:6379> sdiff key1 key2 52 | 1) "a" 53 | 2) "b" 54 | 127.0.0.1:6379> sdiff key2 key1 55 | 1) "e" 56 | 2) "d" 57 | 127.0.0.1:6379> sinter key1 key2 58 | 1) "c" 59 | 127.0.0.1:6379> 60 | ``` 61 | -------------------------------------------------------------------------------- /kubernetes/gke/redis-statefullset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: redis-cluster 6 | data: 7 | update-node.sh: | 8 | #!/bin/sh 9 | REDIS_NODES="/data/nodes.conf" 10 | sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES} 11 | exec "$@" 12 | redis.conf: |+ 13 | cluster-enabled yes 14 | cluster-require-full-coverage no 15 | cluster-node-timeout 15000 16 | cluster-config-file /data/nodes.conf 17 | cluster-migration-barrier 1 18 | appendonly yes 19 | protected-mode no 20 | --- 21 | apiVersion: apps/v1 22 | kind: StatefulSet 23 | metadata: 24 | name: redis-cluster 25 | spec: 26 | serviceName: redis-cluster 27 | replicas: 6 28 | selector: 29 | matchLabels: 30 | app: redis-cluster 31 | template: 32 | metadata: 33 | labels: 34 | app: redis-cluster 35 | spec: 36 | containers: 37 | - name: redis 38 | image: redis:5.0.1-alpine 39 | ports: 40 | - containerPort: 6379 41 | name: client 42 | - containerPort: 16379 43 | name: gossip 44 | command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"] 45 | env: 46 | - name: POD_IP 47 | valueFrom: 48 | fieldRef: 49 | fieldPath: status.podIP 50 | volumeMounts: 51 | - name: conf 52 | mountPath: /conf 53 | readOnly: false 54 | - name: data 55 | mountPath: /data 56 | 
readOnly: false 57 | volumes: 58 | - name: conf 59 | configMap: 60 | name: redis-cluster 61 | defaultMode: 0755 62 | volumeClaimTemplates: 63 | - metadata: 64 | name: data 65 | spec: 66 | accessModes: [ "ReadWriteOnce" ] 67 | resources: 68 | requests: 69 | storage: 1Gi 70 | -------------------------------------------------------------------------------- /sentinel/swarm/redis-stack.yaml: -------------------------------------------------------------------------------- 1 | version: "3.3" 2 | services: 3 | 4 | redis-sentinel: 5 | image: 127.0.0.1:5000/collabnix-redis-sentinel 6 | volumes: 7 | - sentinel-conf:/etc/redis 8 | deploy: 9 | mode: global 10 | environment: 11 | - REDIS_MASTER_HOST=${REDIS_MASTER_IP} 12 | - SENTINEL_DOWN_AFTER=5000 13 | - SENTINEL_FAILOVER=15000 14 | networks: 15 | - host 16 | 17 | redis-master: 18 | image: 127.0.0.1:5000/collabnix-redis 19 | volumes: 20 | - redis-data:/data 21 | - redis-conf:/etc/redis 22 | deploy: 23 | mode: global 24 | placement: 25 | constraints: 26 | - node.hostname == ${REDIS_MASTER_HOSTNAME} 27 | networks: 28 | - host 29 | 30 | redis-slave-node1: 31 | image: 127.0.0.1:5000/collabnix-redis 32 | volumes: 33 | - redis-data:/data 34 | - redis-conf:/etc/redis 35 | environment: 36 | - REDIS_MASTER_HOST=${REDIS_MASTER_IP} 37 | deploy: 38 | mode: global 39 | placement: 40 | constraints: 41 | - node.hostname == ${REDIS_SLAVE_NODE1_HOSTNAME} 42 | networks: 43 | - host 44 | 45 | redis-slave-node2: 46 | image: 127.0.0.1:5000/collabnix-redis 47 | volumes: 48 | - redis-data:/data 49 | - redis-conf:/etc/redis 50 | environment: 51 | - REDIS_MASTER_HOST=${REDIS_MASTER_IP} 52 | deploy: 53 | mode: global 54 | placement: 55 | constraints: 56 | - node.hostname == ${REDIS_SLAVE_NODE2_HOSTNAME} 57 | networks: 58 | - host 59 | 60 | agile-python-app: 61 | image: 127.0.0.1:5000/collabnix-python-app 62 | ports: 63 | - "38000:611" 64 | deploy: 65 | mode: replicated 66 | replicas: 3 67 | environment: 68 | - SENTINEL_HOST=${SENTINEL_IP} 69 | 70 |
volumes: 71 | redis-data: 72 | redis-conf: 73 | sentinel-conf: 74 | networks: 75 | host: 76 | external: true 77 | -------------------------------------------------------------------------------- /docker/Getting_Started_with_Docker-Redis-Go.md: -------------------------------------------------------------------------------- 1 | ## Getting Started with Docker + Redis + Go 2 | 3 | 4 | ## Prerequisites 5 | 6 | - Your Local Development Environment 7 | 8 | - Docker binaries installed, up and running 9 | - Go version 1.12+ 10 | - Text editor such as Visual Studio Code 11 | 12 | ## Install Redis Using Docker Locally 13 | 14 | ``` 15 | sangam:~ sangam$ docker pull redis 16 | Using default tag: latest 17 | latest: Pulling from library/redis 18 | bc51dd8edc1b: Pull complete 19 | 37d80eb324ee: Pull complete 20 | 392b7748dfaf: Pull complete 21 | 48df82c3534d: Pull complete 22 | 2ec2bb0b4b0e: Pull complete 23 | 1302bce0b2cb: Pull complete 24 | Digest: sha256:7b84b346c01e5a8d204a5bb30d4521bcc3a8535bbf90c660b8595fad248eae82 25 | Status: Downloaded newer image for redis:latest 26 | docker.io/library/redis:latest 27 | sangam:~ sangam$ docker run --name redis-test-instance -p 6379:6379 -d redis 28 | 99514321cff7c75fb802933d0edf276bc94f71f70cebca1fb9149b45a2c4a072 29 | 30 | ``` 31 | 32 | ## Using `go-redis` 33 | 34 | 35 | Create `main.go` with the following content: 36 | 37 | ``` 38 | package main 39 | 40 | import ( 41 | "fmt" 42 | "github.com/go-redis/redis" 43 | ) 44 | 45 | func main() { 46 | fmt.Println("gopherlabs.collabnix.com") 47 | 48 | client := redis.NewClient(&redis.Options{ 49 | Addr: "localhost:6379", 50 | Password: "", 51 | DB: 0, 52 | }) 53 | 54 | pong, err := client.Ping().Result() 55 | fmt.Println(pong, err) 56 | 57 | } 58 | 59 | 60 | ``` 61 | 62 | ## Output 63 | 64 | 65 | When we go to run this now, we will see that our incredibly simple Go application has been able to 66 | successfully `ping` the redis instance that we have spun up and returned a successful `PONG` response.
67 | 68 | 69 | ``` 70 | sangam:golang-daily sangam$ go run main.go 71 | gopherlabs.collabnix.com 72 | PONG 73 | sangam:golang-daily sangam$ 74 | 75 | 76 | ``` 77 | 78 | 79 | # Contributors 80 | 81 | [Sangam Biradar](https://twitter.com/BiradarSangam) -------------------------------------------------------------------------------- /kubernetes/Okteto_Cloud/README.md: -------------------------------------------------------------------------------- 1 | # Redis Open Source for Developers running on a Kubernetes Cluster using Okteto Cloud 2 | 3 | 4 | ## Pre-requisite 5 | 6 | - Create an Okteto Cloud Account - (https://okteto.com/) login via GitHub Credentials 7 | 8 | ### On Mac or Linux 9 | 10 | 11 | - Install the okteto CLI for Mac/Linux users: 12 | 13 | ``` 14 | curl https://get.okteto.com -sSfL | sh 15 | ``` 16 | 17 | ### On Windows: 18 | 19 | Download (https://downloads.okteto.com/cli/okteto.exe) and add it to your $PATH. 20 | 21 | ## Connect with the Okteto CLI and Okteto namespaces 22 | 23 | ``` 24 | sangam:~ sangam$ okteto login 25 | Authentication will continue in your default browser 26 | You can also open a browser and navigate to the following address: 27 | ---- you will get a link for the redirect via web browser 28 | ✓ Logged in as sangam14 29 | Run `okteto namespace` to switch your context and download your Kubernetes credentials. 30 | sangam:~ sangam$ okteto namespace 31 | ✓ Updated context 'cloud_okteto_com' in '/Users/sangam/.kube/config' 32 | sangam:~ sangam$ 33 | ``` 34 | 35 | This is how a developer can connect to the environment. 36 | 37 | ## Accessing the Okteto Cloud Dashboard 38 | 39 | - https://cloud.okteto.com/ 40 | 41 | ## Fetching the Password: 42 | 43 | ``` 44 | export REDIS_PASSWORD=$(kubectl get secret --namespace sangam14 redis -o jsonpath="{.data.redis-password}" | base64 --decode) 45 | ``` 46 | 47 | ## Connecting to the Redis server: 48 | 49 | 1.
Run a Redis pod that you can use as a client: 50 | ``` 51 | kubectl run --namespace sangam14 redis-client --rm --tty -i --restart='Never' \ 52 | --env REDIS_PASSWORD=$REDIS_PASSWORD \ 53 | --image docker.io/bitnami/redis:5.0.5-debian-9-r141 -- bash 54 | ``` 55 | 2. Connect using the Redis CLI: 56 | 57 | ``` 58 | redis-cli -h redis -a $REDIS_PASSWORD 59 | ``` 60 | 61 | 3. To connect to your database from outside the cluster execute the following commands: 62 | 63 | ``` 64 | kubectl port-forward --namespace sangam14 svc/redis 6379:6379 & redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD 65 | ``` 66 | 67 | -------------------------------------------------------------------------------- /kubernetes/redis-ruby-k8s/redis-service.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: redis-0 6 | labels: 7 | app: redis 8 | node: node0 9 | spec: 10 | type: LoadBalancer 11 | ports: 12 | - name: port-7000 13 | port: 7000 14 | - name: port-17000 15 | port: 17000 16 | selector: 17 | app: redis 18 | node: node0 19 | 20 | --- 21 | apiVersion: v1 22 | kind: Service 23 | metadata: 24 | name: redis-1 25 | labels: 26 | app: redis 27 | node: node1 28 | spec: 29 | type: LoadBalancer 30 | ports: 31 | - name: port-7000 32 | port: 7000 33 | - name: port-17000 34 | port: 17000 35 | selector: 36 | app: redis 37 | node: node1 38 | 39 | --- 40 | apiVersion: v1 41 | kind: Service 42 | metadata: 43 | name: redis-2 44 | labels: 45 | app: redis 46 | node: node2 47 | spec: 48 | type: LoadBalancer 49 | ports: 50 | - name: port-7000 51 | port: 7000 52 | - name: port-17000 53 | port: 17000 54 | selector: 55 | app: redis 56 | node: node2 57 | 58 | --- 59 | apiVersion: v1 60 | kind: Service 61 | metadata: 62 | name: redis-3 63 | labels: 64 | app: redis 65 | node: node3 66 | spec: 67 | type: LoadBalancer 68 | ports: 69 | - name: port-7000 70 | port: 7000 71 | - name: port-17000 72 | port: 17000 73 | 
selector: 74 | app: redis 75 | node: node3 76 | 77 | --- 78 | apiVersion: v1 79 | kind: Service 80 | metadata: 81 | name: redis-4 82 | labels: 83 | app: redis 84 | node: node4 85 | spec: 86 | type: LoadBalancer 87 | ports: 88 | - name: port-7000 89 | port: 7000 90 | - name: port-17000 91 | port: 17000 92 | selector: 93 | app: redis 94 | node: node4 95 | 96 | --- 97 | apiVersion: v1 98 | kind: Service 99 | metadata: 100 | name: redis-5 101 | labels: 102 | app: redis 103 | node: node5 104 | spec: 105 | type: LoadBalancer 106 | ports: 107 | - name: port-7000 108 | port: 7000 109 | - name: port-17000 110 | port: 17000 111 | selector: 112 | app: redis 113 | node: node5 114 | -------------------------------------------------------------------------------- /mac/README.md: -------------------------------------------------------------------------------- 1 | ## Pre-requisite 2 | 3 | ### MacOS 4 | 5 | 6 | - Installing Redis 7 | 8 | ``` 9 | brew install redis 10 | ``` 11 | 12 | ### Ubuntu/Debian 13 | 14 | ``` 15 | wget 16 | tar xvf 17 | cd redis-6.0-rc2 18 | make && make install 19 | ``` 20 | 21 | - Verifying Redis Installation 22 | 23 | ``` 24 | $redis-server --version 25 | Redis server v=5.0.7 sha=00000000:0 malloc=libc bits=64 build=4bd99862b1ce82a9 26 | ``` 27 | 28 | ``` 29 | redis-cli --version 30 | redis-cli 5.0.7 31 | ``` 32 | 33 | - Install Python - we will require it for demo purposes 34 | 35 | ``` 36 | brew install python 37 | ``` 38 | 39 | - Install pip 40 | 41 | ``` 42 | sudo easy_install pip 43 | ``` 44 | 45 | ## Running Redis Server 46 | 47 | ``` 48 | redis-server 49 | ``` 50 | 51 | ## Connecting to Redis Server 52 | 53 | ``` 54 | redis-cli 55 | ``` 56 | 57 | ## Getting Started with Sample String Keys 58 | 59 | ``` 60 | set a1 100 61 | set a2 200 62 | set a3 300 63 | ``` 64 | 65 | ``` 66 | get a1 67 | get a2 68 | get a3 69 | ``` 70 | 71 | ## Inserting keys using Python 72 | 73 | Copy the below content into a file called push-keys.py 74 | 75 | ``` 76 | import
redis 77 | # Create connection object 78 | r = redis.Redis(host='localhost', port=6379) 79 | # set a value for the foo object 80 | r.set('foo', 'bar') 81 | # retrieve and print the value for the foo object 82 | print(r.get('foo')) 83 | ``` 84 | 85 | 86 | # Protecting Redis with a password 87 | 88 | Stop the old running redis server and run the below command: 89 | 90 | ``` 91 | redis-server --requirepass redis12# 92 | ``` 93 | 94 | ## Connecting to Redis using Python - with auth 95 | 96 | ``` 97 | import redis 98 | # Create connection object 99 | r = redis.StrictRedis(host='localhost', port=6379, db=0, password='redis12#') 100 | # set a value for the foo object 101 | r.set('foo', 'bar') 102 | # retrieve and print the value for the foo object 103 | print(r.get('foo')) 104 | ``` 105 | 106 | ## Running Redis on a specific port 107 | 108 | ``` 109 | redis-server --port 6390 110 | ``` 111 | 112 | 113 | 114 | ## Running Redis inside a Docker Container 115 | 116 | ``` 117 | docker run -d -p 6380:6379 --name redis1 redis 118 | ``` 119 | 120 | ``` 121 | docker exec -it redis1 sh 122 | #bash 123 | #redis-cli 124 | ``` 125 | -------------------------------------------------------------------------------- /datastore/sorted-sets/README.md: -------------------------------------------------------------------------------- 1 | # Sorted Sets 2 | 3 | - A mix between a Set and a Hash 4 | - Like sets, sorted sets are composed of unique, non-repeating string elements 5 | - Every element in a sorted set is associated with a floating point value called the score 6 | - Similar to a hash, since every element is mapped to a value 7 | 8 | 9 | ## Sorted Ordering 10 | 11 | - Elements in a sorted set are taken in order. They are ordered according to the following rules: 12 | - If A and B are two elements with a different score, then A > B if A.score is > B.score 13 | - If A and B have exactly the same score, then A > B if the A string is lexicographically greater than the B string.
A and B 14 | strings can't be equal since sorted sets only have unique elements 15 | 16 | - Use ZADD to add values to a sorted set 17 | - Sorted sets support all set operations 18 | 19 | 20 | ``` 21 | 22 | 127.0.0.1:6379> zadd hackers 200 "viraj" 23 | 24 | (integer) 0 25 | 127.0.0.1:6379> zadd hackers 150 "deepu" 26 | (integer) 1 27 | 127.0.0.1:6379> 28 | ``` 29 | 30 | ## Sorted Set Operations 31 | 32 | ``` 33 | 127.0.0.1:6379> zadd hackers 200 viraj 34 | (integer) 1 35 | 127.0.0.1:6379> zadd hackers 200 "viraj" 36 | (integer) 0 37 | 127.0.0.1:6379> zadd hackers 150 "deepu" 38 | (integer) 1 39 | 127.0.0.1:6379> zrange hackers 0 -1 40 | 1) "deepu" 41 | 2) "viraj" 42 | 127.0.0.1:6379> zadd hackers 100 "karthik" 43 | (integer) 1 44 | 127.0.0.1:6379> zadd hackers 50 "rahul" 45 | (integer) 1 46 | 127.0.0.1:6379> zadd hackers 25 "tom" 47 | (integer) 1 48 | 127.0.0.1:6379> zrange hackers 0 -1 49 | 1) "tom" 50 | 2) "rahul" 51 | 3) "karthik" 52 | 4) "deepu" 53 | 5) "viraj" 54 | 127.0.0.1:6379> 55 | ``` 56 | 57 | ## Operate on Ranges 58 | 59 | - ZRANGEBYSCORE 60 | 61 | ## Example 62 | Return all the elements with a score between negative infinity and 100 63 | 64 | ``` 65 | 127.0.0.1:6379> zrangebyscore hackers -inf 100 66 | 1) "tom" 67 | 2) "rahul" 68 | 3) "karthik" 69 | ``` 70 | 71 | ## Remove Elements in Range 72 | 73 | - ZREMRANGEBYSCORE 74 | Removes elements in a range by score and returns the number of removed elements 75 | 76 | ``` 77 | 127.0.0.1:6379> zremrangebyscore hackers 100 200 78 | (integer) 3 79 | 127.0.0.1:6379> 80 | ``` 81 | 82 | 83 | 84 | 85 | For more information on sorted sets, see http://redis.io/topics/data-types-intro#sorted-sets 86 | 87 | 88 | 89 | 90 | -------------------------------------------------------------------------------- /enterprise/modules/redisai/README.md: -------------------------------------------------------------------------------- 1 | 2 | # What is Redis? 3 | 4 | Redis is a key-value database system.
It stores pairs of keys and values. It is optimized for the real-time web. It has an in-memory data structure, meaning it stores data using RAM. This makes it much faster than traditional databases and ideal to use as a caching solution. 5 | 6 | # What is Redis Enterprise? 7 | 8 | Even though Redis is an in-memory database, the goal of Redis Labs is to help enterprises “use it as more than cache”. The company is primarily doing this via Redis Enterprise. 9 | 10 | # What are Redis Modules? 11 | 12 | Redis Labs extends the core functionality of Redis into a “full database platform” via modules. There are now several modules, such as: 13 | 14 | - RedisTimeSeries, 15 | - RedisGraph, 16 | - RedisJSON, 17 | - RediSearch, 18 | - RedisBloom and 19 | - RedisAI. 20 | 21 | 22 | # What is RedisAI? 23 | 24 | - RedisAI is a Redis module for executing Deep Learning/Machine Learning models and managing their data. Redis, an in-memory data structure server, is what RedisAI uses for storing all of its data. 25 | 26 | 27 | - The main data type supported by RedisAI is the tensor, which is the standard representation of data in the DL/ML domain. 28 | Because tensors are stored in the memory space of the Redis server, they are readily accessible to any of RedisAI’s backend libraries at minimal latency. 29 | 30 | 31 | - Its purpose is to be a "workhorse" for model serving, providing out-of-the-box support for popular DL/ML frameworks and unparalleled performance. 32 | RedisAI both simplifies the deployment and serving of graphs by leveraging Redis' production-proven infrastructure, and maximizes computation throughput by adhering to the principle of data locality. 33 | 34 | ## Please remember: 35 | 36 | - The RedisAI module doesn’t train your data - for that you need tools like TensorFlow and PyTorch (two open source projects for machine learning).
37 | - Where RedisAI comes in is at the application layer, when it’s time to apply logic to the data (inferencing) and then serve it to the user. 38 | - You may want to train your AI model somewhere in the cloud; once you want to do serving or inferencing, Redis is the right database for that. 39 | 40 | 41 | # Blogs 42 | 43 | 44 | - [Installing RedisAI on Jetson Nano](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisai/install/README.md) -------------------------------------------------------------------------------- /datastore/hashes/README.md: -------------------------------------------------------------------------------- 1 | 2 | # Hashes 3 | 4 | The Redis hash data structure contains field-value pairs 5 | 6 | 7 | ``` 8 | HMSET user:1000 username antirez birthyear 1977 verified 1 9 | 10 | ``` 11 | 12 | 13 | Note: You might notice above that the key is “user:1000”; this way we can simulate a table name + PK in a single key. This is just like any other key and can be used anywhere in Redis. 14 | 15 | ## API 16 | - HMSET 17 | - HGET 18 | - HMGET 19 | - Similar to HGET but returns an array of values 20 | 21 | Hash value size < 512MB 22 | 23 | # HMSET vs HSET 24 | 25 | HMSET sets the specified fields to their respective values in the hash stored at key. This command overwrites any specified fields already existing in the hash. If key does not exist, a new key holding a hash is created. 26 | As of Redis 4.0.0, HMSET is considered deprecated. Please use HSET in new code.
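
The HMSET-to-HSET transition can be illustrated without a running server. Below is a pure-Python sketch of the variadic HSET contract (Redis >= 4.0.0); the `hset` helper and `store` dict are invented for this illustration and are not the real redis-py client API.

```python
# A pure-Python model of variadic HSET semantics (Redis >= 4.0.0).
# Illustrative sketch only: `store` stands in for the server's keyspace.
def hset(store, key, *pairs):
    """Set field/value pairs on the hash at `key`; return the number of NEW fields."""
    if len(pairs) % 2 != 0:
        raise ValueError("HSET requires an even number of field/value arguments")
    h = store.setdefault(key, {})
    added = 0
    for field, value in zip(pairs[::2], pairs[1::2]):
        if field not in h:
            added += 1
        h[field] = value  # existing fields are overwritten, as with HMSET
    return added

store = {}
# One HSET call now does what HMSET did: set several fields at once.
print(hset(store, "user:1000", "username", "antirez", "birthyear", "1977", "verified", "1"))  # 3
# Overwriting an existing field counts as 0 new fields.
print(hset(store, "user:1000", "birthyear", "1978"))  # 0
```

The return value mirrors the real command: it counts only fields that were newly created, which is why the second call returns 0 even though it changed a value.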
27 | 28 | - [Hashes](https://github.com/collabnix/redis/blob/master/datastructure/hashes/README.md) 29 | - [HSET](https://github.com/collabnix/redis/blob/master/datastructure/hashes/README.md#hset) 30 | - [HGET](https://github.com/collabnix/redis/blob/master/datastructure/hashes/README.md#hget) 31 | - [HINCRBY](https://github.com/collabnix/redis/blob/master/datastructure/hashes/README.md#hincrby) 32 | - [HGETALL](https://github.com/collabnix/redis/blob/master/datastructure/hashes/README.md#hgetall) 33 | 34 | 35 | 36 | # Storing object data in hashes 37 | 38 | ## HSET 39 | 40 | Let’s assume we want to store a number of fields about our users, such as a full name, email address, phone number, and number of visits to our application. We’ll 41 | use Redis’s hash management commands—like HSET, HGET, and HINCRBY—to store this information 42 | 43 | ``` 44 | 127.0.0.1:12000> hset users:jdoe name "John Doe" 45 | (integer) 1 46 | 127.0.0.1:12000> hset users:jdoe email "jdoe@test.com" 47 | (integer) 1 48 | 127.0.0.1:12000> hset users:jdoe phone "+1555313940" 49 | (integer) 1 50 | ``` 51 | 52 | ## HINCRBY 53 | 54 | ``` 55 | 127.0.0.1:12000> hincrby users:jdoe visits 1 56 | (integer) 1 57 | ``` 58 | 59 | ## HGET 60 | 61 | ``` 62 | 127.0.0.1:12000> hget users:jdoe email 63 | "jdoe@test.com" 64 | ``` 65 | 66 | ## HGETALL 67 | 68 | ``` 69 | 127.0.0.1:12000> hgetall users:jdoe 70 | 1) "name" 71 | 2) "John Doe" 72 | 3) "email" 73 | 4) "jdoe@test.com" 74 | 5) "phone" 75 | 6) "+1555313940" 76 | 7) "visits" 77 | 8) "1" 78 | 127.0.0.1:12000> 79 | ``` 80 | -------------------------------------------------------------------------------- /kubernetes/install/README.md: -------------------------------------------------------------------------------- 1 | # Setting up Redis Enterprise on 3-Node K8s Cluster 2 | 3 | 4 | # Installing Helm on PWK 5 | 6 | ## Installing OpenSSL 7 | 8 | ``` 9 | $ yum install -y openssl 10 | ``` 11 | 12 | # Installing Helm 13 | 14 | ``` 15 | $ curl 
https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh 16 | $ chmod 700 get_helm.sh 17 | $ ./get_helm.sh 18 | ``` 19 | 20 | ## Initializing 21 | 22 | ``` 23 | helm init 24 | ``` 25 | 26 | ## Enabling RBAC 27 | 28 | ``` 29 | kubectl --namespace kube-system create serviceaccount tiller 30 | kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller 31 | kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 32 | helm init --service-account tiller --upgrade 33 | ``` 34 | 35 | ``` 36 | [node1 kubelabs]$ git clone https://github.com/RedisLabs/redislabs-helm 37 | Cloning into 'redislabs-helm'... 38 | remote: Enumerating objects: 87, done. 39 | remote: Total 87 (delta 0), reused 0 (delta 0), pack-reused 87 40 | Unpacking objects: 100% (87/87), done. 41 | [node1 kubelabs]$ cd redislabs-helm/ 42 | [node1 redislabs-helm]$ helm install --namespace redis -n 'redis-enterprise' ./redis-enterprise 43 | NAME: redis-enterprise 44 | LAST DEPLOYED: Mon Feb 3 06:32:53 2020 45 | NAMESPACE: redis 46 | STATUS: DEPLOYED 47 | 48 | RESOURCES: 49 | ==> v1/Pod(related) 50 | NAME AGE 51 | redis-enterprise-controller-7ddbcbf695-kch5t 0s 52 | redisenterprisecluster-0 0s 53 | 54 | ==> v1/Secret 55 | NAME AGE 56 | redis-enterprise 0s 57 | 58 | ==> v1/Service 59 | NAME AGE 60 | redis-enterprise 0s 61 | redis-enterprise-ui 0s 62 | 63 | ==> v1/ServiceAccount 64 | NAME AGE 65 | redis-enterprise 0s 66 | 67 | ==> v1beta1/Deployment 68 | NAME AGE 69 | redis-enterprise-controller 0s 70 | 71 | ==> v1beta1/PodDisruptionBudget 72 | NAME AGE 73 | redis-enterprise 0s 74 | 75 | ==> v1beta1/Role 76 | NAME AGE 77 | redis-enterprise 0s 78 | 79 | ==> v1beta1/RoleBinding 80 | NAME AGE 81 | redis-enterprise 0s 82 | 83 | ==> v1beta1/StatefulSet 84 | NAME AGE 85 | redisenterprisecluster 0s 86 | 87 | 88 | NOTES: 89 | Thank you for using redis enterprise. 
90 | 91 | Web UI: 92 | ======= 93 | 94 | export POD_NAME=$(kubectl get pods --namespace redis -l "app=redis-enterprise,release=redis-enterprise" -o jsonpath="{.items[0].metadata.name}") 95 | kubectl port-forward --namespace redis $POD_NAME 8443 96 | 97 | [node1 redislabs-helm]$ 98 | I0203 03:32:06.614020 517 version.go:240] remote version is much newer: v1.17.2; falling back to: stable-1.14 99 | ``` 100 | -------------------------------------------------------------------------------- /kubernetes/Okteto_Cloud/redis-cluster.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: redis-cluster 6 | labels: 7 | app: redis-cluster 8 | data: 9 | fix-ip.sh: | 10 | #!/bin/sh 11 | CLUSTER_CONFIG="/data/nodes.conf" 12 | if [ -f ${CLUSTER_CONFIG} ]; then 13 | if [ -z "${POD_IP}" ]; then 14 | echo "Unable to determine Pod IP address!" 15 | exit 1 16 | fi 17 | echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}" 18 | sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG} 19 | fi 20 | exec "$@" 21 | redis.conf: |+ 22 | cluster-enabled yes 23 | cluster-require-full-coverage no 24 | cluster-node-timeout 15000 25 | cluster-config-file /data/nodes.conf 26 | cluster-migration-barrier 1 27 | appendonly yes 28 | protected-mode no 29 | --- 30 | apiVersion: v1 31 | kind: Service 32 | metadata: 33 | name: redis-cluster 34 | labels: 35 | app: redis-cluster 36 | spec: 37 | ports: 38 | - port: 6379 39 | targetPort: 6379 40 | name: client 41 | - port: 16379 42 | targetPort: 16379 43 | name: gossip 44 | clusterIP: None 45 | selector: 46 | app: redis-cluster 47 | --- 48 | apiVersion: apps/v1 49 | kind: StatefulSet 50 | metadata: 51 | name: redis-cluster 52 | labels: 53 | app: redis-cluster 54 | spec: 55 | serviceName: redis-cluster 56 | replicas: 6 57 | selector: 58 | matchLabels: 59 | app: redis-cluster 60 | template: 61 | metadata: 62 
| labels: 63 | app: redis-cluster 64 | spec: 65 | containers: 66 | - name: redis 67 | image: redis:6.0 68 | ports: 69 | - containerPort: 6379 70 | name: client 71 | - containerPort: 16379 72 | name: gossip 73 | command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"] 74 | readinessProbe: 75 | exec: 76 | command: 77 | - sh 78 | - -c 79 | - "redis-cli -h $(hostname) ping" 80 | initialDelaySeconds: 15 81 | timeoutSeconds: 5 82 | livenessProbe: 83 | exec: 84 | command: 85 | - sh 86 | - -c 87 | - "redis-cli -h $(hostname) ping" 88 | initialDelaySeconds: 20 89 | periodSeconds: 3 90 | env: 91 | - name: POD_IP 92 | valueFrom: 93 | fieldRef: 94 | fieldPath: status.podIP 95 | volumeMounts: 96 | - name: conf 97 | mountPath: /conf 98 | readOnly: false 99 | - name: data 100 | mountPath: /data 101 | readOnly: false 102 | volumes: 103 | - name: conf 104 | configMap: 105 | name: redis-cluster 106 | defaultMode: 0755 107 | volumeClaimTemplates: 108 | - metadata: 109 | name: data 110 | labels: 111 | name: redis-cluster 112 | spec: 113 | accessModes: [ "ReadWriteOnce" ] 114 | resources: 115 | requests: 116 | storage: 100Mi 117 | -------------------------------------------------------------------------------- /datastore/strings/README.md: -------------------------------------------------------------------------------- 1 | # Strings 2 | 3 | - Strings are the most basic kind of Redis value 4 | - Redis Strings are binary safe 5 | - A Redis string can contain any kind of data, for instance a JPEG image or a serialized Ruby object 6 | 7 | 8 | ``` 9 | > set a1 10 10 | > get a1 11 | ``` 12 | 13 | # SET Vs GET 14 | 15 | - SET and the GET commands are the way we set and retrieve a string value 16 | - SET performs an assignment 17 | - SET will replace any existing value already stored into the key 18 | - Values can be strings (including binary data) of every kind, for instance you can store a jpeg image inside a key 19 | 20 | # Set Command 21 | 22 | - The SET command has interesting 
options that are provided as additional arguments 23 | 24 | ## EX seconds 25 | Set the specified expire time, in seconds 26 | 27 | ## PX milliseconds 28 | Set the specified expire time, in milliseconds 29 | 30 | ## NX 31 | Only set the key if it does not already exist 32 | 33 | ## XX 34 | Only set the key if it already exists 35 | 36 | ## Example: 37 | 38 | ``` 39 | 127.0.0.1:6379> get a1 40 | "100" 41 | 127.0.0.1:6379> set a1 101 xx 42 | OK 43 | 127.0.0.1:6379> get a1 44 | "101" 45 | 127.0.0.1:6379> set a1 102 nx 46 | (nil) 47 | 127.0.0.1:6379> get a1 48 | "101" 49 | 127.0.0.1:6379> 50 | ``` 51 | 52 | # Altering & Querying Redis Keyspace 53 | 54 | - EXISTS command returns 1 or 0 to signal if a given key exists or not in the database 55 | - DEL command deletes a key and associated value, whatever the value is 56 | - Returns 1 or 0 depending on whether the key was removed 57 | - These commands can be used on all keys regardless of the value
data structure 58 | 59 | ## Example: 60 | 61 | ``` 62 | 127.0.0.1:6379> set z1 hello 63 | OK 64 | 127.0.0.1:6379> exists z1 65 | (integer) 1 66 | 127.0.0.1:6379> del z1 67 | (integer) 1 68 | 127.0.0.1:6379> exists z1 69 | (integer) 0 70 | 127.0.0.1:6379> 71 | ``` 72 | 73 | # String with Integer Values - INCR 74 | 75 | 76 | - Use Strings as atomic counters using commands in the INCR family: 77 | - INCR, 78 | - DECR, 79 | - INCRBY, 80 | - DECRBY 81 | - Even if strings are the basic values of Redis, there are interesting operations you can perform with them 82 | - For instance, one is atomic increment 83 | 84 | ## Example: 85 | 86 | ``` 87 | 127.0.0.1:6379> set counter 100 88 | OK 89 | 127.0.0.1:6379> incr counter 90 | (integer) 101 91 | 127.0.0.1:6379> incr counter 92 | (integer) 102 93 | 127.0.0.1:6379> incrby counter 50 94 | (integer) 152 95 | 127.0.0.1:6379> 96 | ``` 97 | 98 | # MSET and MGET 99 | 100 | - The MSET and MGET commands set or retrieve the values of multiple keys in a single command, which is also useful for reducing latency 101 | 102 | ## Example: 103 | 104 | ``` 105 | 127.0.0.1:6379> mset a 10 b 20 c 30 106 | OK 107 | 127.0.0.1:6379> mget a b c 108 | 1) "10" 109 | 2) "20" 110 | 3) "30" 111 | 127.0.0.1:6379> 112 | ``` 113 | 114 | - GETSET command sets a key to a new value, returning the old value as the result 115 | 116 | 117 | # Keys Expiration 118 | 119 | - Set using seconds or milliseconds precision 120 | - However the expire time resolution is always 1 millisecond 121 | - Information about expires are
replicated and persisted on disk; the time virtually passes even while your Redis server is stopped 122 | - Redis saves the date at which
a key will expire 123 | 124 | ## Example: 125 | 126 | ``` 127 | 127.0.0.1:6379> set key 150 128 | OK 129 | 127.0.0.1:6379> expire key 10 130 | (integer) 1 131 | 127.0.0.1:6379> ttl key 132 | (integer) 5 133 | 127.0.0.1:6379> get key 134 | (nil) 135 | 127.0.0.1:6379> 136 | ``` 137 | 138 | -------------------------------------------------------------------------------- /datastore/lists/README.md: -------------------------------------------------------------------------------- 1 | # Lists 2 | 3 | - A sequence of ordered elements; e.g. 10,20,1,2,3 is a list 4 | 5 | - Redis lists are implemented via linked lists 6 | - Adding a new element in the head (with the LPUSH command) or in the tail (with the RPUSH command) of the list is performed in constant time 7 | - Same for a list with 10 or 1M elements 8 | - Every list value (element) can be up to 512MB 9 | - Accessing an element by index is not so fast in lists implemented by linked lists 10 | 11 | 12 | 13 | 14 | # Basic API of Lists 15 | 16 | - LPUSH command adds a new element
into a list, on the left (at the head) 17 | - RPUSH command adds a new element 
into a list, on the right (at the tail) 18 | - LRANGE command extracts ranges of
elements from lists 19 | - Takes two indexes, the first and the last element 
of the range to return 20 | - Both the indexes can be negative 21 | - -1 is the last element 
-2 is the penultimate element of the list, and so forth 22 | 23 | ## Example: 24 | 25 | ``` 26 | 127.0.0.1:6379> rpush mylist A 27 | (integer) 1 28 | 127.0.0.1:6379> rpush mylist B 29 | (integer) 2 30 | 127.0.0.1:6379> lpush mylist first 31 | (integer) 3 32 | 127.0.0.1:6379> lrange mylist 0 -1 33 | 1) "first" 34 | 2) "A" 35 | 3) "B" 36 | 127.0.0.1:6379> 37 | ``` 38 | 39 | ## Please Note: 40 | 41 | - LPUSH and RPUSH are variadic commands: they accept a variable number of arguments 42 | - You can push multiple elements into a list in a single call 43 | 44 | ## Example: 45 | 46 | ``` 47 | 127.0.0.1:6379> rpush mylist 2 3 4 5 "foo bar" 48 | (integer) 8 49 | 127.0.0.1:6379> lrange mylist 0 -1 50 | 1) "first" 51 | 2) "A" 52 | 3) "B" 53 | 4) "2" 54 | 5) "3" 55 | 6) "4" 56 | 7) "5" 57 | 8) "foo bar" 58 | 127.0.0.1:6379> 59 | ``` 60 | 61 | 62 | # RPOP and LPOP 63 | 64 | 65 | Popping elements is the operation of both 66 | - retrieving the element from the list, and 67 | - eliminating it from the list 68 | 69 | 70 | ``` 71 | 127.0.0.1:6379> lrange mylist 0 -1 72 | 1) "first" 73 | 2) "A" 74 | 3) "B" 75 | 4) "2" 76 | 5) "3" 77 | 6) "4" 78 | 7) "5" 79 | 8) "foo bar" 80 | 127.0.0.1:6379> rpush mylist a b c 81 | (integer) 11 82 | 127.0.0.1:6379> rpop mylist 83 | "c" 84 | 127.0.0.1:6379> rpop mylist 85 | "b" 86 | 127.0.0.1:6379> rpop mylist 87 | "a" 88 | 127.0.0.1:6379> rpop mylist 89 | "foo bar" 90 | 127.0.0.1:6379> 91 | ``` 92 | 93 | 94 | ## Common Usage of Lists 95 | 96 | - Remember the latest updates posted by users into a social network 97 | 98 | - Communication between processes 99 | - using a consumer-producer pattern where the producer pushes items into a list, and a consumer (usually a worker) consumes those items and executes actions 100 | 101 | 102 | LPUSH RPOP 103 | Producer =======> | M3 | M2 | M1 | =========> Consumer 104 | 105 | 106 | - Redis has special list commands to make this use case both more reliable and efficient 107 | - BRPOP and BLPOP are blocking variants of the pop operations that will
wait for a value to be in the list 108 | 109 | 110 | LPUSH BRPOP 111 | Producer =======> | M3 | M2 | M1 | =========> Consumer 112 | 113 | 114 | ## Capped Lists using LTRIM 115 | 116 | 117 | - Use the LTRIM command to use lists as a capped collection, 118 | - Only remembering the latest N items and discarding all the oldest items 119 | 120 | 121 | ## Example: 122 | 123 | ``` 124 | 127.0.0.1:6379> flushall 125 | OK 126 | 127.0.0.1:6379> rpush mylist 1 2 3 4 5 127 | (integer) 5 128 | 127.0.0.1:6379> lrange mylist 0 -1 129 | 1) "1" 130 | 2) "2" 131 | 3) "3" 132 | 4) "4" 133 | 5) "5" 134 | 127.0.0.1:6379> ltrim mylist 0 2 135 | OK 136 | 127.0.0.1:6379> lrange mylist 0 -1 137 | 1) "1" 138 | 2) "2" 139 | 3) "3" 140 | 127.0.0.1:6379> 141 | ``` 142 | -------------------------------------------------------------------------------- /ee/install/README.md: -------------------------------------------------------------------------------- 1 | # Installing Redis Enterprise 2 | 3 | ``` 4 | wget https://s3.amazonaws.com/redis-enterprise-software-downloads/5.4.6/redislabs-5.4.6-18-xenial-amd64.tar 5 | tar xvf redislabs-5.4.6-18-xenial-amd64.tar 6 | ./install.sh 7 | ``` 8 | 9 | ``` 10 | 2019-12-03 05:35:28 [.] Checking root access 11 | 2019-12-03 05:35:28 [!] Running as user root, sudo is not required. 12 | 2019-12-03 05:35:28 [.] Creating paths.sh 13 | 2019-12-03 05:35:28 [.] Creating socket directory /var/opt/redislabs/run 14 | 2019-12-03 05:35:28 [.] Verifying that all required packages are installed 15 | 2019-12-03 05:35:29 [.] Deleting RedisLabs debug package if exist 16 | 2019-12-03 05:35:29 [.] Installing RedisLabs packages 17 | 2019-12-03 05:35:29 [$] executing: 'dpkg -i redislabs_5.4.6-18~xenial_amd64.deb redislabs-utils_5.4.6-18~xenial_amd64.deb' 18 | (Reading database ... 71133 files and directories currently installed.) 19 | Preparing to unpack redislabs_5.4.6-18~xenial_amd64.deb ... 20 | Unpacking redislabs (5.4.6-18~xenial) ... 
21 | Preparing to unpack redislabs-utils_5.4.6-18~xenial_amd64.deb ... 22 | Unpacking redislabs-utils (5.4.6-18~xenial) over (5. 23 | ``` 24 | 25 | ``` 26 | 27 | Skipping test: verify_bootstrap_status 28 | Skipping test: verify_processes 29 | Running test: verify_dmcproxy 30 | PASS 31 | Running test: verify_port_range 32 | PASS 33 | Skipping test: verify_pidfiles 34 | Running test: verify_capabilities 35 | PASS 36 | Running test: verify_existing_sockets 37 | PASS 38 | Running test: verify_host_settings 39 | PASS 40 | Skipping test: verify_tcp_connectivity 41 | Summary: 42 | ------- 43 | ALL TESTS PASSED. 44 | 2019-12-03 05:36:07 [$] executing: 'chown redislabs:redislabs /var/opt/redislab 45 | s/log/rlcheck.log' 46 | 2019-12-03 05:36:07 [!] Please logout and login again to make sure all environm 47 | ent changes are applied. 48 | 2019-12-03 05:36:07 [!] Point your browser at the following URL to continue: 49 | 2019-12-03 05:36:08 [!] https://10.138.15.203:8443 50 | 2019-12-03 05:36:08 [$] executing: 'chmod 644 51 | /tmp/install.log' 52 | 2019-12-03 05:36:08 [$] executing: 'chown redislabs:redislabs /tmp/install.log' 53 | ubuntu@ts-qualification-sa-ajeet-raina-2:~$ 54 | ``` 55 | 56 | ``` 57 | ubuntu@ts-qualification-sa-ajeet-raina-2:~$ curl 10.138.15.203:8443 58 | 59 | 302 Found 60 | 61 |

<center><h1>302 Found</h1></center> 62 | <hr><center>nginx</center>
63 | 64 | 65 | ``` 66 | 67 | You can browse at 68 | 69 | ``` 70 | https://34.83.95.101:8443/ 71 | ``` 72 | 73 | 74 | 75 | 76 | ![My Image](https://github.com/collabnix/redistask/blob/master/image1.png) 77 | ![My Image](https://github.com/collabnix/redistask/blob/master/image2.png) 78 | ![My Image](https://github.com/collabnix/redistask/blob/master/image3.png) 79 | 80 | 81 | # Installing redis 82 | 83 | ``` 84 | apt install python-pip 85 | pip install redis 86 | ``` 87 | 88 | Go to Redis Open Source CLI 89 | 90 | Create a file named "script.py" and add the below content: 91 | 92 | ``` 93 | import redis 94 | 95 | r = redis.Redis( 96 | host='10.138.15.202', 97 | port=6666, 98 | password='') 99 | i=1 100 | while(i < 101): 101 | r.rpush('demolist', i) 102 | i += 1 103 | #Print out the values from 1 - 100 104 | print(r.lrange('demolist', 0, -1)) 105 | ``` 106 | 107 | ``` 108 | ubuntu@ts-qualification-sa-ajeet-raina-1:~$ python script.py 109 | ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100'] 110 | -------------------------------------------------------------------------------- /_layouts/default.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | {% if site.google_analytics %} 6 | 7 | 13 | {% endif %} 14 | 15 | 16 | {% seo %} 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 46 | 47 |
48 | 49 | {{ content }} 50 | 51 | 52 | 68 |
69 | 70 | 71 | 72 | 73 | -------------------------------------------------------------------------------- /kubernetes/redis-ruby-k8s/redis-deployment.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: extensions/v1beta1 3 | kind: Deployment 4 | metadata: 5 | name: redis-0 6 | spec: 7 | 8 | replicas: 1 9 | template: 10 | metadata: 11 | labels: 12 | app: redis 13 | node: node0 14 | spec: 15 | hostname: redis-0 16 | volumes: 17 | - name: redis-vol 18 | hostPath: 19 | path: /data/redis-cluster/redis-0 20 | containers: 21 | - name: redis-0 22 | image: redis:ruby 23 | imagePullPolicy: IfNotPresent 24 | command: ["/usr/local/bin/redis-server"] 25 | args: ["/redis-conf/redis.conf"] 26 | ports: 27 | - containerPort: 7000 28 | - containerPort: 17000 29 | volumeMounts: 30 | - name: redis-vol 31 | readOnly: false 32 | mountPath: "/data" 33 | 34 | --- 35 | apiVersion: extensions/v1beta1 36 | kind: Deployment 37 | metadata: 38 | name: redis-1 39 | spec: 40 | 41 | replicas: 1 42 | template: 43 | metadata: 44 | labels: 45 | app: redis 46 | node: node1 47 | spec: 48 | hostname: redis-1 49 | volumes: 50 | - name: redis-vol 51 | hostPath: 52 | path: /data/redis-cluster/redis-1 53 | containers: 54 | - name: redis-0 55 | image: redis:ruby 56 | imagePullPolicy: IfNotPresent 57 | command: ["/usr/local/bin/redis-server"] 58 | args: ["/redis-conf/redis.conf"] 59 | ports: 60 | - containerPort: 7000 61 | - containerPort: 17000 62 | volumeMounts: 63 | - name: redis-vol 64 | readOnly: false 65 | mountPath: "/data" 66 | 67 | --- 68 | apiVersion: extensions/v1beta1 69 | kind: Deployment 70 | metadata: 71 | name: redis-2 72 | spec: 73 | 74 | replicas: 1 75 | template: 76 | metadata: 77 | labels: 78 | app: redis 79 | node: node2 80 | spec: 81 | hostname: redis-2 82 | volumes: 83 | - name: redis-vol 84 | hostPath: 85 | path: /data/redis-cluster/redis-2 86 | containers: 87 | - name: redis-2 88 | image: redis:ruby 89 | imagePullPolicy: 
IfNotPresent 90 | command: ["/usr/local/bin/redis-server"] 91 | args: ["/redis-conf/redis.conf"] 92 | ports: 93 | - containerPort: 7000 94 | - containerPort: 17000 95 | volumeMounts: 96 | - name: redis-vol 97 | readOnly: false 98 | mountPath: "/data" 99 | 100 | --- 101 | apiVersion: extensions/v1beta1 102 | kind: Deployment 103 | metadata: 104 | name: redis-3 105 | spec: 106 | 107 | replicas: 1 108 | template: 109 | metadata: 110 | labels: 111 | app: redis 112 | node: node3 113 | spec: 114 | hostname: redis-3 115 | volumes: 116 | - name: redis-vol 117 | hostPath: 118 | path: /data/redis-cluster/redis-3 119 | containers: 120 | - name: redis-3 121 | image: redis:ruby 122 | imagePullPolicy: IfNotPresent 123 | command: ["/usr/local/bin/redis-server"] 124 | args: ["/redis-conf/redis.conf"] 125 | ports: 126 | - containerPort: 7000 127 | - containerPort: 17000 128 | volumeMounts: 129 | - name: redis-vol 130 | readOnly: false 131 | mountPath: "/data" 132 | 133 | --- 134 | apiVersion: extensions/v1beta1 135 | kind: Deployment 136 | metadata: 137 | name: redis-4 138 | spec: 139 | 140 | replicas: 1 141 | template: 142 | metadata: 143 | labels: 144 | app: redis 145 | node: node4 146 | spec: 147 | hostname: redis-4 148 | volumes: 149 | - name: redis-vol 150 | hostPath: 151 | path: /data/redis-cluster/redis-4 152 | containers: 153 | - name: redis-4 154 | image: redis:ruby 155 | imagePullPolicy: IfNotPresent 156 | command: ["/usr/local/bin/redis-server"] 157 | args: ["/redis-conf/redis.conf"] 158 | ports: 159 | - containerPort: 7000 160 | - containerPort: 17000 161 | volumeMounts: 162 | - name: redis-vol 163 | readOnly: false 164 | mountPath: "/data" 165 | 166 | --- 167 | apiVersion: extensions/v1beta1 168 | kind: Deployment 169 | metadata: 170 | name: redis-5 171 | spec: 172 | 173 | replicas: 1 174 | template: 175 | metadata: 176 | labels: 177 | app: redis 178 | node: node5 179 | spec: 180 | hostname: redis-5 181 | volumes: 182 | - name: redis-vol 183 | hostPath: 184 | path: 
/data/redis-cluster/redis-5 185 | containers: 186 | - name: redis-5 187 | image: redis:ruby 188 | imagePullPolicy: IfNotPresent 189 | command: ["/usr/local/bin/redis-server"] 190 | args: ["/redis-conf/redis.conf"] 191 | ports: 192 | - containerPort: 7000 193 | - containerPort: 17000 194 | volumeMounts: 195 | - name: redis-vol 196 | readOnly: false 197 | mountPath: "/data" 198 | 199 | -------------------------------------------------------------------------------- /kubernetes/Okteto_Cloud/okteto-redis-cluster.md: -------------------------------------------------------------------------------- 1 | # Redis cluster - okteto Cloud 2 | 3 | If the cluster configuration of a redis node is lost in some way, it will come back with a different ID, which upsets the balance in the cluster (and probably in the Force). To prevent this, the setup uses a combination of Kubernetes StatefulSets and PersistentVolumeClaims to make sure the state of the cluster is maintained after rescheduling or failures. 4 | 5 | # Install the okteto CLI 6 | ``` 7 | curl https://get.okteto.com -sSfL | sh 8 | 9 | ``` 10 | # Create an okteto Cloud account 11 | https://cloud.okteto.com/ 12 | 13 | ``` 14 | okteto login 15 | ``` 16 | 17 | ## Setup 18 | ``` bash 19 | kubectl apply -f https://raw.githubusercontent.com/collabnix/redisplanet/master/kubernetes/Okteto_Cloud/redis-cluster.yml 20 | ``` 21 | This will spin up 6 `redis-cluster` pods one by one, which may take a while. After all pods are in a running state, you can initialize the cluster using the `redis-cli` in any of the pods. After the initialization, you will end up with 3 master and 3 slave nodes. 22 | ``` bash 23 | kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \ 24 | $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ') 25 | ``` 26 | 27 | ## Adding nodes 28 | Adding nodes to the cluster involves a few manual steps.
First, let's add two nodes: 29 | ``` bash 30 | kubectl scale statefulset redis-cluster --replicas=8 31 | ``` 32 | 33 | Have the first new node join the cluster as master: 34 | ``` bash 35 | kubectl exec redis-cluster-0 -- redis-cli --cluster add-node \ 36 | $(kubectl get pod redis-cluster-6 -o jsonpath='{.status.podIP}'):6379 \ 37 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 38 | ``` 39 | 40 | The second new node should join the cluster as slave. This will automatically bind to the master with the least slaves (in this case, `redis-cluster-6`) 41 | ``` bash 42 | kubectl exec redis-cluster-0 -- redis-cli --cluster add-node --cluster-slave \ 43 | $(kubectl get pod redis-cluster-7 -o jsonpath='{.status.podIP}'):6379 \ 44 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 45 | ``` 46 | 47 | Finally, automatically rebalance the masters: 48 | ``` bash 49 | kubectl exec redis-cluster-0 -- redis-cli --cluster rebalance --cluster-use-empty-masters \ 50 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 51 | ``` 52 | 53 | ## Removing nodes 54 | 55 | ### Removing slaves 56 | Slaves can be deleted safely. First, let's get the id of the slave: 57 | 58 | ``` bash 59 | $ kubectl exec redis-cluster-7 -- redis-cli cluster nodes | grep myself 60 | 3f7cbc0a7e0720e37fcb63a81dc6e2bf738c3acf 172.17.0.11:6379 myself,slave 32f250e02451352e561919674b8b705aef4dbdc6 0 0 0 connected 61 | ``` 62 | 63 | Then delete it: 64 | ``` bash 65 | kubectl exec redis-cluster-0 -- redis-cli --cluster del-node \ 66 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 \ 67 | 3f7cbc0a7e0720e37fcb63a81dc6e2bf738c3acf 68 | ``` 69 | 70 | ### Removing a master 71 | To remove master nodes from the cluster, we first have to move the slots used by them to the rest of the cluster, to avoid data loss. 
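Why the slots have to move first: each of the cluster's 16384 hash slots is owned by exactly one master, and a key's slot is derived from a CRC16 of the key (or of its `{hash tag}`, if present). The following Python sketch of that mapping is an illustration of the cluster specification, not code from this repo:

```python
# Illustrative sketch: how Redis Cluster maps a key to one of its 16384 hash
# slots. A master owns a subset of these slots, which is why its slots must be
# resharded away before the master can be removed without data loss.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the hash slot (0-16383) for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot (and thus on the same master):
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

You can compare this sketch against a live cluster with `redis-cli cluster keyslot <key>`.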
72 | 73 | First, take note of the id of the master node we are removing: 74 | ``` bash 75 | $ kubectl exec redis-cluster-6 -- redis-cli cluster nodes | grep myself 76 | 27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d 172.17.0.10:6379 myself,master - 0 0 9 connected 0-1364 5461-6826 10923-12287 77 | ``` 78 | 79 | Also note the id of any other master node: 80 | ``` bash 81 | $ kubectl exec redis-cluster-6 -- redis-cli cluster nodes | grep master | grep -v myself 82 | 32f250e02451352e561919674b8b705aef4dbdc6 172.17.0.4:6379 master - 0 1495120400893 2 connected 6827-10922 83 | 2a42aec405aca15ec94a2470eadf1fbdd18e56c9 172.17.0.6:6379 master - 0 1495120398342 8 connected 12288-16383 84 | 0990136c9a9d2e48ac7b36cfadcd9dbe657b2a72 172.17.0.2:6379 master - 0 1495120401395 1 connected 1365-5460 85 | ``` 86 | 87 | Then, use the `reshard` command to move all slots from `redis-cluster-6`: 88 | ``` bash 89 | kubectl exec redis-cluster-0 -- redis-cli --cluster reshard --cluster-yes \ 90 | --cluster-from 27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d \ 91 | --cluster-to 32f250e02451352e561919674b8b705aef4dbdc6 \ 92 | --cluster-slots 16384 \ 93 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 94 | ``` 95 | 96 | After resharding, it is safe to delete the `redis-cluster-6` master node: 97 | ``` bash 98 | kubectl exec redis-cluster-0 -- redis-cli --cluster del-node \ 99 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 \ 100 | 27259a4ae75c616bbde2f8e8c6dfab2c173f2a1d 101 | ``` 102 | 103 | Finally, we can rebalance the remaining masters to evenly distribute slots: 104 | ``` bash 105 | kubectl exec redis-cluster-0 -- redis-cli --cluster rebalance --cluster-use-empty-masters \ 106 | $(kubectl get pod redis-cluster-0 -o jsonpath='{.status.podIP}'):6379 107 | ``` 108 | 109 | ### Scaling down 110 | After the master has been resharded and both nodes are removed from the cluster, it is safe to scale down the statefulset: 111 | ``` bash 112 | kubectl scale 
statefulset redis-cluster --replicas=6 113 | ``` 114 | 115 | ## Cleaning up 116 | ``` bash 117 | kubectl delete statefulset,svc,configmap,pvc -l app=redis-cluster 118 | ``` 119 | -------------------------------------------------------------------------------- /Redis-conf/README.md: -------------------------------------------------------------------------------- 1 | # redis.conf 2 | 3 | - Redis is able to start without a configuration file using a built-in default configuration 4 | - The proper way to configure Redis is by providing a Redis configuration file, usually called redis.conf. 5 | - The redis.conf file contains a number of directives that have a very simple format: 6 | 7 | 8 | ``` 9 | keyword argument1 argument2 ... argumentN 10 | ``` 11 | 12 | 13 | Example of a configuration directive: 14 | 15 | ``` 16 | slaveof 127.0.0.1 6380 17 | ``` 18 | 19 | ## Is it possible to pass arguments via the command line? 20 | 21 | Yes. It is also possible to pass Redis configuration parameters using the command line directly. 22 | This is very useful for testing purposes. 23 | 24 | The following is an example that starts a new Redis instance using port 6380 as a slave of the instance running at 127.0.0.1 port 6379. 25 | 26 | 27 | ``` 28 | ./redis-server --port 6380 --slaveof 127.0.0.1 6379 29 | ``` 30 | 31 | The format of the arguments passed via the command line is exactly the same as the one used in the redis.conf file, 32 | with the exception that the keyword is prefixed with --. 33 | 34 | Note that internally this generates an in-memory temporary config file (possibly concatenating the config file passed by the user, if any) where arguments are translated into the format of redis.conf. 35 | 36 | 37 | ## Is it possible to change Redis configuration while the server is running?
38 | 39 | Yes. It is possible to reconfigure Redis on the fly without stopping and restarting the service, and to 40 | query the current configuration programmatically, using the special commands [CONFIG SET](https://redis.io/commands/config-set) 41 | and [CONFIG GET](https://redis.io/commands/config-get) 42 | 43 | Not all the configuration directives are supported in this way, but most are supported as expected. 44 | Please refer to the CONFIG SET and CONFIG GET pages for more information. 45 | Note that modifying the configuration on the fly has no effect on the redis.conf file, so at the next restart of Redis the old 46 | configuration will be used instead. 47 | Make sure to also modify the redis.conf file according to the configuration you set using CONFIG SET. 48 | 49 | ## Tell me something about CONFIG REWRITE 50 | 51 | You can do it manually, or, starting with Redis 2.8, you can just use [CONFIG REWRITE](https://redis.io/commands/config-rewrite), which will automatically scan your redis.conf file and update the fields which don't match the current configuration value. 52 | Fields that don't exist but are set to the default value are not added. Comments inside your configuration file are retained. 53 | 54 | 55 | ## How to configure Redis as a Cache? 56 | 57 | If you plan to use Redis just as a cache where every key will have an expire set, you may consider using the following configuration instead (assuming a max memory limit of 2 megabytes as an example): 58 | 59 | ``` 60 | maxmemory 2mb 61 | maxmemory-policy allkeys-lru 62 | ``` 63 | 64 | In this configuration there is no need for the application to set a time to live for keys using the EXPIRE command (or equivalent), since all the keys will be evicted using an approximated LRU algorithm once we hit the 2 megabyte memory limit. 65 | Basically, in this configuration Redis acts in a similar way to memcached. 66 | 67 | 68 | ## How to use Redis as an LRU cache?
69 | 70 | Refer to [this](https://github.com/collabnix/redisplanet/blob/master/Redis-conf/redis-as-lru-cache/README.md) link 71 | 72 | ## Where can I find a sample redis.conf? 73 | 74 | Refer to [this](http://download.redis.io/redis-stable/redis.conf) link 75 | 76 | ## Tell me about the structure of redis.conf 77 | 78 | 79 | - Lines beginning with a # are comments and are ignored 80 | - Directives are given using the syntax keyword value [value …] 81 | - String values should be enclosed with quotes when containing spaces 82 | - The empty string ("") is a valid value 83 | - Depending on the configuration directive, the value of '0' usually means off 84 | - Size values (e.g. maxmemory) are given in bytes, unless an explicit suffix is used (e.g. '10mb' and '50gb') 85 | 86 | 87 | ## Tell me more about Runtime Configuration 88 | 89 | The configuration of an online server can be inspected using the CONFIG GET command. Note that the command supports glob-like patterns, so if you don't remember the exact keyword you can still use it, e.g. CONFIG GET maxmem* or even CONFIG GET *. 90 | 91 | Note: the values of some configuration directives are also reported in INFO's output. 92 | 93 | The value of most configuration settings can be changed while the server is running and without interruption to the service using the CONFIG SET command. That said, a few directives can only be set at startup and cannot be changed during runtime, including: bind, port, unixsocket, unixsocketperm, daemonize, supervised and rename-command. 94 | 95 | Also, several of the directives can only be changed using specialized commands rather than CONFIG SET. These include: 96 | 97 | - The slaveof directive is managed during runtime with the 'SLAVEOF' command 98 | - Modules are loaded using the MODULE LOAD command 99 | 100 | Runtime configuration changes that you perform are not persisted, and configuration settings will revert to their previously-defined/default values upon the next server restart.
To persist such changes, they have to be stored in the server's configuration file - do so by either editing the file directly or using the CONFIG REWRITE command. 101 | -------------------------------------------------------------------------------- /kubernetes/redis-ruby-k8s/README.md: -------------------------------------------------------------------------------- 1 | # How to run Redis Cluster on Kubernetes? 2 | 3 | --- 4 | 5 | ### Prerequisites 6 | 7 | - Install Minikube 8 | 9 | Follow this guide: [Install minikube on local machine](https://kubernetes.io/docs/getting-started-guides/minikube/) 10 | 11 | Follow this guide: [Install kubernetes by kubeadm on cluster](https://kubernetes.io/docs/getting-started-guides/kubeadm/) 12 | 13 | - Install docker-engine 14 | 15 | Follow this guide: [Install Docker and run hello-world](https://docs.docker.com/engine/getstarted/step_one/) 16 | 17 | --- 18 | 19 | ### Clone the git repository 20 | 21 | - Clone the git repository and change the working directory to ```redis-ruby-k8s``` 22 | 23 | ```sh 24 | $ git clone https://github.com/collabnix/redisplanet 25 | 26 | $ cd redisplanet/kubernetes/redis-ruby-k8s 27 | ``` 28 | 29 | --- 30 | 31 | ### Build the redis cluster image 32 | 33 | - Use docker to build the redis cluster image 34 | 35 | ```sh 36 | $ docker build docker-images-redis-ruby/ -t redis:ruby 37 | ``` 38 | 39 | - Deploy a 6-node redis cluster to kubernetes; it will create 6 pods and 6 services in kubernetes. 40 | 41 | The Redis service uses port 7000, but port 17000 must also be open for cluster creation to work! 42 | 43 | ```sh 44 | $ kubectl create -f .
45 | deployment "redis-0" created 46 | deployment "redis-1" created 47 | deployment "redis-2" created 48 | deployment "redis-3" created 49 | deployment "redis-4" created 50 | deployment "redis-5" created 51 | service "redis-0" created 52 | service "redis-1" created 53 | service "redis-2" created 54 | service "redis-3" created 55 | service "redis-4" created 56 | service "redis-5" created 57 | ``` 58 | 59 | Check the pods: 60 | 61 | ```sh 62 | $ kubectl get pods -l app=redis 63 | NAME READY STATUS RESTARTS AGE 64 | redis-0-2560206902-b8fvk 1/1 Running 0 6s 65 | redis-1-2919540793-4pbxm 1/1 Running 0 6s 66 | redis-2-3476006974-8wtrn 1/1 Running 0 6s 67 | redis-3-3933907010-4kvnm 1/1 Running 0 6s 68 | redis-4-97822790-vk3js 1/1 Running 0 5s 69 | redis-5-555722826-w0vxs 1/1 Running 0 5s 70 | ``` 71 | 72 | Use ssh to connect pod container, copy the shell in start-cluster.sh, and run it in container: 73 | ```sh 74 | $ kubectl exec -ti redis-node1-2942478609-lqk6j -- /bin/bash 75 | root@redis-node1-2942478609-lqk6j:/data# echo 'yes' | /root/redis-trib.rb create --replicas 1 \ 76 | > `getent hosts redis-0 | awk '{ print $1 }'`:7000 \ 77 | > `getent hosts redis-1 | awk '{ print $1 }'`:7000 \ 78 | > `getent hosts redis-2 | awk '{ print $1 }'`:7000 \ 79 | > `getent hosts redis-3 | awk '{ print $1 }'`:7000 \ 80 | > `getent hosts redis-4 | awk '{ print $1 }'`:7000 \ 81 | > `getent hosts redis-5 | awk '{ print $1 }'`:7000 82 | 83 | >>> Creating cluster 84 | >>> Performing hash slots allocation on 7 nodes... 
85 | Using 3 masters: 86 | 10.0.0.134:7000 87 | 10.0.0.99:7000 88 | 10.0.0.157:7000 89 | Adding replica 10.0.0.62:7000 to 10.0.0.134:7000 90 | Adding replica 10.0.0.201:7000 to 10.0.0.99:7000 91 | Adding replica 10.0.0.149:7000 to 10.0.0.157:7000 92 | Adding replica 10.0.0.134:7000 to 10.0.0.134:7000 93 | S: a678b2e1acb26a9ddcb2200895614ebc028da621 10.0.0.201:7000 94 | replicates 436870b83b9cbc14dc1edfc3ca5210c265ad5500 95 | S: 63a60d597ed30c333b8abffe27f399d2a827d7e5 10.0.0.149:7000 96 | replicates 460a9f8276f1756953364d61c5b50bcf0a519e2d 97 | S: c247aac89ad61f52651eb5506afc7d1761066f54 10.0.0.62:7000 98 | replicates 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 99 | M: 460a9f8276f1756953364d61c5b50bcf0a519e2d 10.0.0.157:7000 100 | slots:10923-16383 (5461 slots) master 101 | M: 436870b83b9cbc14dc1edfc3ca5210c265ad5500 10.0.0.99:7000 102 | slots:5461-10922 (5462 slots) master 103 | M: 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 10.0.0.134:7000 104 | slots:0-5460 (5461 slots) master 105 | S: 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 10.0.0.134:7000 106 | replicates 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 107 | Can I set the above configuration? (type 'yes' to accept): >>> Nodes configuration updated 108 | >>> Assign a different config epoch to each node 109 | >>> Sending CLUSTER MEET messages to join the cluster 110 | Waiting for the cluster to join.. 
111 | >>> Performing Cluster Check (using node 10.0.0.201:7000) 112 | S: a678b2e1acb26a9ddcb2200895614ebc028da621 10.0.0.201:7000 113 | slots: (0 slots) slave 114 | replicates 436870b83b9cbc14dc1edfc3ca5210c265ad5500 115 | M: 436870b83b9cbc14dc1edfc3ca5210c265ad5500 172.17.0.11:7000 116 | slots:5461-10922 (5462 slots) master 117 | 1 additional replica(s) 118 | M: 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 172.17.0.12:7000 119 | slots:0-5460 (5461 slots) master 120 | 1 additional replica(s) 121 | S: 63a60d597ed30c333b8abffe27f399d2a827d7e5 172.17.0.4:7000 122 | slots: (0 slots) slave 123 | replicates 460a9f8276f1756953364d61c5b50bcf0a519e2d 124 | S: c247aac89ad61f52651eb5506afc7d1761066f54 172.17.0.6:7000 125 | slots: (0 slots) slave 126 | replicates 7dfbe35cbd42719e2e4688147fb7b4bbe6e05ea5 127 | M: 460a9f8276f1756953364d61c5b50bcf0a519e2d 172.17.0.10:7000 128 | slots:10923-16383 (5461 slots) master 129 | 1 additional replica(s) 130 | [OK] All nodes agree about slots configuration. 131 | >>> Check for open slots... 132 | >>> Check slots coverage... 133 | [OK] All 16384 slots covered. 
134 | ``` 135 | 136 | Verify that the cluster started correctly: 137 | 138 | ```sh 139 | root@redis-node1-2942478609-lqk6j:/data# redis-cli -c -h redis-2 -p 7000 140 | redis-node1-svc:7000> set k1 v1 141 | -> Redirected to slot [12706] located at 172.17.0.10:7000 142 | OK 143 | 172.17.0.10:7000> set k2 v2 144 | -> Redirected to slot [449] located at 172.17.0.12:7000 145 | OK 146 | 172.17.0.12:7000> get k1 147 | -> Redirected to slot [12706] located at 172.17.0.10:7000 148 | "v1" 149 | 172.17.0.10:7000> get k2 150 | -> Redirected to slot [449] located at 172.17.0.12:7000 151 | "v2" 152 | 172.17.0.12:7000> exit 153 | ``` 154 | -------------------------------------------------------------------------------- /sentinel/swarm/collabnix-redis-sentinel.conf: -------------------------------------------------------------------------------- 1 | # Example sentinel.conf (from http://download.redis.io/redis-stable/sentinel.conf) 2 | 3 | # *** IMPORTANT *** 4 | # 5 | # By default Sentinel will not be reachable from interfaces different than 6 | # localhost, either use the 'bind' directive to bind to a list of network 7 | # interfaces, or disable protected mode with "protected-mode no" by 8 | # adding it to this configuration file. 9 | # 10 | # Before doing that MAKE SURE the instance is protected from the outside 11 | # world via firewalling or other means. 12 | # 13 | # For example you may use one of the following: 14 | # 15 | # bind 127.0.0.1 192.168.1.1 16 | # 17 | # protected-mode no 18 | 19 | # port <sentinel-port> 20 | # The port that this sentinel instance will run on 21 | port 26379 22 | 23 | # sentinel announce-ip <ip> 24 | # sentinel announce-port <port> 25 | # 26 | # The above two configuration directives are useful in environments where, 27 | # because of NAT, Sentinel is reachable from outside via a non-local address.
28 | # 29 | # When announce-ip is provided, the Sentinel will claim the specified IP address 30 | # in HELLO messages used to gossip its presence, instead of auto-detecting the 31 | # local address as it usually does. 32 | # 33 | # Similarly when announce-port is provided and is valid and non-zero, Sentinel 34 | # will announce the specified TCP port. 35 | # 36 | # The two options don't need to be used together, if only announce-ip is 37 | # provided, the Sentinel will announce the specified IP and the server port 38 | # as specified by the "port" option. If only announce-port is provided, the 39 | # Sentinel will announce the auto-detected local IP and the specified port. 40 | # 41 | # Example: 42 | # 43 | # sentinel announce-ip $SENTINEL_IP 44 | 45 | # dir <working-directory> 46 | # Every long running process should have a well-defined working directory. 47 | # For Redis Sentinel to chdir to /tmp at startup is the simplest thing 48 | # for the process to don't interfere with administrative tasks such as 49 | # unmounting filesystems. 50 | dir /tmp 51 | 52 | # sentinel monitor <master-name> <ip> <redis-port> <quorum> 53 | # 54 | # Tells Sentinel to monitor this master, and to consider it in O_DOWN 55 | # (Objectively Down) state only if at least <quorum> sentinels agree. 56 | # 57 | # Note that whatever is the ODOWN quorum, a Sentinel will require to 58 | # be elected by the majority of the known Sentinels in order to 59 | # start a failover, so no failover can be performed in minority. 60 | # 61 | # Slaves are auto-discovered, so you don't need to specify slaves in 62 | # any way. Sentinel itself will rewrite this configuration file adding 63 | # the slaves using additional configuration options. 64 | # Also note that the configuration file is rewritten when a 65 | # slave is promoted to master. 66 | # 67 | # Note: master name should not include special characters or spaces. 68 | # The valid charset is A-z 0-9 and the three characters ".-_".
69 | sentinel monitor $REDIS_MASTER_NAME $REDIS_MASTER_HOST $REDIS_MASTER_PORT $SENTINEL_QUORUM 70 | 71 | # sentinel auth-pass <master-name> <password> 72 | # 73 | # Set the password to use to authenticate with the master and slaves. 74 | # Useful if there is a password set in the Redis instances to monitor. 75 | # 76 | # Note that the master password is also used for slaves, so it is not 77 | # possible to set a different password in masters and slaves instances 78 | # if you want to be able to monitor these instances with Sentinel. 79 | # 80 | # However you can have Redis instances without the authentication enabled 81 | # mixed with Redis instances requiring the authentication (as long as the 82 | # password set is the same for all the instances requiring the password) as 83 | # the AUTH command will have no effect in Redis instances with authentication 84 | # switched off. 85 | # 86 | # Example: 87 | # 88 | # sentinel auth-pass $REDIS_MASTER_NAME MySUPER--secret-0123passw0rd 89 | 90 | # sentinel down-after-milliseconds <master-name> <milliseconds> 91 | # 92 | # Number of milliseconds the master (or any attached slave or sentinel) should 93 | # be unreachable (as in, not acceptable reply to PING, continuously, for the 94 | # specified period) in order to consider it in S_DOWN state (Subjectively 95 | # Down). 96 | # 97 | # Default is 30 seconds. 98 | sentinel down-after-milliseconds $REDIS_MASTER_NAME $SENTINEL_DOWN_AFTER 99 | 100 | # sentinel parallel-syncs <master-name> <numslaves> 101 | # 102 | # How many slaves we can reconfigure to point to the new slave simultaneously 103 | # during the failover. Use a low number if you use the slaves to serve query 104 | # to avoid that all the slaves will be unreachable at about the same 105 | # time while performing the synchronization with the master. 106 | sentinel parallel-syncs $REDIS_MASTER_NAME $SENTINEL_PARALLEL_SYNC 107 | 108 | # sentinel failover-timeout <master-name> <milliseconds> 109 | # 110 | # Specifies the failover timeout in milliseconds.
It is used in many ways: 111 | # 112 | # - The time needed to re-start a failover after a previous failover was 113 | # already tried against the same master by a given Sentinel, is two 114 | # times the failover timeout. 115 | # 116 | # - The time needed for a slave replicating to a wrong master according 117 | # to a Sentinel current configuration, to be forced to replicate 118 | # with the right master, is exactly the failover timeout (counting since 119 | # the moment a Sentinel detected the misconfiguration). 120 | # 121 | # - The time needed to cancel a failover that is already in progress but 122 | # did not produced any configuration change (SLAVEOF NO ONE yet not 123 | # acknowledged by the promoted slave). 124 | # 125 | # - The maximum time a failover in progress waits for all the slaves to be 126 | # reconfigured as slaves of the new master. However even after this time 127 | # the slaves will be reconfigured by the Sentinels anyway, but not with 128 | # the exact parallel-syncs progression as specified. 129 | # 130 | # Default is 3 minutes. 131 | sentinel failover-timeout $REDIS_MASTER_NAME $SENTINEL_FAILOVER_TIMEOUT 132 | 133 | # SCRIPTS EXECUTION 134 | # 135 | # sentinel notification-script and sentinel reconfig-script are used in order 136 | # to configure scripts that are called to notify the system administrator 137 | # or to reconfigure clients after a failover. The scripts are executed 138 | # with the following rules for error handling: 139 | # 140 | # If script exits with "1" the execution is retried later (up to a maximum 141 | # number of times currently set to 10). 142 | # 143 | # If script exits with "2" (or an higher value) the script execution is 144 | # not retried. 145 | # 146 | # If script terminates because it receives a signal the behavior is the same 147 | # as exit code 1. 148 | # 149 | # A script has a maximum running time of 60 seconds. 
After this limit is 150 | # reached the script is terminated with a SIGKILL and the execution retried. 151 | 152 | # NOTIFICATION SCRIPT 153 | # 154 | # sentinel notification-script <master-name> <script-path> 155 | # 156 | # Call the specified notification script for any sentinel event that is 157 | # generated in the WARNING level (for instance -sdown, -odown, and so forth). 158 | # This script should notify the system administrator via email, SMS, or any 159 | # other messaging system, that there is something wrong with the monitored 160 | # Redis systems. 161 | # 162 | # The script is called with just two arguments: the first is the event type 163 | # and the second the event description. 164 | # 165 | # The script must exist and be executable in order for sentinel to start if 166 | # this option is provided. 167 | # 168 | # Example: 169 | # 170 | # sentinel notification-script $REDIS_MASTER_NAME /var/redis/notify.sh 171 | 172 | # CLIENTS RECONFIGURATION SCRIPT 173 | # 174 | # sentinel client-reconfig-script <master-name> <script-path> 175 | # 176 | # When the master changed because of a failover a script can be called in 177 | # order to perform application-specific tasks to notify the clients that the 178 | # configuration has changed and the master is at a different address. 179 | # 180 | # The following arguments are passed to the script: 181 | # 182 | # <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port> 183 | # 184 | # <state> is currently always "failover" 185 | # <role> is either "leader" or "observer" 186 | # 187 | # The arguments from-ip, from-port, to-ip, to-port are used to communicate 188 | # the old address of the master and the new address of the elected slave 189 | # (now a master). 190 | # 191 | # This script should be resistant to multiple invocations.
192 | # 193 | # Example: 194 | # 195 | # sentinel client-reconfig-script $REDIS_MASTER_NAME /var/redis/reconfig.sh 196 | -------------------------------------------------------------------------------- /install/ubuntu/18.04/README.md: -------------------------------------------------------------------------------- 1 | # Install Redis Open Source on Ubuntu 18.04 2 | 3 | ## Pre-requisites: 4 | 5 | ``` 6 | ubuntu@ts-qualification-sa-ajeet-raina-1:~$ wget http://download.redis.io/releases/redis-3.0.7.tar.g 7 | z 8 | --2019-12-03 05:08:54-- http://download.redis.io/releases/redis-3.0.7.tar.gz 9 | Resolving download.redis.io (download.redis.io)... 109.74.203.151 10 | Connecting to download.redis.io (download.redis.io)|109.74.203.151|:80... connected. 11 | HTTP request sent, awaiting response... 200 OK 12 | Length: 1375200 (1.3M) [application/x-gzip] 13 | Saving to: ‘redis-3.0.7.tar.gz’ 14 | redis-3.0.7.tar.gz 100%[==================================>] 1.31M 1.68MB/s in 0.8s 15 | 2019-12-03 05:08:55 (1.68 MB/s) - ‘redis-3.0.7.tar.gz’ saved [1375200/1375200] 16 | ubuntu@ts-qualification-sa-ajeet-raina-1:~$ tar xzf redis-3.0.7.tar.gz 17 | ubuntu@ts-qualification-sa-ajeet-raina-1:~$ cd redis-3.0.7/ 18 | ubuntu@ts-qualification-sa-ajeet-raina-1:~/redis-3.0.7$ sudo make 19 | ``` 20 | 21 | ## Install Make 22 | 23 | ``` 24 | ubuntu@ts-qualification-sa-ajeet-raina-1:~/redis-3.0.7$ sudo apt install make 25 | Reading package lists... Done 26 | Building dependency tree 27 | Reading state information... Done 28 | Suggested packages: 29 | make-doc 30 | The following NEW packages will be installed: 31 | make 32 | 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 33 | Need to get 151 kB of archives. 34 | After this operation, 365 kB of additional disk space will be used. 35 | Get:1 http://us-west1.gce.archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB] 36 | Fetched 151 kB in 0s (897 kB/s) 37 | Selecting previously unselected package make. 
38 | (Reading database ... 71027 files and directories currently installed.) 39 | Preparing to unpack .../archives/make_4.1-6_amd64.deb ... 40 | Unpacking make (4.1-6) ... 41 | Processing triggers for man-db (2.7.5-1) ... 42 | ``` 43 | 44 | 45 | If you encounter errors like the following: 46 | 47 | ``` 48 | zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory 49 | zmalloc.h:55:2: error: #error "Newer version of jemalloc required" 50 | ``` 51 | 52 | These errors are caused by files left behind after the first make command failed, so the build tree has to be cleaned. 53 | 54 | Follow the steps below to fix it: 55 | 56 | ``` 57 | sudo apt install gcc 58 | sudo make distclean 59 | ``` 60 | 61 | 62 | Then run make again; it should now succeed. 63 | 64 | 65 | 66 | ``` 67 | CC blocked.o 68 | CC hyperloglog.o 69 | CC latency.o 70 | CC sparkline.o 71 | LINK redis-server 72 | INSTALL redis-sentinel 73 | CC redis-cli.o 74 | LINK redis-cli 75 | CC redis-benchmark.o 76 | LINK redis-benchmark 77 | CC redis-check-dump.o 78 | LINK redis-check-dump 79 | CC redis-check-aof.o 80 | LINK redis-check-aof 81 | 82 | Hint: It's a good idea to run 'make test' ;) 83 | 84 | make[1]: Leaving directory '/home/ubuntu/redis-3.0.7/src' 85 | ``` 86 | 87 | ``` 88 | dockercaptain81@server-a:~/redis-3.0.7$ sudo make install 89 | cd src && make install 90 | make[1]: Entering directory '/home/dockercaptain81/redis-3.0.7/src' 91 | Hint: It's a good idea to run 'make test' ;) 92 | INSTALL install 93 | INSTALL install 94 | INSTALL install 95 | INSTALL install 96 | INSTALL install 97 | make[1]: Leaving directory '/home/dockercaptain81/redis-3.0.7/src' 98 | ``` 99 | 100 | ## Starting Redis Server 101 | 102 | 103 | 104 | 105 | ``` 106 | src/redis-server 107 | 22562:C 03 Dec 05:16:20.677 # Warning: no config file specified, using the default config.
In order to specify a config file use src/redis-server /path/to/redis.conf 108 | 22562:M 03 Dec 05:16:20.678 * Increased maximum number of open files to 10032 (it was originally set to 1024). 109 | _._ 110 | _.-``__ ''-._ 111 | _.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit 112 | .-`` .-```. ```\/ _.,_ ''-._ 113 | ( ' , .-` | `, ) Running in standalone mode 114 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379 115 | | `-._ `._ / _.-' | PID: 22562 116 | `-._ `-._ `-./ _.-' _.-' 117 | |`-._`-._ `-.__.-' _.-'_.-'| 118 | | `-._`-._ _.-'_.-' | http://redis.io 119 | `-._ `-._`-.__.-'_.-' _.-' 120 | |`-._`-._ `-.__.-' _.-'_.-'| 121 | | `-._`-._ _.-'_.-' | 122 | `-._ `-._`-.__.-'_.-' _.-' 123 | `-._ `-.__.-' _.-' 124 | `-._ _.-' 125 | `-.__.-' 126 | 127 | 22562:M 03 Dec 05:16:20.678 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 128 | 22562:M 03 Dec 05:16:20.678 # Server started, Redis version 3.0.7 129 | 22562:M 03 Dec 05:16:20.678 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 130 | 22562:M 03 Dec 05:16:20.679 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 131 | 22562:M 03 Dec 05:16:20.679 * The server is now ready to accept connections on port 6379 132 | ``` 133 | 134 | 135 | ## Shutting down Redis 136 | 137 | ``` 138 | ubuntu@ts-qualification-sa-ajeet-raina-1:~/redis-3.0.7$ src/redis-cli shutdown 139 | 22574:M 03 Dec 05:21:20.448 # User requested shutdown... 
140 | 22574:M 03 Dec 05:21:20.448 * Saving the final RDB snapshot before exiting. 141 | 22574:M 03 Dec 05:21:20.475 * DB saved on disk 142 | 22574:M 03 Dec 05:21:20.475 # Redis is now ready to exit, bye bye... 143 | [1]+ Done src/redis-server 144 | ``` 145 | 146 | ## How to change Redis-server Port 147 | 148 | ``` 149 | ubuntu@ts-qualification-sa-ajeet-raina-1:~/redis-3.0.7$ src/redis-server redis.conf 150 | 22696:M 03 Dec 05:23:51.105 * Increased maximum number of open files to 10032 (it was originally set 151 | to 1024). 152 | _._ 153 | _.-``__ ''-._ 154 | _.-`` `. `_. ''-._ Redis 3.0.7 (00000000/0) 64 bit 155 | .-`` .-```. ```\/ _.,_ ''-._ 156 | ( ' , .-` | `, ) Running in standalone mode 157 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6666 158 | | `-._ `._ / _.-' | PID: 22696 159 | `-._ `-._ `-./ _.-' _.-' 160 | |`-._`-._ `-.__.-' _.-'_.-'| 161 | | `-._`-._ _.-'_.-' | http://redis.io 162 | `-._ `-._`-.__.-'_.-' _.-' 163 | |`-._`-._ `-.__.-' _.-'_.-'| 164 | | `-._`-._ _.-'_.-' | 165 | `-._ `-._`-.__.-'_.-' _.-' 166 | `-._ `-.__.-' _.-' 167 | `-._ _.-' 168 | `-.__.-' 169 | 22696:M 03 Dec 05:23:51.105 # WARNING: The TCP backlog setting of 511 cannot be enforced because /pr 170 | oc/sys/net/core/somaxconn is set to the lower value of 128. 171 | 22696:M 03 Dec 05:23:51.106 # Server started, Redis version 3.0.7 172 | 22696:M 03 Dec 05:23:51.106 # WARNING overcommit_memory is set to 0! Background save may fail under 173 | low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then 174 | reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 175 | 22696:M 03 Dec 05:23:51.106 # WARNING you have Transparent Huge Pages (THP) support enabled in your 176 | kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the comma 177 | nd 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.lo 178 | cal in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 179 | 22696:M 03 Dec 05:23:51.119 * DB loaded from disk: 0.014 seconds 180 | ``` 181 | -------------------------------------------------------------------------------- /enterprise/modules/redisai/install/README.md: -------------------------------------------------------------------------------- 1 | # Installing RedisAI on Jetson Nano in Easy steps 2 | 3 | - RedisAI is a Redis module for executing Deep Learning/Machine Learning models and managing their data. 4 | - Its purpose is being a “workhorse” for model serving, by providing out-of-the-box support for popular DL/ML frameworks and unparalleled performance. 5 | - RedisAI both simplifies the deployment and serving of graphs by leveraging on Redis’ production-proven infrastructure, as well as maximizes computation throughput by adhering to the principle of data locality. 6 | 7 | This introduction is intended to present the core concepts it uses and the functionality it provides. 8 | 9 | 10 |
11 | ### How RedisAI works 12 | 13 |
14 | 15 | RedisAI bundles together best-of-breed technologies for delivering stable and performant computation graph serving. Every DL/ML framework ships with a runtime for executing the models developed with it, and the common practice for serving these is building a simple server around them. 16 | 17 | RedisAI aims to be that server, saving you from the need of installing the backend you’re using and developing a server for it. By itself that does not justify RedisAI’s existence so there’s more to it. Because RedisAI is implemented as a Redis module it automatically benefits from the server’s capabilities: be it Redis’ native data types, its robust eco-system of clients, high-availability, persistence, clustering, and Enterprise support. 18 | 19 | Because Redis is an in-memory data structure server RedisAI uses it for storing all of its data. The main data type supported by RedisAI is the Tensor that is the standard representation of data in the DL/ML domain. Because tensors are stored in the memory space of the Redis server, they are readily accessible to any of RedisAI’s backend libraries at minimal latency. 20 | 21 | The locality of data, which is tensor data in adjacency to DL/ML models backends, allows RedisAI to provide optimal performance when serving models. It also makes it a perfect choice for deploying DL/ML models in production and allowing them to be used by any application. 22 | 23 | Furthermore, RedisAI is also an optimal testbed for models as it allows the parallel execution of multiple computation graphs and, in future versions, assessing their respective performance in real-time. 24 |
25 | 26 |
### Data structures 27 | 28 |
29 | RedisAI provides the following data structures: 30 | 31 | - Tensor: Represents an n-dimensional array of values 32 | - Model: Represents a computation graph by one of the supported DL/ML framework backends 33 | - Script: Represents a TorchScript program 34 | 35 | ### DL/ML backends 36 | 37 | RedisAI supports DL/ML identifiers and their respective backend libraries, including: 38 | 39 | - TF: The TensorFlow backend 40 | - TFLITE: The TensorFlow Lite backend 41 | - TORCH: The PyTorch backend 42 | - ONNX: ONNXRuntime backend 43 | 44 | A complete list of supported backends is in the release notes for each version. 45 | 46 | Backend libraries are dynamically loaded as needed, but can also be loaded during booting or at runtime. Refer to these pages for more information on loading backends: 47 | 48 | [AI.CONFIG command](https://oss.redislabs.com/redisai/commands/#aiconfig?_ga=2.121196299.224366325.1600062424-1938870273.1595669189)
49 | [Backend configuration](https://oss.redislabs.com/redisai/configuration/#backend?_ga=2.121196299.224366325.1600062424-1938870273.1595669189) 50 |
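For instance, loading a backend at runtime is a single AI.CONFIG LOADBACKEND call, naming the backend identifier and the path to its shared library. The identifier (TORCH) is real; the library path below is illustrative and depends on where your build placed the backend libraries:

```
127.0.0.1:6379> AI.CONFIG LOADBACKEND TORCH backends/redisai_torch/redisai_torch.so
OK
```

Relative paths are resolved against the module's backends path, so loading at server start works the same way.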
51 | 52 | 53 | 54 | 55 | ## Pre-requisite 56 | 57 | - Jetson Nano Board 58 | - Power Adapter(5V 4A) 59 | - WiFi Module 60 | - Redis 6.x 61 | 62 | RedisAI requires Redis 6.x. You might have to compile a newer version of Redis to get it up and running. 63 | 64 | ### Installing Redis 6.x 65 | 66 | 67 | 68 | Follow the steps below: 69 | 70 | ``` 71 | $ wget http://download.redis.io/releases/redis-6.0.8.tar.gz 72 | $ tar xzf redis-6.0.8.tar.gz 73 | $ cd redis-6.0.8 74 | $ make 75 | $ sudo cp src/redis-server /usr/local/bin/ 76 | $ sudo cp src/redis-cli /usr/local/bin/ 77 | 78 | ``` 79 | If you are not able to stop the Redis service otherwise, shut it down with redis-cli: 80 | 81 | ``` 82 | redis-cli 83 | 127.0.0.1:6379> shutdown 84 | ``` 85 | 86 | 87 | ## Steps 88 | 89 | ### Clone the repository 90 | 91 | ``` 92 | git clone --recursive https://github.com/RedisAI/RedisAI 93 | ``` 94 | 95 | ``` 96 | cd RedisAI 97 | ``` 98 | 99 | Download all the scripts below and place them under the RedisAI directory. 100 | 101 | - [install.sh](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisai/install/install.sh) 102 | - [run.sh](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisai/install/run.sh) 103 | - [test.sh](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisai/install/test.sh) 104 | 105 | ### Building RedisAI 106 | 107 | 108 | ``` 109 | sh install.sh 110 | ``` 111 | 112 | 113 | 114 | It will take some time, depending on your internet speed.
115 | 116 | 117 | ### Running RedisAI with PyTorch 118 | 119 | Before you execute the below script, provide sufficient permission to redisai.so 120 | 121 | ``` 122 | /RedisAI/install-cpu$ sudo chmod 755 redisai.so 123 | ``` 124 | 125 | ``` 126 | sudo sh run.sh 127 | 14438:C 20 Sep 2020 14:06:50.321 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 128 | 14438:C 20 Sep 2020 14:06:50.321 # Redis version=6.0.6, bits=64, commit=00000000, modified=0, pid=14438, just started 129 | 14438:C 20 Sep 2020 14:06:50.321 # Configuration loaded 130 | 14438:M 20 Sep 2020 14:06:50.323 * Increased maximum number of open files to 10032 (it was originally set to 1024). 131 | _._ 132 | _.-``__ ''-._ 133 | _.-`` `. `_. ''-._ Redis 6.0.6 (00000000/0) 64 bit 134 | .-`` .-```. ```\/ _.,_ ''-._ 135 | ( ' , .-` | `, ) Running in standalone mode 136 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379 137 | | `-._ `._ / _.-' | PID: 14438 138 | `-._ `-._ `-./ _.-' _.-' 139 | |`-._`-._ `-.__.-' _.-'_.-'| 140 | | `-._`-._ _.-'_.-' | http://redis.io 141 | `-._ `-._`-.__.-'_.-' _.-' 142 | |`-._`-._ `-.__.-' _.-'_.-'| 143 | | `-._`-._ _.-'_.-' | 144 | `-._ `-._`-.__.-'_.-' _.-' 145 | `-._ `-.__.-' _.-' 146 | `-._ _.-' 147 | `-.__.-' 148 | 149 | 14438:M 20 Sep 2020 14:06:50.325 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 150 | 14438:M 20 Sep 2020 14:06:50.325 # Server initialized 151 | 14438:M 20 Sep 2020 14:06:50.325 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 
152 | 14438:M 20 Sep 2020 14:06:50.326 * Redis version found by RedisAI: 6.0.6 - oss 153 | 14438:M 20 Sep 2020 14:06:50.326 * RedisAI version 999999, git_sha=unknown 154 | 14438:M 20 Sep 2020 14:06:50.326 * Module 'ai' loaded from install-cpu/redisai.so 155 | 14438:M 20 Sep 2020 14:06:50.327 * Loading RDB produced by version 6.0.6 156 | 14438:M 20 Sep 2020 14:06:50.327 * RDB age 151321 seconds 157 | 14438:M 20 Sep 2020 14:06:50.327 * RDB memory usage when created 0.80 Mb 158 | 14438:M 20 Sep 2020 14:06:50.327 * DB loaded from disk: 0.000 seconds 159 | 14438:M 20 Sep 2020 14:06:50.327 * Ready to accept connections 160 | 161 | ``` 162 | 163 | 164 | ## Testing RedisAI 165 | 166 | 167 | 168 | ``` 169 | sh test.sh 170 | 171 | + redis-cli -x AI.MODELSET m TORCH GPU 172 | + redis-cli AI.TENSORSET a FLOAT 2 2 VALUES 2 3 2 3 173 | OK 174 | + redis-cli AI.TENSORSET b FLOAT 2 2 VALUES 2 3 2 3 175 | OK 176 | .... 177 | .... 178 | ajeetraina@ajeetraina-desktop:~/redisaiscript$ 179 | 180 | ``` 181 | 182 | ## Using RedisAI tensors 183 | 184 | A tensor is an n-dimensional array and is the standard representation for data in DL/ML workloads. RedisAI adds to Redis a Tensor data structure that implements the tensor type. Like any datum in Redis, RedisAI’s Tensors are identified by key names. 185 | 186 | Creating new RedisAI tensors is done with the AI.TENSORSET command. For example, consider the tensor: 187 | 188 | tensorA 189 | 190 | We can create the RedisAI Tensor with the key name ‘tA’ with the following command: 191 | 192 | ``` 193 | AI.TENSORSET tA FLOAT 2 VALUES 2 3 194 | ``` 195 | 196 | Copy the command to your CLI and press Enter to execute it. It should look as follows: 197 | 198 | ``` 199 | $ redis-cli 200 | 127.0.0.1:6379> AI.TENSORSET tA FLOAT 2 VALUES 2 3 201 | OK 202 | ``` 203 | 204 | The reply ‘OK’ means that the operation was successful.
We’ve called the AI.TENSORSET command to set the key named ‘tA’ with the tensor’s data, but the name could have been any string value. The FLOAT argument specifies the type of values that the tensor stores, and in this case a single-precision floating-point. After the type argument comes the tensor’s shape as a list of its dimensions, or just a single dimension of 2. 205 | 206 | The VALUES argument tells RedisAI that the tensor’s data will be given as a sequence of numeric values and in this case the numbers 2 and 3. This is useful for development purposes and creating small tensors, however for practical purposes the AI.TENSORSET command also supports importing data in binary format. 207 | 208 | The Redis key ‘tA’ now stores a RedisAI Tensor. We can verify that using standard Redis commands such as EXISTS and TYPE: 209 | 210 | ``` 211 | 127.0.0.1:6379> EXISTS tA 212 | (integer) 1 213 | 127.0.0.1:6379> TYPE tA 214 | AI_TENSOR 215 | ``` 216 | 217 | Using AI.TENSORSET with the same key name, as long as it already stores a RedisAI Tensor, will overwrite the existing data with the new. To delete a RedisAI tensor, use the Redis DEL command. 218 | 219 | RedisAI Tensors are used as inputs and outputs in the execution of models and scripts. For reading the data from a RedisAI Tensor value there is the AI.TENSORGET command: 220 | 221 | ``` 222 | 127.0.0.1:6379> AI.TENSORGET tA VALUES 223 | 1) INT8 224 | 2) 1) (integer) 2 225 | 3) 1) (integer) 2 226 | 1) (integer) 3 227 | 228 | ``` 229 | 230 | ## Reference: 231 | 232 | - https://oss.redislabs.com/redisai/ 233 | -------------------------------------------------------------------------------- /enterprise/modules/redisgraph/README.md: -------------------------------------------------------------------------------- 1 | # Redis Graph 2 | 3 | A Graph Database built on top of Redis. 
4 | 5 | RedisGraph is the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix in graphs and linear algebra to query the graph. 6 | 7 | ## Properties: 8 | 9 | - Based on the Property Graph Model 10 | - Nodes (vertices) and Relationships (edges) that may have attributes 11 | - Nodes that can be labeled 12 | - Relationships have a relationship type 13 | - Graphs represented as sparse adjacency matrices 14 | - Cypher as query language 15 | - Cypher queries translated into linear algebra expressions 16 | 17 | 18 | ## Why is it needed? 19 | 20 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisgraph/Screen%20Shot%202020-03-08%20at%208.50.26%20PM.png) 21 | 22 | - A social network like Facebook might use it to suggest new friends 23 | - Companies like Waze might use it to find the quickest route from home to work 24 | - A company might use a graph for network load balancing 25 | 26 | RedisGraph is based on GraphBLAS technology. 27 | 28 | ## Why GraphBLAS? 29 | 30 | - Traversing nodes and edges one at a time leaves no scope for library optimization 31 | - Linear algebra: “bulk” work can be handed to the library 32 | - Let the experts write the library kernels: fast, robust, portable, performant 33 | - Composable linear algebra: associative, distributive, (AB)^T = B^T A^T 34 | 35 | 36 | ## What else can you do?
37 | 38 | - Further performance boost with GPUs 39 | - True distributed graphs with linear algebra operations 40 | 41 | # Using RedisGraph with Docker 42 | 43 | 44 | ``` 45 | $ docker run -dit -p 6379:6379 --rm redislabs/redisgraph 46 | Unable to find image 'redislabs/redisgraph:latest' locally 47 | latest: Pulling from redislabs/redisgraph 48 | bc51dd8edc1b: Pull complete 49 | 37d80eb324ee: Pull complete 50 | 392b7748dfaf: Pull complete 51 | 48df82c3534d: Pull complete 52 | 2ec2bb0b4b0e: Pull complete 53 | 1302bce0b2cb: Pull complete 54 | ed3429228972: Pull complete 55 | 922f57f50ba0: Pull complete 56 | Digest: sha256:62f59b3bc19f4bd56f61cc1c6ac7b1ba6cd9fc597aba17db5481ee9c547617f2 57 | Status: Downloaded newer image for redislabs/redisgraph:latest 58 | 1:C 08 Mar 2020 13:30:51.299 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 59 | 1:C 08 Mar 2020 13:30:51.299 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=1, just started 60 | 1:C 08 Mar 2020 13:30:51.299 # Configuration loaded 61 | _._ 62 | _.-``__ ''-._ _.-`` `. `_. ''-._ Redis 5.0.7 (00000000/0) 64 bit .-`` .-```. ```\/ _.,_ ''-._ 63 | ( ' , .-` | `, ) Running in standalone mode 64 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379 65 | | `-._ `._ / _.-' | PID: 1 66 | `-._ `-._ `-./ _.-' _.-' 67 | |`-._`-._ `-.__.-' _.-'_.-'| 68 | | `-._`-._ _.-'_.-' | http://redis.io 69 | `-._ `-._`-.__.-'_.-' _.-' 70 | |`-._`-._ `-.__.-' _.-'_.-'| 71 | | `-._`-._ _.-'_.-' | 72 | `-._ `-._`-.__.-'_.-' _.-' 73 | `-._ `-.__.-' _.-' 74 | `-._ _.-' 75 | `-.__.-' 76 | 77 | 1:M 08 Mar 2020 13:30:51.315 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 78 | 1:M 08 Mar 2020 13:30:51.316 # Server initialized 79 | 1:M 08 Mar 2020 13:30:51.317 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.
To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 80 | 1:M 08 Mar 2020 13:30:51.318 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 81 | 1:M 08 Mar 2020 13:30:51.333 * Thread pool created, using 8 threads. 82 | 1:M 08 Mar 2020 13:30:51.334 * Module 'graph' loaded from /usr/lib/redis/modules/redisgraph.so 83 | 1:M 08 Mar 2020 13:30:51.334 * Ready to accept connections 84 | ``` 85 | 86 | ``` 87 | $ docker ps 88 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 89 | 31f1c9d2daa8 redislabs/redisgraph "docker-entrypoint.s…" 11 seconds ago Up 10 seconds 0.0.0.0:6379->6379/tcp stupefied_dijkstra 90 | ``` 91 | 92 | 93 | # Example #1: Sana loves Sid 94 | 95 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisgraph/Screen%20Shot%202020-03-08%20at%2010.29.11%20PM.png) 96 | 97 | We can create this graph using a single command: 98 | 99 | 100 | ``` 101 | GRAPH.QUERY SanalovesSid "CREATE (Sana:Person {name: 'Sana'})-[:loves]->(Sid:Person {name: 'Sid'})" 102 | ``` 103 | 104 | ``` 105 | root@5be381521ac3:/data# redis-cli 106 | 127.0.0.1:6379> GRAPH.QUERY SanalovesSid "CREATE (Sana:Person {name: 'Sana'})-[:loves]->(Sid:Person {name: 'Sid'})" 107 | 1) 1) "Labels added: 1" 108 | 2) "Nodes created: 2" 109 | 3) "Properties set: 2" 110 | 4) "Relationships created: 1" 111 | 5) "Query internal execution time: 1.974615 milliseconds" 112 | ``` 113 | 114 | 115 | 116 | When using redis-cli, queries follow the format GRAPH.QUERY <key> "<query>".
In RedisGraph, a graph is stored in a Redis key (in this case called “SanalovesSid”) with the special type graphdata, so the key must always be specified in queries. The query itself is the part between double quotes, and uses a language called Cypher. Cypher is also used by Neo4j among other software, and RedisGraph implements a subset of it. 117 | 118 | Cypher represents nodes and relationships using a sort of ASCII art. Nodes are represented by round brackets (parentheses), and relationships are represented by square brackets. The arrow indicates the direction of the relationship. RedisGraph at present does not support undirected relationships. When you run such a command, Redis provides output indicating what happened, as shown above. 119 | 120 | Since our graph has been created, we can start running queries against it. For this, we use the MATCH keyword: 121 | 122 | 123 | ``` 124 | 127.0.0.1:6379> GRAPH.QUERY SanalovesSid "MATCH (x) RETURN x" 125 | 1) 1) "x" 126 | 2) 1) 1) 1) 1) "id" 127 | 2) (integer) 0 128 | 2) 1) "labels" 129 | 2) 1) "Person" 130 | 3) 1) "properties" 131 | 2) 1) 1) "name" 132 | 2) "Sana" 133 | 2) 1) 1) 1) "id" 134 | 2) (integer) 1 135 | 2) 1) "labels" 136 | 2) 1) "Person" 137 | 3) 1) "properties" 138 | 2) 1) 1) "name" 139 | 2) "Sid" 140 | 3) 1) "Query internal execution time: 0.588304 milliseconds" 141 | 127.0.0.1:6379> 142 | ``` 143 | 144 | As you can see, this has given us the whole structure of each node. If we just want to get something specific, such as the name, then we can specify it in the RETURN clause: 145 | 146 | ``` 147 | 127.0.0.1:6379> GRAPH.QUERY SanalovesSid "MATCH (x) RETURN x.name " 148 | 1) 1) "x.name" 149 | 2) 1) 1) "Sana" 150 | 2) 1) "Sid" 151 | 3) 1) "Query internal execution time: 0.714206 milliseconds" 152 | ``` 153 | 154 | We can also query based on relationships.
Let’s see who loves who: 155 | 156 | ``` 157 | 127.0.0.1:6379> GRAPH.QUERY SanalovesSid "MATCH (x)-[:loves]->(y) RETURN x.name, y.name" 158 | 1) 1) "x.name" 159 | 2) "y.name" 160 | 2) 1) 1) "Sana" 161 | 2) "Sid" 162 | 3) 1) "Query internal execution time: 1.080608 milliseconds" 163 | ``` 164 | 165 | Here we'll quickly create a small graph representing a subset of motorcycle riders and teams taking part in the MotoGP league. Once created, we'll start querying our data. 166 | 167 | ``` 168 | $ docker exec -it 14eb bash 169 | root@14eb75e19d2b:/data# redis-cli 170 | ``` 171 | 172 | ``` 173 | $ redis-cli 174 | 127.0.0.1:6379> GRAPH.QUERY MotoGP "CREATE (:Rider {name:'Valentino Rossi'})-[:rides]->(:Team {name:'Yamaha'}), (:Rider {name:'Dani Pedrosa'})-[:rides]->(:Team {name:'Honda'}), (:Rider {name:'Andrea Dovizioso'})-[:rides]->(:Team {name:'Ducati'})" 175 | ``` 176 | 177 | ``` 178 | 1) 1) Labels added: 2 179 | 2) Nodes created: 6 180 | 3) Properties set: 6 181 | 4) Relationships created: 3 182 | 5) "Query internal execution time: 0.399000 milliseconds" 183 | ``` 184 | 185 | Now that our MotoGP graph is created, we can start asking questions. For example: Who's riding for team Yamaha? 186 | 187 | ``` 188 | 127.0.0.1:6379> GRAPH.QUERY MotoGP "MATCH (r:Rider)-[:rides]->(t:Team) WHERE t.name = 'Yamaha' RETURN r.name, t.name" 189 | 1) 1) "r.name" 190 | 2) "t.name" 191 | 2) 1) 1) "Valentino Rossi" 192 | 2) "Yamaha" 193 | 3) 1) "Query internal execution time: 0.625399 milliseconds" 194 | ``` 195 | 196 | How many riders represent team Ducati? 197 | 198 | ``` 199 | 127.0.0.1:6379> GRAPH.QUERY MotoGP "MATCH (r:Rider)-[:rides]->(t:Team {name:'Ducati'}) RETURN count(r)" 200 | ``` 201 | 202 | ``` 203 | 1) 1) "count(r)" 204 | 2) 1) 1) (integer) 1 205 | 3) 1) "Query internal execution time: 0.624435 milliseconds" 206 | ``` 207 | 208 | Please Note: If you want to visualize Redis Graph, I would recommend using RedisInsight. 
But for visualizing graphs, you will need to integrate the RedisGraph module into that Redis Enterprise Docker container. 209 | 210 | 211 | # Concept: 212 | 213 | G(V,E) - Vertices & Edges 214 | 215 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisgraph/redisgraph1.png) 216 | 217 | Building Blocks 218 | 219 | - Nodes 220 | - Relationships 221 | - Labels / Relationship Types 222 | - Properties 223 | 224 | Graph DB - Database with a Graph API 225 | Native Graph DB - Graph DB + Index-Free Adjacency (O(1) vs O(log n)) 226 | 227 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisgraph/Screen%20Shot%202020-03-08%20at%208.24.09%20PM.png) 228 | 229 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/enterprise/modules/redisgraph/Screen%20Shot%202020-03-08%20at%208.26.38%20PM.png) 230 | 231 | 232 | ## Node 233 | Simply a reference to a Redis Hash 234 | 235 | A node called "Linus Torvalds" pointing to a Redis Hash with properties, as shown below: 236 | 237 | 238 | Linus Torvalds ================> Linus Torvalds { 239 | First_Name: Linus 240 | Age: "59" 241 | } 242 | 243 | ## Relationships 244 | 245 | 246 | Once we have nodes, we would like to connect them: 247 | 248 | Linus =========== VISIT ===========> Finland 249 | 250 | ## How to store a Graph in Redis? 251 | 252 | With the help of a Hexastore: 253 | 254 | ``` 255 | Subject Predicate Object 256 | S P O 257 | ``` 258 | 259 | 260 | We store all 6 permutations of the relationship: 261 | 262 | ``` 263 | SPO 264 | SOP 265 | OPS Linus VISIT Finland 266 | OSP (S) (P) (O) 267 | PSO 268 | POS 269 | ``` 270 | 271 | -------------------------------------------------------------------------------- /oss/feature/in-memory/README.md: -------------------------------------------------------------------------------- 1 | # What do you mean when you say "in-memory database"?
2 | 3 | - Redis is an in-memory but persistent-on-disk database, so it represents a different trade-off where very high write and read speed is achieved with the limitation of data sets that can't be larger than memory. 4 | - Another advantage of in-memory databases is that the memory representation of complex data structures is much simpler to manipulate compared to the same data structure on disk, so Redis can do a lot, with little internal complexity. 5 | - At the same time the two on-disk storage formats (RDB and AOF) don't need to be suitable for random access, so they are compact and always generated in an append-only fashion (even the AOF log rotation is an append-only operation, since the new version is generated from the copy of data in memory). 6 | 7 | 8 | - In Redis, all data has to be in memory. 9 | - This is the point which makes it totally different from other NoSQL databases. 10 | - Usually when you access and read some data in a database, you don't know if the data is in memory (cache) or not, but in the case of Redis, it's guaranteed that all data is in memory. 11 | - Writing to disk is optional, which you can think of as keeping the main copy in memory and a kind of backup on disk. If you suddenly turn off the server, you may lose any data written after the last save to disk. 12 | 13 | - And of course the advantage of this is performance. Since all data is in RAM, it's incredibly fast. 14 | 15 | # Similarities between Memcached and Redis 16 | 17 | Let's start with similarities first... 18 | 19 | 20 | - Both Memcached and Redis serve as in-memory, key-value data stores, although Redis is more accurately described as a data structure store. 21 | - Both Memcached and Redis belong to the NoSQL family of data management solutions. 22 | - Both are based on a key-value data model. 23 | - Both keep all data in RAM, which of course makes them supremely useful as a caching layer.
24 | - In terms of performance, the two data stores are also remarkably similar, exhibiting almost identical characteristics (and metrics) with respect to throughput and latency. 25 | 26 | - Both Memcached and Redis are mature and hugely popular open source projects. 27 | - Memcached was originally developed by Brad Fitzpatrick in 2003 for the LiveJournal website. 28 | - Since then, Memcached has been rewritten in C (the original implementation was in Perl) and released as open source, and it has become a cornerstone of modern Web applications. 29 | - Current development of Memcached is focused on stability and optimizations rather than adding new features. 30 | - Redis was created by Salvatore Sanfilippo in 2009, and Sanfilippo remains the lead developer of the project today. 31 | - Redis is sometimes described as “Memcached on steroids,” which is hardly surprising considering that parts of Redis were built in response to lessons learned from using Memcached. 32 | - Redis has more features than Memcached and is, thus, more powerful and flexible. 33 | 34 | 35 | # Why are Memcached and Redis so popular? 36 | 37 | - Not only are they extremely effective, they’re also relatively simple. 38 | - Getting started with either Memcached or Redis is considered easy work for a developer. 39 | - It takes only a few minutes to set up and get them working with an application. 40 | - Thus, a small investment of time and effort can have an immediate, dramatic impact on performance—usually by orders of magnitude. 41 | 42 | # When to use Memcached 43 | 44 | - Memcached could be preferable when caching relatively small and static data, such as HTML code fragments. 45 | - Memcached’s internal memory management, while not as sophisticated as that of Redis, is more efficient in the simplest use cases because it consumes comparatively less memory for metadata.
46 | - Strings (the only data type supported by Memcached) are ideal for storing data that is only read, because strings require no further processing. 47 | 48 | - Large data sets often involve serialized data, which always requires more space to store. 49 | - While Memcached is effectively limited to storing data in its serialized form, the data structures in Redis can store any aspect of the data natively, thus reducing serialization overhead. 50 | 51 | - The second scenario in which Memcached has an advantage over Redis is in scaling. 52 | - Because Memcached is multithreaded, you can easily scale up by giving it more computational resources, but you will lose part or all of the cached data (depending on whether you use consistent hashing). 53 | - Redis, which is mostly single-threaded, can scale horizontally via clustering without loss of data. 54 | - Clustering is an effective scaling solution, but it is comparatively more complex to set up and operate. 55 | 56 | # When to use Redis 57 | 58 | - You’ll almost always want to use Redis because of its data structures. 59 | - With Redis as a cache, you gain a lot of power (such as the ability to fine-tune cache contents and durability) and greater efficiency overall. 60 | - Once you use the data structures, the efficiency boost becomes tremendous for specific application scenarios. 61 | 62 | - Redis’ superiority is evident in almost every aspect of cache management. 63 | - Caches employ a mechanism called data eviction to make room for new data by deleting old data from memory. 64 | - Memcached’s data eviction mechanism employs a Least Recently Used algorithm and somewhat arbitrarily evicts data that’s similar in size to the new data. 65 | 66 | - Redis, by contrast, allows for fine-grained control over eviction, letting you choose from six different eviction policies. Redis also employs more sophisticated approaches to memory management and eviction candidate selection.
67 | - Redis supports both lazy and active eviction, evicting data either only when more space is needed (lazy) or proactively (active). 68 | 69 | - Redis gives you much greater flexibility regarding the objects you can cache. 70 | - While Memcached limits key names to 250 bytes and works with plain strings only, Redis allows key names and values to be as large as 512MB each, and they are binary safe. 71 | - Plus, Redis has five primary data structures to choose from, opening up a world of possibilities to the application developer through intelligent caching and manipulation of cached data. 72 | 73 | # Redis for data persistence 74 | 75 | - Using Redis data structures can simplify and optimize several tasks—not only while caching, but even when you want the data to be persistent and always available. 76 | - For example, instead of storing objects as serialized strings, developers can use a Redis Hash to store an object’s fields and values, and manage them using a single key. 77 | - Redis Hash saves developers the need to fetch the entire string, deserialize it, update a value, reserialize the object, and replace the entire string in the cache with its new value for every trivial update—that means lower resource consumption and increased performance. 78 | 79 | - Other data structures offered by Redis (such as lists, sets, sorted sets, hyperloglogs, bitmaps, and geospatial indexes) can be used to implement even more complex scenarios. 80 | - Sorted sets for time-series data ingestion and analysis is another example of a Redis data structure that offers enormously reduced complexity and lower bandwidth consumption. 81 | 82 | - Another important advantage of Redis is that the data it stores isn’t opaque, so the server can manipulate it directly. 83 | - A considerable share of the 180-plus commands available in Redis are devoted to data processing operations and embedding logic in the data store itself via server-side Lua scripting.
84 | - These built-in commands and user scripts give you the flexibility of handling data processing tasks directly in Redis without having to ship data across the network to another system for processing. 85 | 86 | - Redis offers optional and tunable data persistence designed to bootstrap the cache after a planned shutdown or an unplanned failure. 87 | - While we tend to regard the data in caches as volatile and transient, persisting data to disk can be quite valuable in caching scenarios. 88 | - Having the cache’s data available for loading immediately after restart allows for much shorter cache warm-up and removes the load involved in repopulating and recalculating cache contents from the primary data store. 89 | 90 | 91 | Redis and Memcached are both in-memory data storage systems. Memcached is a high-performance distributed memory cache service, and Redis is an open-source key-value store. Similar to Memcached, Redis stores most of the data in the memory. It supports operations on various data types including strings, hash tables, and linked lists among others. The Alibaba Cloud ApsaraDB family supports these two popular data storage systems: ApsaraDB for Redis and ApsaraDB for Memcache. 92 | 93 | ## Redis Vs Memcached 94 | 95 | | Redis | Memcached | 96 | | ------------- |:-------------:| 97 | | Redis only uses single cores | Memcached utilizes multiple cores | 98 | | Redis supports much richer data types, including String, Hash, List, Set and Sorted Set | Memcached only supports data records of the simple key-value structure | 99 | | Redis supports server-end data operations, and owns more data structures and supports richer data operations. In Redis, these complicated operations are as efficient as the general GET/SET operations. Therefore, if you need the cache to support more complicated structures and operations, Redis is a good choice | In Memcached, you usually need to copy the data to the client end for similar changes and then set the data back. 
The result is that this greatly increases network IO counts and data sizes. | 100 | | But if Redis adopts the hash structure, it will have a higher memory utilization rate than Memcached thanks to its combined compression mode | Memcached has a higher memory utilization rate for simple key-value storage. | 101 | | Although Redis has also made some optimizations for storing big data, it is still inferior to Memcached | Memcached outperforms Redis for storing data of 100 KB or above. | 102 | 103 | 104 | 105 | ## Reference: [Link](https://medium.com/@Alibaba_Cloud/redis-vs-memcached-in-memory-data-storage-systems-3395279b0941) 106 | 107 | 108 | ## Redis in-memory data replication 109 | 110 | - Redis can also replicate the data that it manages. 111 | - Replication can be used for implementing a highly available cache setup that can withstand failures and provide uninterrupted service to the application. 112 | - A cache failure falls only slightly short of application failure in terms of the impact on user experience and application performance, so having a proven solution that guarantees the cache’s contents and service availability is a major advantage in most cases. 113 | 114 | - Last but not least, in terms of operational visibility, Redis provides a slew of metrics and a wealth of introspective commands with which to monitor and track usage and abnormal behavior. 115 | - Real-time statistics about every aspect of the database, the display of all commands being executed, the listing and managing of client connections—Redis has all that and more. 116 | 117 | - When developers realize the effectiveness of Redis’s persistence and in-memory replication capabilities, they often use it as a first-responder database, usually to analyze and process high-velocity data and provide responses to the user while a secondary (often slower) database maintains a historical record of what happened. 118 | - When used in this manner, Redis can also be ideal for analytics use cases.
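The first-responder pattern described above can be sketched in a few lines. This is an illustrative mock only: plain Python containers stand in for Redis (the fast store) and the secondary historical database, and the function names are hypothetical.

```python
import time

# Stand-ins for the two stores: a fast in-memory "first responder"
# (Redis in practice) and a slower secondary store keeping full history.
fast_store = {}      # answers user-facing reads immediately
history_log = []     # historical record for later analysis

def record_event(key: str, value: float) -> None:
    # Write to the fast store first so reads see fresh data at once...
    fast_store[key] = value
    # ...then append to the historical record maintained by the secondary store.
    history_log.append((time.time(), key, value))

def current_value(key: str) -> float:
    # User-facing reads never touch the slow store.
    return fast_store[key]

record_event("sensor:42", 21.5)
record_event("sensor:42", 22.0)
print(current_value("sensor:42"))   # 22.0 - latest value, served from memory
print(len(history_log))             # 2    - full history retained
```

In a real deployment the dict would be a Redis client and the log an insert into the secondary database, but the division of labor is the same: serve the latest state from memory, keep the record of what happened elsewhere.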
119 | 120 | 121 | # Redis is single-threaded. How do you get maximum throughput? 122 | 123 | A shard is an open-source Redis instance. Redis, being a single-threaded process, runs on one CPU core. You can maximize your throughput by running a shard on each CPU core. 124 | -------------------------------------------------------------------------------- /advancedapi/README.md: -------------------------------------------------------------------------------- 1 | # INFO 2 | 3 | - The INFO command returns information and statistics about the server in a format that is simple to parse by computers and easy to read by humans. 4 | 5 | 6 | The optional parameter can be used to select a specific section of information: 7 | - server: General information about the Redis server 8 | - clients: Client connections section 9 | - memory: Memory consumption related information 10 | - persistence: RDB and AOF related information 11 | - stats: General statistics 12 | - replication: Master/replica replication information 13 | - cpu: CPU consumption statistics 14 | - commandstats: Redis command statistics 15 | - cluster: Redis Cluster section 16 | - keyspace: Database related statistics 17 | 18 | It can also take the following values: 19 | - all: Return all sections 20 | - default: Return only the default set of sections 21 | 22 | When no parameter is provided, the default option is assumed.
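The INFO reply is line-oriented — `# Section` headers followed by `field:value` pairs — which is what makes it "simple to parse by computers". A minimal parser sketch (assumes you have captured the raw text, e.g. from `redis-cli info`):

```python
def parse_info(raw: str) -> dict:
    """Parse Redis INFO output into {section: {field: value}}."""
    sections, current = {}, None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):            # section header, e.g. "# Memory"
            current = line[1:].strip().lower()
            sections[current] = {}
        elif ":" in line and current is not None:
            field, _, value = line.partition(":")
            sections[current][field] = value  # values kept as strings
    return sections

sample = """# Server
redis_version:5.0.7
tcp_port:6379

# Memory
used_memory:854360
used_memory_human:834.34K
"""
info = parse_info(sample)
print(info["memory"]["used_memory"])   # -> 854360 (as a string)
```

A transcript of the full, unfiltered `info` output follows.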
23 | 24 | ``` 25 | 127.0.0.1:6379> info 26 | # Server 27 | redis_version:5.0.7 28 | redis_git_sha1:00000000 29 | redis_git_dirty:0 30 | redis_build_id:856bfff8b8208916 31 | redis_mode:standalone 32 | os:Linux 4.4.0-1101-aws x86_64 33 | arch_bits:64 34 | multiplexing_api:epoll 35 | atomicvar_api:atomic-builtin 36 | gcc_version:5.4.0 37 | process_id:14075 38 | run_id:70b7d0be00ff242faf077630e9a6d26cde862895 39 | tcp_port:6379 40 | uptime_in_seconds:173295 41 | uptime_in_days:2 42 | hz:10 43 | configured_hz:10 44 | lru_clock:6600031 45 | executable:/home/ubuntu/./redis-stable/src/redis-server 46 | config_file: 47 | 48 | # Clients 49 | connected_clients:1 50 | client_recent_max_input_buffer:2 51 | client_recent_max_output_buffer:0 52 | blocked_clients:0 53 | 54 | # Memory 55 | used_memory:854360 56 | used_memory_human:834.34K 57 | used_memory_rss:6197248 58 | used_memory_rss_human:5.91M 59 | used_memory_peak:854448 60 | used_memory_peak_human:834.42K 61 | used_memory_peak_perc:99.99% 62 | used_memory_overhead:841182 63 | used_memory_startup:791336 64 | used_memory_dataset:13178 65 | used_memory_dataset_perc:20.91% 66 | allocator_allocated:1264040 67 | allocator_active:1527808 68 | allocator_resident:11087872 69 | total_system_memory:1038770176 70 | total_system_memory_human:990.65M 71 | used_memory_lua:37888 72 | used_memory_lua_human:37.00K 73 | used_memory_scripts:0 74 | used_memory_scripts_human:0B 75 | number_of_cached_scripts:0 76 | maxmemory:0 77 | maxmemory_human:0B 78 | maxmemory_policy:noeviction 79 | allocator_frag_ratio:1.21 80 | allocator_frag_bytes:263768 81 | allocator_rss_ratio:7.26 82 | allocator_rss_bytes:9560064 83 | rss_overhead_ratio:0.56 84 | rss_overhead_bytes:-4890624 85 | mem_fragmentation_ratio:7.63 86 | mem_fragmentation_bytes:5384880 87 | mem_not_counted_for_evict:0 88 | mem_replication_backlog:0 89 | mem_clients_slaves:0 90 | mem_clients_normal:49694 91 | mem_aof_buffer:0 92 | mem_allocator:jemalloc-5.1.0 93 | active_defrag_running:0 94 | 
lazyfree_pending_objects:0 95 | 96 | # Persistence 97 | loading:0 98 | rdb_changes_since_last_save:33 99 | rdb_bgsave_in_progress:0 100 | rdb_last_save_time:1583657133 101 | rdb_last_bgsave_status:ok 102 | rdb_last_bgsave_time_sec:0 103 | rdb_current_bgsave_time_sec:-1 104 | rdb_last_cow_size:294912 105 | aof_enabled:0 106 | aof_rewrite_in_progress:0 107 | aof_rewrite_scheduled:0 108 | aof_last_rewrite_time_sec:-1 109 | aof_current_rewrite_time_sec:-1 110 | aof_last_bgrewrite_status:ok 111 | aof_last_write_status:ok 112 | aof_last_cow_size:0 113 | 114 | # Stats 115 | total_connections_received:15 116 | total_commands_processed:119 117 | instantaneous_ops_per_sec:0 118 | total_net_input_bytes:4113 119 | total_net_output_bytes:94625 120 | instantaneous_input_kbps:0.00 121 | instantaneous_output_kbps:0.00 122 | rejected_connections:0 123 | sync_full:0 124 | sync_partial_ok:0 125 | sync_partial_err:0 126 | expired_keys:4 127 | expired_stale_perc:0.00 128 | expired_time_cap_reached_count:0 129 | evicted_keys:0 130 | keyspace_hits:37 131 | keyspace_misses:7 132 | pubsub_channels:0 133 | pubsub_patterns:0 134 | latest_fork_usec:230 135 | migrate_cached_sockets:0 136 | slave_expires_tracked_keys:0 137 | active_defrag_hits:0 138 | active_defrag_misses:0 139 | active_defrag_key_hits:0 140 | active_defrag_key_misses:0 141 | 142 | # Replication 143 | role:master 144 | connected_slaves:0 145 | master_replid:267e878b36a386e95c3f1af56896a155684f6249 146 | master_replid2:0000000000000000000000000000000000000000 147 | master_repl_offset:0 148 | second_repl_offset:-1 149 | repl_backlog_active:0 150 | repl_backlog_size:1048576 151 | repl_backlog_first_byte_offset:0 152 | repl_backlog_histlen:0 153 | 154 | # CPU 155 | used_cpu_sys:103.856000 156 | used_cpu_user:114.548000 157 | used_cpu_sys_children:0.000000 158 | used_cpu_user_children:0.000000 159 | 160 | # Cluster 161 | cluster_enabled:0 162 | 163 | # Keyspace 164 | db0:keys=3,expires=0,avg_ttl=0 165 | 127.0.0.1:6379> 166 | ``` 167 
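Several of the memory fields above are derived from one another: `used_memory_dataset` is `used_memory` minus `used_memory_overhead`, and `used_memory_dataset_perc` is that dataset size as a share of net usage (`used_memory` minus `used_memory_startup`). Checking this against the numbers in the transcript above:

```python
# Values taken from the INFO transcript above.
used_memory = 854360
used_memory_overhead = 841182
used_memory_startup = 791336

used_memory_dataset = used_memory - used_memory_overhead
dataset_perc = 100 * used_memory_dataset / (used_memory - used_memory_startup)

print(used_memory_dataset)      # 13178  - matches used_memory_dataset
print(f"{dataset_perc:.2f}%")   # 20.91% - matches used_memory_dataset_perc
```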
| 168 | 169 | 170 | # DETAILS 171 | 172 | ``` 173 | redis_version: Version of the Redis server 174 | redis_git_sha1: Git SHA1 175 | redis_git_dirty: Git dirty flag 176 | redis_build_id: The build id 177 | redis_mode: The server's mode ("standalone", "sentinel" or "cluster") 178 | os: Operating system hosting the Redis server 179 | arch_bits: Architecture (32 or 64 bits) 180 | multiplexing_api: Event loop mechanism used by Redis 181 | atomicvar_api: Atomicvar API used by Redis 182 | gcc_version: Version of the GCC compiler used to compile the Redis server 183 | process_id: PID of the server process 184 | run_id: Random value identifying the Redis server (to be used by Sentinel and Cluster) 185 | tcp_port: TCP/IP listen port 186 | uptime_in_seconds: Number of seconds since Redis server start 187 | uptime_in_days: Same value expressed in days 188 | hz: The server's frequency setting 189 | lru_clock: Clock incrementing every minute, for LRU management 190 | executable: The path to the server's executable 191 | config_file: The path to the config file 192 | Here is the meaning of all fields in the clients section: 193 | connected_clients: Number of client connections (excluding connections from replicas) 194 | client_longest_output_list: longest output list among current client connections 195 | client_biggest_input_buf: biggest input buffer among current client connections 196 | blocked_clients: Number of clients pending on a blocking call (BLPOP, BRPOP, BRPOPLPUSH) 197 | Here is the meaning of all fields in the memory section: 198 | used_memory: Total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc) 199 | used_memory_human: Human readable representation of previous value 200 | used_memory_rss: Number of bytes that Redis allocated as seen by the operating system (a.k.a resident set size). 
This is the number reported by tools such as top(1) and ps(1) 201 | used_memory_rss_human: Human readable representation of previous value 202 | used_memory_peak: Peak memory consumed by Redis (in bytes) 203 | used_memory_peak_human: Human readable representation of previous value 204 | used_memory_peak_perc: The percentage of used_memory_peak out of used_memory 205 | used_memory_overhead: The sum in bytes of all overheads that the server allocated for managing its internal data structures 206 | used_memory_startup: Initial amount of memory consumed by Redis at startup in bytes 207 | used_memory_dataset: The size in bytes of the dataset (used_memory_overhead subtracted from used_memory) 208 | used_memory_dataset_perc: The percentage of used_memory_dataset out of the net memory usage (used_memory minus used_memory_startup) 209 | total_system_memory: The total amount of memory that the Redis host has 210 | total_system_memory_human: Human readable representation of previous value 211 | used_memory_lua: Number of bytes used by the Lua engine 212 | used_memory_lua_human: Human readable representation of previous value 213 | used_memory_scripts: Number of bytes used by cached Lua scripts 214 | used_memory_scripts_human: Human readable representation of previous value 215 | maxmemory: The value of the maxmemory configuration directive 216 | maxmemory_human: Human readable representation of previous value 217 | maxmemory_policy: The value of the maxmemory-policy configuration directive 218 | mem_fragmentation_ratio: Ratio between used_memory_rss and used_memory 219 | mem_allocator: Memory allocator, chosen at compile time 220 | active_defrag_running: Flag indicating if active defragmentation is active 221 | lazyfree_pending_objects: The number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB and FLUSHALL with the ASYNC option) 222 | ``` 223 | 224 | ## CLIENT LIST 225 | 226 | 227 | The CLIENT LIST command returns information and statistics about 
the client connections to the server in a mostly human-readable format 228 | 229 | 230 | ``` 231 | client list 232 | id=17 addr=127.0.0.1:36238 fd=8 name= age=1086 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 events=r cmd=client 233 | 127.0.0.1:6379> 234 | ``` 235 | 236 | ## Redis CLI 237 | 238 | - Monitoring tool to show continuous stats about a Redis server 239 | - Scanning a Redis database for very large keys 240 | - Key space scanner with pattern matching 241 | - Acting as a Pub/Sub client to subscribe to channels 242 | - Monitoring the commands executed in a Redis instance 243 | - Checking the latency of a Redis server in different ways 244 | - Checking the scheduler latency of the local computer 245 | - Transferring RDB backups from a remote Redis server locally 246 | - Acting as a Redis slave for showing what a slave receives 247 | - Simulating LRU workloads for showing stats about key hits 248 | - A client for the Lua debugger 249 | 250 | 251 | ## Continuous stats mode 252 | 253 | - A new line is printed every second with useful information and the difference from the previous data point 254 | - Use -i to control the frequency 255 | 256 | ``` 257 | ubuntu@ip-172-31-25-81:~$ redis-cli --stat 258 | ------- data ------ --------------------- load -------------------- - child - 259 | keys mem clients blocked requests connections 260 | 7 834.58K 1 0 132 (+0) 16 261 | 7 834.58K 1 0 133 (+1) 16 262 | 7 834.58K 1 0 134 (+1) 16 263 | 7 834.58K 1 0 135 (+1) 16 264 | 7 834.58K 1 0 136 (+1) 16 265 | 7 834.58K 1 0 137 (+1) 16 266 | 7 834.58K 1 0 138 (+1) 16 267 | 7 834.58K 1 0 139 (+1) 16 268 | 7 834.58K 1 0 140 (+1) 16 269 | 7 834.58K 1 0 141 (+1) 16 270 | 7 834.58K 1 0 142 (+1) 16 271 | 7 834.58K 1 0 143 (+1) 16 272 | 7 834.58K 1 0 144 (+1) 16 273 | 7 834.58K 1 0 145 (+1) 16 274 | 7 834.58K 1 0 146 (+1) 16 275 | 7 834.58K 1 0 147 (+1) 16 276 | 7 834.58K 1 0 148 (+1) 16 277 | 7 834.58K 1 0 149 (+1) 16 278 | 7 834.58K 1 0 150 (+1) 16 279
| 7 834.58K 1 0 151 (+1) 16 280 | ------- data ------ --------------------- load -------------------- - child - 281 | keys mem clients blocked requests connections 282 | 7 834.58K 1 0 152 (+1) 16 283 | 7 834.58K 1 0 153 (+1) 16 284 | 7 834.58K 1 0 154 (+1) 16 285 | 286 | ``` 287 | 288 | ## MONITOR 289 | 290 | Monitoring commands executed in Redis 291 | 292 | ``` 293 | ubuntu@ip-172-31-25-81:~$ redis-cli monitor 294 | OK 295 | ``` 296 | 297 | For more information on usage and the API, see http://redis.io/topics/rediscli 298 | 299 | 300 | 301 | 302 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # RedisPlanet - A Curated List of Redis Resources 2 | 3 | RedisPlanet is a curated list of popular Redis resources available over the internet. This repository is maintained by the Collabnix community. 4 | 5 | Have Questions? Join us over [Slack](https://launchpass.com/collabnix) and get a chance to be a part of 5300+ DevOps enthusiasts. 6 | Want to contribute? Click here and get your favorite tool added.
7 | 8 | 9 | ## Creating Redis database 10 | 11 | ## Cloud 12 | 13 | - [Redis Enterprise Cloud](https://developer.redislabs.com/create/rediscloud) 14 | - [Heroku](https://developer.redislabs.com/create/heroku) 15 | - [Azure Cache](https://developer.redislabs.com/create/azure) 16 | - [Google Cloud](https://developer.redislabs.com/create/gcp) 17 | - [AWS](https://developer.redislabs.com/create/aws) 18 | 19 | ## Local 20 | 21 | - [Docker](https://developer.redislabs.com/create/docker/) 22 | - [Google Kubernetes Engine](https://developer.redislabs.com/create/kubernetes/) 23 | - [Mac using Homebrew](https://developer.redislabs.com/create/homebrew/) 24 | - [Microsoft Windows 10](https://developer.redislabs.com/create/windows) 25 | - [Build from Source](https://developer.redislabs.com/create/from-source/) 26 | - [RedisMod](https://developer.redislabs.com/explore/redismod) 27 | 28 | 29 | ## Developing with Redis Clients 30 | 31 | - [Java](https://developer.redislabs.com/develop/java/) 32 | - [Java based application on Heroku using Redis](https://developer.redislabs.com/howtos/herokujava) 33 | - [Python](https://developer.redislabs.com/develop/python/) 34 | - [Python based application on Heroku using Redis](https://developer.redislabs.com/howtos/herokupython) 35 | - [C](https://developer.redislabs.com/develop/C/) 36 | - [NodeJS](https://developer.redislabs.com/develop/nodejs/) 37 | - [NodeJS based application on Heroku using Redis](https://developer.redislabs.com/howtos/herokunodejs) 38 | - [Ruby](https://developer.redislabs.com/develop/ruby/) 39 | - [Php](https://developer.redislabs.com/develop/php/) 40 | 41 | ## Extensibility via Redis Modules 42 | 43 | ### RediSearch 44 | 45 | - [Getting Started](https://developer.redislabs.com/howtos/redisearch) 46 | - [Building Movie Database app using RediSearch](https://developer.redislabs.com/howtos/moviesdatabase/getting-started) 47 | 48 | 49 | ### RedisJSON 50 | 51 | - [Getting 
Started](https://developer.redislabs.com/howtos/redisjson) 52 | - [How to build a Shopping cart app using NodeJS and RedisJSON](https://developer.redislabs.com/howtos/shoppingcart) 53 | - [HackerNews Clone using RedisJSON](https://developer.redislabs.com/howtos/hackernews) 54 | 55 | 56 | ### RedisGears 57 | 58 | - [Getting Started](https://developer.redislabs.com/howtos/redisgears) 59 | - [How to build a Fraud Detection System using RedisGears and RedisBloom](https://developer.redislabs.com/howtos/frauddetection) 60 | - [Building a Pipeline for Natural Language Processing using RedisGears](https://developer.redislabs.com/howtos/nlp) 61 | 62 | ### RedisGraph 63 | 64 | - [Getting Started](https://developer.redislabs.com/howtos/redisgraph) 65 | - [Building Movies database app using RedisGraph and NodeJS](https://developer.redislabs.com/howtos/redisgraphmovies) 66 | 67 | ### RedisBloom 68 | 69 | - [Getting Started](https://developer.redislabs.com/howtos/redisbloom) 70 | - [How to build a Fraud Detection System using RedisGears and RedisBloom](https://developer.redislabs.com/howtos/frauddetection) 71 | 72 | ### RedisTimeSeries 73 | 74 | - [Getting Started](https://developer.redislabs.com/howtos/redistimeseries) 75 | - [How to Manage IoT Sensor data using RedisTimeSeries](https://redislabs.com/blog/how-to-manage-real-time-iot-sensor-data-in-redis/) 76 | 77 | ### RedisAI 78 | 79 | - [Getting Started](https://developer.redislabs.com/howtos/redisai) 80 | 81 | 82 | ### RedisInsight - Redis GUI 83 | 84 | - [Getting Started](https://developer.redislabs.com/explore/redisinsight/getting-started) 85 | - [Visualize Redis database keys using RedisInsight Browser Tool](https://developer.redislabs.com/explore/redisinsight/browser) 86 | - [Debugging Redis using RedisInsight Slowlog Tool](https://developer.redislabs.com/explore/redisinsight/slowlog) 87 | - [Reduce Redis Memory usage using RedisInsight Memory Analyzer 
Tool](https://developer.redislabs.com/explore/redisinsight/memoryanalyzer) 88 | - [Manage Your Redis Cluster using RedisInsight Cluster Management Tool](https://developer.redislabs.com/explore/redisinsight/cluster) 89 | - [Using Redis Streams](https://developer.redislabs.com/explore/redisinsight/streams) 90 | - [Analyze Your Redis commands using RedisInsight Profiler tool](https://developer.redislabs.com/explore/redisinsight/profiler) 91 | - [Query, Visualize and Manipulate Graphs using RedisGraph Browser Tool](https://developer.redislabs.com/explore/redisinsight/redisgraph) 92 | - [Writing Your Serverless function using RedisGears Browser Tool](https://developer.redislabs.com/explore/redisinsight/redisgears) 93 | - [Managing time-series data using RedisTimeSeries Browser Tool](https://developer.redislabs.com/explore/redisinsight/redistimeseries) 94 | - [Unified Search and Analytics using RediSearch Browser Tool](https://developer.redislabs.com/explore/redisinsight/redisearch) 95 | 96 | ### Redis Commands 97 | 98 | - [Concept](https://github.com/collabnix/redisplanet/blob/master/datastore/intro.md#data-store) 99 | - [Key & Value](https://github.com/collabnix/redisplanet/blob/master/datastore/intro.md#key--value) 100 | - [Strings](https://github.com/collabnix/redisplanet/blob/master/datastore/strings/README.md) 101 | - [SET vs GET](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#set-vs-get) 102 | - [Python Script to use SET operation](https://github.com/collabnix/redisplanet/blob/master/python/push.py) 103 | - [SET](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#set-command) 104 | - [Altering & Querying](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#altering--querying-redis-keyspace) 105 | - [EXISTS](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-1) 106 | - [DEL](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-1) 107 | - [Strings 
with Integer Values](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#string-with-integer-values---incr) 108 | - [INCR](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-2) 109 | - [DECR](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-2) 110 | - [INCRBY](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-2) 111 | - [DECRBY](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-2) 112 | - [MSET vs MGET](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#mset-and-mget) 113 | - [Keys Expiration](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#keys-expiration) 114 | - [EXPIRE](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-4) 115 | - [TTL](https://github.com/collabnix/redisplanet/tree/master/datastore/strings#example-4) 116 | 117 | - [Lists](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#lists) 118 | - [Basic API](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#basic-api-of-lists) 119 | - [LPUSH](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#example) 120 | - [RPUSH](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#example) 121 | - [LRANGE](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#example) 122 | - [LPOP](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#rpop-and-lpop) 123 | - [RPOP](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#rpop-and-lpop) 124 | - [LTRIM](https://github.com/collabnix/redisplanet/blob/master/datastore/lists/README.md#capped-lists-using-ltrim) 125 | - [Hashes](https://github.com/collabnix/redisplanet/blob/master/datastore/hashes/README.md#hashes) 126 | -
[HSET](https://github.com/collabnix/redisplanet/blob/master/datastore/hashes/README.md#hset) 127 | - [Python Script to push hash keys into Redis Database and retrieving it seamlessly](https://github.com/collabnix/redisplanet/blob/master/python/push-catalog.py) 128 | - [HGET](https://github.com/collabnix/redisplanet/blob/master/datastore/hashes/README.md#hget) 129 | - [HINCRBY](https://github.com/collabnix/redisplanet/blob/master/datastore/hashes/README.md#hincrby) 130 | - [HGETALL](https://github.com/collabnix/redisplanet/blob/master/datastore/hashes/README.md#hgetall) 131 | 132 | - [Sets](https://github.com/collabnix/redisplanet/blob/master/datastore/set/README.md) 133 | - [SUNION](https://github.com/collabnix/redisplanet/blob/master/datastore/set/README.md#sunion) 134 | - [SINTER](https://github.com/collabnix/redisplanet/blob/master/datastore/set/README.md#sinter) 135 | - [SDIFF](https://github.com/collabnix/redisplanet/blob/master/datastore/set/README.md#sdiff) 136 | 137 | 138 | - [Sorted Set](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md) 139 | - [Sorted Set Operations](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#sorted-set-operations) 140 | - [ZADD](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#sorted-set-operations) 141 | - [ZRANGE](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#sorted-set-operations) 142 | 143 | - [Operate on Ranges](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#operate-on-ranges) 144 | - [ZRANGEBYSCORE](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#example) 145 | - [ZREMRANGEBYSCORE](https://github.com/collabnix/redisplanet/blob/master/datastore/sorted-sets/README.md#remove-elements-in-range) 146 | 147 | 148 | - [SCAN](https://github.com/collabnix/redisplanet/blob/master/datastore/scan/README.md) 149 | - 
[INFO](https://github.com/collabnix/redisplanet/tree/master/advancedapi#info) 150 | - [CLIENT LIST](https://github.com/collabnix/redisplanet/tree/master/advancedapi#client-list) 151 | - [STAT](https://github.com/collabnix/redisplanet/blob/master/advancedapi/README.md#continuous-stats-mode) 152 | - [MONITOR](https://github.com/collabnix/redisplanet/blob/master/advancedapi/README.md#monitor) 153 | - [Managing a Redis Stream](https://medium.com/@sydcanem/managing-a-redis-stream-b8c912e06fa9) 154 | 155 | ### Advanced API Implementation 156 | 157 | - [Bitmaps](https://github.com/collabnix/redisplanet/blob/master/advancedapi/bitmap/README.md) 158 | - [Bit Fields](https://github.com/collabnix/redisplanet/blob/master/advancedapi/bitfield/README.md) 159 | - [HyperLogLogs](https://github.com/collabnix/redisplanet/blob/master/advancedapi/HyperLogLog/README.md) 160 | - [Geospatial Indexes](https://github.com/collabnix/redisplanet/blob/master/advancedapi/geospatial/README.md) 161 | - [Transactions](https://github.com/collabnix/redisplanet/blob/master/advancedapi/transactions/README.md) 162 | - [Pub Sub](https://github.com/collabnix/redisplanet/blob/master/advancedapi/pubsub/README.md) 163 | - [Lua Scripts](https://github.com/collabnix/redisplanet/blob/master/advancedapi/luascripts/README.md) 164 | - [Modules](https://github.com/collabnix/redisplanet/blob/master/advancedapi/modules/README.md) 165 | 166 | 167 | ### Use cases 168 | 169 | - [How to build a Real-Time Leaderboard app Using Redis](https://developer.redislabs.com/howtos/leaderboard) 170 | - [How to build a Rate Limiter using Redis](https://developer.redislabs.com/howtos/ratelimiting) 171 | - [How to cache REST API responses Using Redis & NodeJS](https://developer.redislabs.com/howtos/caching) 172 | - [How to build a Chat application using Redis](https://developer.redislabs.com/howtos/chatapp) 173 | - [Building an Analytics dashboard app using Redis](https://developer.redislabs.com/howtos/analytics) 174 | - [Building a 
Popup Store in Redis](https://developer.redislabs.com/howtos/popupstore) 175 | 176 | 177 | -------------------------------------------------------------------------------- /Redis-conf/redis-as-lru-cache/README.md: -------------------------------------------------------------------------------- 1 | # Using Redis as an LRU Cache 2 | 3 | When Redis is used as a cache, it is often handy to let it automatically evict old data as you add new data. This behavior is very well known in the developer community, since it is the default behavior of the popular memcached system. 4 | 5 | LRU is actually only one of the supported eviction methods. This page covers the more general topic of the Redis maxmemory directive, which is used to limit the memory usage to a fixed amount, and it also covers in depth the LRU algorithm used by Redis, which is actually an approximation of exact LRU. 6 | 7 | Starting with Redis version 4.0, a new LFU (Least Frequently Used) eviction policy was introduced. 8 | 9 | ### Maxmemory configuration directive 10 | 11 | The maxmemory configuration directive is used to configure Redis to use a specified amount of memory for the data set. It is possible to set the configuration directive using the redis.conf file, or later using the CONFIG SET command at runtime. 12 | 13 | Example: In order to configure a memory limit of 100 megabytes, the following directive can be used inside the redis.conf file. 14 | 15 | ``` 16 | maxmemory 100mb 17 | ``` 18 | 19 | Setting maxmemory to zero results in no memory limit. This is the default behavior for 64-bit systems, while 32-bit systems use an implicit memory limit of 3GB. 20 | 21 | When the specified amount of memory is reached, it is possible to select among different behaviors, called policies. Redis can just return errors for commands that could result in more memory being used, or it can evict some old data in order to return to the specified limit every time new data is added.
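The same limit can also be set and inspected at runtime through CONFIG SET and CONFIG GET, with no server restart. A minimal sketch, assuming a local instance listening on the default port (Redis reports the value back in bytes):

```
$ redis-cli CONFIG SET maxmemory 100mb
OK
$ redis-cli CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
```

A limit applied this way is not written back to redis.conf, so also add the directive to the file if the setting should survive a restart.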
22 | 23 | ### Eviction policies 24 | 25 | The exact behavior Redis follows when the maxmemory limit is reached is configured using the maxmemory-policy configuration directive. 26 | 27 | The following policies are available: 28 | 29 | - noeviction: return errors when the memory limit is reached and the client is trying to execute commands that could result in more memory being used (most write commands, but DEL and a few more exceptions). 30 | - allkeys-lru: evict keys by trying to remove the least recently used (LRU) keys first, in order to make space for the new data added. 31 | - volatile-lru: evict keys by trying to remove the least recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added. 32 | - allkeys-random: evict keys randomly in order to make space for the new data added. 33 | - volatile-random: evict keys randomly in order to make space for the new data added, but only evict keys with an expire set. 34 | - volatile-ttl: evict keys with an expire set, and try to evict keys with a shorter time to live (TTL) first, in order to make space for the new data added. 35 | 36 | The policies volatile-lru, volatile-random and volatile-ttl behave like noeviction if there are no keys to evict matching the prerequisites. 37 | 38 | Picking the right eviction policy depends on the access pattern of your application; however, you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output in order to tune your setup. 39 | 40 | In general, as a rule of thumb: 41 | 42 | - Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests, that is, you expect that a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.
43 | 44 | - Use the allkeys-random policy if you have a cyclic access pattern where all the keys are scanned continuously, or when you expect the distribution to be uniform (all elements likely accessed with the same probability). 45 | 46 | - Use the volatile-ttl policy if you want to provide hints to Redis about which keys are good candidates for expiration, by using different TTL values when you create your cache objects. 47 | 48 | - The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance both for caching and for a set of persistent keys. However, it is usually a better idea to run two Redis instances to solve such a problem. 49 | 50 | It is also worth noting that setting an expire on a key costs memory, so using a policy like allkeys-lru is more memory efficient, since there is no need to set an expire for the key to be evicted under memory pressure. 51 | 52 | 53 | ### How the eviction process works 54 | 55 | It is important to understand that the eviction process works like this: 56 | 57 | - A client runs a new command, resulting in more data added. 58 | - Redis checks the memory usage, and if it is greater than the maxmemory limit, it evicts keys according to the policy. 59 | - A new command is executed, and so forth. 60 | 61 | So we continuously cross the boundary of the memory limit, going over it and then evicting keys to return under the limit. 62 | 63 | If a command results in a lot of memory being used (like a big set intersection stored into a new key), the memory limit can be surpassed by a noticeable amount for some time. 64 | 65 | ## Approximated LRU algorithm 66 | 67 | The Redis LRU algorithm is not an exact implementation. This means that Redis is not able to pick the best candidate for eviction, that is, the key whose last access is the furthest in the past.
Instead it tries to run an approximation of the LRU algorithm, by sampling a small number of keys and evicting the best one among the sampled keys (the one with the oldest access time). 68 | 69 | However, since Redis 3.0 the algorithm has been improved to also maintain a pool of good candidates for eviction. This improved the performance of the algorithm, making it able to approximate more closely the behavior of a real LRU algorithm. 70 | 71 | What is important about the Redis LRU algorithm is that you are able to tune the precision of the algorithm by changing the number of samples to check for every eviction. This parameter is controlled by the following configuration directive: 72 | maxmemory-samples 5 73 | 74 | The reason Redis does not use a true LRU implementation is that it costs more memory. However, the approximation is virtually equivalent for the application using Redis. The following is a graphical comparison of how the LRU approximation used by Redis compares with true LRU. 75 | 76 | ## LRU comparison 77 | 78 | The test to generate the above graphs filled a Redis server with a given number of keys. The keys were accessed from the first to the last, so that the first keys are the best candidates for eviction under an LRU algorithm. Later, 50% more keys are added, in order to force half of the old keys to be evicted. 79 | 80 | You can see three kinds of dots in the graphs, forming three distinct bands. 81 | 82 | - The light gray band shows objects that were evicted. 83 | - The gray band shows objects that were not evicted. 84 | - The green band shows objects that were added. 85 | 86 | In a theoretical LRU implementation we expect that, among the old keys, the first half will be expired. The Redis LRU algorithm will instead only probabilistically expire the older keys. 87 | 88 | As you can see, Redis 3.0 does a better job with 5 samples than Redis 2.8; however, most objects that are among the most recently accessed are still retained by Redis 2.8.
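The sampling approach just described can be sketched in a few lines of Python. This is an illustrative toy, not the actual Redis internals: the `store` mapping of key to last-access time and the key names are assumptions made for the example.

```python
import random

def pick_eviction_victim(store, maxmemory_samples=5, rng=random):
    """Approximate LRU the way Redis does: sample a few keys at random
    and pick the one with the oldest recorded access time."""
    sampled = rng.sample(list(store), min(maxmemory_samples, len(store)))
    # The best candidate in the sample is the least recently used key.
    return min(sampled, key=lambda key: store[key])

# store maps key -> last-access time; k0 was accessed longest ago.
store = {f"k{i}": i for i in range(100)}

# Sampling every key degenerates to exact LRU and always picks k0;
# a small sample only picks an approximately old key.
print(pick_eviction_victim(store, maxmemory_samples=len(store)))  # k0
print(pick_eviction_victim(store, maxmemory_samples=5))
```

Raising the sample size trades CPU for precision, which is exactly what the maxmemory-samples directive controls.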
Using a sample size of 10 in Redis 3.0, the approximation is very close to the theoretical performance of true LRU. 89 | 90 | Note that LRU is just a model to predict how likely it is that a given key will be accessed in the future. Moreover, if your data access pattern closely resembles a power law, most of the accesses will be in the set of keys that the approximated LRU algorithm is able to handle well. 91 | 92 | In simulations we found that, using a power-law access pattern, the difference between true LRU and the Redis approximation was minimal or non-existent. 93 | 94 | However, you can raise the sample size to 10, at the cost of some additional CPU usage, in order to closely approximate true LRU, and check if this makes a difference in your cache miss rate. 95 | It is very simple to experiment in production with different values for the sample size using the CONFIG SET maxmemory-samples command. 96 | 97 | ### The new LFU mode 98 | 99 | Starting with Redis 4.0, a new Least Frequently Used eviction mode is available. This mode may work better (provide a better hits/misses ratio) in certain cases, since with LFU Redis tries to track the frequency of access of items, so that the ones used rarely are evicted while the ones used often have a higher chance of remaining in memory. 100 | 101 | If you think about LRU, an item that was recently accessed but is actually almost never requested will not be expired, so the risk is to evict a key that has a higher chance of being requested in the future. LFU does not have this problem, and in general should adapt better to different access patterns. 102 | 103 | To configure the LFU mode, the following policies are available: 104 | 105 | - volatile-lfu: evict using approximated LFU among the keys with an expire set. 106 | - allkeys-lfu: evict any key using approximated LFU.
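For example, to cap memory at 100 megabytes and evict with approximated LFU across all keys, a redis.conf fragment like the following can be used (the 100mb figure is only an example value):

```
maxmemory 100mb
maxmemory-policy allkeys-lfu
```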
107 | 108 | LFU is approximated like LRU: it uses a probabilistic counter, called a Morris counter, to estimate the object access frequency using just a few bits per object, combined with a decay period so that the counter is reduced over time: at some point we no longer want to consider keys as frequently accessed, even if they were in the past, so that the algorithm can adapt to a shift in the access pattern. 109 | 110 | This information is sampled similarly to what happens for LRU (as explained in the previous section of this documentation) in order to select a candidate for eviction. 111 | 112 | However, unlike LRU, LFU has certain tunable parameters: for instance, how fast should a frequent item drop in rank if it is no longer accessed? It is also possible to tune the Morris counter's range in order to better adapt the algorithm to specific use cases. 113 | 114 | By default Redis 4.0 is configured to: 115 | - Saturate the counter at around one million requests. 116 | - Decay the counter every one minute. 117 | 118 | These should be reasonable values and were tested experimentally, but the user may want to play with these configuration settings in order to pick optimal values. 119 | 120 | Instructions about how to tune these parameters can be found inside the example redis.conf file in the source distribution, but briefly, they are: 121 | 122 | ``` 123 | lfu-log-factor 10 124 | lfu-decay-time 1 125 | ``` 126 | 127 | The decay time is the obvious one: it is the number of minutes after which a counter should be decayed, when it is sampled and found to be older than that value. A special value of 0 means: always decay the counter every time it is scanned; this is rarely useful. 128 | The counter logarithm factor changes how many hits are needed in order to saturate the frequency counter, which is just in the range 0-255. The higher the factor, the more accesses are needed in order to reach the maximum.
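The counter behavior can be simulated in Python. The snippet below is an illustrative re-implementation of the increment logic, not the actual C code; the constant LFU_INIT_VAL = 5 mirrors the frequency Redis assigns to newly created keys, and the rest is a simplification.

```python
import random

LFU_INIT_VAL = 5  # frequency assigned to newly created keys in Redis

def lfu_log_incr(counter, lfu_log_factor=10, rng=random):
    """Probabilistically bump an 8-bit frequency counter: the larger the
    current value and the log factor, the less likely the increment."""
    if counter >= 255:  # the counter saturates at 255
        return 255
    baseval = max(counter - LFU_INIT_VAL, 0)
    if rng.random() < 1.0 / (baseval * lfu_log_factor + 1):
        counter += 1
    return counter

# Feed the same number of hits to two counters with different factors.
rng = random.Random(42)
low_factor, high_factor = LFU_INIT_VAL, LFU_INIT_VAL
for _ in range(10_000):
    low_factor = lfu_log_incr(low_factor, lfu_log_factor=1, rng=rng)
    high_factor = lfu_log_incr(high_factor, lfu_log_factor=100, rng=rng)

# The higher factor needs far more hits to climb, so it stays lower.
print(low_factor, high_factor)
```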
The lower the factor, the better the resolution of the counter for low accesses, according to the following table: 129 | 130 | +--------+------------+------------+------------+------------+------------+ 131 | | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits | 132 | +--------+------------+------------+------------+------------+------------+ 133 | | 0 | 104 | 255 | 255 | 255 | 255 | 134 | +--------+------------+------------+------------+------------+------------+ 135 | | 1 | 18 | 49 | 255 | 255 | 255 | 136 | +--------+------------+------------+------------+------------+------------+ 137 | | 10 | 10 | 18 | 142 | 255 | 255 | 138 | +--------+------------+------------+------------+------------+------------+ 139 | | 100 | 8 | 11 | 49 | 143 | 255 | 140 | +--------+------------+------------+------------+------------+------------+ 141 | 142 | So basically the factor is a trade-off between better distinguishing items with low accesses vs. distinguishing items with high accesses. More information is available in the self-documenting comments of the example redis.conf file. 143 | Since LFU is a new feature, we'll appreciate any feedback about how it performs in your use case compared to LRU. 144 | -------------------------------------------------------------------------------- /kubernetes/gke/README.md: -------------------------------------------------------------------------------- 1 | # Setting up a Multi-Node Redis Cluster on Google Kubernetes Engine using Docker Desktop for Windows 2 | 3 | Redis is a powerful tool for data storage and caching. Redis Cluster extends the functionality by offering sharding and correlated performance benefits, linear scaling, and higher availability because of how Redis stores data. The data is automatically split among multiple nodes, which allows operations to continue, even when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.
4 | 5 | ## Prerequisites 6 | 7 | - Install the [Google Cloud SDK](https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe) 8 | 9 | ``` 10 | gcloud init 11 | ``` 12 | 13 | 14 | - Install Git using Chocolatey 15 | 16 | ``` 17 | Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')) 18 | ``` 19 | 20 | ``` 21 | choco install git 22 | ``` 23 | 24 | - Install Docker Desktop for Windows 25 | 26 | - Create a 3-node GKE cluster 27 | 28 | ``` 29 | gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw 30 | ``` 31 | 32 | 33 | ``` 34 | PS C:\Users\Ajeet_Raina> kubectl config get-contexts 35 | CURRENT NAME CLUSTER AUTHINFO NAMESPACE 36 | * gke_lofty-tea-249310_asia-east1-a_k8s-lab1 gke_lofty-tea-249310_asia-east1-a_k8s-lab1 gke_lofty-tea-249310_asia-east1-a_k8s-lab1 37 | PS C:\Users\Ajeet_Raina> kubectl get nodes 38 | NAME STATUS ROLES AGE VERSION 39 | gke-k8s-lab1-default-pool-f1fae040-9vd9 Ready 64m v1.13.11-gke.23 40 | gke-k8s-lab1-default-pool-f1fae040-ghf5 Ready 64m v1.13.11-gke.23 41 | gke-k8s-lab1-default-pool-f1fae040-z0rf Ready 64m v1.13.11-gke.23 42 | PS C:\Users\Ajeet_Raina> 43 | ``` 44 | 45 | 46 | ## Cloning this Repo 47 | 48 | ``` 49 | $ git clone https://github.com/collabnix/redis 50 | cd redis/kubernetes/gke/ 51 | 52 | ``` 53 | 54 | 55 | ``` 56 | $ kubectl apply -f redis-statefulset.yaml 57 | configmap/redis-cluster created 58 | statefulset.apps/redis-cluster created 59 | ``` 60 | 61 | ``` 62 | $ kubectl apply -f redis-svc.yaml 63 | service/redis-cluster created 64 | ``` 65 | 66 | ``` 67 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get po 68 | NAME READY STATUS RESTARTS AGE 69 | redis-cluster-0 1/1 Running 0 92s 70 | redis-cluster-1 1/1 Running 0
64s 71 | redis-cluster-2 1/1 Running 0 44s 72 | redis-cluster-3 1/1 Running 0 25s 73 | redis-cluster-4 0/1 ContainerCreating 0 12s 74 | ``` 75 | 76 | ``` 77 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pvc 78 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 79 | data-redis-cluster-0 Bound pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 2m15s 80 | data-redis-cluster-1 Bound pvc-4564abb9-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 107s 81 | data-redis-cluster-2 Bound pvc-51566907-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 87s 82 | data-redis-cluster-3 Bound pvc-5c8391a0-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 68s 83 | data-redis-cluster-4 Bound pvc-64a340d3-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 55s 84 | data-redis-cluster-5 Bound pvc-71024053-4af2-11ea-9222-42010a8c00e8 1Gi RWO standard 34s 85 | 86 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke> 87 | ``` 88 | 89 | ``` 90 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl describe pods redis-cluster-0 91 | Name: redis-cluster-0 92 | Namespace: default 93 | Priority: 0 94 | Node: gke-k8s-lab1-default-pool-f1fae040-9vd9/10.140.0.28 95 | Start Time: Sun, 09 Feb 2020 09:41:14 +0530 96 | Labels: app=redis-cluster 97 | controller-revision-hash=redis-cluster-fd959c7f4 98 | statefulset.kubernetes.io/pod-name=redis-cluster-0 99 | Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container redis 100 | Status: Running 101 | IP: 10.12.2.3 102 | Controlled By: StatefulSet/redis-cluster 103 | Containers: 104 | redis: 105 | Container ID: docker://6c8c32c785afabff22323cf77103ae3df29a29580863cdfe8c46db12883d87eb 106 | Image: redis:5.0.1-alpine 107 | Image ID: docker-pullable://redis@sha256:6f1cbe37b4b486fb28e2b787de03a944a47004b7b379d0f8985760350640380b 108 | Ports: 6379/TCP, 16379/TCP 109 | Host Ports: 0/TCP, 0/TCP 110 | Command: 111 | /conf/update-node.sh 112 | redis-server 113 | /conf/redis.conf 114 | State: Running 115 | Started: 
Sun, 09 Feb 2020 09:41:38 +0530 116 | Ready: True 117 | Restart Count: 0 118 | Requests: 119 | cpu: 100m 120 | Environment: 121 | POD_IP: (v1:status.podIP) 122 | Mounts: 123 | /conf from conf (rw) 124 | /data from data (rw) 125 | /var/run/secrets/kubernetes.io/serviceaccount from default-token-m9xql (ro) 126 | Conditions: 127 | Type Status 128 | Initialized True 129 | Ready True 130 | ContainersReady True 131 | PodScheduled True 132 | Volumes: 133 | data: 134 | Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) 135 | ClaimName: data-redis-cluster-0 136 | ReadOnly: false 137 | conf: 138 | Type: ConfigMap (a volume populated by a ConfigMap) 139 | Name: redis-cluster 140 | Optional: false 141 | default-token-m9xql: 142 | Type: Secret (a volume populated by a Secret) 143 | SecretName: default-token-m9xql 144 | Optional: false 145 | QoS Class: Burstable 146 | Node-Selectors: 147 | Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s 148 | node.kubernetes.io/unreachable:NoExecute for 300s 149 | Events: 150 | Type Reason Age From Message 151 | ---- ------ ---- ---- ------- 152 | Warning FailedScheduling 4m13s (x3 over 4m16s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times) 153 | Normal Scheduled 4m13s default-scheduler Successfully assigned default/redis-cluster-0 to gke-k8s-lab1-default-pool-f1fae040-9vd9 154 | Normal SuccessfulAttachVolume 4m8s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-34bdf05b-4af2-11ea-9222-42010a8c00e8" 155 | Normal Pulling 3m55s kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9 pulling image "redis:5.0.1-alpine" 156 | Normal Pulled 3m49s kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9 Successfully pulled image "redis:5.0.1-alpine" 157 | Normal Created 3m49s kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9 Created container 158 | Normal Started 3m49s kubelet, gke-k8s-lab1-default-pool-f1fae040-9vd9 Started container 159 | ``` 160 | 161 | 
162 | ``` 163 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl apply -f redis-svc.yaml 164 | service/redis-cluster created 165 | 166 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc 167 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 168 | kubernetes ClusterIP 10.15.240.1 443/TCP 28m 169 | redis-cluster ClusterIP 10.15.248.54 6379/TCP,16379/TCP 5s 170 | 171 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke> 172 | ``` 173 | 174 | ``` 175 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379' 176 | '10.12.2.3:6379'10.12.0.6:6379'10.12.1.7:6379'10.12.2.4:6379'10.12.1.8:6379'10.12.2.5:6379' 177 | ``` 178 | 179 | ``` 180 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli --cluster create 10.12.2.3:6379 10.12.0.6:6379 10.12.1.7:6379 10.12.2.4:6379 10.12.1.8:6379 10.12.2.5:6379 --cluster-replicas 1 181 | >>> Performing hash slots allocation on 6 nodes... 
182 | Master[0] -> Slots 0 - 5460 183 | Master[1] -> Slots 5461 - 10922 184 | Master[2] -> Slots 10923 - 16383 185 | Adding replica 10.12.2.4:6379 to 10.12.2.3:6379 186 | Adding replica 10.12.1.8:6379 to 10.12.0.6:6379 187 | Adding replica 10.12.2.5:6379 to 10.12.1.7:6379 188 | M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 10.12.2.3:6379 189 | slots:[0-5460] (5461 slots) master 190 | M: bf11440a398e88ad7bfc167dd3219a4f594ffa39 10.12.0.6:6379 191 | slots:[5461-10922] (5462 slots) master 192 | M: c82e231121118c731194d31ddc20d848953174e7 10.12.1.7:6379 193 | slots:[10923-16383] (5461 slots) master 194 | S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166 10.12.2.4:6379 195 | replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 196 | S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e 10.12.1.8:6379 197 | replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39 198 | S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7 10.12.2.5:6379 199 | replicates c82e231121118c731194d31ddc20d848953174e7 200 | Can I set the above configuration? (type 'yes' to accept): yes 201 | >>> Nodes configuration updated 202 | >>> Assign a different config epoch to each node 203 | >>> Sending CLUSTER MEET messages to join the cluster 204 | Waiting for the cluster to join 205 | ..... 
206 | >>> Performing Cluster Check (using node 10.12.2.3:6379) 207 | M: 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 10.12.2.3:6379 208 | slots:[0-5460] (5461 slots) master 209 | 1 additional replica(s) 210 | S: 63abc45d61a9d9113db0c57f7fe0596da4c83a6e 10.12.1.8:6379 211 | slots: (0 slots) slave 212 | replicates bf11440a398e88ad7bfc167dd3219a4f594ffa39 213 | M: c82e231121118c731194d31ddc20d848953174e7 10.12.1.7:6379 214 | slots:[10923-16383] (5461 slots) master 215 | 1 additional replica(s) 216 | S: 10c2bc0cc626725b5a1afdc5e68142610e498fd7 10.12.2.5:6379 217 | slots: (0 slots) slave 218 | replicates c82e231121118c731194d31ddc20d848953174e7 219 | S: 707bb247a2ecc3fd36feb3c90cc58ff9194b5166 10.12.2.4:6379 220 | slots: (0 slots) slave 221 | replicates 8a78d53307bdde11f6e53a9c1e90b1a9949463f1 222 | M: bf11440a398e88ad7bfc167dd3219a4f594ffa39 10.12.0.6:6379 223 | slots:[5461-10922] (5462 slots) master 224 | 1 additional replica(s) 225 | [OK] All nodes agree about slots configuration. 226 | >>> Check for open slots... 227 | >>> Check slots coverage... 228 | [OK] All 16384 slots covered. 
229 | ``` 230 | 231 | ``` 232 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli cluster info 233 | cluster_state:ok 234 | cluster_slots_assigned:16384 235 | cluster_slots_ok:16384 236 | cluster_slots_pfail:0 237 | cluster_slots_fail:0 238 | cluster_known_nodes:6 239 | cluster_size:3 240 | cluster_current_epoch:6 241 | cluster_my_epoch:1 242 | cluster_stats_messages_ping_sent:126 243 | cluster_stats_messages_pong_sent:130 244 | cluster_stats_messages_sent:256 245 | cluster_stats_messages_ping_received:125 246 | cluster_stats_messages_pong_received:126 247 | cluster_stats_messages_meet_received:5 248 | cluster_stats_messages_received:256 249 | 250 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke> 251 | ``` 252 | 253 | ``` 254 | kubectl apply -f app-deployment.yaml 255 | ``` 256 | 257 | ``` 258 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl get svc 259 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 260 | hit-counter-lb LoadBalancer 10.15.253.213 35.187.144.200 80:31309/TCP 103s 261 | kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 46m 262 | redis-cluster ClusterIP 10.15.248.54 <none> 6379/TCP,16379/TCP 18m 263 | ``` 264 | 265 | ## Simulate a Node Failure 266 | 267 | We can simulate the failure of a cluster member by deleting the Pod.
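Before doing so, it is worth noting where the 16384 slots reported in the cluster check come from: Redis Cluster assigns every key to a slot computed as CRC16(key) mod 16384, using the CRC16-CCITT (XModem) variant. A minimal Python sketch of that mapping, useful for predicting which of the three masters above will own a given key:

```python
# Redis Cluster maps each key to one of 16384 hash slots:
#   HASH_SLOT = CRC16(key) mod 16384   (CRC16-CCITT / XModem variant)

def crc16(data: bytes) -> int:
    """Bitwise CRC16-XModem (polynomial 0x1021, initial value 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16(key.encode()) % 16384

# With the slot ranges shown above: 0-5460 -> Master[0],
# 5461-10922 -> Master[1], 10923-16383 -> Master[2]
print(key_slot("foo"))   # 12182, i.e. the master owning slots 10923-16383
```

The real implementation also supports hash tags (only the substring between `{` and `}` is hashed, so related keys can be forced into the same slot), which this sketch omits.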
When we delete redis-cluster-0, which was originally a master, Redis Cluster promotes the replica redis-cluster-3 to master, and when redis-cluster-0 returns, it rejoins as a slave. 268 | 269 | ``` 270 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-0 -- redis-cli role 271 | 1) "master" 272 | 2) (integer) 854 273 | 3) 1) 1) "10.12.2.4" 274 | 2) "6379" 275 | 3) "854" 276 | ``` 277 | 278 | ``` 279 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-1 -- redis-cli role 280 | 1) "master" 281 | 2) (integer) 994 282 | 3) 1) 1) "10.12.1.8" 283 | 2) "6379" 284 | 3) "994" 285 | 286 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-2 -- redis-cli role 287 | 1) "master" 288 | 2) (integer) 1008 289 | 3) 1) 1) "10.12.2.5" 290 | 2) "6379" 291 | 3) "1008" 292 | 293 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-3 -- redis-cli role 294 | 1) "slave" 295 | 2) "10.12.2.3" 296 | 3) (integer) 6379 297 | 4) "connected" 298 | 5) (integer) 1008 299 | 300 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-4 -- redis-cli role 301 | 1) "slave" 302 | 2) "10.12.0.6" 303 | 3) (integer) 6379 304 | 4) "connected" 305 | 5) (integer) 1022 306 | 307 | C:\Users\Ajeet_Raina\Desktop\redis\kubernetes\gke>kubectl exec -it redis-cluster-5 -- redis-cli role 308 | 1) "slave" 309 | 2) "10.12.1.7" 310 | 3) (integer) 6379 311 | 4) "connected" 312 | 5) (integer) 1022 313 | ``` 314 | -------------------------------------------------------------------------------- /oss/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # What is Redis? 4 | 5 | - Redis is in the family of databases called key-value stores. 6 | - The essence of a key-value store is the ability to store some data, called a value, inside a key. 7 | - This data can later be retrieved only if we know the exact key used to store it.
8 | - Redis is often called a data structure server because it has an outer key-value shell, but each value can contain a complex data structure, such as a string, a list, or a hash, as well as ordered data structures called sorted sets and probabilistic data structures like HyperLogLog. 9 | 10 | - Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. 11 | - Redis stores data using a key-value system and, as such, makes it very easy to retrieve data, since there are none of the complicated relationships or other operations that make relational databases slow. 12 | 13 | # What is Redis Good for? 14 | 15 | Redis can be used for a number of things, such as: 16 | 17 | - Caching 18 | - Counting 19 | - Queues 20 | - Pub/Sub 21 | 22 | # What is Caching? 23 | 24 | Caching is the process of storing data in a cache. A cache is a temporary data store where data is kept for later use. 25 | 26 | A cache is easier for the client (or server) to reach than a permanent data store, which might be located on a different service (a database or an external API endpoint) and so takes more time and resources to reach. 27 | 28 | 29 | # How Caching Works 30 | 31 | The image below is not 100% accurate, since there is a lot going on under the hood and I had to keep it as simple as possible. 32 | 33 | 34 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/oss/Screen%20Shot%202020-03-09%20at%203.14.14%20PM.png) 35 | ![My Image](https://github.com/collabnix/redisplanet/blob/master/oss/Screen%20Shot%202020-03-09%20at%203.14.20%20PM.png) 36 | 37 | 38 | In the first illustration, the server queries the database every time a request for profile information comes from the client (Mr. Jide). 39 | 40 | Let's assume Mr. Jide requests this data 30 times during his browsing session, that each request takes 10 seconds to complete, and that the response time remains constant.
41 | 42 | 10secs * 30requests = 5 minutes. 43 | 44 | In the second illustration, the server looks into the cache whenever Mr. Jide requests profile information and queries the database only if the data is not available in the cache (Redis). 45 | 46 | One advantage of the second design over the first is that the response time for subsequent requests for the same data is shorter, since the data has been cached in memory (Redis), so there is no need to query the database, which is an expensive operation. 47 | 48 | We also use fewer server resources, leading to an improvement in application performance. 49 | 50 | Compared to querying the database, getting data from the cache is easier and faster for the server. 51 | 52 | For the second design, let's assume the request-response cycle takes 5 seconds (after the first call), and Mr. Jide requests this data 30 times. How much time will the 30 requests take? 53 | 54 | 10secs * 1request = 10 seconds (Initial Request) 55 | 5secs * 29requests = 145secs (Subsequent Requests) 56 | Total: 10secs + 145secs = 2mins 35secs 57 | 58 | Using the first design, the requests took a whopping 5 minutes, while using the second design, the total trip time was 2mins 35secs. 59 | 60 | This isn't just about speed! Imagine what difference it would make to your users and how much money you could save on server costs. 61 | 62 | 63 | 64 | # Data Structure 65 | 66 | 67 | As a first example, we can use the command SET to store the value "fido" at key "server:name": 68 | 69 | 70 | SET server:name "fido" 71 | Redis will store our data permanently, so we can later ask "What is the value stored at key server:name?"
and Redis will reply with "fido": 72 | 73 | 74 | GET server:name => "fido" 75 | 76 | There is also a command to test whether a given key exists: 77 | 78 | 79 | EXISTS server:name => 1 80 | EXISTS server:blabla => 0 81 | 82 | Other basic operations provided by Redis are DEL, to delete a given key and its associated value, and INCR, to atomically increment a number stored at a given key: 83 | 84 | 85 | 86 | 87 | > SET connections 10 88 | OK 89 | > INCR connections 90 | (integer) 11 91 | > INCR connections 92 | (integer) 12 93 | > DEL connections 94 | (integer) 1 95 | > INCR connections 96 | (integer) 1 97 | > INCRBY connections 100 98 | (integer) 101 99 | > DECRBY connections 10 100 | (integer) 91 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | It is also possible to increment the number contained inside a key by a specific amount: 109 | 110 | 111 | INCRBY connections 100 => 101 112 | There are also similar commands to decrement the value of a key. 113 | 114 | 115 | DECR connections => 100 116 | DECRBY connections 10 => 90 117 | When you manipulate Redis strings with incrementing and decrementing commands, you are implementing counters. Counters are a very popular application for Redis. 118 | 119 | 120 | ## INCR 121 | 122 | There is something special about INCR. Why provide such an operation if we can do it ourselves with a bit of code? After all, it is as simple as: 123 | 124 | x = GET count 125 | x = x + 1 126 | SET count x 127 | The problem is that doing the increment in this way will only work as long as there is a single client using the key. See what happens if two clients access this key at the same time: 128 | 129 | Client A reads count as 10. 130 | Client B reads count as 10. 131 | Client A increments 10 and sets count to 11. 132 | Client B increments 10 and sets count to 11. 133 | 134 | We wanted the value to be 12, but instead it is 11! This is because incrementing the value in this way is not an atomic operation.
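The interleaving above can be reproduced in a few lines of plain Python, with a dict standing in for Redis (a sketch of the race, not real Redis client code):

```python
# A dict stands in for Redis; "count" plays the role of the key.
store = {"count": 10}

# Both clients GET the value before either one SETs it back.
a = store["count"]      # Client A reads 10
b = store["count"]      # Client B reads 10

store["count"] = a + 1  # Client A sets count to 11
store["count"] = b + 1  # Client B also sets count to 11 -- one increment is lost

print(store["count"])   # 11, even though two increments ran
```

An atomic INCR performs the whole read-modify-write as one indivisible step on the server, so no other client can slip in between the read and the write.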
Calling the INCR command in Redis will prevent this from happening, because INCR is an atomic operation. 135 | 136 | All the Redis operations implemented by single commands are atomic, including the ones operating on more complex data structures. So when you use a Redis command that modifies some value, you don't have to think about concurrent access. 137 | 138 | 139 | ## EXPIRE | TTL | PEXPIRE | PTTL 140 | 141 | Redis can be told that a key should only exist for a certain length of time. This is accomplished with the EXPIRE and TTL commands, and by the similar PEXPIRE and PTTL commands that operate using time in milliseconds instead of seconds. 142 | 143 | 144 | SET resource:lock "Redis Demo" 145 | EXPIRE resource:lock 120 146 | This causes the key resource:lock to be deleted in 120 seconds. You can test how long a key will exist with the TTL command. It returns the number of seconds until it will be deleted. 147 | 148 | 149 | TTL resource:lock => 113 150 | // after 113s 151 | TTL resource:lock => -2 152 | A TTL of -2 means that the key does not exist (anymore). A TTL of -1 means that the key will never expire. Note that if you SET a key, its TTL will be reset. 153 | 154 | 155 | SET resource:lock "Redis Demo 1" 156 | EXPIRE resource:lock 120 157 | TTL resource:lock => 119 158 | SET resource:lock "Redis Demo 2" 159 | TTL resource:lock => -1 160 | The SET command can actually accept further arguments to directly set a time to live (TTL) on a key, so you can alter the value of a key and set its TTL at the same time in a single atomic operation: 161 | 162 | 163 | SET resource:lock "Redis Demo 3" EX 5 164 | TTL resource:lock => 5 165 | It is also possible to cancel the time to live of a key, removing the expire and making the key permanent again, with the PERSIST command.
166 | 167 | 168 | SET resource:lock "Redis Demo 3" EX 5 169 | PERSIST resource:lock 170 | TTL resource:lock => -1 171 | 172 | 173 | ## RPUSH | LPUSH | LRANGE | LPOP | RPOP 174 | 175 | Redis also supports several more complex data structures. The first one we'll look at is a list. A list is a series of ordered values. Some of the important commands for interacting with lists are RPUSH, LPUSH, LLEN, LRANGE, LPOP, and RPOP. You can immediately begin working with a key as a list, as long as it doesn't already exist as a different type. 176 | 177 | This concept is generally true for every Redis data structure: you don't create a key first and add things to it later; you just use the command that adds new elements directly. As a side effect, the key will be created if it did not exist. Similarly, keys that end up empty after executing some command are automatically removed from the key space. 178 | 179 | RPUSH puts the new element at the end of the list. 180 | 181 | 182 | RPUSH friends "Alice" 183 | RPUSH friends "Bob" 184 | LPUSH puts the new element at the start of the list. 185 | 186 | 187 | LPUSH friends "Sam" 188 | 189 | 190 | LRANGE gives a subset of the list. It takes the index of the first element you want to retrieve as its first parameter and the index of the last element you want to retrieve as its second parameter. A value of -1 for the second parameter means to retrieve elements until the end of the list, -2 means to include up to the penultimate, and so forth. 191 | 192 | 193 | LRANGE friends 0 -1 => 1) "Sam", 2) "Alice", 3) "Bob" 194 | LRANGE friends 0 1 => 1) "Sam", 2) "Alice" 195 | LRANGE friends 1 2 => 1) "Alice", 2) "Bob" 196 | 197 | 198 | So far we have explored commands that add elements to the list, and LRANGE, which lets you inspect ranges of the list. A fundamental feature of Redis lists is the ability to remove an element from the head or the tail of the list and return it to the client in the same operation.
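These list semantics map neatly onto Python's collections.deque, which can help build intuition; a sketch mirroring the friends example (a plain local data structure, not a Redis client):

```python
from collections import deque

friends = deque()

friends.append("Alice")      # RPUSH friends "Alice"
friends.append("Bob")        # RPUSH friends "Bob"
friends.appendleft("Sam")    # LPUSH friends "Sam"

# LRANGE friends 0 -1 -> the whole list
print(list(friends))         # ['Sam', 'Alice', 'Bob']

# LPOP / RPOP: remove and return an element from the head / tail
print(friends.popleft())     # 'Sam'
print(friends.pop())         # 'Bob'
print(len(friends))          # LLEN friends -> 1
```

Unlike this local deque, the Redis list lives on the server, so many clients can push and pop from it concurrently, each operation being atomic.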
199 | 200 | Before that, note that both RPUSH and LPUSH are variadic, so you can specify multiple elements in the same command execution. 201 | 202 | 203 | RPUSH friends 1 2 3 => 6 204 | Tip: RPUSH and LPUSH return the total length of the list after the operation. You can also use LLEN to obtain the current length of the list. 205 | 206 | 207 | LLEN friends => 6 208 | LPOP removes the first element from the list and returns it. 209 | 210 | 211 | LPOP friends => "Sam" 212 | RPOP removes the last element from the list and returns it. 213 | 214 | 215 | RPOP friends => "3" 216 | Note that the list now has only four elements: 217 | 218 | 219 | LLEN friends => 4 220 | LRANGE friends 0 -1 => 1) "Alice" 2) "Bob" 3) "1" 4) "2" 221 | 222 | 223 | The next data structure that we'll look at is a set. A set is similar to a list, except it does not have a specific order and each element may appear only once. Both data structures are very useful because, while in a list it is fast to access the elements near the head or the tail and the order of the elements is preserved, in a set it is very fast to test for membership, that is, to immediately know whether a given element has been added or not. Moreover, in a set a given element can exist only in a single copy. 227 | 228 | 229 | Some of the important commands for working with sets are SADD, SREM, SISMEMBER, SMEMBERS and SUNION. 230 | 231 | SADD adds the given member to the set; this command is also variadic. 232 | 233 | 234 | SADD superpowers "flight" 235 | SADD superpowers "x-ray vision" "reflexes" 236 | SREM removes the given member from the set, returning 1 or 0 to signal whether the member was actually there or not. 237 | 238 | 239 | SREM superpowers "reflexes" => 1 240 | SREM superpowers "making pizza" => 0 241 | 242 | 243 | SISMEMBER tests if the given value is in the set. It returns 1 if the value is there and 0 if it is not.
244 | 245 | 246 | SISMEMBER superpowers "flight" => 1 247 | SISMEMBER superpowers "reflexes" => 0 248 | SMEMBERS returns a list of all the members of this set. 249 | 250 | 251 | SMEMBERS superpowers => 1) "flight", 2) "x-ray vision" 252 | SUNION combines two or more sets and returns the list of all their elements. 253 | 254 | 255 | SADD birdpowers "pecking" 256 | SADD birdpowers "flight" 257 | SUNION superpowers birdpowers => 1) "pecking", 2) "x-ray vision", 3) "flight" 258 | 259 | 260 | 261 | The return value of SADD is as important as the one of SREM. If the element we try to add is already in the set, SADD returns 0; otherwise it returns 1: 262 | 263 | 264 | SADD superpowers "flight" => 0 265 | SADD superpowers "invisibility" => 1 266 | 267 | 268 | Sets also have a command very similar to LPOP and RPOP for extracting elements from the set and returning them to the client in a single operation. However, since sets are not ordered data structures, the returned (and removed) elements are effectively random in this case. 269 | 270 | 271 | SADD letters a b c d e f => 6 272 | SPOP letters 2 => 1) "c" 2) "a" 273 | The argument of SPOP after the key name is the number of elements we want it to return and remove from the set. 274 | 275 | Now the set will have just the remaining elements: 276 | 277 | 278 | SMEMBERS letters => 1) "b" 2) "d" 3) "e" 4) "f" 279 | There is also a command, SRANDMEMBER, that returns random elements without removing them from the set. It works just like SPOP, but if you specify a negative count instead of a positive one, it may also return repeated elements. 280 | 281 | 282 | Sets are a very handy data type, but as they are unsorted they don't work well for a number of problems. This is why Redis 1.2 introduced sorted sets. 283 | 284 | A sorted set is similar to a regular set, but now each value has an associated score. This score is used to sort the elements in the set.
285 | 286 | 287 | ZADD hackers 1940 "Alan Kay" 288 | ZADD hackers 1906 "Grace Hopper" 289 | ZADD hackers 1953 "Richard Stallman" 290 | ZADD hackers 1965 "Yukihiro Matsumoto" 291 | ZADD hackers 1916 "Claude Shannon" 292 | ZADD hackers 1969 "Linus Torvalds" 293 | ZADD hackers 1957 "Sophie Wilson" 294 | ZADD hackers 1912 "Alan Turing" 295 | In these examples, the scores are years of birth and the values are the names of famous hackers. 296 | 297 | 298 | ZRANGE hackers 2 4 => 1) "Claude Shannon", 2) "Alan Kay", 3) "Richard Stallman" 299 | 300 | Simple strings, sets and sorted sets already get a lot done, but there is one more data type Redis can handle: hashes. 301 | 302 | Hashes are maps between string fields and string values, so they are the perfect data type to represent objects (e.g. a user with a number of fields like name, surname, age, and so forth): 303 | 304 | 305 | HSET user:1000 name "John Smith" 306 | HSET user:1000 email "john.smith@example.com" 307 | HSET user:1000 password "s3cret" 308 | To get back the saved data, use HGETALL: 309 | 310 | 311 | HGETALL user:1000 312 | You can also set multiple fields at once: 313 | 314 | 315 | HMSET user:1001 name "Mary Jones" password "hidden" email "mjones@example.com" 316 | If you only need a single field value, that is possible as well: 317 | 318 | 319 | HGET user:1001 name => "Mary Jones" 320 | 321 | 322 | Numerical values in hash fields are handled exactly the same as in simple strings, and there are operations to increment the value in an atomic way. 323 | 324 | 325 | HSET user:1000 visits 10 326 | HINCRBY user:1000 visits 1 => 11 327 | HINCRBY user:1000 visits 10 => 21 328 | HDEL user:1000 visits 329 | HINCRBY user:1000 visits 1 => 1 330 | Check the full list of Hash commands for more information. 331 | 332 | 333 | --------------------------------------------------------------------------------
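The hash operations above can be pictured as a dict of field-value pairs stored under one key; a Python sketch of the user:1000 example (a local dict, not a Redis client, with hypothetical helper names):

```python
# Each Redis hash behaves like a dict of field -> value under one key.
db = {}

def hset(key, field, value):
    db.setdefault(key, {})[field] = value

def hget(key, field):
    return db.get(key, {}).get(field)

def hincrby(key, field, amount):
    h = db.setdefault(key, {})
    # Like HINCRBY, a missing field counts as 0 before the increment
    h[field] = int(h.get(field, 0)) + amount
    return h[field]

hset("user:1000", "name", "John Smith")
hset("user:1000", "visits", 10)
print(hincrby("user:1000", "visits", 1))   # 11
print(hget("user:1000", "name"))           # John Smith
```

Unlike this sketch, the real HINCRBY runs atomically on the server, so concurrent clients incrementing the same field never lose updates.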