├── .DS_Store ├── ES-Tutorial-1 ├── README.md ├── cli │ ├── README.md │ └── image │ │ ├── es-head.png │ │ └── kibana.png ├── elasticsearch.yml.1st ├── image │ ├── es-head.png │ └── kibana.png ├── jvm.options.1st ├── kibana.yml ├── tuto1 └── ymladd.yml.1st ├── ES-Tutorial-2 ├── README.md ├── cli │ ├── README.md │ └── image │ │ ├── head.png │ │ └── hq.png ├── es-head.service ├── es-hq.service ├── image │ ├── head.png │ └── hq.png ├── query │ └── query ├── tools │ └── esbot │ │ ├── README.md │ │ └── esbot └── tuto2 ├── ES-Tutorial-3-1 ├── README.md ├── cli │ ├── README.md │ └── accounts.json ├── elasticsearch.yml.2nd ├── image │ └── es-head.png ├── jvm.options.2nd ├── query │ └── query ├── tuto3-1 └── ymladd.yml.2nd ├── ES-Tutorial-3-2 ├── README.md ├── elasticsearch.yml.3rd ├── image │ └── es-head.png ├── jvm.options.3rd ├── tuto3-2 └── ymladd.yml.3rd ├── ES-Tutorial-4 ├── README.md ├── elasticsearch.yml.4th ├── image │ └── es-head.png ├── jvm.options.4th ├── tuto4 └── ymladd.yml.4th ├── ES-Tutorial-5 ├── README.md ├── image │ └── noridict1.jpg ├── query │ ├── new_query │ └── query ├── tools │ ├── .DS_Store │ └── bulkapi │ │ ├── README.md │ │ ├── accounts.json │ │ ├── bulk │ │ ├── logs.jsonl │ │ └── shakespeare_6.0.json └── tuto5 ├── ES-Tutorial-6 ├── README.md ├── image │ └── grafana.png ├── query │ └── query ├── tools │ └── monitor │ │ ├── README.md │ │ ├── accounts.json │ │ ├── bulk │ │ ├── grafana_template.json │ │ ├── image │ │ ├── grafana1.png │ │ ├── grafana10.png │ │ ├── grafana11.png │ │ ├── grafana12.png │ │ ├── grafana13.png │ │ ├── grafana2.png │ │ ├── grafana3.png │ │ ├── grafana4.png │ │ ├── grafana5.png │ │ ├── grafana6.png │ │ ├── grafana7.png │ │ ├── grafana8.png │ │ └── grafana9.png │ │ └── monworker.py └── tuto6 ├── ES-Tutorial-7 ├── README.md ├── cli │ └── README.md ├── query │ └── query ├── tools │ ├── .DS_Store │ ├── ansible │ │ ├── hosts │ │ ├── roles │ │ │ ├── elasticsearch │ │ │ │ ├── defaults │ │ │ │ │ └── main.yml │ │ │ │ ├── handlers │ │ │ │ │ └── main.yml │ │ │ │ ├── tasks │ │ │ │ │ └── main.yml │ │ │ │ └── templates │ │ │ │ │ └── elasticsearch.yml.j2 │ │ │ ├── filebeat │ │ │ │ ├── defaults │ │ │ │ │ └── main.yml │ │ │ │ ├── handlers │ │ │ │ │ └── main.yml │ │ │ │ ├── tasks │ │ │ │ │ └── main.yml │ │ │ │ └── templates │ │ │ │ │ └── filebeat.yml.j2 │ │ │ └── rolling │ │ │ │ ├── tasks │ │ │ │ └── main.yml │ │ │ │ └── templates │ │ │ │ └── rolling_restart.sh.j2 │ │ └── site.yml │ ├── curator │ │ ├── action │ │ │ ├── alias.action.yml │ │ │ ├── close.action.yml │ │ │ ├── delete_duration.action.yml │ │ │ ├── delete_name.action.yml │ │ │ ├── es.action.yml │ │ │ ├── forcemerge.action.yml │ │ │ ├── open.action.yml │ │ │ ├── rollover.action.yml │ │ │ ├── rollover_alias.action.yml │ │ │ └── warm.action.yml │ │ ├── config │ │ │ └── es.config.yml │ │ ├── cur.sh │ │ └── logs │ │ │ └── es.log │ ├── shard │ │ ├── accounts.json │ │ ├── bulk │ │ ├── monworker.py │ │ ├── search_test.py │ │ └── shard │ └── telebot │ │ ├── esbot.py │ │ ├── esbot.pyc │ │ └── tele.py └── tuto7 ├── README.md ├── common ├── ES-Key.pem └── tmux └── questions └── 9th /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/.DS_Store -------------------------------------------------------------------------------- /ES-Tutorial-1/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-1 2 | 3 | ElasticSearch 첫 번째 튜토리얼을 
기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## ElasticSearch Product 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | Master Node 1번 장비에서 실습합니다. 12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-1 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 21 | 22 | ##################### Menu ############## 23 | $ ./tuto1 [Command] 24 | #####################%%%%%%############## 25 | 1 : elasticsearch packages 26 | 2 : configure elasticsearch.yml & jvm.options 27 | 3 : start elasticsearch process 28 | 4 : install kibana packages 29 | 5 : configure kibana.yml 30 | 6 : start kibana process 31 | init : ec2 instance initializing 32 | ######################################### 33 | 34 | ``` 35 | 36 | ## ELK Tutorial 1 - Elasticsearch, Kibana 세팅 37 | 38 | ### Elasticsearch 39 | ##### /etc/elasticsearch/elasticsearch.yml 40 | 41 | 1) cluster.name, node.name, network.host, http.cors.enabled, http.cors.allow-origin 추가설정 42 | 2) **./tuto1 1 ./tuto1 2 실행 후 cluster.name 은 unique name 으로 별도 설정 필요** 43 | 3) 7.x 부터 변경된 discovery 설정 추가 44 | 45 | ```bash 46 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 1 47 | 48 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 2 49 | 50 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo vi /etc/elasticsearch/elasticsearch.yml 51 | 52 | 53 | ### For ClusterName & Node Name 54 | cluster.name: mytuto-es # Your Unique Cluster Name 55 | node.name: master-ip-172-31-14-110 # Your Unique Node Name 56 | 57 | ### For Head 58 | http.cors.enabled: true 59 | http.cors.allow-origin: "*" 60 | 61 | ### For Response by External Request 62 | network.host: 0.0.0.0 63 | 64 | ### Discovery Settings 65 | discovery.seed_hosts: [ "{IP1}:9300", ] 66 | cluster.initial_master_nodes: [ "{IP1}:9300", ] 67 | 68 | ``` 69 | 70 | ##### /etc/elasticsearch/jvm.options 71 | 3) Xms1g, Xmx1g 를 물리 메모리의 절반으로 수정 72 | 73 | ```bash 74 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo vi /etc/elasticsearch/jvm.options 75 | 76 | -Xms2g 77 | -Xmx2g 78 | 79 | ``` 80 | 81 | 4) 두 파일 모두 수정이 완료되었으면 ./tuto1 3 을 실행하여 ES 프로세스 시작, 클러스터가 잘 구성되었는지 확인 82 | 83 | ```bash 84 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 3 85 | 86 | ``` 87 | 88 | 89 | ### Kibana 90 | /etc/kibana/kibana.yml 91 | 1) server.host 를 외부에서도 접근 가능하도록 0.0.0.0 으로 설정 92 | 2) elasticsearch.url 은 localhost 에 ES 도 함께 설치했기 때문에 http://localhost:9200 으로 설정 93 | 3) kibana.index 는 기본이름인 ".kibana" 로 설정 94 | 95 | ```bash 96 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 97 | 98 | ##################### Menu ############## 99 | $ ./tuto1 [Command] 100 | #####################%%%%%%############## 101 | 1 : install java & elasticsearch packages 102 | 2 : configure elasticsearch.yml & jvm.options 103 | 3 : start elasticsearch process 104 | 4 : install kibana packages 105 | 5 : configure kibana.yml 106 | 6 : start kibana process 107 | init : ec2 instance initializing 108 | ######################################### 109 | 110 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 4 111 | 112 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 5 113 | 114 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ ./tuto1 6 115 | 116 | ``` 117 | 118 | ## Smoke Test 119 | 120 | ### Elasticsearch 121 | 122 | ```bash 123 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ curl localhost:9200 124 | { 125 | "name" : "master-ip-172-31-13-126", 126 | 
"cluster_name" : "mytuto-es", 127 | "cluster_uuid" : "LTfRfk3KRLS31kQDROVu9A", 128 | "version" : { 129 | "number" : "7.3.0", 130 | "build_flavor" : "default", 131 | "build_type" : "rpm", 132 | "build_hash" : "a9861f4", 133 | "build_date" : "2019-01-24T11:27:09.439740Z", 134 | "build_snapshot" : false, 135 | "lucene_version" : "7.6.0", 136 | "minimum_wire_compatibility_version" : "5.6.0", 137 | "minimum_index_compatibility_version" : "5.0.0" 138 | }, 139 | "tagline" : "You Know, for Search" 140 | } 141 | 142 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ curl -H 'Content-Type: application/json' -XPOST localhost:9200/firstindex/_doc -d '{ "mykey": "myvalue" }' 143 | ``` 144 | 145 | * Web Browser 에 [http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://{FQDN}:9200](http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200) 실행 146 | 147 | ![Optional Text](image/es-head.png) 148 | 149 | ### Kibana 150 | * Web Browser 에 [http://{FQDN}:5601](http://{FQDN}:5601) 실행 151 | 152 | ![Optional Text](image/kibana.png) 153 | 154 | ## Trouble Shooting 155 | 156 | ### Elasticsearch 157 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 158 | 159 | path.logs: /var/log/elasticsearch 로 설정되어 cluster.name 이 적용된 파일로 만들어 로깅됩니다. 160 | 161 | 위의 경우에는 /var/log/elasticsearch/mytuto-es.log 에서 확인할 수 있습니다. 162 | 163 | ```bash 164 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo vi /var/log/elasticsearch/mytuto-es.log 165 | ``` 166 | 167 | 로그도 남지 않았으면 elasticsearch user 로 elasticsearch binary 파일을 직접 실행해 로그를 살펴봅시다 168 | 169 | ```bash 170 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch 171 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. 172 | [2020-01-21T16:22:17,706][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [6gb], net total_space [9.9gb], types [rootfs] 173 | [2020-01-21T16:22:17,708][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] heap size [1.9gb], compressed ordinary object pointers [true] 174 | [2020-01-21T16:22:17,740][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [master-ip-172-31-5-69] uncaught exception in thread [main] 175 | org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. 
Use 'elasticsearch-node repurpose' tool to clean up 176 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.5.1.jar:7.5.1] 177 | at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.5.1.jar:7.5.1] 178 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.5.1.jar:7.5.1] 179 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 180 | at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 181 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.5.1.jar:7.5.1] 182 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.5.1.jar:7.5.1] 183 | Caused by: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 184 | at org.elasticsearch.env.NodeEnvironment.ensureNoShardData(NodeEnvironment.java:1081) ~[elasticsearch-7.5.1.jar:7.5.1] 185 | at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:325) ~[elasticsearch-7.5.1.jar:7.5.1] 186 | at org.elasticsearch.node.Node.(Node.java:273) ~[elasticsearch-7.5.1.jar:7.5.1] 187 | at org.elasticsearch.node.Node.(Node.java:253) ~[elasticsearch-7.5.1.jar:7.5.1] 188 | at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 189 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 190 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.5.1.jar:7.5.1] 191 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.5.1.jar:7.5.1] 192 | ... 6 more 193 | ``` 194 | -------------------------------------------------------------------------------- /ES-Tutorial-1/cli/README.md: -------------------------------------------------------------------------------- 1 | # System Script 1st 2 | 3 | ElasticSearch 첫 번째 System 명령어 스크립트를 기술합니다. 4 | 5 | ## YUM Repository 등록으로 설치하기 6 | ElasticSearch 를 설치하기 위해서는 항상 Java 를 설치해야 합니다. 
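Before registering the repository, it can help to confirm that a JDK is actually available on the machine (a minimal check; the 6.x packages used below expect at least Java 8):

```bash
# verify a JDK is installed and on the PATH; if this fails, install Java first as shown below
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ java -version
```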
7 | 8 | ```bash 9 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install java 10 | 11 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/yum.repos.d/elasticsearch.repo 12 | 13 | [elasticsearch-6.x] 14 | name=Elasticsearch repository for 6.x packages 15 | baseurl=https://artifacts.elastic.co/packages/6.x/yum 16 | gpgcheck=1 17 | gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch 18 | enabled=1 19 | autorefresh=1 20 | type=rpm-md 21 | 22 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum install elasticsearch 23 | ``` 24 | 25 | ## RPM 으로 설치하기 26 | 27 | ```bash 28 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install wget 29 | 30 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.rpm 31 | 32 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo rpm -ivh ./elasticsearch-6.6.0.rpm 33 | ``` 34 | 35 | ## zip, tar Download 하여 설치하기 36 | 37 | * zip 38 | ```bash 39 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install unzip 40 | 41 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.zip 42 | 43 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ unzip elasticsearch-6.6.0.zip 44 | ``` 45 | 46 | * tar.gz 47 | ```bash 48 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.tar.gz 49 | 50 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ tar -xzf elasticsearch-6.6.0.tar.gz 51 | ``` 52 | 53 | ## Elasticsearch 실행하기 54 | 55 | * CentOS/RHEL 6 56 | ```bash 57 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo service elasticsearch start 58 | ``` 59 | 60 | * CentOS/RHEL 7 61 | ```bash 62 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo systemctl start elasticsearch.service 63 | ``` 64 | 65 | * Source Install 66 | ```bash 67 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd elasticsearch-6.6.0 68 | 69 | [ec2-user@ip-xxx-xxx-xxx-xxx elasticsearch-6.6.0]$ bin/elasticsearch -d 70 | ``` 71 | 72 | ### Smoke Test 73 | ```bash 74 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ ps ax | grep elasticsearch 75 | 76 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ curl localhost:9200 77 | { 78 | "name" : "ip-172-31-14-110", 79 | "cluster_name" : "mytuto-es", 80 | "cluster_uuid" : "52XfKjycSLCSwXqT_YPMXA", 81 | "version" : { 82 | "number" : "6.6.0", 83 | "build_flavor" : "default", 84 | "build_type" : "rpm", 85 | "build_hash" : "a9861f4", 86 | "build_date" : "2019-01-24T11:27:09.439740Z", 87 | "build_snapshot" : false, 88 | "lucene_version" : "7.6.0", 89 | "minimum_wire_compatibility_version" : "5.6.0", 90 | "minimum_index_compatibility_version" : "5.0.0" 91 | }, 92 | "tagline" : "You Know, for Search" 93 | } 94 | ``` 95 | 96 | * Web Browser 에 [http://ec2-3-0-99-205.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200](http://ec2-3-0-99-205.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200) 실행 97 | 98 | ![Optional Text](image/es-head.png) 99 | 100 | ### Trouble Shooting 101 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 102 | 103 | YUM, RPM 을 통한 설치는 path.logs: /var/log/elasticsearch 로, Source 설치는 {install path}/logs 로 설정되어 cluster.name 이 적용된 파일을 만들어 로깅됩니다. 104 | 105 | 위의 경우에는 /var/log/elasticsearch/{cluster.name}.log, ~/elasticsearch-6.6.0/logs/{cluster.name}.log 에서 확인할 수 있습니다. 
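If no log file has been written at all, the service status and journal usually show why startup failed (this applies to the YUM/RPM installs managed by systemd; source installs print errors directly to the console):

```bash
# last startup attempt and any fatal error reported by systemd
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo systemctl status elasticsearch.service

# recent startup messages captured by the journal
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo journalctl -u elasticsearch --no-pager | tail -n 50
```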
106 | 107 | * YUM, RPM 108 | ```bash 109 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /var/log/elasticsearch/{cluster.name}.log 110 | ``` 111 | 112 | * Source 113 | ```bash 114 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi ./elasticsearch-6.6.0/logs/{cluster.name}.log 115 | ``` 116 | 117 | ## Kibana Dev Tools 활용하기 118 | 119 | ```bash 120 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/yum.repos.d/kibana.repo 121 | 122 | [kibana-6.x] 123 | name=Kibana repository for 6.x packages 124 | baseurl=https://artifacts.elastic.co/packages/6.x/yum 125 | gpgcheck=1 126 | gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch 127 | enabled=1 128 | autorefresh=1 129 | type=rpm-md 130 | 131 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum install kibana 132 | 133 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/kibana/kibana.yml 134 | 135 | server.host: "0.0.0.0" 136 | elasticsearch.url: "http://localhost:9200" 137 | kibana.index: ".kibana" 138 | ``` 139 | 140 | ## Kibana 실행하기 141 | 142 | * CentOS/RHEL 6 143 | ```bash 144 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo service kibana start 145 | ``` 146 | 147 | * CentOS/RHEL 7 148 | ```bash 149 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo systemctl start kibana.service 150 | ``` 151 | 152 | * Web Browser 에 [http://{FQDN}:5601](http://{FQDN}:5601) 실행 153 | 154 | ![Optional Text](image/kibana.png) 155 | 156 | 157 | -------------------------------------------------------------------------------- /ES-Tutorial-1/cli/image/es-head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-1/cli/image/es-head.png -------------------------------------------------------------------------------- /ES-Tutorial-1/cli/image/kibana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-1/cli/image/kibana.png -------------------------------------------------------------------------------- /ES-Tutorial-1/elasticsearch.yml.1st: -------------------------------------------------------------------------------- 1 | # ======================== Elasticsearch Configuration ========================= 2 | #cluster.name: my-application 3 | #node.name: node-1 4 | #node.attr.rack: r1 5 | path.data: /var/lib/elasticsearch 6 | path.logs: /var/log/elasticsearch 7 | #bootstrap.memory_lock: true 8 | #network.host: 192.168.0.1 9 | #http.port: 9200 10 | #discovery.seed_hosts: ["host1", "host2"] 11 | #cluster.initial_master_nodes: ["node-1", "node-2"] 12 | #gateway.recover_after_nodes: 3 13 | #action.destructive_requires_name: true 14 | # ============================================================================== 15 | -------------------------------------------------------------------------------- /ES-Tutorial-1/image/es-head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-1/image/es-head.png -------------------------------------------------------------------------------- /ES-Tutorial-1/image/kibana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-1/image/kibana.png 
-------------------------------------------------------------------------------- /ES-Tutorial-1/jvm.options.1st: -------------------------------------------------------------------------------- 1 | ## JVM configuration 2 | 3 | ################################################################ 4 | ## IMPORTANT: JVM heap size 5 | ################################################################ 6 | ## 7 | ## You should always set the min and max JVM heap 8 | ## size to the same value. For example, to set 9 | ## the heap to 4 GB, set: 10 | ## 11 | ## -Xms4g 12 | ## -Xmx4g 13 | ## 14 | ## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html 15 | ## for more information 16 | ## 17 | ################################################################ 18 | 19 | # Xms represents the initial size of total heap space 20 | # Xmx represents the maximum size of total heap space 21 | 22 | -Xms2g 23 | -Xmx2g 24 | 25 | ################################################################ 26 | ## Expert settings 27 | ################################################################ 28 | ## 29 | ## All settings below this section are considered 30 | ## expert settings. Don't tamper with them unless 31 | ## you understand what you are doing 32 | ## 33 | ################################################################ 34 | 35 | ## GC configuration 36 | -XX:+UseConcMarkSweepGC 37 | -XX:CMSInitiatingOccupancyFraction=75 38 | -XX:+UseCMSInitiatingOccupancyOnly 39 | 40 | ## G1GC Configuration 41 | # NOTE: G1GC is only supported on JDK version 10 or later. 42 | # To use G1GC uncomment the lines below. 43 | # 10-:-XX:-UseConcMarkSweepGC 44 | # 10-:-XX:-UseCMSInitiatingOccupancyOnly 45 | # 10-:-XX:+UseG1GC 46 | # 10-:-XX:InitiatingHeapOccupancyPercent=75 47 | 48 | ## DNS cache policy 49 | # cache ttl in seconds for positive DNS lookups noting that this overrides the 50 | # JDK security property networkaddress.cache.ttl; set to -1 to cache forever 51 | -Des.networkaddress.cache.ttl=60 52 | # cache ttl in seconds for negative DNS lookups noting that this overrides the 53 | # JDK security property networkaddress.cache.negative ttl; set to -1 to cache 54 | # forever 55 | -Des.networkaddress.cache.negative.ttl=10 56 | 57 | ## optimizations 58 | 59 | # pre-touch memory pages used by the JVM during initialization 60 | -XX:+AlwaysPreTouch 61 | 62 | ## basic 63 | 64 | # explicitly set the stack size 65 | -Xss1m 66 | 67 | # set to headless, just in case 68 | -Djava.awt.headless=true 69 | 70 | # ensure UTF-8 encoding by default (e.g. 
filenames) 71 | -Dfile.encoding=UTF-8 72 | 73 | # use our provided JNA always versus the system one 74 | -Djna.nosys=true 75 | 76 | # turn off a JDK optimization that throws away stack traces for common 77 | # exceptions because stack traces are important for debugging 78 | -XX:-OmitStackTraceInFastThrow 79 | 80 | # flags to configure Netty 81 | -Dio.netty.noUnsafe=true 82 | -Dio.netty.noKeySetOptimization=true 83 | -Dio.netty.recycler.maxCapacityPerThread=0 84 | 85 | # log4j 2 86 | -Dlog4j.shutdownHookEnabled=false 87 | -Dlog4j2.disable.jmx=true 88 | 89 | -Djava.io.tmpdir=${ES_TMPDIR} 90 | 91 | ## heap dumps 92 | 93 | # generate a heap dump when an allocation from the Java heap fails 94 | # heap dumps are created in the working directory of the JVM 95 | -XX:+HeapDumpOnOutOfMemoryError 96 | 97 | # specify an alternative path for heap dumps; ensure the directory exists and 98 | # has sufficient space 99 | -XX:HeapDumpPath=/var/lib/elasticsearch 100 | 101 | # specify an alternative path for JVM fatal error logs 102 | -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 103 | 104 | ## JDK 8 GC logging 105 | 106 | 8:-XX:+PrintGCDetails 107 | 8:-XX:+PrintGCDateStamps 108 | 8:-XX:+PrintTenuringDistribution 109 | 8:-XX:+PrintGCApplicationStoppedTime 110 | 8:-Xloggc:/var/log/elasticsearch/gc.log 111 | 8:-XX:+UseGCLogFileRotation 112 | 8:-XX:NumberOfGCLogFiles=32 113 | 8:-XX:GCLogFileSize=64m 114 | 115 | # JDK 9+ GC logging 116 | 9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 117 | # due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise 118 | # time/date parsing will break in an incompatible way for some date patterns and locals 119 | 9-:-Djava.locale.providers=COMPAT 120 | -------------------------------------------------------------------------------- /ES-Tutorial-1/kibana.yml: -------------------------------------------------------------------------------- 1 | server.host: "0.0.0.0" 2 | elasticsearch.hosts: "http://localhost:9200" 3 | kibana.index: ".kibana" 4 | -------------------------------------------------------------------------------- /ES-Tutorial-1/tuto1: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES_VER="7.5.1" 4 | ES_URL="https://artifacts.elastic.co/downloads/elasticsearch" 5 | ES_RPM="elasticsearch-${ES_VER}-x86_64.rpm" 6 | 7 | ES_ETC="/etc/elasticsearch" 8 | ES_MYML="elasticsearch.yml" 9 | ES_ADDYML="ymladd.yml" 10 | ES_JVM="jvm.options" 11 | 12 | ES_NODEIP=$(ifconfig | grep inet | grep -vE '127.0.0.1|inet6' | awk '{print $2}') 13 | ES_NODENAME=$(hostname -s) 14 | 15 | KB_URL="https://artifacts.elastic.co/downloads/kibana" 16 | KB_RPM="kibana-${ES_VER}-x86_64.rpm" 17 | KB_ETC="/etc/kibana" 18 | KB_MYML="kibana.yml" 19 | 20 | SEQ="1st" 21 | ORG_SEQ="org_1st" 22 | 23 | git pull 24 | 25 | # ES Package Install 26 | function install_es_packages 27 | { 28 | wget 2> /dev/null 29 | if [ $? -ne 1 ]; then 30 | sudo yum -y install wget 31 | fi 32 | 33 | ls -alh /usr/local/src/elasticsearch* 2> /dev/null 34 | if [ $? -ne 0 ]; then 35 | sudo wget ${ES_URL}/${ES_RPM} -O /usr/local/src/${ES_RPM} 36 | fi 37 | 38 | rpm -ql elasticsearch > /dev/null 39 | if [ $? 
-ne 0 ]; then 40 | sudo rpm -ivh /usr/local/src/${ES_RPM} 41 | fi 42 | } 43 | 44 | # elasticsearch.yml Configure 45 | function configure_es_yaml 46 | { 47 | sudo cp -f ${ES_ETC}/${ES_MYML} ${ES_ETC}/${ES_MYML}.${ORG_SEQ} 48 | sudo cp -f ${ES_MYML}.${SEQ} ${ES_ETC}/${ES_MYML} 49 | sudo echo "### For ClusterName & Node Name" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 50 | sudo echo "cluster.name: mytuto-es" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 51 | sudo echo "node.name: master-$ES_NODENAME" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 52 | 53 | sudo cat ${ES_ADDYML}.${SEQ} | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 54 | 55 | sudo echo "### Discovery Settings" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 56 | sudo echo "discovery.seed_hosts: [ \"$ES_NODEIP:9300\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 57 | sudo echo "cluster.initial_master_nodes: [ \"$ES_NODEIP:9300\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 58 | 59 | # jvm options Configure for Heap Memory 60 | sudo cp -f ${ES_ETC}/${ES_JVM} ${ES_ETC}/${ES_JVM}.${ORG_SEQ} 61 | sudo cp -f ${ES_JVM}.${SEQ} ${ES_ETC}/${ES_JVM} 62 | 63 | } 64 | 65 | # Start Elasticsearch 66 | function start_es_process 67 | { 68 | sudo systemctl daemon-reload 69 | sudo systemctl enable elasticsearch.service 70 | sudo systemctl restart elasticsearch 71 | } 72 | 73 | # Kibana Package Install 74 | function install_kb_packages 75 | { 76 | ls -alh /usr/local/src/kibana* 2> /dev/null 77 | if [ $? -ne 0 ]; then 78 | sudo wget ${KB_URL}/${KB_RPM} -O /usr/local/src/${KB_RPM} 79 | fi 80 | 81 | rpm -ql kibana > /dev/null 82 | if [ $? -ne 0 ]; then 83 | sudo rpm -ivh /usr/local/src/${KB_RPM} 84 | fi 85 | } 86 | 87 | # kibana.yml Configure 88 | function configure_kb_yaml 89 | { 90 | sudo cp -f ${KB_ETC}/${KB_MYML} ${KB_ETC}/${KB_MYML}.${ORG_SEQ} 91 | sudo cp -f ${KB_MYML} ${KB_ETC}/${KB_MYML} 92 | } 93 | 94 | # Start Kibana 95 | function start_kb_process 96 | { 97 | sudo systemctl daemon-reload 98 | sudo systemctl enable kibana.service 99 | sudo systemctl restart kibana 100 | } 101 | 102 | function init_ec2 103 | { 104 | # remove rpm files 105 | sudo \rm -rf /usr/local/src/* 106 | 107 | # stop & disable elasticsearch & kibana daemon 108 | sudo systemctl stop elasticsearch 109 | sudo systemctl stop kibana 110 | 111 | sudo systemctl disable elasticsearch.service 112 | sudo systemctl disable kibana.service 113 | 114 | sudo systemctl daemon-reload 115 | 116 | # erase rpm packages 117 | sudo rpm -e elasticsearch-${ES_VER}-1.x86_64 118 | sudo rpm -e kibana-${ES_VER}-1.x86_64 119 | 120 | # remove package configs 121 | sudo rm -rf /etc/elasticsearch 122 | sudo rm -rf /var/lib/elasticsearch 123 | sudo rm -rf /var/log/elasticsearch 124 | sudo rm -rf /etc/kibana 125 | sudo rm -rf /var/lib/kibana 126 | sudo rm -rf /var/log/kibana 127 | 128 | } 129 | 130 | if [ -z $1 ]; then 131 | echo "##################### Menu ##############" 132 | echo " $ ./tuto1 [Command]" 133 | echo "#####################%%%%%%##############" 134 | echo " 1 : elasticsearch packages" 135 | echo " 2 : configure elasticsearch.yml & jvm.options" 136 | echo " 3 : start elasticsearch process" 137 | echo " 4 : install kibana packages" 138 | echo " 5 : configure kibana.yml" 139 | echo " 6 : start kibana process" 140 | echo " init : ec2 instance initializing" 141 | echo "#########################################"; 142 | exit 1; 143 | fi 144 | 145 | case "$1" in 146 | "1" ) install_es_packages;; 147 | "2" ) configure_es_yaml;; 148 | "3" ) start_es_process;; 149 | "4" ) 
install_kb_packages;; 150 | "5" ) configure_kb_yaml;; 151 | "6" ) start_kb_process;; 152 | "init" ) init_ec2;; 153 | *) echo "Incorrect Command" ;; 154 | esac 155 | -------------------------------------------------------------------------------- /ES-Tutorial-1/ymladd.yml.1st: -------------------------------------------------------------------------------- 1 | 2 | ### For Head 3 | http.cors.enabled: true 4 | http.cors.allow-origin: "*" 5 | 6 | ### For Response by External Request 7 | network.host: 0.0.0.0 8 | 9 | -------------------------------------------------------------------------------- /ES-Tutorial-2/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-2 2 | 3 | ElasticSearch 두 번째 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## ElasticSearch Plugin 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | Master Node 1번 장비에서 실습합니다. 12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-2 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-2]$ ./tuto2 21 | ##################### Menu ############## 22 | $ ./tuto2 [Command] 23 | #####################%%%%%%############## 24 | 1 : install head plugin 25 | 2 : start head plugin 26 | 3 : install hq plugin 27 | 4 : start hq plugin 28 | ######################################### 29 | 30 | ``` 31 | 32 | ## ELK Tutorial 2 - Head / HQ Plugin 설치 및 시작 33 | 34 | ### Head Plugin 35 | * ElasticSearch 클러스터의 노드, 인덱스, 샤드 등을 한 눈에 볼 수 있는 플러그인 36 | * 2.x 버전까지는 내부 빌트인 플러그인 형태로 존재했으니 5.x 버전부터 standalone 형태로 변경됨 37 | * 9100 포트로 실행됨 38 | 39 | ```bash 40 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-2]$ ./tuto2 1 41 | 42 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-2]$ ./tuto2 2 43 | 44 | ``` 45 | 46 | 47 | ### HQ Plugin 48 | * 클러스터의 노드나 상세 상태정보 값을 모니터링 할 수 있는 플러그인 49 | * Head Plugin 과 마찬가지로 2.x 버전까지는 내부 빌트인 플러그인 형태로 존재했으니 5.x 버전부터 standalone 형태로 변경됨 50 | * 5000 포트로 실행됨 51 | 52 | ```bash 53 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-2]$ ./tuto2 3 54 | 55 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-2]$ ./tuto2 4 56 | 57 | ``` 58 | 59 | ## Smoke Test 60 | 61 | ### Head Plugin 62 | 63 | * Web Browser 에 [http://{Head Plugin 설치한 장비의 FQDN}:9100/index.html?base_uri=http://{ES Cluster FQDN}:9200 실행](http://HeadPluginFQDN:9100/index.html?base_uri=http://ESClusterFQDN:9200) 64 | 65 | ![Optional Text](image/head.png) 66 | 67 | 68 | ### HQ Plugin 69 | * Web Browser 에 [http://{HQ Plugin 설치한 장비의 FQDN}:5000 실행](http://HQPluginFQDN:5000) 70 | 71 | ![Optional Text](image/hq.png) 72 | 73 | -------------------------------------------------------------------------------- /ES-Tutorial-2/cli/README.md: -------------------------------------------------------------------------------- 1 | # System Script 2nd 2 | 3 | ElasticSearch 두 번째 System 명령어 스크립트를 기술합니다. 4 | 5 | ## Plugin 설치하는 방법 6 | 7 | ```bash 8 | [ec2-user@ip-xxx-xxx-xxx-xxx elasticsearch]$ sudo bin/elasticsearch-plugin install analysis-nori 9 | -> Downloading analysis-nori from elastic 10 | [=================================================] 100% 11 | -> Installed analysis-nori 12 | [ec2-user@ip-xxx-xxx-xxx-xxx elasticsearch]$ sudo bin/elasticsearch-plugin list 13 | analysis-nori 14 | 15 | [ec2-user@ip-xxx-xxx-xxx-xxx elasticsearch]$ sudo bin/elasticsearch-plugin remove analysis-nori 16 | -> removing [analysis-nori]... 
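# note: plugins are loaded when the node starts, so an install or remove only takes
# effect on a running cluster after the Elasticsearch process is restarted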
17 | 18 | [ec2-user@xxx-xxx-xxx-xxx elasticsearch]$ sudo bin/elasticsearch-plugin list 19 | [ec2-user@ip-xxx-xxx-xxx-xxx elasticsearch]$ 20 | ``` 21 | 22 | ## head 플러그인 설치 23 | * ElasticSearch 클러스터의 노드, 인덱스, 샤드 등을 한 눈에 볼 수 있는 플러그인 24 | * 2.x 버전까지는 내부 빌트인 플러그인 형태로 존재했으니 5.x 버전부터 standalone 형태로 변경됨 25 | * 9100 포트로 실행됨 26 | 27 | ```bash 28 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 29 | 30 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install bzip2 epel-release 31 | 32 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install npm 33 | 34 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd /usr/local/ 35 | 36 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo git clone https://github.com/mobz/elasticsearch-head.git 37 | 38 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd elasticsearch-head/ 39 | 40 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo npm install 41 | 42 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ nohup npm run start & 43 | 44 | ``` 45 | 46 | ### HQ Plugin 설치 47 | * 클러스터의 노드나 상세 상태정보 값을 모니터링 할 수 있는 플러그인 48 | * Head Plugin 과 마찬가지로 2.x 버전까지는 내부 빌트인 플러그인 형태로 존재했으니 5.x 버전부터 standalone 형태로 변경됨 49 | * 5000 포트로 실행됨 50 | 51 | ```bash 52 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 53 | 54 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install bzip2 epel-release 55 | 56 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd /usr/local/ 57 | 58 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo git clone https://github.com/ElasticHQ/elasticsearch-HQ.git 59 | 60 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd elasticsearch-HQ/ 61 | 62 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install python34 python34-pip 63 | 64 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo pip3 install -r requirements.txt 65 | 66 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo nohup python3 application.py & 67 | ``` 68 | 69 | ## Smoke Test 70 | 71 | ### Head Plugin 72 | 73 | * Web Browser 에 http://Head Plugin 설치한 장비의 FQDN:9100/index.html?base_uri=http://{ES Cluster FQDN}:9200 실행 74 | 75 | ![Optional Text](image/head.png) 76 | 77 | ### HQ Plugin 78 | * Web Browser 에 http://FQDN:5000 실행 79 | 80 | ![Optional Text](image/hq.png) 81 | 82 | -------------------------------------------------------------------------------- /ES-Tutorial-2/cli/image/head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-2/cli/image/head.png -------------------------------------------------------------------------------- /ES-Tutorial-2/cli/image/hq.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-2/cli/image/hq.png -------------------------------------------------------------------------------- /ES-Tutorial-2/es-head.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=ES Head 3 | After=network-online.target 4 | 5 | [Service] 6 | User=ec2-user 7 | Group=ec2-user 8 | WorkingDirectory=/usr/local/elasticsearch-head/ 9 | ExecStart=/usr/bin/npm run start 10 | Restart=on-failure 11 | 12 | [Install] 13 | WantedBy=multi-user.target 14 | Alias=es-head.service 15 | -------------------------------------------------------------------------------- /ES-Tutorial-2/es-hq.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=ES HQ 3 | After=network-online.target 4 | 5 | [Service] 6 | User=root 7 | Group=root 8 | 
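# ElasticHQ is run from the repository cloned to /usr/local by tuto2; python3.4 is provided by the python34 package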
WorkingDirectory=/usr/local/elasticsearch-HQ/ 9 | ExecStart=/usr/bin/python3.4 /usr/local/elasticsearch-HQ/application.py 10 | Restart=on-failure 11 | 12 | [Install] 13 | WantedBy=multi-user.target 14 | Alias=es-hq.service 15 | -------------------------------------------------------------------------------- /ES-Tutorial-2/image/head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-2/image/head.png -------------------------------------------------------------------------------- /ES-Tutorial-2/image/hq.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-2/image/hq.png -------------------------------------------------------------------------------- /ES-Tutorial-2/query/query: -------------------------------------------------------------------------------- 1 | ## Index Settings 설정으로 인덱스 생성 2 | PUT twitter 3 | { 4 | "settings" : { 5 | "index" : { 6 | "number_of_shards" : 3, 7 | "number_of_replicas" : 1 8 | } 9 | } 10 | } 11 | 12 | ## flat_settings 형태로도 생성 가능 13 | PUT twitter 14 | { 15 | "settings" : { 16 | "index.number_of_shards" : 3, 17 | "index.number_of_replicas" : 1 18 | } 19 | } 20 | 21 | ## Index 삭제 22 | DELETE twitter 23 | 24 | ## Index Read only - 삭제는 막을 수 없지만 read only 형태로 만드는 것은 가능 25 | PUT twitter/_settings 26 | { 27 | "index.blocks.read_only_allow_delete": true 28 | } 29 | 30 | PUT twitter/_settings 31 | { 32 | "index.blocks.read_only_allow_delete": false 33 | } 34 | 35 | PUT twitter/_settings 36 | { 37 | "index.blocks.read_only_allow_delete": null 38 | } 39 | 40 | ## 해당 인덱스가 존재하는지 확인 41 | HEAD twitter 42 | 43 | ## Index Setting 정보 확인 44 | GET twitter/_settings 45 | 46 | ## Index Mapping 정보 확인 47 | # 7.x 48 | GET twitter/_mapping 49 | GET twitter/_mappings 50 | 51 | # 6.x 52 | GET twitter/_mapping 53 | GET twitter/_mappings 54 | GET twitter/_mapping/_doc 55 | GET twitter/_mappings/_doc 56 | 57 | ## Index 통계 데이터 정보 확인 58 | GET twitter/_stats 59 | 60 | ## 세그먼트 정보 확인 61 | GET twitter/_segments 62 | 63 | ## 클러스터 내 Index 상태 정보 확인 64 | GET _cat/indices?v 65 | 66 | ## 특정 인덱스만 대상으로 정보 확인 67 | GET _cat/indices/twitter?v 68 | 69 | ## PUT Method 를 통해 문서의 ID 를 지정하여 색인 70 | PUT twitter/_doc/1 71 | { 72 | "user" : "kimchy", 73 | "post_date" : "2009-11-15T14:12:12", 74 | "message" : "trying out Elasticsearch" 75 | } 76 | 77 | ## 문서 색인으로 생성된 인덱스 매핑 확인하기 78 | GET twitter/_mappings 79 | 80 | ## 7.x 버전에서 type 을 _doc 으로 주지 않은 채 PUT Method 로 문서를 색인한다면? 
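## (note: in 7.x this still works, but specifying a custom type is deprecated;
## the response carries a types-removal deprecation warning, and an index can
## only contain a single type)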
81 | PUT specifictype/mytype/1 82 | { 83 | "user" : "kimchy", 84 | "post_date" : "2009-11-15T14:12:12", 85 | "message" : "trying out Elasticsearch" 86 | } 87 | 88 | ## mytype 이 아닌 _doc 타입 이름으로도 문서 색인 가능 89 | PUT specifictype/_doc/2 90 | { 91 | "user" : "kimchy", 92 | "post_date" : "2009-11-15T14:12:12", 93 | "message" : "trying out Elasticsearch" 94 | } 95 | 96 | 97 | ## Index Create Operator, 문서 아이디가 존재하면 문서 색인 불가 98 | PUT twitter/_doc/1?op_type=create 99 | { 100 | "user" : "kimchy", 101 | "post_date" : "2009-11-15T14:12:12", 102 | "message" : "trying out Elasticsearch" 103 | } 104 | 105 | ## _create API 를 통한 동일 동작 106 | PUT twitter/_doc/1/_create 107 | { 108 | "user" : "kimchy", 109 | "post_date" : "2009-11-15T14:12:12", 110 | "message" : "trying out Elasticsearch" 111 | } 112 | 113 | ## create operator 나 API 를 활용하지 않으면 문서 내용이 update 됨 114 | PUT twitter/_doc/1 115 | { 116 | "user" : "salad", 117 | "post_date" : "2009-11-15T14:12:12", 118 | "message" : "trying out Elasticsearch" 119 | } 120 | 121 | ## POST Method 를 통한 문서 색인하기 122 | POST twitter/_doc 123 | { 124 | "user" : "kimchy", 125 | "post_date" : "2009-11-15T14:12:12", 126 | "message" : "trying out Elasticsearch" 127 | } 128 | 129 | ## 문서 아이디를 통해 문서 가져오기 130 | GET twitter/_doc/1 131 | 132 | ## 문서의 _source field 만 가져오기 133 | # 7.x 134 | GET twitter/_source/1 135 | # 6.x 136 | GET twitter/_doc/1/_source 137 | 138 | ## 문서 id 를 통해 문서 삭제하기 139 | DELETE twitter/_doc/1 140 | 141 | ## 클러스터 Health 정보 확인하기 142 | GET _cluster/health 143 | 144 | # 클러스터 Settings 전보 확인하기 145 | GET _cluster/settings 146 | -------------------------------------------------------------------------------- /ES-Tutorial-2/tools/esbot/README.md: -------------------------------------------------------------------------------- 1 | # esbot 2 | 3 | ElasticSearch 클러스터 상태 체크를 하는 스크립트를 기술합니다. 
4 | * 개발환경 - Python 2.7.10 5 | * sys, json, urllib3 site package import 6 | 7 | ## esbot 스크립트 설치하기 8 | 9 | ```bash 10 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 11 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd /usr/local/ 12 | [ec2-user@ip-xxx-xxx-xxx-xxx local]$ sudo git clone https://github.com/benjamin-btn/esbot.git 13 | [ec2-user@ip-xxx-xxx-xxx-xxx local]$ cd esbot 14 | [ec2-user@ip-xxx-xxx-xxx-xxx esbot]$ ./esbot 15 | Usage : ./esbot [options] [Cluster URL] 16 | 17 | i : ES Info 18 | ``` 19 | ## 확장 방법 20 | 21 | ```bash 22 | def es(cmd): 23 | try: 24 | header = { 'Content-Type': 'application/json' } 25 | data = {} 26 | if cmd[1] == "i": 27 | es_rtn('GET', cmd[2], data, header) 28 | else: 29 | print "incorrect commands" 30 | except IndexError: 31 | print "Usage : ./esbot [options] [Cluster URL]\n\n\ 32 | i : ES Info\n\ 33 | " 34 | ``` 35 | 36 | 기능을 추가할 때마다 Usage 에 옵션 및 설명 추가 37 | 38 | 구현부에 elif 로 분기하여 body 를 data = {} 에 정의하고 39 | 40 | HTTP Method 와 cmd[2] 를 cmd[2] + "_cat/health" 형태로 변경 41 | -------------------------------------------------------------------------------- /ES-Tutorial-2/tools/esbot/esbot: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import sys 5 | import urllib3 6 | import json 7 | 8 | def es(cmd): 9 | try: 10 | header = { 'Content-Type': 'application/json' } 11 | data = {} 12 | if cmd[1] == "i": 13 | es_rtn('GET', cmd[2], data, header) 14 | else: 15 | print "incorrect commands" 16 | except IndexError: 17 | print "Usage : ./esbot [options] [Cluster URL]\n\n\ 18 | i : ES Info\n\ 19 | " 20 | 21 | def es_rtn(method, cmd, data=None, header=None): 22 | http = urllib3.PoolManager() 23 | 24 | try: 25 | rtn = http.request(method,cmd,body=json.dumps(data),headers=header) 26 | except urllib3.exceptions.HTTPError as errh: 27 | print ("Http Error:",errh) 28 | 29 | print rtn.data 30 | 31 | if __name__ == '__main__': 32 | es(sys.argv) 33 | -------------------------------------------------------------------------------- /ES-Tutorial-2/tuto2: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | git pull 4 | 5 | # Elasticsearch Head Plugin Install 6 | function install_head 7 | { 8 | rpm -ql git > /dev/null 9 | if [ $? -ne 0 ]; then 10 | sudo yum -y install git 11 | fi 12 | 13 | ls -alh /usr/local/elasticsearch-head 2> /dev/null 14 | if [ $? -ne 0 ]; then 15 | sudo yum -y install bzip2 epel-release 16 | sudo yum -y install npm 17 | cd /usr/local/ 18 | sudo git clone https://github.com/mobz/elasticsearch-head.git 19 | cd elasticsearch-head/ 20 | sudo npm install 21 | fi 22 | 23 | } 24 | 25 | # Start Head Plugin 26 | function start_head 27 | { 28 | sudo cp es-head.service /usr/lib/systemd/system/es-head.service 29 | sudo systemctl daemon-reload 30 | sudo systemctl enable es-head.service 31 | sudo systemctl start es-head.service 32 | } 33 | 34 | # Elasticsearch HQ Plugin Install 35 | function install_hq 36 | { 37 | rpm -ql git > /dev/null 38 | if [ $? -ne 0 ]; then 39 | sudo yum -y install git 40 | fi 41 | 42 | ls -alh /usr/local/elasticsearch-HQ 2> /dev/null 43 | if [ $? 
-ne 0 ]; then 44 | sudo yum -y install bzip2 epel-release 45 | cd /usr/local/ 46 | sudo git clone https://github.com/ElasticHQ/elasticsearch-HQ.git 47 | cd elasticsearch-HQ/ 48 | sudo yum -y install python34 python34-pip 49 | sudo pip3.4 install -r requirements.txt 50 | fi 51 | 52 | } 53 | 54 | # Start Head Plugin 55 | function start_hq 56 | { 57 | sudo cp es-hq.service /usr/lib/systemd/system/es-hq.service 58 | sudo systemctl daemon-reload 59 | sudo systemctl enable es-hq.service 60 | sudo systemctl start es-hq.service 61 | } 62 | 63 | 64 | if [ -z $1 ]; then 65 | echo "##################### Menu ##############" 66 | echo " $ ./tuto2 [Command]" 67 | echo "#####################%%%%%%##############" 68 | echo " 1 : install head plugin" 69 | echo " 2 : start head plugin" 70 | echo " 3 : install hq plugin" 71 | echo " 4 : start hq plugin" 72 | echo "#########################################"; 73 | exit 1; 74 | fi 75 | 76 | case "$1" in 77 | "1" ) install_head;; 78 | "2" ) start_head;; 79 | "3" ) install_hq;; 80 | "4" ) start_hq;; 81 | *) echo "Incorrect Command" ;; 82 | esac 83 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-3-1 2 | 3 | ElasticSearch 세 번째-1 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## ElasticSearch Product 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | https://github.com/benjamin-btn/ES-Tutorial-2 12 | 13 | Master 2~3번 장비에서 실습합니다. 14 | 15 | ```bash 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-3-1 21 | 22 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ ./tuto3 23 | ##################### Menu ############## 24 | $ ./tuto3 [Command] 25 | #####################%%%%%%############## 26 | 1 : install java & elasticsearch packages 27 | 2 : configure elasticsearch.yml & jvm.options 28 | 3 : start elasticsearch process 29 | init : ec2 instance initializing 30 | ######################################### 31 | 32 | ``` 33 | 34 | ## ELK Tutorial 3 - Elasticsearch Node 추가 35 | 36 | ### Elasticsearch 37 | ##### /etc/elasticsearch/elasticsearch.yml 38 | 39 | 1) cluster.name, node.name, http.cors.enabled, http.cors.allow-origin 기존장비와 동일 설정 40 | 2) http.port, transport.tcp.port 추가 설정 41 | 3) **network.host 를 network.bind_host 와 network.publish_host 로 분리** 42 | 4) node.master, node.data role 추가 설정 43 | 44 | ```bash 45 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ ./tuto3 1 46 | 47 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ ./tuto3 2 48 | 49 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ sudo vi /etc/elasticsearch/elasticsearch.yml 50 | ### For ClusterName & Node Name 51 | cluster.name: mytuto-es 52 | node.name: master-ip-172-31-13-110 53 | 54 | ### For Head 55 | http.cors.enabled: true 56 | http.cors.allow-origin: "*" 57 | 58 | ### For Response by External Request 59 | network.bind_host: 0.0.0.0 60 | network.publish_host: {IP} 61 | 62 | ### ES Port Settings 63 | http.port: 9200 64 | transport.tcp.port: 9300 65 | 66 | ### ES Node Role Settings 67 | node.master: true 68 | node.data: true 69 | 70 | ``` 71 | 72 | 5) ~discovery.zen.minimum_master_nodes~ 7.x discovery 설정인 discovery.seed_hosts 추가 설정 73 | 6) ~discovery.zen.ping.unicast.hosts~ 7.x discovery 설정인 cluster.initial_master_nodes 는 직접 수정 필요 74 | 7) **./tuto3 1 ./tuto3 
2 실행 후 ~discovery.zen.ping.unicast.hosts~ discovery.seed_hosts, cluster.initial_master_nodes 에 기존 장비와 추가하는 노드 2대의 ip:9300 설정 필요** 75 | 76 | ```bash 77 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ sudo vi /etc/elasticsearch/elasticsearch.yml 78 | 79 | ### Discovery Settings 80 | #discovery.zen.minimum_master_nodes: 2 81 | #discovery.zen.ping.unicast.hosts: [ "{IP1}:9300", "{IP2}:9300", "{IP3}:9300", ] 82 | discovery.seed_hosts: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 83 | cluster.initial_master_nodes: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 84 | 85 | ``` 86 | 87 | 8) 클러스터에 노드 2대가 정상적으로 추가되면 기존 장비 한 대의 설정도 동일하게 수정해둡니다. **ES 프로세스를 재시작할 필요는 없습니다.** 나중에 장애 혹은 작업으로 재시작 될 때 클러스터에 수정한 정보를 바탕으로 조인됩니다. 튜토 1에서 설치한 장비에 아래 세팅을 추가해줍니다. 88 | 89 | ```bash 90 | ### For Response by External Request 91 | network.bind_host: 0.0.0.0 92 | network.publish_host: {IP} 93 | 94 | ### ES Port Settings 95 | http.port: 9200 96 | transport.tcp.port: 9300 97 | 98 | ### ES Node Role Settings 99 | node.master: true 100 | node.data: true 101 | 102 | ### Discovery Settings 103 | #discovery.zen.minimum_master_nodes: 2 104 | #discovery.zen.ping.unicast.hosts: [ "{IP1}:9300", "{IP2}:9300", "{IP3}:9300", ] 105 | discovery.seed_hosts: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 106 | cluster.initial_master_nodes: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 107 | 108 | ``` 109 | 110 | ##### /etc/elasticsearch/jvm.options 111 | 9) Xms1g, Xmx1g 를 물리 메모리의 절반으로 수정 112 | 113 | ```bash 114 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ sudo vi /etc/elasticsearch/jvm.options 115 | 116 | -Xms2g 117 | -Xmx2g 118 | 119 | ``` 120 | 121 | 10) 두 파일 모두 수정이 완료되었으면 추가할 노드 2대에서 스크립트 3번을 실행하여 ES 프로세스 시작, 클러스터에 잘 조인되는지 확인 122 | 123 | ```bash 124 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ ./tuto3 3 125 | 126 | ``` 127 | 128 | ## Smoke Test 129 | 130 | ### Elasticsearch 131 | 132 | ```bash 133 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ curl localhost:9200 134 | { 135 | "name" : "master-ip-172-31-11-101", 136 | "cluster_name" : "mytuto-es", 137 | "cluster_uuid" : "LTfRfk3KRLS31kQDROVu9A", 138 | "version" : { 139 | "number" : "7.3.0", 140 | "build_flavor" : "default", 141 | "build_type" : "rpm", 142 | "build_hash" : "a9861f4", 143 | "build_date" : "2019-01-24T11:27:09.439740Z", 144 | "build_snapshot" : false, 145 | "lucene_version" : "7.6.0", 146 | "minimum_wire_compatibility_version" : "5.6.0", 147 | "minimum_index_compatibility_version" : "5.0.0" 148 | }, 149 | "tagline" : "You Know, for Search" 150 | } 151 | 152 | ``` 153 | 154 | * Web Browser 에 [http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200](http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200) 실행 155 | 156 | ![Optional Text](image/es-head.png) 157 | 158 | ## Trouble Shooting 159 | 160 | ### Elasticsearch 161 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 162 | 163 | path.logs: /var/log/elasticsearch 로 설정되어 cluster.name 이 적용된 파일로 만들어 로깅됩니다. 164 | 165 | 위의 경우에는 /var/log/elasticsearch/mytuto-es.log 에서 확인할 수 있습니다. 
166 | 167 | ```bash 168 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-1]$ sudo vi /var/log/elasticsearch/mytuto-es.log 169 | ``` 170 | 171 | 로그도 남지 않았으면 elasticsearch user 로 elasticsearch binary 파일을 직접 실행해 로그를 살펴봅시다 172 | 173 | ```bash 174 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch 175 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. 176 | [2020-01-21T16:22:17,706][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [6gb], net total_space [9.9gb], types [rootfs] 177 | [2020-01-21T16:22:17,708][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] heap size [1.9gb], compressed ordinary object pointers [true] 178 | [2020-01-21T16:22:17,740][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [master-ip-172-31-5-69] uncaught exception in thread [main] 179 | org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 180 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.5.1.jar:7.5.1] 181 | at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.5.1.jar:7.5.1] 182 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.5.1.jar:7.5.1] 183 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 184 | at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 185 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.5.1.jar:7.5.1] 186 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.5.1.jar:7.5.1] 187 | Caused by: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 188 | at org.elasticsearch.env.NodeEnvironment.ensureNoShardData(NodeEnvironment.java:1081) ~[elasticsearch-7.5.1.jar:7.5.1] 189 | at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:325) ~[elasticsearch-7.5.1.jar:7.5.1] 190 | at org.elasticsearch.node.Node.(Node.java:273) ~[elasticsearch-7.5.1.jar:7.5.1] 191 | at org.elasticsearch.node.Node.(Node.java:253) ~[elasticsearch-7.5.1.jar:7.5.1] 192 | at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 193 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 194 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.5.1.jar:7.5.1] 195 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.5.1.jar:7.5.1] 196 | ... 6 more 197 | ``` 198 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/cli/README.md: -------------------------------------------------------------------------------- 1 | # System Script 3rd 2 | 3 | ElasticSearch 세 번째 System 명령어 스크립트를 기술합니다. 
4 | 5 | ## Elasticsearch 환경설정 - 그 외 시스템 설정 6 | 7 | ```bash 8 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/security/limits.conf 9 | 10 | elasticsearch soft nofile 65536 11 | elasticsearch hard nofile 65536 12 | 13 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/security/limits.d/20-nproc.conf 14 | 15 | elasticsearch soft noproc 4096 16 | elasticsearch hard noproc 4096 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/sysconfig/elasticsearch 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/sysctl.conf 21 | 22 | vm.max_map_count=262144 23 | 24 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo sysctl -p 25 | 26 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo swapoff -a 27 | 28 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo vi /etc/sysctl.conf 29 | 30 | vm.swappiness = 1 31 | 32 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo sysctl -p 33 | ``` 34 | 35 | ## Bulk API 36 | 37 | ```bash 38 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json 39 | 40 | ``` 41 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/elasticsearch.yml.2nd: -------------------------------------------------------------------------------- 1 | # ======================== Elasticsearch Configuration ========================= 2 | #cluster.name: my-application 3 | #node.name: node-1 4 | #node.attr.rack: r1 5 | path.data: /var/lib/elasticsearch 6 | path.logs: /var/log/elasticsearch 7 | #bootstrap.memory_lock: true 8 | #network.host: 192.168.0.1 9 | #http.port: 9200 10 | #discovery.seed_hosts: ["host1", "host2"] 11 | #cluster.initial_master_nodes: ["node-1", "node-2"] 12 | #gateway.recover_after_nodes: 3 13 | #action.destructive_requires_name: true 14 | # ============================================================================== 15 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/image/es-head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-3-1/image/es-head.png -------------------------------------------------------------------------------- /ES-Tutorial-3-1/jvm.options.2nd: -------------------------------------------------------------------------------- 1 | ## JVM configuration 2 | 3 | ################################################################ 4 | ## IMPORTANT: JVM heap size 5 | ################################################################ 6 | ## 7 | ## You should always set the min and max JVM heap 8 | ## size to the same value. For example, to set 9 | ## the heap to 4 GB, set: 10 | ## 11 | ## -Xms4g 12 | ## -Xmx4g 13 | ## 14 | ## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html 15 | ## for more information 16 | ## 17 | ################################################################ 18 | 19 | # Xms represents the initial size of total heap space 20 | # Xmx represents the maximum size of total heap space 21 | 22 | -Xms2g 23 | -Xmx2g 24 | 25 | ################################################################ 26 | ## Expert settings 27 | ################################################################ 28 | ## 29 | ## All settings below this section are considered 30 | ## expert settings. 
Don't tamper with them unless 31 | ## you understand what you are doing 32 | ## 33 | ################################################################ 34 | 35 | ## GC configuration 36 | -XX:+UseConcMarkSweepGC 37 | -XX:CMSInitiatingOccupancyFraction=75 38 | -XX:+UseCMSInitiatingOccupancyOnly 39 | 40 | ## G1GC Configuration 41 | # NOTE: G1GC is only supported on JDK version 10 or later. 42 | # To use G1GC uncomment the lines below. 43 | # 10-:-XX:-UseConcMarkSweepGC 44 | # 10-:-XX:-UseCMSInitiatingOccupancyOnly 45 | # 10-:-XX:+UseG1GC 46 | # 10-:-XX:InitiatingHeapOccupancyPercent=75 47 | 48 | ## optimizations 49 | 50 | # pre-touch memory pages used by the JVM during initialization 51 | -XX:+AlwaysPreTouch 52 | 53 | ## basic 54 | 55 | # explicitly set the stack size 56 | -Xss1m 57 | 58 | # set to headless, just in case 59 | -Djava.awt.headless=true 60 | 61 | # ensure UTF-8 encoding by default (e.g. filenames) 62 | -Dfile.encoding=UTF-8 63 | 64 | # use our provided JNA always versus the system one 65 | -Djna.nosys=true 66 | 67 | # turn off a JDK optimization that throws away stack traces for common 68 | # exceptions because stack traces are important for debugging 69 | -XX:-OmitStackTraceInFastThrow 70 | 71 | # flags to configure Netty 72 | -Dio.netty.noUnsafe=true 73 | -Dio.netty.noKeySetOptimization=true 74 | -Dio.netty.recycler.maxCapacityPerThread=0 75 | 76 | # log4j 2 77 | -Dlog4j.shutdownHookEnabled=false 78 | -Dlog4j2.disable.jmx=true 79 | 80 | -Djava.io.tmpdir=${ES_TMPDIR} 81 | 82 | ## heap dumps 83 | 84 | # generate a heap dump when an allocation from the Java heap fails 85 | # heap dumps are created in the working directory of the JVM 86 | -XX:+HeapDumpOnOutOfMemoryError 87 | 88 | # specify an alternative path for heap dumps; ensure the directory exists and 89 | # has sufficient space 90 | -XX:HeapDumpPath=/var/lib/elasticsearch 91 | 92 | # specify an alternative path for JVM fatal error logs 93 | -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 94 | 95 | ## JDK 8 GC logging 96 | 97 | 8:-XX:+PrintGCDetails 98 | 8:-XX:+PrintGCDateStamps 99 | 8:-XX:+PrintTenuringDistribution 100 | 8:-XX:+PrintGCApplicationStoppedTime 101 | 8:-Xloggc:/var/log/elasticsearch/gc.log 102 | 8:-XX:+UseGCLogFileRotation 103 | 8:-XX:NumberOfGCLogFiles=32 104 | 8:-XX:GCLogFileSize=64m 105 | 106 | # JDK 9+ GC logging 107 | 9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 108 | # due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise 109 | # time/date parsing will break in an incompatible way for some date patterns and locals 110 | 9-:-Djava.locale.providers=COMPAT 111 | 112 | # temporary workaround for C2 bug with JDK 10 on hardware with AVX-512 113 | 10-:-XX:UseAVX=2 114 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/query/query: -------------------------------------------------------------------------------- 1 | PUT _all/_settings 2 | { 3 | "settings": { 4 | "index.unassigned.node_left.delayed_timeout": "5s" 5 | } 6 | } 7 | 8 | DELETE twitter 9 | 10 | PUT twitter 11 | { 12 | "settings": { 13 | "index.number_of_shards": 6, 14 | "index.number_of_replicas": 1 15 | } 16 | } 17 | 18 | ## Rolling Restart 19 | # Routing Off 20 | PUT _cluster/settings 21 | { 22 | "transient" : { 23 | "cluster.routing.allocation.enable" : "new_primaries" 24 | } 25 | } 26 | 27 | GET _cluster/settings?flat_settings 28 | 29 | # [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo 
systemctl stop elasticsearch 30 | # [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo systemctl start elasticsearch 31 | 32 | # Routing On 33 | PUT _cluster/settings 34 | { 35 | "transient" : { 36 | "cluster.routing.allocation.enable" : null 37 | } 38 | } 39 | 40 | GET _cluster/settings?flat_settings 41 | 42 | ## Shard Reroute 43 | POST _cluster/reroute 44 | { 45 | "commands" : [ 46 | { 47 | "move" : { 48 | "index" : "twitter", 49 | "shard" : 0, 50 | "from_node" : "data-ip-172-31-11-201", 51 | "to_node" : "data-ip-172-31-3-240" 52 | } 53 | } 54 | ] 55 | } 56 | 57 | ## Shard Allocation by Watermark 58 | # Disk Threshold Enable & Watermark Setting 59 | PUT _cluster/settings?flat_settings 60 | { 61 | "transient": { 62 | "cluster.routing.allocation.disk.threshold_enabled": "true", 63 | "cluster.routing.allocation.disk.watermark.low": "85%", 64 | "cluster.routing.allocation.disk.watermark.high": "90%", 65 | "cluster.routing.allocation.disk.watermark.flood_stage": "95%" 66 | } 67 | } 68 | 69 | GET _cluster/settings?flat_settings 70 | 71 | # Read only off 72 | PUT twitter/_settings 73 | { 74 | "index.blocks.read_only_allow_delete": null 75 | } 76 | 77 | ## Dynamic Index Setting 78 | # Number Of Replicas 79 | PUT twitter/_settings 80 | { 81 | "index.number_of_replicas" : 2 82 | } 83 | 84 | GET twitter/_settings?flat_settings 85 | 86 | # Refresh Interval 87 | PUT twitter/_settings 88 | { 89 | "index.refresh_interval" : "2s" 90 | } 91 | 92 | GET twitter/_settings?flat_settings 93 | 94 | # Routing Allocation 95 | PUT twitter/_settings 96 | { 97 | "index.routing.allocation.enable" : null 98 | } 99 | 100 | GET twitter/_settings?flat_settings 101 | 102 | 103 | # Routing Rebalance 104 | PUT twitter/_settings 105 | { 106 | "index.routing.rebalance.enable" : null 107 | } 108 | 109 | # Index Dynamic Mapping 110 | PUT data/_doc/1 111 | { "count": 5 } 112 | 113 | GET data/_mapping 114 | 115 | PUT strdata/_doc/1 116 | { "stringdata": "strdata" } 117 | 118 | GET strdata/_mapping 119 | 120 | # Index Template 121 | PUT _template/mytemplate 122 | { 123 | "index_patterns": ["te*", "bar*"], 124 | "order" : 0, 125 | "settings": { 126 | "number_of_shards": 2 127 | } 128 | } 129 | 130 | GET _template/mytemplate?flat_settings 131 | 132 | POST test_template/_doc 133 | { 134 | "test": "template" 135 | } 136 | 137 | PUT _template/mytemplate_order1 138 | { 139 | "index_patterns": ["test*"], 140 | "order" : 1, 141 | "settings": { 142 | "number_of_shards": 3, 143 | "number_of_replicas": 2 144 | } 145 | } 146 | 147 | GET _template/mytemplate_order1?flat_settings 148 | 149 | POST test_template_order1/_doc 150 | { 151 | "test": "template" 152 | } 153 | 154 | PUT _template/mytemplate_order2 155 | { 156 | "index_patterns": ["test_template*"], 157 | "order" : 2, 158 | "settings": { 159 | "number_of_shards": 4 160 | } 161 | } 162 | 163 | GET _template/mytemplate_order2?flat_settings 164 | 165 | POST test_template_order2/_doc 166 | { 167 | "test": "template" 168 | } 169 | 170 | GET _cat/templates?v&s=name 171 | DELETE _template/mytemplate 172 | DELETE _template/mytemplate_order1 173 | DELETE _template/mytemplate_order2 174 | GET _cat/templates?v&s=name 175 | 176 | # Hot / Warm Data 177 | PUT _template/mytemplate 178 | { 179 | "index_patterns": ["*"], 180 | "order" : 0, 181 | "settings": { 182 | "number_of_shards": 3, 183 | "index.routing.allocation.require.box_type" : "hotdata" 184 | } 185 | } 186 | 187 | GET _template/mytemplate?flat_settings 188 | 189 | PUT test/_settings 190 | { 191 | "index.routing.allocation.require.box_type" : "warmdata" 
192 | } 193 | 194 | ## Cluster API 195 | # transient, persistent 설정해보기 196 | 197 | GET _cluster/settings?flat_settings 198 | 199 | PUT /_cluster/settings?flat_settings 200 | { 201 | "persistent" : { 202 | "discovery.zen.minimum_master_nodes" : 1 203 | }, 204 | "transient" : { 205 | "cluster.routing.allocation.enable" : "none" 206 | } 207 | } 208 | 209 | GET _cluster/settings?flat_settings 210 | 211 | # _cluster API 로 클러스터 라우팅 할당 모드를 변경 212 | 213 | PUT _cluster/settings?flat_settings 214 | { 215 | "transient" : { 216 | "cluster.routing.allocation.enable" : "none" 217 | } 218 | } 219 | 220 | GET _cluster/settings?flat_settings 221 | 222 | # _cluster API 로 운영중인 특정 노드의 샤드 제외 223 | 224 | PUT _cluster/settings?flat_settings 225 | { 226 | "transient" : { 227 | "cluster.routing.allocation.exclude._ip" : "1.1.1.1, 2.2.2.2, 3.3.3.*" 228 | } 229 | } 230 | 231 | GET _cluster/settings?flat_settings 232 | 233 | # POST _cluster/reroute 를 이용한 unassigned 샤드 강제 분배 234 | 235 | POST _cluster/reroute 236 | { 237 | "commands" : [ 238 | { 239 | "allocate_replica" : { 240 | "index" : "twitter", 241 | "shard" : 0, 242 | "node" : "data-ip-172-31-0-80" 243 | } 244 | } 245 | ] 246 | } 247 | 248 | # POST _cluster/reroute 를 이용한 샤드 할당에 실패한 샤드 강제 분배 249 | 250 | POST _cluster/reroute?retry_failed 251 | 252 | # POST _cluster/allocation/explain 을 통해 샤드가 왜 할당되지 못했는지를 확인 253 | 254 | POST _cluster/allocation/explain 255 | 256 | # _all 이나 wildcard 를 대상으로 삭제작업 방지 257 | 258 | PUT _cluster/settings?flat_settings 259 | { 260 | "transient": { 261 | "action.destructive_requires_name": true 262 | } 263 | } 264 | 265 | GET _cluster/settings?flat_settings 266 | 267 | # POST _reindex 를 이용한 재색인 268 | 269 | POST _reindex 270 | { 271 | "source": { 272 | "index": "twitter" 273 | }, 274 | "dest": { 275 | "index": "new_twitter" 276 | } 277 | } 278 | 279 | # 외부 클러스터에서 reindex 가능 280 | 281 | # sudo vi /etc/elasticsearch/elasticsearch.yml 282 | # 아래 내용 추가하려 해당 whitelist 로 부터 인덱스를 재색인 할 수 있도록 설정 283 | # reindex.remote.whitelist: "{SRC_Cluster_URL}:9200" 284 | 285 | # curl -XPOST -H 'Content-Type: application/json' http://{my_cluster_url}/_reindex 286 | #{ 287 | # "source": { 288 | # "remote": { 289 | # "host": "http://{SRC_Cluster_URL}:9200", 290 | # }, 291 | # "index": "twitter" 292 | # }, 293 | # "dest": { 294 | # "index": "re_twitter" 295 | # } 296 | #} 297 | 298 | # bulk API 299 | 300 | DELETE test 301 | POST _bulk 302 | { "index" : { "_index" : "test", "_type" : "_doc", "_id" : "1" } } 303 | { "field1" : "value1" } 304 | { "delete" : { "_index" : "test", "_type" : "_doc", "_id" : "2" } } 305 | { "create" : { "_index" : "test", "_type" : "_doc", "_id" : "3" } } 306 | { "field1" : "value3" } 307 | { "update" : {"_id" : "1", "_type" : "_doc", "_index" : "test"} } 308 | { "doc" : {"field2" : "value2"} } 309 | 310 | GET test/_search 311 | 312 | # json 문서로 bulking 하기 313 | 314 | # curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary “@accounts.json" 315 | 316 | # curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@shakespeare_6.0.json" 317 | 318 | # aliases API 319 | 320 | # test index 생성 321 | POST test1/_doc 322 | { "name": "test1" } 323 | 324 | POST test2/_doc 325 | { "name": "test1" } 326 | 327 | POST test3/_doc 328 | { "name": "test1" } 329 | 330 | POST /_aliases 331 | { 332 | "actions": [ 333 | { "add": { "index": "test1", "alias": "alias1" } } 334 | ] 335 | } 336 | 337 | POST test1/_aliases/alias_direct 338 | 339 | # 삭제 340 | POST /_aliases 341 | { 342 | "actions": 
[ 343 | { "remove": { "index": "test1", "alias": "alias1" } } 344 | ] 345 | } 346 | 347 | DELETE test1/_aliases/alias_direct 348 | 349 | 350 | # 여러 인덱스에 앨리어싱 351 | 352 | POST /_aliases 353 | { 354 | "actions": [ 355 | { "add": { "indices": ["test1", "test2"], "alias": "alias2" } } 356 | ] 357 | } 358 | 359 | # 와일드카드 이용 360 | 361 | POST /_aliases 362 | { 363 | "actions": [ 364 | { "add": { "index": "test*", "alias": "alias3" } } 365 | ] 366 | } 367 | 368 | # 세그먼트 병합하기 369 | 370 | POST /_forcemerge?max_num_segments=1 371 | 372 | 373 | # index open/close 374 | 375 | POST twitter/_close 376 | POST twitter/_open 377 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/tuto3-1: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES_VER="7.5.1" 4 | ES_URL="https://artifacts.elastic.co/downloads/elasticsearch" 5 | ES_RPM="elasticsearch-${ES_VER}-x86_64.rpm" 6 | 7 | ES_ETC="/etc/elasticsearch" 8 | ES_MYML="elasticsearch.yml" 9 | ES_ADDYML="ymladd.yml" 10 | ES_JVM="jvm.options" 11 | 12 | ES_NODEIP=$(ifconfig | grep inet | grep -vE '127.0.0.1|inet6' | awk '{print $2}') 13 | ES_NODENAME=$(hostname -s) 14 | 15 | SEQ="2nd" 16 | ORG_SEQ="org_2nd" 17 | 18 | git pull 19 | 20 | # ES Package Install 21 | function install_es_packages 22 | { 23 | wget 2> /dev/null 24 | if [ $? -ne 1 ]; then 25 | sudo yum -y install wget 26 | fi 27 | 28 | ls -alh /usr/local/src/elasticsearch* 2> /dev/null 29 | if [ $? -ne 0 ]; then 30 | sudo wget ${ES_URL}/${ES_RPM} -O /usr/local/src/${ES_RPM} 31 | fi 32 | 33 | rpm -ql elasticsearch > /dev/null 34 | if [ $? -ne 0 ]; then 35 | sudo rpm -ivh /usr/local/src/${ES_RPM} 36 | fi 37 | } 38 | 39 | # elasticsearch.yml Configure 40 | function configure_es_yaml 41 | { 42 | sudo cp -f ${ES_ETC}/${ES_MYML} ${ES_ETC}/${ES_MYML}.${ORG_SEQ} 43 | sudo cp -f ${ES_MYML}.${SEQ} ${ES_ETC}/${ES_MYML} 44 | 45 | sudo echo "### For ClusterName & Node Name" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 46 | sudo echo "cluster.name: mytuto-es" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 47 | sudo echo "node.name: master-$ES_NODENAME" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 48 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 49 | 50 | sudo echo "### For Head" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 51 | sudo echo "http.cors.enabled: true" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 52 | sudo echo "http.cors.allow-origin: \"*\"" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 53 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 54 | 55 | sudo echo "### For Response by External Request" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 56 | sudo echo "network.bind_host: 0.0.0.0" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 57 | sudo echo "network.publish_host: $ES_NODEIP" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 58 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 59 | 60 | sudo echo "### Discovery Settings" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 61 | sudo echo "discovery.seed_hosts: [ \"\", \"\", \"\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 62 | sudo echo "cluster.initial_master_nodes: [ \"\", \"\", \"\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 63 | 64 | sudo cat ${ES_ADDYML}.${SEQ} | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 65 | 66 | # jvm options Configure for Heap Memory 67 | sudo cp -f ${ES_ETC}/${ES_JVM} ${ES_ETC}/${ES_JVM}.${ORG_SEQ} 68 | sudo cp -f ${ES_JVM}.${SEQ} ${ES_ETC}/${ES_JVM} 69 | 70 | } 71 | 72 | # 
Start Elasticsearch 73 | function start_es_process 74 | { 75 | sudo cat /etc/elasticsearch/elasticsearch.yml | grep '"", "", "",' > /dev/null 76 | if [ $? -eq 0 ]; then 77 | echo "Set your unicast.hosts!! Edit your /etc/elasticsearch/elasticsearch.yml & Input your node1,2,3 in \"\"" 78 | else 79 | sudo systemctl daemon-reload 80 | sudo systemctl enable elasticsearch.service 81 | sudo systemctl restart elasticsearch 82 | fi 83 | } 84 | 85 | function init_ec2 86 | { 87 | # remove rpm files 88 | sudo \rm -rf /usr/local/src/* 89 | 90 | # stop & disable elasticsearch & kibana daemon 91 | sudo systemctl stop elasticsearch 92 | sudo systemctl disable elasticsearch.service 93 | sudo systemctl daemon-reload 94 | 95 | # erase rpm packages 96 | sudo rpm -e elasticsearch-${ES_VER}-1.x86_64 97 | 98 | # remove package configs 99 | sudo rm -rf /etc/elasticsearch 100 | sudo rm -rf /var/lib/elasticsearch 101 | sudo rm -rf /var/log/elasticsearch 102 | 103 | } 104 | 105 | if [ -z $1 ]; then 106 | echo "##################### Menu ##############" 107 | echo " $ ./tuto3-1 [Command]" 108 | echo "#####################%%%%%%##############" 109 | echo " 1 : elasticsearch packages" 110 | echo " 2 : configure elasticsearch.yml & jvm.options" 111 | echo " 3 : start elasticsearch process" 112 | echo " init : ec2 instance initializing" 113 | echo "#########################################"; 114 | exit 1; 115 | fi 116 | 117 | case "$1" in 118 | "1" ) install_es_packages;; 119 | "2" ) configure_es_yaml;; 120 | "3" ) start_es_process;; 121 | "init" ) init_ec2;; 122 | *) echo "Incorrect Command" ;; 123 | esac 124 | -------------------------------------------------------------------------------- /ES-Tutorial-3-1/ymladd.yml.2nd: -------------------------------------------------------------------------------- 1 | 2 | ### ES Port Settings 3 | http.port: 9200 4 | transport.tcp.port: 9300 5 | 6 | ### ES Node Role Settings 7 | node.master: true 8 | node.data: true 9 | 10 | -------------------------------------------------------------------------------- /ES-Tutorial-3-2/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-3-2 2 | 3 | ElasticSearch 세 번째-2 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## ElasticSearch Product 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | Data Node 1~3번 장비에서 실습합니다. 
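Before working through the steps below, it can help to confirm the current cluster membership and health from one of the existing master nodes, and to re-run the same commands after step 3 to confirm that the three data nodes have joined. This is a minimal sketch, assuming the cluster built in the previous tutorials answers on localhost:9200; the commands only read cluster state and change nothing.

```bash
# Quick sanity check before (and after) adding the new data nodes.
# Assumes an existing cluster node is reachable on localhost:9200.
curl -s 'localhost:9200/_cluster/health?pretty'                       # expect "status" : "green"
curl -s 'localhost:9200/_cat/nodes?v&h=ip,node.role,master,name'      # lists every node and its roles
```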
12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-3-2 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ ./tuto3-2 21 | ##################### Menu ############## 22 | $ ./tuto3-2 [Command] 23 | #####################%%%%%%############## 24 | 1 : install java & elasticsearch packages 25 | 2 : configure elasticsearch.yml & jvm.options 26 | 3 : start elasticsearch process 27 | init : ec2 instance initializing 28 | ######################################### 29 | 30 | ``` 31 | 32 | ## ELK Tutorial 3-2 - Elasticsearch Data Node 추가 33 | 34 | ### Elasticsearch 35 | ##### /etc/elasticsearch/elasticsearch.yml 36 | 37 | 1) cluster.name, node.name, http.cors.enabled, http.cors.allow-origin 기존장비와 동일 설정 38 | 2) network.host 를 network.bind_host 와 network.publish_host 로 분리, 기존장비와 동일 설정 39 | 3) http.port, transport.tcp.port 기존장비와 동일 설정 40 | 41 | ```bash 42 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ ./tuto3-2 1 43 | 44 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ ./tuto3-2 2 45 | 46 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ sudo vi /etc/elasticsearch/elasticsearch.yml 47 | 48 | 49 | ### For ClusterName & Node Name 50 | cluster.name: mytuto-es 51 | node.name: data-ip-172-31-13-110 52 | 53 | ### For Head 54 | http.cors.enabled: true 55 | http.cors.allow-origin: "*" 56 | 57 | ### For Response by External Request 58 | network.bind_host: 0.0.0.0 59 | network.publish_host: {IP} 60 | 61 | ### ES Port Settings 62 | http.port: 9200 63 | transport.tcp.port: 9300 64 | 65 | ``` 66 | 67 | 4) **node.master: false, node.data:true 로 role 추가 설정** 68 | 5) discovery.zen.minimum_master_nodes 기존장비와 동일 설정 69 | 6) **~discovery.zen.ping.unicast.hosts~ discovery.seed_hosts 는 직접 수정 필요, 기존에 설정한 마스터 노드 3대만 설정(데이터노드 아이피 설정 금지)** 70 | 7) **./tuto3-2 1, ./tuto3-2 2 실행 후 ~discovery.zen.ping.unicast.hosts~ discovery.seed_hosts 에 기존 장비와 추가했던 노드 3대의 ip:9300 설정 필요** 71 | 72 | ```bash 73 | ### ES Node Role Settings 74 | node.master: false 75 | node.data: true 76 | 77 | ### Discovery Settings 78 | #discovery.zen.minimum_master_nodes: 2 79 | #discovery.zen.ping.unicast.hosts: [ "{IP1}:9300", "{IP2}:9300", "{IP3}:9300", ] 80 | discovery.seed_hosts: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 81 | 82 | ``` 83 | 84 | ##### /etc/elasticsearch/jvm.options 85 | 8) Xms1g, Xmx1g 를 물리 메모리의 절반으로 수정 86 | 87 | ```bash 88 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ sudo vi /etc/elasticsearch/jvm.options 89 | 90 | 91 | -Xms2g 92 | -Xmx2g 93 | 94 | ``` 95 | 96 | 9) 두 파일 모두 수정이 완료되었으면 추가할 노드 3대에서 스크립트 3번을 실행하여 ES 프로세스 시작, 클러스터에 잘 조인되는지 확인 97 | 98 | ```bash 99 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ ./tuto3-2 3 100 | 101 | ``` 102 | 103 | 10) **클러스터에 데이터노드 3대가 정상적으로 추가되면 기존 마스터와 데이터노드 롤을 전부 갖고 있는 노드에 node.master: true, node.data:false 로 설정하여 한대씩 프로세스 재시작** 104 | 105 | **주의할 점은 마스터 노드를 재시작 할 때 반드시 클러스터가 그린이 된 이후 다음 마스터 노드를 재시작** 106 | 107 | **7.x 버전부터는 노드의 역할이 변경되면 프로세스가 올라오지 않는다. 
108 | /usr/share/elasticsearch/bin/elasticsearch-node repurpose 명령으로 샤드 내에 데이터를 clean up 해준 뒤 재시작해야한다** 109 | 110 | 111 | ```bash 112 | ### ES Node Role Settings 113 | node.master: true 114 | node.data: false 115 | 116 | 117 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3]$ systemctl restart elasticsearch.service 118 | 119 | ``` 120 | 121 | 122 | ## Smoke Test 123 | 124 | ### Elasticsearch 125 | 126 | ```bash 127 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ curl localhost:9200 128 | { 129 | "name" : "data-ip-172-31-10-90", 130 | "cluster_name" : "mytuto-es", 131 | "cluster_uuid" : "LTfRfk3KRLS31kQDROVu9A", 132 | "version" : { 133 | "number" : "7.3.0", 134 | "build_flavor" : "default", 135 | "build_type" : "rpm", 136 | "build_hash" : "a9861f4", 137 | "build_date" : "2019-01-24T11:27:09.439740Z", 138 | "build_snapshot" : false, 139 | "lucene_version" : "7.6.0", 140 | "minimum_wire_compatibility_version" : "5.6.0", 141 | "minimum_index_compatibility_version" : "5.0.0" 142 | }, 143 | "tagline" : "You Know, for Search" 144 | } 145 | 146 | ``` 147 | 148 | * Web Browser 에 [http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200](http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200) 실행 149 | 150 | ![Optional Text](image/es-head.png) 151 | 152 | ## Trouble Shooting 153 | 154 | ### Elasticsearch 155 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 156 | 157 | path.logs: /var/log/elasticsearch 로 설정되어 cluster.name 이 적용된 파일로 만들어 로깅됩니다. 158 | 159 | 위의 경우에는 /var/log/elasticsearch/mytuto-es.log 에서 확인할 수 있습니다. 160 | 161 | ```bash 162 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-3-2]$ sudo vi /var/log/elasticsearch/mytuto-es.log 163 | ``` 164 | 165 | 로그도 남지 않았으면 elasticsearch user 로 elasticsearch binary 파일을 직접 실행해 로그를 살펴봅시다 166 | 167 | ```bash 168 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch 169 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. 170 | [2020-01-21T16:22:17,706][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [6gb], net total_space [9.9gb], types [rootfs] 171 | [2020-01-21T16:22:17,708][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] heap size [1.9gb], compressed ordinary object pointers [true] 172 | [2020-01-21T16:22:17,740][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [master-ip-172-31-5-69] uncaught exception in thread [main] 173 | org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. 
Use 'elasticsearch-node repurpose' tool to clean up 174 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.5.1.jar:7.5.1] 175 | at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.5.1.jar:7.5.1] 176 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.5.1.jar:7.5.1] 177 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 178 | at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 179 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.5.1.jar:7.5.1] 180 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.5.1.jar:7.5.1] 181 | Caused by: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 182 | at org.elasticsearch.env.NodeEnvironment.ensureNoShardData(NodeEnvironment.java:1081) ~[elasticsearch-7.5.1.jar:7.5.1] 183 | at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:325) ~[elasticsearch-7.5.1.jar:7.5.1] 184 | at org.elasticsearch.node.Node.(Node.java:273) ~[elasticsearch-7.5.1.jar:7.5.1] 185 | at org.elasticsearch.node.Node.(Node.java:253) ~[elasticsearch-7.5.1.jar:7.5.1] 186 | at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 187 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 188 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.5.1.jar:7.5.1] 189 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.5.1.jar:7.5.1] 190 | ... 
6 more 191 | ``` 192 | -------------------------------------------------------------------------------- /ES-Tutorial-3-2/elasticsearch.yml.3rd: -------------------------------------------------------------------------------- 1 | # ======================== Elasticsearch Configuration ========================= 2 | #cluster.name: my-application 3 | #node.name: node-1 4 | #node.attr.rack: r1 5 | path.data: /var/lib/elasticsearch 6 | path.logs: /var/log/elasticsearch 7 | #bootstrap.memory_lock: true 8 | #network.host: 192.168.0.1 9 | #http.port: 9200 10 | #discovery.seed_hosts: ["host1", "host2"] 11 | #cluster.initial_master_nodes: ["node-1", "node-2"] 12 | #gateway.recover_after_nodes: 3 13 | #action.destructive_requires_name: true 14 | # ============================================================================== 15 | -------------------------------------------------------------------------------- /ES-Tutorial-3-2/image/es-head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-3-2/image/es-head.png -------------------------------------------------------------------------------- /ES-Tutorial-3-2/jvm.options.3rd: -------------------------------------------------------------------------------- 1 | ## JVM configuration 2 | 3 | ################################################################ 4 | ## IMPORTANT: JVM heap size 5 | ################################################################ 6 | ## 7 | ## You should always set the min and max JVM heap 8 | ## size to the same value. For example, to set 9 | ## the heap to 4 GB, set: 10 | ## 11 | ## -Xms4g 12 | ## -Xmx4g 13 | ## 14 | ## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html 15 | ## for more information 16 | ## 17 | ################################################################ 18 | 19 | # Xms represents the initial size of total heap space 20 | # Xmx represents the maximum size of total heap space 21 | 22 | -Xms2g 23 | -Xmx2g 24 | 25 | ################################################################ 26 | ## Expert settings 27 | ################################################################ 28 | ## 29 | ## All settings below this section are considered 30 | ## expert settings. Don't tamper with them unless 31 | ## you understand what you are doing 32 | ## 33 | ################################################################ 34 | 35 | ## GC configuration 36 | -XX:+UseConcMarkSweepGC 37 | -XX:CMSInitiatingOccupancyFraction=75 38 | -XX:+UseCMSInitiatingOccupancyOnly 39 | 40 | ## G1GC Configuration 41 | # NOTE: G1GC is only supported on JDK version 10 or later. 42 | # To use G1GC uncomment the lines below. 43 | # 10-:-XX:-UseConcMarkSweepGC 44 | # 10-:-XX:-UseCMSInitiatingOccupancyOnly 45 | # 10-:-XX:+UseG1GC 46 | # 10-:-XX:InitiatingHeapOccupancyPercent=75 47 | 48 | ## optimizations 49 | 50 | # pre-touch memory pages used by the JVM during initialization 51 | -XX:+AlwaysPreTouch 52 | 53 | ## basic 54 | 55 | # explicitly set the stack size 56 | -Xss1m 57 | 58 | # set to headless, just in case 59 | -Djava.awt.headless=true 60 | 61 | # ensure UTF-8 encoding by default (e.g. 
filenames) 62 | -Dfile.encoding=UTF-8 63 | 64 | # use our provided JNA always versus the system one 65 | -Djna.nosys=true 66 | 67 | # turn off a JDK optimization that throws away stack traces for common 68 | # exceptions because stack traces are important for debugging 69 | -XX:-OmitStackTraceInFastThrow 70 | 71 | # flags to configure Netty 72 | -Dio.netty.noUnsafe=true 73 | -Dio.netty.noKeySetOptimization=true 74 | -Dio.netty.recycler.maxCapacityPerThread=0 75 | 76 | # log4j 2 77 | -Dlog4j.shutdownHookEnabled=false 78 | -Dlog4j2.disable.jmx=true 79 | 80 | -Djava.io.tmpdir=${ES_TMPDIR} 81 | 82 | ## heap dumps 83 | 84 | # generate a heap dump when an allocation from the Java heap fails 85 | # heap dumps are created in the working directory of the JVM 86 | -XX:+HeapDumpOnOutOfMemoryError 87 | 88 | # specify an alternative path for heap dumps; ensure the directory exists and 89 | # has sufficient space 90 | -XX:HeapDumpPath=/var/lib/elasticsearch 91 | 92 | # specify an alternative path for JVM fatal error logs 93 | -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 94 | 95 | ## JDK 8 GC logging 96 | 97 | 8:-XX:+PrintGCDetails 98 | 8:-XX:+PrintGCDateStamps 99 | 8:-XX:+PrintTenuringDistribution 100 | 8:-XX:+PrintGCApplicationStoppedTime 101 | 8:-Xloggc:/var/log/elasticsearch/gc.log 102 | 8:-XX:+UseGCLogFileRotation 103 | 8:-XX:NumberOfGCLogFiles=32 104 | 8:-XX:GCLogFileSize=64m 105 | 106 | # JDK 9+ GC logging 107 | 9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 108 | # due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise 109 | # time/date parsing will break in an incompatible way for some date patterns and locals 110 | 9-:-Djava.locale.providers=COMPAT 111 | 112 | # temporary workaround for C2 bug with JDK 10 on hardware with AVX-512 113 | 10-:-XX:UseAVX=2 114 | -------------------------------------------------------------------------------- /ES-Tutorial-3-2/tuto3-2: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES_VER="7.5.1" 4 | ES_URL="https://artifacts.elastic.co/downloads/elasticsearch" 5 | ES_RPM="elasticsearch-${ES_VER}-x86_64.rpm" 6 | 7 | ES_ETC="/etc/elasticsearch" 8 | ES_MYML="elasticsearch.yml" 9 | ES_ADDYML="ymladd.yml" 10 | ES_JVM="jvm.options" 11 | 12 | ES_NODEIP=$(ifconfig | grep inet | grep -vE '127.0.0.1|inet6' | awk '{print $2}') 13 | ES_NODENAME=$(hostname -s) 14 | 15 | SEQ="3rd" 16 | ORG_SEQ="org_3rd" 17 | 18 | git pull 19 | 20 | # ES Package Install 21 | function install_es_packages 22 | { 23 | wget 2> /dev/null 24 | if [ $? -ne 1 ]; then 25 | sudo yum -y install wget 26 | fi 27 | 28 | ls -alh /usr/local/src/elasticsearch* 2> /dev/null 29 | if [ $? -ne 0 ]; then 30 | sudo wget ${ES_URL}/${ES_RPM} -O /usr/local/src/${ES_RPM} 31 | fi 32 | 33 | rpm -ql elasticsearch > /dev/null 34 | if [ $? 
-ne 0 ]; then 35 | sudo rpm -ivh /usr/local/src/${ES_RPM} 36 | fi 37 | } 38 | 39 | # elasticsearch.yml Configure 40 | function configure_es_yaml 41 | { 42 | sudo cp -f ${ES_ETC}/${ES_MYML} ${ES_ETC}/${ES_MYML}.${ORG_SEQ} 43 | sudo cp -f ${ES_MYML}.${SEQ} ${ES_ETC}/${ES_MYML} 44 | 45 | sudo echo "### For ClusterName & Node Name" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 46 | sudo echo "cluster.name: mytuto-es" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 47 | sudo echo "node.name: data-$ES_NODENAME" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 48 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 49 | 50 | sudo echo "### For Head" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 51 | sudo echo "http.cors.enabled: true" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 52 | sudo echo "http.cors.allow-origin: \"*\"" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 53 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 54 | 55 | sudo echo "### For Response by External Request" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 56 | sudo echo "network.bind_host: 0.0.0.0" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 57 | sudo echo "network.publish_host: $ES_NODEIP" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 58 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 59 | 60 | sudo echo "### Discovery Settings" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 61 | sudo echo "discovery.seed_hosts: [ \"\", \"\", \"\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 62 | 63 | sudo cat ${ES_ADDYML}.${SEQ} | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 64 | 65 | # jvm options Configure for Heap Memory 66 | sudo cp -f ${ES_ETC}/${ES_JVM} ${ES_ETC}/${ES_JVM}.${ORG_SEQ} 67 | sudo cp -f ${ES_JVM}.${SEQ} ${ES_ETC}/${ES_JVM} 68 | 69 | } 70 | 71 | # Start Elasticsearch 72 | function start_es_process 73 | { 74 | sudo cat /etc/elasticsearch/elasticsearch.yml | grep '"", "", "",' > /dev/null 75 | if [ $? -eq 0 ]; then 76 | echo "Set your unicast.hosts!! 
Edit your /etc/elasticsearch/elasticsearch.yml & Input your node1,2,3 in \"\"" 77 | else 78 | sudo systemctl daemon-reload 79 | sudo systemctl enable elasticsearch.service 80 | sudo systemctl restart elasticsearch 81 | fi 82 | } 83 | 84 | function init_ec2 85 | { 86 | # remove rpm files 87 | sudo \rm -rf /usr/local/src/* 88 | 89 | # stop & disable elasticsearch & kibana daemon 90 | sudo systemctl stop elasticsearch 91 | sudo systemctl disable elasticsearch.service 92 | sudo systemctl daemon-reload 93 | 94 | # erase rpm packages 95 | sudo rpm -e elasticsearch-${ES_VER}-1.x86_64 96 | 97 | # remove package configs 98 | sudo rm -rf /etc/elasticsearch 99 | sudo rm -rf /var/lib/elasticsearch 100 | sudo rm -rf /var/log/elasticsearch 101 | 102 | } 103 | 104 | if [ -z $1 ]; then 105 | echo "##################### Menu ##############" 106 | echo " $ ./tuto3-2 [Command]" 107 | echo "#####################%%%%%%##############" 108 | echo " 1 : elasticsearch packages" 109 | echo " 2 : configure elasticsearch.yml & jvm.options" 110 | echo " 3 : start elasticsearch process" 111 | echo " init : ec2 instance initializing" 112 | echo "#########################################"; 113 | exit 1; 114 | fi 115 | 116 | case "$1" in 117 | "1" ) install_es_packages;; 118 | "2" ) configure_es_yaml;; 119 | "3" ) start_es_process;; 120 | "init" ) init_ec2;; 121 | *) echo "Incorrect Command" ;; 122 | esac 123 | -------------------------------------------------------------------------------- /ES-Tutorial-3-2/ymladd.yml.3rd: -------------------------------------------------------------------------------- 1 | 2 | ### ES Port Settings 3 | http.port: 9200 4 | transport.tcp.port: 9300 5 | 6 | ### ES Node Role Settings 7 | node.master: false 8 | node.data: true 9 | 10 | -------------------------------------------------------------------------------- /ES-Tutorial-4/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-4 2 | 3 | ElasticSearch 네 번째 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## ElasticSearch Product 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | Warm Data Node 1~3번 장비에서 실습합니다. 
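The steps below separate hot and warm nodes with the node.attr.box_type attribute and then move shards between the two groups. Once they are done, the layout can be verified from any node. This is a minimal sketch, assuming the cluster answers on localhost:9200; {index} is a placeholder for whichever index you relocated (for example the firstindex used in step 14 below).

```bash
# Confirm which nodes carry the hotdata / warmdata attribute.
curl -s 'localhost:9200/_cat/nodeattrs?v&h=node,attr,value' | grep box_type

# Confirm where the shards of a relocated index actually live.
# {index} is a placeholder - substitute your own index name.
curl -s 'localhost:9200/_cat/shards/{index}?v&h=index,shard,prirep,node'
```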
12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-4 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 21 | ##################### Menu ############## 22 | $ ./tuto4 [Command] 23 | #####################%%%%%%############## 24 | 1 : install java & elasticsearch packages 25 | 2 : configure elasticsearch.yml & jvm.options 26 | 3 : start elasticsearch process 27 | 4 : hotdata/warmdata template settings 28 | 5 : all indices move to hotdata node 29 | 6 : specific index moves to warmdata node 30 | init : ec2 instance initializing 31 | ######################################### 32 | 33 | ``` 34 | 35 | ## ELK Tutorial 4 - Elasticsearch Warm Data Node 추가 36 | 37 | ### Elasticsearch 38 | ##### /etc/elasticsearch/elasticsearch.yml 39 | 40 | 1) cluster.name, node.name, http.cors.enabled, http.cors.allow-origin 기존장비와 동일 설정 41 | 2) network.host 를 network.bind_host 와 network.publish_host 로 분리, 기존장비와 동일 설정 42 | 3) http.port, transport.tcp.port 기존장비와 동일 설정 43 | 44 | ```bash 45 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 1 46 | 47 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 2 48 | 49 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ sudo vi /etc/elasticsearch/elasticsearch.yml 50 | 51 | 52 | ### For ClusterName & Node Name 53 | cluster.name: mytuto-es 54 | node.name: warm-ip-172-31-13-110 55 | 56 | ### For Head 57 | http.cors.enabled: true 58 | http.cors.allow-origin: "*" 59 | 60 | ### For Response by External Request 61 | network.bind_host: 0.0.0.0 62 | network.publish_host: {IP} 63 | 64 | ### ES Port Settings 65 | http.port: 9200 66 | transport.tcp.port: 9300 67 | 68 | ``` 69 | 70 | 4) node.master: false, node.data:true 로 role 동일 설정 71 | 5) discovery.zen.minimum_master_nodes 기존장비와 동일 설정 72 | 6) **~discovery.zen.ping.unicast.hosts~ discovery.seed_hosts, cluster.initial_master_nodes 는 직접 수정 필요, 기존에 설정한 마스터 노드 3대만 설정(데이터노드 아이피 설정 금지)** 73 | 7) **./tuto4 1 ./tuto4 2 실행 후 ~discovery.zen.ping.unicast.hosts~ discovery.seed_hosts, cluster.initial_master_nodes 에 기존 장비와 추가했던 노드 3대의 ip:9300 설정 필요** 74 | 75 | ```bash 76 | ### ES Node Role Settings 77 | node.master: false 78 | node.data: true 79 | 80 | ### Discovery Settings 81 | #discovery.zen.minimum_master_nodes: 2 82 | #discovery.zen.ping.unicast.hosts: [ "{IP1}:9300", "{IP2}:9300", "{IP3}:9300", ] 83 | discovery.seed_hosts: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 84 | cluster.initial_master_nodes: [ "{IP1}:9300", "{IP3}:9300", "{IP3}:9300", ] 85 | 86 | ``` 87 | 88 | 8) **warm data node 임을 클러스터에서 인식할 수 있도록 node.attr.box_type: warmdata 추가 설정** 89 | 90 | ```bash 91 | ### Hot / Warm Data Node Settings 92 | node.attr.box_type: warmdata 93 | 94 | ``` 95 | 96 | ##### /etc/elasticsearch/jvm.options 97 | 9) Xms1g, Xmx1g 를 물리 메모리의 절반으로 수정 98 | 99 | ```bash 100 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ sudo vi /etc/elasticsearch/jvm.options 101 | 102 | 103 | -Xms4g 104 | -Xmx4g 105 | 106 | ``` 107 | 108 | 10) 두 파일 모두 수정이 완료되었으면 추가할 노드 3대에서 스크립트 3번을 실행하여 ES 프로세스 시작, 클러스터에 잘 조인되는지 확인 109 | 110 | ```bash 111 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 3 112 | 113 | ``` 114 | 115 | 11) **클러스터에 warm data node 3대가 정상적으로 추가되면 기존 데이터노드 3대에 node.attr.box_type: hotdata 설정 후 한 대씩 프로세스 재시작, 클러스터에서 hot data node 임을 인식할 수 있도록 추가설정** 116 | 117 | ```bash 118 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4-1]$ sudo vi 
/etc/elasticsearch/elasticsearch.yml 119 | 120 | ### Hot / Warm Data Node Settings 121 | node.attr.box_type: hotdata 122 | 123 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4-1]$ systemctl restart elasticsearch.service 124 | 125 | ``` 126 | 127 | 12) 4번 스크립트 실행으로 신규 인덱스는 무조건 hot data node 로 할당될 수 있도록 템플릿 설정 128 | 129 | ```bash 130 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 4 131 | 132 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_template/estemplate -d ' 133 | { 134 | "index_patterns": ["*"], 135 | "order" : 0, 136 | "settings": { 137 | "index.routing.allocation.require.box_type" : "hotdata" 138 | } 139 | }' 140 | 141 | ``` 142 | 143 | 13) 클러스터 내 모든 인덱스에 hotdata box_type 으로 설정 144 | 145 | ```bash 146 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 5 147 | 148 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_all/_settings -d ' 149 | { 150 | "index.routing.allocation.require.box_type" : "hotdata" 151 | }' 152 | 153 | ``` 154 | 155 | 14) warm data node 로 이동이 필요한 인덱스만 명령을 통해 재할당 진행 156 | 157 | ```bash 158 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ ./tuto4 6 firstindex 159 | 160 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/$1/_settings -d ' 161 | { 162 | "index.routing.allocation.require.box_type" : "warmdata" 163 | }' 164 | 165 | ``` 166 | 167 | 이후, curator 를 통해 주기적으로 warm data 에 인덱스를 재할당 168 | 169 | ## Smoke Test 170 | 171 | ### Elasticsearch 172 | 173 | ```bash 174 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ curl localhost:9200 175 | { 176 | "name" : "warm-ip-172-31-5-89", 177 | "cluster_name" : "mytuto-es", 178 | "cluster_uuid" : "LTfRfk3KRLS31kQDROVu9A", 179 | "version" : { 180 | "number" : "7.3.0", 181 | "build_flavor" : "default", 182 | "build_type" : "rpm", 183 | "build_hash" : "a9861f4", 184 | "build_date" : "2019-01-24T11:27:09.439740Z", 185 | "build_snapshot" : false, 186 | "lucene_version" : "7.6.0", 187 | "minimum_wire_compatibility_version" : "5.6.0", 188 | "minimum_index_compatibility_version" : "5.0.0" 189 | }, 190 | "tagline" : "You Know, for Search" 191 | } 192 | 193 | ``` 194 | 195 | * Web Browser 에 [http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200](http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com:9100/index.html?base_uri=http://FQDN:9200) 실행 196 | 197 | ![Optional Text](image/es-head.png) 198 | 199 | ## Trouble Shooting 200 | 201 | ### Elasticsearch 202 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 203 | 204 | path.logs: /var/log/elasticsearch 로 설정되어 cluster.name 이 적용된 파일로 만들어 로깅됩니다. 205 | 206 | 위의 경우에는 /var/log/elasticsearch/mytuto-es.log 에서 확인할 수 있습니다. 207 | 208 | ```bash 209 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-4]$ sudo vi /var/log/elasticsearch/mytuto-es.log 210 | ``` 211 | 212 | 로그도 남지 않았으면 elasticsearch user 로 elasticsearch binary 파일을 직접 실행해 로그를 살펴봅시다 213 | 214 | ```bash 215 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-1]$ sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch 216 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. 
217 | [2020-01-21T16:22:17,706][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [6gb], net total_space [9.9gb], types [rootfs] 218 | [2020-01-21T16:22:17,708][INFO ][o.e.e.NodeEnvironment ] [master-ip-172-31-5-69] heap size [1.9gb], compressed ordinary object pointers [true] 219 | [2020-01-21T16:22:17,740][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [master-ip-172-31-5-69] uncaught exception in thread [main] 220 | org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 221 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-7.5.1.jar:7.5.1] 222 | at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-7.5.1.jar:7.5.1] 223 | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.5.1.jar:7.5.1] 224 | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 225 | at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.5.1.jar:7.5.1] 226 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115) ~[elasticsearch-7.5.1.jar:7.5.1] 227 | at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.5.1.jar:7.5.1] 228 | Caused by: java.lang.IllegalStateException: Node is started with node.data=false, but has shard data: [/var/lib/elasticsearch/nodes/0/indices/se6EbqfOQieAKo_CoASbOg/0, /var/lib/elasticsearch/nodes/0/indices/MFk046FfS0CWeXWJscO-MQ/0]. Use 'elasticsearch-node repurpose' tool to clean up 229 | at org.elasticsearch.env.NodeEnvironment.ensureNoShardData(NodeEnvironment.java:1081) ~[elasticsearch-7.5.1.jar:7.5.1] 230 | at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:325) ~[elasticsearch-7.5.1.jar:7.5.1] 231 | at org.elasticsearch.node.Node.(Node.java:273) ~[elasticsearch-7.5.1.jar:7.5.1] 232 | at org.elasticsearch.node.Node.(Node.java:253) ~[elasticsearch-7.5.1.jar:7.5.1] 233 | at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 234 | at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.5.1.jar:7.5.1] 235 | at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.5.1.jar:7.5.1] 236 | at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.5.1.jar:7.5.1] 237 | ... 
6 more 238 | ``` 239 | 240 | -------------------------------------------------------------------------------- /ES-Tutorial-4/elasticsearch.yml.4th: -------------------------------------------------------------------------------- 1 | # ======================== Elasticsearch Configuration ========================= 2 | #cluster.name: my-application 3 | #node.name: node-1 4 | #node.attr.rack: r1 5 | path.data: /var/lib/elasticsearch 6 | path.logs: /var/log/elasticsearch 7 | #bootstrap.memory_lock: true 8 | #network.host: 192.168.0.1 9 | #http.port: 9200 10 | #discovery.seed_hosts: ["host1", "host2"] 11 | #cluster.initial_master_nodes: ["node-1", "node-2"] 12 | #gateway.recover_after_nodes: 3 13 | #action.destructive_requires_name: true 14 | # ============================================================================== 15 | -------------------------------------------------------------------------------- /ES-Tutorial-4/image/es-head.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-4/image/es-head.png -------------------------------------------------------------------------------- /ES-Tutorial-4/jvm.options.4th: -------------------------------------------------------------------------------- 1 | ## JVM configuration 2 | 3 | ################################################################ 4 | ## IMPORTANT: JVM heap size 5 | ################################################################ 6 | ## 7 | ## You should always set the min and max JVM heap 8 | ## size to the same value. For example, to set 9 | ## the heap to 4 GB, set: 10 | ## 11 | ## -Xms4g 12 | ## -Xmx4g 13 | ## 14 | ## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html 15 | ## for more information 16 | ## 17 | ################################################################ 18 | 19 | # Xms represents the initial size of total heap space 20 | # Xmx represents the maximum size of total heap space 21 | 22 | -Xms2g 23 | -Xmx2g 24 | 25 | ################################################################ 26 | ## Expert settings 27 | ################################################################ 28 | ## 29 | ## All settings below this section are considered 30 | ## expert settings. Don't tamper with them unless 31 | ## you understand what you are doing 32 | ## 33 | ################################################################ 34 | 35 | ## GC configuration 36 | -XX:+UseConcMarkSweepGC 37 | -XX:CMSInitiatingOccupancyFraction=75 38 | -XX:+UseCMSInitiatingOccupancyOnly 39 | 40 | ## G1GC Configuration 41 | # NOTE: G1GC is only supported on JDK version 10 or later. 42 | # To use G1GC uncomment the lines below. 43 | # 10-:-XX:-UseConcMarkSweepGC 44 | # 10-:-XX:-UseCMSInitiatingOccupancyOnly 45 | # 10-:-XX:+UseG1GC 46 | # 10-:-XX:InitiatingHeapOccupancyPercent=75 47 | 48 | ## optimizations 49 | 50 | # pre-touch memory pages used by the JVM during initialization 51 | -XX:+AlwaysPreTouch 52 | 53 | ## basic 54 | 55 | # explicitly set the stack size 56 | -Xss1m 57 | 58 | # set to headless, just in case 59 | -Djava.awt.headless=true 60 | 61 | # ensure UTF-8 encoding by default (e.g. 
filenames) 62 | -Dfile.encoding=UTF-8 63 | 64 | # use our provided JNA always versus the system one 65 | -Djna.nosys=true 66 | 67 | # turn off a JDK optimization that throws away stack traces for common 68 | # exceptions because stack traces are important for debugging 69 | -XX:-OmitStackTraceInFastThrow 70 | 71 | # flags to configure Netty 72 | -Dio.netty.noUnsafe=true 73 | -Dio.netty.noKeySetOptimization=true 74 | -Dio.netty.recycler.maxCapacityPerThread=0 75 | 76 | # log4j 2 77 | -Dlog4j.shutdownHookEnabled=false 78 | -Dlog4j2.disable.jmx=true 79 | 80 | -Djava.io.tmpdir=${ES_TMPDIR} 81 | 82 | ## heap dumps 83 | 84 | # generate a heap dump when an allocation from the Java heap fails 85 | # heap dumps are created in the working directory of the JVM 86 | -XX:+HeapDumpOnOutOfMemoryError 87 | 88 | # specify an alternative path for heap dumps; ensure the directory exists and 89 | # has sufficient space 90 | -XX:HeapDumpPath=/var/lib/elasticsearch 91 | 92 | # specify an alternative path for JVM fatal error logs 93 | -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log 94 | 95 | ## JDK 8 GC logging 96 | 97 | 8:-XX:+PrintGCDetails 98 | 8:-XX:+PrintGCDateStamps 99 | 8:-XX:+PrintTenuringDistribution 100 | 8:-XX:+PrintGCApplicationStoppedTime 101 | 8:-Xloggc:/var/log/elasticsearch/gc.log 102 | 8:-XX:+UseGCLogFileRotation 103 | 8:-XX:NumberOfGCLogFiles=32 104 | 8:-XX:GCLogFileSize=64m 105 | 106 | # JDK 9+ GC logging 107 | 9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m 108 | # due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise 109 | # time/date parsing will break in an incompatible way for some date patterns and locals 110 | 9-:-Djava.locale.providers=COMPAT 111 | 112 | # temporary workaround for C2 bug with JDK 10 on hardware with AVX-512 113 | 10-:-XX:UseAVX=2 114 | -------------------------------------------------------------------------------- /ES-Tutorial-4/tuto4: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES_VER="7.5.1" 4 | ES_URL="https://artifacts.elastic.co/downloads/elasticsearch" 5 | ES_RPM="elasticsearch-${ES_VER}-x86_64.rpm" 6 | 7 | ES_ETC="/etc/elasticsearch" 8 | ES_MYML="elasticsearch.yml" 9 | ES_ADDYML="ymladd.yml" 10 | ES_JVM="jvm.options" 11 | 12 | ES_NODEIP=$(ifconfig | grep inet | grep -vE '127.0.0.1|inet6' | awk '{print $2}') 13 | ES_NODENAME=$(hostname -s) 14 | 15 | SEQ="4th" 16 | ORG_SEQ="org_4th" 17 | 18 | git pull 19 | 20 | # ES Package Install 21 | function install_es_packages 22 | { 23 | wget 2> /dev/null 24 | if [ $? -ne 1 ]; then 25 | sudo yum -y install wget 26 | fi 27 | 28 | ls -alh /usr/local/src/elasticsearch* 2> /dev/null 29 | if [ $? -ne 0 ]; then 30 | sudo wget ${ES_URL}/${ES_RPM} -O /usr/local/src/${ES_RPM} 31 | fi 32 | 33 | rpm -ql elasticsearch > /dev/null 34 | if [ $? 
-ne 0 ]; then 35 | sudo rpm -ivh /usr/local/src/${ES_RPM} 36 | fi 37 | } 38 | 39 | # elasticsearch.yml Configure 40 | function configure_es_yaml 41 | { 42 | sudo cp -f ${ES_ETC}/${ES_MYML} ${ES_ETC}/${ES_MYML}.${ORG_SEQ} 43 | sudo cp -f ${ES_MYML}.${SEQ} ${ES_ETC}/${ES_MYML} 44 | 45 | sudo echo "### For ClusterName & Node Name" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 46 | sudo echo "cluster.name: mytuto-es" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 47 | sudo echo "node.name: warm-$ES_NODENAME" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 48 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 49 | 50 | sudo echo "### For Head" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 51 | sudo echo "http.cors.enabled: true" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 52 | sudo echo "http.cors.allow-origin: \"*\"" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 53 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 54 | 55 | sudo echo "### For Response by External Request" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 56 | sudo echo "network.bind_host: 0.0.0.0" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 57 | sudo echo "network.publish_host: $ES_NODEIP" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 58 | sudo echo "" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 59 | 60 | sudo echo "### Discovery Settings" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 61 | sudo echo "discovery.seed_hosts: [ \"\", \"\", \"\", ]" | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 62 | 63 | sudo cat ${ES_ADDYML}.${SEQ} | sudo tee -a ${ES_ETC}/${ES_MYML} > /dev/null 64 | 65 | # jvm options Configure for Heap Memory 66 | sudo cp -f ${ES_ETC}/${ES_JVM} ${ES_ETC}/${ES_JVM}.${ORG_SEQ} 67 | sudo cp -f ${ES_JVM}.${SEQ} ${ES_ETC}/${ES_JVM} 68 | 69 | } 70 | 71 | # Start Elasticsearch 72 | function start_es_process 73 | { 74 | sudo cat /etc/elasticsearch/elasticsearch.yml | grep '"", "", "",' > /dev/null 75 | if [ $? -eq 0 ]; then 76 | echo "Set your unicast.hosts!! Edit your /etc/elasticsearch/elasticsearch.yml & Input your node1,2,3 in \"\"" 77 | else 78 | sudo systemctl daemon-reload 79 | sudo systemctl enable elasticsearch.service 80 | sudo systemctl restart elasticsearch 81 | fi 82 | } 83 | 84 | function configure_es_template 85 | { 86 | curl -s localhost:9200 > /dev/null 87 | if [ $? -ne 0 ]; then 88 | echo "Your ES Process is not working yet" 89 | else 90 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_template/estemplate -d ' 91 | { 92 | "index_patterns": ["*"], 93 | "order" : 0, 94 | "settings": { 95 | "index.routing.allocation.require.box_type" : "hotdata" 96 | } 97 | }' 98 | fi 99 | 100 | } 101 | 102 | function configure_es_hotmove 103 | { 104 | curl -s localhost:9200 > /dev/null 105 | if [ $? -ne 0 ]; then 106 | echo "Your ES Process is not working yet" 107 | else 108 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_all/_settings -d ' 109 | { 110 | "index.routing.allocation.require.box_type" : "hotdata" 111 | }' 112 | fi 113 | } 114 | 115 | function configure_es_warmmove 116 | { 117 | if [ -z $1 ]; then 118 | echo "Input Your Index Name" 119 | else 120 | curl -s localhost:9200 > /dev/null 121 | if [ $? 
-ne 0 ]; then 122 | echo "Your ES Process is not working yet" 123 | else 124 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/$1/_settings -d ' 125 | { 126 | "index.routing.allocation.require.box_type" : "warmdata" 127 | }' 128 | fi 129 | fi 130 | } 131 | 132 | function init_ec2 133 | { 134 | # remove rpm files 135 | sudo \rm -rf /usr/local/src/* 136 | 137 | # delete a es template 138 | curl -s -H 'Content-Type: application/json' -XDELETE http://localhost:9200/_template/estemplate 139 | 140 | # shard reloacate to hot data node 141 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_all/_settings -d ' 142 | { 143 | "index.routing.allocation.require.box_type" : "hotdata" 144 | }' 145 | sleep 30 146 | 147 | # stop & disable elasticsearch & kibana daemon 148 | sudo systemctl stop elasticsearch 149 | sudo systemctl disable elasticsearch.service 150 | sudo systemctl daemon-reload 151 | 152 | # erase rpm packages 153 | sudo rpm -e elasticsearch-${ES_VER}-1.x86_64 154 | 155 | # remove package configs 156 | sudo rm -rf /etc/elasticsearch 157 | sudo rm -rf /var/lib/elasticsearch 158 | sudo rm -rf /var/log/elasticsearch 159 | 160 | } 161 | 162 | 163 | if [ -z $1 ]; then 164 | echo "##################### Menu ##############" 165 | echo " $ ./tuto4 [Command]" 166 | echo "#####################%%%%%%##############" 167 | echo " 1 : elasticsearch packages" 168 | echo " 2 : configure elasticsearch.yml & jvm.options" 169 | echo " 3 : start elasticsearch process" 170 | echo " 4 : hotdata/warmdata template settings" 171 | echo " 5 : all indices move to hotdata node" 172 | echo " 6 : specific index moves to warmdata node" 173 | echo " init : ec2 instance initializing" 174 | echo "#########################################"; 175 | exit 1; 176 | fi 177 | 178 | case "$1" in 179 | "1" ) install_es_packages;; 180 | "2" ) configure_es_yaml;; 181 | "3" ) start_es_process;; 182 | "4" ) configure_es_template;; 183 | "5" ) configure_es_hotmove $2;; 184 | "6" ) configure_es_warmmove $2;; 185 | "init" ) init_ec2;; 186 | *) echo "Incorrect Command" ;; 187 | esac 188 | -------------------------------------------------------------------------------- /ES-Tutorial-4/ymladd.yml.4th: -------------------------------------------------------------------------------- 1 | 2 | ### ES Port Settings 3 | http.port: 9200 4 | transport.tcp.port: 9300 5 | 6 | ### ES Node Role Settings 7 | node.master: false 8 | node.data: true 9 | 10 | ### Hot / Warm Data Node Settings 11 | node.attr.box_type: warmdata 12 | 13 | -------------------------------------------------------------------------------- /ES-Tutorial-5/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-5 2 | 3 | ElasticSearch 다섯 번째 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## Nori Plugin 다뤄보기 8 | 9 | 이 튜토리얼에서는 rpm 로 설치된 ES 기준으로 실습합니다. 10 | 11 | Elasticsearch 가 실행중인 아무 노드에서 실습합니다. 12 | 13 | [Nori Analyzer 공식 레퍼런스 페이지](https://www.elastic.co/guide/en/elasticsearch/plugins/current/analysis-nori.html) 를 참고해주세요. 
14 | 15 | ```bash 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-5 21 | 22 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 23 | ##################### Menu ############## 24 | $ ./tuto5 [Command] 25 | #####################%%%%%%############## 26 | 1 : install nori plugin 27 | 2 : restart es process 28 | 3 : make a nori mappings 29 | 4 : standard analyzer tokens 30 | 5 : nori analyzer tokens 31 | 6 : nori analyzer indexing 32 | 7 : nori analyzer searching 33 | ######################################### 34 | 35 | ``` 36 | 37 | 1) /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-nori 로 Nori 설치 38 | 2) Nori 플러그인 설치 사항을 반영하기 위해 롤링리스타트로 전체 노드 재시작 진행 39 | 3) 재시작과 동시에 Nori 의 사전파일 userdict_ko.txt 를 /etc/elasticsearch 밑에 생성 40 | 41 | ```bash 42 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 1 43 | 44 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 2 45 | 46 | ``` 47 | 48 | 4) Nori Analyzer 를 쓰기 위한 테스트 인덱스 생성(7.x 부터는 type 제외한 채 생성) 49 | 50 | ```bash 51 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 3 52 | 53 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/noritest1 -d ' 54 | { 55 | "settings": { 56 | "index": { 57 | "analysis": { 58 | "tokenizer": { 59 | "nori_user_dict": { 60 | "type": "nori_tokenizer", 61 | "decompound_mode": "mixed", 62 | "user_dictionary": "userdict_ko.txt" 63 | } 64 | }, 65 | "analyzer": { 66 | "my_analyzer": { 67 | "type": "custom", 68 | "tokenizer": "nori_user_dict" 69 | } 70 | } 71 | } 72 | } 73 | }, 74 | "mappings": { 75 | "properties": { 76 | "norimsg": { 77 | "type": "text", 78 | "analyzer": "my_analyzer" 79 | } 80 | } 81 | } 82 | }' 83 | 84 | ``` 85 | 86 | ## Smoke Test 87 | 88 | ```bash 89 | [ec2-user@ip-172-31-4-45 ES-Tutorial-5]$ ./tuto5 4 90 | ## Standard Analyzer Tokens 91 | ## text : Winter is Coming!!! 
92 | { 93 | "tokens" : [ 94 | { 95 | "token" : "winter", 96 | "start_offset" : 0, 97 | "end_offset" : 6, 98 | "type" : "", 99 | "position" : 0 100 | }, 101 | { 102 | "token" : "is", 103 | "start_offset" : 7, 104 | "end_offset" : 9, 105 | "type" : "", 106 | "position" : 1 107 | }, 108 | { 109 | "token" : "coming", 110 | "start_offset" : 10, 111 | "end_offset" : 16, 112 | "type" : "", 113 | "position" : 2 114 | } 115 | ] 116 | } 117 | 118 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 5 119 | ## Nori Analyzer Tokens 120 | ## text : 21세기 세종계획 121 | { 122 | "tokens" : [ 123 | { 124 | "token" : "21", 125 | "start_offset" : 0, 126 | "end_offset" : 2, 127 | "type" : "word", 128 | "position" : 0 129 | }, 130 | { 131 | "token" : "세기", 132 | "start_offset" : 2, 133 | "end_offset" : 4, 134 | "type" : "word", 135 | "position" : 1 136 | }, 137 | { 138 | "token" : "세종", 139 | "start_offset" : 5, 140 | "end_offset" : 7, 141 | "type" : "word", 142 | "position" : 2 143 | }, 144 | { 145 | "token" : "계획", 146 | "start_offset" : 7, 147 | "end_offset" : 9, 148 | "type" : "word", 149 | "position" : 3 150 | } 151 | ] 152 | } 153 | 154 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 6 155 | ## Nori Analyzer Indexing 156 | ## norimsg : 21세기 세종계획 157 | {"_index":"noritest1","_type":"_doc","_id":"sz1iRGkB78Gpz5ewOq03","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":1,"_primary_term":1}[ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ 158 | 159 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ ./tuto5 7 160 | ## Nori Analyzer Searching 161 | ## norimsg : 세종 162 | { 163 | "took" : 28, 164 | "timed_out" : false, 165 | "_shards" : { 166 | "total" : 5, 167 | "successful" : 5, 168 | "skipped" : 0, 169 | "failed" : 0 170 | }, 171 | "hits" : { 172 | "total" : 2, 173 | "max_score" : 0.2876821, 174 | "hits" : [ 175 | { 176 | "_index" : "noritest1", 177 | "_type" : "_doc", 178 | "_id" : "sz1iRGkB78Gpz5ewOq03", 179 | "_score" : 0.2876821, 180 | "_source" : { 181 | "norimsg" : "21세기 세종계획" 182 | } 183 | } 184 | ] 185 | } 186 | } 187 | 188 | ``` 189 | 190 | ![Optional Text](image/noridict1.jpg) 191 | 192 | 대학 + 생선 + 교회 193 | 194 | 대학생 + 선교회 195 | 196 | ## Trouble Shooting 197 | 198 | ### Elasticsearch 199 | Smoke Test 가 진행되지 않을 때에는 elasticsearch.yml 파일에 기본으로 설정되어있는 로그 디렉토리의 로그를 살펴봅니다. 200 | 201 | path.logs: /var/log/elasticsearch 로 설정되어 cluster.name 이 적용된 파일로 만들어 로깅됩니다. 202 | 203 | 위의 경우에는 /var/log/elasticsearch/mytuto-es.log 에서 확인할 수 있습니다. 204 | 205 | ```bash 206 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-5]$ sudo vi /var/log/elasticsearch/mytuto-es.log 207 | ``` 208 | 209 | -------------------------------------------------------------------------------- /ES-Tutorial-5/image/noridict1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-5/image/noridict1.jpg -------------------------------------------------------------------------------- /ES-Tutorial-5/query/new_query: -------------------------------------------------------------------------------- 1 | ## Analysis 2 | # 애널라이저 확인 3 | 4 | # standard 5 | POST _analyze 6 | { 7 | "text": "Winter is Coming!!!" 8 | } 9 | 10 | POST _analyze 11 | { 12 | "tokenizer": "standard", 13 | "filter": [ 14 | "lowercase", 15 | "asciifolding" 16 | ], 17 | "text": "Is this déja vu?" 18 | } 19 | 20 | # whitespace 21 | POST _analyze 22 | { 23 | "analyzer": "whitespace", 24 | "text": "Winter is coming!!!" 
25 | } 26 | 27 | POST _analyze 28 | { 29 | "char_filter": [ 30 | "html_strip" 31 | ], 32 | "tokenizer": "whitespace", 33 | "filter": [ 34 | "uppercase" 35 | ], 36 | "text": "This is mixed analyzer" 37 | } 38 | 39 | # english 40 | POST _analyze 41 | { 42 | "analyzer": "english", 43 | "text": "Winter is coming!!!" 44 | } 45 | 46 | # 이미 정의되어 있는 analyzer 를 그대로 가져다가 사용하는 방식 47 | PUT index_analyzer_settings1 48 | { 49 | "settings": { 50 | "analysis": { 51 | "analyzer": { 52 | "my_analyzer": { 53 | "type": "standard", 54 | "max_token_length": 5, 55 | "stopwords": "_english_" 56 | } 57 | } 58 | } 59 | }, 60 | "mappings": { 61 | "properties": { 62 | "comment": { 63 | "type": "text", 64 | "analyzer": "my_analyzer" 65 | } 66 | } 67 | } 68 | } 69 | 70 | POST index_analyzer_settings1/_analyze 71 | { 72 | "analyzer": "my_analyzer", 73 | "text": "This is Standard Analyzer" 74 | } 75 | 76 | POST index_analyzer_settings1/_doc 77 | { 78 | "comment": "This is Standard Analyzer" 79 | } 80 | 81 | GET index_analyzer_settings1/_search 82 | { 83 | "query": { 84 | "match": { 85 | "comment": "standard" 86 | } 87 | } 88 | } 89 | 90 | # 이미 정의되어 있는 tokenizer 에 character filter, token filter 를 조합하여 사용하는 방식 91 | PUT index_analyzer_settings2 92 | { 93 | "settings": { 94 | "analysis": { 95 | "analyzer": { 96 | "my_analyzer": { 97 | "type": "custom", 98 | "char_filter": [ 99 | "html_strip" 100 | ], 101 | "tokenizer": "standard", 102 | "filter": [ 103 | "uppercase" 104 | ] 105 | } 106 | } 107 | } 108 | }, 109 | "mappings": { 110 | "properties": { 111 | "comment": { 112 | "type": "text", 113 | "analyzer": "my_analyzer" 114 | } 115 | } 116 | } 117 | } 118 | 119 | POST index_analyzer_settings2/_analyze 120 | { 121 | "analyzer": "my_analyzer", 122 | "text": "This is Standard Analyzer" 123 | } 124 | 125 | POST index_analyzer_settings2/_doc 126 | { 127 | "comment": "This is Standard Analyzer" 128 | } 129 | 130 | GET index_analyzer_settings2/_search 131 | { 132 | "query": { 133 | "match": { 134 | "comment": "standard" 135 | } 136 | } 137 | } 138 | 139 | # mixed 140 | PUT mixed_analyzer 141 | { 142 | "settings": { 143 | "analysis": { 144 | "char_filter": { 145 | "my_char_filter": { 146 | "type": "mapping", 147 | "mappings": [ 148 | ":) => _happy_", 149 | ":( => _sad_" 150 | ] 151 | } 152 | }, 153 | "tokenizer": { 154 | "my_tokenizer": { 155 | "type": "standard", 156 | "max_token_length": 20 157 | } 158 | }, 159 | "filter": { 160 | "my_stop": { 161 | "type": "stop", 162 | "stopwords": [ 163 | "and", 164 | "is", 165 | "the", 166 | "this" 167 | ] 168 | } 169 | }, 170 | "analyzer": { 171 | "my_analyzer": { 172 | "type": "custom", 173 | "char_filter": [ 174 | "html_strip", 175 | "my_char_filter" 176 | ], 177 | "tokenizer": "my_tokenizer", 178 | "filter": [ 179 | "lowercase", 180 | "my_stop" 181 | ] 182 | } 183 | } 184 | } 185 | }, 186 | "mappings": { 187 | "properties": { 188 | "comment": { 189 | "type": "text", 190 | "analyzer": "my_analyzer" 191 | } 192 | } 193 | } 194 | } 195 | 196 | POST mixed_analyzer/_analyze 197 | { 198 | "analyzer": "my_analyzer", 199 | "text": "This is My Analyzer :)" 200 | } 201 | 202 | POST mixed_analyzer/_doc 203 | { 204 | "comment": "This is My Analyzer :)" 205 | } 206 | 207 | GET mixed_analyzer/_search 208 | { 209 | "query": { 210 | "match": { 211 | "comment": "my analyzer" 212 | } 213 | } 214 | } 215 | 216 | PUT nori_sample 217 | { 218 | "settings": { 219 | "analysis": { 220 | "tokenizer": { 221 | "nori_user_dict": { 222 | "type": "nori_tokenizer", 223 | "decompound_mode": "mixed", 224 | 
"user_dictionary": "userdict_ko.txt" 225 | } 226 | }, 227 | "analyzer": { 228 | "my_analyzer": { 229 | "type": "custom", 230 | "tokenizer": "nori_user_dict" 231 | } 232 | } 233 | } 234 | } 235 | } 236 | 237 | GET shakespeare/_search 238 | # Term Vector 239 | POST /shakespeare/_doc/gPezC3ABFjKEo3V5Mu-2/_termvectors 240 | { 241 | "fields": [ 242 | "text_entry" 243 | ], 244 | "offsets": true, 245 | "payloads": true, 246 | "positions": true, 247 | "term_statistics": true, 248 | "field_statistics": true 249 | } 250 | 251 | # URI Search 252 | GET shakespeare/_search?from=0&size=100&q=text_entry:mother&sort=line_id:asc 253 | 254 | # Request Body Search 255 | POST shakespeare/_search 256 | { 257 | "query": { 258 | "term": { 259 | "play_name.keyword": "Henry IV" 260 | } 261 | } 262 | } 263 | 264 | # Pagination 265 | POST shakespeare/_search 266 | { 267 | "from": 0, 268 | "size": 2, 269 | "query": { 270 | "match": { 271 | "text_entry": "my mother" 272 | } 273 | } 274 | } 275 | 276 | # max pagination size 변경 277 | PUT shakespeare/_settings 278 | { 279 | "index.max_result_window": 10001 280 | } 281 | 282 | # sort 283 | POST shakespeare/_search 284 | { 285 | "sort": { 286 | "line_id": "desc" 287 | } 288 | } 289 | 290 | # 스코어 계산방식 291 | POST shakespeare/_search 292 | { 293 | "explain": true, 294 | "from": 0, 295 | "size": 2, 296 | "query": { 297 | "match": { 298 | "text_entry": "my mother" 299 | } 300 | } 301 | } 302 | 303 | # _source filtering 304 | POST shakespeare/_search 305 | { 306 | "_source": false, 307 | "sort": { 308 | "line_id": "desc" 309 | } 310 | } 311 | 312 | POST shakespeare/_search 313 | { 314 | "_source": [ 315 | "speaker", 316 | "text_entry" 317 | ], 318 | "sort": { 319 | "line_id": "desc" 320 | } 321 | } 322 | 323 | POST shakespeare/_search 324 | { 325 | "_source": [ 326 | "*num*" 327 | ], 328 | "sort": { 329 | "line_id": "desc" 330 | } 331 | } 332 | 333 | # highlight 334 | POST shakespeare/_search 335 | { 336 | "_source": [ 337 | "play_name", 338 | "speaker", 339 | "text_entry" 340 | ], 341 | "query": { 342 | "query_string": { 343 | "query": "henry" 344 | } 345 | }, 346 | "highlight": { 347 | "fields": { 348 | "speaker": {} 349 | } 350 | } 351 | } 352 | 353 | # match query 354 | POST shakespeare/_search 355 | { 356 | "query": { 357 | "match": { 358 | "text_entry": "my mother" 359 | } 360 | } 361 | } 362 | 363 | # boost 364 | POST shakespeare/_search 365 | { 366 | "query": { 367 | "match": { 368 | "text_entry": { 369 | "query": "my mother", 370 | "boost": 2 371 | } 372 | } 373 | } 374 | } 375 | 376 | # match_phrase 377 | POST shakespeare/_search 378 | { 379 | "query": { 380 | "match_phrase": { 381 | "text_entry": "my mother a" 382 | } 383 | } 384 | } 385 | 386 | # match_phrase_prefix 387 | POST shakespeare/_search 388 | { 389 | "query": { 390 | "match_phrase_prefix": { 391 | "text_entry": "my mother d" 392 | } 393 | } 394 | } 395 | 396 | # multi match 397 | POST multi_match_index/_doc 398 | { 399 | "name": "ElasticSearch Engine", 400 | "comment": "It's Best Solution" 401 | } 402 | 403 | POST multi_match_index/_doc 404 | { 405 | "name": "Mongo DB", 406 | "comment": "What is difference ElasticSearch Engine" 407 | } 408 | 409 | POST multi_match_index/_search 410 | { 411 | "query": { 412 | "multi_match": { 413 | "query": "Engine", 414 | "fields": [ 415 | "name", 416 | "comment" 417 | ] 418 | } 419 | } 420 | } 421 | 422 | # query_string 423 | POST shakespeare/_search 424 | { 425 | "query": { 426 | "query_string": { 427 | "query": "henry VI*", 428 | "fields": [ 429 | "text_entry", 430 | 
"play_name" 431 | ] 432 | } 433 | } 434 | } 435 | 436 | # term 437 | POST shakespeare/_search 438 | { 439 | "query": { 440 | "term": { 441 | "play_name.keyword": "Henry IV" 442 | } 443 | } 444 | } 445 | 446 | POST shakespeare/_search 447 | { 448 | "query": { 449 | "term": { 450 | "play_name.keyword": { 451 | "value": "Henry IV", 452 | "boost": 2.0 453 | } 454 | } 455 | } 456 | } 457 | 458 | # terms 1938 YORK 459 | POST shakespeare/_search 460 | { 461 | "from":0, "size": 10000, 462 | "query": { 463 | "terms": { 464 | "speaker.keyword": [ 465 | "YORK", 466 | "KING HENRY IV" 467 | ] 468 | } 469 | } 470 | } 471 | 472 | # range 473 | POST shakespeare/_search 474 | { 475 | "query": { 476 | "range": { 477 | "line_id": { 478 | "gte": 250, 479 | "lte": 259 480 | } 481 | } 482 | } 483 | } 484 | 485 | # wildcard 486 | POST shakespeare/_search 487 | { 488 | "query": { 489 | "wildcard": { 490 | "speaker.keyword": "KING HENR*" 491 | } 492 | } 493 | } 494 | 495 | # bool 496 | POST shakespeare/_search 497 | { 498 | "query": { 499 | "bool": { 500 | "must": [ 501 | { 502 | "match": { 503 | "text_entry": { 504 | "query": "my heart" 505 | } 506 | } 507 | } 508 | ], 509 | "filter": [ 510 | { 511 | "term": { 512 | "speaker.keyword": "KING HENRY IV" 513 | } 514 | }, 515 | { 516 | "range": { 517 | "line_id": { 518 | "gte": "30" 519 | } 520 | } 521 | } 522 | ] 523 | } 524 | } 525 | } 526 | 527 | # must 528 | POST shakespeare/_search 529 | { 530 | "query": { 531 | "bool": { 532 | "must": [ 533 | { 534 | "match": { 535 | "text_entry": { 536 | "query": "my mother" 537 | } 538 | } 539 | } 540 | ] 541 | } 542 | } 543 | } 544 | 545 | # filter 546 | POST shakespeare/_search 547 | { 548 | "query": { 549 | "bool": { 550 | "filter": [ 551 | { 552 | "term": { 553 | "speaker.keyword": "KING HENRY IV" 554 | } 555 | } 556 | ] 557 | } 558 | } 559 | } 560 | 561 | # should 22633 All of one nature, of one substance bred, line_id 14 562 | POST shakespeare/_search 563 | { 564 | "from": 0, "size": 10000, 565 | "query": { 566 | "bool": { 567 | "should": [ 568 | { 569 | "match": { 570 | "text_entry": { 571 | "query": "my mother", 572 | "boost": 2 573 | } 574 | } 575 | }, 576 | { 577 | "term": { 578 | "speaker.keyword": { 579 | "value": "KING HENRY IV" 580 | } 581 | } 582 | } 583 | ] 584 | } 585 | } 586 | } 587 | 588 | POST shakespeare/_search 589 | { 590 | "from": 0, "size": 1000, 591 | "query": { 592 | "bool": { 593 | "filter": [ 594 | { 595 | "term": { 596 | "speaker.keyword": { 597 | "value": "KING HENRY IV" 598 | } 599 | } 600 | } 601 | ], 602 | "should": [ 603 | { 604 | "match": { 605 | "text_entry": { 606 | "query": "her mother", 607 | "boost": 2 608 | } 609 | } 610 | } 611 | ], 612 | "minimum_should_match": 1 613 | } 614 | } 615 | } 616 | 617 | # must_not 618 | POST shakespeare/_search 619 | { 620 | "query": { 621 | "bool": { 622 | "must_not": [ 623 | { 624 | "match": { 625 | "text_entry": { 626 | "query": "my mother" 627 | } 628 | } 629 | } 630 | ] 631 | } 632 | } 633 | } 634 | 635 | # bool all 636 | POST shakespeare/_search 637 | { 638 | "query": { 639 | "bool": { 640 | "must": [ 641 | { 642 | "match": { 643 | "text_entry": "my mother" 644 | } 645 | } 646 | ], 647 | "filter": [ 648 | { 649 | "range": { 650 | "line_id": { 651 | "gte": "30" 652 | } 653 | } 654 | } 655 | ], 656 | "should": [ 657 | { 658 | "term": { 659 | "speaker.keyword": "KING HENRY IV" 660 | } 661 | }, 662 | { 663 | "term": { 664 | "speaker.keyword": "YORK" 665 | } 666 | } 667 | ], 668 | "minimum_should_match": 1, 669 | "must_not": [ 670 | { 671 | "match": { 
672 | "play_name": "Part" 673 | } 674 | } 675 | ] 676 | } 677 | } 678 | } 679 | -------------------------------------------------------------------------------- /ES-Tutorial-5/query/query: -------------------------------------------------------------------------------- 1 | ## Analysis 2 | # 애널라이저 확인 3 | 4 | # standard 5 | POST _analyze 6 | { 7 | "text": "Winter is Coming!!!" 8 | } 9 | 10 | POST _analyze 11 | { 12 | "tokenizer": "standard", 13 | "filter": [ 14 | "lowercase", 15 | "asciifolding" 16 | ], 17 | "text": "Is this déja vu?" 18 | } 19 | 20 | # whitespace 21 | POST _analyze 22 | { 23 | "analyzer": "whitespace", 24 | "text": "Winter is coming!!!" 25 | } 26 | 27 | POST _analyze 28 | { 29 | "char_filter": [ 30 | "html_strip" 31 | ], 32 | "tokenizer": "whitespace", 33 | "filter": [ 34 | "uppercase" 35 | ], 36 | "text": "This is mixed analyzer" 37 | } 38 | 39 | # english 40 | POST _analyze 41 | { 42 | "analyzer": "english", 43 | "text": "Winter is coming!!!" 44 | } 45 | 46 | # 이미 정의되어 있는 analyzer 를 그대로 가져다가 사용하는 방식 47 | PUT index_analyzer_settings1 48 | { 49 | "settings": { 50 | "analysis": { 51 | "analyzer": { 52 | "my_analyzer": { 53 | "type": "standard", 54 | "max_token_length": 5, 55 | "stopwords": "_english_" 56 | } 57 | } 58 | } 59 | }, 60 | "mappings": { 61 | "properties": { 62 | "comment": { 63 | "type": "text", 64 | "analyzer": "my_analyzer" 65 | } 66 | } 67 | } 68 | } 69 | 70 | POST index_analyzer_settings1/_analyze 71 | { 72 | "analyzer": "my_analyzer", 73 | "text": "This is Standard Analyzer" 74 | } 75 | 76 | POST index_analyzer_settings1/_doc 77 | { 78 | "comment": "This is Standard Analyzer" 79 | } 80 | 81 | GET index_analyzer_settings1/_search 82 | { 83 | "query": { 84 | "match": { 85 | "comment": "standard" 86 | } 87 | } 88 | } 89 | 90 | # 이미 정의되어 있는 tokenizer 에 character filter, token filter 를 조합하여 사용하는 방식 91 | PUT index_analyzer_settings2 92 | { 93 | "settings": { 94 | "analysis": { 95 | "analyzer": { 96 | "my_analyzer": { 97 | "type": "custom", 98 | "char_filter": [ 99 | "html_strip" 100 | ], 101 | "tokenizer": "standard", 102 | "filter": [ 103 | "uppercase" 104 | ] 105 | } 106 | } 107 | } 108 | }, 109 | "mappings": { 110 | "properties": { 111 | "comment": { 112 | "type": "text", 113 | "analyzer": "my_analyzer" 114 | } 115 | } 116 | } 117 | } 118 | 119 | POST index_analyzer_settings2/_analyze 120 | { 121 | "analyzer": "my_analyzer", 122 | "text": "This is Standard Analyzer" 123 | } 124 | 125 | POST index_analyzer_settings2/_doc 126 | { 127 | "comment": "This is Standard Analyzer" 128 | } 129 | 130 | GET index_analyzer_settings2/_search 131 | { 132 | "query": { 133 | "match": { 134 | "comment": "standard" 135 | } 136 | } 137 | } 138 | 139 | # mixed 140 | PUT mixed_analyzer 141 | { 142 | "settings": { 143 | "analysis": { 144 | "char_filter": { 145 | "my_char_filter": { 146 | "type": "mapping", 147 | "mappings": [ 148 | ":) => _happy_", 149 | ":( => _sad_" 150 | ] 151 | } 152 | }, 153 | "tokenizer": { 154 | "my_tokenizer": { 155 | "type": "standard", 156 | "max_token_length": 20 157 | } 158 | }, 159 | "filter": { 160 | "my_stop": { 161 | "type": "stop", 162 | "stopwords": [ 163 | "and", 164 | "is", 165 | "the", 166 | "this" 167 | ] 168 | } 169 | }, 170 | "analyzer": { 171 | "my_analyzer": { 172 | "type": "custom", 173 | "char_filter": [ 174 | "html_strip", 175 | "my_char_filter" 176 | ], 177 | "tokenizer": "my_tokenizer", 178 | "filter": [ 179 | "lowercase", 180 | "my_stop" 181 | ] 182 | } 183 | } 184 | } 185 | }, 186 | "mappings": { 187 | "properties": { 188 | "comment": 
{ 189 | "type": "text", 190 | "analyzer": "my_analyzer" 191 | } 192 | } 193 | } 194 | } 195 | 196 | POST mixed_analyzer/_analyze 197 | { 198 | "analyzer": "my_analyzer", 199 | "text": "This is My Analyzer :)" 200 | } 201 | 202 | POST mixed_analyzer/_doc 203 | { 204 | "comment": "This is My Analyzer :)" 205 | } 206 | 207 | GET mixed_analyzer/_search 208 | { 209 | "query": { 210 | "match": { 211 | "comment": "my analyzer" 212 | } 213 | } 214 | } 215 | 216 | DELETE nori_sample 217 | PUT nori_sample 218 | { 219 | "settings": { 220 | "analysis": { 221 | "tokenizer": { 222 | "nori_user_dict": { 223 | "type": "nori_tokenizer", 224 | "decompound_mode": "mixed", 225 | "user_dictionary": "userdict_ko.txt" 226 | } 227 | }, 228 | "analyzer": { 229 | "my_analyzer": { 230 | "type": "custom", 231 | "tokenizer": "nori_user_dict" 232 | } 233 | } 234 | } 235 | } 236 | } 237 | 238 | DELETE nori_hanja 239 | PUT nori_hanja 240 | { 241 | "settings": { 242 | "analysis": { 243 | "tokenizer": { 244 | "nori_user_tokenizer": { 245 | "type": "nori_tokenizer", 246 | "decompound_mode": "mixed", 247 | "user_dictionary": "userdict_ko.txt" 248 | } 249 | }, 250 | "analyzer": { 251 | "nori_user_analyzer": { 252 | "type": "custom", 253 | "tokenizer": "nori_user_tokenizer", 254 | "filter": [ 255 | "nori_readingform" 256 | ] 257 | } 258 | } 259 | } 260 | }, 261 | "mappings": { 262 | "properties": { 263 | "comment": { 264 | "type": "text", 265 | "analyzer": "nori_user_analyzer" 266 | } 267 | } 268 | } 269 | } 270 | 271 | POST nori_hanja/_close 272 | POST nori_hanja/_open 273 | 274 | POST nori_hanja/_analyze 275 | { 276 | "tokenizer": "nori_tokenizer", 277 | "filter": [ 278 | "nori_readingform" 279 | ], 280 | "text": "高昇日" 281 | } 282 | 283 | POST nori_hanja/_analyze 284 | { 285 | "analyzer": "nori_user_analyzer", 286 | "text": "高昇日" 287 | } 288 | 289 | POST nori_hanja/_doc 290 | { 291 | "comment": "高昇日" 292 | } 293 | 294 | GET nori_hanja/_search 295 | { 296 | "query": { 297 | "match": { 298 | "comment": "高昇日" 299 | } 300 | } 301 | } 302 | 303 | # Term Vector 304 | POST /bank/account/25/_termvectors 305 | { 306 | "fields": [ 307 | "address" 308 | ], 309 | "offsets": true, 310 | "payloads": true, 311 | "positions": true, 312 | "term_statistics": true, 313 | "field_statistics": true 314 | } 315 | 316 | # URI Search 317 | GET bank/_search?from=0&size=100&q=address:Fleet&sort=age:asc 318 | 319 | # Request Body Search 320 | GET bank/_search 321 | { 322 | "query": { 323 | "term": { 324 | "city.keyword": "Mulino" 325 | } 326 | } 327 | } 328 | 329 | # Pagination 330 | GET bank/_search 331 | { 332 | "from": 0, 333 | "size": 2, 334 | "query": { 335 | "match": { 336 | "address": "Fleet" 337 | } 338 | } 339 | } 340 | 341 | # max pagination size 변경 342 | PUT bank/_settings 343 | { 344 | "index.max_result_window": 10001 345 | } 346 | 347 | # sort 348 | GET bank/_search 349 | { 350 | "sort": { 351 | "age": "desc" 352 | } 353 | } 354 | 355 | # 스코어 계산방식 356 | GET bank/_search 357 | { 358 | "explain": true, 359 | "from": 0, 360 | "size": 2, 361 | "query": { 362 | "match": { 363 | "address": "Fleet" 364 | } 365 | } 366 | } 367 | 368 | # _source filtering 369 | GET bank/_search 370 | { 371 | "_source": false, 372 | "sort": { 373 | "age": "desc" 374 | } 375 | } 376 | 377 | GET bank/_search 378 | { 379 | "_source": [ 380 | "age", 381 | "gender" 382 | ], 383 | "sort": { 384 | "age": "desc" 385 | } 386 | } 387 | 388 | GET bank/_search 389 | { 390 | "_source": [ 391 | "*ge*" 392 | ], 393 | "sort": { 394 | "age": "desc" 395 | } 396 | } 397 | 398 | # 
highlight 399 | GET bank/_search 400 | { 401 | "_source": [ 402 | "account_number", 403 | "firstname", 404 | "lastname" 405 | ], 406 | "query": { 407 | "query_string": { 408 | "query": "Fleet" 409 | } 410 | }, 411 | "highlight": { 412 | "fields": { 413 | "address": {} 414 | } 415 | } 416 | } 417 | 418 | # match query 419 | GET bank/_search 420 | { 421 | "query": { 422 | "match": { 423 | "address": "345 Fleet" 424 | } 425 | } 426 | } 427 | 428 | # boost 429 | GET bank/_search 430 | { 431 | "query": { 432 | "match": { 433 | "address": { 434 | "query": "345 Fleet", 435 | "boost": 2 436 | } 437 | } 438 | } 439 | } 440 | 441 | # match_phrase 442 | GET bank/_search 443 | { 444 | "query": { 445 | "match_phrase": { 446 | "address": "Fleet Walk" 447 | } 448 | } 449 | } 450 | 451 | # match_phrase_prefix 452 | GET bank/_search 453 | { 454 | "query": { 455 | "match_phrase_prefix": { 456 | "address": "425 Fleet W" 457 | } 458 | } 459 | } 460 | 461 | # multi match 462 | POST multi_match_index/_doc 463 | { 464 | "name": "ElasticSearch Engine", 465 | "comment": "It's Best Solution" 466 | } 467 | 468 | POST multi_match_index/_doc 469 | { 470 | "name": "Mongo DB", 471 | "comment": "What is difference ElasticSearch Engine" 472 | } 473 | 474 | GET multi_match_index/_search 475 | { 476 | "query": { 477 | "multi_match": { 478 | "query": "Engine", 479 | "fields": [ 480 | "name", 481 | "comment" 482 | ] 483 | } 484 | } 485 | } 486 | 487 | # query_string 488 | GET bank/_search 489 | { 490 | "query": { 491 | "query_string": { 492 | "query": "Walk Flee*", 493 | "fields": [ 494 | "address", 495 | "employer" 496 | ] 497 | } 498 | } 499 | } 500 | 501 | # term 502 | GET bank/_search 503 | { 504 | "query": { 505 | "term": { 506 | "gender.keyword": "M" 507 | } 508 | } 509 | } 510 | 511 | # terms 512 | GET bank/_search 513 | { 514 | "query": { 515 | "terms": { 516 | "gender.keyword": [ 517 | "F", 518 | "M" 519 | ] 520 | } 521 | } 522 | } 523 | 524 | # range 525 | GET bank/_search 526 | { 527 | "query": { 528 | "range": { 529 | "age": { 530 | "gte": 25, 531 | "lte": 30 532 | } 533 | } 534 | } 535 | } 536 | 537 | # wildcard 538 | GET bank/_search 539 | { 540 | "query": { 541 | "wildcard": { 542 | "lastname.keyword": "D*e" 543 | } 544 | } 545 | } 546 | 547 | # bool 548 | GET bank/_search 549 | { 550 | "query": { 551 | "bool": { 552 | "must": [ 553 | { 554 | "match": { 555 | "address": { 556 | "query": "Fleet" 557 | } 558 | } 559 | } 560 | ], 561 | "filter": [ 562 | { 563 | "term": { 564 | "gender.keyword": "F" 565 | } 566 | }, 567 | { 568 | "range": { 569 | "age": { 570 | "gte": "30" 571 | } 572 | } 573 | } 574 | ] 575 | } 576 | } 577 | } 578 | 579 | # must 580 | GET bank/_search 581 | { 582 | "query": { 583 | "bool": { 584 | "must": [ 585 | { 586 | "match": { 587 | "address": { 588 | "query": "Fleet" 589 | } 590 | } 591 | } 592 | ] 593 | } 594 | } 595 | } 596 | 597 | # filter 598 | GET bank/_search 599 | { 600 | "query": { 601 | "bool": { 602 | "filter": [ 603 | { 604 | "match": { 605 | "address": { 606 | "query": "Fleet" 607 | } 608 | } 609 | } 610 | ] 611 | } 612 | } 613 | } 614 | 615 | # should 616 | GET bank/_search 617 | { 618 | "query": { 619 | "bool": { 620 | "should": [ 621 | { 622 | "match": { 623 | "state": { 624 | "query": "MI", 625 | "boost": 2 626 | } 627 | } 628 | }, 629 | { 630 | "term": { 631 | "gender.keyword": { 632 | "value": "M" 633 | } 634 | } 635 | } 636 | ], 637 | "minimum_should_match": 1 638 | } 639 | } 640 | } 641 | 642 | # must_not 643 | GET bank/_search 644 | { 645 | "query": { 646 | "bool": { 647 
| "must_not": [ 648 | { 649 | "match": { 650 | "address": { 651 | "query": "Fleet" 652 | } 653 | } 654 | } 655 | ] 656 | } 657 | } 658 | } 659 | 660 | # bool all 661 | GET bank/_search 662 | { 663 | "query": { 664 | "bool": { 665 | "must": [ 666 | { 667 | "term": { 668 | "gender.keyword": "F" 669 | } 670 | } 671 | ], 672 | "filter": [ 673 | { 674 | "range": { 675 | "age": { 676 | "lte": "30" 677 | } 678 | } 679 | } 680 | ], 681 | "should": [ 682 | { 683 | "match": { 684 | "state": { 685 | "query": "MI" 686 | } 687 | } 688 | }, 689 | { 690 | "match": { 691 | "city": { 692 | "query": "Nogal" 693 | } 694 | } 695 | } 696 | ], 697 | "must_not": [ 698 | { 699 | "match": { 700 | "address": "Hope" 701 | } 702 | } 703 | ] 704 | } 705 | } 706 | } 707 | -------------------------------------------------------------------------------- /ES-Tutorial-5/tools/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-5/tools/.DS_Store -------------------------------------------------------------------------------- /ES-Tutorial-5/tools/bulkapi/README.md: -------------------------------------------------------------------------------- 1 | # bulkapi 2 | 3 | ElasticSearch 클러스터 bulk api request 스크립트를 기술합니다. 4 | 5 | ## bulkapi 스크립트 설치하기 6 | 7 | ```bash 8 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 9 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo git clone https://github.com/benjamin-btn/bulkapi.git 10 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd bulkapi 11 | [ec2-user@ip-xxx-xxx-xxx-xxx bulkapi]$ ./bulk 12 | 13 | ``` 14 | -------------------------------------------------------------------------------- /ES-Tutorial-5/tools/bulkapi/bulk: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json > /dev/null 4 | 5 | curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json > /dev/null 6 | 7 | #curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl > /dev/null 8 | -------------------------------------------------------------------------------- /ES-Tutorial-5/tuto5: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | git pull 4 | 5 | # ES Nori Plugin Install 6 | function install_nori 7 | { 8 | sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install analysis-nori 9 | 10 | } 11 | 12 | function restart_es_process 13 | { 14 | sudo systemctl restart elasticsearch 15 | sudo touch /etc/elasticsearch/userdict_ko.txt 16 | } 17 | 18 | function configure_nori_mappings 19 | { 20 | curl -s localhost:9200 > /dev/null 21 | if [ $? 
-ne 0 ]; then 22 | echo "Your ES Process is not working yet" 23 | else 24 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/noritest1 -d ' 25 | { 26 | "settings": { 27 | "index": { 28 | "analysis": { 29 | "tokenizer": { 30 | "nori_user_dict": { 31 | "type": "nori_tokenizer", 32 | "decompound_mode": "mixed", 33 | "user_dictionary": "userdict_ko.txt" 34 | } 35 | }, 36 | "analyzer": { 37 | "my_analyzer": { 38 | "type": "custom", 39 | "tokenizer": "nori_user_dict" 40 | } 41 | } 42 | } 43 | } 44 | }, 45 | "mappings": { 46 | "properties": { 47 | "norimsg": { 48 | "type": "text", 49 | "analyzer": "my_analyzer" 50 | } 51 | } 52 | } 53 | }' 54 | fi 55 | 56 | } 57 | 58 | 59 | function get_tokens_by_standard 60 | { 61 | 62 | curl -s localhost:9200 > /dev/null 63 | if [ $? -ne 0 ]; then 64 | echo "Your ES Process is not working yet" 65 | else 66 | ## post="{ 67 | ## \"analyzer\" : \"standard\", 68 | ## \"text\": \"$1\" 69 | ## }" 70 | ## 71 | ## curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/_analyze -d "${post}" 72 | 73 | echo "## Standard Analyzer Tokens" 74 | echo "## text : Winter is Coming!!!" 75 | 76 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/_analyze?pretty -d ' 77 | { 78 | "analyzer": "standard", 79 | "text": "Winter is coming!!!" 80 | }' 81 | fi 82 | } 83 | 84 | function get_tokens_by_nori 85 | { 86 | 87 | curl -s localhost:9200 > /dev/null 88 | if [ $? -ne 0 ]; then 89 | echo "Your ES Process is not working yet" 90 | else 91 | echo "## Nori Analyzer Tokens" 92 | echo "## text : 21세기 세종계획" 93 | 94 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/noritest1/_analyze?pretty -d ' 95 | { 96 | "analyzer": "my_analyzer", 97 | "text": "21세기 세종계획" 98 | }' 99 | fi 100 | } 101 | 102 | 103 | function indexing_by_nori 104 | { 105 | 106 | curl -s localhost:9200 > /dev/null 107 | if [ $? -ne 0 ]; then 108 | echo "Your ES Process is not working yet" 109 | else 110 | echo "## Nori Analyzer Indexing" 111 | echo "## norimsg : 21세기 세종계획" 112 | 113 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/noritest1/_doc -d ' 114 | { 115 | "norimsg": "21세기 세종계획" 116 | }' 117 | fi 118 | } 119 | 120 | function searching_by_nori 121 | { 122 | 123 | curl -s localhost:9200 > /dev/null 124 | if [ $? 
-ne 0 ]; then 125 | echo "Your ES Process is not working yet" 126 | else 127 | echo "## Nori Analyzer Searching" 128 | echo "## norimsg : 세종" 129 | 130 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/noritest1/_search?pretty -d ' 131 | { 132 | "query": { 133 | "match": { 134 | "norimsg": "세종" 135 | } 136 | } 137 | }' 138 | fi 139 | } 140 | 141 | if [ -z $1 ]; then 142 | echo "##################### Menu ##############" 143 | echo " $ ./tuto5 [Command]" 144 | echo "#####################%%%%%%##############" 145 | echo " 1 : install nori plugin" 146 | echo " 2 : restart es process" 147 | echo " 3 : make a nori mappings" 148 | echo " 4 : standard analyzer tokens" 149 | echo " 5 : nori analyzer tokens" 150 | echo " 6 : nori analyzer indexing" 151 | echo " 7 : nori analyzer searching" 152 | echo "#########################################"; 153 | exit 1; 154 | fi 155 | 156 | case "$1" in 157 | "1" ) install_nori;; 158 | "2" ) restart_es_process;; 159 | "3" ) configure_nori_mappings;; 160 | "4" ) get_tokens_by_standard;; 161 | "5" ) get_tokens_by_nori;; 162 | "6" ) indexing_by_nori;; 163 | "7" ) searching_by_nori;; 164 | *) echo "Incorrect Command" ;; 165 | esac 166 | -------------------------------------------------------------------------------- /ES-Tutorial-6/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-6 2 | 3 | ElasticSearch 여섯 번째 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## InfluxDB & Grafana 설치하기 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | Elasticsearch 가 실행중인 아무 노드에서 실습합니다. 12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-6 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ ./tuto6 21 | ##################### Menu ############## 22 | $ ./tuto6 [Command] 23 | #####################%%%%%%############## 24 | 1 : install influxdb packages 25 | 2 : start influxdb process 26 | 3 : install grafana packages 27 | 4 : start grafana process 28 | ######################################### 29 | 30 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ ./tuto6 1 31 | 32 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ ./tuto6 2 33 | 34 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ ./tuto6 3 35 | 36 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ ./tuto6 4 37 | 38 | ``` 39 | 40 | ## Smoke Test 41 | 42 | ### InfluxDB 43 | 44 | ```bash 45 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-6]$ influx -precision rfc3339 46 | Connected to http://localhost:8086 version 1.x.x 47 | InfluxDB shell 1.x.x 48 | > CREATE DATABASE mdb 49 | > use mdb 50 | > select * from docs 51 | 52 | ``` 53 | 54 | ### Grafana 55 | 56 | * Web Browser 에 [http://FQDN:3000](http://FQDN:3000) 실행 57 | 58 | ![Optional Text](image/grafana.png) 59 | 60 | ## Trouble Shooting 61 | 62 | -------------------------------------------------------------------------------- /ES-Tutorial-6/image/grafana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/image/grafana.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/README.md: -------------------------------------------------------------------------------- 1 | # monworker 2 | 3 | ElasticSearch 클러스터 
상태 체크를 하는 스크립트를 기술합니다. 4 | * 개발환경 - Python 2.7.10 5 | * json, urllib3 datetime, influxdb site package import 6 | 7 | ## monworker 스크립트 설치하기 8 | 9 | ```bash 10 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 11 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ 12 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo git clone https://github.com/benjamin-btn/monworker.git 13 | 14 | ``` 15 | 16 | ## 사용 방법 17 | 18 | ```bash 19 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ while(true); do ./monworker.py;sleep 1; done 20 | 21 | ``` 22 | 23 | ```bash 24 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ ./bulk localhost:9200 25 | 26 | ``` 27 | 28 | ```bash 29 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ influx -precision rfc3339 30 | Connected to http://localhost:8086 version 1.x.x 31 | InfluxDB shell 1.x.x 32 | > use mdb 33 | > select * from docs 34 | 35 | ``` 36 | 37 | ## Grafana Settings 38 | 39 | ![Optional Text](image/grafana1.png) 40 | ![Optional Text](image/grafana2.png) 41 | ![Optional Text](image/grafana3.png) 42 | ![Optional Text](image/grafana4.png) 43 | ![Optional Text](image/grafana5.png) 44 | ![Optional Text](image/grafana6.png) 45 | ![Optional Text](image/grafana7.png) 46 | ![Optional Text](image/grafana8.png) 47 | ![Optional Text](image/grafana9.png) 48 | ![Optional Text](image/grafana10.png) 49 | ![Optional Text](image/grafana11.png) 50 | ![Optional Text](image/grafana12.png) 51 | ![Optional Text](image/grafana13.png) 52 | -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/bulk: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES=$1 4 | 5 | if [ -z $ES ]; then 6 | echo "Usage : ./bulk localhost:9200" 7 | else 8 | while(true); do 9 | curl -s -H 'Content-Type: application/x-ndjson' -XPOST "${ES}/bank/account/_bulk?pretty" --data-binary @accounts.json > /dev/null 10 | python monworker.py 11 | sleep 1 12 | 13 | done 14 | fi 15 | 16 | -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana1.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana10.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana11.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana12.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana13.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana13.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana2.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana3.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana4.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana5.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana6.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana7.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana8.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/image/grafana9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-6/tools/monitor/image/grafana9.png -------------------------------------------------------------------------------- /ES-Tutorial-6/tools/monitor/monworker.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import urllib3 5 | import json 6 | from datetime import datetime 7 | from influxdb import InfluxDBClient 8 | 9 | influxUrl = "localhost" 10 | esUrl = "http://localhost:9200" 11 | 12 | 
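# Overview: each run polls the cluster-wide /_stats API once, extracts primary/total
# document counts plus indexing, query, fetch, scroll and suggest totals and timings,
# and writes them as a single point into the 'docs' measurement of the 'mdb' database.
# Run it in a loop (see the comment at the bottom of this file) to build a time series
# that Grafana can chart from InfluxDB.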
def get_ifdb(db, host=influxUrl, port=8086, user='root', passwd='root'): 13 | client = InfluxDBClient(host, port, user, passwd, db) 14 | try: 15 | client.create_database(db) 16 | except: 17 | pass 18 | return client 19 | 20 | def my_test(ifdb): 21 | local_dt = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ') 22 | 23 | statVal = es_mon() 24 | point = [{ 25 | "measurement": 'docs', 26 | "tags": { 27 | "type": "ec2", 28 | }, 29 | "time": local_dt, 30 | "fields": { 31 | "pri_doc": statVal[0], 32 | "tot_doc": statVal[1], 33 | 34 | "pri_idx_tot": statVal[2], 35 | "pri_idx_mil": statVal[3], 36 | "tot_idx_tot": statVal[4], 37 | "tot_idx_mil": statVal[5], 38 | 39 | "pri_squery_tot": statVal[6], 40 | "pri_squery_mil": statVal[7], 41 | "tot_squery_tot": statVal[8], 42 | "tot_squery_mil": statVal[9], 43 | 44 | "pri_sfetch_tot": statVal[10], 45 | "pri_sfetch_mil": statVal[11], 46 | "tot_sfetch_tot": statVal[12], 47 | "tot_sfetch_mil": statVal[13], 48 | 49 | "pri_sscroll_tot": statVal[14], 50 | "pri_sscroll_mil": statVal[15], 51 | "tot_sscroll_tot": statVal[16], 52 | "tot_sscroll_mil": statVal[17], 53 | 54 | "pri_ssuggest_tot": statVal[18], 55 | "pri_ssuggest_mil": statVal[19], 56 | "tot_ssuggest_tot": statVal[20], 57 | "tot_ssuggest_mil": statVal[21] 58 | } 59 | }] 60 | 61 | ifdb.write_points(point) 62 | 63 | def es_mon(): 64 | http = urllib3.PoolManager() 65 | header = { 'Content-Type': 'application/json' } 66 | monCmd = esUrl + "/_stats" 67 | 68 | try: 69 | rtn = http.request("GET",monCmd,body=json.dumps(None),headers=header) 70 | except urllib3.exceptions.HTTPError as errh: 71 | print ("Http Error:",errh) 72 | 73 | monData = json.loads(rtn.data) 74 | rtnVal = [] 75 | rtnVal.append(monData['_all']['primaries']['docs']['count']) 76 | rtnVal.append(monData['_all']['total']['docs']['count']) 77 | 78 | rtnVal.append(monData['_all']['primaries']['indexing']['index_total']) 79 | rtnVal.append(monData['_all']['primaries']['indexing']['index_time_in_millis']) 80 | rtnVal.append(monData['_all']['total']['indexing']['index_total']) 81 | rtnVal.append(monData['_all']['total']['indexing']['index_time_in_millis']) 82 | 83 | rtnVal.append(monData['_all']['primaries']['search']['query_total']) 84 | rtnVal.append(monData['_all']['primaries']['search']['query_time_in_millis']) 85 | rtnVal.append(monData['_all']['total']['search']['query_total']) 86 | rtnVal.append(monData['_all']['total']['search']['query_time_in_millis']) 87 | 88 | rtnVal.append(monData['_all']['primaries']['search']['fetch_total']) 89 | rtnVal.append(monData['_all']['primaries']['search']['fetch_time_in_millis']) 90 | rtnVal.append(monData['_all']['total']['search']['fetch_total']) 91 | rtnVal.append(monData['_all']['total']['search']['fetch_time_in_millis']) 92 | 93 | rtnVal.append(monData['_all']['primaries']['search']['scroll_total']) 94 | rtnVal.append(monData['_all']['primaries']['search']['scroll_time_in_millis']) 95 | rtnVal.append(monData['_all']['total']['search']['scroll_total']) 96 | rtnVal.append(monData['_all']['total']['search']['scroll_time_in_millis']) 97 | 98 | rtnVal.append(monData['_all']['primaries']['search']['suggest_total']) 99 | rtnVal.append(monData['_all']['primaries']['search']['suggest_time_in_millis']) 100 | rtnVal.append(monData['_all']['total']['search']['suggest_total']) 101 | rtnVal.append(monData['_all']['total']['search']['suggest_time_in_millis']) 102 | 103 | return rtnVal 104 | 105 | if __name__ == '__main__': 106 | ifdb = get_ifdb(db='mdb') 107 | my_test(ifdb) 108 | 109 | # while(true); do ./monworker.py; done 
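# To check that points are arriving, query InfluxDB directly (same commands as the README):
#   influx -precision rfc3339
#   > use mdb
#   > select * from docs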
110 | -------------------------------------------------------------------------------- /ES-Tutorial-6/tuto6: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | INFLUXDB_URL="https://dl.influxdata.com/influxdb/releases" 4 | INFLUXDB_RPM="influxdb-1.7.4.x86_64.rpm" 5 | GRAFANA_URL="https://dl.grafana.com/oss/release" 6 | GRAFANA_RPM="grafana-5.4.3-1.x86_64.rpm" 7 | 8 | git pull 9 | 10 | # InfluxDB Package Install 11 | function install_if_packages 12 | { 13 | wget &> /dev/null 14 | if [ $? -ne 1 ]; then 15 | sudo yum -y install wget 16 | fi 17 | 18 | ls -alh /usr/local/src/influxdb* &> /dev/null 19 | if [ $? -ne 0 ]; then 20 | sudo wget ${INFLUXDB_URL}/${INFLUXDB_RPM} -O /usr/local/src/${INFLUXDB_RPM} 21 | fi 22 | 23 | rpm -ql influxdb > /dev/null 24 | if [ $? -ne 0 ]; then 25 | sudo rpm -ivh /usr/local/src/${INFLUXDB_RPM} 26 | fi 27 | 28 | pip &> /dev/null 29 | if [ $? -ne 0 ]; then 30 | sudo easy_install pip 31 | sudo pip install urllib3 32 | sudo pip install influxdb --ignore-installed 33 | fi 34 | 35 | } 36 | 37 | # Start InfluxDB 38 | function start_if_process 39 | { 40 | sudo systemctl daemon-reload 41 | sudo systemctl enable influxdb.service 42 | sudo systemctl restart influxdb.service 43 | } 44 | 45 | # Grafana Package Install 46 | function install_gf_packages 47 | { 48 | wget 2> /dev/null 49 | if [ $? -ne 1 ]; then 50 | sudo yum -y install wget 51 | fi 52 | 53 | ls -alh /usr/local/src/grafana* 2> /dev/null 54 | if [ $? -ne 0 ]; then 55 | sudo wget ${GRAFANA_URL}/${GRAFANA_RPM} -O /usr/local/src/${GRAFANA_RPM} 56 | fi 57 | 58 | rpm -ql grafana > /dev/null 59 | if [ $? -ne 0 ]; then 60 | sudo yum -y install fontconfig urw-fonts 61 | sudo rpm -ivh /usr/local/src/${GRAFANA_RPM} 62 | fi 63 | } 64 | 65 | # Start Grafana 66 | function start_gf_process 67 | { 68 | sudo systemctl daemon-reload 69 | sudo systemctl enable grafana-server.service 70 | sudo systemctl restart grafana-server.service 71 | } 72 | 73 | 74 | if [ -z $1 ]; then 75 | echo "##################### Menu ##############" 76 | echo " $ ./tuto6 [Command]" 77 | echo "#####################%%%%%%##############" 78 | echo " 1 : install influxdb packages" 79 | echo " 2 : start influxdb process" 80 | echo " 3 : install grafana packages" 81 | echo " 4 : start grafana process" 82 | echo "#########################################"; 83 | exit 1; 84 | fi 85 | 86 | case "$1" in 87 | "1" ) install_if_packages;; 88 | "2" ) start_if_process;; 89 | "3" ) install_gf_packages;; 90 | "4" ) start_gf_process;; 91 | *) echo "Incorrect Command" ;; 92 | esac 93 | -------------------------------------------------------------------------------- /ES-Tutorial-7/README.md: -------------------------------------------------------------------------------- 1 | # ES-Tutorial-7 2 | 3 | ElasticSearch 일곱 번째 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## Tutorial 7 설치 8 | 9 | 이 튜토리얼에서는 rpm 파일을 이용하여 실습합니다. 10 | 11 | ES-Tutorial-6 을 진행했던 장비에서 실습합니다. 
12 | 13 | ```bash 14 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ sudo yum -y install git 15 | 16 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ git clone https://github.com/benjamin-btn/ES7-Tutorial.git 17 | 18 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cd ES7-Tutorial/ES-Tutorial-7 19 | 20 | [ec2-user@ip-xxx-xxx-xxx-xxx ES-Tutorial-7]$ ./tuto7 21 | ##################### Menu ############## 22 | $ ./tuto7 [Command] 23 | #####################%%%%%%############## 24 | 1 : install curator package 25 | 2 : configure es hot template 26 | 3 : install elasticdump package 27 | 4 : install telegram package 28 | 5 : install ansible package 29 | ######################################### 30 | 31 | ``` 32 | 33 | -------------------------------------------------------------------------------- /ES-Tutorial-7/cli/README.md: -------------------------------------------------------------------------------- 1 | # System Script 7th 2 | 3 | ## Ansible 배포환경 준비하기 4 | * ansible 은 ssh 기반으로 배포대상에 접근/배포 진행 5 | + 배포서버의 ssh 공개키가 배포 대상버서의 known_hosts 에 등록되어 있어야 함 6 | 7 | * 배포서버에서 ssh 공개키 생성 8 | 9 | ```bash 10 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ ssh-keygen 11 | Generating public/private rsa key pair. 12 | Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa): 13 | Enter passphrase (empty for no passphrase): 14 | Enter same passphrase again: 15 | Your identification has been saved in /home/ec2-user/.ssh/id_rsa. 16 | Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub. 17 | The key fingerprint is: 18 | SHA256:MZdDy8G/zSRSkI2jLehAoIoaW2yFEHYaftZXigdKnxY ec2-user@ip-xxx-xxx-xxx-xxx.ap-southeast-1.compute.internal 19 | The key's randomart image is: 20 | +---[RSA 2048]----+ 21 | |o+.+ E o+= | 22 | |oo=.= = oo=+o | 23 | |.o.=.* =oo*+ | 24 | |o.o.o + o+o.o . | 25 | |+ + o S. . * | 26 | |.= . . 
o | 27 | |o | 28 | | | 29 | | | 30 | +----[SHA256]-----+ 31 | 32 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ ls ~/.ssh 33 | authorized_keys id_rsa id_rsa.pub known_hosts 34 | 35 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ cat ~/.ssh/id_rsa.pub 36 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClJdQ0NStPDIyJo+VHMSDAyvvZ/ASOXQLoz3z7HU+ZN0bBS6bQiY0Ve4rbzGJ6ZXBRshrKh8DiwzAIVcKfLm0ijdTX43ZL/jhz2f8zuLKO6hh5pW9pEoD+TMOX3mwLmEFqTcmgWnv/e0gLJWVNk8mLUJqDfC23c1NUnWYyGvmcK2H8ypS330lk5KugvQkSX6FpbbrWt3M61N2xH55amHnl1nuO8mwlcqLMdsI+RfFm+9RNPC/vFv3fGFTz2i1sAkr3UCe19sxLMbh1l4SPjlqflTmc5/PJs9iDWAI8Fe7DXOxB5krAkAdKM52oh49DazLB3l+WAB6sRAQM+276L ec2-user@ip-xxx-xxx-xxx-xxx.ap-southeast-1.compute.internal 37 | 38 | ``` 39 | 40 | * 배포 대상서버에 배포서버 공개키를 등록하여 배포서버가 인증없이 배포 대상서버에 접근할 수 있도록 설정 41 | 42 | ```bash 43 | [ec2-user@ip-xxx-xxx-xxx-xxx ~]$ echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClJdQ0NStPDIyJo+VHMSDAyvvZ/ASOXQLoz3z7HU+ZN0bBS6bQiY0Ve4rbzGJ6ZXBRshrKh8DiwzAIVcKfLm0ijdTX43ZL/jhz2f8zuLKO6hh5pW9pEoD+TMOX3mwLmEFqTcmgWnv/e0gLJWVNk8mLUJqDfC23c1NUnWYyGvmcK2H8ypS330lk5KugvQkSX6FpbbrWt3M61N2xH55amHnl1nuO8mwlcqLMdsI+RfFm+9RNPC/vFv3fGFTz2i1sAkr3UCe19sxLMbh1l4SPjlqflTmc5/PJs9iDWAI8Fe7DXOxB5krAkAdKM52oh49DazLB3l+WAB6sRAQM+276L ec2-user@ip-xxx-xxx-xxx-xxx.ap-southeast-1.compute.internal' >> ~/.ssh/known_hosts 44 | 45 | ``` 46 | 47 | * 완료되었으면 배포서버에서 배포 대상서버로 ssh 접근 테스트 48 | 49 | * 정상접근 확인 후 ansible 을 이용해 배포 시작 50 | 51 | 52 | 53 | -------------------------------------------------------------------------------- /ES-Tutorial-7/query/query: -------------------------------------------------------------------------------- 1 | DELETE bank 2 | 3 | PUT bank 4 | { 5 | "settings": { 6 | "index": { 7 | "number_of_shards": 1, 8 | "number_of_replicas": 0 9 | } 10 | } 11 | } 12 | 13 | PUT /_all/_settings 14 | { 15 | "index.routing.allocation.require.box_type" : "hot" 16 | } 17 | 18 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-7/tools/.DS_Store -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/hosts: -------------------------------------------------------------------------------- 1 | #[master-node] 2 | #[data-node] 3 | #[all-node:vars] 4 | #ansible_ssh_private_key_file=/home/ec2-user/ES7-Tutorial/common/ES-Key.pem 5 | #ansible_ssh_common_args='-o StrictHostKeyChecking=no' 6 | #[all-node] 7 | 8 | #[rolling-node:vars] 9 | #ansible_ssh_private_key_file=/home/ec2-user/ES7-Tutorial/common/ES-Key.pem 10 | #ansible_ssh_common_args='-o StrictHostKeyChecking=no' 11 | #[rolling-node] 12 | #172.31.9.121 13 | #172.31.11.163 14 | #172.31.9.32 15 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/elasticsearch/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | es_download_url: "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.rpm" 3 | minimum_master_nodes: 2 4 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/elasticsearch/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: restart elasticsearch 3 | service: 4 | name: elasticsearch 5 | state: restarted 6 | become: true 7 | 
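# Notified by the "reload systemd" task in tasks/main.yml; as an Ansible handler it
# runs once at the end of the play, and only on hosts where that task actually ran
# (i.e. where Elasticsearch was newly installed or reconfigured).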
-------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/elasticsearch/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: install dependencies 2 | yum: 3 | name: "{{ item }}" 4 | state: present 5 | with_items: 6 | - java 7 | - unzip 8 | become: yes 9 | 10 | - name: check elasticsearch is runnning 11 | shell: ps aux | grep -v grep | grep -ic elastic 12 | register: elasticsearch_checker 13 | failed_when: elasticsearch_checker.rc == 2 14 | 15 | - name: download elasticsearch 16 | get_url: 17 | url: "{{ es_download_url }}" 18 | dest: /home/ec2-user/elasticsearch-{{ es_version }}.rpm 19 | when: elasticsearch_checker.stdout == "0" 20 | 21 | - name: install elasticsearch 22 | yum: 23 | name: /home/ec2-user/elasticsearch-{{ es_version }}.rpm 24 | state: present 25 | when: elasticsearch_checker.stdout == "0" 26 | become: yes 27 | 28 | - name: create a directory for elasticsearch 29 | file: 30 | path: /var/lib/elasticsearch 31 | state: directory 32 | owner: elasticsearch 33 | group: elasticsearch 34 | mode: 0755 35 | when: elasticsearch_checker.stdout == "0" 36 | become: yes 37 | 38 | - name: create a data directory for elasticsearch 39 | file: 40 | path: /etc/systemd/system/elasticsearch.service.d/ 41 | state: directory 42 | owner: root 43 | group: root 44 | mode: 0755 45 | when: elasticsearch_checker.stdout == "0" 46 | become: yes 47 | 48 | #- name: copy override config file 49 | # template: 50 | # src: override.conf.j2 51 | # dest: /etc/systemd/system/elasticsearch.service.d/override.conf 52 | # when: elasticsearch_checker.stdout == "0" 53 | 54 | - name: create a symlink elasticsearch config 55 | file: 56 | src: /etc/elasticsearch 57 | dest: /var/lib/elasticsearch/config 58 | state: link 59 | owner: elasticsearch 60 | group: elasticsearch 61 | when: elasticsearch_checker.stdout == "0" 62 | become: yes 63 | 64 | - name: modify default jvm heap size (Xms) 65 | replace: 66 | destfile: /var/lib/elasticsearch/config/jvm.options 67 | regexp: -Xms2g 68 | replace: -Xms{{ heap_size }}g 69 | when: elasticsearch_checker.stdout == "0" 70 | become: yes 71 | 72 | - name: modify default jvm heap size (Xmx) 73 | replace: 74 | destfile: /var/lib/elasticsearch/config/jvm.options 75 | regexp: -Xmx2g 76 | replace: -Xmx{{ heap_size }}g 77 | when: elasticsearch_checker.stdout == "0" 78 | become: yes 79 | 80 | - name: modify default jvm heap size (Xms) in ES 6.x 81 | replace: 82 | destfile: /var/lib/elasticsearch/config/jvm.options 83 | regexp: -Xms1g 84 | replace: -Xms{{ heap_size }}g 85 | when: elasticsearch_checker.stdout == "0" 86 | become: yes 87 | 88 | - name: modify default jvm heap size (Xmx) in ES 6.x 89 | replace: 90 | destfile: /var/lib/elasticsearch/config/jvm.options 91 | regexp: -Xmx1g 92 | replace: -Xmx{{ heap_size }}g 93 | when: elasticsearch_checker.stdout == "0" 94 | become: yes 95 | 96 | - name: copy elasticsearch config file 97 | template: 98 | src: elasticsearch.yml.j2 99 | dest: /var/lib/elasticsearch/config/elasticsearch.yml 100 | when: elasticsearch_checker.stdout == "0" 101 | become: yes 102 | 103 | - name: reload systemd 104 | command: systemctl daemon-reload 105 | when: elasticsearch_checker.stdout == "0" 106 | notify: restart elasticsearch 107 | become: yes 108 | 109 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/elasticsearch/templates/elasticsearch.yml.j2: 
-------------------------------------------------------------------------------- 1 | cluster.name: {{ cluster_name }} 2 | node.name: {{ ansible_nodename }} 3 | node.master: {{ master|lower }} 4 | node.data: {{ data|lower }} 5 | path.data: /var/lib/elasticsearch/data 6 | path.logs: /var/lib/elasticsearch/logs 7 | network.bind_host: 0.0.0.0 8 | network.publish_host: {{ ansible_eth0["ipv4"]["address"] }} 9 | discovery.zen.minimum_master_nodes: {{ minimum_master_nodes }} 10 | http.port: 9200 11 | transport.tcp.port: 9300 12 | {% if groups['master-node'] is defined %} 13 | discovery.zen.ping.unicast.hosts: [ {% for host in groups['master-node'] %} "{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}:9300", {% endfor %} ] 14 | {% else %} 15 | discovery.zen.ping.unicast.hosts: [ {% for host in groups['all-node'] %} "{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}:9300", {% endfor %} ] 16 | {% endif %} 17 | http.cors.enabled: true 18 | http.cors.allow-origin: "*" 19 | thread_pool.bulk.queue_size: 10000 20 | thread_pool.search.queue_size: 10000 21 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/filebeat/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | filebeat_download_url: "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.0-x86_64.rpm" 3 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/filebeat/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: restart filebeat 3 | service: 4 | name: filebeat 5 | state: restarted 6 | become: true 7 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/filebeat/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: download filebeat 2 | get_url: 3 | url: "{{ filebeat_download_url }}" 4 | dest: /home/ec2-user/filebeat-{{ filebeat_version }}-x86_64.rpm 5 | 6 | - name: install filebeat 7 | yum: 8 | name: /home/ec2-user/filebeat-{{ filebeat_version }}-x86_64.rpm 9 | state: present 10 | become: true 11 | 12 | - name: copy filebeat config file 13 | template: 14 | src: filebeat.yml.j2 15 | dest: /etc/filebeat/filebeat.yml 16 | notify: restart filebeat 17 | become: true 18 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/filebeat/templates/filebeat.yml.j2: -------------------------------------------------------------------------------- 1 | ###################### Filebeat Configuration Example ######################### 2 | 3 | # This file is an example configuration file highlighting only the most common 4 | # options. The filebeat.full.yml file from the same directory contains all the 5 | # supported options with more comments. You can use it as a reference. 6 | # 7 | # You can find the full configuration reference here: 8 | # https://www.elastic.co/guide/en/beats/filebeat/index.html 9 | 10 | #=========================== Filebeat prospectors ============================= 11 | 12 | filebeat.prospectors: 13 | 14 | - input_type: log 15 | 16 | # Paths that should be crawled and fetched. Glob based paths. 
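  # (Here "cluster_name" is a play variable from site.yml, and the directory matches
  #  path.logs in the elasticsearch role template, so the prospector picks up the main
  #  cluster log plus any slowlog files.)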
17 | paths: 18 | - /var/lib/elasticsearch/logs/{{ cluster_name }}.log 19 | - /var/lib/elasticsearch/logs/*_slowlog.log 20 | 21 | multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}' 22 | multiline.negate: true 23 | multiline.match: after 24 | 25 | 26 | #================================ Outputs ===================================== 27 | 28 | # Configure what outputs to use when sending the data collected by the beat. 29 | # Multiple outputs may be used. 30 | 31 | #-------------------------- Elasticsearch output ------------------------------ 32 | output.elasticsearch: 33 | hosts: ["localhost:9200"] 34 | index: "elasticsearch-logs-%{+yyyy.MM.dd}" 35 | 36 | #------------------------------ Logstash output ------------------------------- 37 | output.logstash: 38 | hosts: ["localhost:5043"] 39 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/rolling/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: install rolling restart script 2 | template: 3 | src: rolling_restart.sh.j2 4 | dest: /home/ec2-user/rolling_restart.sh 5 | 6 | - name: change mode script 7 | file: 8 | path: /home/ec2-user/rolling_restart.sh 9 | mode: 0755 10 | 11 | - name: run script 12 | shell: /home/ec2-user/rolling_restart.sh 13 | register: elasticsearch_update 14 | 15 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/roles/rolling/templates/rolling_restart.sh.j2: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | hostname=`hostname -s` 4 | 5 | curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d' 6 | { 7 | "transient": { 8 | "cluster.routing.allocation.enable": "new_primaries" 9 | } 10 | } 11 | ' 12 | 13 | curl -XPOST 'localhost:9200/_flush/synced?pretty' 14 | 15 | ## end 16 | 17 | sudo service elasticsearch restart 18 | #systemctl stop elasticsearch 19 | 20 | #rpm -U /usr/local/src/elasticsearch-5.6.1.rpm 21 | #yum update -y java 22 | 23 | #systemctl start elasticsearch 24 | 25 | while true; do 26 | cluster_health=`curl -s http://localhost:9200/_cat/nodes | grep -ic $hostname` 27 | 28 | if [ $cluster_health -eq 1 ]; then 29 | break 30 | fi 31 | echo "wait.." 32 | sleep 5 33 | done 34 | 35 | curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d' 36 | { 37 | "transient": { 38 | "cluster.routing.allocation.enable": null 39 | } 40 | } 41 | ' 42 | 43 | while true; do 44 | cluster_health=`curl -s http://localhost:9200/_cat/health | grep -ic green` 45 | 46 | if [ $cluster_health -eq 1 ]; then 47 | break 48 | fi 49 | echo "wait.." 
50 | sleep 10 51 | done 52 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/ansible/site.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: install elasticsearch (master-node) 3 | hosts: master-node 4 | vars: 5 | - cluster_name: ansible-test 6 | - master: true 7 | - data: false 8 | - heap_size: 2 9 | - es_version: 6.8.6 10 | - filebeat_version : 6.8.6 11 | roles: 12 | - elasticsearch 13 | - filebeat 14 | 15 | - name: install elasticsearch (data-node) 16 | hosts: data-node 17 | vars: 18 | - cluster_name: ansible-test 19 | - master: false 20 | - data: true 21 | - heap_size: 2 22 | - es_version: 6.8.6 23 | - filebeat_version : 6.8.6 24 | roles: 25 | - elasticsearch 26 | - filebeat 27 | 28 | - name: install elasticsearch (all-node) 29 | hosts: all-node 30 | vars: 31 | - cluster_name: ansible-test 32 | - master: true 33 | - data: true 34 | - heap_size: 2 35 | - es_version: 6.8.6 36 | - filebeat_version : 6.8.6 37 | roles: 38 | - elasticsearch 39 | - filebeat 40 | 41 | - name: install filebeat 42 | hosts: filebeat-node 43 | vars: 44 | - cluster_name: ansible-test 45 | - filebeat_version : 6.8.6 46 | roles: 47 | - filebeat 48 | 49 | - name: rolling restart 50 | hosts: rolling-node 51 | serial: 1 52 | roles: 53 | - rolling 54 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/alias.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: alias 4 | options: 5 | name: tuto-today 6 | add: 7 | filters: 8 | - filtertype: pattern 9 | kind: prefix 10 | value: '^tuto-.*' 11 | exclude: 12 | - filtertype: age 13 | source: name 14 | direction: younger 15 | timestring: '%Y-%m-%d' 16 | unit: days 17 | unit_count: 1 18 | exclude: 19 | remove: 20 | filters: 21 | - filtertype: pattern 22 | kind: prefix 23 | value: '^tuto-.*' 24 | exclude: 25 | - filtertype: age 26 | source: name 27 | direction: older 28 | timestring: '%Y-%m-%d' 29 | unit: days 30 | unit_count: 1 31 | exclude: 32 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/close.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: close 4 | options: 5 | delete_aliases: False 6 | disable_action: False 7 | filters: 8 | - filtertype: pattern 9 | kind: prefix 10 | value: '^[a-z].*' 11 | exclude: 12 | - filtertype: age 13 | source: name 14 | direction: older 15 | timestring: '%Y-%m-%d' 16 | unit: days 17 | unit_count: 2 18 | exclude: 19 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/delete_duration.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: delete_indices 4 | options: 5 | ignore_empty_list: True 6 | timeout_override: 7 | continue_if_exception: False 8 | disable_action: False 9 | filters: 10 | - filtertype: pattern 11 | kind: prefix 12 | value: '^[a-z].*' 13 | exclude: 14 | - filtertype: kibana 15 | exclude: True 16 | - filtertype: age 17 | source: creation_date 18 | direction: older 19 | unit: days 20 | unit_count: 0 21 | exclude: 22 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/delete_name.action.yml: 
-------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: delete_indices 4 | options: 5 | ignore_empty_list: True 6 | continue_if_exception: False 7 | disable_action: False 8 | filters: 9 | - filtertype: pattern 10 | kind: prefix 11 | value: '^[a-z].*' 12 | exclude: 13 | - filtertype: age 14 | source: name 15 | direction: older 16 | timestring: '%Y-%m-%d' 17 | unit: days 18 | unit_count: 1 19 | exclude: 20 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/es.action.yml: -------------------------------------------------------------------------------- 1 | actions: # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/index.html 2 | 1: 3 | action: delete_indices # curator action 4 | options: 5 | ignore_empty_list: True # 6 | timeout_override: # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/option_timeout_override.html 7 | continue_if_exception: False # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/option_continue.html 8 | disable_action: False # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/option_disable.html 9 | filters: 10 | - filtertype: pattern # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/filtertype_pattern.html 11 | kind: prefix 12 | value: '^[a-z].*-' 13 | exclude: 14 | - filtertype: age # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/filtertype_age.html 15 | source: creation_date 16 | direction: older 17 | unit: days 18 | unit_count: 0 19 | exclude: 20 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/forcemerge.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: forcemerge 4 | options: 5 | max_num_segments: 1 6 | filters: 7 | - filtertype: pattern 8 | kind: prefix 9 | value: '^[a-z].*' 10 | exclude: 11 | - filtertype: age 12 | source: creation_date 13 | direction: older 14 | unit: days 15 | unit_count: 0 16 | exclude: 17 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/open.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: open 4 | options: 5 | disable_action: False 6 | filters: 7 | - filtertype: pattern 8 | kind: prefix 9 | value: '^[a-z].*' 10 | exclude: 11 | - filtertype: age 12 | source: name 13 | direction: older 14 | timestring: '%Y-%m-%d' 15 | unit: days 16 | unit_count: 1 17 | exclude: 18 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/rollover.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: rollover 4 | options: 5 | name: rollover-write 6 | conditions: 7 | max_age: 1d 8 | max_docs: 2 9 | extra_settings: 10 | index.number_of_shards: 3 11 | index.number_of_replicas: 1 12 | continue_if_exception: False 13 | disable_action: False 14 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/rollover_alias.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: alias 4 | options: 5 | name: rollover-read 6 | add: 7 | filters: 8 | - filtertype: pattern 9 | kind: prefix 10 | value: 
'^rollover-curator-.*' 11 | exclude: 12 | - filtertype: age 13 | source: creation_date 14 | direction: older 15 | timestring: '%Y-%m-%d' 16 | unit: days 17 | unit_count: 0 18 | exclude: 19 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/action/warm.action.yml: -------------------------------------------------------------------------------- 1 | actions: 2 | 1: 3 | action: allocation 4 | options: 5 | key: box_type 6 | value: warmdata 7 | allocation_type: require 8 | filters: 9 | - filtertype: pattern 10 | kind: prefix 11 | value: '^[a-z].*-' 12 | exclude: 13 | - filtertype: age 14 | source: name 15 | direction: older 16 | timestring: '%Y-%m-%d' 17 | unit: days 18 | unit_count: 1 19 | exclude: 20 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/config/es.config.yml: -------------------------------------------------------------------------------- 1 | client: # https://www.elastic.co/guide/en/elasticsearch/client/curator/5.6/configfile.html 2 | hosts: 3 | - localhost 4 | port: 9200 5 | url_prefix: 6 | use_ssl: False 7 | certificate: 8 | client_cert: 9 | client_key: 10 | ssl_no_validate: False 11 | http_auth: 12 | timeout: 30 13 | master_only: False 14 | 15 | logging: 16 | loglevel: INFO 17 | logfile: /home/ec2-user/ES7-Tutorial/ES-Tutorial-7/tools/curator/logs/curator.log 18 | logformat: default 19 | blacklist: ['elasticsearch', 'urllib3'] 20 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/cur.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | function delete_duration_indices 4 | { 5 | if [ -z $1 ]; then 6 | echo "##########################################################" 7 | cat config/es.config.yml 8 | echo "##########################################################" 9 | cat action/delete_duration.action.yml 10 | echo "##########################################################" 11 | /bin/curator --dry-run --config config/es.config.yml action/delete_duration.action.yml # dry-run 12 | cat logs/curator.log 13 | \rm logs/curator.log 14 | elif [ $1 == "real" ]; then 15 | /bin/curator --config config/es.config.yml action/delete_duration.action.yml 16 | cat logs/curator.log 17 | \rm logs/curator.log 18 | else 19 | echo "Incorrect parameter" 20 | fi 21 | } 22 | 23 | function create_indices 24 | { 25 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/tuto-$(date -d '2 day ago' '+%Y-%m-%d')/_doc -d '{ "TEST":"TTT" }' > /dev/null 26 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/tuto-$(date -d '1 day ago' '+%Y-%m-%d')/_doc -d '{ "TEST":"TTT" }' > /dev/null 27 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/tuto-$(date -d '0 day ago' '+%Y-%m-%d')/_doc -d '{ "TEST":"TTT" }' > /dev/null 28 | } 29 | 30 | function delete_name_indices 31 | { 32 | if [ -z $1 ]; then 33 | echo "##########################################################" 34 | cat action/delete_name.action.yml 35 | echo "##########################################################" 36 | /bin/curator --dry-run --config config/es.config.yml action/delete_name.action.yml # dry-run 37 | cat logs/curator.log 38 | \rm logs/curator.log 39 | elif [ $1 == "real" ]; then 40 | /bin/curator --config config/es.config.yml action/delete_name.action.yml # dry-run 41 | cat logs/curator.log 42 | \rm logs/curator.log 43 | else 44 | echo 
"Incorrect parameter" 45 | fi 46 | 47 | } 48 | 49 | function add_alias 50 | { 51 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/tuto-$(date -d '1 day ago' '+%Y-%m-%d')/_aliases/tuto-today > /dev/null 52 | if [ -z $1 ]; then 53 | echo "##########################################################" 54 | cat action/alias.action.yml 55 | echo "##########################################################" 56 | /bin/curator --dry-run --config config/es.config.yml action/alias.action.yml # dry-run 57 | cat logs/curator.log 58 | \rm logs/curator.log 59 | elif [ $1 == "real" ]; then 60 | /bin/curator --config config/es.config.yml action/alias.action.yml # dry-run 61 | cat logs/curator.log 62 | \rm logs/curator.log 63 | else 64 | echo "Incorrect parameter" 65 | fi 66 | 67 | } 68 | 69 | function close_indices 70 | { 71 | if [ -z $1 ]; then 72 | echo "##########################################################" 73 | cat action/close.action.yml 74 | echo "##########################################################" 75 | /bin/curator --dry-run --config config/es.config.yml action/close.action.yml # dry-run 76 | cat logs/curator.log 77 | \rm logs/curator.log 78 | elif [ $1 == "real" ]; then 79 | /bin/curator --config config/es.config.yml action/close.action.yml 80 | cat logs/curator.log 81 | \rm logs/curator.log 82 | else 83 | echo "Incorrect parameter" 84 | fi 85 | 86 | } 87 | 88 | function open_indices 89 | { 90 | if [ -z $1 ]; then 91 | echo "##########################################################" 92 | cat action/open.action.yml 93 | echo "##########################################################" 94 | /bin/curator --dry-run --config config/es.config.yml action/open.action.yml # dry-run 95 | cat logs/curator.log 96 | \rm logs/curator.log 97 | elif [ $1 == "real" ]; then 98 | /bin/curator --config config/es.config.yml action/open.action.yml # dry-run 99 | cat logs/curator.log 100 | \rm logs/curator.log 101 | else 102 | echo "Incorrect parameter" 103 | fi 104 | 105 | } 106 | 107 | function forcemerge_indices 108 | { 109 | if [ -z $1 ]; then 110 | echo "##########################################################" 111 | cat action/forcemerge.action.yml 112 | echo "##########################################################" 113 | /bin/curator --dry-run --config config/es.config.yml action/forcemerge.action.yml # dry-run 114 | cat logs/curator.log 115 | \rm logs/curator.log 116 | elif [ $1 == "real" ]; then 117 | /bin/curator --config config/es.config.yml action/forcemerge.action.yml # dry-run 118 | cat logs/curator.log 119 | \rm logs/curator.log 120 | else 121 | echo "Incorrect parameter" 122 | fi 123 | 124 | } 125 | 126 | function rollover_indices 127 | { 128 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/rollover-curator-000001 -d ' 129 | { 130 | "aliases": { 131 | "rollover-write": {}, 132 | "rollover-read": {} 133 | } 134 | } 135 | ' > /dev/null 136 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/rollover-write/_doc -d '{ "TEST":"TTT" }' > /dev/null 137 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/rollover-write/_doc -d '{ "TEST":"TTT" }' > /dev/null 138 | curl -s -H 'Content-Type: application/json' -XPOST http://localhost:9200/rollover-write/_doc -d '{ "TEST":"TTT" }' > /dev/null 139 | 140 | if [ -z $1 ]; then 141 | echo "##########################################################" 142 | cat action/rollover.action.yml 143 | echo "##########################################################" 144 | 
/bin/curator --dry-run --config config/es.config.yml action/rollover.action.yml # dry-run 145 | sleep 3 146 | /bin/curator --dry-run --config config/es.config.yml action/rollover_alias.action.yml # dry-run 147 | cat logs/curator.log 148 | \rm logs/curator.log 149 | elif [ $1 == "real" ]; then 150 | /bin/curator --config config/es.config.yml action/rollover.action.yml # dry-run 151 | sleep 3 152 | /bin/curator --config config/es.config.yml action/rollover_alias.action.yml # dry-run 153 | cat logs/curator.log 154 | \rm logs/curator.log 155 | else 156 | echo "Incorrect parameter" 157 | fi 158 | 159 | } 160 | 161 | function towarm_indices 162 | { 163 | if [ -z $1 ]; then 164 | echo "##########################################################" 165 | cat action/warm.action.yml 166 | echo "##########################################################" 167 | /bin/curator --dry-run --config config/es.config.yml action/warm.action.yml # dry-run 168 | cat logs/curator.log 169 | \rm logs/curator.log 170 | elif [ $1 == "real" ]; then 171 | /bin/curator --config config/es.config.yml action/warm.action.yml # dry-run 172 | cat logs/curator.log 173 | \rm logs/curator.log 174 | else 175 | echo "Incorrect parameter" 176 | fi 177 | 178 | } 179 | 180 | if [ -z $1 ]; then 181 | echo "##################### Menu ##############" 182 | echo " $ ./cur.sh [Command] [real]" 183 | echo "#####################%%%%%%##############" 184 | echo " 1 : delete all indices by duration without kibana index" 185 | echo " 2 : create test indices" 186 | echo " 3 : delete all indices by name without kibana index" 187 | echo " 4 : add a alias" 188 | echo " 5 : close indices" 189 | echo " 6 : open indices" 190 | echo " 7 : forcemerge indices" 191 | echo " 8 : rollover indices" 192 | echo " 9 : towarm indices" 193 | echo "#########################################"; 194 | exit 1; 195 | fi 196 | 197 | case "$1" in 198 | "1" ) delete_duration_indices $2;; 199 | "2" ) create_indices;; 200 | "3" ) delete_name_indices $2;; 201 | "4" ) add_alias $2;; 202 | "5" ) close_indices $2;; 203 | "6" ) open_indices $2;; 204 | "7" ) forcemerge_indices $2;; 205 | "8" ) rollover_indices $2;; 206 | "9" ) towarm_indices $2;; 207 | *) echo "Incorrect Command" ;; 208 | esac 209 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/curator/logs/es.log: -------------------------------------------------------------------------------- 1 | 2 | 2018-11-20 12:51:07,462 INFO Preparing Action ID: 1, "rollover" 3 | 2018-11-20 12:51:07,467 INFO Trying Action ID: 1, "rollover": No description given 4 | 2018-11-20 12:51:07,469 ERROR alias "aliasname" not found. 5 | 2018-11-20 12:51:07,469 ERROR Failed to complete action: rollover. : Unable to perform index rollover with alias "aliasname". See previous logs for more details. 6 | 2018-11-20 12:52:52,774 INFO Preparing Action ID: 1, "rollover" 7 | 2018-11-20 12:52:52,780 INFO Trying Action ID: 1, "rollover": No description given 8 | 2018-11-20 12:52:52,782 INFO DRY-RUN MODE. No changes will be made. 9 | 2018-11-20 12:52:52,786 INFO DRY-RUN: rollover: test-today result: {u'dry_run': True, u'conditions': {u'[max_docs: 2]': False, u'[max_age: 1d]': False}, u'new_index': u'test-2018-11-000021', u'acknowledged': False, u'old_index': u'test-2018-11-20', u'shards_acknowledged': False, u'rolled_over': False} 10 | 2018-11-20 12:52:52,786 INFO Action ID: 1, "rollover" completed. 11 | 2018-11-20 12:52:52,786 INFO Job completed. 
12 | 2018-11-20 12:53:53,852 ERROR Schema error: extra keys not allowed @ data['filters'] 13 | 2018-11-20 12:54:27,064 INFO Preparing Action ID: 1, "rollover" 14 | 2018-11-20 12:54:27,070 INFO Trying Action ID: 1, "rollover": No description given 15 | 2018-11-20 12:54:27,072 INFO DRY-RUN MODE. No changes will be made. 16 | 2018-11-20 12:54:27,075 INFO DRY-RUN: rollover: test-today result: {u'dry_run': True, u'conditions': {u'[max_docs: 2]': False, u'[max_age: 1d]': False}, u'new_index': u'test-2018-11-000021', u'acknowledged': False, u'old_index': u'test-2018-11-20', u'shards_acknowledged': False, u'rolled_over': False} 17 | 2018-11-20 12:54:27,076 INFO Action ID: 1, "rollover" completed. 18 | 2018-11-20 12:54:27,076 INFO Job completed. 19 | 2018-11-20 12:54:35,567 INFO Preparing Action ID: 1, "rollover" 20 | 2018-11-20 12:54:35,572 INFO Trying Action ID: 1, "rollover": No description given 21 | 2018-11-20 12:54:35,574 INFO Performing index rollover 22 | 2018-11-20 12:54:35,577 INFO Action ID: 1, "rollover" completed. 23 | 2018-11-20 12:54:35,577 INFO Job completed. 24 | 2018-11-20 12:55:06,598 INFO Preparing Action ID: 1, "rollover" 25 | 2018-11-20 12:55:06,604 INFO Trying Action ID: 1, "rollover": No description given 26 | 2018-11-20 12:55:06,606 INFO Performing index rollover 27 | 2018-11-20 12:55:06,686 INFO Action ID: 1, "rollover" completed. 28 | 2018-11-20 12:55:06,687 INFO Job completed. 29 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/shard/bulk: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | curl -s -H 'Content-Type: application/x-ndjson' -XPOST $1:9200/bank/account/_bulk?pretty --data-binary @accounts.json 4 | 5 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/shard/monworker.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import urllib3 5 | import json 6 | from datetime import datetime 7 | from influxdb import InfluxDBClient 8 | 9 | influxUrl = "localhost" 10 | esUrl = "localhost:9200" 11 | 12 | def get_ifdb(db, host=influxUrl, port=8086, user='root', passwd='root'): 13 | client = InfluxDBClient(host, port, user, passwd, db) 14 | try: 15 | client.create_database(db) 16 | except: 17 | pass 18 | return client 19 | 20 | def my_test(ifdb): 21 | local_dt = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ') 22 | 23 | statVal = es_mon() 24 | point = [{ 25 | "measurement": 'docs', 26 | "tags": { 27 | "type": "ec2", 28 | }, 29 | "time": local_dt, 30 | "fields": { 31 | "pri_doc": statVal[0], 32 | "tot_doc": statVal[1], 33 | 34 | "pri_idx_tot": statVal[2], 35 | "pri_idx_mil": statVal[3], 36 | "tot_idx_tot": statVal[4], 37 | "tot_idx_mil": statVal[5], 38 | 39 | "pri_squery_tot": statVal[6], 40 | "pri_squery_mil": statVal[7], 41 | "tot_squery_tot": statVal[8], 42 | "tot_squery_mil": statVal[9], 43 | 44 | "pri_sfetch_tot": statVal[10], 45 | "pri_sfetch_mil": statVal[11], 46 | "tot_sfetch_tot": statVal[12], 47 | "tot_sfetch_mil": statVal[13], 48 | 49 | "pri_sscroll_tot": statVal[14], 50 | "pri_sscroll_mil": statVal[15], 51 | "tot_sscroll_tot": statVal[16], 52 | "tot_sscroll_mil": statVal[17], 53 | 54 | "pri_ssuggest_tot": statVal[18], 55 | "pri_ssuggest_mil": statVal[19], 56 | "tot_ssuggest_tot": statVal[20], 57 | "tot_ssuggest_mil": statVal[21] 58 | } 59 | }] 60 | 61 | ifdb.write_points(point) 62 | 63 | def es_mon(): 64 | http = 
urllib3.PoolManager() 65 | header = { 'Content-Type': 'application/json' } 66 | monCmd = esUrl + "/_stats" 67 | 68 | try: 69 | rtn = http.request("GET",monCmd,body=json.dumps(None),headers=header) 70 | except urllib3.exceptions.HTTPError as errh: 71 | print ("Http Error:",errh) 72 | 73 | monData = json.loads(rtn.data) 74 | rtnVal = [] 75 | rtnVal.append(monData['_all']['primaries']['docs']['count']) 76 | rtnVal.append(monData['_all']['total']['docs']['count']) 77 | 78 | rtnVal.append(monData['_all']['primaries']['indexing']['index_total']) 79 | rtnVal.append(monData['_all']['primaries']['indexing']['index_time_in_millis']) 80 | rtnVal.append(monData['_all']['total']['indexing']['index_total']) 81 | rtnVal.append(monData['_all']['total']['indexing']['index_time_in_millis']) 82 | 83 | rtnVal.append(monData['_all']['primaries']['search']['query_total']) 84 | rtnVal.append(monData['_all']['primaries']['search']['query_time_in_millis']) 85 | rtnVal.append(monData['_all']['total']['search']['query_total']) 86 | rtnVal.append(monData['_all']['total']['search']['query_time_in_millis']) 87 | 88 | rtnVal.append(monData['_all']['primaries']['search']['fetch_total']) 89 | rtnVal.append(monData['_all']['primaries']['search']['fetch_time_in_millis']) 90 | rtnVal.append(monData['_all']['total']['search']['fetch_total']) 91 | rtnVal.append(monData['_all']['total']['search']['fetch_time_in_millis']) 92 | 93 | rtnVal.append(monData['_all']['primaries']['search']['scroll_total']) 94 | rtnVal.append(monData['_all']['primaries']['search']['scroll_time_in_millis']) 95 | rtnVal.append(monData['_all']['total']['search']['scroll_total']) 96 | rtnVal.append(monData['_all']['total']['search']['scroll_time_in_millis']) 97 | 98 | rtnVal.append(monData['_all']['primaries']['search']['suggest_total']) 99 | rtnVal.append(monData['_all']['primaries']['search']['suggest_time_in_millis']) 100 | rtnVal.append(monData['_all']['total']['search']['suggest_total']) 101 | rtnVal.append(monData['_all']['total']['search']['suggest_time_in_millis']) 102 | 103 | return rtnVal 104 | 105 | if __name__ == '__main__': 106 | ifdb = get_ifdb(db='mdb') 107 | my_test(ifdb) 108 | 109 | # while(true); do ./monworker.py; done 110 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/shard/search_test.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | #-*- coding: utf-8 -*- 3 | 4 | import urllib3 5 | import json 6 | import threading 7 | import time 8 | import argparse 9 | import logging 10 | import sys 11 | 12 | from urllib3 import HTTPConnectionPool 13 | 14 | parser = argparse.ArgumentParser() 15 | 16 | parser.add_argument("--url", help="the url of cluster", required=True) 17 | parser.add_argument("--port", help="the port of cluster (default 9200)", default=9200, required=False) 18 | parser.add_argument("--threads", help="the number of search threads (default 1)", default=1, required=False) 19 | parser.add_argument("--requests", help="the number of times search request of cluster (default 1)", default=1, required=False) 20 | parser.add_argument("--index_name", help="the name of index to test", required=True) 21 | parser.add_argument("--output_file", help="output file name (default STDOUT)", required=False, type=str) 22 | parser.add_argument("--verbose", help="verbose mode (default False)", default=False, action="store_true", required=False) 23 | 24 | args = parser.parse_args() 25 | 26 | url = args.url 27 | port = args.port 28 | 
threads = int(args.threads) 29 | requests = int(args.requests) 30 | index_name = args.index_name 31 | output_file = args.output_file 32 | verbose = args.verbose 33 | 34 | logger = logging.getLogger("crumbs") 35 | logger.setLevel(logging.DEBUG) 36 | 37 | 38 | if output_file != None : 39 | logger.addHandler(logging.FileHandler(output_file)) 40 | else: 41 | logger.addHandler(logging.StreamHandler(sys.stdout)) 42 | 43 | MAX_THREAD=threads 44 | MAX_REQUESTS=requests 45 | 46 | query = { 47 | "from":0, "size":10000, 48 | "query": { 49 | "query_string": { 50 | "query": "*" 51 | } 52 | } 53 | } 54 | 55 | #query = { 56 | # "from":0, "size":10000, 57 | # "query": { 58 | # "query_string": { 59 | # "query": "*" 60 | # } 61 | # } 62 | #} 63 | 64 | encoded_data = json.dumps(query).encode('utf-8') 65 | 66 | es_connection_pool = HTTPConnectionPool(url, port=port, maxsize=100) 67 | 68 | took_data = {} 69 | 70 | def query_to_es(index): 71 | 72 | for i in range(0,MAX_REQUESTS) : 73 | 74 | response = es_connection_pool.request( 75 | 'GET', 76 | '/%s/_search' % index, 77 | body=encoded_data, 78 | headers={'Content-Type': 'application/json'} 79 | ) 80 | 81 | search_response_data = json.loads(response.data) 82 | 83 | if verbose : 84 | 85 | response = es_connection_pool.request( 86 | 'GET', 87 | '/_cat/indices/%s?h=dc,ss,sc&format=json' % (index_name) 88 | ) 89 | 90 | index_data = json.loads(response.data)[0] 91 | 92 | logger.info( "%s\t%s\t%s\t%s" % ( index_data['dc'], index_data['ss'], index_data['sc'], search_response_data['took'] ) ) 93 | 94 | took_data[index].append(search_response_data['took']) 95 | 96 | time.sleep(1) 97 | 98 | 99 | 100 | threads = [] 101 | 102 | took_data[index_name] = [] 103 | 104 | for i in range(0,MAX_THREAD) : 105 | thread = threading.Thread(target=query_to_es, args=[index_name]) 106 | thread.start() 107 | 108 | threads.append(thread) 109 | 110 | for thread in threads: 111 | thread.join() 112 | 113 | for took in took_data : 114 | 115 | total_took = 0 116 | 117 | for elapsed_time in took_data[took]: 118 | 119 | total_took = total_took + elapsed_time 120 | 121 | logger.info("== RESULT ==") 122 | logger.info("[INDEX :%s] average took time : %d ms" % ( took, total_took/len(took_data[took]))) 123 | logger.info("[INDEX :%s] max took time : %d ms" % ( took, max(took_data[took] ) ) ) 124 | logger.info("[INDEX :%s] min took time : %d ms\n\n" % ( took, min(took_data[took] ) ) ) 125 | 126 | # while(true); do ./search_test.py --url ec2-3-16-8-131.us-east-2.compute.amazonaws.com --threads 1 --requests 1 --index_name bank; done 127 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/shard/shard: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #curl -s -H 'Content-Type: application/json' -XDELETE http://localhost:9200/bank 4 | 5 | if [ -z $1 ]; then 6 | echo "Usage : ./shard localhost" 7 | else 8 | while(true); 9 | do 10 | bash bulk $1 > /dev/null; 11 | python search_test.py --url $1 --threads 1 --requests 1 --index_name bank; 12 | #python monworker.py 13 | sleep 1 14 | done 15 | fi 16 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/telebot/esbot.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | import sys 5 | import urllib3 6 | import json 7 | 8 | def es(cmd): 9 | try: 10 | header = { 'Content-Type': 'application/json' } 11 | data = {} 12 | if 
cmd[1] == "i": 13 | rtn = es_rtn('GET', "localhost:9200", data, header) 14 | else: 15 | rtn = "incorrect commands" 16 | print rtn 17 | return rtn 18 | 19 | except IndexError: 20 | rtn = "Usage : ./esbot [options] [Cluster URL]\n\n\ 21 | i : ES Info\n\ 22 | " 23 | print rtn 24 | return rtn 25 | 26 | def es_rtn(method, cmd, data=None, header=None): 27 | http = urllib3.PoolManager() 28 | 29 | try: 30 | rtn = http.request(method,cmd,body=json.dumps(data),headers=header).data 31 | except urllib3.exceptions.HTTPError as errh: 32 | rtn = "Http Error:",errh 33 | 34 | return rtn 35 | 36 | if __name__ == '__main__': 37 | es(sys.argv) 38 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/telebot/esbot.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/benjamin-btn/ES7-Tutorial/f7cc8210cdf6db5cda732edb736068dba58f660d/ES-Tutorial-7/tools/telebot/esbot.pyc -------------------------------------------------------------------------------- /ES-Tutorial-7/tools/telebot/tele.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import json 5 | import urllib3 6 | from telegram.ext import Updater, MessageHandler, Filters, CommandHandler # import modules 7 | import esbot 8 | 9 | my_token = '' 10 | print('start telegram chat bot') 11 | 12 | def es_command(bot, update) : 13 | cmd = update.message.text.split(" ") 14 | rtn = esbot.es(cmd) 15 | 16 | update.message.reply_text(rtn) 17 | 18 | updater = Updater(my_token) 19 | 20 | es_handler = CommandHandler('es', es_command) 21 | updater.dispatcher.add_handler(es_handler) 22 | 23 | updater.start_polling(timeout=3, clean=True) 24 | updater.idle() 25 | -------------------------------------------------------------------------------- /ES-Tutorial-7/tuto7: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | git pull 4 | 5 | function install_curator_package 6 | { 7 | pip 2> /dev/null 8 | if [ $? -ne 1 ]; then 9 | sudo easy_install pip 10 | sudo pip install elasticsearch-curator --ignore-installed 11 | fi 12 | 13 | } 14 | 15 | function configure_es_template 16 | { 17 | curl -s localhost:9200 > /dev/null 18 | if [ $? 
-ne 0 ]; then 19 | echo "Your ES Process is not working yet" 20 | else 21 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/_template/estemplate -d ' 22 | { 23 | "index_patterns": ["*"], 24 | "order" : 0, 25 | "settings": { 26 | "index.routing.allocation.require.box_type" : "hot" 27 | } 28 | }' 29 | fi 30 | 31 | } 32 | 33 | function install_telegram_package 34 | { 35 | sudo pip install python-telegram-bot --ignore-installed 36 | } 37 | 38 | function install_ansible_package 39 | { 40 | sudo yum -y install ansible 41 | 42 | } 43 | 44 | if [ -z $1 ]; then 45 | echo "##################### Menu ##############" 46 | echo " $ ./tuto7 [Command]" 47 | echo "#####################%%%%%%##############" 48 | echo " 1 : install curator package" 49 | echo " 2 : configure es hot template" 50 | echo " 3 : install telegram package" 51 | echo " 4 : install ansible package" 52 | echo "#########################################"; 53 | exit 1; 54 | fi 55 | 56 | case "$1" in 57 | "1" ) install_curator_package;; 58 | "2" ) configure_es_template;; 59 | "3" ) install_telegram_package;; 60 | "4" ) install_ansible_package;; 61 | *) echo "Incorrect Command" ;; 62 | esac 63 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ES7-Tutorial 2 | 3 | ElasticSearch 튜토리얼을 기술합니다. 4 | 5 | 본 스크립트는 외부 공인망을 기준으로 작성되었습니다. 6 | 7 | ## Product 별 버전 상세 8 | ``` 9 | Product Version. 7.3.0(2019/08/21 기준 Latest Ver.) 10 | ``` 11 | * [Elasticsearch](https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-x86_64.rpm) 12 | * [Kibana](https://artifacts.elastic.co/downloads/kibana/kibana-7.3.0-x86_64.rpm) 13 | 14 | 최신 버전은 [Elasticsearch 공식 홈페이지](https://www.elastic.co/downloads) 에서 다운로드 가능합니다. 15 | 16 | ## ES 7.x 버전에셔 변경된 사항 17 | EX 6.x 버전에서 7.x 버전으로 넘어오면서 다양한 변화가 있었습니다. ([ES 7.x Breaking Changes](https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-7.0.html)) 18 | 19 | 그 중 사용자가 직접 설정해야되는 부분과, default 로 설정되는 부분들에 대해 알아보겠습니다. 20 | 21 | 아래는 변경된 사항에 대해 다뤄볼 주제들입니다. 22 | 23 | * [Network Changes](#Network-Changes) 24 | + ES 클러스터 노드 Network 설정 제약조건 추가 25 | 26 | * [Discovery Changes](#Discovery-Changes) 27 | + ES 클러스터 노드 Discovery 및 Master 선출과정 변경 28 | 29 | * [Indices Changes](#Indices-Changes) 30 | + 인덱스 Primary Shard default 개수 5개에서 1개로 변경 31 | + 세그먼트 refresh 방식 변경 32 | 33 | * [Mapping Changes](#Mapping-Changes) 34 | + \_all meta field 세팅 불가 35 | + 내부적으로 인덱스 내의 매핑 이름을 \_doc 하나로 고정하면서 매핑의 사용을 제거 36 | 37 | * [Search & Query DSL Changes](#Search-&-Query-DSL-Changes) 38 | + Adaptive Replica Selection 이 default 로 설정됨 39 | + Scroll Query 에 request\_cache 사용 불가 40 | 41 | * [Thread Pool Name Changes](#Thread-Pool-Name-Changes) 42 | + bulk 가 write 로 완전히 변경됨(configure 관련 이름까지) 43 | 44 | * [Settings Changes](#Settings-Changes) 45 | + node.name 의 default 값이 랜덤한 값에서 호스트네임으로 변경됨 46 | 47 | # Network Changes 48 | #### 단일 호스트 network.host 설정 시 discovery 설정 필수 49 | * 6.x 버전까지는 단일 호스트에서 discovery 설정 없이 network.host 를 정의할 수 있었습니다. 50 | * 7.x 버전부터는 localhost 로 서비스를 하는 것이 아니면 network.host 를 정의한 순간 discovery 설정을 필수로 해주어야 합니다. 
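A minimal sketch of what this means for a single host (the address is a placeholder, and the initial master entry must match your own node.name); the individual discovery settings are covered in the Discovery Changes section below:

```bash
### elasticsearch.yml - single-host sketch (7.x)
network.host: 172.31.0.10                         # any non-loopback bind now requires discovery settings
discovery.seed_hosts: [ "172.31.0.10:9300" ]
cluster.initial_master_nodes: [ "my-node-name" ]  # must match node.name

### alternatively, for a strictly one-node cluster:
# discovery.type: single-node
```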
51 | 52 | # Discovery Changes 53 | #### ES 클러스터 노드 Discovery 및 Master 선출과정 변경 54 | * ES 클러스터 노드 Discovery 및 Master 선출과정 변경 ([공식 레퍼런스 페이지 참고](https://www.elastic.co/guide/en/elasticsearch/reference/current/discovery-settings.html)) 55 | * 기존의 마스터 후보 장비 목록을 설정하던 discovery.zen.ping.unicast.hosts 와 Split Brain 을 막기 위한 discovery.zen.minimum\_master\_nodes 설정이 없어지고 discovery.seed\_hosts 와 cluster.initial\_master\_nodes 설정이 위 설정들을 대체하게 됨 56 | * discovery.seed\_hosts 57 | + ES 클러스터링의 기준이 되는 마스터노드의 목록을 설정합니다. 58 | + 해당 설정을 하지 않았을 경우에는 localhost 내에 9300 - 9305 포트를 스캔하여 해당 포트로 올라온 ES 프로세스끼리 클러스터링을 진행합니다. 59 | + 기존 6.x 에서 discovery.zen.ping.unicast.hosts 를 설정하지 않으면 단일 노드로 클러스터링이 구성되는 반면, 7.x 에서 discovery.seed\_hosts 를 설정하지 않고 9300 - 9305 포트로 ES 가 2개 이상 올라와있지 않으면 ES 는 정상적으로 올라오지 않습니다. 60 | + 단일 노드로 설정을 희망한다면 discovery.seed\_hosts 에 단일 노드의 주소를 설정해주어야 합니다. 61 | 62 | * cluster.initial\_master\_nodes 63 | + ES 클러스터의 마스터를 선출하는 목록입니다. 64 | + 해당 설정에서 기재된 목록을 기준으로 discovery.zen.minimum\_master\_nodes 의 수를 자동으로 계산합니다. 65 | 66 | * 프로덕션 환경에서는 둘 다 동일하게 설정하면 기존의 설정과 크게 다르지 않습니다. 67 | 68 | # Indices Changes 69 | #### 인덱스 Primary Shard default 개수 5개에서 1개로 변경 70 | * 6.x 에서 number\_of\_shards 를 설정하지 않고 인덱스를 생성하면 기본적으로 5개의 Primary Shard 가 세팅되었던 부분이 기본으로 1개의 Primary Shard 로 생성되는 방식으로 변경 71 | 72 | #### 세그먼트 refresh 방식 변경 73 | * index.refresh\_interval 는 문서가 인덱싱 될 때 메모리 버퍼 캐시에 저장된 문서를 실제 물리 디스크로 내려 저장하는 주기를 의미하고 기본값은 1s 입니다. 74 | * 이 값이 기본값인 1s 로 유지되는 경우, index.search.idle.after 에 설정된 시간만큼(기본값은 30s) 검색 요청이 없다면 검색 요청이 들어올 때 까지 refresh 를 하지 않습니다. 75 | 76 | # Mapping Changes 77 | #### \_all meta field 세팅 불가 78 | * 문서의 전체 field 의 value 를 묶어서 비효율적으로 구성되는 \_all field 가 6.x 에서 deprecate 되었고 7.x 버전에서 완전히 제거되었습니다. 79 | 80 | #### 내부적으로 인덱스 내의 매핑 이름을 \_doc 하나로 고정하면서 매핑의 사용을 제거 81 | * 하나의 인덱스에서 다중 타입을 구성할 수 있는 multi type 을 6.x 에서 deprecate 하고 가능하면 \_doc 라는 이름의 타입으로 쓸 것을 권고하다가 7.x 버전에서는 \_doc 라는 이름의 타입으로 이름을 고정하고, 인덱스 생성 시 타입 이름 기재 부분을 아예 제거하였습니다. 82 | ```bash 83 | # 6.x 84 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/es6test1 -d '{ 85 | "mappings": { 86 | "_doc": { 87 | "properties": { 88 | "es6": { 89 | "type": "text" 90 | } 91 | } 92 | } 93 | } 94 | }' 95 | ``` 96 | ```bash 97 | # 7.x 98 | curl -s -H 'Content-Type: application/json' -XPUT http://localhost:9200/es7test1 -d '{ 99 | "mappings": { 100 | "properties": { 101 | "es7": { 102 | "type": "text" 103 | } 104 | } 105 | } 106 | }' 107 | ``` 108 | 109 | # Search & Query DSL Changes 110 | #### Adaptive Replica Selection 이 default 로 설정됨 111 | * ES 에 검색 요청이 들어오게 되면 기본적으로 replica shard 들을 대상으로 Round Robin 형태로 데이터를 요청합니다. 112 | 113 | * 6.x 에서 좀 더 나은 상태에 있는 샤드로 부터 검색 결과를 받기 위해 Adaptive Replica Selection 기능이 추가되었고, 아래의 조건을 기준으로 동작합니다. 114 | + coordinating node 와 검색 요청에 응답한 replica shard 를 가진 data node 사이에 과거 요청들의 응답시간 115 | + data node 에서 검색 요청에 의해 응답하는 데 걸린 시간 116 | + data node 에서 사용한 search thread pool 의 queue size 117 | 118 | * 7.x 에서는 이 기능이 default 로 설정되어 기존처럼 Round Robin 방식을 사용하기 위해서는 해당 기능을 false 로 설정해주어야 합니다. 119 | ```bash 120 | PUT /_cluster/settings 121 | { 122 | "transient": { 123 | "cluster.routing.use_adaptive_replica_selection": false 124 | } 125 | } 126 | ``` 127 | 128 | #### Scroll Query 에 request\_cache 사용 불가 129 | * 6.x 에서 deprecate 된 request\_cache: true 세팅이 완전히 제거되었습니다. 130 | 131 | # Thread Pool Name Changes 132 | #### bulk 가 write 로 완전히 변경됨(configure 관련 이름까지) 133 | * 6.3 에서 write 로 이름이 변경된 bulk thread pool 이름이 관련 세팅까지 write 로 이름이 변경되었습니다. 134 | * 튜토리얼에서는 thread\_pool.bulk.queue\_size 세팅이 thread\_pool.bulk.queue\_size 로 변경됩니다. 
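As a sketch, the queue-size setting used in this tutorial's Ansible template would be renamed as follows in 7.x (same value, new key):

```bash
# 6.x
thread_pool.bulk.queue_size: 10000

# 7.x - the pool and all of its settings are now named "write"
thread_pool.write.queue_size: 10000
```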
135 | 136 | # Settings Changes 137 | #### node.name 의 default 값이 랜덤한 값에서 호스트 네임으로 변경됨 138 | * 기존에 random string 으로 구성되던 기본 ES 노드 네임이 7.x 버전부터 시스템의 호스트명으로 기본 ES 노드 네임을 설정합니다. 139 | 140 | 141 | -------------------------------------------------------------------------------- /common/ES-Key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEowIBAAKCAQEAocD4OBm5NKa/m+COoHzktfoqz4xUoZSfpSDnQztTRlxCyAa8cV9z4njH1b5u 3 | 6JkURDbjucDfQuHaFC+M6b5LdbLIl30V/0Moxzo5FKPRqrcthXYnyd3S+SN6J5HimG6gwv9L+BK6 4 | chK+QVeKn9jROsJaCFGrv3SJ/DR99NFXTmHqQnZnoYSK8dE3o4H6YmOaw4vrhTXY3WtetXBhUe97 5 | maYfyYDfFo52Kf8hwSc9DoU43pLUTBaXWqfmhtQSe/acd7TG39YUZWdwQrq8Klt140Ex3eXnbsJC 6 | rtJ8JLs9MPgqzY09RJhazlGiIpwCR0eNKIRwndcBpJ2wCo4jx1GZoQIDAQABAoIBACWVC1cliuWT 7 | 1LMn8puRSSaK8IV2indca9dXFMSHNSsE5rNI9WG2FtvIyk18SJKcdpv+0Nxo1rbYeO31ulzYzPmU 8 | x4yDEKhVd1UKzxZflah/lQEMWeRKOOmP96LX/3kBQzLrVEBYQZ+dgTz3VQscukhXvclvCGOcdS73 9 | F0jZltpsAPKIFjyxDt4aoyPu4p4unon6YbIT2gZjvsz2l3VUm5lYEmsA79lMCBp5iOPm8i1k/BO0 10 | 6zSd++ayNO2u+bCOfuLPcqmthRNCjCFxVzNEki/l3FO+Wou9jJWox42twEh4wXAe51dQUqNdZey6 11 | znyojbWGfEHPuwOyaavbzurWFq0CgYEA1UjsGtil76V6hzMhB362kkG9JwVxF0lQoer/gIBIxluM 12 | i4LaHT4NbMXmxcOlt4NnYAPQCJnxhB9uBw/RpZGIr4N+uT1LkDdCw/Rsm+5XEW+bTTgPAq0ELGBf 13 | O+pHG3WeAn3oYuE9ao+7ir/713A9Cn83UU5CQoYV4jEO1Q2bcE8CgYEAwiYO+B82Vx8wdD3XNrEm 14 | sPj1FpGY397BMThv1Ckfynos826XYGGX081HU5xERfh6PT7gvWEw0XNPVOZ1nd0x7pz7zsWXbqoO 15 | ShbENExLlac+deDsSms5ooRcZkRIn1DHWr/6RLAxXNK8Mygsi8E31Hw1Ioiop7nrObqsIforaw8C 16 | gYAxpWrICPv/H364787VZspqmwDDj4G2kOtC9WeJ6tKF0ZOSed/5hJMtaZeBGzx8zgqHD/whtGvC 17 | fGppHGaJaqntaOdbiQgIxsQ0xrVtSnpb5aW5wL3Fuq5JAhnI4YyxuJwSKmqocZORNWnLL0sY59hd 18 | lCU1OMk1oO6BGzg/oY44AwKBgQC/3hrHDRmHyfP5vK+2hiYFmVOlBSh+fcaRHQQvOKEJWeqYfL+u 19 | 6WPBVkpaD8HNIH21jzFNFwLGy10oO0UbSOEyvgOAWfeIzximEY+/W3MLJ6frmOgLt6HSwVoLWwom 20 | IA+T2Mu9HB78a+q/58D2MHI7VLCyOznp4Cvd9mRsg65q8wKBgBizZq3cdzwL1CRjGwmMJlIXfXCg 21 | bbHOozkIkG7WipqnHINc+4tc6eQFkrmX+Rhee9el2qyv/GRntfwFp2e9/CRBKZgAOHOyV/ZRM55f 22 | WtGw+DgwZ6PMdbEy5M7zjyDvsGUpHWcw2ycVs6w0C95kGcP3kfxHyMGDsqIenZ3NlFxp 23 | -----END RSA PRIVATE KEY----- -------------------------------------------------------------------------------- /common/tmux: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ##function tmux_key_download() { 4 | ## sudo yum -y install tmux unzip 5 | ## wget http://ec2-52-221-155-168.ap-southeast-1.compute.amazonaws.com/ES-Key.zip 6 | ## unzip ES-Key.zip 7 | ##} 8 | 9 | function my_tmux() { 10 | KEY="ES-Key.pem" 11 | 12 | HOSTS=$@ 13 | local user="ec2-user" 14 | local hosts=( $HOSTS ) 15 | local name="sshs-$(hexdump -n 2 -v -e '/1 "%02X"' /dev/urandom)" 16 | 17 | tmux new-session -d -s $name 18 | tmux new-window -t $name:1 "ssh -i ES-Key.pem -l $user -o StrictHostKeyChecking=no ${hosts[0]}" 19 | unset hosts[0]i; 20 | for i in "${hosts[@]}"; do 21 | tmux split-window -h "ssh -i ES-Key.pem -l $user -o StrictHostKeyChecking=no $i" 22 | tmux select-layout tiled > /dev/null 23 | done 24 | tmux select-pane -t 0 25 | tmux set-window-option synchronize-panes on > /dev/null 26 | 27 | tmux kill-window -t 0 28 | tmux attach -t $name 29 | 30 | } 31 | 32 | 33 | if [ -z $1 ]; then 34 | echo "Usage : ./tmux ip1 ip2 ip3 ..." 
35 | exit 1; 36 | fi 37 | 38 | my_tmux $@ 39 | 40 | ##if [ -z $1 ]; then 41 | ## echo "##################### Menu ##############" 42 | ## echo " $ ./tmux [Command]" 43 | ## echo "#####################%%%%%%##############" 44 | ## echo " 1 : download a pem key" 45 | ## echo " 2 : connect multi ssh sessions" 46 | ## echo "#########################################"; 47 | ## exit 1; 48 | ##fi 49 | ## 50 | ##case "$1" in 51 | ## "1" ) tmux_key_download;; 52 | ## "2" ) my_tmux $@;; 53 | ## *) echo "Incorrect Command" ;; 54 | ##esac 55 | -------------------------------------------------------------------------------- /questions/9th: -------------------------------------------------------------------------------- 1 | # Q1. 인덱스에 정규표현식을 쓸 수 있는지? 2 | # A1. 와일드 카드를 제외한 정규표현식 사용 불가 3 | 4 | POST test1 5 | { 6 | "test": "TTT" 7 | } 8 | 9 | POST test2 10 | { 11 | "test": "TTT" 12 | } 13 | 14 | DELETE test[0-9] 15 | DELETE test* 16 | 17 | # Q2. 클러스터 샤드 라우팅 primaries, new_primaries 의 차이 18 | # A2. 상황별로 정리 19 | # 각 모드별로 클러스터 내에 노드가 모두 있을 때 20 | # [인덱스 추가] 21 | # primaries - 모든 노드에 새롭게 추가되는 인덱스에 대해서만 프라이머리 샤드만 추가, 레플리카는 unassigned 22 | # new_primaries - 모든 노드에 새롭게 추가되는 인덱스에 대해서만 프라이머리 샤드>만 추가, 레플리카는 unassigned 23 | # -> 동일 24 | # 25 | # 각 모드별로 클러스터에 노드를 추가할 때 26 | # [샤드 배치] 27 | # primaries - 투입되면 프라이머리 샤드들을 새롭게 추가된 노드에게 배치 28 | # new_primaries - 투입되면 샤드를 받지 않은 채로 노드만 추가됨 29 | # [인덱스 추가] 30 | # primaries - 모든 노드에 새롭게 추가되는 인덱스에 대해서만 프라이머리 샤드만 추가, 레플리카는 unassigned 31 | # new_primaries - 새롭게 추가된 노드에만 프라이머리 샤드들을 배치 32 | 33 | PUT _all/_settings 34 | { 35 | "settings": { 36 | "index.unassigned.node_left.delayed_timeout": "5s" 37 | } 38 | } 39 | 40 | GET _cluster/settings 41 | 42 | PUT _cluster/settings 43 | { 44 | "transient" : { 45 | "cluster.routing.allocation.enable" : null 46 | } 47 | } 48 | 49 | PUT test1 50 | { 51 | "settings": { 52 | "index.number_of_shards": 3, 53 | "index.number_of_replicas": 1, 54 | "index.unassigned.node_left.delayed_timeout": "5s" 55 | } 56 | } 57 | 58 | # node stop 59 | 60 | PUT _cluster/settings 61 | { 62 | "transient" : { 63 | "cluster.routing.allocation.enable" : "primaries" 64 | } 65 | } 66 | 67 | # node start 68 | # 투입된 노드에 프라이머리 샤드만 할당 69 | 70 | PUT test2 71 | { 72 | "settings": { 73 | "index.number_of_shards": 3, 74 | "index.number_of_replicas": 1, 75 | "index.unassigned.node_left.delayed_timeout": "5s" 76 | } 77 | } 78 | 79 | # 노드들에게 프라이머리 샤드만 할당 80 | 81 | PUT _cluster/settings 82 | { 83 | "transient" : { 84 | "cluster.routing.allocation.enable" : null 85 | } 86 | } 87 | 88 | # node stop 89 | 90 | PUT _cluster/settings 91 | { 92 | "transient" : { 93 | "cluster.routing.allocation.enable" : "new_primaries" 94 | } 95 | } 96 | 97 | # node start 98 | # 투입된 노드에 기존 인덱스의 샤드는 할당하지 않음 99 | 100 | PUT test3 101 | { 102 | "settings": { 103 | "index.number_of_shards": 3, 104 | "index.number_of_replicas": 1 105 | } 106 | } 107 | 108 | # 새롭게 생성한 인덱스의 샤드는 모든 노드들에게 할당 109 | # 다만, 하나의 노드에만 샤드가 분배되는 것 처럼 보이는 것은 투입된 노드가 다른 노드들에 비해 샤드가 없기 때문에 disk threshold 에 의해 그렇게 보이는 것 110 | 111 | PUT _cluster/settings 112 | { 113 | "transient" : { 114 | "cluster.routing.allocation.enable" : null 115 | } 116 | } 117 | 118 | # Q3. search analyzer 는 무엇인가요? 119 | # A. 보통의 경우 인덱스에 애널라이저를 설정하고, 특정 text field 에 해당 애널라>이저를 설정하면 색인할 때 해당 애널라이저의 방식을 따라 토큰이 생성됩니다. 이후>에 검색 시에도 동일한 애널라이저를 통해 토큰을 생성하여 일치되는 토큰이 있을 때 검색이 됩니다. 120 | # 그런데, N-grams. Edge N-grams 같은 토커나이저들은 색인 시에 실제 사용자가 검>색하지 않을 토큰이 다량으로 생성되는 방식으로 토큰을 생성합니다. 121 | # ex) Foxes (min, max gram 3) -> Fox, oxe, xes 122 | # 사용자는 실제 의미있는 Fox 로 검색하는 게 일반적입니다. 
그래서 색인 시에는 가>능한 많은 토큰들을 생성하고, 검색을 할 때에는 의미 있는 토큰만을 대상으로 검색하고 싶을 때 search_analyzer 를 사용합니다. 123 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/search-analyzer.html 124 | 125 | # ngram token 126 | PUT ngram_index 127 | { 128 | "settings": { 129 | "analysis": { 130 | "analyzer": { 131 | "my_analyzer": { 132 | "tokenizer": "my_tokenizer" 133 | } 134 | }, 135 | "tokenizer": { 136 | "my_tokenizer": { 137 | "type": "ngram", 138 | "min_gram": 3, 139 | "max_gram": 3, 140 | "token_chars": [ 141 | "letter", 142 | "digit" 143 | ] 144 | } 145 | } 146 | } 147 | }, 148 | "mappings": { 149 | "properties": { 150 | "title": { 151 | "type": "text", 152 | "analyzer": "my_analyzer" 153 | } 154 | } 155 | } 156 | } 157 | 158 | POST ngram_index/_analyze 159 | { 160 | "analyzer": "my_analyzer", 161 | "text": "Foxes" 162 | } 163 | # search_analyzer 164 | PUT search_analyzer_index1 165 | { 166 | "settings": { 167 | "analysis": { 168 | "analyzer": { 169 | "my_analyzer": { 170 | "tokenizer": "my_tokenizer" 171 | } 172 | }, 173 | "tokenizer": { 174 | "my_tokenizer": { 175 | "type": "ngram", 176 | "min_gram": 3, 177 | "max_gram": 3, 178 | "token_chars": [ 179 | "letter", 180 | "digit" 181 | ] 182 | } 183 | } 184 | } 185 | }, 186 | "mappings": { 187 | "properties": { 188 | "title": { 189 | "type": "text", 190 | "analyzer": "my_analyzer", 191 | "search_analyzer": "standard" 192 | } 193 | } 194 | } 195 | } 196 | 197 | POST search_analyzer_index1/_doc 198 | { 199 | "title": "Foxes" 200 | } 201 | 202 | POST search_analyzer_index1/_search 203 | { 204 | "query": { 205 | "match": { 206 | "title": "Fox" 207 | } 208 | } 209 | } 210 | 211 | # lowercase token filter for standard analyzer 212 | PUT search_analyzer_index2 213 | { 214 | "settings": { 215 | "analysis": { 216 | "analyzer": { 217 | "my_analyzer": { 218 | "tokenizer": "my_tokenizer", 219 | "filter": [ "lowercase" ] 220 | } 221 | }, 222 | "tokenizer": { 223 | "my_tokenizer": { 224 | "type": "ngram", 225 | "min_gram": 3, 226 | "max_gram": 3, 227 | "token_chars": [ 228 | "letter", 229 | "digit" 230 | ] 231 | } 232 | } 233 | } 234 | }, 235 | "mappings": { 236 | "properties": { 237 | "title": { 238 | "type": "text", 239 | "analyzer": "my_analyzer", 240 | "search_analyzer": "standard" 241 | } 242 | } 243 | } 244 | } 245 | 246 | POST search_analyzer_index2/_doc 247 | { 248 | "title": "Foxes" 249 | } 250 | 251 | POST search_analyzer_index2/_search 252 | { 253 | "query": { 254 | "match": { 255 | "title": "Fox" 256 | } 257 | } 258 | } 259 | 260 | # Q4. 쿼리가 한 번이라도 수행되었을 때 받은 스코어를 유지할 수 있는 방법은 없나요? 261 | # A4. constant_score 를 사용하여 처음부터 스코어를 지정해주는 방법이 있습니다. 이 쿼리는 사용자가 특정 쿼리에 스코어를 지정하여 지속적으로 사용할 것이 예상되기 때문에 filter 절 내에 정의하도록 구성되어 있습니다. 262 | 263 | # score1, 2, 3... 264 | 265 | POST bank/_search 266 | { 267 | "query": { 268 | "match": { 269 | "address": "Fleet" 270 | } 271 | } 272 | } 273 | 274 | 275 | GET bank/_search 276 | { 277 | "query": { 278 | "constant_score" : { 279 | "filter": { 280 | "match" : { "address" : "Fleet"} 281 | }, 282 | "boost" : 3.4 283 | } 284 | } 285 | } 286 | 287 | # Q5. bulk size 조정 방법 288 | # 벌크 사이즈의 조정은 클라에서 합니다. ES 에서 조정할 수 있는 것은 bulk job 이 코어를 몇 개 까지 쓸 수 있게 할 것인지, 코어를 다 써서 더 이상 할당할 코어가 없을 때 요청된 job 들을 얼마나 저장할지만 결정합니다. 6장에 해당 내용이 있습니다. 289 | 290 | # Q6. term 쿼리는 스코어가 일정한데 어떤 사항을 기준으로 순서가 결정되나요? 291 | # A6. 순서는 랜덤하게 결정됩니다. 스코어가 모두 동일하기 때문에 전체 문서가 필요하면 pagination 을 활용해야 합니다. 292 | 293 | # 8509 294 | POST shakespeare/_search 295 | { 296 | "query": { 297 | "term": { 298 | "speaker.keyword": "CADE" 299 | } 300 | } 301 | } 302 | 303 | # Q7. 
search template 304 | # A7. 사용자가 요청할 쿼리의 형식을 미리 템플릿화 하여 정의하고 정의된 템플릿에 편하게 쿼리를 요청할 수 있는 기능 305 | GET _search/template 306 | { 307 | "source" : { 308 | "query": { "match" : { "{{my_field}}" : "{{my_value}}" } }, 309 | "size" : "{{my_size}}" 310 | }, 311 | "params" : { 312 | "my_field" : "message", 313 | "my_value" : "some message", 314 | "my_size" : 5 315 | } 316 | } 317 | 318 | POST _scripts/my_search_template 319 | { 320 | "script":{ 321 | "lang":"mustache", 322 | "source":{ 323 | "query":{ 324 | "match":{ 325 | "text_entry":"{{my_query}}" 326 | } 327 | } 328 | } 329 | } 330 | } 331 | 332 | GET _scripts/my_search_template 333 | 334 | POST shakespeare/_search/template 335 | { 336 | "id": "my_search_template", 337 | "params": { 338 | "my_query": "my mother" 339 | } 340 | } 341 | 342 | DELETE _scripts/my_search_template 343 | 344 | # Q8. 저장된 Routing Key 들은 어디서 볼 수 있나요? 345 | # A8. Routing Key 는 문서의 ID 를 대체하여 샤드 할당 알고리즘에 의해 문서를 색인합니다. 346 | # 문서의 ID 와 마찬가지로 별도의 리스트를 확인하기 어려운 정보로 확인됩니다. 347 | # 해당 Key 는 특정 샤드로 데이터를 저장하고 싶을 때 사용하는 unique key 입니다. 348 | # _source field 에서 자동으로 사용할 수 없는 항목이니 문서의 필드에 함께 추가하여 색인하는 게 좋을 것 같습니다. 349 | 350 | # Q9. copy_to 필드는 어디에 저장되나요? 351 | # A9. ES 공식 문서에 해당 필드는 디스크에 저장되지 않는다고 명시되어 있습니다. 352 | # _source 필드는 디스크에 저장되는 필드입니다. 즉, copy_to 는 쿼리 타임에 메모리 상에 저장되었다가 소멸되는 것으로 보입니다. 353 | 354 | DELETE rindex 355 | # Routing key 를 이용한 인덱싱 356 | POST rindex/_doc?routing=user1 357 | { 358 | "title": "This is a document for user1" 359 | } 360 | 361 | POST rindex/_doc?routing=user2 362 | { 363 | "title": "This is a document for user2" 364 | } 365 | 366 | GET rindex/_search 367 | GET rindex/_search?routing=user1 368 | GET rindex/_search?routing=user2 369 | 370 | 371 | POST rindex/_search?routing=user2 372 | { 373 | "query": { 374 | "match": { 375 | "title": "user1" 376 | } 377 | } 378 | } 379 | 380 | POST rindex/_search?routing=user1 381 | { 382 | "query": { 383 | "match": { 384 | "title": "user1" 385 | } 386 | } 387 | } 388 | 389 | POST rindex/_search 390 | { 391 | "query": { 392 | "terms": { 393 | "_routing": [ "user1", "user2" ] 394 | } 395 | } 396 | } 397 | 398 | --------------------------------------------------------------------------------