├── .gitignore
├── README.md
├── consumers
│   └── boom
│       ├── deploy
│       ├── files
│       │   ├── centos7-base.repo
│       │   └── tests
│       │       ├── test00
│       │       ├── test01
│       │       ├── test02
│       │       ├── test03
│       │       └── test04
│       └── setup.yml
├── deployment
│   ├── terraform
│   │   ├── README.md
│   │   ├── ansible.cfg
│   │   ├── aws-ec2
│   │   │   ├── main.tf
│   │   │   ├── userdata.sh
│   │   │   └── vars.tf
│   │   ├── bin
│   │   │   └── README
│   │   ├── deploy
│   │   ├── destroy
│   │   ├── digitalocean
│   │   │   ├── main.tf
│   │   │   └── vars.tf
│   │   ├── hosts
│   │   │   ├── README
│   │   │   └── hostsheader
│   │   ├── playbook.yml
│   │   └── terraform.tfvars
│   └── vagrant
│       ├── Vagrantfile
│       └── ansible
│           ├── setup.yml
│           └── templates
│               └── hosts.j2
├── gateways
│   ├── apiman
│   │   ├── README.md
│   │   ├── configure
│   │   ├── deploy
│   │   ├── files
│   │   │   ├── apiman.service
│   │   │   └── standalone-apiman.xml
│   │   └── setup.yml
│   ├── kong
│   │   ├── configure
│   │   ├── deploy
│   │   ├── files
│   │   │   ├── cassandra.repo
│   │   │   ├── cassandra.yaml
│   │   │   ├── kong.repo
│   │   │   ├── kong.service
│   │   │   ├── kong.yml
│   │   │   └── waitforcassandra.sh
│   │   └── setup.yml
│   ├── tyk
│   │   ├── configure
│   │   ├── deploy
│   │   ├── files
│   │   │   ├── apps
│   │   │   │   ├── test01.json
│   │   │   │   ├── test02.json
│   │   │   │   ├── test03.json
│   │   │   │   └── test04.json
│   │   │   ├── mongodb-org-3.0.repo
│   │   │   ├── tyk-gateway.conf
│   │   │   └── tyk.repo
│   │   └── setup.yml
│   └── wso2
│       ├── README.md
│       ├── deploy
│       ├── files
│       │   └── wso2am.service
│       └── setup.yml
└── webservers
    └── dummy-api
        ├── deploy
        ├── files
        │   ├── centos7-base.repo
        │   └── dummy-api.service
        └── setup.yml

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.vagrant

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# API Gateway Benchmarks

This project aims to provide a complete set of tools needed to do simple performance comparisons in the API manager/gateway space. It is inspired by the great [Framework Benchmarks project](https://github.com/TechEmpower/FrameworkBenchmarks) by [TechEmpower](https://www.techempower.com/benchmarks/).

## About

This repository contains configuration that makes it easy for everyone to reproduce performance comparisons of different HTTP based API gateway/manager products. The configuration and tests are open source, and contributions are encouraged.

To simplify the test suite, three roles are defined: *consumer*, *gateway* and *webserver*. To run a performance test, each role must be filled by exactly one software component.

For performance comparisons, multiple performance tests are executed, and the software component filling one role (typically the gateway) is switched between runs.

## Tests

The tests cover a limited set of features which are considered basic functionality in most API gateways. Each test focuses on a set of very specific features for easy comparison.

**Test 00: Reference**

Requests are sent directly from the consumer to the webserver. The gateway is not part of the request handling, and therefore does not affect the results.

The requests should be sent according to the following specifications.

| Property       | Value        |
|----------------|--------------|
| Request method | GET          |
| Protocol       | http         |
| Host           | webserver    |
| Port           | 8888         |
| Request path   | /test00      |
| Headers        | *none*       |

The webserver should accept the requests and reply with ``200 OK``.

**Test 01: HTTP routing**

Proxy consumer requests through the gateway to the upstream webserver.

The requests should be sent from the consumer according to the following specifications.

| Property       | Value        |
|----------------|--------------|
| Request method | GET          |
| Protocol       | http         |
| Host           | gateway      |
| Port           | 8000         |
| Request path   | /test01      |
| Headers        | *none*       |

The gateway should accept the requests and proxy them to ``http://webserver:8888/test01``. The webserver should accept the requests and reply with ``200 OK``.

**Test 02: Key based authentication and authorization**

Authenticate, authorize and proxy consumer requests through the gateway to the upstream webserver.

The requests should be sent from the consumer according to the following specifications.

| Property       | Value        |
|----------------|--------------|
| Request method | GET          |
| Protocol       | http         |
| Host           | gateway      |
| Port           | 8000         |
| Request path   | /test02      |
| Headers        | apikey=key02 |

The gateway should verify the specified key, accept the requests and proxy them to ``http://webserver:8888/test02``. The webserver should accept the requests and reply with ``200 OK``.

**Test 03: Key based auth and rate limiting (high limit)**

Authenticate, authorize and proxy consumer requests through the gateway to the upstream webserver. All requests should be counted, but none should exceed the rate limit.

The requests should be sent from the consumer according to the following specifications.

| Property       | Value        |
|----------------|--------------|
| Request method | GET          |
| Protocol       | http         |
| Host           | gateway      |
| Port           | 8000         |
| Request path   | /test03      |
| Headers        | apikey=key03 |

The gateway should verify the specified key, accept the requests, count them and proxy them to ``http://webserver:8888/test03``. The webserver should accept the requests and reply with ``200 OK``.

**Test 04: Key based auth and rate limit of 1 rps**

Authenticate, authorize and proxy consumer requests through the gateway to the upstream webserver. Only one request per second is allowed; the rest should be rejected.

The requests should be sent from the consumer according to the following specifications.

| Property       | Value        |
|----------------|--------------|
| Request method | GET          |
| Protocol       | http         |
| Host           | gateway      |
| Port           | 8000         |
| Request path   | /test04      |
| Headers        | apikey=key04 |

The gateway should verify the specified key, and allow only one request per second. That request should be proxied to ``http://webserver:8888/test04``, and the webserver should accept it and reply with ``200 OK``. Requests exceeding the rate limit should be rejected by the gateway.

## Roles specification

### Consumers

Configuration for each type of consumer is put in subdirectories in the ``consumers/`` directory. Each subdirectory should contain a ``deploy`` file that can be executed to install and prepare the consumer for load generation.

Wrappers to run the different tests should be put in ``/usr/local/bin/`` inside the consumer instance and named ``test00``, ``test01``, ..., ``testXX``. The wrappers should execute requests according to the test specifications.
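For illustration, a minimal wrapper for Test 02 could look like the sketch below. It is hypothetical and uses ``curl`` for clarity; the actual wrappers shipped in ``consumers/boom/files/tests/`` use boom for load generation, as shown later in this document.

    #!/bin/bash
    # Hypothetical /usr/local/bin/test02 wrapper (illustration only).
    # Sends a single request according to the Test 02 specification:
    # GET http://gateway:8000/test02 with the header apikey=key02.
    curl -i -H "apikey: key02" http://gateway:8000/test02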
### Gateways

Configuration for each API gateway is put in subdirectories in the ``gateways/`` directory. Each subdirectory should contain a ``deploy`` file that can be executed to install, configure and start the gateway. It should also define the APIs and policies needed for the tests.

| Property    | Value   |
|-------------|---------|
| Protocol    | http    |
| Listen host | 0.0.0.0 |
| Listen port | 8000    |

### Webservers

Configuration for each web server is put in subdirectories in the ``webservers/`` directory. Each subdirectory should contain a ``deploy`` file that can be executed to install, configure and start the web server.

| Property    | Value   |
|-------------|---------|
| Protocol    | http    |
| Listen host | 0.0.0.0 |
| Listen port | 8888    |

## Execution

### Environment specifications

* Three instances running CentOS 7 x86_64. Each of them serves the purpose of ``consumer``, ``gateway`` or ``webserver``.
* Each host should be configured with ``/etc/hosts`` properly set with entries for ``consumer``, ``gateway`` and ``webserver`` for consistent host mapping in different environments.
* SELinux should be disabled.
* The EPEL7 repository should be enabled.
* Root access is required.
* This git repository must be cloned to */opt/benchmarks*.

### Deployment example

#### Vagrant and VirtualBox

**1. Install dependencies**

* Install [VirtualBox](https://www.virtualbox.org/wiki/Downloads).
* Install [VirtualBox guest additions](https://www.virtualbox.org/wiki/Downloads).
* Install [Vagrant](https://www.vagrantup.com/).
* Clone this git repository (``git clone https://github.com/varnish/api-gateway-benchmarks``).

**2. Prepare virtual environment**

Build the three virtual instances using Vagrant.

    cd deployment/vagrant
    vagrant up

**3. Deploy components**

    vagrant ssh gateway
    # Using Kong in this example. Tyk is also available in the tyk directory.
    cd /opt/benchmarks/gateways/kong
    sudo ./deploy
    exit

    vagrant ssh webserver
    cd /opt/benchmarks/webservers/dummy-api
    sudo ./deploy
    exit

    vagrant ssh consumer
    cd /opt/benchmarks/consumers/boom
    sudo ./deploy
    exit

**4. Run tests**

    vagrant ssh consumer
    /usr/local/bin/test00
    /usr/local/bin/test01
    /usr/local/bin/test02
    /usr/local/bin/test03
    /usr/local/bin/test04
    exit

**5. Interpret results**

Currently this is a manual process. The goal is to automate it.

--------------------------------------------------------------------------------
/consumers/boom/deploy:
--------------------------------------------------------------------------------
#!/bin/bash -x

# Check if Ansible is installed
which ansible-playbook > /dev/null 2>&1
if [ $? -ne 0 ]; then
    # Install Ansible if it's not
    yum install -y ansible
fi

ansible-playbook -c local -i "localhost," setup.yml

--------------------------------------------------------------------------------
/consumers/boom/files/centos7-base.repo:
--------------------------------------------------------------------------------
[centos7-base]
name=CentOS-7 - Base
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=os&infra=$infra
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7

--------------------------------------------------------------------------------
/consumers/boom/files/tests/test00:
--------------------------------------------------------------------------------
#!/bin/bash

/usr/local/go/bin/boom -n 100000 -c 100 http://webserver:8888/test00

--------------------------------------------------------------------------------
/consumers/boom/files/tests/test01:
--------------------------------------------------------------------------------
#!/bin/bash

/usr/local/go/bin/boom -n 100000 -c 100 http://gateway:8000/test01

--------------------------------------------------------------------------------
/consumers/boom/files/tests/test02:
--------------------------------------------------------------------------------
#!/bin/bash

/usr/local/go/bin/boom -n 100000 -c 100 -h "apikey:key02" http://gateway:8000/test02

--------------------------------------------------------------------------------
/consumers/boom/files/tests/test03:
--------------------------------------------------------------------------------
#!/bin/bash

/usr/local/go/bin/boom -n 100000 -c 100 -h "apikey:key03" http://gateway:8000/test03

--------------------------------------------------------------------------------
/consumers/boom/files/tests/test04:
--------------------------------------------------------------------------------
#!/bin/bash

/usr/local/go/bin/boom -n 100000 -c 100 -h "apikey:key04" http://gateway:8000/test04

--------------------------------------------------------------------------------
/consumers/boom/setup.yml:
--------------------------------------------------------------------------------
- name: "Setup Boom"
  hosts: all
  environment:
    GOPATH: "/usr/local/go"
  tasks:
    - name: "Install disabled Centos-Base repo for RHEL7"
      copy: src="files/centos7-base.repo" dest="/etc/yum.repos.d/centos7-base.repo" mode="0444"
      when: ansible_distribution in ['RedHat'] and
            ansible_distribution_major_version in ['7']

    - name: "Install golang with Centos repo for RHEL7"
      yum: name="golang" enablerepo="centos7-base" state="installed"
      when: ansible_distribution in ['RedHat'] and
            ansible_distribution_major_version in ['7']

    - name: "Ensure tools are installed"
      yum: name="{{ item }}" state="installed"
      with_items:
        - "git"
        - "golang"

    - name: "Create directory structure"
      file: path="{{ item }}" state="directory" mode="0755"
      with_items:
        - "/usr/local/go"
        - "/usr/local/go/src"
        - "/usr/local/go/pkg"
        - "/usr/local/go/bin"

    - name: "Download and install boom"
      command: "/bin/go get github.com/espebra/boom"
      args:
        creates: "/usr/local/go/src/github.com/espebra/boom/"

    - name: "Ensure that test wrappers are present"
      copy: src="files/tests/" dest="/usr/local/bin/" mode="0555"
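The boom wrappers above write their statistics to stdout, and the top-level README notes that interpreting results is still a manual step. A small driver script can at least archive the raw output per run for later comparison. This is a sketch, not part of the repository; the results directory is an arbitrary choice.

    #!/bin/bash
    # Run all test wrappers in sequence and archive the raw boom output.
    # Assumes the wrappers from files/tests/ are installed in /usr/local/bin/
    # by the playbook above.
    RESULTS_DIR="/tmp/benchmark-results/$(date +%Y%m%d-%H%M%S)"  # assumed path
    mkdir -p "$RESULTS_DIR"
    for t in test00 test01 test02 test03 test04; do
        echo "Running ${t}..."
        "/usr/local/bin/${t}" > "${RESULTS_DIR}/${t}.txt" 2>&1
    done
    echo "Results stored in ${RESULTS_DIR}"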
--------------------------------------------------------------------------------
/deployment/terraform/README.md:
--------------------------------------------------------------------------------
# Terraform based deploys

## Installing Terraform

Terraform is a Golang based tool and official binaries are available at https://terraform.io/downloads.html.
On OSX, Terraform can also be installed via Homebrew.

## Important information

Terraform manipulates cloud based instances, and it is imperative that you ensure that the configuration used does not alter any of your existing resources. Typically the naming chosen for the instances should avoid this, but please verify with ``terraform plan`` before applying or destroying.

## Ansible integration

A nice dynamic inventory script for Ansible is available at https://github.com/adammck/terraform-inventory.
Binary releases are available at https://github.com/adammck/terraform-inventory/releases.
Copy the ``terraform-inventory`` binary that works for your platform to the ``bin/`` subdirectory.

## Amazon EC2

### Quickstart EC2

1. Edit the terraform.tfvars file and insert AWS credentials and sshkey, or add them to a separate terraform.tfvars.mine file.
2. Run ``./deploy aws-ec2 [terraform.tfvars.mine]`` to start instances, provision them and run the tests.
3. Run ``./destroy aws-ec2 [terraform.tfvars.mine]`` to remove all AWS resources created by ``deploy``.

### Manual EC2 run

1. Edit the terraform.tfvars file and insert AWS credentials and sshkey.
2. Run ``terraform plan aws-ec2 [-var-file=terraform.tfvars.mine]`` to verify the plan.
3. Run ``terraform apply aws-ec2 [-var-file=terraform.tfvars.mine]`` to spin up instances.
4. Run ``ansible-playbook --inventory-file=bin/terraform-inventory playbook.yml``.
5. Run ``terraform destroy aws-ec2 [-var-file=terraform.tfvars.mine]`` to remove all resources.

### Nodes

The setup consists of three nodes:

* webserver
* gateway
* consumer

### Custom setups

The EC2 region, instance types and AMI to use are configured with variables. The defaults are **m3.medium** instances and the official CentOS 7 AMI:

    aws_ec2_region = "us-east-1"
    aws_ec2_ami = "ami-6d1c2007" (Centos7 64bit)
    instance_types = "m3.medium" (Can be configured per node)

## DigitalOcean

### Quickstart DigitalOcean

1. Edit the terraform.tfvars file and insert a valid DigitalOcean API key and sshkey.
2. Run ``./deploy digitalocean [terraform.tfvars.mine]`` to start instances, provision them and run the tests.
3. Run ``./destroy digitalocean [terraform.tfvars.mine]`` to remove all DigitalOcean resources created by ``deploy``.

### Manual DigitalOcean run

1. Edit the terraform.tfvars file and insert a valid API key and sshkey.
2. Run ``terraform plan digitalocean [-var-file=terraform.tfvars.mine]`` to verify the plan.
3. Run ``terraform apply digitalocean [-var-file=terraform.tfvars.mine]`` to spin up instances.
4. Run ``ansible-playbook --inventory-file=bin/terraform-inventory playbook.yml``.
5. Run ``terraform destroy digitalocean [-var-file=terraform.tfvars.mine]`` to remove all resources.

### Custom setups

The droplet region, size and image to use are configured with variables. The defaults are:

    droplet_region = "ams3" (Amsterdam)
    droplet_image = "centos-7-0-x64"
    droplet_size = "1gb" (Can be configured per node)

--------------------------------------------------------------------------------
/deployment/terraform/ansible.cfg:
--------------------------------------------------------------------------------
[ssh_connection]
# Disable to avoid certain weird issues
pipelining=False

[defaults]
# DigitalOcean images use root
remote_user=root
# The CentOS 7 AMI on AWS EC2 uses centos
#remote_user=centos
# The RHEL 7 AMI on AWS EC2 uses ec2-user
#remote_user=ec2-user

[privilege_escalation]
# The CentOS 7 AMI on AWS EC2 uses an unprivileged user and needs sudo
become=True
become_user=root

--------------------------------------------------------------------------------
/deployment/terraform/aws-ec2/main.tf:
--------------------------------------------------------------------------------
# Configure the Amazon EC2 Provider
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region = "${var.aws_ec2_region}"
}

# Create a key pair
resource "aws_key_pair" "apiperf-sshkey" {
  key_name = "apiperf-sshkey"
  public_key = "${var.sshkey}"
}

# Create a VPC
resource "aws_vpc" "apiperf" {
  cidr_block = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# Create an internet gateway to give the subnet access to the world
resource "aws_internet_gateway" "apiperf" {
  vpc_id = "${aws_vpc.apiperf.id}"
}

# Grant VPC internet access on its main route table
resource "aws_route" "apiperf-internet_access" {
  route_table_id = "${aws_vpc.apiperf.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = "${aws_internet_gateway.apiperf.id}"
}

# Create a subnet to launch our instances into
resource "aws_subnet" "apiperf" {
  vpc_id = "${aws_vpc.apiperf.id}"
  cidr_block = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

# Our default security group to access
# the instances over SSH and HTTP
resource "aws_security_group" "apiperf" {
  name = "api_engine_perftests"
  description = "Used for API Engine Performance Testing"
  vpc_id = "${aws_vpc.apiperf.id}"

  # Permit ICMP
  ingress {
    from_port = -1
    to_port = -1
    protocol = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH access from anywhere
  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from anywhere
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Testing ports
  ingress {
    from_port = 0
    to_port = 8000
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 0
    to_port = 8888
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port = 0
    to_port = 6081
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound internet access
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks =
["0.0.0.0/0"] 98 | } 99 | } 100 | 101 | resource "aws_instance" "apiperf-gateway" { 102 | ami = "${lookup(var.aws_ec2_ami, "gateway")}" 103 | instance_type = "${lookup(var.instance_types, "gateway")}" 104 | key_name = "apiperf-sshkey" 105 | user_data = "${file("aws-ec2/userdata.sh")}" 106 | subnet_id = "${aws_subnet.apiperf.id}" 107 | vpc_security_group_ids = ["${aws_security_group.apiperf.id}"] 108 | tags { 109 | name = "${lookup(var.instance_names, "gateway")}" 110 | } 111 | provisioner "local-exec" { 112 | command = "echo ${self.private_ip} ${lookup(var.instance_names, "gateway")} > hosts/${lookup(var.instance_names, "gateway")}.host" 113 | } 114 | provisioner "local-exec" { 115 | command = "echo ansible_ssh_user: ${lookup(var.user_name, aws_instance.apiperf-gateway.ami)} > host_vars/${self.public_ip}" 116 | } 117 | } 118 | 119 | resource "aws_instance" "apiperf-webserver" { 120 | ami = "${lookup(var.aws_ec2_ami, "webserver")}" 121 | instance_type = "${lookup(var.instance_types, "webserver")}" 122 | key_name = "apiperf-sshkey" 123 | user_data = "${file("aws-ec2/userdata.sh")}" 124 | subnet_id = "${aws_subnet.apiperf.id}" 125 | vpc_security_group_ids = ["${aws_security_group.apiperf.id}"] 126 | tags { 127 | name = "${lookup(var.instance_names, "webserver")}" 128 | } 129 | provisioner "local-exec" { 130 | command = "echo ${self.private_ip} ${lookup(var.instance_names, "webserver")} > hosts/${lookup(var.instance_names, "webserver")}.host" 131 | } 132 | provisioner "local-exec" { 133 | command = "echo ansible_ssh_user: ${lookup(var.user_name, aws_instance.apiperf-webserver.ami)} > host_vars/${self.public_ip}" 134 | } 135 | } 136 | 137 | resource "aws_instance" "apiperf-consumer" { 138 | ami = "${lookup(var.aws_ec2_ami, "consumer")}" 139 | instance_type = "${lookup(var.instance_types, "consumer")}" 140 | key_name = "apiperf-sshkey" 141 | user_data = "${file("aws-ec2/userdata.sh")}" 142 | subnet_id = "${aws_subnet.apiperf.id}" 143 | vpc_security_group_ids = ["${aws_security_group.apiperf.id}"] 144 | tags { 145 | name = "${lookup(var.instance_names, "consumer")}" 146 | } 147 | provisioner "local-exec" { 148 | command = "echo ${self.private_ip} ${lookup(var.instance_names, "consumer")} > hosts/${lookup(var.instance_names, "consumer")}.host" 149 | } 150 | provisioner "local-exec" { 151 | command = "echo ansible_ssh_user: ${lookup(var.user_name, aws_instance.apiperf-consumer.ami)} > host_vars/${self.public_ip}" 152 | } 153 | } 154 | 155 | resource "null_resource" "hostsfile" { 156 | triggers { 157 | aws_instance_consumer = "${aws_instance.apiperf-consumer.private_ip}" 158 | aws_instance_webserver = "${aws_instance.apiperf-webserver.private_ip}" 159 | aws_instance_gateway = "${aws_instance.apiperf-gateway.private_ip}" 160 | } 161 | provisioner "file" { 162 | connection { 163 | host = "${aws_instance.apiperf-consumer.public_ip}" 164 | user = "${lookup(var.user_name, aws_instance.apiperf-consumer.ami)}" 165 | } 166 | source = "hosts" 167 | destination = "/tmp" 168 | } 169 | provisioner "remote-exec" { 170 | connection { 171 | host = "${aws_instance.apiperf-consumer.public_ip}" 172 | user = "${lookup(var.user_name, aws_instance.apiperf-consumer.ami)}" 173 | } 174 | inline = [ 175 | "sudo sh -c 'cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts'", 176 | ] 177 | } 178 | provisioner "file" { 179 | connection { 180 | host = "${aws_instance.apiperf-gateway.public_ip}" 181 | user = "${lookup(var.user_name, aws_instance.apiperf-gateway.ami)}" 182 | } 183 | source = "hosts" 184 | destination = "/tmp" 
  }
  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.apiperf-gateway.public_ip}"
      user = "${lookup(var.user_name, aws_instance.apiperf-gateway.ami)}"
    }
    inline = [
      "sudo sh -c 'cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts'",
    ]
  }
  provisioner "file" {
    connection {
      host = "${aws_instance.apiperf-webserver.public_ip}"
      user = "${lookup(var.user_name, aws_instance.apiperf-webserver.ami)}"
    }
    source = "hosts"
    destination = "/tmp"
  }
  provisioner "remote-exec" {
    connection {
      host = "${aws_instance.apiperf-webserver.public_ip}"
      user = "${lookup(var.user_name, aws_instance.apiperf-webserver.ami)}"
    }
    inline = [
      "sudo sh -c 'cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts'",
    ]
  }
}

--------------------------------------------------------------------------------
/deployment/terraform/aws-ec2/userdata.sh:
--------------------------------------------------------------------------------
#!/bin/bash -v
setenforce 0
cat <<EOF > /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted
EOF

--------------------------------------------------------------------------------
/deployment/terraform/aws-ec2/vars.tf:
--------------------------------------------------------------------------------
variable "aws_access_key" {
  description = "Valid Amazon AWS access key"
}

variable "aws_secret_key" {
  description = "Valid Amazon AWS secret key"
}

# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
variable "aws_ec2_region" {
  description = "Amazon EC2 region to use"
  default = "us-east-1"
}

# https://aws.amazon.com/marketplace/pp/B00O7WM7QW
variable "aws_ec2_ami" {
  description = "The EC2 AMI to use"
  default = {
    "gateway" = "ami-6d1c2007"
    "webserver" = "ami-6d1c2007"
    "consumer" = "ami-6d1c2007"
  }
}

variable "sshkey" {
  description = "SSH key to use for provisioning"
}

variable "instance_names" {
  description = "Base names to use for instances"
  default = {
    "gateway" = "gateway"
    "webserver" = "webserver"
    "consumer" = "consumer"
  }
}

# https://aws.amazon.com/ec2/instance-types/
variable "instance_types" {
  description = "Instance types to use"
  default = {
    "gateway" = "m3.medium"
    "webserver" = "m3.medium"
    "consumer" = "m3.medium"
  }
}

variable "user_name" {
  default = {
    "ami-6d1c2007" = "centos"
  }
}

--------------------------------------------------------------------------------
/deployment/terraform/bin/README:
--------------------------------------------------------------------------------
Download a terraform-inventory binary that works for your platform from https://github.com/adammck/terraform-inventory/releases and put it in this directory.
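The README above leaves fetching the binary as a manual step. A sketch of how it could be scripted follows; the release tag and asset name are assumptions and must be checked against the actual releases page before use.

    #!/bin/bash
    # Hypothetical fetch script for terraform-inventory. VERSION and ASSET
    # are assumptions -- verify against
    # https://github.com/adammck/terraform-inventory/releases before running.
    VERSION="v0.7"
    ASSET="terraform-inventory_${VERSION}_linux_amd64.zip"
    curl -fsSL -o /tmp/terraform-inventory.zip \
        "https://github.com/adammck/terraform-inventory/releases/download/${VERSION}/${ASSET}"
    unzip -o /tmp/terraform-inventory.zip -d "$(dirname "$0")"
    chmod +x "$(dirname "$0")/terraform-inventory"

Once the binary is in place, the playbook can be run with ``ansible-playbook --inventory-file=bin/terraform-inventory playbook.yml`` as described in the README above.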
2 | -------------------------------------------------------------------------------- /deployment/terraform/deploy: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ "$1" = "" ]; then 4 | echo "Usage:" 5 | echo " deploy cloud-provider-name [custom-terraform-var-file]" 6 | echo "Examples:" 7 | echo " deploy aws-ec2" 8 | echo " deploy digitalocean" 9 | echo " deploy aws-ec2 terraform.tfvars.mine" 10 | fi 11 | 12 | if [ -d "$1" ]; then 13 | terraform apply -var-file="$2" "$1" 14 | ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i bin/terraform-inventory playbook.yml 15 | fi 16 | -------------------------------------------------------------------------------- /deployment/terraform/destroy: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ "$1" = "" ]; then 4 | echo "Usage:" 5 | echo " destroy cloud-provider-name [custom-terraform-var-file]" 6 | echo "Examples:" 7 | echo " destroy aws-ec2" 8 | echo " destroy digitalocean" 9 | echo " destroy aws-ec2 terraform.tfvars.mine" 10 | fi 11 | 12 | if [ -d "$1" ]; then 13 | terraform destroy -var-file="$2" "$1" 14 | fi 15 | -------------------------------------------------------------------------------- /deployment/terraform/digitalocean/main.tf: -------------------------------------------------------------------------------- 1 | # Configure the DigitalOcean Provider 2 | provider "digitalocean" { 3 | token = "${var.do_apikey}" 4 | } 5 | 6 | resource "digitalocean_ssh_key" "apiperf-sshkey" { 7 | name = "apigwperftest" 8 | public_key = "${var.sshkey}" 9 | } 10 | 11 | resource "digitalocean_droplet" "apiperf-webserver" { 12 | image = "${var.droplet_image}" 13 | private_networking = "${var.droplet_use_privatenet}" 14 | name = "apiperf-${lookup(var.droplet_names, "webserver")}" 15 | size = "${lookup(var.droplet_sizes, "webserver")}" 16 | region = "${var.droplet_region}" 17 | ssh_keys = [ "${digitalocean_ssh_key.apiperf-sshkey.id}" ] 18 | provisioner "local-exec" { 19 | command = "echo ${self.ipv4_address} ${lookup(var.droplet_names, "webserver")} > hosts/${lookup(var.droplet_names, "webserver")}.host" 20 | } 21 | provisioner "remote-exec" { 22 | inline = [ 23 | "service firewalld stop" 24 | ] 25 | } 26 | } 27 | 28 | resource "digitalocean_droplet" "apiperf-gateway" { 29 | image = "${var.droplet_image}" 30 | private_networking = "${var.droplet_use_privatenet}" 31 | name = "apiperf-${lookup(var.droplet_names, "gateway")}" 32 | size = "${lookup(var.droplet_sizes, "gateway")}" 33 | region = "${var.droplet_region}" 34 | ssh_keys = [ "${digitalocean_ssh_key.apiperf-sshkey.id}" ] 35 | provisioner "local-exec" { 36 | command = "echo ${self.ipv4_address} ${lookup(var.droplet_names, "gateway")} > hosts/${lookup(var.droplet_names, "gateway")}.host" 37 | } 38 | provisioner "remote-exec" { 39 | inline = [ 40 | "service firewalld stop" 41 | ] 42 | } 43 | } 44 | 45 | resource "digitalocean_droplet" "apiperf-consumer" { 46 | image = "${var.droplet_image}" 47 | private_networking = "${var.droplet_use_privatenet}" 48 | name = "apiperf-${lookup(var.droplet_names, "consumer")}" 49 | size = "${lookup(var.droplet_sizes, "consumer")}" 50 | region = "${var.droplet_region}" 51 | ssh_keys = [ "${digitalocean_ssh_key.apiperf-sshkey.id}" ] 52 | provisioner "local-exec" { 53 | command = "echo ${self.ipv4_address} ${lookup(var.droplet_names, "consumer")} > hosts/${lookup(var.droplet_names, "consumer")}.host" 54 | } 55 | provisioner "remote-exec" { 56 | inline = [ 57 | 
"service firewalld stop" 58 | ] 59 | } 60 | } 61 | 62 | resource "null_resource" "hostsfile" { 63 | triggers { 64 | droplet_consumer = "${digitalocean_droplet.apiperf-consumer.ipv4_address}" 65 | droplet_webserver = "${digitalocean_droplet.apiperf-webserver.ipv4_address}" 66 | droplet_gateway = "${digitalocean_droplet.apiperf-gateway.ipv4_address}" 67 | } 68 | provisioner "file" { 69 | connection { 70 | host = "${digitalocean_droplet.apiperf-consumer.ipv4_address}" 71 | } 72 | source = "hosts" 73 | destination = "/tmp" 74 | } 75 | provisioner "remote-exec" { 76 | connection { 77 | host = "${digitalocean_droplet.apiperf-consumer.ipv4_address}" 78 | } 79 | inline = [ 80 | "cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts", 81 | ] 82 | } 83 | provisioner "file" { 84 | connection { 85 | host = "${digitalocean_droplet.apiperf-gateway.ipv4_address}" 86 | } 87 | source = "hosts" 88 | destination = "/tmp" 89 | } 90 | provisioner "remote-exec" { 91 | connection { 92 | host = "${digitalocean_droplet.apiperf-gateway.ipv4_address}" 93 | } 94 | inline = [ 95 | "cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts", 96 | ] 97 | } 98 | provisioner "file" { 99 | connection { 100 | host = "${digitalocean_droplet.apiperf-webserver.ipv4_address}" 101 | } 102 | source = "hosts" 103 | destination = "/tmp" 104 | } 105 | provisioner "remote-exec" { 106 | connection { 107 | host = "${digitalocean_droplet.apiperf-webserver.ipv4_address}" 108 | } 109 | inline = [ 110 | "cat /tmp/hosts/hostsheader /tmp/hosts/*.host > /etc/hosts", 111 | ] 112 | } 113 | } 114 | -------------------------------------------------------------------------------- /deployment/terraform/digitalocean/vars.tf: -------------------------------------------------------------------------------- 1 | variable "do_apikey" { 2 | description = "Valid Digital Ocean API Key" 3 | } 4 | 5 | variable "sshkey" { 6 | description = "SSH key to use for provisioning" 7 | } 8 | 9 | variable "droplet_region" { 10 | description = "The region to use for droplets" 11 | default = "ams3" 12 | } 13 | 14 | variable "droplet_image" { 15 | description = "The droplet image to use" 16 | default = "centos-7-0-x64" 17 | } 18 | 19 | variable "droplet_use_privatenet" { 20 | description = "Disable or enable private networking" 21 | default = "False" 22 | } 23 | 24 | variable "droplet_names" { 25 | description = "Base names to use for droplets" 26 | default = { 27 | "gateway" = "gateway" 28 | "webserver" = "webserver" 29 | "consumer" = "consumer" 30 | } 31 | } 32 | 33 | variable "droplet_sizes" { 34 | description = "Droplet sizes" 35 | default = { 36 | "gateway" = "1gb" 37 | "webserver" = "1gb" 38 | "consumer" = "1gb" 39 | } 40 | } 41 | -------------------------------------------------------------------------------- /deployment/terraform/hosts/README: -------------------------------------------------------------------------------- 1 | # Hostsfile autogenerated from provisioning data 2 | -------------------------------------------------------------------------------- /deployment/terraform/hosts/hostsheader: -------------------------------------------------------------------------------- 1 | # Terraform generated hosts-file 2 | 3 | 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 4 | ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 5 | 6 | # Terraform nodes below 7 | 8 | -------------------------------------------------------------------------------- /deployment/terraform/playbook.yml: 
-------------------------------------------------------------------------------- 1 | - name: "Common setup for all hosts" 2 | hosts: all 3 | tasks: 4 | - name: "Install the EPEL repo" 5 | yum: name="{{ item }}" state="installed" 6 | with_items: 7 | - "epel-release" 8 | - name: "Install common tools" 9 | yum: name="{{ item }}" state="installed" 10 | with_items: 11 | - "git" 12 | - "ansible" 13 | - "wget" 14 | - name: "Fetch the api-gateway-benchmarks repo" 15 | git: repo=https://github.com/varnish/api-gateway-benchmarks.git 16 | dest=/opt/api-gateway-benchmarks 17 | 18 | - name: "Custom setup for webserver" 19 | hosts: apiperf-webserver 20 | tasks: 21 | - name: "Webserver Playbook: dummy-api" 22 | shell: ./deploy 23 | args: 24 | chdir: /opt/api-gateway-benchmarks/webservers/dummy-api 25 | register: ansible_dummyapi 26 | - debug: var=ansible_dummyapi.stdout_lines 27 | 28 | - name: "Custom setup for gateway" 29 | hosts: apiperf-gateway 30 | tasks: 31 | - name: "Gateway Playbook: kong" 32 | shell: ./deploy 33 | args: 34 | chdir: /opt/api-gateway-benchmarks/gateways/kong 35 | register: ansible_kong 36 | - debug: var=ansible_kong.stdout_lines 37 | # tasks: 38 | # - name: "Gateway Playbook: tyk" 39 | # shell: ./deploy 40 | # args: 41 | # chdir: /opt/api-gateway-benchmarks/gateways/tyk 42 | # register: ansible_tyk 43 | # - debug: var=ansible_tyk.stdout_lines 44 | # tasks: 45 | # - name: "Gateway Playbook: wso2" 46 | # shell: ./deploy 47 | # args: 48 | # chdir: /opt/api-gateway-benchmarks/gateways/wso2 49 | # register: ansible_wso2 50 | # - debug: var=ansible_wso2.stdout_lines 51 | 52 | - name: "consumer" 53 | hosts: apiperf-consumer 54 | tasks: 55 | - name: "Consumer Playbook: boom" 56 | shell: ./deploy 57 | args: 58 | chdir: /opt/api-gateway-benchmarks/consumers/boom 59 | register: ansible_boom 60 | - debug: var=ansible_boom.stdout_lines 61 | - name: "Run test: 00" 62 | command: /usr/local/bin/test00 63 | register: testresult 64 | - debug: var=testresult.stdout_lines 65 | - name: "Run test: 01" 66 | command: /usr/local/bin/test01 67 | register: testresult 68 | - debug: var=testresult.stdout_lines 69 | - name: "Run test: 02" 70 | command: /usr/local/bin/test02 71 | register: testresult 72 | - debug: var=testresult.stdout_lines 73 | - name: "Run test: 03" 74 | command: /usr/local/bin/test03 75 | register: testresult 76 | - debug: var=testresult.stdout_lines 77 | - name: "Run test: 04" 78 | command: /usr/local/bin/test04 79 | register: testresult 80 | - debug: var=testresult.stdout_lines 81 | -------------------------------------------------------------------------------- /deployment/terraform/terraform.tfvars: -------------------------------------------------------------------------------- 1 | # AWS Credentials 2 | aws_access_key = "" 3 | aws_secret_key = "" 4 | # Digital Ocean Credentials 5 | do_apikey = "" 6 | # SSH key to install and use during deployment and provisioning 7 | sshkey = "" 8 | # If above SSH key is not in the users keychain, specify path 9 | #sshkey_path = "${file("path/to/private_key")}" 10 | -------------------------------------------------------------------------------- /deployment/vagrant/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | def linux config, ip, box, hostname, box_url 5 | config.vm.define hostname.to_sym do |c| 6 | # Give the VM a public or private IP address. 
PS: By default, the vagrant
    # images are not secure to expose as they allow login with known keys and
    # passwords.
    #c.vm.network :public_network
    c.vm.network :private_network, ip: ip

    # Sync our top folder
    c.vm.synced_folder "../../", "/opt/benchmarks", owner: "root", group: "root"

    # Force the VM names instead of using generated ones. May cause problems
    # if running the same VM in different vagrant projects.
    #c.vm.provider :virtualbox do |v|
    #  v.customize ['modifyvm', :id, '--name', box.to_sym]
    #end

    # Memory (MB) and CPU (vCPU) allocation
    c.vm.provider :virtualbox do |v|
      v.memory = 2048
      v.cpus = 1
    end

    c.vm.box_url = box_url
    c.vm.box = box
    c.vm.hostname = '%s.local' % hostname.to_sym
    c.vm.boot_timeout = 900

    # Deploy vagrant environment specific configuration
    c.vm.provision "setup", type: "ansible" do |ansible|
      ansible.sudo = true
      ansible.playbook = "ansible/setup.yml"
    end

    # Useful when rebuilding image based on existing box
    # to not overwrite the insecure pubkey from:
    # https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
    #
    #c.ssh.insert_key = false
  end
end

# Arguments: config, ip, box, hostname, box_url
Vagrant.configure('2') do |config|
  linux config, '10.11.12.20', 'varnish/centos-7-amd64', 'consumer', 'https://images.varnish-software.com/vagrant/centos-7-amd64.json'
  linux config, '10.11.12.21', 'varnish/centos-7-amd64', 'gateway', 'https://images.varnish-software.com/vagrant/centos-7-amd64.json'
  linux config, '10.11.12.22', 'varnish/centos-7-amd64', 'webserver', 'https://images.varnish-software.com/vagrant/centos-7-amd64.json'
end

--------------------------------------------------------------------------------
/deployment/vagrant/ansible/setup.yml:
--------------------------------------------------------------------------------
- name: "Vagrant specific configuration"
  hosts: all
  tasks:
    - name: "Ensure /etc/hosts is distributed"
      template: src=templates/hosts.j2 dest=/etc/hosts backup=yes
    - name: "Ensure the firewall is disabled"
      service: name="firewalld" state="stopped" enabled="no"

--------------------------------------------------------------------------------
/deployment/vagrant/ansible/templates/hosts.j2:
--------------------------------------------------------------------------------
# {{ ansible_managed }}

127.0.0.1 localhost
10.11.12.20 consumer
10.11.12.21 gateway
10.11.12.22 webserver

--------------------------------------------------------------------------------
/gateways/apiman/README.md:
--------------------------------------------------------------------------------
# Apiman

Wildfly is available at port 9990 (http). The username and password are admin/admin.
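After ``deploy`` has finished, a quick smoke test from the consumer host can confirm that the gateway behaves according to the test specifications in the top-level README. This is a sketch under the assumption that the ``/etc/hosts`` entries are in place; the exact status code for a rejected key is gateway-specific.

    #!/bin/bash
    # Smoke test against the gateway; expected codes follow the test specs.
    curl -s -o /dev/null -w "test01: %{http_code}\n" http://gateway:8000/test01                     # expect 200
    curl -s -o /dev/null -w "test02: %{http_code}\n" -H "apikey: key02" http://gateway:8000/test02  # expect 200
    curl -s -o /dev/null -w "test02 (no key): %{http_code}\n" http://gateway:8000/test02            # expect a 4xx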
--------------------------------------------------------------------------------
/gateways/apiman/configure:
--------------------------------------------------------------------------------
#!/bin/bash -x

cd /usr/local/wildfly-9.0.2.Final/bin

# Create admin user
./add-user.sh -e -u admin -p admin

--------------------------------------------------------------------------------
/gateways/apiman/deploy:
--------------------------------------------------------------------------------
#!/bin/bash -x

# Check if Ansible is installed
which ansible-playbook > /dev/null 2>&1
if [ $? -ne 0 ]; then
    # Install Ansible if it's not
    yum install -y ansible
fi

ansible-playbook -c local -i "localhost," setup.yml

./configure

--------------------------------------------------------------------------------
/gateways/apiman/files/apiman.service:
--------------------------------------------------------------------------------
[Unit]
Description=apiman
After=network.target

[Service]
WorkingDirectory=/usr/local/wildfly-9.0.2.Final
ExecStart=/usr/local/wildfly-9.0.2.Final/bin/standalone.sh -c standalone-apiman.xml
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=apiman
PrivateTmp=true
User=apiman
Group=apiman

[Install]
WantedBy=default.target

--------------------------------------------------------------------------------
/gateways/apiman/files/standalone-apiman.xml:
--------------------------------------------------------------------------------
[The XML markup of this WildFly/apiman configuration was stripped during text
extraction; only stray literal values survive. Recoverable details: two H2
datasources (jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE and
jdbc:h2:${jboss.server.data.dir}/keycloak;AUTO_SERVER=TRUE, user/password
sa/sa), a 0.0.0.0 bind address, and a Keycloak realm "apiman" served from
/auth with the clients apiman, apimanui and apiman-gateway-api. See the
repository for the complete file.]

--------------------------------------------------------------------------------
/gateways/apiman/setup.yml:
--------------------------------------------------------------------------------
- name: "Setup Apiman"
  hosts: all
  tasks:

    # From http://www.apiman.io/latest/download.html
    # "You'll obviously also need to have Java installed - we currently recommend at least version 7."
7 | - name: "Ensure dependencies are installed" 8 | yum: name="{{ item }}" state="installed" 9 | with_items: 10 | - "java-1.7.0-openjdk" 11 | 12 | - name: "Ensure wildfly is downloaded" 13 | get_url: url="http://download.jboss.org/wildfly/9.0.2.Final/wildfly-9.0.2.Final.zip" 14 | dest="/usr/local/wildfly.zip" 15 | timeout=1800 16 | 17 | - name: "Ensure apiman is downloaded" 18 | get_url: url="http://downloads.jboss.org/apiman/1.2.1.Final/apiman-distro-wildfly9-1.2.1.Final-overlay.zip" 19 | dest="/usr/local/apiman.zip" 20 | timeout=1800 21 | 22 | - name: "Extract wildfly" 23 | command: "unzip wildfly.zip" 24 | args: 25 | chdir: "/usr/local" 26 | creates: "/usr/local/wildfly-9.0.2.Final" 27 | 28 | # apiman is extracted inside the wildfly directory 29 | - name: "Extract apiman" 30 | command: "unzip apiman.zip -d /usr/local/wildfly-9.0.2.Final" 31 | args: 32 | chdir: "/usr/local" 33 | creates: "/usr/local/wildfly-9.0.2.Final/apiman" 34 | 35 | - name: "Put configuration in place" 36 | copy: src="files/standalone-apiman.xml" 37 | dest="/usr/local/wildfly-9.0.2.Final/standalone/configuration/standalone-apiman.xml" 38 | mode=0444 39 | 40 | - name: "Ensure the apiman user is present" 41 | user: name="apiman" comment="apiman system user" 42 | system="yes" state="present" 43 | 44 | - name: "Set ownerships" 45 | file: path="{{ item }}" owner="apiman" group="apiman" recurse="yes" 46 | with_items: 47 | - "/usr/local/wildfly-9.0.2.Final/" 48 | 49 | - name: "Ensure apiman service script is present" 50 | copy: src="files/apiman.service" 51 | dest="/etc/systemd/system/apiman.service" 52 | mode=444 owner=root group=root 53 | notify: 54 | - "Restart apiman" 55 | 56 | - name: "Ensure apiman is running" 57 | service: name="apiman" state="running" enabled="yes" 58 | handlers: 59 | - name: "Restart apiman" 60 | service: name="apiman" state="restarted" 61 | -------------------------------------------------------------------------------- /gateways/kong/configure: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # Test 1 4 | curl -i -X POST http://localhost:8001/apis/ --data 'request_path=/test01' --data 'upstream_url=http://webserver:8888/test01' --data 'name=test01' 5 | 6 | # Test 2 7 | curl -i -X POST http://localhost:8001/apis/ --data 'request_path=/test02' --data 'upstream_url=http://webserver:8888/test02' --data 'name=test02' 8 | curl -i -X POST http://localhost:8001/apis/test02/plugins --data "name=key-auth" # Enable key based authentication 9 | curl -i -X POST http://localhost:8001/consumers/ --data "username=test02" # Create consumer 10 | curl -i -X POST http://localhost:8001/consumers/test02/key-auth --data "key=key02" # Set the API key 11 | 12 | # Test 3 13 | curl -i -X POST http://localhost:8001/apis/ --data 'request_path=/test03' --data 'upstream_url=http://webserver:8888/test03' --data 'name=test03' 14 | curl -i -X POST http://localhost:8001/apis/test03/plugins --data "name=key-auth" # Enable key based authentication 15 | curl -i -X POST http://localhost:8001/consumers/ --data "username=test03" # Create consumer 16 | curl -i -X POST http://localhost:8001/consumers/test03/key-auth --data "key=key03" # Set the API key 17 | curl -i -X POST http://localhost:8001/apis/test03/plugins --data "name=rate-limiting" --data "config.minute=60000000" # 1M per second on avg, rate limiting 18 | 19 | # Test 4 20 | curl -i -X POST http://localhost:8001/apis/ --data 'request_path=/test04' --data 'upstream_url=http://webserver:8888/test04' --data 'name=test04' 21 
| curl -i -X POST http://localhost:8001/apis/test04/plugins --data "name=key-auth" # Enable key based authentication 22 | curl -i -X POST http://localhost:8001/consumers/ --data "username=test04" # Create consumer 23 | curl -i -X POST http://localhost:8001/consumers/test04/key-auth --data "key=key04" # Set the API key 24 | curl -i -X POST http://localhost:8001/apis/test04/plugins --data "name=rate-limiting" --data "config.second=1" # 1 rps allowed, strict rate limiting 25 | 26 | -------------------------------------------------------------------------------- /gateways/kong/deploy: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # Check if Ansible is installed 4 | which ansible-playbook 2>&1 > /dev/null 5 | if [ $? -ne 0 ]; then 6 | # Install Ansible if it's not 7 | yum install -y ansible 8 | fi 9 | 10 | ansible-playbook -c local -i "localhost," setup.yml 11 | 12 | ./configure 13 | -------------------------------------------------------------------------------- /gateways/kong/files/cassandra.repo: -------------------------------------------------------------------------------- 1 | [datastax] 2 | name=DataStax Repo for Apache Cassandra 3 | baseurl=http://rpm.datastax.com/community 4 | enabled=1 5 | gpgcheck=0 6 | -------------------------------------------------------------------------------- /gateways/kong/files/cassandra.yaml: -------------------------------------------------------------------------------- 1 | # Cassandra storage config YAML 2 | 3 | # NOTE: 4 | # See http://wiki.apache.org/cassandra/StorageConfiguration for 5 | # full explanations of configuration directives 6 | # /NOTE 7 | 8 | # The name of the cluster. This is mainly used to prevent machines in 9 | # one logical cluster from joining another. 10 | cluster_name: 'Test Cluster' 11 | 12 | # This defines the number of tokens randomly assigned to this node on the ring 13 | # The more tokens, relative to other nodes, the larger the proportion of data 14 | # that this node will store. You probably want all nodes to have the same number 15 | # of tokens assuming they have equal hardware capability. 16 | # 17 | # If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility, 18 | # and will use the initial_token as described below. 19 | # 20 | # Specifying initial_token will override this setting on the node's initial start, 21 | # on subsequent starts, this setting will apply even if initial token is set. 22 | # 23 | # If you already have a cluster with 1 token per node, and wish to migrate to 24 | # multiple tokens per node, see http://wiki.apache.org/cassandra/Operations 25 | num_tokens: 256 26 | 27 | # initial_token allows you to specify tokens manually. While you can use # it with 28 | # vnodes (num_tokens > 1, above) -- in which case you should provide a 29 | # comma-separated list -- it's primarily used when adding nodes # to legacy clusters 30 | # that do not have vnodes enabled. 31 | # initial_token: 32 | 33 | # See http://wiki.apache.org/cassandra/HintedHandoff 34 | # May either be "true" or "false" to enable globally, or contain a list 35 | # of data centers to enable per-datacenter. 36 | # hinted_handoff_enabled: DC1,DC2 37 | hinted_handoff_enabled: true 38 | # this defines the maximum amount of time a dead host will have hints 39 | # generated. After it has been dead this long, new hints for it will not be 40 | # created until it has been seen alive and gone down again. 
41 | max_hint_window_in_ms: 10800000 # 3 hours 42 | # Maximum throttle in KBs per second, per delivery thread. This will be 43 | # reduced proportionally to the number of nodes in the cluster. (If there 44 | # are two nodes in the cluster, each delivery thread will use the maximum 45 | # rate; if there are three, each will throttle to half of the maximum, 46 | # since we expect two nodes to be delivering hints simultaneously.) 47 | hinted_handoff_throttle_in_kb: 1024 48 | # Number of threads with which to deliver hints; 49 | # Consider increasing this number when you have multi-dc deployments, since 50 | # cross-dc handoff tends to be slower 51 | max_hints_delivery_threads: 2 52 | 53 | # Maximum throttle in KBs per second, total. This will be 54 | # reduced proportionally to the number of nodes in the cluster. 55 | batchlog_replay_throttle_in_kb: 1024 56 | 57 | # Authentication backend, implementing IAuthenticator; used to identify users 58 | # Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator, 59 | # PasswordAuthenticator}. 60 | # 61 | # - AllowAllAuthenticator performs no checks - set it to disable authentication. 62 | # - PasswordAuthenticator relies on username/password pairs to authenticate 63 | # users. It keeps usernames and hashed passwords in system_auth.credentials table. 64 | # Please increase system_auth keyspace replication factor if you use this authenticator. 65 | authenticator: AllowAllAuthenticator 66 | 67 | # Authorization backend, implementing IAuthorizer; used to limit access/provide permissions 68 | # Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer, 69 | # CassandraAuthorizer}. 70 | # 71 | # - AllowAllAuthorizer allows any action to any user - set it to disable authorization. 72 | # - CassandraAuthorizer stores permissions in system_auth.permissions table. Please 73 | # increase system_auth keyspace replication factor if you use this authorizer. 74 | authorizer: AllowAllAuthorizer 75 | 76 | # Validity period for permissions cache (fetching permissions can be an 77 | # expensive operation depending on the authorizer, CassandraAuthorizer is 78 | # one example). Defaults to 2000, set to 0 to disable. 79 | # Will be disabled automatically for AllowAllAuthorizer. 80 | permissions_validity_in_ms: 2000 81 | 82 | # Refresh interval for permissions cache (if enabled). 83 | # After this interval, cache entries become eligible for refresh. Upon next 84 | # access, an async reload is scheduled and the old value returned until it 85 | # completes. If permissions_validity_in_ms is non-zero, then this must be 86 | # also. 87 | # Defaults to the same value as permissions_validity_in_ms. 88 | # permissions_update_interval_in_ms: 1000 89 | 90 | # The partitioner is responsible for distributing groups of rows (by 91 | # partition key) across nodes in the cluster. You should leave this 92 | # alone for new clusters. The partitioner can NOT be changed without 93 | # reloading all data, so when upgrading you should set this to the 94 | # same partitioner you were already using. 95 | # 96 | # Besides Murmur3Partitioner, partitioners included for backwards 97 | # compatibility include RandomPartitioner, ByteOrderedPartitioner, and 98 | # OrderPreservingPartitioner. 99 | # 100 | partitioner: org.apache.cassandra.dht.Murmur3Partitioner 101 | 102 | # Directories where Cassandra should store data on disk. 
Cassandra 103 | # will spread data evenly across them, subject to the granularity of 104 | # the configured compaction strategy. 105 | # If not set, the default directory is $CASSANDRA_HOME/data/data. 106 | data_file_directories: 107 | - /var/lib/cassandra/data 108 | 109 | # commit log. when running on magnetic HDD, this should be a 110 | # separate spindle than the data directories. 111 | # If not set, the default directory is $CASSANDRA_HOME/data/commitlog. 112 | commitlog_directory: /var/lib/cassandra/commitlog 113 | 114 | # policy for data disk failures: 115 | # die: shut down gossip and client transports and kill the JVM for any fs errors or 116 | # single-sstable errors, so the node can be replaced. 117 | # stop_paranoid: shut down gossip and client transports even for single-sstable errors, 118 | # kill the JVM for errors during startup. 119 | # stop: shut down gossip and client transports, leaving the node effectively dead, but 120 | # can still be inspected via JMX, kill the JVM for errors during startup. 121 | # best_effort: stop using the failed disk and respond to requests based on 122 | # remaining available sstables. This means you WILL see obsolete 123 | # data at CL.ONE! 124 | # ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra 125 | disk_failure_policy: stop 126 | 127 | # policy for commit disk failures: 128 | # die: shut down gossip and Thrift and kill the JVM, so the node can be replaced. 129 | # stop: shut down gossip and Thrift, leaving the node effectively dead, but 130 | # can still be inspected via JMX. 131 | # stop_commit: shutdown the commit log, letting writes collect but 132 | # continuing to service reads, as in pre-2.0.5 Cassandra 133 | # ignore: ignore fatal errors and let the batches fail 134 | commit_failure_policy: stop 135 | 136 | # Maximum size of the key cache in memory. 137 | # 138 | # Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the 139 | # minimum, sometimes more. The key cache is fairly tiny for the amount of 140 | # time it saves, so it's worthwhile to use it at large numbers. 141 | # The row cache saves even more time, but must contain the entire row, 142 | # so it is extremely space-intensive. It's best to only use the 143 | # row cache if you have hot rows or static rows. 144 | # 145 | # NOTE: if you reduce the size, you may not get you hottest keys loaded on startup. 146 | # 147 | # Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache. 148 | key_cache_size_in_mb: 149 | 150 | # Duration in seconds after which Cassandra should 151 | # save the key cache. Caches are saved to saved_caches_directory as 152 | # specified in this configuration file. 153 | # 154 | # Saved caches greatly improve cold-start speeds, and is relatively cheap in 155 | # terms of I/O for the key cache. Row cache saving is much more expensive and 156 | # has limited use. 157 | # 158 | # Default is 14400 or 4 hours. 159 | key_cache_save_period: 14400 160 | 161 | # Number of keys from the key cache to save 162 | # Disabled by default, meaning all keys are going to be saved 163 | # key_cache_keys_to_save: 100 164 | 165 | # Maximum size of the row cache in memory. 166 | # NOTE: if you reduce the size, you may not get you hottest keys loaded on startup. 167 | # 168 | # Default value is 0, to disable row caching. 169 | row_cache_size_in_mb: 0 170 | 171 | # Duration in seconds after which Cassandra should 172 | # save the row cache. 
Caches are saved to saved_caches_directory as specified 173 | # in this configuration file. 174 | # 175 | # Saved caches greatly improve cold-start speeds, and are relatively cheap in 176 | # terms of I/O for the key cache. Row cache saving is much more expensive and 177 | # has limited use. 178 | # 179 | # Default is 0 to disable saving the row cache. 180 | row_cache_save_period: 0 181 | 182 | # Number of keys from the row cache to save 183 | # Disabled by default, meaning all keys are going to be saved 184 | # row_cache_keys_to_save: 100 185 | 186 | # Maximum size of the counter cache in memory. 187 | # 188 | # Counter cache helps to reduce counter locks' contention for hot counter cells. 189 | # In case of RF = 1 a counter cache hit will cause Cassandra to skip the read before 190 | # write entirely. With RF > 1 a counter cache hit will still help to reduce the duration 191 | # of the lock hold, helping with hot counter cell updates, but will not allow skipping 192 | # the read entirely. Only the local (clock, count) tuple of a counter cell is kept 193 | # in memory, not the whole counter, so it's relatively cheap. 194 | # 195 | # NOTE: if you reduce the size, you may not get your hottest keys loaded on startup. 196 | # 197 | # Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache. 198 | # NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache. 199 | counter_cache_size_in_mb: 200 | 201 | # Duration in seconds after which Cassandra should 202 | # save the counter cache (keys only). Caches are saved to saved_caches_directory as 203 | # specified in this configuration file. 204 | # 205 | # Default is 7200 or 2 hours. 206 | counter_cache_save_period: 7200 207 | 208 | # Number of keys from the counter cache to save 209 | # Disabled by default, meaning all keys are going to be saved 210 | # counter_cache_keys_to_save: 100 211 | 212 | # The off-heap memory allocator. Affects storage engine metadata as 213 | # well as caches. Experiments show that JEMalloc saves some memory 214 | # compared to the native GCC allocator (i.e., JEMalloc is more 215 | # fragmentation-resistant). 216 | # 217 | # Supported values are: NativeAllocator, JEMallocAllocator 218 | # 219 | # If you intend to use JEMallocAllocator you have to install JEMalloc as a library and 220 | # modify cassandra-env.sh as directed in the file. 221 | # 222 | # Defaults to NativeAllocator 223 | # memory_allocator: NativeAllocator 224 | 225 | # saved caches 226 | # If not set, the default directory is $CASSANDRA_HOME/data/saved_caches. 227 | saved_caches_directory: /var/lib/cassandra/saved_caches 228 | 229 | # commitlog_sync may be either "periodic" or "batch." 230 | # 231 | # When in batch mode, Cassandra won't ack writes until the commit log 232 | # has been fsynced to disk. It will wait 233 | # commitlog_sync_batch_window_in_ms milliseconds between fsyncs. 234 | # This window should be kept short because the writer threads will 235 | # be unable to do extra work while waiting. (You may need to increase 236 | # concurrent_writes for the same reason.) 237 | # 238 | # commitlog_sync: batch 239 | # commitlog_sync_batch_window_in_ms: 2 240 | # 241 | # the other option is "periodic" where writes may be acked immediately 242 | # and the CommitLog is simply synced every commitlog_sync_period_in_ms 243 | # milliseconds. 244 | commitlog_sync: periodic 245 | commitlog_sync_period_in_ms: 10000 246 | 247 | # The size of the individual commitlog file segments.
A commitlog 248 | # segment may be archived, deleted, or recycled once all the data 249 | # in it (potentially from each columnfamily in the system) has been 250 | # flushed to sstables. 251 | # 252 | # The default size is 32, which is almost always fine, but if you are 253 | # archiving commitlog segments (see commitlog_archiving.properties), 254 | # then you probably want a finer granularity of archiving; 8 or 16 MB 255 | # is reasonable. 256 | commitlog_segment_size_in_mb: 32 257 | 258 | # Reuse commit log files when possible. The default is false, and this 259 | # feature will be removed entirely in future versions of Cassandra. 260 | #commitlog_segment_recycling: false 261 | 262 | # any class that implements the SeedProvider interface and has a 263 | # constructor that takes a Map of parameters will do. 264 | seed_provider: 265 | # Addresses of hosts that are deemed contact points. 266 | # Cassandra nodes use this list of hosts to find each other and learn 267 | # the topology of the ring. You must change this if you are running 268 | # multiple nodes! 269 | - class_name: org.apache.cassandra.locator.SimpleSeedProvider 270 | parameters: 271 | # seeds is actually a comma-delimited list of addresses. 272 | # Ex: ",," 273 | - seeds: "127.0.0.1" 274 | 275 | # For workloads with more data than can fit in memory, Cassandra's 276 | # bottleneck will be reads that need to fetch data from 277 | # disk. "concurrent_reads" should be set to (16 * number_of_drives) in 278 | # order to allow the operations to enqueue low enough in the stack 279 | # that the OS and drives can reorder them. Same applies to 280 | # "concurrent_counter_writes", since counter writes read the current 281 | # values before incrementing and writing them back. 282 | # 283 | # On the other hand, since writes are almost never IO bound, the ideal 284 | # number of "concurrent_writes" is dependent on the number of cores in 285 | # your system; (8 * number_of_cores) is a good rule of thumb. 286 | concurrent_reads: 32 287 | concurrent_writes: 32 288 | concurrent_counter_writes: 32 289 | 290 | # Total memory to use for sstable-reading buffers. Defaults to 291 | # the smaller of 1/4 of heap or 512MB. 292 | # file_cache_size_in_mb: 512 293 | 294 | # Total permitted memory to use for memtables. Cassandra will stop 295 | # accepting writes when the limit is exceeded until a flush completes, 296 | # and will trigger a flush based on memtable_cleanup_threshold. 297 | # If omitted, Cassandra will set both to 1/4 the size of the heap. 298 | # memtable_heap_space_in_mb: 2048 299 | # memtable_offheap_space_in_mb: 2048 300 | 301 | # Ratio of occupied non-flushing memtable size to total permitted size 302 | # that will trigger a flush of the largest memtable. Larger mct will 303 | # mean larger flushes and hence less compaction, but also less concurrent 304 | # flush activity which can make it difficult to keep your disks fed 305 | # under heavy write load. 306 | # 307 | # memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1) 308 | # memtable_cleanup_threshold: 0.11 309 | 310 | # Specify the way Cassandra allocates and manages memtable memory. 311 | # Options are: 312 | # heap_buffers: on heap nio buffers 313 | # offheap_buffers: off heap (direct) nio buffers 314 | # offheap_objects: native memory, eliminating nio buffer heap overhead 315 | memtable_allocation_type: heap_buffers 316 | 317 | # Total space to use for commitlogs.
Since commitlog segments are 318 | # mmapped, and hence use up address space, the default size is 32 319 | # on 32-bit JVMs, and 8192 on 64-bit JVMs. 320 | # 321 | # If space gets above this value (it will round up to the next nearest 322 | # segment multiple), Cassandra will flush every dirty CF in the oldest 323 | # segment and remove it. So a small total commitlog space will tend 324 | # to cause more flush activity on less-active columnfamilies. 325 | # commitlog_total_space_in_mb: 8192 326 | 327 | # This sets the number of memtable flush writer threads. These will 328 | # be blocked by disk io, and each one will hold a memtable in memory 329 | # while blocked. 330 | # 331 | # memtable_flush_writers defaults to the smaller of (number of disks, 332 | # number of cores), with a minimum of 2 and a maximum of 8. 333 | # 334 | # If your data directories are backed by SSD, you should increase this 335 | # to the number of cores. 336 | #memtable_flush_writers: 8 337 | 338 | # A fixed memory pool size in MB for SSTable index summaries. If left 339 | # empty, this will default to 5% of the heap size. If the memory usage of 340 | # all index summaries exceeds this limit, SSTables with low read rates will 341 | # shrink their index summaries in order to meet this limit. However, this 342 | # is a best-effort process. In extreme conditions Cassandra may need to use 343 | # more than this amount of memory. 344 | index_summary_capacity_in_mb: 345 | 346 | # How frequently index summaries should be resampled. This is done 347 | # periodically to redistribute memory from the fixed-size pool to sstables 348 | # proportional to their recent read rates. Setting to -1 will disable this 349 | # process, leaving existing index summaries at their current sampling level. 350 | index_summary_resize_interval_in_minutes: 60 351 | 352 | # Whether to, when doing sequential writing, fsync() at intervals in 353 | # order to force the operating system to flush the dirty 354 | # buffers. Enable this to avoid sudden dirty buffer flushing from 355 | # impacting read latencies. Almost always a good idea on SSDs; not 356 | # necessarily on platters. 357 | trickle_fsync: false 358 | trickle_fsync_interval_in_kb: 10240 359 | 360 | # TCP port, for commands and data 361 | # For security reasons, you should not expose this port to the internet. Firewall it if needed. 362 | storage_port: 7000 363 | 364 | # SSL port, for encrypted communication. Unused unless enabled in 365 | # encryption_options 366 | # For security reasons, you should not expose this port to the internet. Firewall it if needed. 367 | ssl_storage_port: 7001 368 | 369 | # Address or interface to bind to and tell other Cassandra nodes to connect to. 370 | # You _must_ change this if you want multiple nodes to be able to communicate! 371 | # 372 | # Set listen_address OR listen_interface, not both. Interfaces must correspond 373 | # to a single address, IP aliasing is not supported. 374 | # 375 | # Leaving it blank leaves it up to InetAddress.getLocalHost(). This 376 | # will always do the Right Thing _if_ the node is properly configured 377 | # (hostname, name resolution, etc), and the Right Thing is to use the 378 | # address associated with the hostname (it might not be). 379 | # 380 | # Setting listen_address to 0.0.0.0 is always wrong. 381 | # 382 | # If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address 383 | # you can specify which should be chosen using listen_interface_prefer_ipv6.
If false the first ipv4 384 | # address will be used. If true the first ipv6 address will be used. Defaults to false preferring 385 | # ipv4. If there is only one address it will be selected regardless of ipv4/ipv6. 386 | listen_address: localhost 387 | # listen_interface: eth0 388 | # listen_interface_prefer_ipv6: false 389 | 390 | # Address to broadcast to other Cassandra nodes 391 | # Leaving this blank will set it to the same value as listen_address 392 | # broadcast_address: 1.2.3.4 393 | 394 | # Internode authentication backend, implementing IInternodeAuthenticator; 395 | # used to allow/disallow connections from peer nodes. 396 | # internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator 397 | 398 | # Whether to start the native transport server. 399 | # Please note that the address on which the native transport is bound is the 400 | # same as the rpc_address. The port however is different and specified below. 401 | start_native_transport: true 402 | # port for the CQL native transport to listen for clients on 403 | # For security reasons, you should not expose this port to the internet. Firewall it if needed. 404 | native_transport_port: 9042 405 | # The maximum number of threads for handling requests when the native transport is used. 406 | # This is similar to rpc_max_threads though the default differs slightly (and 407 | # there is no native_transport_min_threads, idle threads will always be stopped 408 | # after 30 seconds). 409 | # native_transport_max_threads: 128 410 | # 411 | # The maximum size of an allowed frame. Frames (requests) larger than this will 412 | # be rejected as invalid. The default is 256MB. 413 | # native_transport_max_frame_size_in_mb: 256 414 | 415 | # The maximum number of concurrent client connections. 416 | # The default is -1, which means unlimited. 417 | # native_transport_max_concurrent_connections: -1 418 | 419 | # The maximum number of concurrent client connections per source ip. 420 | # The default is -1, which means unlimited. 421 | # native_transport_max_concurrent_connections_per_ip: -1 422 | 423 | # Whether to start the thrift rpc server. 424 | start_rpc: true 425 | 426 | # The address or interface to bind the Thrift RPC service and native transport 427 | # server to. 428 | # 429 | # Set rpc_address OR rpc_interface, not both. Interfaces must correspond 430 | # to a single address, IP aliasing is not supported. 431 | # 432 | # Leaving rpc_address blank has the same effect as on listen_address 433 | # (i.e. it will be based on the configured hostname of the node). 434 | # 435 | # Note that unlike listen_address, you can specify 0.0.0.0, but you must also 436 | # set broadcast_rpc_address to a value other than 0.0.0.0. 437 | # 438 | # For security reasons, you should not expose this port to the internet. Firewall it if needed. 439 | # 440 | # If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address 441 | # you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4 442 | # address will be used. If true the first ipv6 address will be used. Defaults to false preferring 443 | # ipv4. If there is only one address it will be selected regardless of ipv4/ipv6. 444 | rpc_address: localhost 445 | #rpc_interface: eth0 446 | # rpc_interface_prefer_ipv6: false 447 | 448 | # port for Thrift to listen for clients on 449 | rpc_port: 9160 450 | 451 | # RPC address to broadcast to drivers and other Cassandra nodes. This cannot 452 | # be set to 0.0.0.0.
If left blank, this will be set to the value of 453 | # rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must 454 | # be set. 455 | # broadcast_rpc_address: 1.2.3.4 456 | 457 | # enable or disable keepalive on rpc/native connections 458 | rpc_keepalive: true 459 | 460 | # Cassandra provides two out-of-the-box options for the RPC Server: 461 | # 462 | # sync -> One thread per thrift connection. For a very large number of clients, memory 463 | # will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size 464 | # per thread, and that will correspond to your use of virtual memory (but physical memory 465 | # may be limited depending on use of stack space). 466 | # 467 | # hsha -> Stands for "half synchronous, half asynchronous." All thrift clients are handled 468 | # asynchronously using a small number of threads that does not vary with the number 469 | # of thrift clients (and thus scales well to many clients). The rpc requests are still 470 | # synchronous (one thread per active request). If hsha is selected then it is essential 471 | # that rpc_max_threads is changed from the default value of unlimited. 472 | # 473 | # The default is sync because on Windows hsha is about 30% slower. On Linux, 474 | # sync/hsha performance is about the same, with hsha of course using less memory. 475 | # 476 | # Alternatively, you can provide your own RPC server by providing the fully-qualified class name 477 | # of an o.a.c.t.TServerFactory that can create an instance of it. 478 | rpc_server_type: sync 479 | 480 | # Uncomment rpc_min|max_thread to set request pool size limits. 481 | # 482 | # Regardless of your choice of RPC server (see above), the number of maximum requests in the 483 | # RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync 484 | # RPC server, it also dictates the number of clients that can be connected at all). 485 | # 486 | # The default is unlimited and thus provides no protection against clients overwhelming the server. You are 487 | # encouraged to set a maximum that makes sense for you in production, but do keep in mind that 488 | # rpc_max_threads represents the maximum number of client requests this server may execute concurrently. 489 | # 490 | # rpc_min_threads: 16 491 | # rpc_max_threads: 2048 492 | 493 | # uncomment to set socket buffer sizes on rpc connections 494 | # rpc_send_buff_size_in_bytes: 495 | # rpc_recv_buff_size_in_bytes: 496 | 497 | # Uncomment to set socket buffer size for internode communication 498 | # Note that when setting this, the buffer size is limited by net.core.wmem_max 499 | # and when not setting it, it is defined by net.ipv4.tcp_wmem 500 | # See: 501 | # /proc/sys/net/core/wmem_max 502 | # /proc/sys/net/core/rmem_max 503 | # /proc/sys/net/ipv4/tcp_wmem 504 | # /proc/sys/net/ipv4/tcp_rmem 505 | # and: man tcp 506 | # internode_send_buff_size_in_bytes: 507 | # internode_recv_buff_size_in_bytes: 508 | 509 | # Frame size for thrift (maximum message length). 510 | thrift_framed_transport_size_in_mb: 15 511 | 512 | # Set to true to have Cassandra create a hard link to each sstable 513 | # flushed or streamed locally in a backups/ subdirectory of the 514 | # keyspace data. Removing these links is the operator's 515 | # responsibility. 516 | incremental_backups: false 517 | 518 | # Whether or not to take a snapshot before each compaction. Be 519 | # careful using this option, since Cassandra won't clean up the 520 | # snapshots for you.
Mostly useful if you're paranoid when there 521 | # is a data format change. 522 | snapshot_before_compaction: false 523 | 524 | # Whether or not a snapshot is taken of the data before keyspace truncation 525 | # or dropping of column families. The STRONGLY advised default of true 526 | # should be used to provide data safety. If you set this flag to false, you will 527 | # lose data on truncation or drop. 528 | auto_snapshot: true 529 | 530 | # When executing a scan, within or across a partition, we need to keep the 531 | # tombstones seen in memory so we can return them to the coordinator, which 532 | # will use them to make sure other replicas also know about the deleted rows. 533 | # With workloads that generate a lot of tombstones, this can cause performance 534 | # problems and even exhaust the server heap. 535 | # (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets) 536 | # Adjust the thresholds here if you understand the dangers and want to 537 | # scan more tombstones anyway. These thresholds may also be adjusted at runtime 538 | # using the StorageService mbean. 539 | tombstone_warn_threshold: 1000 540 | tombstone_failure_threshold: 100000 541 | 542 | # Granularity of the collation index of rows within a partition. 543 | # Increase if your rows are large, or if you have a very large 544 | # number of rows per partition. The competing goals are these: 545 | # 1) a smaller granularity means more index entries are generated 546 | # and looking up rows within the partition by collation column 547 | # is faster 548 | # 2) but, Cassandra will keep the collation index in memory for hot 549 | # rows (as part of the key cache), so a larger granularity means 550 | # you can cache more hot rows 551 | column_index_size_in_kb: 64 552 | 553 | 554 | # Log WARN on any batch size exceeding this value. 5kb per batch by default. 555 | # Caution should be taken on increasing the size of this threshold as it can lead to node instability. 556 | batch_size_warn_threshold_in_kb: 5 557 | 558 | # Number of simultaneous compactions to allow, NOT including 559 | # validation "compactions" for anti-entropy repair. Simultaneous 560 | # compactions can help preserve read performance in a mixed read/write 561 | # workload, by mitigating the tendency of small sstables to accumulate 562 | # during a single long running compaction. The default is usually 563 | # fine and if you experience problems with compaction running too 564 | # slowly or too fast, you should look at 565 | # compaction_throughput_mb_per_sec first. 566 | # 567 | # concurrent_compactors defaults to the smaller of (number of disks, 568 | # number of cores), with a minimum of 2 and a maximum of 8. 569 | # 570 | # If your data directories are backed by SSD, you should increase this 571 | # to the number of cores. 572 | #concurrent_compactors: 1 573 | 574 | # Throttles compaction to the given total throughput across the entire 575 | # system. The faster you insert data, the faster you need to compact in 576 | # order to keep the sstable count down, but in general, setting this to 577 | # 16 to 32 times the rate you are inserting data is more than sufficient. 578 | # Setting this to 0 disables throttling. Note that this accounts for all types 579 | # of compaction, including validation compaction.
580 | compaction_throughput_mb_per_sec: 16 581 | 582 | # Log a warning when compacting partitions larger than this value 583 | compaction_large_partition_warning_threshold_mb: 100 584 | 585 | # When compacting, the replacement sstable(s) can be opened before they 586 | # are completely written, and used in place of the prior sstables for 587 | # any range that has been written. This helps to smoothly transfer reads 588 | # between the sstables, reducing page cache churn and keeping hot rows hot 589 | sstable_preemptive_open_interval_in_mb: 50 590 | 591 | # Throttles all outbound streaming file transfers on this node to the 592 | # given total throughput in Mbps. This is necessary because Cassandra does 593 | # mostly sequential IO when streaming data during bootstrap or repair, which 594 | # can lead to saturating the network connection and degrading rpc performance. 595 | # When unset, the default is 200 Mbps or 25 MB/s. 596 | # stream_throughput_outbound_megabits_per_sec: 200 597 | 598 | # Throttles all streaming file transfers between the datacenters; 599 | # this setting allows users to throttle inter dc stream throughput in addition 600 | # to throttling all network stream traffic as configured with 601 | # stream_throughput_outbound_megabits_per_sec 602 | # inter_dc_stream_throughput_outbound_megabits_per_sec: 603 | 604 | # How long the coordinator should wait for read operations to complete 605 | read_request_timeout_in_ms: 5000 606 | # How long the coordinator should wait for seq or index scans to complete 607 | range_request_timeout_in_ms: 10000 608 | # How long the coordinator should wait for writes to complete 609 | write_request_timeout_in_ms: 2000 610 | # How long the coordinator should wait for counter writes to complete 611 | counter_write_request_timeout_in_ms: 5000 612 | # How long a coordinator should continue to retry a CAS operation 613 | # that contends with other proposals for the same row 614 | cas_contention_timeout_in_ms: 1000 615 | # How long the coordinator should wait for truncates to complete 616 | # (This can be much longer, because unless auto_snapshot is disabled 617 | # we need to flush first so we can snapshot before removing the data.) 618 | truncate_request_timeout_in_ms: 60000 619 | # The default timeout for other, miscellaneous operations 620 | request_timeout_in_ms: 10000 621 | 622 | # Enable operation timeout information exchange between nodes to accurately 623 | # measure request timeouts. If disabled, replicas will assume that requests 624 | # were forwarded to them instantly by the coordinator, which means that 625 | # under overload conditions we will waste that much extra time processing 626 | # already-timed-out requests. 627 | # 628 | # Warning: before enabling this property make sure ntp is installed 629 | # and the times are synchronized between the nodes. 630 | cross_node_timeout: false 631 | 632 | # Enable socket timeout for streaming operation. 633 | # When a timeout occurs during streaming, streaming is retried from the start 634 | # of the current file. This _can_ involve re-streaming a significant amount of 635 | # data, so you should avoid setting the value too low. 636 | # Default value is 3600000, which means streams timeout after an hour. 637 | # streaming_socket_timeout_in_ms: 3600000 638 | 639 | # phi value that must be reached for a host to be marked down. 640 | # most users should never need to adjust this.
641 | # phi_convict_threshold: 8 642 | 643 | # endpoint_snitch -- Set this to a class that implements 644 | # IEndpointSnitch. The snitch has two functions: 645 | # - it teaches Cassandra enough about your network topology to route 646 | # requests efficiently 647 | # - it allows Cassandra to spread replicas around your cluster to avoid 648 | # correlated failures. It does this by grouping machines into 649 | # "datacenters" and "racks." Cassandra will do its best not to have 650 | # more than one replica on the same "rack" (which may not actually 651 | # be a physical location) 652 | # 653 | # IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER, 654 | # YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS 655 | # ARE PLACED. 656 | # 657 | # Out of the box, Cassandra provides 658 | # - SimpleSnitch: 659 | # Treats Strategy order as proximity. This can improve cache 660 | # locality when disabling read repair. Only appropriate for 661 | # single-datacenter deployments. 662 | # - GossipingPropertyFileSnitch 663 | # This should be your go-to snitch for production use. The rack 664 | # and datacenter for the local node are defined in 665 | # cassandra-rackdc.properties and propagated to other nodes via 666 | # gossip. If cassandra-topology.properties exists, it is used as a 667 | # fallback, allowing migration from the PropertyFileSnitch. 668 | # - PropertyFileSnitch: 669 | # Proximity is determined by rack and data center, which are 670 | # explicitly configured in cassandra-topology.properties. 671 | # - Ec2Snitch: 672 | # Appropriate for EC2 deployments in a single Region. Loads Region 673 | # and Availability Zone information from the EC2 API. The Region is 674 | # treated as the datacenter, and the Availability Zone as the rack. 675 | # Only private IPs are used, so this will not work across multiple 676 | # Regions. 677 | # - Ec2MultiRegionSnitch: 678 | # Uses public IPs as broadcast_address to allow cross-region 679 | # connectivity. (Thus, you should set seed addresses to the public 680 | # IP as well.) You will need to open the storage_port or 681 | # ssl_storage_port on the public IP firewall. (For intra-Region 682 | # traffic, Cassandra will switch to the private IP after 683 | # establishing a connection.) 684 | # - RackInferringSnitch: 685 | # Proximity is determined by rack and data center, which are 686 | # assumed to correspond to the 3rd and 2nd octet of each node's IP 687 | # address, respectively. Unless this happens to match your 688 | # deployment conventions, this is best used as an example of 689 | # writing a custom Snitch class and is provided in that spirit. 690 | # 691 | # You can use a custom Snitch by setting this to the full class name 692 | # of the snitch, which will be assumed to be on your classpath. 693 | endpoint_snitch: SimpleSnitch 694 | 695 | # controls how often to perform the more expensive part of host score 696 | # calculation 697 | dynamic_snitch_update_interval_in_ms: 100 698 | # controls how often to reset all host scores, allowing a bad host to 699 | # possibly recover 700 | dynamic_snitch_reset_interval_in_ms: 600000 701 | # if set greater than zero and read_repair_chance is < 1.0, this will allow 702 | # 'pinning' of replicas to hosts in order to increase cache capacity. 703 | # The badness threshold will control how much worse the pinned host has to be 704 | # before the dynamic snitch will prefer other replicas over it. This is 705 | # expressed as a double which represents a percentage. 
Thus, a value of 706 | # 0.2 means Cassandra would continue to prefer the static snitch values 707 | # until the pinned host was 20% worse than the fastest. 708 | dynamic_snitch_badness_threshold: 0.1 709 | 710 | # request_scheduler -- Set this to a class that implements 711 | # RequestScheduler, which will schedule incoming client requests 712 | # according to the specific policy. This is useful for multi-tenancy 713 | # with a single Cassandra cluster. 714 | # NOTE: This is specifically for requests from the client and does 715 | # not affect inter node communication. 716 | # org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place 717 | # org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of 718 | # client requests to a node with a separate queue for each 719 | # request_scheduler_id. The scheduler is further customized by 720 | # request_scheduler_options as described below. 721 | request_scheduler: org.apache.cassandra.scheduler.NoScheduler 722 | 723 | # Scheduler Options vary based on the type of scheduler 724 | # NoScheduler - Has no options 725 | # RoundRobin 726 | # - throttle_limit -- The throttle_limit is the number of in-flight 727 | # requests per client. Requests beyond 728 | # that limit are queued up until 729 | # running requests can complete. 730 | # The value of 80 below is an example; twice the sum of 731 | # concurrent_reads and concurrent_writes is a reasonable starting point. 732 | # - default_weight -- default_weight is optional and allows for 733 | # overriding the default which is 1. 734 | # - weights -- Weights are optional and will default to 1 or the 735 | # overridden default_weight. The weight translates into how 736 | # many requests are handled during each turn of the 737 | # RoundRobin, based on the scheduler id. 738 | # 739 | # request_scheduler_options: 740 | # throttle_limit: 80 741 | # default_weight: 5 742 | # weights: 743 | # Keyspace1: 1 744 | # Keyspace2: 5 745 | 746 | # request_scheduler_id -- An identifier based on which to perform 747 | # the request scheduling. Currently the only valid option is keyspace. 748 | # request_scheduler_id: keyspace 749 | 750 | # Enable or disable inter-node encryption 751 | # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that 752 | # users generate their own keys), and TLS_RSA_WITH_AES_128_CBC_SHA as the cipher 753 | # suite for authentication, key exchange and encryption of the actual data transfers. 754 | # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode. 755 | # NOTE: No custom encryption options are enabled at the moment 756 | # The available internode options are: all, none, dc, rack 757 | # 758 | # If set to dc, Cassandra will encrypt the traffic between the DCs 759 | # If set to rack, Cassandra will encrypt the traffic between the racks 760 | # 761 | # The passwords used in these options must match the passwords used when generating 762 | # the keystore and truststore.
For instructions on generating these files, see: 763 | # http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore 764 | # 765 | server_encryption_options: 766 | internode_encryption: none 767 | keystore: conf/.keystore 768 | keystore_password: cassandra 769 | truststore: conf/.truststore 770 | truststore_password: cassandra 771 | # More advanced defaults below: 772 | # protocol: TLS 773 | # algorithm: SunX509 774 | # store_type: JKS 775 | # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 776 | # require_client_auth: false 777 | 778 | # enable or disable client/server encryption. 779 | client_encryption_options: 780 | enabled: false 781 | keystore: conf/.keystore 782 | keystore_password: cassandra 783 | # require_client_auth: false 784 | # Set truststore and truststore_password if require_client_auth is true 785 | # truststore: conf/.truststore 786 | # truststore_password: cassandra 787 | # More advanced defaults below: 788 | # protocol: TLS 789 | # algorithm: SunX509 790 | # store_type: JKS 791 | # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA] 792 | 793 | # internode_compression controls whether traffic between nodes is 794 | # compressed. 795 | # can be: all - all traffic is compressed 796 | # dc - traffic between different datacenters is compressed 797 | # none - nothing is compressed. 798 | internode_compression: all 799 | 800 | # Enable or disable tcp_nodelay for inter-dc communication. 801 | # Disabling it will result in larger (but fewer) network packets being sent, 802 | # reducing overhead from the TCP protocol itself, at the cost of increasing 803 | # latency if you block for cross-datacenter responses.
804 | inter_dc_tcp_nodelay: false 805 | 806 | # GC Pauses greater than gc_warn_threshold_in_ms will be logged at WARN level 807 | # Adjust the threshold based on your application throughput requirement 808 | # By default, Cassandra logs GC Pauses greater than 200 ms at INFO level 809 | # gc_warn_threshold_in_ms: 1000 810 | -------------------------------------------------------------------------------- /gateways/kong/files/kong.repo: -------------------------------------------------------------------------------- 1 | [kong] 2 | name=Kong local repository 3 | baseurl=file:///var/lib/packages 4 | gpgcheck=0 5 | enabled=1 6 | -------------------------------------------------------------------------------- /gateways/kong/files/kong.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kong API Gateway 3 | After=network.target 4 | After=cassandra.service 5 | Requires=cassandra.service 6 | 7 | [Service] 8 | ExecStart=/usr/local/bin/kong start 9 | ExecStop=/usr/local/bin/kong stop 10 | ExecReload=/usr/local/bin/kong reload 11 | Type=forking 12 | PIDFile=/usr/local/kong/kong.pid 13 | User=kong 14 | Group=kong 15 | 16 | [Install] 17 | WantedBy=default.target 18 | -------------------------------------------------------------------------------- /gateways/kong/files/kong.yml: -------------------------------------------------------------------------------- 1 | ## Available plugins on this server 2 | plugins_available: 3 | - ssl 4 | - jwt 5 | - acl 6 | - cors 7 | - oauth2 8 | - tcp-log 9 | - udp-log 10 | - file-log 11 | - http-log 12 | - key-auth 13 | - hmac-auth 14 | - basic-auth 15 | - ip-restriction 16 | - mashape-analytics 17 | - request-transformer 18 | - response-transformer 19 | - request-size-limiting 20 | - rate-limiting 21 | - response-ratelimiting 22 | 23 | ## The Kong working directory 24 | ## (Make sure you have read and write permissions) 25 | nginx_working_dir: /usr/local/kong/ 26 | 27 | ## Port configuration 28 | proxy_port: 8000 29 | proxy_ssl_port: 8443 30 | admin_api_port: 8001 31 | 32 | ## Secondary port configuration 33 | dnsmasq_port: 8053 34 | 35 | ## Specify the DAO to use 36 | database: cassandra 37 | 38 | ## Databases configuration 39 | databases_available: 40 | cassandra: 41 | properties: 42 | contact_points: 43 | - "localhost:9042" 44 | timeout: 1000 45 | keyspace: kong 46 | keepalive: 60000 # in milliseconds 47 | # ssl: false 48 | # ssl_verify: false 49 | # ssl_certificate: "/path/to/cluster-ca-certificate.pem" 50 | # user: cassandra 51 | # password: cassandra 52 | 53 | ## Cassandra cache configuration 54 | database_cache_expiration: 5 # in seconds 55 | 56 | ## SSL Settings 57 | ## (Uncomment the two properties below to set your own certificate) 58 | # ssl_cert_path: /path/to/certificate.pem 59 | # ssl_key_path: /path/to/certificate.key 60 | 61 | ## Sends anonymous error reports 62 | send_anonymous_reports: true 63 | 64 | ## In-memory cache size (MB) 65 | memory_cache_size: 128 66 | 67 | ## Nginx configuration 68 | nginx: | 69 | worker_processes auto; 70 | error_log logs/error.log error; 71 | daemon on; 72 | 73 | worker_rlimit_nofile {{auto_worker_rlimit_nofile}}; 74 | 75 | env KONG_CONF; 76 | env PATH; 77 | 78 | events { 79 | worker_connections {{auto_worker_connections}}; 80 | multi_accept on; 81 | } 82 | 83 | http { 84 | resolver {{dns_resolver}} ipv6=off; 85 | charset UTF-8; 86 | 87 | access_log logs/access.log; 88 | access_log off; 89 | 90 | # Timeouts 91 | keepalive_timeout 60s; 92 | client_header_timeout 
60s; 93 | client_body_timeout 60s; 94 | send_timeout 60s; 95 | 96 | # Proxy Settings 97 | proxy_buffer_size 128k; 98 | proxy_buffers 4 256k; 99 | proxy_busy_buffers_size 256k; 100 | proxy_ssl_server_name on; 101 | 102 | # IP Address 103 | real_ip_header X-Forwarded-For; 104 | set_real_ip_from 0.0.0.0/0; 105 | real_ip_recursive on; 106 | 107 | # Other Settings 108 | client_max_body_size 0; 109 | underscores_in_headers on; 110 | reset_timedout_connection on; 111 | tcp_nopush on; 112 | 113 | ################################################ 114 | # The following code is required to run Kong # 115 | # Please be careful if you'd like to change it # 116 | ################################################ 117 | 118 | # Lua Settings 119 | lua_package_path ';;'; 120 | lua_code_cache on; 121 | lua_max_running_timers 4096; 122 | lua_max_pending_timers 16384; 123 | lua_shared_dict locks 100k; 124 | lua_shared_dict cache {{memory_cache_size}}m; 125 | lua_socket_log_errors off; 126 | {{lua_ssl_trusted_certificate}} 127 | 128 | init_by_lua ' 129 | kong = require "kong" 130 | local status, err = pcall(kong.init) 131 | if not status then 132 | ngx.log(ngx.ERR, "Startup error: "..err) 133 | os.exit(1) 134 | end 135 | '; 136 | 137 | init_worker_by_lua 'kong.exec_plugins_init_worker()'; 138 | 139 | server { 140 | server_name _; 141 | listen {{proxy_port}}; 142 | listen {{proxy_ssl_port}} ssl; 143 | 144 | ssl_certificate_by_lua 'kong.exec_plugins_certificate()'; 145 | 146 | ssl_certificate {{ssl_cert}}; 147 | ssl_certificate_key {{ssl_key}}; 148 | ssl_protocols TLSv1 TLSv1.1 TLSv1.2;# omit SSLv3 because of POODLE (CVE-2014-3566) 149 | 150 | location / { 151 | default_type 'text/plain'; 152 | 153 | # These properties will be used later by proxy_pass 154 | set $backend_host nil; 155 | set $backend_url nil; 156 | 157 | # Authenticate the user and load the API info 158 | access_by_lua 'kong.exec_plugins_access()'; 159 | 160 | # Proxy the request 161 | proxy_set_header X-Real-IP $remote_addr; 162 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 163 | proxy_set_header X-Forwarded-Proto $scheme; 164 | proxy_set_header Host $backend_host; 165 | proxy_pass $backend_url; 166 | proxy_pass_header Server; 167 | 168 | # Add additional response headers 169 | header_filter_by_lua 'kong.exec_plugins_header_filter()'; 170 | 171 | # Change the response body 172 | body_filter_by_lua 'kong.exec_plugins_body_filter()'; 173 | 174 | # Log the request 175 | log_by_lua 'kong.exec_plugins_log()'; 176 | } 177 | 178 | location /robots.txt { 179 | return 200 'User-agent: *\nDisallow: /'; 180 | } 181 | 182 | error_page 500 /500.html; 183 | location = /500.html { 184 | internal; 185 | content_by_lua ' 186 | local responses = require "kong.tools.responses" 187 | responses.send_HTTP_INTERNAL_SERVER_ERROR("An unexpected error occurred") 188 | '; 189 | } 190 | } 191 | 192 | server { 193 | listen {{admin_api_port}}; 194 | 195 | location / { 196 | default_type application/json; 197 | content_by_lua ' 198 | ngx.header["Access-Control-Allow-Origin"] = "*" 199 | if ngx.req.get_method() == "OPTIONS" then 200 | ngx.header["Access-Control-Allow-Methods"] = "GET,HEAD,PUT,PATCH,POST,DELETE" 201 | ngx.header["Access-Control-Allow-Headers"] = "Content-Type" 202 | ngx.exit(204) 203 | end 204 | local lapis = require "lapis" 205 | lapis.serve("kong.api.app") 206 | '; 207 | } 208 | 209 | location /nginx_status { 210 | internal; 211 | stub_status; 212 | } 213 | 214 | location /robots.txt { 215 | return 200 'User-agent: *\nDisallow: /'; 216 | } 217 | 
218 | # Do not remove, additional configuration placeholder for some plugins 219 | # {{additional_configuration}} 220 | } 221 | } 222 | -------------------------------------------------------------------------------- /gateways/kong/files/waitforcassandra.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | check=1 4 | timelimit=180 5 | 6 | timeout=$timelimit 7 | while [ $check -gt 0 ] 8 | do 9 | check=`nc 127.0.0.1 9042 < /dev/null 2> /dev/null; echo $?` 10 | timeout=$(( $timeout - 1 )) 11 | if [ $timeout -eq 0 ] 12 | then 13 | echo "Cassandra not yet up, aborting." 14 | exit 1 15 | fi 16 | sleep 1s 17 | done 18 | 19 | timediff=$(( $timelimit - $timeout )) 20 | echo "Cassandra started after $timediff seconds" 21 | exit 0 22 | -------------------------------------------------------------------------------- /gateways/kong/setup.yml: -------------------------------------------------------------------------------- 1 | - name: "Setup Kong" 2 | hosts: all 3 | tasks: 4 | - name: "Ensure tools are installed" 5 | yum: name="{{ item }}" state="installed" 6 | with_items: 7 | - "createrepo" 8 | - "nmap-ncat" 9 | 10 | - name: "Create directory structure" 11 | file: path="{{ item }}" state="directory" mode="0755" 12 | with_items: 13 | - "/var/lib/packages" 14 | 15 | - name: "Ensure repository for Cassandra is present" 16 | copy: src="files/cassandra.repo" 17 | dest="/etc/yum.repos.d/cassandra.repo" 18 | mode=444 owner=root group=root 19 | notify: 20 | - "Update the yum cache" 21 | 22 | - name: "Ensure Cassandra is installed with dependencies" 23 | yum: name="{{ item }}" state="installed" 24 | with_items: 25 | - "dsc21" 26 | - "java-1.8.0-openjdk" 27 | 28 | - name: "Ensure Cassandra is configured" 29 | copy: src="files/cassandra.yaml" 30 | dest="/etc/cassandra/default.conf/cassandra.yaml" 31 | mode=444 owner=root group=root 32 | notify: 33 | - "Restart Cassandra" 34 | 35 | - name: "Ensure Cassandra is running" 36 | service: name="cassandra" state="running" enabled="yes" 37 | notify: 38 | - "Wait for Cassandra to start listening" 39 | 40 | - name: "Ensure repository for Kong is present" 41 | copy: src="files/kong.repo" 42 | dest="/etc/yum.repos.d/kong.repo" 43 | mode=444 owner=root group=root 44 | notify: 45 | - "Update the yum cache" 46 | 47 | - name: "Ensure Kong is downloaded" 48 | command: "wget -O /var/lib/packages/kong.rpm https://github.com/Mashape/kong/releases/download/0.5.3/kong-0.5.3.el7.noarch.rpm" 49 | args: 50 | creates: "/var/lib/packages/kong.rpm" 51 | notify: 52 | - "Update local repository" 53 | 54 | - meta: flush_handlers 55 | 56 | - name: "Ensure Kong is installed" 57 | yum: name="{{ item }}" state="installed" 58 | with_items: 59 | - "kong" 60 | 61 | - name: "Ensure the kong user is present" 62 | user: name="kong" comment="kong system user" 63 | system="yes" state="present" 64 | 65 | - name: "Ensure Kong is configured" 66 | copy: src="files/kong.yml" 67 | dest="/etc/kong/kong.yml" 68 | mode=444 owner=root group=root 69 | notify: 70 | - "Restart Kong" 71 | 72 | - name: "Ensure Kong service script is present" 73 | copy: src="files/kong.service" 74 | dest="/etc/systemd/system/kong.service" 75 | mode=444 owner=root group=root 76 | notify: 77 | - "Restart Kong" 78 | 79 | - name: "Ensure Kong is running" 80 | service: name="kong" state="running" enabled="yes" 81 | 82 | handlers: 83 | - name: "Update the yum cache" 84 | command: "yum clean metadata" 85 | 86 | - name: "Update local repository" 87 | command: "/usr/bin/createrepo 
/var/lib/packages" 88 | 89 | - name: "Restart Cassandra" 90 | service: name="cassandra" state="restarted" 91 | 92 | - name: "Wait for Cassandra to start listening" 93 | script: files/waitforcassandra.sh 94 | 95 | - name: "Restart Kong" 96 | service: name="kong" state="restarted" 97 | -------------------------------------------------------------------------------- /gateways/tyk/configure: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # test02 4 | curl -i -H "x-tyk-authorization: 3f03d372b1ff5d735dfcc2d0a82ad2b3" \ 5 | -H "Content-Type: application/json" \ 6 | -X POST \ 7 | -d '{ 8 | "allowance": 1000, 9 | "rate": 1000, 10 | "per": 1, 11 | "expires": -1, 12 | "quota_max": -1, 13 | "quota_renews": 1449051461, 14 | "quota_remaining": -1, 15 | "quota_renewal_rate": 60, 16 | "org_id": "test02", 17 | "active": true, 18 | "is_inactive": false, 19 | "access_rights": { 20 | "2": { 21 | "api_id": "2", 22 | "api_name": "test02", 23 | "versions": ["Default"] 24 | } 25 | }, 26 | "meta_data": { } 27 | }' \ 28 | "http://localhost:8000/tyk/keys/key02" 29 | 30 | # test03 31 | curl -i -H "x-tyk-authorization: 3f03d372b1ff5d735dfcc2d0a82ad2b3" \ 32 | -H "Content-Type: application/json" \ 33 | -X POST \ 34 | -d '{ 35 | "allowance": 60000000, 36 | "rate": 60000000, 37 | "per": 60, 38 | "expires": -1, 39 | "quota_max": -1, 40 | "quota_renews": 1449051461, 41 | "quota_remaining": -1, 42 | "quota_renewal_rate": 60, 43 | "org_id": "test03", 44 | "active": true, 45 | "is_inactive": false, 46 | "access_rights": { 47 | "3": { 48 | "api_id": "3", 49 | "api_name": "test03", 50 | "versions": ["Default"] 51 | } 52 | }, 53 | "meta_data": { } 54 | }' \ 55 | "http://localhost:8000/tyk/keys/key03" 56 | 57 | # test04 58 | curl -i -H "x-tyk-authorization: 3f03d372b1ff5d735dfcc2d0a82ad2b3" \ 59 | -H "Content-Type: application/json" \ 60 | -X POST \ 61 | -d '{ 62 | "allowance": 1, 63 | "rate": 1, 64 | "per": 1, 65 | "expires": -1, 66 | "quota_max": -1, 67 | "quota_renews": 1449051461, 68 | "quota_remaining": -1, 69 | "quota_renewal_rate": 60, 70 | "org_id": "test04", 71 | "active": true, 72 | "is_inactive": false, 73 | "access_rights": { 74 | "4": { 75 | "api_id": "4", 76 | "api_name": "test04", 77 | "versions": ["Default"] 78 | } 79 | }, 80 | "meta_data": { } 81 | }' \ 82 | "http://localhost:8000/tyk/keys/key04" 83 | -------------------------------------------------------------------------------- /gateways/tyk/deploy: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # Check if Ansible is installed 4 | which ansible-playbook > /dev/null 2>&1 5 | if [ $?
-ne 0 ]; then 6 | # Install Ansible if it's not 7 | yum install -y ansible 8 | fi 9 | 10 | ansible-playbook -c local -i "localhost," setup.yml 11 | 12 | ./configure 13 | -------------------------------------------------------------------------------- /gateways/tyk/files/apps/test01.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "test01", 3 | "api_id": "1", 4 | "org_id": "test01", 5 | "active": true, 6 | "definition": { 7 | "location": "", 8 | "key": "" 9 | }, 10 | "use_keyless": true, 11 | "version_data": { 12 | "not_versioned": true, 13 | "versions": { 14 | "Default": { 15 | "name": "Default", 16 | "use_extended_paths": true 17 | } 18 | } 19 | }, 20 | "proxy": { 21 | "listen_path": "/test01", 22 | "target_url": "http://webserver:8888/", 23 | "strip_listen_path": false 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /gateways/tyk/files/apps/test02.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "test02", 3 | "api_id": "2", 4 | "org_id": "test02", 5 | "active": true, 6 | "definition": { 7 | "location": "", 8 | "key": "" 9 | }, 10 | "use_keyless": false, 11 | "auth": { 12 | "auth_header_name": "apikey" 13 | }, 14 | "version_data": { 15 | "not_versioned": true, 16 | "versions": { 17 | "Default": { 18 | "name": "Default", 19 | "use_extended_paths": true 20 | } 21 | } 22 | }, 23 | "proxy": { 24 | "listen_path": "/test02", 25 | "target_url": "http://webserver:8888/", 26 | "strip_listen_path": false 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /gateways/tyk/files/apps/test03.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "test03", 3 | "api_id": "3", 4 | "org_id": "test03", 5 | "active": true, 6 | "definition": { 7 | "location": "", 8 | "key": "" 9 | }, 10 | "use_keyless": false, 11 | "auth": { 12 | "auth_header_name": "apikey" 13 | }, 14 | "version_data": { 15 | "not_versioned": true, 16 | "versions": { 17 | "Default": { 18 | "name": "Default", 19 | "use_extended_paths": true 20 | } 21 | } 22 | }, 23 | "proxy": { 24 | "listen_path": "/test03", 25 | "target_url": "http://webserver:8888/", 26 | "strip_listen_path": false 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /gateways/tyk/files/apps/test04.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "test04", 3 | "api_id": "4", 4 | "org_id": "test04", 5 | "active": true, 6 | "definition": { 7 | "location": "", 8 | "key": "" 9 | }, 10 | "use_keyless": false, 11 | "auth": { 12 | "auth_header_name": "apikey" 13 | }, 14 | "version_data": { 15 | "not_versioned": true, 16 | "versions": { 17 | "Default": { 18 | "name": "Default", 19 | "use_extended_paths": true 20 | } 21 | } 22 | }, 23 | "proxy": { 24 | "listen_path": "/test04", 25 | "target_url": "http://webserver:8888/", 26 | "strip_listen_path": false 27 | } 28 | } 29 | -------------------------------------------------------------------------------- /gateways/tyk/files/mongodb-org-3.0.repo: -------------------------------------------------------------------------------- 1 | [mongodb-org-3.0] 2 | name=MongoDB Repository 3 | baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/ 4 | gpgcheck=0 5 | enabled=1 6 | -------------------------------------------------------------------------------- 
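Once the Tyk gateway is deployed with the API definitions above and the keys have been created by ``gateways/tyk/configure``, the routing can be spot-checked from the gateway host before any load is generated. A minimal sketch in bash, assuming the hostname ``webserver`` resolves from the gateway, and noting that the exact rejection status for a missing key may vary between Tyk versions:

    # test01 is keyless; expect 200 OK proxied to http://webserver:8888/test01
    curl -i "http://localhost:8000/test01"

    # test02 requires the apikey header; expect 200 OK with the key created by ./configure
    curl -i -H "apikey: key02" "http://localhost:8000/test02"

    # the same request without a key should be rejected by the gateway
    curl -i "http://localhost:8000/test02"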
/gateways/tyk/files/tyk-gateway.conf: -------------------------------------------------------------------------------- 1 | { 2 | "listen_port": 8000, 3 | "secret": "3f03d372b1ff5d735dfcc2d0a82ad2b3", 4 | "template_path": "/opt/tyk-gateway/templates", 5 | "tyk_js_path": "/opt/tyk-gateway/js/tyk.js", 6 | "use_db_app_configs": false, 7 | "app_path": "/etc/tyk/apps", 8 | "middleware_path": "/opt/tyk-gateway/middleware/", 9 | "storage": { 10 | "type": "redis", 11 | "host": "localhost", 12 | "port": 6379, 13 | "username": "", 14 | "password": "", 15 | "database": 0, 16 | "optimisation_max_idle": 500 17 | }, 18 | "enable_analytics": false, 19 | "analytics_config": { 20 | "type": "csv", 21 | "csv_dir": "/tmp", 22 | "mongo_url": "", 23 | "mongo_db_name": "", 24 | "mongo_collection": "", 25 | "purge_delay": -1, 26 | "ignored_ips": [] 27 | }, 28 | "health_check": { 29 | "enable_health_checks": true, 30 | "health_check_value_timeouts": 60 31 | }, 32 | "optimisations_use_async_session_write": true, 33 | "allow_master_keys": false, 34 | "policies": { 35 | "policy_source": "mongo", 36 | "policy_record_name": "tyk_policies" 37 | }, 38 | "hash_keys": false, 39 | "suppress_redis_signal_reload": false 40 | } 41 | -------------------------------------------------------------------------------- /gateways/tyk/files/tyk.repo: -------------------------------------------------------------------------------- 1 | [tyk-gateway] 2 | name=tyk-gateway 3 | baseurl=https://packagecloud.io/tyk/tyk-gateway/el/$releasever/$basearch 4 | #repo_gpgcheck=1 5 | gpgcheck=1 6 | enabled=1 7 | gpgkey=http://keyserver.tyk.io/tyk.io.rpm.signing.key 8 | https://packagecloud.io/gpg.key 9 | sslverify=1 10 | sslcacert=/etc/pki/tls/certs/ca-bundle.crt 11 | 12 | [tyk-dashboard] 13 | name=tyk-dashboard 14 | baseurl=https://packagecloud.io/tyk/tyk-dashboard/el/$releasever/$basearch 15 | #repo_gpgcheck=1 16 | gpgcheck=1 17 | enabled=1 18 | gpgkey=http://keyserver.tyk.io/tyk.io.rpm.signing.key 19 | https://packagecloud.io/gpg.key 20 | sslverify=1 21 | sslcacert=/etc/pki/tls/certs/ca-bundle.crt 22 | 23 | -------------------------------------------------------------------------------- /gateways/tyk/setup.yml: -------------------------------------------------------------------------------- 1 | - name: "Setup Tyk" 2 | hosts: all 3 | tasks: 4 | - name: "Ensure repository for Mongo is present" 5 | copy: src="files/mongodb-org-3.0.repo" 6 | dest="/etc/yum.repos.d/mongodb-org-3.0.repo" 7 | mode=444 owner=root group=root 8 | notify: 9 | - "Update the yum cache" 10 | 11 | - name: "Ensure repository for Tyk is present" 12 | copy: src="files/tyk.repo" 13 | dest="/etc/yum.repos.d/tyk.repo" 14 | mode=444 owner=root group=root 15 | notify: 16 | - "Update the yum cache" 17 | 18 | - name: "Ensure tools are installed" 19 | yum: name="{{ item }}" state="installed" 20 | with_items: 21 | - "pygpgme" 22 | - "yum-utils" 23 | - "wget" 24 | - "redis" 25 | - "mongodb-org" 26 | - "tyk-gateway" 27 | #- "tyk-dashboard" 28 | 29 | - name: "Ensure tyk-gateway is configured" 30 | copy: src="files/tyk-gateway.conf" 31 | dest="/opt/tyk-gateway/tyk.conf" 32 | mode=444 owner=root group=root 33 | notify: 34 | - "Restart tyk-gateway" 35 | 36 | - name: "Ensure that the API definitions are present" 37 | copy: src="files/apps/" dest="/etc/tyk/apps/" mode="0444" 38 | notify: 39 | - "Restart tyk-gateway" 40 | 41 | - name: "Ensure services are running" 42 | service: name="{{ item }}" state="running" enabled="yes" 43 | with_items: 44 | - "mongod" 45 | - "redis" 46 | 47 | - name: "Ensure 
Tyk services are running" 48 | service: name="{{ item }}" state="running" enabled="yes" 49 | with_items: 50 | - "tyk-gateway" 51 | #- "tyk-dashboard" 52 | 53 | handlers: 54 | - name: "Update the yum cache" 55 | command: "yum clean metadata" 56 | 57 | - name: "Restart tyk-gateway" 58 | service: name="tyk-gateway" state="restarted" 59 | 60 | #- name: "Restart tyk-dashboard" 61 | # service: name="tyk-dashboard" state="restarted" 62 | -------------------------------------------------------------------------------- /gateways/wso2/README.md: -------------------------------------------------------------------------------- 1 | # WSO2 API Manager 2 | 3 | The management interface is available at https://localhost:9443/carbon. 4 | The default username and password are both *admin*. 5 | 6 | -------------------------------------------------------------------------------- /gateways/wso2/deploy: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # Check if Ansible is installed 4 | which ansible-playbook > /dev/null 2>&1 5 | if [ $? -ne 0 ]; then 6 | # Install Ansible if it's not 7 | yum install -y ansible 8 | fi 9 | 10 | ansible-playbook -c local -i "localhost," setup.yml 11 | -------------------------------------------------------------------------------- /gateways/wso2/files/wso2am.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=WSO2 API Manager 3 | After=network.target 4 | #After=cassandra.service 5 | #Requires=cassandra.service 6 | 7 | [Service] 8 | WorkingDirectory=/usr/local/wso2am-1.9.1/bin 9 | ExecStart=/usr/local/wso2am-1.9.1/bin/wso2server.sh start 10 | ExecStartPost=/bin/sleep 5 11 | ExecStop=/usr/local/wso2am-1.9.1/bin/wso2server.sh stop 12 | Type=forking 13 | PIDFile=/usr/local/wso2am-1.9.1/wso2carbon.pid 14 | PrivateTmp=true 15 | Environment=JAVA_HOME=/usr/lib/jvm/jre/ 16 | Environment=CARBON_HOME=/usr/local/wso2am-1.9.1/ 17 | #User=wso2 18 | #Group=wso2 19 | 20 | [Install] 21 | WantedBy=default.target 22 | -------------------------------------------------------------------------------- /gateways/wso2/setup.yml: -------------------------------------------------------------------------------- 1 | - name: "Setup WSO2 API Manager" 2 | hosts: all 3 | tasks: 4 | 5 | # From https://docs.wso2.com/display/AM191/Installation+Prerequisites: 6 | # "All WSO2 Carbon-based products are Java applications that can be run on any 7 | # platform that is Oracle JDK 1.6.*/1.7.* compliant. JDK 1.8 is not supported 8 | # yet. Also, we do not recommend or support OpenJDK."
9 | - name: "Ensure dependencies are installed" 10 | yum: name="{{ item }}" state="installed" 11 | with_items: 12 | - "java-1.7.0-openjdk" 13 | 14 | - name: "Ensure WSO2 API Manager is downloaded" 15 | get_url: url="http://product-dist.wso2.com/products/api-manager/1.9.1/wso2am-1.9.1.zip" 16 | dest="/usr/local/wso2am.zip" 17 | timeout=1800 18 | 19 | - name: "Extract package" 20 | command: "unzip wso2am.zip" 21 | args: 22 | chdir: "/usr/local" 23 | creates: "/usr/local/wso2am-1.9.1" 24 | 25 | - name: "Ensure the wso2 user is present" 26 | user: name="wso2" comment="wso2 system user" 27 | system="yes" state="present" 28 | 29 | - name: "Ensure wso2am service script is present" 30 | copy: src="files/wso2am.service" 31 | dest="/etc/systemd/system/wso2am.service" 32 | mode=444 owner=root group=root 33 | notify: 34 | - "Restart wso2am" 35 | 36 | - name: "Ensure wso2am is running" 37 | service: name="wso2am" state="running" enabled="yes" 38 | handlers: 39 | - name: "Restart wso2am" 40 | service: name="wso2am" state="restarted" 41 | -------------------------------------------------------------------------------- /webservers/dummy-api/deploy: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # Check if Ansible is installed 4 | which ansible-playbook > /dev/null 2>&1 5 | if [ $? -ne 0 ]; then 6 | # Install Ansible if it's not 7 | yum install -y ansible 8 | fi 9 | 10 | ansible-playbook -c local -i "localhost," setup.yml 11 | -------------------------------------------------------------------------------- /webservers/dummy-api/files/centos7-base.repo: -------------------------------------------------------------------------------- 1 | [centos7-base] 2 | name=CentOS-7 - Base 3 | mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=os&infra=$infra 4 | gpgcheck=1 5 | enabled=0 6 | gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7 7 | -------------------------------------------------------------------------------- /webservers/dummy-api/files/dummy-api.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Dummy API 3 | After=network.target 4 | 5 | [Service] 6 | ExecStart=/usr/local/go/bin/dummy-api --host 0.0.0.0 --port 8888 7 | StandardOutput=syslog 8 | StandardError=syslog 9 | SyslogIdentifier=dummy-api 10 | PrivateTmp=true 11 | User=dummy-api 12 | Group=dummy-api 13 | 14 | [Install] 15 | WantedBy=multi-user.target 16 | -------------------------------------------------------------------------------- /webservers/dummy-api/setup.yml: -------------------------------------------------------------------------------- 1 | - name: "Setup Dummy-API" 2 | hosts: all 3 | environment: 4 | GOPATH: "/usr/local/go" 5 | #PATH: "${PATH}:${GOPATH}/bin" 6 | tasks: 7 | - name: "Install disabled Centos-Base repo for RHEL7" 8 | copy: src="files/centos7-base.repo" dest="/etc/yum.repos.d/centos7-base.repo" mode="0444" 9 | when: ansible_distribution in ['RedHat'] and 10 | ansible_distribution_major_version in ['7'] 11 | 12 | - name: "Install golang with Centos repo for RHEL7" 13 | yum: name="golang" enablerepo="centos7-base" state="installed" 14 | when: ansible_distribution in ['RedHat'] and 15 | ansible_distribution_major_version in ['7'] 16 | 17 | - name: "Ensure tools are installed" 18 | yum: name="{{ item }}" state="installed" 19 | with_items: 20 | - "git" 21 | - "golang" 22 | 23 | - name: "Create directory structure" 24 | file: path="{{ item }}" state="directory" mode="0755" 25 | 
with_items: 26 | - "/usr/local/go" 27 | - "/usr/local/go/src" 28 | - "/usr/local/go/pkg" 29 | - "/usr/local/go/bin" 30 | 31 | - name: "Download and install Dummy-API" 32 | command: "/bin/go get github.com/espebra/dummy-api" 33 | args: 34 | creates: "/usr/local/go/src/github.com/espebra/dummy-api/" 35 | 36 | - name: "Ensure the dummy-api user is present" 37 | user: name="dummy-api" comment="Dummy API system user" 38 | system="yes" state="present" 39 | 40 | - name: "Ensure Dummy-API service script is present" 41 | copy: src="files/dummy-api.service" 42 | dest="/etc/systemd/system/dummy-api.service" 43 | mode=444 owner=root group=root 44 | notify: 45 | - "Restart Dummy-API" 46 | 47 | - name: "Ensure Dummy-API is running" 48 | service: name="dummy-api" state="running" enabled="yes" 49 | 50 | handlers: 51 | - name: "Restart Dummy-API" 52 | service: name="dummy-api" state="restarted" 53 | --------------------------------------------------------------------------------
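After ``webservers/dummy-api/deploy`` has run, the webserver can be smoke-tested locally before a gateway or consumer is pointed at it. A minimal sketch, assuming the service name and port from ``files/dummy-api.service`` above:

    # the unit should be active and listening on 0.0.0.0:8888
    systemctl status dummy-api

    # the test paths should answer 200 OK
    curl -i "http://localhost:8888/test00"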