--------------------------------------------------------------------------------
/docker/dyno/app/range.yml:
--------------------------------------------------------------------------------
1 | # Licensed to Elasticsearch B.V. under one or more contributor
2 | # license agreements. See the NOTICE file distributed with
3 | # this work for additional information regarding copyright
4 | # ownership. Elasticsearch B.V. licenses this file to you under
5 | # the Apache License, Version 2.0 (the "License"); you may
6 | # not use this file except in compliance with the License.
7 | # You may obtain a copy of the License at
8 | #
9 | # http://www.apache.org/licenses/LICENSE-2.0
10 | #
11 | # Unless required by applicable law or agreed to in writing,
12 | # software distributed under the License is distributed on an
13 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 | # KIND, either express or implied. See the License for the
15 | # specific language governing permissions and limitations
16 | # under the License. This file describes normalization for the sliders
17 | #
18 | # ========================================================
19 | # Slider range definitions
20 | # ========================================================
21 | #
22 | # Each slider can be set to a value between 1 and 100,
23 | # where 100 is the highest position, representing
24 | # "maximum" pressure on a service, and 1 is the lowest
25 | # slider position, representing minimal pressure being
26 | # applied to the service.
27 | #
28 | # However, the various toxics which can be applied to
29 | # a service need to have raw values supplied to them
30 | # and the associated units vary by service. For example,
31 | # with network toxics the raw value might be applied in
32 | # milliseconds while for container memory, the value might
33 | # be applied in megabytes.
34 | #
35 | # Therefore, this file lists the lower and upper
36 | # bounds for each toxic in raw units. When a slider is moved, we
37 | # do basic arithmetic to determine the value to pass into the
38 | # toxic itself. For example, if the range of the memory slider
39 | # in this file is between 100 MB and 1000 MB, and the slider
40 | # is set to its midpoint (50), we subtract the lower bound
41 | # from the upper bound (900) and then multiply by 1/100 of
42 | # the slider value to reach the answer of 450. See the
43 | # implementation code for more details on this in action.
44 | #
45 | # Values are represented as a two-element list. The first element
46 | # is the low bound, which should provide the *best* performance
47 | # (other than being disabled), and the second element is the
48 | # high bound, which should represent the *worst* performance short
49 | # of simply disabling the service.
50 |
51 | ## Start the Toxi settings
52 | ---
53 | B:
54 | # Bandwidth: Limit a connection to a maximum number of kilobytes per second.
55 |   # 1 KB/s -> 100 KB/s
56 | - 5
57 | - 1
58 | L:
59 | # Latency: Add a delay to all data going through the proxy. The delay is equal to latency +/- jitter.
60 | # 0ms -> 1000ms
61 | - 1
62 | - 1000
63 | J:
64 | # Jitter: Add a delay to all data going through the proxy. The delay is equal to latency +/- jitter.
65 | # 0ms -> 1000ms
66 | - 1
67 | - 1000
68 | SC:
69 | # Slow close: Delay the TCP socket from closing until delay has elapsed.
70 | - 1
71 | - 1000
72 | T:
73 | # Timeout: Stops all data from getting through, and closes the connection after timeout.
74 | # If timeout is 0, the connection won't close, and data will be delayed until the toxic is removed.
75 | # 1ms -> 1000ms
76 | - 1000
77 | - 1
78 |
79 | # The following are slicer settings.
80 | # A slicer slices TCP data up into small bits, optionally adding a delay between each sliced "packet".
81 |
82 | Sas: # average_size: size in bytes of an average packet
83 | # 1 byte -> 1000 bytes
84 | - 1
85 | - 1000
86 | # Currently disabled because we need to bound it against Sas
87 | # FIXME: (should be smaller than average_size)
88 | Ssv: # size_variation: variation in bytes of an average packet
89 | - 1
90 | - 1000
91 | Sd: # time in microseconds to delay each packet by
92 | - 1
93 | - 50000 # All the way up to 50ms which will cause some massive destruction!
94 | Ld: # the size in bytes that should be sent before closing the connection
95 | - 1
96 | - 5000000
97 | ## End the Toxi settings
98 |
99 | ## Start the Docker settings
100 | ## For more information see: https://docs.docker.com/config/containers/resource_constraints/
101 | cpu:
102 | # Impose a CPU CFS quota on the container.
103 | # The number of microseconds per --cpu-period that the container is limited to before throttled.
104 | - 25000 # Not totally sure this is also the default for quota. FIXME: Possible bug!
105 | - 1000 # 1,000 is the lower limit offered by the Docker API
106 | mem:
107 | # The maximum amount of memory the container can use.
108 | # Note: We're going to always assume MB
109 | - 2000 # FIXME What's the default out-of-the-box?
110 | - 5 # That oughta do it. 4 MB is the Docker-imposed minimum.
111 |
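The slider-to-raw-value mapping described in the header comments of this file can be sketched in Python. This is an illustrative sketch only, not the actual implementation (the real code lives in the app's normalization functions; the function name and signature here are assumptions):

```python
def denormalize(slider_value, low, high):
    """Map a 1-100 slider position onto a toxic's raw [low, high] range.

    Mirrors the header comments in range.yml: subtract the lower bound
    from the upper bound, then multiply by 1/100 of the slider value.
    """
    if not 1 <= slider_value <= 100:
        raise ValueError("slider value must be between 1 and 100")
    return (high - low) * slider_value / 100


# Worked example from the comments: a 100 MB - 1000 MB memory range
# with the slider at its midpoint (50) yields 450.
print(denormalize(50, 100, 1000))
```

Note that for toxics whose lists are deliberately inverted above (e.g. the timeout `T`), the raw bounds are swapped so that the low slider position still maps to the best-performing value.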
--------------------------------------------------------------------------------
/docker/dyno/tests/unit/test_docker.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # Licensed to Elasticsearch B.V. under one or more contributor
4 | # license agreements. See the NOTICE file distributed with
5 | # this work for additional information regarding copyright
6 | # ownership. Elasticsearch B.V. licenses this file to you under
7 | # the Apache License, Version 2.0 (the "License"); you may
8 | # not use this file except in compliance with the License.
9 | # You may obtain a copy of the License at
10 | #
11 | # http://www.apache.org/licenses/LICENSE-2.0
12 | #
13 | # Unless required by applicable law or agreed to in writing,
14 | # software distributed under the License is distributed on an
15 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
16 | # KIND, either express or implied. See the License for the
17 | # specific language governing permissions and limitations
18 |
19 | """
20 | Tests for the Opbeans Dyno Docker integration
21 | """
22 | from pytest import mark, raises
23 | from unittest import mock
24 | from flask import url_for
25 | import dyno.app.api.docker as dkr
26 |
27 | CONTAINER_NAME_FUZZ = ['a_foo', 'b__foo', '_c_foo']
28 |
29 | @mark.parametrize('container_fuzz', CONTAINER_NAME_FUZZ)
30 | @mock.patch('dyno.app.api.docker.container_list', return_value={'containers': CONTAINER_NAME_FUZZ})
31 | def test_normalize_name_multiple(cl, container_fuzz):
32 | """
33 | GIVEN multiple containers with names which end in `foo`
34 | WHEN the name ending in `foo` is passed into the _normalize_name function
35 | THEN the function raises an exception
36 | """
37 | with raises(Exception, match="more than one"):
38 | dkr._normalize_name('foo')
39 |
40 |
41 | @mock.patch('dyno.app.api.docker.container_list', return_value={'containers': CONTAINER_NAME_FUZZ})
42 | def test_normalize_name_multiple_not_found(cl):
43 | """
44 | GIVEN no containers which end in `baz`
45 | WHEN a name ending in `baz` is passed into the _normalize_name function
46 | THEN an exception is raised
47 | """
48 | with raises(Exception, match="not found"):
49 | dkr._normalize_name('baz')
50 |
51 | @mock.patch('dyno.app.api.docker.client')
52 | def test_list(docker_mock, client):
53 | """
54 | GIVEN an HTTP call to /docker/list
55 | WHEN the results are returned
56 | THEN the results contain a list of running containers
57 | """
58 | fake_container = mock.Mock()
59 | fake_container.name = 'fake_container'
60 | list_mock = mock.Mock(return_value=[fake_container], name='list_mock')
61 | docker_mock.containers.list = list_mock
62 | ret = client.get(url_for('docker.container_list'))
63 | assert ret.json == {'containers': ['fake_container']}
64 |
65 | @mock.patch('dyno.app.api.docker._normalize_name', return_value='fake_container_name')
66 | def test_query(fake_container_patch, docker_inspect, client):
67 | """
68 | GIVEN an HTTP call to /docker/query
69 | WHEN the results are returned
70 | THEN the results contain info about the CPU and memory
71 | """
72 | with mock.patch.object(dkr.low_client, 'inspect_container', return_value=docker_inspect):
73 | ret = client.get(url_for('docker.query'), query_string={'c': 'fake_container_name'})
74 | assert ret.json['CPU'] == 1000
75 | assert ret.json['Mem'] == 200
76 |
77 | @mock.patch('dyno.app.api.docker.client', name='docker_mock')
78 | @mock.patch('dyno.app.api.docker._normalize_name', return_value='fake_container_name', name='normalize_mock')
79 | def test_update(fake_container_patch, docker_mock, client):
80 | """
81 | GIVEN an HTTP call to /docker/update
82 | WHEN the call contains settings to be updated
83 | THEN the settings are updated
84 | """
85 | fake_container = mock.Mock(name='fake_container')
86 | fake_container.name = 'fake_container'
87 | get_mock = mock.Mock(return_value=fake_container, name='get_mock')
88 | docker_mock.containers.get = get_mock
89 | client.get(url_for('docker.update'), query_string={'c': 'opbeans-python', 'component': 'CPU', 'val': 100})
90 |
91 | fake_container.update.assert_called_with(cpu_quota=25990)
92 |
93 | # FIXME This is marked as xfail pending a centralization of the normalization functions
94 | @mark.xfail
95 | @mark.parametrize('val', range(1, 101, 10))
96 | @mock.patch('dyno.app.api.control._range', mock.Mock(return_value={'Fr': [1, 10]}))
97 | def test_normalize(val):
98 | """
99 | GIVEN values between 1-100
100 | WHEN the value is sent to be normalized
101 | THEN the correct normalized value is returned
102 | """
103 | got = dkr._normalize_value('cpu', val)
104 | want = (101 - val) / 10
105 | assert got == want
106 |
107 | # FIXME This is marked as xfail pending a centralization of the normalization functions
108 | @mark.xfail
109 | @mark.parametrize('val', range(1, 10))
110 | @mock.patch('dyno.app.api.control._range', mock.Mock(return_value={'Fr': [1, 10]}))
111 | def test_denormalize(val):
112 | """
113 | GIVEN values between 1-100
114 | WHEN the value is sent to be denormalized
115 | THEN the correct denormalized value is returned
116 | """
117 | got = dkr._denormalize_value('cpu', val)
118 | want = 100 - (val * 10)
119 | assert got == want
120 |
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | .PHONY: help
2 | SHELL := /bin/bash
3 | PYTHON ?= python3
4 | VENV ?= ./venv
5 |
6 | COMPOSE_ARGS ?=
7 |
8 | JUNIT_RESULTS_DIR=tests/results
9 | JUNIT_OPT=--junitxml $(JUNIT_RESULTS_DIR)
10 |
11 | CERT_VALID_DAYS ?= 3650
12 |
13 | APM_SERVER_URL ?= http://apm-server:8200
14 | ES_URL ?= http://elasticsearch:9200
15 | KIBANA_URL ?= http://kibana:5601
16 |
17 | ES_USER ?= admin
18 | ES_PASS ?= changeme
19 | ELASTIC_APM_SECRET_TOKEN ?= SuPeRsEcReT
20 |
21 | PYTHONHTTPSVERIFY ?= 1
22 |
23 | PYTEST_ARGS ?=
24 |
25 | # Make sure we run local versions of everything, particularly commands
26 | # installed into our virtualenv with pip eg. `docker-compose`.
27 | export PATH := ./bin:$(VENV)/bin:$(PATH)
28 |
29 | export APM_SERVER_URL := $(APM_SERVER_URL)
30 | export KIBANA_URL := $(KIBANA_URL)
31 | export ES_URL := $(ES_URL)
32 | export ES_USER := $(ES_USER)
33 | export ES_PASS := $(ES_PASS)
34 | export ELASTIC_APM_SECRET_TOKEN := $(ELASTIC_APM_SECRET_TOKEN)
35 | export PYTHONHTTPSVERIFY := $(PYTHONHTTPSVERIFY)
36 |
37 | help: ## Display this help text
38 | @grep -E '^[a-zA-Z_-]+[%]?:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'
39 |
40 | all: test
41 |
42 | # The tests are written in Python. Make a virtualenv to handle the dependencies.
43 | # Note: a custom VENV path doesn't play nicely with make; it is intended only for CI usage
44 | venv: requirements.txt ## Prepare the virtual environment
45 | test -d $(VENV) || virtualenv -q --python=$(PYTHON) $(VENV);\
46 | source $(VENV)/bin/activate || exit 1;\
47 | pip install -q -r requirements.txt;\
48 | touch $(VENV);
49 |
50 | lint: venv ## Lint the project
51 | source $(VENV)/bin/activate; \
52 | flake8 --ignore=D100,D101,D102,D103,D104,D105,D106,D107,D200,D205,D400,D401,D403,W504 scripts/compose.py scripts/modules
53 |
54 | .PHONY: create-x509-cert
55 | create-x509-cert: ## Create an x509 certificate for use with the test suite
56 | openssl req -x509 -newkey rsa:4096 -keyout scripts/tls/key.pem -out scripts/tls/cert.crt -days "${CERT_VALID_DAYS}" -subj '/CN=apm-server' -nodes
57 |
58 | .PHONY: lint
59 |
60 | build-env: venv ## Build the test environment
61 | source $(VENV)/bin/activate; \
62 | $(PYTHON) scripts/compose.py build $(COMPOSE_ARGS)
63 | docker-compose build --parallel
64 |
65 | start-env: venv ## Start the test environment
66 | source $(VENV)/bin/activate; \
67 | $(PYTHON) scripts/compose.py start $(COMPOSE_ARGS)
68 | docker-compose up -d
69 |
70 | stop-env: venv ## Stop the test environment
71 | source $(VENV)/bin/activate; \
72 | docker-compose down -v --remove-orphans || true
73 |
74 | destroy-env: venv ## Destroy the test environment
75 | [ -n "$$(docker ps -aqf network=apm-integration-testing)" ] && (docker ps -aqf network=apm-integration-testing | xargs -t docker rm -f && docker network rm apm-integration-testing) || true
76 |
77 | # default (all) built for now
78 | build-env-%: venv
79 | $(MAKE) build-env
80 |
81 | # default (all) started for now
82 | env-%: venv
83 | $(MAKE) start-env
84 |
85 | .PHONY: copy-events
86 | copy-events:
87 | docker cp $(shell docker-compose ps | grep intake-receiver | awk '{print $$1}'):/events .
88 |
89 | test: test-all test-helps ## Run all the tests
90 |
91 | test-compose: venv ## Test compose.py
92 | source $(VENV)/bin/activate; \
93 | pytest $(PYTEST_ARGS) scripts/tests/test_*.py --reruns 3 --reruns-delay 5 -v -s $(JUNIT_OPT)/compose-junit.xml
94 |
95 | test-compose-2:
96 | virtualenv --python=python2.7 venv2
97 | ./venv2/bin/pip2 install mock pytest pyyaml
98 | ./venv2/bin/pytest $(PYTEST_ARGS) --noconftest scripts/tests/test_*.py
99 |
100 | SUBCOMMANDS = list-options load-dashboards start status stop upload-sourcemap versions
101 |
102 | test-helps:
103 | $(foreach subcommand,$(SUBCOMMANDS), $(PYTHON) scripts/compose.py $(subcommand) --help > /tmp/file-output && echo "Passed $(subcommand)" || { echo "Failed $(subcommand). See output: " ; cat /tmp/file-output ; exit 1; };)
104 |
105 | test-all: venv test-compose lint ## Run all the tests
106 | source $(VENV)/bin/activate; \
107 | pytest -v -s $(PYTEST_ARGS) $(JUNIT_OPT)/all-junit.xml
108 |
109 | docker-test-%: ## Run a specific dockerized test. Ex: make docker-test-server
110 | TARGET=test-$* $(MAKE) dockerized-test
111 |
112 | dockerized-test: ## Run all the dockerized tests
113 | ./scripts/docker-summary.sh
114 |
115 | @echo running make $(TARGET) inside a container
116 | docker build --pull -t apm-integration-testing .
117 |
118 | mkdir -p -m 777 "$(PWD)/$(JUNIT_RESULTS_DIR)"
119 | chmod 777 "$(PWD)/$(JUNIT_RESULTS_DIR)"
120 | docker run \
121 | --name=apm-integration-testing \
122 | --network=apm-integration-testing \
123 | --security-opt seccomp=unconfined \
124 | -e APM_SERVER_URL \
125 | -e ES_URL \
126 | -e KIBANA_URL \
127 | -e PYTHONDONTWRITEBYTECODE=1 \
128 | -e PYTHONHTTPSVERIFY=$(PYTHONHTTPSVERIFY) \
129 | -e ES_USER \
130 | -e ES_PASS \
131 | -e ELASTIC_APM_SECRET_TOKEN \
132 | -e OTEL_EXPORTER_OTLP_ENDPOINT \
133 | -e OTEL_EXPORTER_OTLP_HEADERS \
134 | -e OTEL_SERVICE_NAME="apm-integration-testing" \
135 | -e TRACEPARENT \
136 | -e OTEL_EXPORTER_OTLP_INSECURE \
137 | -v "$(PWD)/$(JUNIT_RESULTS_DIR)":"/app/$(JUNIT_RESULTS_DIR)" \
138 | --rm \
139 | --entrypoint make \
140 | apm-integration-testing \
141 | $(TARGET)
142 |
143 | @echo running make test-helps outside a container
144 | $(MAKE) test-helps
145 |
146 | .PHONY: test-% docker-test-% dockerized-test docker-compose-wait
147 |
--------------------------------------------------------------------------------
/docker/filebeat/filebeat.6.x-compat.yml:
--------------------------------------------------------------------------------
1 | ---
2 | setup.template.settings:
3 | index.number_of_shards: 1
4 | index.codec: best_compression
5 | index.number_of_replicas: 0
6 |
7 | setup.kibana:
8 | host: "${KIBANA_HOST:kibana:5601}"
9 |
10 | output.elasticsearch:
11 | hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
12 | username: '${ELASTICSEARCH_USERNAME:}'
13 | password: '${ELASTICSEARCH_PASSWORD:}'
14 |
15 | logging.json: true
16 | logging.metrics.enabled: false
17 |
18 | xpack.monitoring.enabled: true
19 | ###################################################################################################
20 | ## autodiscover
21 | ###################################################################################################
22 | filebeat.autodiscover:
23 | providers:
24 | - type: docker
25 | templates:
26 | - condition:
27 | contains:
28 | docker.container.image: "apm-server"
29 | config:
30 | - type: docker
31 | containers.ids:
32 | - "${data.docker.container.id}"
33 | fields_under_root: true
34 | json.keys_under_root: true
35 | json.overwrite_keys: true
36 | json.add_error_key: true
37 | json.message_key: message
38 | - condition:
39 | contains:
40 | docker.container.image: "filebeat"
41 | config:
42 | - type: docker
43 | containers.ids:
44 | - "${data.docker.container.id}"
45 | fields_under_root: true
46 | json.keys_under_root: true
47 | json.overwrite_keys: true
48 | json.add_error_key: true
49 | json.message_key: message
50 | - condition:
51 | contains:
52 | docker.container.image: "heartbeat"
53 | config:
54 | - type: docker
55 | containers.ids:
56 | - "${data.docker.container.id}"
57 | fields_under_root: true
58 | json.keys_under_root: true
59 | json.overwrite_keys: true
60 | json.add_error_key: true
61 | json.message_key: message
62 | - condition:
63 | contains:
64 | docker.container.image: "kibana"
65 | config:
66 | - type: docker
67 | containers.ids:
68 | - "${data.docker.container.id}"
69 | fields_under_root: true
70 | json.keys_under_root: true
71 | json.overwrite_keys: true
72 | json.add_error_key: true
73 | json.message_key: message
74 | - condition:
75 | contains:
76 | docker.container.image: "metricbeat"
77 | config:
78 | - type: docker
79 | containers.ids:
80 | - "${data.docker.container.id}"
81 | fields_under_root: true
82 | json.keys_under_root: true
83 | json.overwrite_keys: true
84 | json.add_error_key: true
85 | json.message_key: message
86 | - condition:
87 | contains:
88 | docker.container.image: "opbeans-node"
89 | config:
90 | - type: docker
91 | containers.ids:
92 | - "${data.docker.container.id}"
93 | multiline.pattern: '^ '
94 | multiline.negate: false
95 | multiline.match: after
96 | - condition:
97 | contains:
98 | docker.container.image: "opbeans-go"
99 | config:
100 | - type: docker
101 | containers.ids:
102 | - "${data.docker.container.id}"
103 | fields_under_root: true
104 | json.keys_under_root: true
105 | json.overwrite_keys: true
106 | json.add_error_key: true
107 | json.message_key: message
108 | - condition:
109 | contains:
110 | docker.container.image: "postgres"
111 | config:
112 | - type: docker
113 | containers.ids:
114 | - "${data.docker.container.id}"
115 | multiline.pattern: '^\t'
116 | multiline.negate: false
117 | multiline.match: after
118 | - condition:
119 | and:
120 | - not:
121 | contains:
122 | docker.container.image: "apm-server"
123 | - not:
124 | contains:
125 | docker.container.image: "filebeat"
126 | - not:
127 | contains:
128 | docker.container.image: "heartbeat"
129 | - not:
130 | contains:
131 | docker.container.image: "kibana"
132 | - not:
133 | contains:
134 | docker.container.image: "metricbeat"
135 | - not:
136 | contains:
137 | docker.container.image: "opbeans-node"
138 | - not:
139 | contains:
140 | docker.container.image: "opbeans-go"
141 | - not:
142 | contains:
143 | docker.container.image: "postgres"
144 | config:
145 | - type: docker
146 | containers.ids:
147 | - "${data.docker.container.id}"
148 |
--------------------------------------------------------------------------------
/QUICKSTART.md:
--------------------------------------------------------------------------------
1 | # APM LocalEnv Quickstart
2 |
3 | In addition to supporting end-to-end (e.g. agent -> apm server -> elasticsearch <- kibana) development and testing of Elastic APM, this repo also serves as a nice way to spin up a local environment of Elastic APM. The [README](/README.md) has very detailed instructions, but focuses mostly on using the repo for development. This doc concentrates on just getting it running. For advanced topics, check out the main README.
4 |
5 | Note that a "local environment" can be on your actual local machine, or on a cloud instance with port forwards set up.
6 |
7 | [](https://apm-ci.elastic.co/job/elastic+apm-integration-testing+main+push/)
8 |
9 | ## Prerequisites
10 |
11 | The basic requirements for starting a local environment are:
12 |
13 | - Docker
14 | - Python (version 3 preferred)
15 |
16 | This repo is tested with Python 3, but a best effort is made to keep starting/stopping environments working with Python 2.7.
17 |
18 | ### Docker
19 |
20 | [Installation instructions](https://www.docker.com/community-edition)
21 |
22 | ### Python 3
23 |
24 | - Windows: [Installation instructions](https://www.python.org/downloads/windows/)
25 | - Mac (using [Homebrew](https://brew.sh/)):
26 | ```sh
27 | brew install python
28 | ```
29 | - Debian/Ubuntu
30 | ```sh
31 | sudo apt-get install python3
32 | ```
33 |
34 | ## Running Local Environments
35 |
36 | ### Starting an Environment
37 |
38 | The tool that we use to start and stop the environment is `./scripts/compose.py`. It provides a handy CLI for starting an APM environment using docker-compose.
39 |
40 | #### TL;DR
41 |
42 | Start an env by running:
43 | `./scripts/compose.py start --all 6.4 --release`
44 |
45 | This will start a complete 6.4 environment, which includes all of the sample apps and hits each of them with a load generator. Once that is done (and everything has started up), you can navigate to [your local Kibana instance](http://localhost:5601/app/apm#/).
46 |
47 | #### Details
48 |
49 | If you don't want to start everything (for example, on a laptop with limited resources while trying to run Zoom at the same time), you can pick and choose which services to run. Say, for example, that you want to run node, java, and rum. You could use this command:
50 | ```console
51 | ./scripts/compose.py start \
52 | --release \
53 | --with-opbeans-node \
54 | --with-opbeans-rum \
55 | --with-opbeans-java \
56 | 6.4
57 | ```
58 |
59 | There are many other configuration options, but this is a quickstart. See the [README](/README.md).
60 |
61 | If you want to see which services are available to start, you can run `./scripts/compose.py start --help | grep "^ --with-opbeans"`, which will print a filtered list of the agent envs:
62 | ```console
63 | --with-opbeans-dotnet Enable opbeans-dotnet
64 | --with-opbeans-go Enable opbeans-go
65 | --with-opbeans-java Enable opbeans-java
66 | --with-opbeans-node Enable opbeans-node
67 | --with-opbeans-python
68 | --with-opbeans-ruby Enable opbeans-ruby
69 | --with-opbeans-rum Enable opbeans-rum
70 | ```
71 | This way, when new agents are added, we don't have to update these instructions.
72 |
73 |
74 | **Bonus**: With either the `all` or individual methods above, you can also pass `--with-metricbeat` or `--with-filebeat` flags, which will also set up appropriate containers and dashboards. One side note here is that you will probably need to set a default index pattern.
75 |
76 | #### Status
77 |
78 | Each app gets its own port. You can actually hit them with your browser. They all have a similar look & feel.
79 |
80 | You can check the status of your APM cluster with `./scripts/compose.py status`, which basically calls:
81 |
82 | `docker ps --format 'table {{.Names}}\t{{.Ports}}'...`
83 |
84 | Here is a tabular view, excluding non-essentials:
85 |
86 | |Container Name | Link |
87 | |--------------------------------------------|----------------------------------------|
88 | |`localtesting_6.4.0_opbeans-rum` |[opbeans-rum](http://localhost:9222) (note - this needs chrome) |
89 | |`localtesting_6.4.0_opbeans-java` |[opbeans-java](http://localhost:3002) |
90 | |`localtesting_6.4.0_opbeans-dotnet` |[opbeans-dotnet](http://localhost:3004) |
91 | |`localtesting_6.4.0_opbeans-go` |[opbeans-go](http://localhost:3003) |
92 | |`localtesting_6.4.0_opbeans-node` |[opbeans-node](http://localhost:3000) |
93 | |`localtesting_6.4.0_opbeans-ruby` |[opbeans-ruby](http://localhost:3001) |
94 | |`localtesting_6.4.0_opbeans-python` |[opbeans-python](http://localhost:8000) |
95 | |`localtesting_6.4.0_kibana` |[kibana](http://localhost:5601) |
96 | |`localtesting_6.4.0_elasticsearch` |[elasticsearch](http://localhost:9200) |
97 | |`localtesting_6.4.0_apm-server` |[APM Server](http://localhost:8200) |
98 |
99 | You can attach your own APM agent to the APM server if you wish.
100 |
101 | ### Note for Cloud Instances
102 |
103 | If you want to run this on a cloud server (GCP, AWS), you will need to set up port forwarding to access them, and the easiest way to do this is through your `~/.ssh/config` file. My section for my cloud box looks like this:
104 |
105 | ```
106 | Host gcptunnel
107 | HostName