├── .gitbook
│   └── assets
│       ├── a-typical-vernemq-deployment (1).svg
│       ├── a-typical-vernemq-deployment-1.png
│       ├── a-typical-vernemq-deployment-1.svg
│       ├── a-typical-vernemq-deployment.png
│       ├── a-typical-vernemq-deployment.svg
│       ├── all.png
│       ├── all_till_ok.png
│       ├── enhanced_authflow.svg
│       ├── enhanced_authflow5.svg
│       ├── flow_legend.png
│       ├── local_only (1).png
│       ├── local_only (1).svg
│       ├── local_only.png
│       ├── local_only.svg
│       ├── only.png
│       ├── prefer_local (1).png
│       ├── prefer_local (1).svg
│       ├── prefer_local.png
│       ├── prefer_local.svg
│       ├── publish_flow.png
│       ├── publish_flow.svg
│       ├── publish_flow5.svg
│       ├── random.png
│       ├── random.svg
│       ├── session-lifecycle.png
│       ├── session_lifecycle.png
│       ├── session_lifecycle.svg
│       ├── session_lifecycle5.svg
│       ├── subscription_flow.png
│       ├── subscription_flow.svg
│       ├── subscription_flow5.svg
│       ├── vernemq_status_page.png
│       └── vmq-status-page.png
├── .gitignore
├── README.md
├── SUMMARY.md
├── administration
│   ├── certificates.md
│   ├── config_values.md
│   ├── http-administration.md
│   ├── introduction.md
│   ├── listeners.md
│   ├── managing-sessions.md
│   ├── output_format.md
│   ├── retained-store.md
│   └── tracing.md
├── clustering
│   ├── communication.md
│   ├── introduction.md
│   └── netsplits.md
├── configuration
│   ├── advanced_options.md
│   ├── balancing.md
│   ├── bridge.md
│   ├── db-auth.md
│   ├── file-auth.md
│   ├── http-listeners.md
│   ├── http-pub.md
│   ├── introduction.md
│   ├── listeners.md
│   ├── logging.md
│   ├── nonstandard.md
│   ├── options.md
│   ├── plugins.md
│   ├── schema-files.md
│   ├── shared_subscriptions.md
│   ├── storage.md
│   ├── the-vernemq-conf-file.md
│   └── websockets.md
├── getting-started.md
├── guides
│   ├── change-open-file-limits.md
│   ├── clustering-during-development.md
│   ├── loadtesting.md
│   ├── migration-to-2-0.md
│   ├── not-a-tuning-guide.md
│   ├── typical-vernemq-deployment.md
│   └── vernemq-on-kubernetes.md
├── installation
│   ├── accepting-the-vernemq-eula.md
│   ├── centos_and_redhat.md
│   ├── debian_and_ubuntu.md
│   └── docker.md
├── misc
│   ├── change-open-file-limits.md
│   ├── loadtesting.md
│   └── not-a-tuning-guide.md
├── monitoring
│   ├── graphite.md
│   ├── health-check.md
│   ├── introduction.md
│   ├── netdata.md
│   ├── prometheus.md
│   ├── status.md
│   └── systree.md
└── plugindevelopment
    ├── boilerplate.md
    ├── enhancedauthflow.md
    ├── introduction.md
    ├── luaplugins.md
    ├── publishflow.md
    ├── sessionlifecycle.md
    ├── subscribeflow.md
    └── webhookplugins.md
/.gitbook/assets/a-typical-vernemq-deployment-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/a-typical-vernemq-deployment-1.png
--------------------------------------------------------------------------------
/.gitbook/assets/a-typical-vernemq-deployment.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/a-typical-vernemq-deployment.png
--------------------------------------------------------------------------------
/.gitbook/assets/all.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/all.png
--------------------------------------------------------------------------------
/.gitbook/assets/all_till_ok.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/all_till_ok.png
--------------------------------------------------------------------------------
/.gitbook/assets/enhanced_authflow.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.gitbook/assets/enhanced_authflow5.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.gitbook/assets/flow_legend.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/flow_legend.png
--------------------------------------------------------------------------------
/.gitbook/assets/local_only (1).png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/local_only (1).png
--------------------------------------------------------------------------------
/.gitbook/assets/local_only (1).svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.gitbook/assets/local_only.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/local_only.png
--------------------------------------------------------------------------------
/.gitbook/assets/only.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/only.png
--------------------------------------------------------------------------------
/.gitbook/assets/prefer_local (1).png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/prefer_local (1).png
--------------------------------------------------------------------------------
/.gitbook/assets/prefer_local (1).svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/.gitbook/assets/prefer_local.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/prefer_local.png
--------------------------------------------------------------------------------
/.gitbook/assets/publish_flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/publish_flow.png
--------------------------------------------------------------------------------
/.gitbook/assets/random.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/random.png
--------------------------------------------------------------------------------
/.gitbook/assets/session-lifecycle.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/session-lifecycle.png
--------------------------------------------------------------------------------
/.gitbook/assets/session_lifecycle.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/session_lifecycle.png
--------------------------------------------------------------------------------
/.gitbook/assets/subscription_flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/subscription_flow.png
--------------------------------------------------------------------------------
/.gitbook/assets/vernemq_status_page.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/vernemq_status_page.png
--------------------------------------------------------------------------------
/.gitbook/assets/vmq-status-page.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/vernemq/vmq-docs/e69b3b3db7ca7fed10e770d875a4dd6a016a8e05/.gitbook/assets/vmq-status-page.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Logs
2 | logs
3 | *.log
4 | npm-debug.log*
5 | yarn-debug.log*
6 | yarn-error.log*
7 |
8 | # Runtime data
9 | pids
10 | *.pid
11 | *.seed
12 | *.pid.lock
13 |
14 | # Directory for instrumented libs generated by jscoverage/JSCover
15 | lib-cov
16 |
17 | # Coverage directory used by tools like istanbul
18 | coverage
19 |
20 | # nyc test coverage
21 | .nyc_output
22 |
23 | # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
24 | .grunt
25 |
26 | # Bower dependency directory (https://bower.io/)
27 | bower_components
28 |
29 | # node-waf configuration
30 | .lock-wscript
31 |
32 | # Compiled binary addons (https://nodejs.org/api/addons.html)
33 | build/Release
34 |
35 | # Dependency directories
36 | node_modules/
37 | jspm_packages/
38 |
39 | # TypeScript v1 declaration files
40 | typings/
41 |
42 | # Optional npm cache directory
43 | .npm
44 |
45 | # Optional eslint cache
46 | .eslintcache
47 |
48 | # Optional REPL history
49 | .node_repl_history
50 |
51 | # Output of 'npm pack'
52 | *.tgz
53 |
54 | # Yarn Integrity file
55 | .yarn-integrity
56 |
57 | # dotenv environment variables file
58 | .env
59 |
60 | # parcel-bundler cache (https://parceljs.org/)
61 | .cache
62 |
63 | # next.js build output
64 | .next
65 |
66 | # nuxt.js build output
67 | .nuxt
68 |
69 | # vuepress build output
70 | .vuepress/dist
71 |
72 | # Serverless directories
73 | .serverless
74 |
75 | # emacs save-files
76 | *~
77 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Welcome
2 |
3 | Welcome to the VerneMQ documentation! This is a reference guide for most of the available features and options of VerneMQ. The [Getting Started guide](getting-started.md) might be a good entry point.
4 |
5 | The VerneMQ documentation is based on the [VerneMQ Documentation project](https://github.com/vernemq/vmq-docs). Any changes on GitHub are automatically deployed to the [VerneMQ online documentation](https://docs.vernemq.com/).
6 |
7 | For a more general overview of VerneMQ and MQTT, you might want to start with the [introduction](https://vernemq.com/intro/index.html).
8 |
9 | For downloading the subscription-based binary VerneMQ packages and/or a quick description on how to compile VerneMQ from sources, see [Downloads](https://vernemq.com/downloads/index.html).
10 |
11 | ## How to help improve this documentation
12 |
13 | The [VerneMQ Documentation project](https://github.com/vernemq/vmq-docs) is an open-source effort, and your contributions are very welcome and appreciated.
14 | You can contribute on all levels:
15 | - Language, style and typos
16 | - Fixing obvious documentation errors and gaps
17 | - Providing more details and/or examples for specific topics
18 | - Extending the documentation where you find this useful to do
19 |
20 | Note that the documentation is versioned according to the VerneMQ releases. You can click the "Edit on GitHub" button in the upper right corner of every page to check which branch and document you are on. You can then create a Pull Request (PR) against that branch from your fork of the VerneMQ documentation repository. (Direct edits on GitHub are possible for members of the documentation repository.)
21 |
22 |
--------------------------------------------------------------------------------
/SUMMARY.md:
--------------------------------------------------------------------------------
1 | # Table of contents
2 |
3 | * [Welcome](README.md)
4 | * [Getting Started](getting-started.md)
5 | * [Downloads](https://vernemq.com/downloads)
6 | * [VerneMQ / MQTT Introduction](https://vernemq.com/intro)
7 |
8 | ## Installing VerneMQ
9 |
10 | * [Installing on Debian and Ubuntu](installation/debian_and_ubuntu.md)
11 | * [Installing on CentOS and RHEL](installation/centos_and_redhat.md)
12 | * [Running VerneMQ using Docker](installation/docker.md)
13 |
14 | ## Configuring VerneMQ
15 |
16 | * [Introduction](configuration/introduction.md)
17 | * [The VerneMQ conf file](configuration/the-vernemq-conf-file.md)
18 | * [Schema Files](configuration/schema-files.md)
19 | * [Auth using files](configuration/file-auth.md)
20 | * [Auth using a database](configuration/db-auth.md)
21 | * [MQTT Options](configuration/options.md)
22 | * [MQTT Listeners](configuration/listeners.md)
23 | * [HTTP Listeners](configuration/http-listeners.md)
24 | * [Non-standard MQTT options](configuration/nonstandard.md)
25 | * [Websockets](configuration/websockets.md)
26 | * [Logging](configuration/logging.md)
27 | * [Consumer session balancing](configuration/balancing.md)
28 | * [Plugins](configuration/plugins.md)
29 | * [Shared subscriptions](configuration/shared_subscriptions.md)
30 | * [Advanced Options](configuration/advanced_options.md)
31 | * [Storage](configuration/storage.md)
32 | * [MQTT Bridge](configuration/bridge.md)
33 | * [REST Publisher](configuration/http-pub.md)
34 |
35 | ## VerneMQ Clustering
36 |
37 | * [Introduction](clustering/introduction.md)
38 | * [Inter-node Communication](clustering/communication.md)
39 | * [Dealing with Netsplits](clustering/netsplits.md)
40 |
41 | ## Live Administration
42 |
43 | * [Introduction](administration/introduction.md)
44 | * [Inspecting and managing sessions](administration/managing-sessions.md)
45 | * [Retained messages](administration/retained-store.md)
46 | * [Live reconfiguration](administration/config_values.md)
47 | * [Managing Listeners](administration/listeners.md)
48 | * [Certificate Management](administration/certificates.md)
49 | * [HTTP API](administration/http-administration.md)
50 | * [Tracing](administration/tracing.md)
51 | * [Output Format](administration/output_format.md)
52 |
53 | ## Monitoring
54 |
55 | * [Introduction](monitoring/introduction.md)
56 | * [$SYSTree](monitoring/systree.md)
57 | * [Graphite](monitoring/graphite.md)
58 | * [Netdata](monitoring/netdata.md)
59 | * [Prometheus](monitoring/prometheus.md)
60 | * [Health Checker](monitoring/health-check.md)
61 | * [Status Page](monitoring/status.md)
62 |
63 | ## Plugin Development
64 |
65 | * [Introduction](plugindevelopment/introduction.md)
66 | * [Session lifecycle](plugindevelopment/sessionlifecycle.md)
67 | * [Subscribe Flow](plugindevelopment/subscribeflow.md)
68 | * [Publish Flow](plugindevelopment/publishflow.md)
69 | * [Enhanced Auth Flow](plugindevelopment/enhancedauthflow.md)
70 | * [Erlang Boilerplate](plugindevelopment/boilerplate.md)
71 | * [Lua Scripting Support](plugindevelopment/luaplugins.md)
72 | * [Webhooks](plugindevelopment/webhookplugins.md)
73 |
74 | ## Misc
75 |
76 | * [Loadtesting VerneMQ](misc/loadtesting.md)
77 | * [Not a tuning guide](misc/not-a-tuning-guide.md)
78 | * [Change Open File Limits](misc/change-open-file-limits.md)
79 |
80 | ## Guides
81 |
82 | * [A typical VerneMQ deployment](guides/typical-vernemq-deployment.md)
83 | * [VerneMQ on Kubernetes](guides/vernemq-on-kubernetes.md)
84 | * [Loadtesting VerneMQ](guides/loadtesting.md)
85 | * [Clustering during development](guides/clustering-during-development.md)
86 | * [Not a tuning guide](guides/not-a-tuning-guide.md)
87 | * [Change Open File Limits](guides/change-open-file-limits.md)
88 | * [Migrating to 2.0](guides/migration-to-2-0.md)
89 |
--------------------------------------------------------------------------------
/administration/certificates.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Certificate management
3 | ---
4 |
5 | # Certificates
6 | VerneMQ supports different Transport Layer Security (TLS) options, which allow for secure communication between MQTT clients and VerneMQ. Certificates typically have only a limited validity (for example one year), after which they have to be replaced. VerneMQ allows you to replace a certificate without interrupting active connections.
7 |
8 | ## Replace a certificate
9 | Replacing a certificate is straightforward. One just needs to replace (overwrite) the corresponding PEM files, and VerneMQ will pick up the new certificates.
10 |
11 | For example, if you have the following configuration
12 |
13 | ```text
14 | listener.ssl.cafile = /etc/ssl/cacerts.pem
15 | listener.ssl.certfile = /etc/ssl/cert.pem
16 | listener.ssl.keyfile = /etc/ssl/key.pem
17 |
18 | listener.ssl.default = 127.0.0.1:8883
19 | ```
20 |
21 | the files cacerts.pem, cert.pem and key.pem can be overwritten (on the filesystem!) with new certificates. VerneMQ will pick up the new certificates after some time (by default around 2 minutes). It is possible to invalidate the cached certificates immediately by issuing the following command:
22 |
23 | ```text
24 | vmq-admin tls clear-pem-cache
25 | ```
26 |
27 | One can use the openssl s_client tool to verify that the new certificate has been deployed:
28 | ```text
29 | openssl s_client -host 127.0.0.1 -port 8883
30 | ```
31 |
32 |
33 | ## Running sessions and certificate validity
34 | Unless the client is implemented otherwise, all active connections will remain active. Please note that certificate validity is checked during the TLS/SSL handshake, which happens once at the beginning of the session. Running sessions are therefore not affected by an expired certificate.
35 |
36 | In case you want to invalidate all existing connections it is recommended to stop/start the listener.
37 |
38 | ```text
39 | vmq-admin listener stop
40 | vmq-admin listener start
41 | ```
42 |
43 | If you generally want to force your clients to reconnect after a specified period of time, you can configure a maximum connection lifetime, after which a client is disconnected by the broker.
44 |
45 | ```text
46 | listener.ssl.default.max_connection_lifetime = 25000
47 | ```
48 |
49 |
--------------------------------------------------------------------------------
/administration/config_values.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Managing VerneMQ live config values.
3 | ---
4 |
5 | # Live reconfiguration
6 |
7 | You can dynamically re-configure most of VerneMQ's settings on a running node by using the `vmq-admin set` command.
8 |
9 | The following config values can be handled dynamically:
10 |
11 | ```text
12 | allow_anonymous
13 | topic_alias_max_broker
14 | receive_max_broker
15 | vmq_acl.acl_file
16 | graphite_host
17 | vmq_acl.acl_reload_interval
18 | graphite_enabled
19 | queue_type
20 | suppress_lwt_on_session_takeover
21 | max_message_size
22 | vmq_passwd.password_file
23 | graphite_port
24 | max_client_id_size
25 | upgrade_outgoing_qos
26 | max_message_rate
27 | graphite_interval
28 | allow_multiple_sessions
29 | systree_enabled
30 | max_last_will_delay
31 | retry_interval
32 | receive_max_client
33 | max_offline_messages
34 | max_online_messages
35 | max_inflight_messages
36 | allow_register_during_netsplit
37 | vmq_passwd.password_reload_interval
38 | topic_alias_max_client
39 | systree_interval
40 | allow_publish_during_netsplit
41 | coordinate_registrations
42 | remote_enqueue_timeout
43 | persistent_client_expiration
44 | allow_unsubscribe_during_netsplit
45 | graphite_include_labels
46 | shared_subscription_policy
47 | queue_deliver_mode
48 | allow_subscribe_during_netsplit
49 | ```
50 |
51 | {% hint style="warning" %}
52 | Settings dynamically configured with the `vmq-admin set` command will be reset by vernemq.conf upon broker restart.
53 | {% endhint %}
54 |
55 | ## Setting a value for the local node
56 |
57 | Let's change the `max_client_id_size` as an example. \(We might have noticed that some clients can't log in because their client ID is too long, but we don't want to restart the broker for that.\) Note that you can also set multiple values with the same command, as shown below.
58 |
59 | ```text
60 | vmq-admin set max_client_id_size=45
61 | ```
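You can combine several of the settings listed above in one invocation, for example (a sketch; the second value is purely illustrative):

```text
vmq-admin set max_client_id_size=45 max_message_size=2048
```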
62 |
63 | ## Setting a value for an arbitrary cluster node
64 |
65 | ```text
66 | vmq-admin set max_client_id_size=45 --node=VerneMQ20@192.168.1.20
67 | ```
68 |
69 | ## Setting a value for all cluster nodes
70 |
71 | ```text
72 | vmq-admin set max_client_id_size=45 --all
73 | ```
74 |
75 | ## Show current VerneMQ config values
76 |
77 | ### For the local node
78 |
79 | You can show one or multiple values in a simple table:
80 |
81 | ```text
82 | vmq-admin show max_client_id_size retry_interval
83 | ```
84 |
85 | ```text
86 | +----------------------+------------------+--------------+
87 | | node |max_client_id_size|retry_interval|
88 | +----------------------+------------------+--------------+
89 | |VerneMQ20@192.168.1.50| 28 | 20 |
90 | +----------------------+------------------+--------------+
91 | ```
92 |
93 |
94 |
95 | ### For an arbitrary node
96 |
97 | ```text
98 | vmq-admin show max_client_id_size retry_interval --node VerneMQ20@192.168.1.20
99 | ```
100 |
101 | ### For all cluster nodes
102 |
103 | ```text
104 | vmq-admin show max_client_id_size retry_interval --all
105 | ```
106 |
107 | ```text
108 | +----------------------+------------------+--------------+
109 | | node |max_client_id_size|retry_interval|
110 | +----------------------+------------------+--------------+
111 | |VerneMQ30@192.168.1.30| 33 | 20 |
112 | |VerneMQ40@192.168.1.40| 33 | 20 |
113 | |VerneMQ10@192.168.1.10| 33 | 20 |
114 | |VerneMQ50@192.168.1.50| 33 | 20 |
115 | |VerneMQ20@192.168.1.20| 28 | 20 |
116 | +----------------------+------------------+--------------+
117 | ```
118 |
119 |
--------------------------------------------------------------------------------
/administration/http-administration.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Everything you need to know to work with the VerneMQ HTTP administration
4 | interface
5 | ---
6 |
7 | # HTTP API
8 |
9 | The VerneMQ HTTP API is enabled by default and installs an HTTP handler on `http://localhost:8888/api/v1`. To read more about configuring the HTTP listener, see [HTTP Listener Configuration](../configuration/http-listeners.md). You can configure an HTTP listener or an HTTPS listener to serve the HTTP API v1.
10 |
11 | ## Managing API keys
12 | The VerneMQ HTTP API uses basic authentication, where an API key is passed as the username and the password is left empty; as an alternative, the x-api-key header option can be used. API keys have a scope and can optionally have an expiry date. So the first step to use the HTTP API is to create an API key.
13 |
14 | ### Scopes
15 | Each HTTP module can be protected by an API key. An API key can be limited to a certain HTTP module or further restrict some functionality within that module. The scope used by the management API is "mgmt". Currently, the following scopes are supported: "status", "mgmt", "metrics", "health".
16 |
17 | ### Create API key
18 | ```text
19 | $ vmq-admin api-key create
20 | JxctXkZ1OTVnlwvguSCE9KtujacMkOLF
21 | ```
22 | or with scope and an expiry date (in local time)
23 |
24 | ```text
25 | $ vmq-admin api-key create scope=mgmt expires=2023-04-04T12:00:00
26 | q85i5HbFCDdAVLNJuOj48QktDbchvOMS
27 | ```
28 |
29 | The keys are persisted and available on all cluster nodes.
30 |
31 | ### List API keys
32 | To list existing keys do:
33 |
34 | ```text
35 | $ vmq-admin api-key show
36 | +----------------------------------+-------+---------------------+-------------+
37 | | Key | Scope | Expires (UTC) | has expired |
38 | +----------------------------------+-------+---------------------+-------------+
39 | | q85i5HbFCDdAVLNJuOj48QktDbchvOMS | mgmt | 2023-04-04 10:00:00 | false |
40 | +----------------------------------+-------+---------------------+-------------+
41 | | JxctXkZ1OTVnlwvguSCE9KtujacMkOLF | mgmt | never | false |
42 | +----------------------------------+-------+---------------------+-------------+
43 | ```
44 |
45 | ### Add API key
46 | To add an API key of your own choosing, do:
47 |
48 | ```text
49 | vmq-admin api-key add key=mykey
50 | ```
51 |
52 | ### Delete API key
53 | To delete an API key do:
54 |
55 | ```text
56 | vmq-admin api-key delete key=JxctXkZ1OTVnlwvguSCE9KtujacMkOLF
57 | ```
58 |
59 | ### Advanced Settings (key rotation, key complexity)
60 | You can specify the minimal length of an API key (default: 0) in vernemq.conf
61 | ```text
62 | min_apikey_length = 30
63 | ```
64 |
65 | or set a maximum duration of an API key before it expires (default: undefined)
66 | ```text
67 | max_apikey_expiry_days = 180
68 | ```
69 |
70 | Please note that changing those settings after some API keys have already been created has no influence on the existing keys.
71 |
72 | You can enable or disable API key authentication per module, or per module per listener.
73 |
74 | ```text
75 | http_module.$module.auth.mode
76 | listener.http.$name.http_module.$module.auth.mode
77 | listener.https.$name.http_module.$module.auth.mode
78 | ```
79 |
80 | Possible modules are `vmq_metrics_http`, `vmq_http_mgmt_api`, `vmq_status_http` and `vmq_health_http`. Possible values for `auth.mode` are `noauth` or `apikey`.
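For example, to expose the metrics module without API key authentication (a sketch combining the module names and values above):

```text
http_module.vmq_metrics_http.auth.mode = noauth
```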
81 |
82 |
83 | ## API usage
84 |
85 | The VerneMQ HTTP API is a wrapper over the [`vmq-admin`](introduction.md) CLI tool, and anything that can be done using `vmq-admin` can be done using the HTTP API. Note that the HTTP API is therefore subject to any changes made to the `vmq-admin` tools and their flags & options structure. All requests are performed using an HTTP GET, and if no errors occurred, an HTTP 200 OK code is returned with a possible non-empty JSON payload.
86 |
87 | The API is using basic auth where the API key is passed as the username. An example using `curl` would look like this:
88 |
89 | ```text
90 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/show"
91 | ```
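The same key can also be passed with curl's `-u` option (note the trailing colon for the empty password), or in the `x-api-key` header mentioned above:

```text
curl -u "JxctXkZ1OTVnlwvguSCE9KtujacMkOLF:" "http://localhost:8888/api/v1/session/show"
curl -H "x-api-key: JxctXkZ1OTVnlwvguSCE9KtujacMkOLF" "http://localhost:8888/api/v1/session/show"
```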
92 |
93 | The mapping between `vmq-admin` and the HTTP API is straightforward, and if one is already familiar with how the `vmq-admin` tool works, working with the API should be easy. The mapping works such that the command part of a `vmq-admin` invocation is turned into a path, and the options and flags are turned into the query string.
94 |
95 | A mandatory parameter like the `client-id` in the `vmq-admin session disconnect client-id=myclient` command should be translated as: `?client-id=myclient`.
96 |
97 | An optional flag like `--cleanup` in the `vmq-admin session disconnect client-id=myclient --cleanup` command should be translated as: `&--cleanup`
98 |
99 | Let's look at the cluster join command as an example, which looks like this:
100 |
101 | ```text
102 | vmq-admin cluster join discovery-node=NodeB@10.0.0.2
103 | ```
104 |
105 | This turns into a GET request:
106 |
107 | ```text
108 | GET /api/v1/cluster/join?discovery-node=NodeB@10.0.0.2
109 | ```
110 |
111 | To test, run it with `curl`:
112 |
113 | ```text
114 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/cluster/join?discovery-node=NodeB@10.0.0.2"
115 | ```
116 |
117 | And the returned response would look like:
118 |
119 | ```javascript
120 | {
121 | "text": "Done",
122 | "type": "text"
123 | }
124 | ```
125 |
126 | Below are some other examples.
127 |
128 | ### Get cluster status information
129 |
130 | Request:
131 |
132 | ```text
133 | GET /api/v1/cluster/show
134 | ```
135 |
136 | Curl:
137 |
138 | ```text
139 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/cluster/show"
140 | ```
141 |
142 | Response:
143 |
144 | ```javascript
145 | {
146 | "type" : "table",
147 | "table" : [
148 | {
149 | "Running" : true,
150 | "Node" : "VerneMQ@127.0.0.1"
151 | }
152 | ]
153 | }
154 | ```
155 |
156 | ### Retrieve session information
157 |
158 | Request:
159 |
160 | ```text
161 | GET /api/v1/session/show
162 | ```
163 |
164 | Curl:
165 |
166 | ```text
167 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/show"
168 | ```
169 |
170 | Response:
171 |
172 | ```javascript
173 | {
174 | "type" : "table",
175 | "table" : [
176 | {
177 | "user" : "client1",
178 | "peer_port" : 50402,
179 | "is_online" : true,
180 | "mountpoint" : "",
181 | "client_id" : "mosq/qJpvoqe1PA4lBN1e4E",
182 | "peer_host" : "127.0.0.1"
183 | },
184 | {
185 | "user" : "client2",
186 | "is_online" : true,
187 | "peer_port" : 50406,
188 | "peer_host" : "127.0.0.1",
189 | "client_id" : "mosq/tikkXdlM28PaznBv2T",
190 | "mountpoint" : ""
191 | }
192 | ]
193 | }
194 |
195 | ```
196 |
197 | ### List all installed listeners
198 |
199 | Request:
200 |
201 | ```text
202 | GET /api/v1/listener/show
203 | ```
204 |
205 | Curl:
206 |
207 | ```text
208 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/listener/show"
209 | ```
210 |
211 | Response:
212 |
213 | ```javascript
214 | {
215 | "type" : "table",
216 | "table" : [
217 | {
218 | "max_conns" : 10000,
219 | "port" : "8888",
220 | "mountpoint" : "",
221 | "ip" : "127.0.0.1",
222 | "type" : "http",
223 | "status" : "running"
224 | },
225 | {
226 | "status" : "running",
227 | "max_conns" : 10000,
228 | "port" : "44053",
229 | "mountpoint" : "",
230 | "ip" : "0.0.0.0",
231 | "type" : "vmq"
232 | },
233 | {
234 | "max_conns" : 10000,
235 | "port" : "1883",
236 | "mountpoint" : "",
237 | "ip" : "127.0.0.1",
238 | "type" : "mqtt",
239 | "status" : "running"
240 | }
241 | ]
242 | }
243 | ```
244 |
245 | ### Retrieve plugin information
246 |
247 | Request:
248 |
249 | ```text
250 | GET /api/v1/plugin/show
251 | ```
252 |
253 | Curl:
254 |
255 | ```text
256 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/plugin/show"
257 | ```
258 |
259 | Response:
260 |
261 | ```javascript
262 | {
263 | "type" : "table",
264 | "table" : [
265 | {
266 | "Hook(s)" : "auth_on_register\n",
267 | "Plugin" : "vmq_passwd",
268 | "M:F/A" : "vmq_passwd:auth_on_register/5\n",
269 | "Type" : "application"
270 | },
271 | {
272 | "Type" : "application",
273 | "M:F/A" : "vmq_acl:auth_on_publish/6\nvmq_acl:auth_on_subscribe/3\n",
274 | "Plugin" : "vmq_acl",
275 | "Hook(s)" : "auth_on_publish\nauth_on_subscribe\n"
276 | }
277 | ]
278 | }
279 | ```
280 |
281 | ### Set configuration values
282 |
283 | Request:
284 |
285 | ```text
286 | GET /api/v1/set?allow_publish_during_netsplit=on
287 | ```
288 |
289 | Curl:
290 |
291 | ```text
292 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/set?allow_publish_during_netsplit=on"
293 | ```
294 |
295 | Response:
296 |
297 | ```javascript
298 | []
299 | ```
300 |
301 | ### Disconnect a client
302 |
303 | Request:
304 |
305 | ```text
306 | GET /api/v1/session/disconnect?client-id=myclient&--cleanup
307 | ```
308 |
309 | Curl:
310 |
311 | ```text
312 | curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/disconnect?client-id=myclient&--cleanup"
313 | ```
314 |
315 | Response:
316 |
317 | ```javascript
318 | []
319 | ```
320 |
321 |
--------------------------------------------------------------------------------
/administration/introduction.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 |
3 | On every VerneMQ node you'll find the `vmq-admin` command line tool in the release's bin directory (in case you use the binary VerneMQ packages, `vmq-admin` should already be callable in your path, without changing directories). It has different sub-commands that let you check for status, start and stop listeners, re-configure values and a couple of other administrative tasks.
4 |
5 | `vmq-admin` has different sub-commands with a lot of respective options. You can familiarize yourself by using the `--help` option on the different levels of `vmq-admin`. You might see additional sub-commands in case integrated plugins are running (`vmq-admin bridge` is an example).
6 |
7 | ```text
8 | $ sudo vmq-admin --help
9 | Usage: vmq-admin
10 |
11 | Administrate the cluster.
12 |
13 | Sub-commands:
14 | node Manage this node
15 | cluster Manage this node's cluster membership
16 | session Retrieve session information
17 | retain Show and filter MQTT retained messages
18 | plugin Manage plugin system
19 | listener Manage listener interfaces
20 | metrics Retrieve System Metrics
21 | api-key Manage API keys for the HTTP management interface
22 | trace Trace various aspects of VerneMQ
23 | Use --help after a sub-command for more details.
24 | ```
25 |
26 | {% hint style="info" %}
27 | `vmq-admin` works by RPC'ing into the local VerneMQ node by default. For most commands you can add a `--node` option and set values on other cluster nodes, even if the local VerneMQ node is down.
28 |
29 | To check for the global cluster state in case the local VerneMQ node is down, you'll have to go to another node though.
30 | {% endhint %}
31 |
32 | {% hint style="info" %}
33 | `vmq-admin` uses RPC to connect to a node. By default, it has a timeout of 60 seconds before `vmq-admin` terminates with an RPC timeout. Sometimes a call (for example a cluster leave) might need more time. In that case, you can set a different timeout with `vmq-admin -rpctimeout <timeoutsecs>` or even `-rpctimeout infinity`.
34 | {% endhint %}
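For example, to give a long-running call more room (a sketch based on the flag syntax above; the node name and value are illustrative):

```text
vmq-admin -rpctimeout 300 cluster leave node=VerneMQ20@192.168.1.20
```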
35 |
36 | {% hint style="danger" %}
37 | `vmq-admin` is a live re-configuration utility. Please note that all dynamically configured values will be reset by vernemq.conf upon broker restart.
38 | As a consequence, it's good practice to keep track of the applied changes when re-configuring a broker with `vmq-admin`. If needed, you can then persist changes by adding them to the vernemq.conf file.
39 | {% endhint %}
40 |
--------------------------------------------------------------------------------
/administration/listeners.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Managing VerneMQ tcp listeners
3 | ---
4 |
5 | # Managing Listeners
6 |
7 | You can configure as many listeners as you wish in the vernemq.conf file. In addition, the `vmq-admin listener` command lets you configure, start, stop and delete listeners on the fly. Those can be MQTT, WebSocket or Cluster listeners; in the command line output they will be tagged `mqtt`, `ws` or `vmq` accordingly.
8 |
9 | {% hint style="info" %}
10 | To get info on a listener sub-command, invoke it with the --help option. Example: `vmq-admin listener start --help`
11 | {% endhint %}
12 |
13 | {% hint style="warning" %}
14 | Listeners configured with the `vmq-admin listener` command will not survive a broker restart. Live changes to listeners configured in vernemq.conf are possible, but the vernemq.conf listeners will just be restarted with a broker restart.
15 | {% endhint %}
16 |
17 | ## Status of all listeners
18 |
19 | ```text
20 | vmq-admin listener show
21 | ```
22 |
23 | ```text
24 | +----+-------+------------+-----+----------+---------+
25 | |type|status | ip |port |mountpoint|max_conns|
26 | +----+-------+------------+-----+----------+---------+
27 | |vmq |running|192.168.1.50|44053| | 30000 |
28 | |mqtt|running|192.168.1.50|1883 | | 30000 |
29 | +----+-------+------------+-----+----------+---------+
30 | ```
31 |
32 |
33 | You can retrieve additional information by adding the `--tls` or `--mqtt` switch. See
34 |
35 | ```text
36 | vmq-admin listener show --help
37 | ```
38 |
39 | for more information.
40 |
41 | ## Starting a new listener
42 |
43 | ```text
44 | vmq-admin listener start address=192.168.1.50 port=1884 --mountpoint /test --nr_of_acceptors=10 --max_connections=1000
45 | ```
46 |
47 | This will start an MQTT listener on port `1884` and IP address `192.168.1.50`. If you want to start a WebSocket listener, just tell VerneMQ by adding the `--websocket` flag. There are more options, mainly for configuring SSL \(use `vmq-admin listener start --help`\).
48 |
49 | You can isolate client connections accepted by a certain listener from other clients by setting a mountpoint.
50 |
51 | To start an MQTT listener using defaults, just set the port and IP address as a minimum.
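A minimal sketch (the port value is illustrative):

```text
vmq-admin listener start address=192.168.1.50 port=1885
```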
52 |
53 | ## Stopping a listener
54 |
55 | ```text
56 | vmq-admin listener stop address=192.168.1.50 port=1884
57 | ```
58 |
59 | A stopped listener will not accept new connections, but will continue serving existing sessions. You can add the `-k` or `--kill_sessions` switch to that command. This will disconnect all client connections set up by that listener. In combination with a mountpoint, this can be useful for terminating clients for a specific application, or to force re-connects to another cluster node \(to prepare for a cluster leave for your node\).
60 |
61 | ## Restarting a stopped listener
62 |
63 | ```text
64 | vmq-admin listener restart address=192.168.1.50 port=1884
65 | ```
66 |
67 | ## Deleting a stopped listener
68 |
69 | ```text
70 | vmq-admin listener delete address=192.168.1.50 port=1884
71 | ```
72 |
73 |
--------------------------------------------------------------------------------
/administration/managing-sessions.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Inspecting and managing MQTT sessions
3 | ---
4 |
5 | # Inspecting and managing sessions
6 |
7 | ## Inspecting sessions
8 |
9 | VerneMQ comes with powerful tools for inspecting the state of MQTT sessions. To list current MQTT sessions simply invoke `vmq-admin session show`:
10 |
11 | ```text
12 | $ vmq-admin session show
13 | +---------+---------+----------+---------+---------+---------+
14 | |client_id|is_online|mountpoint|peer_host|peer_port| user |
15 | +---------+---------+----------+---------+---------+---------+
16 | | client2 | true | |127.0.0.1| 37098 |undefined|
17 | | client1 | true | |127.0.0.1| 37094 |undefined|
18 | +---------+---------+----------+---------+---------+---------+
19 | ```
20 |
21 | To see detailed information about the command see `vmq-admin session show --help`.
22 |
23 | The command is able to show a lot of different information about a client, for example the client id, the peer host and port, whether the client is online or offline, and much more; see `vmq-admin session show --help` for details. Furthermore, this information can be used to filter the results, which is very helpful when narrowing down the output to a single client.
24 |
25 | A sample query which lists only the node where the client session exists and whether the client is online would look like the following:
26 |
27 | ```text
28 | $ vmq-admin session show --node --is_online --client_id=client1
29 | +---------+--------------+
30 | |is_online| node |
31 | +---------+--------------+
32 | | true |dev2@127.0.0.1|
33 | +---------+--------------+
34 | ```
35 |
36 | {% hint style="success" %}
37 | Note, by default a maximum of 100 rows are returned from each node in the cluster. This is a mechanism to protect the cluster from overload as there can be millions of MQTT sessions and resulting rows. Use `--limit=` to override the default value.
38 | {% endhint %}
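For example, to raise the per-node cap (the value is illustrative):

```text
$ vmq-admin session show --limit=1000
```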
39 |
40 | ### More examples
41 |
42 | To list the clients and their subscriptions one can do the following:
43 |
44 | ```text
45 | $ vmq-admin session show --topic --client_id
46 | +---------+-----------------+
47 | |client_id| topic |
48 | +---------+-----------------+
49 | | client2 |some/other/topic1|
50 | | client1 |some/other/topic2|
51 | | client1 | some/topic |
52 | +---------+-----------------+
53 | ```
54 |
55 | And to list only the clients subscribed to the topic `some/topic`:
56 |
57 | ```text
58 | $ vmq-admin session show --topic --client_id --topic=some/topic
59 | +---------+----------+
60 | |client_id| topic |
61 | +---------+----------+
62 | | client1 |some/topic|
63 | +---------+----------+
64 | ```
65 |
66 | You can also do a regex search to query a subset of topics:
67 |
68 | ```text
69 | $ vmq-admin session show --topic --client_id --topic=~some/other/.*
70 | +---------+-----------------+
71 | |client_id| topic |
72 | +---------+-----------------+
73 | | client2 |some/other/topic1|
74 | | client1 |some/other/topic |
75 | +---------+-----------------+
76 | ```
77 |
78 | A regex search uses the `=~` syntax and is currently limited to alpha-numeric searches. Please note that a regex search consumes more load on a node than a regular search.
79 |
88 |
89 | To figure out when the queue for a persisted session \(clean\_session=false\) was created and when the client last connected, one can use the `--queue_started_at` and `--session_started_at` options to list the POSIX timestamps \(in milliseconds\):
90 |
91 | ```text
92 | $ vmq-admin session show --client_id=client1 --queue_started_at --session_started_at
93 | +----------------+------------------+
94 | |queue_started_at|session_started_at|
95 | +----------------+------------------+
96 | | 1549379963575 | 1549379974905 |
97 | +----------------+------------------+
98 | ```
99 |
100 | Besides the examples above it is also possible to inspect the number of online or offline messages as well as their payloads and much more. See `vmq-admin session show --help` for an exhaustive list of all the available options.
101 |
102 | ## Managing sessions
103 |
104 | VerneMQ also supports disconnecting clients and reauthorizing client subscriptions. To disconnect a client, clean up stored messages and remove subscriptions, one can invoke:
105 |
106 | ```text
107 | $ vmq-admin session disconnect client-id=client1 --cleanup
108 | ```
109 |
110 | See `vmq-admin session disconnect --help` for more options and details.
111 |
112 | To reauthorize subscriptions for a client issue the following command:
113 |
114 | ```text
115 | $ vmq-admin session reauthorize username=username client-id=client1
116 | Unchanged
117 | ```
118 |
119 | This works by reapplying the logic in any installed `auth_on_subscribe` or `auth_on_subscribe_m5` plugin hooks to check the validity of the existing topics and removing those that are no longer allowed. In the example above the reauthorization of the client subscriptions resulted in no changes.
120 |
121 |
--------------------------------------------------------------------------------
/administration/output_format.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Changing the output format of CLI commands
3 | ---
4 |
5 | # Output Format
6 |
7 | The default output format is called `human-readable`. It will print tables or text answers in response to your CLI commands.
8 |
9 |
10 | ## JSON Output Format
11 |
12 | The only alternative format is JSON. You can request it by adding the `--format=json` key to a command.
13 |
14 |
15 | ```
16 | vmq-admin listener show --format=json
17 | ```
18 |
19 | ```
20 | {"table":[{"type":"vmq","status":"running","address":"0.0.0.0","port":"44053","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"mqtt","status":"running","address":"127.0.0.1","port":"1883","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"mqttws","status":"running","address":"127.0.0.1","port":"1887","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"http","status":"running","address":"127.0.0.1","port":"8888","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0}],"type":"table"}%
21 | ```
22 |
23 | To pretty-print your JSON or extract the `table` object, use the `jq` command. Currently, not all responses give you a nice table and attributes format. Namely, `vmq-admin metrics show` will only give the metrics as text.
24 |
25 | ```
26 | vmq-admin listener show --format=json | jq '.table'
27 | ```
28 |
29 | ```json
30 | [
31 | {
32 | "type": "vmq",
33 | "status": "running",
34 | "address": "0.0.0.0",
35 | "port": "44053",
36 | "mountpoint": "",
37 | "max_conns": 10000,
38 | "active_conns": 0,
39 | "all_conns": 0
40 | },
41 | {
42 | "type": "mqtt",
43 | "status": "running",
44 | "address": "127.0.0.1",
45 | "port": "1883",
46 | "mountpoint": "",
47 | "max_conns": 10000,
48 | "active_conns": 0,
49 | "all_conns": 0
50 | },
51 | {
52 | "type": "mqttws",
53 | "status": "running",
54 | "address": "127.0.0.1",
55 | "port": "1887",
56 | "mountpoint": "",
57 | "max_conns": 10000,
58 | "active_conns": 0,
59 | "all_conns": 0
60 | },
61 | {
62 | "type": "http",
63 | "status": "running",
64 | "address": "127.0.0.1",
65 | "port": "8888",
66 | "mountpoint": "",
67 | "max_conns": 10000,
68 | "active_conns": 0,
69 | "all_conns": 0
70 | }
71 | ]
72 | ```
--------------------------------------------------------------------------------
/administration/retained-store.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Inspecting the retained message store
3 | ---
4 |
5 | # Retained messages
6 |
7 | To list the retained messages simply invoke `vmq-admin retain show`:
8 |
9 | ```text
10 | $ vmq-admin retain show
11 | +------------------+----------------+
12 | | payload | topic |
13 | +------------------+----------------+
14 | | a-third-message | a/third/topic |
15 | |some-other-message|some/other/topic|
16 | | a-message | some/topic |
17 | | a-message | another/topic |
18 | +------------------+----------------+
19 | ```
20 |
21 | {% hint style="success" %}
22 | Note, by default a maximum of 100 results are returned. This is a mechanism to protect the broker from overload, as there can be millions of retained messages. Use `--limit=` to override the default value.
23 | {% endhint %}
24 |
25 | Besides listing the retained messages it is also possible to filter them:
26 |
27 | ```text
28 | $ vmq-admin retain show --payload --topic=some/topic
29 | +---------+
30 | | payload |
31 | +---------+
32 | |a-message|
33 | +---------+
34 | ```
35 |
36 | In the above example we list only the payload for the topic `some/topic`.
37 |
38 | Another example, where all topics are listed that have retained messages with a specific payload:
39 |
40 | ```text
41 | $ vmq-admin retain show --payload a-message --topic
42 | +-------------+
43 | | topic |
44 | +-------------+
45 | | some/topic |
46 | |another/topic|
47 | +-------------+
48 | ```
49 |
50 | See the full set of options and documentation by invoking `vmq-admin retain show --help`:
51 |
52 | ```text
53 | $ sudo vmq-admin retain show --help
54 | Usage: vmq-admin retain show
55 |
56 | Show and filter MQTT retained messages.
57 |
58 | Default options:
59 | --payload --topic
60 |
61 | Options
62 |
63 | --limit=
64 | Limit the number of results returned. Default is 100.
65 | --payload
66 | --topic
67 | --mountpoint
68 | ```
69 |
--------------------------------------------------------------------------------
/administration/tracing.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Real-time inspection
3 | ---
4 |
5 | # Tracing
6 |
7 | ## Introduction
8 |
9 | When troubleshooting a system like VerneMQ, it is often useful to know what a client is actually sending and receiving, and what VerneMQ is doing with this information. For this purpose VerneMQ has a built-in tracing mechanism. It is safe to use in production settings, as there is very little overhead in running the tracer, and it has built-in protection mechanisms that stop traces producing too much information.
10 |
11 | ## Tracing clients
12 |
13 | To trace a client the following command is available:
14 |
15 | ```text
16 | vmq-admin trace client client-id=
17 | ```
18 |
19 | See the available flags by calling `vmq-admin trace client --help`.
20 |
21 | A typical trace could look like the following:
22 |
23 | ```text
24 | $ vmq-admin trace client client-id=client
25 | No sessions found for client "client"
26 | New session with PID <7616.3443.1> found for client "client"
27 | <7616.3443.1> MQTT RECV: CID: "client" CONNECT(c: client, v: 4, u: username, p: password, cs: 1, ka: 30)
28 | <7616.3443.1> Calling auth_on_register({{172,17,0,1},34274},{[],<<"client">>},username,password,true)
29 | <7616.3443.1> Hook returned "ok"
30 | <7616.3443.1> MQTT SEND: CID: "client" CONNACK(sp: 0, rc: 0)
31 | <7616.3443.1> MQTT RECV: CID: "client" SUBSCRIBE(m1) with topics:
32 | q:0, t: "topic"
33 | <7616.3443.1> Calling auth_on_subscribe(username,{[],<<"client">>}) with topics:
34 | q:0, t: "topic"
35 | <7616.3443.1> Hook returned "ok"
36 | <7616.3443.1> MQTT SEND: CID: "client" SUBACK(m1, qt[0])
37 | <7616.3443.1> Trace session for client stopped
38 | ```
39 |
40 | In this particular trace a trace was started for the client with client-id `client`. At first no clients are connected to the node where the trace has been started, but a little later the client connects and we see the trace come alive. The strange identifier `<7616.3443.1>` is called a process identifier and is the identifier of the process in which the trace happened - this isn't relevant unless one wants to correlate the trace with log entries where process identifiers are also logged. Besides the process identifier there are some lines with `MQTT SEND` and `MQTT RECV` which are to be understood from the perspective of the broker. In the above trace this means that first the broker receives a `CONNECT` frame and replies with a `CONNACK` frame. Each MQTT event is annotated with the data from the MQTT frame to give as much detail and insight as possible.
41 |
42 | Notice the `auth_on_register` call between `CONNECT` and `CONNACK` which is the authentication plugin hook being called to authenticate the client. In this case the hook returned `ok` which means the client was successfully authenticated.
43 |
44 | Likewise, notice the `auth_on_subscribe` call between the `SUBSCRIBE` and `SUBACK` frames, which is the plugin hook used to authorize whether this particular subscription should be allowed or not. In this case the subscription was authorized.
45 |
46 | ### Trace options
47 |
48 | The client trace command has additional options as shown by `vmq-admin trace client --help`. Those are hopefully self-explanatory:
49 |
50 | ```text
51 | Options
52 |
53 | --mountpoint=
54 | the mountpoint for the client to trace.
55 | Defaults to "" which is the default mountpoint.
56 | --rate-max=
57 | the maximum number of messages for the given interval,
58 | defaults to 10.
59 | --rate-interval=
60 | the interval in milliseconds over which the max number of messages
61 | is allowed. Defaults to 100.
62 | --trunc-payload=
63 | control when the payload should be truncated for display.
64 | Defaults to 200.
65 | ```
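A sketch combining these options (the values are illustrative):

```text
vmq-admin trace client client-id=tester --rate-max=100 --rate-interval=1000 --trunc-payload=500
```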
66 |
67 | {% hint style="info" %}
68 | A convenient tool is the `ts` \(timestamp\) tool which is available on many systems. If the trace output is piped to this command each line is prefixed with a timestamp:
69 |
70 | `ts | sudo vmq-admin trace client client-id=tester`
71 | {% endhint %}
72 |
73 | {% hint style="info" %}
74 | It is currently not possible to start multiple traces from multiple shells, or trace multiple ClientIDs.
75 | {% endhint %}
76 |
77 | ## Stopping a Trace from another shell
78 |
79 | If you lose access to the shell from where you started a trace, you might need to stop that trace before you can spawn a new one. An attempt to spawn a second trace will result in the following output:
80 |
81 | ```text
82 | Cannot start trace as another trace is already running.
83 | ```
84 |
85 | You can stop a running trace using the `stop_all` command from a second shell. This will log a message to the other shell, telling that session that it is being externally terminated. The calling shell will silently return and be available for a new trace.
86 |
87 | ```text
88 | $ sudo vmq-admin trace stop_all
89 | ```
90 |
--------------------------------------------------------------------------------
/clustering/communication.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Everything you must know to properly configure and deploy a VerneMQ Cluster
3 | ---
4 |
5 | # Inter-node Communication
6 |
7 | VerneMQ uses the Erlang distribution mechanism for most inter-node communication. VerneMQ identifies other machines in the cluster using Erlang identifiers \(e.g. `VerneMQ@10.9.8.7`\). Erlang resolves these node identifiers to a TCP port on a given machine via the Erlang Port Mapper daemon \(epmd\) running on each cluster node.
8 |
9 | By default, epmd binds to TCP port 4369 and listens on the wildcard interface. For inter-node communication, Erlang uses an unpredictable port by default; it binds to port 0, which means the OS assigns the first available port.
10 |
11 | For ease of firewall configuration, VerneMQ can be configured to instruct the Erlang interpreter to use a limited range of ports. For example, to restrict the range of ports that Erlang will use for inter-Erlang node communication to 6000-7999, add the following lines to vernemq.conf on each VerneMQ node:
12 |
13 | ```text
14 | erlang.distribution.port_range.minimum = 6000
15 | erlang.distribution.port_range.maximum = 7999
16 | ```
17 |
18 | The settings above are only used for distributing subscription updates and maintenance messages. For distributing the 'real' MQTT messages the proper `vmq` listener must be configured in the vernemq.conf.
19 |
20 | ```text
21 | listener.vmq.clustering = 0.0.0.0:44053
22 | ```
23 |
24 | {% hint style="info" %}
25 | It isn't necessary to configure the same port on every machine, as the nodes will probe each other for this information.
26 | {% endhint %}
27 |
28 | **Attributions:**
29 |
30 | This section, "VerneMQ Inter-node Communication", is a derivative of "Security and Firewalls" by Riak, used under a Creative Commons Attribution 3.0 Unported License.
31 |
32 |
--------------------------------------------------------------------------------
/clustering/introduction.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Everything you must know to properly configure and deploy a VerneMQ Cluster
3 | ---
4 |
5 | # Introduction
6 |
7 | VerneMQ can be easily clustered. Clients can then connect to any cluster node and receive messages from any other cluster node. However, the MQTT specification gives certain guarantees that are hard to fulfill in a distributed environment, especially when network partitions occur. We'll discuss the way VerneMQ deals with network partitions in its [own subsection](netsplits.md).
8 |
9 | {% hint style="danger" %}
10 | **Set the Cookie!** All cluster nodes need to be configured to use the same Cookie value. It can be set in the `vernemq.conf` with the `distributed_cookie` setting. Set the Cookie to a private value for security reasons!
11 | {% endhint %}
12 |
13 | {% hint style="info" %}
14 | For a successful VerneMQ cluster setup, it is important to choose proper VerneMQ node names. In `vernemq.conf` change the `nodename = VerneMQ@127.0.0.1` to something appropriate. Make sure that the node names are unique within the cluster. Read the section on [VerneMQ Inter-node Communication](communication.md) if firewalls are involved.
15 | {% endhint %}
16 |
17 | #### A note on statefulness
18 |
19 | Before you go ahead and experience the full power of clustering VerneMQ, be aware of its stateful character. An MQTT broker is a stateful application and a VerneMQ cluster is a stateful cluster.
20 |
21 | What does this mean in detail? It means that clustered VerneMQ nodes will share information about connected clients and sessions but also meta-information about the cluster itself.
22 |
23 | For instance, if you stop a cluster node, the VerneMQ cluster will not just forget about it. It will know that there's a node missing and it will keep looking for it. It will know there's a netsplit situation and it will heal the partition when the node comes back up. But if the missing node never comes back there's an eternal netsplit. \(still resolvable by making the missing node explicitly leave\).
24 |
25 | This doesn't mean that a VerneMQ cluster cannot dynamically grow and shrink. But it means you have to tell the cluster what you intend to do, by using join and leave commands.
26 |
27 | If you want a cluster node to leave the cluster, well... use the `vmq-admin cluster leave` command. If you want a node to join a cluster use the `vmq-admin cluster join` command.
28 |
29 | Makes sense? Go ahead and create your first VerneMQ cluster!
30 |
31 | ## Joining a Cluster
32 |
33 | ```text
34 | vmq-admin cluster join discovery-node=
35 | ```
36 | The discovery-node can be any other cluster node; it is not necessary to always choose the same discovery node. It is important that only a node with an empty history joins a cluster: do not add a node that has already handled client traffic.
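37 |
38 | A minimal example, assuming an existing cluster node with the (hypothetical) node name `VerneMQ@192.168.1.10`:
39 |
40 | ```text
41 | vmq-admin cluster join discovery-node=VerneMQ@192.168.1.10
42 | ```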
37 |
38 | ## Leaving a Cluster
39 |
40 | ```text
41 | vmq-admin cluster leave node= (only the first step!)
42 | ```
43 |
44 | ## Detailed Cluster Leave, Case A: Make a live node leave
45 |
46 | A cluster leave will actually do a lot more work and gives you some options to choose from. The node leaving the cluster will go to great lengths trying to migrate its existing queues to other nodes. As queues \(online or offline\) are live processes in a VerneMQ node, it will only exit after it has migrated them.
47 |
48 | Let's look at the steps in detail:
49 |
50 | 1. `vmq-admin cluster leave node=`
51 |
52 | This first step will only **stop** the MQTT Listeners of the node to ensure that no new connections are accepted. It will **not** interrupt the existing connections, and behind the scenes the node will **not** leave the cluster yet. Existing clients are still able to publish and receive messages at this point.
53 |
54 | The idea is to give a grace period with the hope that existing clients might re-connect \(to another node\). Once you decide that this period is over \(whether that's after 5 minutes or 1 day is up to you\), you proceed with step 2: disconnecting the rest of the clients.
55 |
56 | 2. `vmq-admin cluster leave node= -k`
57 |
58 | The `-k` flag will **delete** the MQTT Listeners of the leaving node, taking down all live connections. If this is what you want from the beginning, you can do this right away as a first step.
59 |
60 | Now, queue migration is triggered by clients re-connecting to other nodes. They will claim their queue and it will get migrated. Still, there might be some offline queues remaining on the leaving node, because they were pre-existing or because some clients do not re-connect and do not reclaim their queues.
61 |
62 | VerneMQ will throw an exception if there are remaining offline queues after a configurable timeout. The default is 60 seconds, but you can set it as an option to the cluster leave command. As soon as the exception shows up in the console or console.log, you can retry the cluster leave command, this time setting a migration timeout \(`-t`\) and an interval in seconds \(`-i`\) that determines how often migration progress is printed to the console.log:
63 |
64 | 3. `vmq-admin cluster leave node= -k -i 5 -t 120`
65 |
66 | After this timeout VerneMQ will forcefully migrate the remaining offline queues to other cluster nodes in a round-robin manner. After doing that, it will stop the leaving VerneMQ node.
67 |
68 | {% hint style="info" %}
69 | **Note 1:** While doing a cluster leave, it's a good idea to `tail -f` the VerneMQ console.log to see queue migration progress.
70 | {% endhint %}
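71 |
72 | For package installs that log to the default location (see [Logging](../configuration/logging.md)):
73 |
74 | ```text
75 | tail -f /var/log/vernemq/console.log
76 | ```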
71 |
72 | {% hint style="info" %}
73 | **Note 2:** A node that has left the cluster is considered dead. If you want to reuse that node as a single node broker, you have to \(backup & rename &\) delete the whole VerneMQ `data` directory and start with a new directory \(it will be created automatically by VerneMQ at boot\).
74 |
75 | Otherwise that node will start looking for its old cluster peers when you restart it.
76 | {% endhint %}
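77 |
78 | A sketch of that reset, assuming a Linux package install with the data directory at `/var/lib/vernemq`:
79 |
80 | ```text
81 | vernemq stop
82 | mv /var/lib/vernemq /var/lib/vernemq.bak
83 | vernemq start
84 | ```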
77 |
78 | ## Detailed Cluster Leave, Case B: Make a stopped node leave
79 |
80 | So, case A was the happy case. You left the cluster with your node in a controlled manner, and everything worked, including a complete queue \(and message\) transfer to other nodes.
81 |
82 | Let's look at the second possibility where the node is already down. Your cluster is still counting on it though, and possibly blocking new subscriptions for that reason, so you want to make the node leave.
83 |
84 | To do this, use the same command\(s\) as in the first case. There is one important consequence to note: by making a stopped node leave, you basically throw away persistent queue content, as VerneMQ won't be able to migrate or deliver it.
85 |
86 | Let's repeat that to make sure:
87 |
88 | {% hint style="danger" %}
89 | **Case B:** Persisted QoS 1 & QoS 2 messages currently aren't replicated to the other nodes by the default message store backend, so you will **lose** the offline messages stored on the leaving node.
90 | {% endhint %}
91 |
92 | ## Getting Cluster Status Information
93 |
94 | ```text
95 | vmq-admin cluster show
96 | ```
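97 |
98 | Illustrative output for a healthy two-node cluster (node names are examples; the exact table layout may differ between releases):
99 |
100 | ```text
101 | +----------------------+-------+
102 | |         Node         |Running|
103 | +----------------------+-------+
104 | | VerneMQ@192.168.1.10 | true  |
105 | | VerneMQ@192.168.1.11 | true  |
106 | +----------------------+-------+
107 | ```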
97 |
98 |
--------------------------------------------------------------------------------
/clustering/netsplits.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: How VerneMQ deals with network partitions, aka netsplits.
3 | ---
4 |
5 | # Dealing with Netsplits
6 |
7 | This section elaborates on how a VerneMQ cluster deals with network partitions \(aka netsplit or split-brain situations\). A netsplit is mostly the result of a failure of one or more network devices, resulting in a cluster where nodes can no longer reach each other.
8 |
9 | VerneMQ is able to detect a network partition, and by default it will stop serving `CONNECT`, `PUBLISH`, `SUBSCRIBE`, and `UNSUBSCRIBE` requests. A properly implemented client will always resend unacked commands, so messages are not lost \(QoS 0 publishes, however, will be lost\). However, in the time window between the network partition occurring and VerneMQ detecting it, **much** can happen. Moreover, this time frame will be different on every participating cluster node. In this guide we refer to this time frame as the _Window of Uncertainty_.
10 |
11 | {% hint style="info" %}
12 | The behaviour during a netsplit is completely configurable via `allow_register_during_netsplit`, `allow_publish_during_netsplit`, `allow_subscribe_during_netsplit`, and `allow_unsubscribe_during_netsplit`. These options supersede the `trade_consistency` option. In order to get the same behaviour as `trade_consistency = on`, all the mentioned netsplit options have to be set to `on`.
13 | {% endhint %}
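14 |
15 | For example, to keep the default, consistency-preserving behaviour explicit in `vernemq.conf`:
16 |
17 | ```text
18 | allow_register_during_netsplit = off
19 | allow_publish_during_netsplit = off
20 | allow_subscribe_during_netsplit = off
21 | allow_unsubscribe_during_netsplit = off
22 | ```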
14 |
15 | ## Possible Scenario for Message Loss:
16 |
17 | VerneMQ follows an eventually consistent model for storing and replicating the subscription data. This also includes retained messages.
18 |
19 | Due to the eventually consistent data model it is possible that during the Window of Uncertainty a publish won't take into account a subscription made on a remote node \(in another partition\). Obviously, VerneMQ can't deliver the message in this case. The same holds for delivering retained messages to remote subscribers.
20 |
21 | `last will` messages that are triggered during the Window of Uncertainty will be delivered to the reachable subscribers. However, last will messages triggered during a netsplit but after the Window of Uncertainty will currently be lost.
22 |
23 | ## Possible Scenario for Duplicate Clients:
24 |
25 | Normally, client registration is synchronized using an _elected_ leader node for the given client id. Such a synchronization removes the race condition between multiple clients trying to connect with the same client id on different nodes. However, during the Window of Uncertainty it is currently possible that VerneMQ fails to disconnect a client connected to a different node. Although this scenario sounds artificially crafted, it is possible to end up with duplicate clients connected to the cluster.
26 |
27 | ## Recovering from a Netsplit
28 |
29 | As soon as the partition is healed and connectivity is reestablished, the VerneMQ nodes replicate the latest changes made to the subscription data. This includes all the changes 'accidentally' made during the Window of Uncertainty. Using [Dotted Version Vectors](https://github.com/ricardobcl/Dotted-Version-Vectors), VerneMQ ensures that convergence regarding subscription data and retained messages is eventually reached.
30 |
31 |
--------------------------------------------------------------------------------
/configuration/advanced_options.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Configure a couple of hidden options for VerneMQ
3 | ---
4 |
5 | # Advanced Options
6 |
7 | There are a couple of hidden options you can set in the `vernemq.conf` file. Hidden means that you have to add and set the value explicitly. Hidden options still have default values. Changing them should be considered advanced, possibly with the exception of setting a `max_message_rate`.
8 |
9 | ## Queue Deliver mode
10 |
11 | Specify how the queue should deliver messages when multiple sessions are allowed. In case of `fanout` all the attached sessions will receive the message, in case of `balance` an attached session is chosen randomly.
12 |
13 | {% hint style="info" %} The feature to enable multiple sessions will be deprecated in VerneMQ 2.0.{% endhint %}
14 |
15 | ```text
16 | queue_deliver_mode = balance
17 | ```
18 |
19 | ## Queue Type
20 |
21 | Specify how queues should process messages, either the `fifo` or `lifo` way, with a default setting of `fifo`. The setting will apply globally, that is, for every spawned queue in a VerneMQ broker. (You can override the `queue_type` setting in plugins in the `auth_on_register` hook).
22 |
23 | ```text
24 | queue_type = fifo
25 | ```
26 |
27 | ## Max Message Rate
28 |
29 | Specifies the maximum incoming publish rate per session per second. This rate isn't strictly enforced, depending on the underlying network buffers. Defaults to `0`, which means no rate limits apply. Setting the value to `2` limits any publisher to 2 messages per second, for instance.
30 |
31 | ```text
32 | max_message_rate = 2
33 | ```
34 |
35 | ## Max Drain Time
36 |
37 | Due to the eventually consistent nature of the subscriber store it is possible that during queue migration messages still arrive on the old cluster node. This parameter enables compensation for that fact by keeping the queue around for some configured time \(in seconds\) after it was migrated to the other cluster node.
38 |
39 | ```text
40 | max_drain_time = 20
41 | ```
42 |
43 | ## Max Msgs per Drain Step
44 |
45 | Specifies the number of messages that are delivered to the remote node per drain step. A large value will provide a faster migration of a queue, but increases the waste of bandwidth in case the migration fails.
46 |
47 | ```text
48 | max_msgs_per_drain_step = 1000
49 | ```
50 |
51 | ## Default Reg View
52 |
53 | Allows selecting a different default reg\_view. A reg\_view is a pre-defined way to route messages. Multiple views can be loaded and used, but one has to be selected as the default. The default routing is `vmq_reg_trie`, i.e. routing via the built-in trie data structure.
54 |
55 | ```text
56 | vmq_reg_view = "vmq_reg_trie"
57 | ```
58 |
59 | ## Reg Views
60 |
61 | A list of views that are started during startup. It's only used in plugins that want to choose dynamically between routing reg\_views.
62 |
63 | ```text
64 | reg_views = "[vmq_reg_trie]"
65 | ```
66 |
67 | ## Outgoing Clustering Buffer Size
68 |
69 | An integer specifying how many bytes are buffered in case the remote node is not available. Defaults to `10000`.
70 |
71 | ```text
72 | outgoing_clustering_buffer_size = 10000
73 | ```
74 |
75 | ## Max Connection Lifetime
76 | Defines the maximum lifetime of an MQTT connection in seconds. `max_connection_lifetime` can be set per listener. This is an implementation of the MQTT security proposal:
77 | "Servers may close the Network Connection of Clients and require them to re-authenticate with new credentials."
78 |
79 | ```text
80 | listener.max_connection_lifetime = 25000
81 | ```
82 |
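83 | Per listener, the option should follow the usual `listener.<transport>.<name>.<option>` pattern. A sketch for a TCP listener named `default` (an assumption; verify the exact key against your release's [schema files](schema-files.md)):
84 |
85 | ```text
86 | listener.tcp.default.max_connection_lifetime = 25000
87 | ```
88 |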
83 | It is possible to override the value with a lower limit in the `auth_on_register(_m5)` hook.
84 |
85 |
--------------------------------------------------------------------------------
/configuration/balancing.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: MQTT consumers can share and loadbalance a topic subscription.
3 | ---
4 |
5 | # Consumer session balancing
6 |
7 | {% hint style="warning" %}
8 | Consumer session balancing has been deprecated and will be removed in VerneMQ 2.0. Use [Shared Subscriptions](shared_subscriptions.md) instead.
9 | {% endhint %}
10 |
11 | Sometimes consumers get overwhelmed by the number of messages they receive. VerneMQ can load balance between multiple consumer instances subscribed to the same topic with the same ClientId.
12 |
13 | ## Enabling Session Balancing
14 |
15 | To enable session balancing, activate the following two settings in vernemq.conf
16 |
17 | ```text
18 | allow_multiple_sessions = on
19 | queue_deliver_mode = balance
20 | ```
21 |
22 | {% hint style="info" %}
23 | Currently those settings will activate consumer session balancing globally on the respective node. Restricting balancing to specific consumers only will require a plugin. Note that you cannot balance consumers spread over different cluster nodes.
24 | {% endhint %}
25 |
26 |
--------------------------------------------------------------------------------
/configuration/bridge.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: VerneMQ can interface with other brokers (and itself) via MQTT bridges.
3 | ---
4 |
5 | # MQTT Bridge
6 |
7 | Bridges are a non-standard \(though de-facto standard\) way among MQTT broker implementations to connect two different brokers. Over a bridge, the topic tree of a remote broker becomes part of the topic tree on the local broker. VerneMQ bridges support plain TCP connections as well as SSL connections.
8 |
9 | A bridge is a point-to-point connection between two brokers, but it can still forward all the messages from all cluster nodes to another cluster.
10 |
11 | {% hint style="info" %} The VerneMQ bridge plugin currently forwards messages using MQTT protocol version 3.1.1. MQTT v5 messages will still be forwarded but be aware that metadata like user-defined properties will be dropped.{% endhint %}
12 |
13 | ## Enabling the bridge functionality
14 |
15 | The MQTT bridge plugin (`vmq_bridge`) is distributed with VerneMQ as an integrated plugin but is not enabled by default. After configuring the bridge as described below, make sure to enable the plugin by setting (`vernemq.conf`):
16 |
17 | ```text
18 | plugins.vmq_bridge = on
19 | ```
20 |
21 | See [Managing plugins](plugins.md) for more information on working with plugins.
22 |
23 | Basic information on the configured bridges can be displayed on the admin CLI:
24 |
25 | ```text
26 | $ vmq-admin bridge show
27 | +-----------------+-----------+----------+-------------------+
28 | | endpoint |buffer size|buffer max|buffer dropped msgs|
29 | +-----------------+-----------+----------+-------------------+
30 | |192.168.1.10:1883| 0 | 0 | 0 |
31 | +-----------------+-----------+----------+-------------------+
32 | ```
33 |
34 | {% hint style="info" %}
35 | The `vmq-admin bridge` command is only available when the bridge plugin is running.
36 | {% endhint %}
37 |
38 | ## Sample MQTT Bridge
39 |
40 | To configure `vmq_bridge` you need to edit the bridge section of the `vernemq.conf` file to set endpoints and
41 | mapping topics. A bridge can push or pull messages, as defined in the topic pattern list.
42 |
43 | Set up a bridge to a remote broker:
44 |
45 | ```text
46 | vmq_bridge.tcp.br0 = 192.168.1.12:1883
47 | ```
48 |
49 | Different connection parameters can be set:
50 |
51 | ```text
52 | # use a clean session (defaults to 'off')
53 | vmq_bridge.tcp.br0.cleansession = off | on
54 |
55 | # set the client id (defaults to 'auto', which generates one)
56 | vmq_bridge.tcp.br0.client_id = auto | my_bridge_client_id
57 |
58 | # set keepalive interval (defaults to 60 seconds)
59 | vmq_bridge.tcp.br0.keepalive_interval = 60
60 |
61 | # set the username and password for the bridge connection
62 | vmq_bridge.tcp.br0.username = my_bridge_user
63 | vmq_bridge.tcp.br0.password = my_bridge_pwd
64 |
65 | # set the restart timeout (defaults to 10 seconds)
66 | vmq_bridge.tcp.br0.restart_timeout = 10
67 |
68 | # VerneMQ indicates to other brokers that the connection
69 | # is established by a bridge instead of a normal client.
70 | # This can be turned off if needed:
71 | vmq_bridge.tcp.br0.try_private = off
72 |
73 | # Set the maximum number of outgoing messages the bridge will buffer
74 | # while not connected to the remote broker. Messages published while
75 | # the buffer is full are dropped. A value of 0 means buffering is
76 | # disabled.
77 | vmq_bridge.tcp.br0.max_outgoing_buffered_messages = 100
78 | ```
79 |
80 | Define the topics the bridge should incorporate into its local topic tree \(by subscribing to the remote\), or the topics it should export to the remote broker \(by publishing to the remote\). VerneMQ uses a configuration syntax similar to that of the Mosquitto broker:
81 |
82 | ```text
83 | topic [[[ out | in | both ] qos-level] local-prefix remote-prefix]
84 | ```
85 |
86 | > `topic` defines a topic pattern that is shared between the two brokers. Any topics matching the pattern \(which may include wildcards\) are shared. The second parameter defines the direction that the messages will be shared in, so it is possible to import messages from a remote broker using `in`, export messages to a remote broker using `out` or share messages in `both` directions. If this parameter is not defined, VerneMQ defaults to `out`. The QoS level defines the publish/subscribe QoS level used for this topic and defaults to `0`. _\(Source: mosquitto.conf\)_
87 |
88 | The `local-prefix` and `remote-prefix` can be used to prefix incoming or outgoing publish messages.
89 |
90 | {% hint style="warning" %}
91 | Currently the `#` wildcard is treated as a comment by the configuration parser, please use `*` instead.
92 | {% endhint %}
93 |
94 | A simple example:
95 |
96 | ```text
97 | # share messages in both directions and use QoS 1
98 | vmq_bridge.tcp.br0.topic.1 = /demo/+ both 1
99 |
100 | # import the $SYS tree of the remote broker and
101 | # prefix it with the string 'remote'
102 | vmq_bridge.tcp.br0.topic.2 = $SYS/* in remote
103 | ```
104 |
105 | ## Sample MQTT Bridge that uses SSL/TLS
106 |
107 | SSL bridges support the same configuration parameters as TCP bridges (change `.tcp` to `.ssl`), but need further instructions for handling the SSL specifics:
108 |
109 | ```text
110 | vmq_bridge.ssl.br0 = 192.168.1.12:1883
111 |
112 | # set the username and password for the bridge connection
113 | vmq_bridge.ssl.br0.username = my_bridge_user
114 | vmq_bridge.ssl.br0.password = my_bridge_pwd
115 |
116 | # define the CA certificate file or the path to the
117 | # installed CA certificates
118 | vmq_bridge.ssl.br0.cafile = cafile.crt
119 | #or
120 | vmq_bridge.ssl.br0.capath = /path/to/cacerts
121 |
122 | # if the remote broker requires client certificate authentication
123 | vmq_bridge.ssl.br0.certfile = /path/to/certfile.pem
124 | # and the keyfile
125 | vmq_bridge.ssl.br0.keyfile = /path/to/keyfile
126 |
127 | # disable the verification of the remote certificate (defaults to 'off')
128 | vmq_bridge.ssl.br0.insecure = off
129 |
130 | # set the used tls version (defaults to 'tlsv1.2')
131 | vmq_bridge.ssl.br0.tls_version = tlsv1.2
132 | ```
133 |
134 | ## Restarting MQTT Bridges
135 |
136 | MQTT Bridges that are initiated from the source broker (push bridges) are started when VerneMQ boots and finds a bridge configuration in the `vernemq.conf` file.
137 | Sometimes it's useful to restart MQTT bridges without restarting a broker. This can be done by disabling, then enabling the `vmq_bridge` plugin and manually calling the `bridge start` command:
138 |
139 | ```text
140 | $ sudo vmq-admin plugin disable --name vmq_bridge
141 | $ sudo vmq-admin plugin enable --name vmq_bridge
142 | $ sudo vmq-admin bridge start
143 | ```
144 |
--------------------------------------------------------------------------------
/configuration/file-auth.md:
--------------------------------------------------------------------------------
1 | # Auth using files
2 |
3 | ## Authentication
4 |
5 | VerneMQ comes with a simple file-based password authentication mechanism which is enabled by default. If you don't need this it can be disabled by setting:
6 |
7 | ```text
8 | plugins.vmq_passwd = off
9 | ```
10 |
11 | By default VerneMQ doesn't accept any client that hasn't been configured using `vmq-passwd`. If you want to change this and accept any client connection you can set:
12 |
13 | ```text
14 | allow_anonymous = on
15 | ```
16 |
17 | {% hint style="info" %}
18 | Warning: Setting `allow_anonymous=on` completely disables authentication in the broker and plugin authentication hooks are never called! Find more information on the authentication hooks [here](../plugindevelopment/sessionlifecycle.md#auth_on_register-and-auth_on_register_m5).
19 | {% endhint %}
20 |
21 | In a production setup you can use the provided password-based authentication mechanism, one of the provided database authentication plugins, or implement your own authentication plugins.
22 |
23 | VerneMQ periodically checks the specified password file.
24 |
25 | ```text
26 | vmq_passwd.password_file = /etc/vernemq/vmq.passwd
27 | ```
28 |
29 | The check interval defaults to 10 seconds and can also be defined in the `vernemq.conf`.
30 |
31 | ```text
32 | vmq_passwd.password_reload_interval = 10
33 | ```
34 |
35 | Setting the `password_reload_interval = 0` disables automatic reloading.
36 |
37 | {% hint style="info" %}
38 | Both configuration parameters can also be changed at runtime using the `vmq-admin` script.
39 |
40 | Example: to dynamically set the reload interval to 60 seconds on all your cluster nodes, issue the following command on one of the nodes:
41 |
42 | `sudo vmq-admin set vmq_passwd.password_reload_interval=60 --all`
43 | {% endhint %}
44 |
45 | ### Manage Password Files for VerneMQ
46 |
47 | `vmq-passwd` is a tool for managing password files for the VerneMQ broker. Usernames must not contain `":"`; passwords are stored in a format similar to [crypt\(3\)](http://man7.org/linux/man-pages/man3/crypt.3.html).
48 |
49 | **How to use vmq-passwd**
50 |
51 | ```text
52 | vmq-passwd [-c | -D] passwordfile username
53 |
54 | vmq-passwd -U passwordfile
55 | ```
56 |
57 | **Options**
58 |
59 | `-c`
60 |
61 | > Creates a new password file. Does not overwrite existing file.
62 |
63 | `-cf`
64 |
65 | > Creates a new password file. If the file already exists, it will be overwritten.
66 |
67 | `(no option)`
68 |
69 | > When run with no option \(neither `-c` nor `-D`\), vmq-passwd creates a new user/password entry and appends it to the password file if it exists. Does not overwrite an existing file.
70 |
71 | `-D`
72 |
73 | > Deletes the specified user from the password file.
74 |
75 | `-U`
76 |
77 | > This option can be used to upgrade/convert a password file with plain text passwords into one using hashed passwords. It will modify the specified file. It does not detect whether passwords are already hashed, so using it on a password file that already contains hashed passwords will generate new hashes based on the old hashes and render the password file unusable. Note that with this option neither usernames nor passwords may contain `":"`.
78 |
79 | `passwordfile`
80 |
81 | > The password file to modify.
82 |
83 | `username`
84 |
85 | > The username to add/update/delete.
86 |
87 | **Examples**
88 |
89 | Add a user to a new password file: \(you can choose an arbitrary name for the password file, it only has to match the configuration in the VerneMQ configuration file\).
90 |
91 | ```text
92 | vmq-passwd -c /etc/vernemq/vmq.passwd henry
93 | ```
94 |
95 | Delete a user from a password file
96 |
97 | ```text
98 | vmq-passwd -D /etc/vernemq/vmq.passwd henry
99 | ```
100 |
101 | Add multiple users to an existing password file:
102 |
103 | ```text
104 | vmq-passwd /etc/vernemq/vmq.passwd bob
105 | vmq-passwd /etc/vernemq/vmq.passwd john
106 | ```
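107 |
108 | Upgrade a password file containing plain text passwords to hashed passwords (see the `-U` option above):
109 |
110 | ```text
111 | vmq-passwd -U /etc/vernemq/vmq.passwd
112 | ```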
107 |
108 |
109 | **Acknowledgements**
110 |
111 | The original version of `vmq-passwd` was developed by Roger Light \(roger@atchoo.org\).
112 |
113 | `vmq-passwd` includes:
114 |
115 | * software developed by the [OpenSSL Project](http://www.openssl.org/) for use in the OpenSSL Toolkit.
116 | * cryptographic software written by Eric Young \(eay@cryptsoft.com\)
117 | * software written by Tim Hudson \(tjh@cryptsoft.com\)
124 |
125 | ## Authorization
126 |
127 | VerneMQ comes with a simple ACL based authorization mechanism which is enabled by default. If you don't need this it can be disabled by setting:
128 |
129 | ```text
130 | plugins.vmq_acl = off
131 | ```
132 |
133 | VerneMQ periodically checks the specified ACL file.
134 |
135 | ```text
136 | vmq_acl.acl_file = /etc/vernemq/vmq.acl
137 | ```
138 |
139 | The check interval defaults to 10 seconds and can also be defined in the `vernemq.conf`.
140 |
141 | ```text
142 | vmq_acl.acl_reload_interval = 10
143 | ```
144 |
145 | Setting the `acl_reload_interval = 0` disables automatic reloading.
146 |
147 | {% hint style="info" %}
148 | Both configuration parameters can also be changed at runtime using the `vmq-admin` script.
149 | {% endhint %}
150 |
151 | ### Managing the ACL entries
152 |
153 | Topic access is added with lines of the format:
154 |
155 | ```text
156 | topic [read|write]
157 | ```
158 |
159 | The access type is controlled using `read` or `write`. If not provided then read and write access is granted for the `topic`. The `topic` can use the MQTT subscription wildcards `+` or `#`.
160 |
161 | The first set of topics is applied to all anonymous clients \(assuming `allow_anonymous = on`\). User-specific ACLs are added after a user line as follows \(this is the username, not the client id\):
162 |
163 | ```text
164 | user
165 | ```
166 |
167 | It is also possible to define ACLs based on pattern substitution within the topic. The form is the same as for the topic keyword, but using `pattern` as the keyword.
168 |
169 | ```text
170 | pattern [read|write]
171 | ```
172 |
173 | The patterns available for substitution are:
174 |
175 | > * `%c` to match the client id of the client
176 | > * `%u` to match the username of the client
177 |
178 | The substitution pattern must be the only text for that level of hierarchy. Pattern ACLs apply to all users even if the **user** keyword has previously been given.
179 |
180 | Example:
181 |
182 | ```text
183 | pattern write sensor/%u/data
184 | ```
185 |
186 | {% hint style="warning" %}
187 | VerneMQ currently doesn't cancel active subscriptions in case the ACL file revokes access for a topic. It is possible to reauthenticate sessions manually \(via `vmq-admin`\).
188 | {% endhint %}
189 |
190 | ### Simple ACL Example
191 |
192 | ```text
193 | # ACL for anonymous clients
194 | topic bar
195 | topic write foo
196 | topic read open_to_all
197 |
198 |
199 | # ACL for user 'john'
200 | user john
201 | topic foo
202 | topic read baz
203 | topic write open_to_all
204 | ```
205 |
206 | Anonymous users are allowed to
207 |
208 | * publish & subscribe to topic bar.
209 | * publish to topic foo.
210 | * subscribe to topic open_to_all.
211 |
212 | User john is allowed to
213 |
214 | * publish & subscribe to topic foo.
215 | * subscribe to topic baz.
216 | * publish to topic open_to_all.
217 |
--------------------------------------------------------------------------------
/configuration/http-listeners.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: How to setup and configure the HTTP listener.
3 | ---
4 |
5 | # HTTP Listeners
6 |
7 | The VerneMQ HTTP listener is used to serve various VerneMQ subsystems such as:
8 |
9 | * [Status page](../monitoring/status.md)
10 | * [Prometheus metrics](../monitoring/prometheus.md)
11 | * [Management API](../administration/http-administration.md)
12 | * [Health check](../monitoring/health-check.md)
13 | * [HTTP Publish](http-pub.md)
14 |
15 | By default the listener runs on port `8888`. To disable the HTTP listener, to use an HTTPS listener instead, or to change the port, adapt the configuration in `vernemq.conf`:
16 |
17 | ```text
18 | listener.http.default = 127.0.0.1:8888
19 | ```
20 |
21 | You can have multiple HTTP(S) listeners listening on different ports and running different modules:
22 | ```text
23 | listener.https.default = 127.0.0.1:443
24 | listener.https.default.http_modules = vmq_status_http, vmq_health_http, vmq_metrics_http
25 |
26 | listener.https.mgmt = 127.0.0.1:444
27 | listener.https.mgmt.http_modules = vmq_mgmt_http
28 | ```
29 |
30 | This configuration snippet defines two HTTPS listeners with different modules: one for default traffic and one for management traffic. It specifies which HTTP modules are enabled on each listener, allowing status, health, and metrics information to be retrieved from the default listener, and providing a web-based interface for managing and monitoring VerneMQ through the management listener.
31 |
--------------------------------------------------------------------------------
/configuration/http-pub.md:
--------------------------------------------------------------------------------
1 | # HTTP Pub plugin
2 |
3 | VerneMQ provides an HTTP REST pub plugin for publishing messages using HTTP/REST. The http_pub plugin accepts HTTP POST requests containing message payloads, and then forwards those messages to the appropriate MQTT subscribers.
4 |
5 | The HTTP REST plugin can be used to publish messages from a wider range of devices and platforms that may not support MQTT natively. Please note that while the plugin can handle a decent amount of requests, the primary protocol of VerneMQ is MQTT. Whenever possible, it is recommended to use MQTT natively to communicate with VerneMQ.
6 |
7 | ## Enabling the plugin
8 |
9 | The HTTP pub plugin \(`vmq_http_pub`\) is distributed with VerneMQ as an integrated plugin, but is not enabled by default. After configuring the plugin as described below, make sure to enable the plugin by setting \(`vernemq.conf`\):
10 |
11 | ```text
12 | plugins.vmq_http_pub = on
13 | ```
14 |
15 | ## Configuration
16 | ### Bind plugin to HTTP(s) listener
17 | By default the plugin is not bound to any listener. It is recommended to use a dedicated HTTPS listener; for security reasons the use of HTTPS instead of HTTP is preferred. It is possible to have more than one listener.
18 |
19 | ```text
20 | listener.https.http_pub = 127.0.0.1:3001
21 | listener.https.http_pub.http_modules.vmq_http_pub.auth.mode = apikey
22 | listener.https.http_pub.http_modules = vmq_http_pub
23 | ```
24 | This configuration defines an HTTPS listener bound to IP address 127.0.0.1 and port 3001. The listener is used to forward HTTP requests to vmq_http_pub.
25 |
26 | Additionally, this configuration sets the authentication method for the vmq_http_pub instance to API key \(which is the default\). This means that a valid API key is required to access this instance. The API key needs to have the scope `httppub`. You can create a new API key as follows:
27 | ```text
28 | vmq-admin api-key create scope=httppub
29 | ```
30 |
31 | It is important to note that this configuration is only a part of a larger configuration file, and that other settings such as SSL certificates, encryption, protocol versions, etc. may also be defined to improve the security and performance of the HTTPS listener.
32 |
33 | ### Authentication and Authorization
34 | The plugin currently supports two authentication and authorization modes: "on-behalf-of" and "predefined". "On-behalf-of" means that the client_id, user and password used for authentication and authorization are part of the request \(payload\). Afterwards, the regular VerneMQ authentication and authorization flows are used. When using "predefined", the client, user, and password are bound to the plugin instance. It is recommended to use "on-behalf-of" with a separate client_id, user and password for REST-based clients. For testing purposes, the plugin also supports the global allow_anonymous flag.
35 |
36 | For on-behalf-of authentication use:
37 | ```text
38 | listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.mode = on-behalf-of
39 | ```
40 |
41 | For predefined, please use a configuration similar to:
42 | ```text
43 | listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.mode = predefined
44 | listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.user = restUser
45 | listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.password = restPasswd
46 | listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.client_id = restClient
47 | ```
48 |
49 | If you configure a listener with "predefined" authorization but provide authorization information \(username, password, client_id\) in the request, the provided information will be ignored.
50 |
51 | ## MQTT Payload
52 | The plugin currently supports three different payload encodings:
53 | * JSON (regular and base64) in body
54 | * Header parameters, and payload in body
55 | * Query String parameters, and payload in body
56 |
57 | Which one to choose depends on your application.
58 |
59 | ### JSON
60 | ```text
61 | {
62 | "topic": "testtopic/testtopic1",
63 | "user": "testuser",
64 | "password": "test123",
65 | "qos": 1,
66 | "retain": false,
67 | "payload": "this is a payload string",
68 | "user_properties": [{"a":"b"}]
69 | }
70 | ```
71 | In order to allow more complex payloads to be encoded as part of the JSON, the payload itself can also be base64 encoded. The query string "encoding=base64" has to be used to indicate that the payload is base64 encoded. The encoding query string parameter can either be "base64" or "plain"; plain is the default.
72 |
73 | ### Header parameters
74 | Topic, user, password, qos, retain and user_properties can also be part of the HTTP header. The HTTP body is used for the actual message payload. The payload then does not need to be base64 encoded.
75 |
76 | The following header options are supported:
77 |
78 | | Header | Description |
79 | | :--- | :--- |
80 | | Content-Type | application/json or application/octet-stream |
81 | | user | User (on-behalf-of authorization) |
82 | | password | Password (on-behalf-of authorization) |
83 | | client_id | Client ID (on-behalf-of authorization) |
84 | | topic | Topic as string |
85 | | qos | QoS (0, 1, 2) |
86 | | retain | Boolean, true or false |
87 | | user_properties | JSON-style array |
87 |
88 | ### Query String
89 | Topic, user, password, qos and retain flag can also be urlencoded as part of the query string. The HTTP body is used for the actual message payload. There is no need to specify the encoding in the query string. The query string currently does not support user_properties.
90 |
91 | ## Examples
92 | ### All required information encoded in the payload
93 | ```text
94 | curl --request POST \
95 | --url https://mqtt.myhost.example:3001/restmqtt/api/v1/publish \
96 | --header 'Authorization: Basic ...' \
97 | --header 'Content-Type: application/json' \
98 | --data '{
99 | "topic": "T1",
100 | "user": "myuser",
101 | "password": "test123",
102 | "client_id": "myclient",
103 | "qos": 1,
104 | "retain": false,
105 | "payload": "asddsadsadas22dasasdsad",
106 | "user_properties": [{"a":"b"}]
107 | }'
108 | ```
109 |
110 | ### All required information encoded in the payload (base64payload)
111 | ```text
112 | curl --request POST \
113 | --url 'https://mqtt.myhost.example:3001/restmqtt/api/v1/publish?encoding=base64' \
114 | --header 'Authorization: Basic ...' \
115 | --header 'Content-Type: application/json' \
116 | --data '{
117 | "topic": "a/b/c",
118 | "user": "myuser",
119 | "password": "test123",
120 | "client_id": "myclient",
121 | "qos": 1,
122 | "retain": false,
123 | "payload": "aGFsbG8gd2VsdA==",
124 | "user_properties": [{"a":"b"}]
125 | }'
126 | ```
127 |
128 | ### MQTT information encoded in header parameters
129 | ```text
130 | curl --request POST \
131 | --url https://mqtt.myhost.example:3001/restmqtt/api/v1/publish \
132 | --header 'Authorization: Basic ...' \
133 | --header 'Content-Type: application/json' \
134 | --header 'QoS: 1' \
135 | --header 'clientid: myclient' \
136 | --header 'password: test123' \
137 | --header 'retain: false' \
138 | --header 'topic: T1' \
139 | --header 'user: myuser' \
140 | --header 'user_properties: [{"a":"b2"}]' \
141 | --data '{"hello": "world"}'
142 | ```
143 | ### MQTT information encoded in query string
144 | ```text
145 | curl --request POST \
146 | --url 'https://mqtt.myhost.example:3001/restmqtt/api/v1/publish?topic=a%2Fb%2Fc&user=test-user3&password=test123&client_id=test-client3&qos=0' \
147 | --header 'Authorization: Basic Og==' \
148 | --header 'Content-Type: application/json' \
149 | --data '{"Just a": "test"}'
150 | ```
151 |
152 |
153 | ## Metrics
154 | The plugin exposes three metrics:
155 | * The number of messages sent through the REST Publish API
156 | * Number of errors reported by the REST Publish API
157 | * Number of Auth errors reported by the REST Publish API
158 |
159 | ## Misc Notes
160 | * The plugin allows the authentication and authorization flows to override mountpoint, max_message_size, qos and topic.
161 | * Currently, the regular (non m5) authentication and authorization flow is used.
162 | * The query string variant does not allow setting user_properties.
163 | * The plugin currently checks the maximum payload size before base64 decoding.
164 | * The verbs "put" and "post" are supported. There is no difference in functionality.
165 |
--------------------------------------------------------------------------------
/configuration/introduction.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Everything you must know to properly configure VerneMQ
3 | ---
4 |
5 | # Introduction
6 |
7 | Every VerneMQ node has to be configured as the default configuration probably does not match your needs. Depending on the installation method and chosen platform the configuration file `vernemq.conf` resides at different locations. If VerneMQ was installed through a Linux package the default location for the configuration file is `/etc/vernemq/vernemq.conf`.
8 |
9 | ## General Format of the `vernemq.conf` file
10 |
11 | * A single setting is handled on one line.
12 | * Lines are structured `Key = Value`.
13 | * Any line starting with \# is a comment, and will be ignored.
14 |
15 | ## Minimal Quickstart Configuration
16 |
17 | You certainly want to try out VerneMQ right away. To just check the broker without configured authentication for now, you can allow anonymous access:
18 |
19 | * Set `allow_anonymous = on`
20 |
21 | By default the `vmq_acl` authorization plugin is enabled and configured to allow publishing and subscribing to any topic \(basically allowing everything\), check the [section on file-based authorization](file-auth.md#authorization) for more information.
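22 |
23 | A minimal sketch of such a test configuration, assuming you also want a plain MQTT listener on the standard port 1883:
24 |
25 | ```text
26 | allow_anonymous = on
27 | listener.tcp.default = 127.0.0.1:1883
28 | ```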
22 |
23 | {% hint style="warning" %}
24 | Setting `allow_anonymous=on` completely disables authentication in the broker and plugin authentication hooks are never called! Find the details on all the authentication hooks [here](../plugindevelopment/sessionlifecycle.md#auth_on_register-and-auth_on_register_m5). **In a production system you should configure `vmq_acl` to be less permissive or configure some other plugin to handle authorization**.
25 | {% endhint %}
26 |
27 |
--------------------------------------------------------------------------------
/configuration/logging.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Configure VerneMQ Logging.
3 | ---
4 |
5 | # Logging
6 |
7 | ## Console Logging
8 |
9 | Where should VerneMQ emit the default console log messages \(which are typically at `info` severity\):
10 |
11 | ```text
12 | log.console = off | file | console | both
13 | ```
14 |
15 | VerneMQ defaults to logging the console messages to a file, which can be specified by:
16 |
17 | ```text
18 | log.console.file = /path/to/log/file
19 | ```
20 |
21 | This option defaults to `/var/log/vernemq/console.log` for Ubuntu, Debian, RHEL and Docker installs.
22 |
23 | The default console logging level is `info`. It can be changed by setting:
24 |
25 | ```text
26 | log.console.level = debug | info | warning | error
27 | ```
28 |
29 | ## Error Logging
30 |
31 | VerneMQ logs error messages by default. One can change the default behaviour by setting:
32 |
33 | ```text
34 | log.error = on | off
35 | ```
36 |
37 | VerneMQ defaults to logging the error messages to a file, which can be specified by:
38 |
39 | ```text
40 | log.error.file = /path/to/log/file
41 | ```
42 |
43 | This option defaults to `/var/log/vernemq/error.log` for Ubuntu, Debian, RHEL and Docker installs.
44 |
45 | ## Crash Logging
46 |
47 | VerneMQ logs crash messages by default. One can change the default behaviour by setting:
48 |
49 | ```text
50 | log.crash = on | off
51 | ```
52 |
53 | VerneMQ defaults to logging the crash messages to a file, which can be specified by:
54 |
55 | ```text
56 | log.crash.file = /path/to/log/file
57 | ```
58 |
59 | This option defaults to `/var/log/vernemq/crash.log` for Ubuntu, Debian, RHEL and Docker installs.
60 |
61 | The maximum size in bytes of individual messages in the crash log defaults to `64KB` but can be specified by:
62 |
63 | ```text
64 | log.crash.maximum_message_size = 64KB
65 | ```
66 |
67 | VerneMQ rotates crash logs. By default, the crash log file is rotated at midnight or when the size exceeds `10MB`. This behaviour can be changed by setting:
68 |
69 | ```text
70 | ## Acceptable values:
71 | ## - a byte size with units, e.g. 10GB
72 | log.crash.size = 10MB
73 |
74 | ## For acceptable values see https://github.com/basho/lager/blob/master/README.md#internal-log-rotation
75 | log.crash.rotation = $D0
76 | ```
77 |
78 | The default number of rotated log files is 5 and can be set with the option:
79 |
80 | ```text
81 | log.crash.rotation.keep = 5
82 | ```
83 |
84 | ## SysLog
85 |
86 | VerneMQ supports logging to SysLog, enable it by setting:
87 |
88 | ```text
89 | log.syslog = on
90 | ```
91 |
92 | Logging to SysLog is disabled by default.
93 |
94 |
--------------------------------------------------------------------------------
/configuration/nonstandard.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Configure Non-Standard MQTT Options VerneMQ Supports.
3 | ---
4 |
5 | # Non-standard MQTT options
6 |
7 | ## Maximum Client Id Size
8 |
9 | Set the maximum size for client ids; MQTT v3.1 specifies a limit of 23 characters.
10 |
11 | ```text
12 | max_client_id_size = 23
13 | ```
14 |
15 | This option defaults to `23`.
16 |
17 | ## Maximum Topic Depth
18 |
19 | Usually, you'll configure permissions on your topic structures using ACLs. In addition to that, `topic_max_depth` sets a global maximum value for topic levels. This protects the broker from clients subscribing to arbitrary deep topic levels.
20 |
21 | ```text
22 | topic_max_depth = 20
23 | ```
24 | The default value for `topic_max_depth` is 10. As an example, this value will allow topics like `a/b/c/d/e/f/g/h/i/k`, that is, 10 levels.
25 | A client running into the topic depth limit will be disconnected and an error will be logged.
26 |
27 | ## Persistent Client Expiration
28 |
29 | This option allows persistent clients \(those with `clean_session` set to `false`\) to be removed if they do not reconnect within a certain time frame.
30 |
31 | {% hint style="warning" %}
32 | This is a non-standard option. As far as the MQTT specification is concerned, persistent clients are persisted forever.
33 | {% endhint %}
34 |
35 | The expiration period should be an integer followed by one of `h`, `d`, `w`, `m`, `y` for hour, day, week, month, and year; or `never`:
36 |
37 | ```text
38 | persistent_client_expiration = 1w
39 | ```
40 |
41 | This option defaults to `never`.
42 |
43 | ## Message Size Limit
44 |
45 | Limit the maximum publish payload size in bytes that VerneMQ allows. Messages that exceed this size won't be accepted.
46 |
47 | ```text
48 | max_message_size = 0
49 | ```
50 |
51 | Defaults to `0`, which means that all valid messages are accepted. The MQTT specification imposes a maximum payload size of 268435455 bytes.
52 |
53 |
--------------------------------------------------------------------------------
/configuration/options.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Configure how VerneMQ handles certain aspects of MQTT
3 | ---
4 |
5 | # MQTT Options
6 |
7 | ## Retry Interval
8 |
9 | Set the time in seconds that VerneMQ will wait after sending a `QoS=1 or QoS=2` message before retrying when no response is received.
10 |
11 | ```text
12 | retry_interval = 20
13 | ```
14 |
15 | This option defaults to `20` seconds.
16 |
17 | ## Inflight Messages
18 |
19 | This option defines the maximum number of QoS 1 or 2 messages that can be in the process of being transmitted simultaneously.
20 |
21 | ```text
22 | max_inflight_messages = 20
23 | ```
24 |
25 | Defaults to `20` messages, use `0` for no limit. The inflight window serves as a protection for sessions, on the incoming side.
26 |
27 | ## Load Shedding
28 |
29 | The maximum number of messages to hold in the queue above those messages that are currently in flight. This option protects a client session from overload by dropping messages \(of any QoS\).
30 |
31 | ```text
32 | max_online_messages = 1000
33 | ```
34 |
35 | Defaults to `1000` messages, use `-1` for no limit. This parameter was named `max_queued_messages` in `0.10.*`. Note that `0` will totally block message delivery from any queue!
36 |
37 | This option specifies the maximum number of QoS 1 and 2 messages to hold in the offline queue.
38 |
39 | ```text
40 | max_offline_messages = 1000
41 | ```
42 |
43 | Defaults to `1000` messages, use `-1` for no limit, use `0` if no messages should be stored.
44 |
45 | In contrast to the session-based inflight window, max\_online\_messages and max\_offline\_messages serve as a protection of queues, on the outgoing side.
46 |
47 | ```text
48 | override_max_online_messages = off
49 | ```
50 |
51 | When an offline session transitions to online, by default VerneMQ will adhere to the queue sizes also when moving data from the offline queue to the online queue. Therefore, if max_offline_messages > max_online_messages, VerneMQ will start dropping messages. It is possible to override this behaviour and allow VerneMQ to move all messages from the offline queue to the online queue. The queue will then be batched \(or streamed\) to the subscribers, and the messages are read from disk in batches as well. The additional memory needed is thus just the amount needed to store references to those messages, not the messages themselves.
52 |
--------------------------------------------------------------------------------
/configuration/plugins.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Managing VerneMQ Plugins
3 | ---
4 |
5 | # Plugins
6 |
7 | Many aspects of VerneMQ can be extended using plugins. The standard VerneMQ package comes with several official plugins. You can show the enabled & running plugins via:
8 |
9 | ```text
10 | vmq-admin plugin show
11 | ```
12 |
13 | The command above displays all the enabled plugins together with the hooks they implement:
14 |
15 | ```text
16 | +-----------+-----------+-----------------+-----------------------------+
17 | | Plugin | Type | Hook(s) | M:F/A |
18 | +-----------+-----------+-----------------+-----------------------------+
19 | |vmq_passwd |application|auth_on_register |vmq_passwd:auth_on_register/5|
20 | | vmq_acl |application| auth_on_publish | vmq_acl:auth_on_publish/6 |
21 | | | |auth_on_subscribe| vmq_acl:auth_on_subscribe/3 |
22 | +-----------+-----------+-----------------+-----------------------------+
23 | ```
24 | The table will show the following information:
25 |
26 | - name of the plugin
27 | - type (application or single module)
28 | - all the hooks implemented in the plugin
29 | - the exact module and function names (`M:F/A`) implementing those hooks.
30 |
31 | As an example of how to read the table: the `vmq_passwd:auth_on_register/5` function is the actual implementation of the `auth_on_register` hook in the `vmq_passwd` application plugin.
32 |
33 | In addition, you can conclude that the plugin is currently running, as it shows up in the table.
34 |
35 | To display information on internal plugins, add the `--internal` flag. The table below shows you that the generic metadata application and the generic message store are actually internal plugins.
36 |
37 | ```text
38 | $ sudo vmq-admin plugin show --internal
39 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
40 | | Plugin | Type | Hook(s) | M:F/A |
41 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
42 | | vmq_swc | application | metadata_put | vmq_swc_plugin:metadata_put/3 |
43 | | | | metadata_get | vmq_swc_plugin:metadata_get/2 |
44 | | | | metadata_delete | vmq_swc_plugin:metadata_delete/2 |
45 | | | | metadata_fold | vmq_swc_plugin:metadata_fold/3 |
46 | | | | metadata_subscribe | vmq_swc_plugin:metadata_subscribe/1 |
47 | | | | cluster_join | vmq_swc_plugin:cluster_join/1 |
48 | | | | cluster_leave | vmq_swc_plugin:cluster_leave/1 |
49 | | | | cluster_members | vmq_swc_plugin:cluster_members/0 |
50 | | | | cluster_rename_member | vmq_swc_plugin:cluster_rename_member/2 |
51 | | | | cluster_events_add_handler | vmq_swc_plugin:cluster_events_add_handler/2 |
52 | | | | cluster_events_delete_handler | vmq_swc_plugin:cluster_events_delete_handler/2 |
53 | | | | cluster_events_call_handler | vmq_swc_plugin:cluster_events_call_handler/3 |
54 | | | | | |
55 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
56 | | vmq_generic_msg_store | application | msg_store_write | vmq_generic_msg_store:msg_store_write/2 |
57 | | | | msg_store_delete | vmq_generic_msg_store:msg_store_delete/2 |
58 | | | | msg_store_find | vmq_generic_msg_store:msg_store_find/2 |
59 | | | | msg_store_read | vmq_generic_msg_store:msg_store_read/2 |
60 | | | | | |
61 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
62 | | vmq_config | module | change_config | vmq_config:change_config/1 |
63 | | | | | |
64 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
65 | | vmq_acl | application | change_config | vmq_acl:change_config/1 |
66 | | | | auth_on_publish | vmq_acl:auth_on_publish/6 |
67 | | | | auth_on_subscribe | vmq_acl:auth_on_subscribe/3 |
68 | | | | auth_on_publish_m5 | vmq_acl:auth_on_publish_m5/7 |
69 | | | | auth_on_subscribe_m5 | vmq_acl:auth_on_subscribe_m5/4 |
70 | | | | | |
71 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
72 | | vmq_passwd | application | change_config | vmq_passwd:change_config/1 |
73 | | | | auth_on_register | vmq_passwd:auth_on_register/5 |
74 | | | | auth_on_register_m5 | vmq_passwd:auth_on_register_m5/6 |
75 | | | | | |
76 | +-----------------------+-------------+-------------------------------+------------------------------------------------+
77 | ```
78 |
79 | ## Enable a plugin
80 |
81 | ```text
82 | vmq-admin plugin enable --name=vmq_acl
83 | ```
84 |
85 | This enables the ACL plugin. Because the `vmq_acl` plugin is already started, the above command won't succeed. In case the plugin sits in an external directory you must also provide the `--path=PathToPlugin` option.
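86 |
87 | For example, with a hypothetical external plugin called `myplugin`:
88 |
89 | ```text
90 | vmq-admin plugin enable --name=myplugin --path=/path/to/plugin
91 | ```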
86 |
87 | ## Disable a plugin
88 |
89 | ```text
90 | vmq-admin plugin disable --name=vmq_acl
91 | ```
92 |
93 | ## Persisting Plugin Configurations and Starts
94 |
95 | To make a plugin start when VerneMQ boots, you need to tell VerneMQ in the main `vernemq.conf` file.
96 |
97 | The general syntax to enable a plugin is to add a line like `plugins.pluginname = on`. Using the `vmq_passwd` plugin as an example:
98 |
99 | ```text
100 | plugins.vmq_passwd = on
101 | ```
102 |
103 | If the plugin is external \(all your own VerneMQ plugins will be of this category\), the path can be specified like this:
104 |
105 | ```text
106 | plugins.myplugin = on
107 | plugins.myplugin.path = /path/to/plugin
108 | ```
109 |
110 | Plugin-specific settings can be configured via `myplugin.somesetting = value`, like:
111 |
112 | ```text
113 | vmq_passwd.password_file = ./etc/vmq.passwd
114 | ```
115 |
116 | Check the `vernemq.conf` file for additional details and examples.
117 |
118 |
--------------------------------------------------------------------------------
/configuration/schema-files.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Schema Files in VerneMQ
3 | ---
4 |
5 | # Multiple Schema files, one VerneMQ conf file
6 |
7 | During every boot up, VerneMQ will run your `vernemq.conf` file against the Schema files of the VerneMQ release. This serves as a validation and as a mechanism to create the timestamped internal config files that you'll find in the `generated.configs` directory.
8 |
9 | In general, every application of the VerneMQ release has its own schema file in the `priv` subdirectory (the only exception is the `vmq.schema` file in the `file` directory). A `my_app.schema` defines all the configuration settings you can use for that application.
10 |
11 | And that's almost the only reason to know a bit about schema files: you can browse them for possible settings if you suspect a minor setting is not yet fully documented. Most of the time you'll also find at least a short snippet documenting the setting in the schema file.
12 |
13 | An example from the `vmq_server.schema`:
14 |
15 | ```
16 | %% @doc specifies the max duration of an API key before it expires (default: undefined)
17 | {mapping, "max_apikey_expiry_days", "vmq_server.max_apikey_expiry_days",
18 | [{datatype, integer},
19 | {default, undefined},
20 | hidden
21 | ]}.
22 | ```
23 |
24 | This is a relatively minor feature where you can set a default expiry for API keys. You can determine from the mapping schema that a default is not set. To set the value in the `vernemq.conf` file, always use the left-hand name from the mapping in the schema:
25 |
26 | ```
27 | max_apikey_expiry_days = 30
28 | ```
29 |
30 | You can also see the keyword `hidden` in the mapping. This means that the setting will not show up automatically in the `vernemq.conf` file and you'll have to add it manually.
--------------------------------------------------------------------------------
/configuration/shared_subscriptions.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Working with shared subscriptions
3 | ---
4 |
5 | # Shared subscriptions
6 |
7 | A shared subscription is a mechanism for distributing messages to a set of subscribers to a shared subscription topic, such that each message is received by only one subscriber. This contrasts with normal subscriptions where each subscriber will receive a copy of the published message.
8 |
9 | A shared subscription is of the form `$share/sharename/topic` and subscribers to this topic will receive messages published to the topic `topic`. The messages will be distributed according to the defined distribution policy.
10 |
11 | The MQTT spec only defines shared subscriptions for protocol version 5. VerneMQ supports shared subscriptions for v5 (as per the specification) and for v3.1.1 (as a backported feature).
12 |
13 | {% hint style="success" %}
14 | When subscribing to a shared subscription using command line tools remember to quote the topic as some command line shells, like `bash`, will otherwise expand the `$share` part of the topic as an environment variable.
15 | {% endhint %}
16 |
17 | ## Configuration
18 |
19 | Currently four message distribution policies for shared subscriptions are supported: `prefer_local`, `random`, `local_only` and `prefer_online_before_local`.
20 |
21 | * `random`: messages will be published to a random member of the shared subscription, if any exist.
22 | * `prefer_local`: messages will be delivered to a random node-local member of the shared subscription; if none exist, the message will be delivered to a random member of the shared subscription on a remote cluster node.
23 | * `prefer_online_before_local`: works like `prefer_local`, but will look for an online subscriber on a non-local node if there are only offline subscribers on the local one.
24 | * `local_only`: messages will be delivered to a random node-local member of the shared subscription.
20 |
21 | ```text
22 | shared_subscription_policy = prefer_local
23 | ```
24 |
25 | When a message is delivered to the subscribers of a shared subscription, it will be delivered to an online subscriber if possible; otherwise it will be delivered to an offline subscriber.
26 |
27 | 
28 |
29 | 
30 |
31 | 
32 |
33 | {% hint style="info" %}
34 | Note that Shared Subscriptions still fully operate under the MQTT specification \(be it MQTT 5.0 or backported to older protocol versions\). Be aware of this, especially regarding QoS and clean\_session configurations. This also means that there is no shared offline message queue for all clients, but each client has its own offline message queue. MQTT v5 shared subscriptions thus have a different behaviour than e.g. Kafka where consumers read from a single shared message queue.
35 | {% endhint %}
36 |
37 | ## Examples
38 |
39 | **Subscriptions** _Note: When subscribing to a shared topic, make sure to escape the_ `$`
40 |
41 | So, for dash or bash shells:
42 |
43 | ```bash
44 | mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname
45 | mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname/#
46 | ```
47 |
48 | **Publishing** _Note: When publishing to a shared topic, do not include the prefix_ `$share/group/` _as part of the publish topic name_
49 |
50 | ```bash
51 | mosquitto_pub -h mqtt.example.io -p 1883 -t topicname -m "This is a test message"
52 | mosquitto_pub -h mqtt.example.io -p 1883 -t topicname/group1 -m "This is a test message"
53 | ```
54 |
55 |
--------------------------------------------------------------------------------
/configuration/storage.md:
--------------------------------------------------------------------------------
1 | # Storage
2 |
3 | VerneMQ uses Google's LevelDB as a fast storage backend for messages and subscriber information. Each VerneMQ node runs its own embedded LevelDB store.
4 |
5 | ## Configuration of LevelDB memory
6 |
7 | There's not much you need to know about LevelDB and VerneMQ. One really important thing to note is that LevelDB manages its own memory. This means that VerneMQ will not allocate and free memory for LevelDB. Instead, you'll have to tell LevelDB how much memory it can use up by setting `leveldb.maximum_memory.percent`.
8 |
9 | Configuring LevelDB memory:
10 |
11 | ```text
12 | leveldb.maximum_memory.percent = 20
13 | ```
14 |
15 | {% hint style="danger" %}
16 | LevelDB means business with its allocated memory. It will eventually use up the configured maximum, making it look like there's a memory leak, or even triggering OOM kills. Keep that in mind when configuring the percentage of RAM you give to LevelDB. Historically, the configured default was 70 percent of RAM, which is too high for a lot of use cases and can be safely lowered.
17 | {% endhint %}
18 |
19 | ## Advanced options
20 |
21 | (e)LevelDB exposes a couple of additional configuration values that we link here for the sake of completeness. You can change all the values mentioned in the
22 | [eleveldb schema file](https://github.com/vernemq/eleveldb/blob/develop/priv/eleveldb.schema). VerneMQ mostly uses the configured defaults, and for most use cases it should not be necessary to change those.
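23 |
24 | As an illustration only (the option names below are taken from the eleveldb schema; check the linked schema file for the authoritative names and defaults), such values would go into `vernemq.conf` like any other setting:
25 |
26 | ```text
27 | leveldb.compression = on
28 | leveldb.write_buffer_size_min = 31457280
29 | ```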
23 |
24 |
--------------------------------------------------------------------------------
/configuration/websockets.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Configure WebSocket Listeners for VerneMQ.
3 | ---
4 |
5 | # Websockets
6 |
7 | VerneMQ supports the WebSocket protocol out of the box. To be able to open a WebSocket connection to VerneMQ, you have to configure a WebSocket listener or Secure WebSocket listener in the `vernemq.conf` file first:
8 |
9 | ```text
10 | listener.ws.default = 127.0.0.1:9001
11 |
12 | listener.wss.wss_default = 127.0.0.1:9002
13 | # To use WSS, you'll have to configure additional options for your WSS listener (called `wss_default` here):
14 | listener.wss.wss_default.cafile = ./etc/cacerts.pem
15 | listener.wss.wss_default.certfile = ./etc/cert.pem
16 | listener.wss.wss_default.keyfile = ./etc/key.pem
17 | ```
18 |
19 | Keep in mind that you'll use MQTT over WebSocket, so you will need a JavaScript library that implements the MQTT client behaviour. We have used the [Eclipse Paho client](https://eclipse.org/paho/clients/js/) as well as [MQTT.js](https://github.com/mqttjs/MQTT.js).
20 |
21 | You won't be able to open WebSocket connections on a base URL; always add the `/mqtt` path.
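22 |
23 | With the listeners configured above, the connection URLs would look like this (host and ports follow from your own listener configuration):
24 |
25 | ```text
26 | ws://127.0.0.1:9001/mqtt
27 | wss://127.0.0.1:9002/mqtt
28 | ```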
22 |
23 | When establishing a WebSocket connection to the VerneMQ MQTT broker, the process begins with an HTTP connection that is then upgraded to WebSocket. This upgrade mechanism means the broker's ability to accept connections can be influenced by HTTP listener settings.
24 |
25 | In certain scenarios, such as when connecting from a frontend application, the size of the HTTP request headers (including cookies) can exceed the default maximum allowed by VerneMQ. This can lead to an 'HTTP 431 Request Header Fields Too Large' error, preventing the connection from being established.
26 |
27 | This behavior is configurable in the `vernemq.conf` file to accommodate larger headers:
28 |
29 | ```text
30 | listener.http.default.max_request_line_length=32000
31 | listener.http.default.max_header_value_length=32000
32 | ```
--------------------------------------------------------------------------------
/getting-started.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: A quick and simple guide to get started with VerneMQ
3 | ---
4 |
5 | # Getting Started
6 |
7 | ## Installing VerneMQ
8 |
9 | VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. To use it, all you need to do is install the VerneMQ package.
10 |
11 | Choose your OS and follow the instructions:
12 |
13 | * [CentOS/RHEL](installation/centos_and_redhat.md)
14 | * [Debian/Ubuntu](installation/debian_and_ubuntu.md)
15 |
16 | It is also possible to run VerneMQ using our Docker image:
17 |
18 | * [Docker](installation/docker.md)
19 |
20 | ## Starting VerneMQ
21 |
22 | {% hint style="info" %}
23 | If you built VerneMQ from sources, you can add the `/bin` directory of your VerneMQ release to `PATH`. For example, if you compiled VerneMQ in the `/home/vernemq` directory, then add the binary directory \(`/home/vernemq/_build/default/rel/vernemq/bin`\) to your PATH, so that VerneMQ commands can be used in the same manner as with a packaged installation.
24 | {% endhint %}
25 |
26 | To start a VerneMQ broker, use the `vernemq start` command in your shell:
27 |
28 | ```text
29 | vernemq start
30 | ```
31 |
32 | A successful start will return no output. If there is a problem starting the broker, an error message is printed to `STDERR`.
33 |
34 | To run VerneMQ with an attached interactive Erlang console:
35 |
36 | ```text
37 | vernemq console
38 | ```
39 |
40 | A VerneMQ broker is typically started in console mode for debugging or troubleshooting purposes. Note that if you start VerneMQ in this manner, it is running as a foreground process that will exit when the console is closed.
41 |
42 | You can close the console by issuing this command at the Erlang prompt:
43 |
44 | ```text
45 | q().
46 | ```
47 |
48 | Once your broker has started, you can initially check that it is running with the `vernemq ping` command:
49 |
50 | ```text
51 | vernemq ping
52 | ```
53 |
54 | The command will respond with `pong` if the broker is running or `Node not responding to pings` in case it’s not.
55 |
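56 | At this point you can also do a quick end-to-end test with any MQTT client. The example below uses the mosquitto command line tools and assumes you have either enabled anonymous access \(`allow_anonymous = on` in `vernemq.conf`\) or configured credentials accordingly:
57 |
58 | ```text
59 | mosquitto_sub -h 127.0.0.1 -p 1883 -t test/topic &
60 | mosquitto_pub -h 127.0.0.1 -p 1883 -t test/topic -m "hello vernemq"
61 | ```
62 |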
56 | {% hint style="warning" %}
57 | As you may have noticed, VerneMQ will warn you at startup when your system’s open files limit \(`ulimit -n`\) is too low. You’re advised to increase the OS default open files limit when running VerneMQ. Read more about why and how in the [Open Files Limit documentation](guides/change-open-file-limits.md).
58 | {% endhint %}
59 |
60 | ### Starting using systemd/systemctl
61 |
62 | If you use a `systemd` service file (as in the binary packages), you can start VerneMQ using the `systemctl` interface to `systemd`:
63 |
64 | ```text
65 | $ sudo systemctl start vernemq
66 | ```
67 |
68 | Other `systemctl` commands work as well:
69 |
70 | ```text
71 | $ sudo systemctl stop vernemq
72 | $ sudo systemctl status vernemq
73 | ```
74 |
--------------------------------------------------------------------------------
/guides/change-open-file-limits.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: A guide that shows how to change the open file limits
3 | ---
4 |
5 | # Change Open File Limits
6 |
7 | VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.
8 |
9 | Most operating systems can change the open-files limit using the `ulimit -n` command. Example:
10 |
11 | ```text
12 | ulimit -n 65536
13 | ```
14 |
15 | However, this only changes the limit for the _**current shell session**_. Changing the limit on a system-wide, permanent basis varies more between systems.
16 |
17 | ## Linux
18 |
19 | On most Linux distributions, the total limit for open files is controlled by `sysctl`.
20 |
21 | ```text
22 | sysctl fs.file-max
23 | fs.file-max = 50384
24 | ```
25 |
26 | As seen above, it is generally set high enough for VerneMQ. If you have other things running on the system, you might want to consult the [sysctl manpage](http://linux.die.net/man/8/sysctl) for how to change that setting. However, what usually needs to be changed is the per-user open files limit. This requires editing `/etc/security/limits.conf`, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the `vernemq` user like so, substituting your desired hard and soft limits:
27 |
28 | ```text
29 | vernemq soft nofile 4096
30 | vernemq hard nofile 65536
31 | ```
32 |
33 | On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file `/etc/default/vernemq` and specify a manual limit like so:
34 |
35 | ```text
36 | ulimit -n 65536
37 | ```
38 |
39 | This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in `/etc/security/limits.conf` if you’re solely relying on init scripts.
40 |
41 | On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, `sudo` properly inherits the values from the executing user.
42 |
43 | ## Systemd
44 |
45 | Systemd allows you to set the open file limit per service: the `LimitNOFILE` parameter defines the maximum number of file descriptors that a service or system unit can open. In the past, `infinity` was often chosen, which actually means an OS/systemd dependent maximum. In recent versions of systemd, as found on RHEL 9, CentOS Stream 9 and others, the default value is around a billion, significantly higher than necessary and than the defaults used in older distributions. It is advisable to set a reasonable value for `LimitNOFILE` based on the specific use case. Please consult https://access.redhat.com/solutions/1479623 for more information (RHEL 9).
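46 |
47 | A minimal sketch of such an override, assuming the service is named `vernemq` and using a standard systemd drop-in file (path and value are illustrative):
48 |
49 | ```text
50 | # /etc/systemd/system/vernemq.service.d/limits.conf
51 | [Service]
52 | LimitNOFILE=65536
53 | ```
54 |
55 | After adding the drop-in, run `sudo systemctl daemon-reload` and restart the service.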
45 |
46 | ## Enable PAM-Based Limits for Debian & Ubuntu
47 |
48 | It can be helpful to enable PAM user limits so that non-root users, such as the `vernemq` user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values **for all users of the system** to allow for up to 65536 open files.
49 |
50 | Edit `/etc/pam.d/common-session` and append the following line:
51 |
52 | ```text
53 | session required pam_limits.so
54 | ```
55 |
56 | If `/etc/pam.d/common-session-noninteractive` exists, append the same line as above.
57 |
58 | Save and close the file.
59 |
60 | Edit `/etc/security/limits.conf` and append the following lines to the file:
61 |
62 | ```text
63 | * soft nofile 65536
64 | * hard nofile 65536
65 | ```
66 |
67 | 1. Save and close the file.
68 | 2. \(optional\) If you will be accessing the VerneMQ nodes via secure shell \(ssh\), you should also edit `/etc/ssh/sshd_config` and uncomment the following line:
69 |
70 | ```text
71 | #UseLogin no
72 | ```
73 |
74 | and set its value to `yes` as shown here:
75 |
76 | ```text
77 | UseLogin yes
78 | ```
79 |
80 | 3. Restart the machine so that the limits take effect, and verify that the new limits are set with the following command:
83 |
84 | ```text
85 | ulimit -a
86 | ```
87 |
88 | ## Enable PAM-Based Limits for CentOS and Red Hat
89 |
90 | 1. Edit `/etc/security/limits.conf` and append the following lines to the file:
93 |
94 | ```text
95 | * soft nofile 65536
96 | * hard nofile 65536
97 | ```
98 |
99 | 2. Save and close the file.
100 | 3. Restart the machine so that the limits take effect, and verify that the new limits are set with the following command:
101 |
102 | ```text
103 | ulimit -a
104 | ```
105 |
106 | {% hint style="info" %}
107 | In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the `vernemq` user only by substituting the two asterisks \(\*\) in the examples with `vernemq`.
108 | {% endhint %}
109 |
110 | ## Solaris
111 |
112 | In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to `/etc/system`:
113 |
114 | ```text
115 | set rlim_fd_max=65536
116 | ```
117 |
120 | ## Mac OS X
121 |
122 | To check the current limits on your Mac OS X system, run:
123 |
124 | ```text
125 | launchctl limit maxfiles
126 | ```
127 |
128 | The last two columns are the soft and hard limits, respectively.
129 |
130 | To adjust the maximum open file limits in OS X 10.7 \(Lion\) or newer, edit `/etc/launchd.conf` and increase the limits for both values as appropriate.
131 |
132 | For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:
133 |
134 | 1. Verify current limits:
135 |
136 | > ```text
137 | > launchctl limit
138 | > ```
139 | >
140 | > The response output should look something like this:
141 | >
142 | > ```text
143 | > cpu unlimited unlimited
144 | > filesize unlimited unlimited
145 | > data unlimited unlimited
146 | > stack 8388608 67104768
147 | > core 0 unlimited
148 | > rss unlimited unlimited
149 | > memlock unlimited unlimited
150 | > maxproc 709 1064
151 | > maxfiles 10240 10240
152 | > ```
153 |
154 | 2. Edit \(or create\) `/etc/launchd.conf` and increase the limits. Add lines that look like the following \(using values appropriate to your environment\):
155 |
156 | > ```text
157 | > limit maxfiles 16384 32768
158 | > ```
159 |
160 | 3. Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the launchctl limit command:
161 |
162 | > ```text
163 | > launchctl limit
164 | > ```
165 | >
166 | > The response output should look something like this:
167 | >
168 | > ```text
169 | > cpu unlimited unlimited
170 | > filesize unlimited unlimited
171 | > data unlimited unlimited
172 | > stack 8388608 67104768
173 | > core 0 unlimited
174 | > rss unlimited unlimited
175 | > memlock unlimited unlimited
176 | > maxproc 709 1064
177 | > maxfiles 16384 32768
178 | > ```
179 |
180 | **Attributions**
181 |
182 | This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.
183 |
184 |
--------------------------------------------------------------------------------
/guides/clustering-during-development.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: This describes a quick way to create a VerneMQ cluster on developer's machines
3 | ---
4 |
5 | # Clustering during development
6 |
7 | As a VerneMQ developer you sometimes want a quick way to test a cluster on your development machine.
8 |
9 | You need to take care of a couple of things if you want to run multiple VerneMQ instances on the same machine. There is a `make` option that lets you build multiple releases, as a convenience, taking care of all the configuration.
10 |
11 | First, build a normal release \(this is just needed the first time\) with:
12 |
13 | `➜ default git:(master) ✗ make rel`
14 |
15 | The following command will then prepare 3 correctly configured `vernemq.conf` files, with different ports for the MQTT listeners etc. It will also build 3 full VerneMQ releases.
16 |
17 | `➜ default git:(master) ✗ make dev1 dev2 dev3`
18 |
19 | Check if you have the 3 new releases in the `_build` directory of your VerneMQ code repo.
20 |
21 | You can then start the respective broker instances in 3 terminal windows, by using the respective commands and directory paths. Example:
22 |
23 | `➜ (_build/dev2/rel/vernemq/bin) ✗ vernemq console`
24 |
25 | The MQTT listeners will of course be configured differently for each node \(the default 1883 port is not used, so that you can still run a default MQTT broker besides your dev nodes\). A couple of other ports are also adapted \(HTTP status page, cluster communication\). The MQTT ports are automatically configured in increasing steps of 50 \(if in doubt, consult the respective `vernemq.conf` files\):
26 |
27 | | Node | MQTT listener port |
28 | | :--- | :--- |
29 | | dev1@127.0.0.1 | 10053 |
30 | | dev2@127.0.0.1 | 10103 |
31 | | dev3@127.0.0.1 | 10153 |
32 | | ... | ... |
33 |
34 | Note that the dev nodes are not automatically clustered. You still need to manually cluster them with commands like the following:
35 |
36 | `➜ (_build/dev2/rel/vernemq/bin) ✗ vmq-admin cluster join discovery-node=dev1@127.0.0.1`
37 |
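38 | You can verify that the nodes have formed a cluster with:
39 |
40 | `➜ (_build/dev1/rel/vernemq/bin) ✗ vmq-admin cluster show`
41 |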
38 | {% hint style="info" %}
39 | In case this wasn't clear so far: You can configure an arbitrary number of cluster nodes, from dev1 to devn.
40 | {% endhint %}
41 |
42 |
--------------------------------------------------------------------------------
/guides/loadtesting.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Loadtesting VerneMQ with vmq_mzbench
3 | ---
4 |
5 | # Loadtesting VerneMQ
6 |
7 | You can loadtest VerneMQ with any MQTT-capable loadtesting framework. Our recommendation is to use a framework you are familiar with, with MQTT plugins or scenarios that suit your needs.
8 |
9 | While MZBench is currently not actively developed, it is still one of the options you can use: the [vmq\_mzbench tool](https://github.com/vernemq/vmq_mzbench). It is based on Machinezone's very powerful original MZBench system, currently available in a community repository here: [MZBench system](https://github.com/mzbench/mzbench). MZBench lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles \(and much more\) in a formal way, and let vmq\_mzbench automatically fail if it can't meet the requirements.
10 |
11 | If you have an AWS account, vmq\_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.
12 |
13 | ## 1. Install MZBench
14 |
15 | Please follow the [MZBench installation guide](https://mzbench.github.com/mzbench/#installation).
16 |
17 | ## 2. Install vmq\_mzbench
18 |
19 | Actually, you don't even have to install vmq\_mzbench if you don't want to. Your scenario file will automatically fetch vmq\_mzbench for any test you do. vmq\_mzbench runs every test independently, so it has a provisioning step for every test, even if you only run it on a local worker.
20 |
21 | To install vmq\_mzbench on your computer, go through the following steps:
22 |
23 | ```text
24 | git clone git://github.com/vernemq/vmq_mzbench.git
25 | cd vmq_mzbench
26 | ./rebar get-deps
27 | ./rebar compile
28 | ```
29 |
30 | To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:
31 |
32 | ```erlang
33 | {make_install, [
34 | {rsync, "/path/to/your/installation/vmq_mzbench/"},
35 | {exclude, "deps"}]},
36 | ```
37 |
38 | If you'd just like the script itself to fetch vmq\_mzbench, then you can direct it to GitHub:
39 |
40 | ```erlang
41 | {make_install, [
42 | {git, "git://github.com/vernemq/vmq_mzbench.git"}]},
43 | ```
44 |
45 | ## 3. Write vmq\_mzbench scenario files
46 |
47 | {% hint style="info" %}
48 | MZBench recently switched from an Erlang-styled Scenario DSL to a more python-like DSL dubbed BDL \(Benchmark Definition Language\). Have a look at the [BDL examples](https://github.com/mzbench/mzbench/tree/master/examples.bdl) on Github.
49 | {% endhint %}
50 |
51 | You can familiarize yourself quickly with [MZBench's guide](https://mzbench.github.io/mzbench/scenarios/spec/) on writing loadtest scenarios.
52 |
53 | There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq\_mzbench statement functions to the mix and define actual loadtest scenarios.
54 |
55 | Currently vmq\_mzbench exposes the following statement functions for use in MQTT scenario files:
56 |
57 | * `random_client_id(State, Meta, I)`: Create a random client Id of length I
58 | * `fixed_client_id(State, Meta, Name, Id)`: Create a deterministic client Id with schema Name ++ "-" ++ Id
59 | * `worker_id(State, Meta)`: Get the internal, sequential worker Id
60 | * `client(State, Meta)`: Get the client Id you set yourself during connection setup with the option {t, client, "client"}
61 | * `connect(State, Meta, ConnectOpts)`: Connect to the broker with the options given in ConnectOpts
62 | * `disconnect(State, Meta)`: Disconnect normally
63 | * `subscribe(State, Meta, Topic, QoS)`: Subscribe to Topic with Quality of Service QoS
64 | * `unsubscribe(State, Meta, Topic)`: Unsubscribe from Topic
65 | * `publish(State, Meta, Topic, Payload, QoS)`: Publish a message with binary Payload to Topic with QoS
66 | * `publish(State, Meta, Topic, Payload, QoS, RetainFlag)`: Publish a message with binary Payload to Topic with QoS and RetainFlag
67 |
68 | It's easy to add more statement functions to the MQTT worker if needed; get in touch with us.
69 |
70 |
--------------------------------------------------------------------------------
/guides/migration-to-2-0.md:
--------------------------------------------------------------------------------
1 | # Migrating to 2.0.0 from Previous VerneMQ versions
2 |
3 | Release 2.0.0 has a small number of minor incompatibilities:
4 |
5 | ### Error Logger
6 |
7 | VerneMQ now uses the internal logger library instead of the lager library. It's best for your custom VerneMQ plugins to do the same and replace the lager log calls with internal log statements. Instead of using `lager:error/2`, you can use the following format:
8 |
9 | ```
10 | ?LOG_ERROR("an error happened because: ~p", [Reason]). % With macro
11 | logger:error("an error happened because: ~p", [Reason]). % Without macro
12 | ```
13 |
14 | To use the logger macros, add this include line to your module: `-include_lib("kernel/include/logger.hrl").`
15 |
16 | ### Removed Features
17 |
18 | - The multiple sessions feature has been fully removed (you are likely not affected by this).
19 | - Compatibility with the old (v0.15) subscriber format was removed (you are likely not affected by this).
20 |
21 | ### on_deliver hook
22 |
23 | The `on_deliver` hook now has a Properties argument like the `on_deliver_m5` hook. This changes the function arity from `on_deliver/6` to `on_deliver/7`. You can ignore the Properties argument in your on_deliver hook implementation, but you'll have to adapt the function definition by adding a variable, similar to:
24 |
25 | ```
26 | on_deliver(UserName, SubscriberId, QoS, Topic, Payload, IsRetain, _Properties) ->
27 | ...
28 | ```
29 |
30 | ### Credentials obfuscation
31 |
32 | VerneMQ now uses internal credentials obfuscation, using the following library: https://github.com/rabbitmq/credentials-obfuscation/.
33 | This avoids passwords showing up in stacktraces and/or logs. Your own authentication plugins might need adaptation, since you want to decrypt the password "at the last moment".
34 | You can check examples of how the internal VerneMQ auth plugins were adapted to make a
35 | `credentials_obfuscation:decrypt(Password)` call to check for a potentially encrypted password before giving it to the database to check.
36 |
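37 | A minimal sketch of what this could look like in an `auth_on_register` hook (the `check_against_db/2` helper is hypothetical and stands in for your own credential check):
38 |
39 | ```
40 | auth_on_register(_Peer, _SubscriberId, UserName, Password, _CleanSession) ->
41 |     %% The password may arrive in obfuscated form; decrypt it "at the last moment" (sketch)
42 |     Plain = credentials_obfuscation:decrypt(Password),
43 |     check_against_db(UserName, Plain).
44 | ```
45 |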
37 | ### General note
38 | Some settings related to logging were adapted a bit, and there are additional settings exposed in the `vernemq.conf` file. The Linux package installer gives you the choice to use an existing `vernemq.conf` file, or start with a new template. Depending on the number of settings you have changed, it might be easiest to move and save your old `vernemq.conf`, and then use the new template to re-add your settings.
--------------------------------------------------------------------------------
/guides/not-a-tuning-guide.md:
--------------------------------------------------------------------------------
1 | # Not a tuning guide
2 |
3 | ## General relation to OS configuration values
4 |
5 | You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our [guide here](change-open-file-limits.md). Second, when you run into performance problems, don't forget to check the [settings in the `vernemq.conf` file](../configuration/introduction.md). \(Can't open more than 10k connections? Well, is the listener configured to open more than 10k?\)
6 |
7 | ## TCP buffer sizes
8 |
9 | This is the number one topic to look at, if you need to keep an eye on RAM usage.
10 |
11 | Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use in VerneMQ. The sndbuf and recbuf of the TCP socket will not count towards VerneMQ RAM, but will be used by the Linux Kernel.
12 |
13 | VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:
14 |
15 | `val(buffer) >= max(val(sndbuf),val(recbuf))`
16 |
17 | Those values correspond to `net.ipv4.tcp_wmem` and `net.ipv4.tcp_rmem` in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings \(Debian example\):
18 |
19 | ```bash
20 | sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
21 | sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
22 |
23 | # Nope, these values are not recommendations!
24 | # You really need to decide yourself.
25 | ```
26 |
27 | This would result in a 32KB application buffer for every connection.
28 |
29 | If your VerneMQ use case requires the use of different TCP buffer optimisations \(per groups of clients, for instance\) you will have to make sure that the Linux OS buffer configuration, namely `net.ipv4.tcp_wmem` and `net.ipv4.tcp_rmem`, allows for this kind of flexibility, allowing for small TCP buffers and big TCP buffers at the same time.
30 |
31 | ```text
32 | Example 1 (from Linux OS config):
33 | net.ipv4.tcp_rmem="4096 16384 32768"
34 | net.ipv4.tcp_wmem="4096 16384 65536"
35 | ```
36 |
37 | Example 1 above would allow VerneMQ to allocate minimal TCP read and write buffers of 4KB in the Linux Kernel, a max read buffer of 32KB in the kernel, and a max write buffer of 65KB in the kernel. VerneMQ itself would set its own internal per connection buffer to 65KB in addition.
38 |
39 | What we just described is VerneMQ automatically configuring TCP read and write buffers and internal buffers, deriving their values from OS settings.
40 |
41 | There are multiple additional ways to configure TCP buffers described below:
42 |
43 | #### Setting TCP buffer sizes globally within VerneMQ:
44 |
45 | If VerneMQ finds an `advanced.config` file, it will use the buffer sizes you have configured there for all its TCP listeners \(and the TCP connections accepted by those listeners\), except the Erlang distribution listeners within the cluster.
46 |
47 | \(You'll find an example in the section below on the `advanced.config` [file](https://docs.vernemq.com/misc/not-a-tuning-guide#the-advanced-config-file)\)
48 |
49 | #### Per protocol \(since 1.8.0\):
50 |
51 | If VerneMQ finds a per protocol configuration \(`listener.tcp.buffer_sizes`\) in the `vernemq.conf` file, it will use those buffer sizes for the specific protocol. \(currently only MQTT or MQTTS. Support for WS/WSS/HTTP/VMQ listeners is on the roadmap\).
52 |
53 | For `listener.tcp.buffer_sizes` you’ll always have to state 3 values in bytes: the TCP receive buffer \(recbuf\), the TCP send buffer \(sndbuf\), and the internal application side buffer \(buffer\). You should set “buffer” \(the 3rd value\) to `val(buffer) >= max(val(sndbuf),val(recbuf))`.
54 |
55 | ```text
56 | Example 2 (vernemq.conf):
57 | listener.tcp.buffer_sizes = 4096,16384,32768
58 | ```
59 |
60 | #### Per listener \(since 1.8.0\):
61 |
62 | If VerneMQ finds per listener config values \(`listener.tcp.my_listener.buffer_sizes`\), it will use those buffer sizes for all connections setup by that specific listener. This is the most useful approach if you want to set specific different buffer sizes, like huge send buffers for listeners that accept massive consumers. \(consumers with high expected message throughput\).
63 |
64 | You would then configure a different listener for those massive consumers, and by that have the option to fine tune the TCP buffer sizes.
65 |
66 | ```text
67 | Example 3: (vernemq.conf)
68 | listener.tcp.my_listener.buffer_sizes = 4096,16384,32768
69 | ```
70 |
71 | For `listener.tcp.my_listener.buffer_sizes` you’ll always have to state 3 values in bytes: the TCP receive buffer \(recbuf\), the TCP send buffer \(sndbuf\), and an internal application side buffer \(buffer\). You should set “buffer” \(the 3rd value\) to `val(buffer) >= max(val(sndbuf),val(recbuf))`.
72 |
73 | #### VerneMQ per single ClientID/or TCP connection:
74 |
75 | This scenario would be possible with a plugin.
76 |
77 | ## The advanced.config file
78 |
79 | The `advanced.config` file is a supplementary configuration file that sits in the same directory as the `vernemq.conf`. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an `advanced.config` file:
80 |
81 | ```erlang
82 | [{vmq_server, [
83 | {tcp_listen_options,
84 | [{sndbuf, 4096},
85 | {recbuf, 4096}]}]}].
86 | ```
87 |
88 | ## The vm.args file
89 |
90 | For very advanced & custom configurations, you can add a `vm.args` file to the same directory where the `vernemq.conf` file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang specific parameters you might have configured via the `vernemq.conf`. Normally, VerneMQ auto-generates a `vm.args` file for every boot in `/var/lib/vernemq/generated.configs/` \(Debian package example\) from `vernemq.conf` and other potential configuration sources.
91 |
92 | {% hint style="info" %}
93 | A manually generated `vm.args` is not supplementary; it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this is to copy and extend the auto-generated file.
94 | {% endhint %}
95 |
96 | This is what a `vm.args` file might look like:
97 |
98 | ```erlang
99 | +P 256000
100 | -env ERL_MAX_ETS_TABLES 256000
101 | -env ERL_CRASH_DUMP /erl_crash.dump
102 | -env ERL_FULLSWEEP_AFTER 0
103 | -env ERL_MAX_PORTS 262144
104 | +A 64
105 | -setcookie vmq # Important: Use your own private cookie...
106 | -name VerneMQ@127.0.0.1
107 | +K true
108 | +sbt db
109 | +sbwt very_long
110 | +swt very_low
111 | +sub true
112 | +Mulmbcs 32767
113 | +Mumbcgs 1
114 | +Musmbcs 2047
115 | # Nope, these values are not recommendations!
116 | # You really need to decide yourself, again ;)
117 | ```
118 |
119 | ## A note on TLS
120 |
121 | Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.
122 |
123 | TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state \(> 30KB\).
124 |
125 | Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.
126 |
127 | Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.
128 |
129 | {% hint style="info" %}
130 | The Erlang TLS implementation is rather picky about certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the `openssl s_client` command.
131 | {% endhint %}
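132 |
133 | For example, to inspect the certificate chain a listener presents, you could run \(host and port are placeholders for your own SSL listener\):
134 |
135 | ```bash
136 | openssl s_client -connect mqtt.example.io:8883 -showcerts
137 | ```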
132 |
133 |
--------------------------------------------------------------------------------
/guides/typical-vernemq-deployment.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | In the following we describe how a typical VerneMQ deployment can look and
4 |   some of the concerns one has to take into account when designing such a
5 | system.
6 | ---
7 |
8 | # A typical VerneMQ deployment
9 |
10 | A typical VerneMQ deployment could from a high level look like the following:
11 |
12 | 
13 |
14 | In this scenario MQTT clients connect from the internet, are authenticated and authorized against the Authentication Management Service, and publish and receive messages, either with each other or with the Backend-Services, which might be responsible for sending control messages to the clients or for storing and forwarding messages to other systems or databases for later processing.
15 |
16 | To build and deploy a system such as the above, a lot of decisions have to be made. These can concern how to do authentication and authorization, where to do TLS termination, how the load balancer should be configured \(if one is needed at all\), what the MQTT architecture and topic trees should look like, and how and to what level the system can/should scale. To simplify the following discussion we'll set a few requirements:
17 |
18 | * Clients connecting from the internet are using TLS client certificates
19 | * The messaging pattern is largely fan-in: The clients continuously publish a lot of messages to a set of topics which have to be handled by the Backend-Services.
20 | * The client sessions are persistent, which means the broker will store QoS 1 & 2 messages routed to the clients while the clients are offline.
21 |
22 | In the following we'll cover some of these options and concerns.
23 |
24 | ### Load Balancers and the PROXY Protocol
25 |
26 | Often a load balancer is deployed between MQTT clients and the VerneMQ cluster. One of the main purposes of the load balancer is to ensure that client connections are distributed between the VerneMQ nodes so each node has the same amount of connections. Usually a load balancer provides different load balancing strategies for deciding how to select the node where it should route an incoming connection. Examples of these are random, source hashing \(based on source IP\) or even protocol-aware balancing based on for example the MQTT client-id. The last two are examples of sticky balancing or session affine strategies where a client will always be routed to the same cluster node as long as the source IP or client-id remains the same.
27 |
28 | When using a load balancer the client is no longer directly connected to the VerneMQ nodes, and the peer port and IP address VerneMQ sees are therefore not those of the client, but of the load balancer. The peer information is often important for logging reasons, or if a plugin checks it against a white/black list.
29 |
30 | To solve this problem VerneMQ supports the [PROXY Protocol](http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) v1 and v2 which is designed to transport connection information across proxies. See [here](../configuration/listeners.md#proxy-protocol) how to enable the proxy protocol for an MQTT listener. In case TLS is terminated at the load balancer and client certificates are used PROXY Protocol \(v2\) will also take care of forwarding TLS client certificate details.
31 |
32 | ### Client certificates and authentication
33 |
34 | Often client certificates are used to verify and authenticate the clients. VerneMQ makes it possible to make the client certificate common name \(CN\) available to the authentication plugin system by overriding the MQTT username with the CN before authentication is performed. This works both if TLS is terminated at a load balancer and if TLS is terminated directly in VerneMQ. In case TLS is terminated at the load balancer, the PROXY Protocol forwards the certificate details, and the listener can be configured as follows to achieve this effect:
35 |
36 | ```text
37 | listener.tcp.proxy_protocol = on
38 | listener.tcp.proxy_protocol_use_cn_as_username = on
39 | ```
40 |
41 | If TLS is terminated directly in VerneMQ, the PROXY protocol isn't needed as the TLS client certificate is directly available in VerneMQ, and the CN can be used instead of the username by setting:
42 |
43 | ```text
44 | listener.ssl.require_certificate = on
45 | listener.ssl.use_identity_as_username = on
46 | ```
47 |
48 | See the details in the [MQTT listener](../configuration/listeners.md) section.
49 |
50 | The actual authentication can then be handled by an authentication and authorization plugin like [vmq\_diversity](../configuration/db-auth.md) which supports [PostgreSQL](https://www.postgresql.org/), [CockroachDB](https://www.cockroachlabs.com/), [MongoDB](https://www.mongodb.com/), [Redis](https://redis.io/) and [MySQL](https://www.mysql.com/) as backends for storing credentials and ACL rules.
51 |
52 | ### Monitoring and Alerting
53 |
54 | Another important aspect of running a VerneMQ cluster is having proper monitoring and alerting in place. All the usual things should be monitored at the OS level, such as memory and CPU usage, and alerts should be put in place so that actions can be taken if a disk is filling up or a VerneMQ node is starting to use too much CPU. VerneMQ exports a large number of metrics, and depending on the use case these can be used as important indicators that the system is running as expected.
55 |
56 | ### Performance considerations
57 |
58 | When designing a system like the one described here, there are a number of things to consider in order to get the best performance out of the available resources.
59 |
60 | #### Lower load through session affine load balancing
61 |
62 | As mentioned earlier clients in this scenario are using persistent sessions. In VerneMQ a persistent session exists only on the VerneMQ node where the client connected. This implies that if the client using a persistent session later reconnects to another node, then the session, including any offline messages, will be moved to the new node. This has a certain overhead and can be avoided if the load balancer in front of VerneMQ is using a session affine load balancing strategy such as IP source hashing to assign the client connecting to a node. Of course this strategy isn't perfect if clients often change their IP addresses, but for most cases it is a huge improvement over a random load balancing strategy.
63 |
64 | #### Handling large fan-ins
65 |
66 | In many systems the MQTT clients provide a lot of data by periodically broadcasting data to the MQTT cluster. The amount of published messages can very easily become hard to manage for a single MQTT client. Furthermore, using normal MQTT subscriptions, all subscribers would receive the same messages, so adding more subscribers to a topic doesn't help in handling the volume of messages. To solve this, VerneMQ implements a concept called [shared subscriptions](../configuration/shared_subscriptions.md) which makes it possible to distribute MQTT messages published to a topic over several MQTT clients. In this specific scenario, this would mean the Backend-Services would consist of a set of clients subscribing to cluster nodes using shared subscriptions.
67 |
68 | To avoid expensive inter-node communication, VerneMQ shared subscriptions support a policy called `local_only`, which means that messages will be delivered to shared subscribers on the local node only, and not forwarded to shared subscribers on other nodes in the cluster. With this policy, messages for the Backend-Services can be delivered in the fastest and most expedient manner with the lowest overhead. See the [shared subscriptions](../configuration/shared_subscriptions.md) documentation for more information.
69 |
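70 | In `vernemq.conf` this is a one-line setting \(see the shared subscriptions documentation linked above\):
71 |
72 | ```text
73 | shared_subscription_policy = local_only
74 | ```
75 |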
70 | #### Tuning buffer sizes
71 |
72 | Controlling TCP buffer sizes is important in ensuring optimal memory usage. The rule is that the more bandwidth or the lower the latency required, the larger the TCP buffer sizes should be. Many IoT devices communicate with very low bandwidth, and as such the server side TCP buffer sizes for these do not need to be very large. On the other hand, in this scenario the consumers handling the fan-in in the Backend-Services will receive many messages \(thousands or tens of thousands of messages per second\) and they can benefit from larger TCP buffer sizes. Read more about tuning TCP buffer sizes [here](not-a-tuning-guide.md#tcp-buffer-sizes).
73 |
74 | ### Protecting from overload
75 |
76 | An important guideline in protecting a VerneMQ cluster from overload is to allow only what is necessary. This means having and enforcing sensible authentication and authorization rules, as well as configuring conservatively so resources cannot be exhausted due to human error or MQTT clients that have turned malicious. For example, in VerneMQ it is possible to specify how many offline messages a persistent session can maximally hold via the `max_offline_messages` setting. It should be set to the lowest acceptable value which works for all clients, and/or combined with a plugin which is able to override such settings on a per-client basis. The load balancer can also play an important role in protecting the system, in that it can control the connect rates as well as impose bandwidth restrictions on clients.
77 |
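78 | For example, capping offline queues in `vernemq.conf` \(the value is illustrative; choose the lowest one that works for your clients\):
79 |
80 | ```text
81 | max_offline_messages = 1000
82 | ```
83 |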
78 | ### Deploying a VerneMQ cluster
79 |
80 | Somehow a system like this has to be deployed. How to do this will not be covered here, but it is certainly possible to deploy VerneMQ using tools like [Ansible](https://www.ansible.com/), [Chef](https://www.chef.io/products/chef-infra/) or [Puppet](https://puppet.com/), or to use container solutions such as Kubernetes. For more information on how to deploy VerneMQ on Kubernetes check out our guide: [VerneMQ on Kubernetes](vernemq-on-kubernetes.md).
81 |
82 |
83 |
84 |
85 |
86 |
--------------------------------------------------------------------------------
/installation/accepting-the-vernemq-eula.md:
--------------------------------------------------------------------------------
1 | # Accepting the VerneMQ EULA
2 |
3 | To use the VerneMQ pre-built packages and Docker images you have to accept the [VerneMQ EULA](https://vernemq.com/end-user-license-agreement). Make sure to read and understand the EULA before accepting it.
4 |
5 | ## For OS Packages
6 |
7 | Accepting the EULA for OS packages can be done by either changing the `accept_eula` line in the `vernemq.conf` file from `no` to `yes` or accepting the EULA the first time starting VerneMQ. In general, the installation of VerneMQ OS packages is now a 3 step process:
8 |
9 | 1. If you install the package with tools like `dpkg` \(example: `sudo dpkg -i vernemq-1.10.0.xenial.x86_64.deb`\), VerneMQ will install but will fail to start due to the missing EULA acceptance.
10 | 2. Accept the EULA by running `sudo vernemq chkconfig` or by adding the following line to your `vernemq.conf` file: `accept_eula = yes`.
11 | 3. Start/restart VerneMQ with: `sudo systemctl restart vernemq`.
12 |
13 |
14 |
15 | ## For Docker Images
16 |
17 | For Docker images the EULA can be accepted by setting the environment variable `DOCKER_VERNEMQ_ACCEPT_EULA=yes`; for Docker Swarm add `DOCKER_VERNEMQ_ACCEPT_EULA: yes` to the environment.
18 |
19 | For the Helm chart the EULA for the Docker images can be accepted by extending the `additionalEnv` section with:
20 |
21 | ```text
22 | additionalEnv:
23 |   - name: DOCKER_VERNEMQ_ACCEPT_EULA
24 |     value: "yes"
25 | ```
24 |
25 | and similarly for the [VerneMQ Operator](../guides/vernemq-on-kubernetes.md#deploy-vernemq-using-the-kubernetes-operator), to accept the EULA for the Docker images, the `env` can be extended with:
26 |
27 | ```text
28 | env:
29 |   - name: DOCKER_VERNEMQ_ACCEPT_EULA
30 |     value: "yes"
31 | ```
30 |
31 |
--------------------------------------------------------------------------------
/installation/centos_and_redhat.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | VerneMQ can be installed on CentOS-based systems using the binary package we
4 | provide.
5 | ---
6 |
7 | # Installing on CentOS and RHEL
8 |
9 | ## Install VerneMQ
10 |
11 | Once you have downloaded the binary package, execute the following command to install VerneMQ:
12 |
13 | ```text
14 | sudo yum install vernemq-.centos7.x86_64.rpm
15 | ```
16 |
17 | or:
18 |
19 | ```text
20 | sudo rpm -Uvh vernemq-.centos7.x86_64.rpm
21 | ```
22 |
23 | ## Activate VerneMQ node
24 |
25 | {% hint style="danger" %}
26 | To use the provided binary packages the VerneMQ EULA must be accepted. See [Accepting the VerneMQ EULA](accepting-the-vernemq-eula.md) for more information.
27 | {% endhint %}
28 |
29 | Once you've installed VerneMQ, start it on your node:
30 |
31 | ```text
32 | service vernemq start
33 | ```
34 |
35 | ## Verify your installation
36 |
37 | You can verify that VerneMQ is successfully installed by running:
38 |
39 | ```text
40 | rpm -qa | grep vernemq
41 | ```
42 |
43 | If VerneMQ has been installed successfully, `vernemq` is returned.
44 |
45 | ## Next Steps
46 |
47 | Now that you've installed VerneMQ, check out [How to configure VerneMQ](../configuration/introduction.md).
48 |
49 |
--------------------------------------------------------------------------------
/installation/debian_and_ubuntu.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | VerneMQ can be installed on Debian or Ubuntu-based systems using the binary
4 | package we provide.
5 | ---
6 |
7 | # Installing on Debian and Ubuntu
8 |
9 | ## Install VerneMQ
10 |
11 | Once you have downloaded the binary package, execute the following command to install VerneMQ:
12 |
13 | ```text
14 | sudo dpkg -i vernemq-.bionic.x86_64.deb
15 | ```
16 |
17 | _Note_: Replace bionic with the appropriate OS version, such as focal/trusty/xenial.
18 |
19 | ## Verify your installation
20 |
21 | You can verify that VerneMQ is successfully installed by running:
22 |
23 | ```text
24 | dpkg -s vernemq | grep Status
25 | ```
26 |
27 | If VerneMQ has been installed successfully, `Status: install ok installed` is returned.
28 |
29 | ## Activate VerneMQ node
30 |
31 | {% hint style="danger" %}
32 | To use the provided binary packages the VerneMQ EULA must be accepted. See [Accepting the VerneMQ EULA](accepting-the-vernemq-eula.md) for more information.
33 | {% endhint %}
34 |
35 | Once you've installed VerneMQ, start it on your node:
36 |
37 | ```text
38 | service vernemq start
39 | ```
40 |
41 | ## Default Directories and Paths
42 |
43 | The `whereis vernemq` command will give you a couple of directories:
44 |
45 | ```text
46 | whereis vernemq
47 | vernemq: /usr/sbin/vernemq /usr/lib/vernemq /etc/vernemq /usr/share/vernemq
48 | ```
49 |
50 | | Path | Description |
51 | | :--- | :--- |
52 | | /usr/sbin/vernemq | the vernemq and vmq-admin commands |
53 | | /usr/lib/vernemq | the vernemq package |
54 | | /etc/vernemq | the vernemq.conf file |
55 | | /usr/share/vernemq | the internal vernemq schema files |
56 | | /var/lib/vernemq | the vernemq data dirs for LevelDB \(Metadata Store and Message Store\) |
57 |
58 | ## Next Steps
59 |
60 | Now that you've installed VerneMQ, check out [How to configure VerneMQ](../configuration/introduction.md).
61 |
62 |
--------------------------------------------------------------------------------
/installation/docker.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | As well as being available as packages that can be installed directly into the
4 | operating systems, VerneMQ is also available as a Docker image. Below is an
5 |   example of how to set up a couple of VerneMQ nodes.
6 | ---
7 |
8 | # Running VerneMQ using Docker
9 |
10 | ## Start a VerneMQ cluster node
11 |
12 | {% hint style="danger" %}
13 | To use the provided docker images the VerneMQ EULA must be accepted. See [Accepting the VerneMQ EULA](accepting-the-vernemq-eula.md) for more information.
14 | {% endhint %}
15 |
16 | ```text
17 | docker run --name vernemq1 -d vernemq/vernemq
18 | ```
19 |
20 | Sometimes you need to configure a forwarding for ports \(on a Mac for example\):
21 |
22 | ```text
23 | docker run -p 1883:1883 --name vernemq1 -d vernemq/vernemq
24 | ```
25 |
26 | This starts a new node that listens on 1883 for MQTT connections and on 8080 for MQTT over websocket connections. However, at this moment the broker won't be able to authenticate the connecting clients. To allow anonymous clients use the `DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on` environment variable.
27 |
28 | ```text
29 | docker run -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -d vernemq/vernemq
30 | ```
31 |
32 | {% hint style="info" %}
33 | Warning: Setting `allow_anonymous=on` completely disables authentication in the broker and plugin authentication hooks are never called! See more information about the authentication hooks [here](../plugindevelopment/sessionlifecycle.md#auth_on_register-and-auth_on_register_m5).
34 | {% endhint %}
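35 |
36 | For local experiments it can be convenient to put these options into a Compose file. A minimal sketch \(the service name and file layout are illustrative; the environment variables are the ones described above and in [Accepting the VerneMQ EULA](accepting-the-vernemq-eula.md)\):
37 |
38 | ```text
39 | # docker-compose.yml (illustrative)
40 | services:
41 |   vernemq1:
42 |     image: vernemq/vernemq
43 |     ports:
44 |       - "1883:1883"
45 |     environment:
46 |       DOCKER_VERNEMQ_ACCEPT_EULA: "yes"
47 |       DOCKER_VERNEMQ_ALLOW_ANONYMOUS: "on"
48 | ```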
35 |
36 | ## Autojoining a VerneMQ cluster
37 |
38 | This allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node as in the example above, you could autojoin the cluster \(which currently consists of the single container 'vernemq1'\) like the following:
39 |
40 | ```text
41 | docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" --name vernemq2 -d vernemq/vernemq
42 | ```
43 |
44 | \(Note: you can find the IP of a Docker container using `docker inspect <CONTAINER-NAME> | grep \"IPAddress\"`\).
45 |
46 | ## Checking cluster status
47 |
48 | To check if the above containers have successfully clustered you can issue the `vmq-admin` command:
49 |
50 | ```text
51 | docker exec vernemq1 vmq-admin cluster show
52 | +--------------------+-------+
53 | | Node |Running|
54 | +--------------------+-------+
55 | |VerneMQ@172.17.0.151| true |
56 | |VerneMQ@172.17.0.152| true |
57 | +--------------------+-------+
58 | ```
59 |
60 |
--------------------------------------------------------------------------------
/misc/change-open-file-limits.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: How to change the open file limits
3 | ---
4 |
5 | # Change Open File Limits
6 |
7 | VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.
8 |
9 | Most operating systems can change the open-files limit using the `ulimit -n` command. Example:
10 |
11 | ```text
12 | ulimit -n 262144
13 | ```
14 |
15 | However, this only changes the limit for the _**current shell session**_. Changing the limit on a system-wide, permanent basis varies more between systems.
16 |
17 | What will actually happen when VerneMQ runs out of OS-side file descriptors?
18 |
19 | In short, VerneMQ will be unable to function properly, because it can't open database files or accept incoming connections.
20 | In case you see exceptions with `{error,emfile}` in the VerneMQ log files, you now know what to do, though: increase the OS settings as described below.
21 |
22 | ## Linux
23 |
24 | On most Linux distributions, the total limit for open files is controlled by `sysctl`.
25 |
26 | ```text
27 | sysctl fs.file-max
28 | fs.file-max = 262144
29 | ```
30 | An alternative way to read the `file-max` settings is:
31 |
32 | ```text
33 | cat /proc/sys/fs/file-max
34 | ```
35 |
36 | This might be high enough for your VerneMQ deployment, or not - we cannot know that. You will need at least 1 file descriptor per TCP connection, and VerneMQ needs additional file descriptors for file access etc. Also, if you have other components running on the system, you might want to consult the [sysctl manpage](http://linux.die.net/man/8/sysctl) for how to change that setting. The `fs.file-max` setting represents the global maximum of file handles a Linux kernel will allocate. Make sure this is high enough for your system.
37 |
38 | Once you're good regarding `file-max`, you still need to configure the per-process open files limit. You'll set the number of file descriptors a single process or application like VerneMQ is allowed to grab. As every process belongs to a user, you need to bind the setting to a Linux user (here, the `vernemq` user).
39 | To do this, edit `/etc/security/limits.conf`, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the `vernemq` user, substituting your desired hard and soft limits:
40 |
41 | ```text
42 | vernemq soft nofile 65536
43 | vernemq hard nofile 262144
44 | ```
45 |
46 | On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file `/etc/default/vernemq` and specify a manual limit:
47 |
48 | ```text
49 | ulimit -n 262144
50 | ```
51 |
52 | This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in `/etc/security/limits.conf` if you’re solely relying on init scripts.
53 |
54 | On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, `sudo` properly inherits the values from the executing user.
55 |
56 | ### Linux and Systemd service files
57 |
58 | Newer VerneMQ packages use a systemd service file. You can adapt the `LimitNOFILE` setting in the `vernemq.service` file to the value you need. It is set to `infinity` by default already, so you only need to adapt it in case you want a lower value. The reason we need to enforce the setting is that systemd doesn't automatically take over the `nofile` settings from the OS.
59 |
60 | ```text
61 | LimitNOFILE=infinity
62 | ```
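63 |
64 | After changing the service file, apply the change with the standard systemd workflow:
65 |
66 | ```text
67 | sudo systemctl daemon-reload
68 | sudo systemctl restart vernemq
69 | ```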
63 |
64 | ## Enable PAM-Based Limits for Debian & Ubuntu
65 |
66 | It can be helpful to enable PAM user limits so that non-root users, such as the `vernemq` user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values **for all users of the system** to allow for up to 65536 open files.
67 |
68 | Edit `/etc/pam.d/common-session` and append the following line:
69 |
70 | ```text
71 | session required pam_limits.so
72 | ```
73 |
74 | If `/etc/pam.d/common-session-noninteractive` exists, append the same line as above.
75 |
76 | Save and close the file.
77 |
78 | Edit `/etc/security/limits.conf` and append the following lines to the file:
79 |
80 | ```text
81 | * soft nofile 65536
82 | * hard nofile 262144
83 | ```
84 |
85 | 1. Save and close the file.
86 | 2. \(optional\) If you will be accessing the VerneMQ nodes via secure shell \(ssh\), you should also edit `/etc/ssh/sshd_config` and uncomment the following line:
87 |
88 | ```text
89 | #UseLogin no
90 | ```
91 |
92 | and set its value to `yes` as shown here:
93 |
94 | ```text
95 | UseLogin yes
96 | ```
97 |
98 | 3. Restart the machine so that the limits take effect, and verify that the new limits are set with the following command:
101 |
102 | ```text
103 | ulimit -a
104 | ```
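
To inspect just the open file limits instead of all limits, you can query the soft and hard values directly:

```text
ulimit -Sn   # soft limit for the current shell/user
ulimit -Hn   # hard limit for the current shell/user
```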
105 |
106 | ## Enable PAM-Based Limits for CentOS and Red Hat
107 |
108 | 1. Edit `/etc/security/limits.conf` and append the following lines to the file:
111 |
112 | ```text
113 | * soft nofile 65536
114 | * hard nofile 262144
115 | ```
116 |
117 | 2. Save and close the file.
118 | 3. Restart the machine so that the limits take effect, and verify that the new limits are set with the following command:
119 |
120 | ```text
121 | ulimit -a
122 | ```
123 |
124 | {% hint style="info" %}
125 | In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the `vernemq` user only by substituting the two asterisks \(\*\) in the examples with `vernemq`.
126 | {% endhint %}
127 |
128 | ## Solaris
129 |
130 | In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to `/etc/system`:
131 |
132 | ```text
133 | set rlim_fd_max=262144
134 | ```
135 |
138 | ## Mac OS X
139 |
140 | To check the current limits on your Mac OS X system, run:
141 |
142 | ```text
143 | launchctl limit maxfiles
144 | ```
145 |
146 | The last two columns are the soft and hard limits, respectively.
147 |
148 | To adjust the maximum open file limits in OS X 10.7 \(Lion\) or newer, edit `/etc/launchd.conf` and increase the limits for both values as appropriate.
149 |
150 | For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:
151 |
152 | 1. Verify current limits:
153 |
154 | > ```text
155 | > launchctl limit
156 | > ```
157 | >
158 | > The response output should look something like this:
159 | >
160 | > ```text
161 | > cpu unlimited unlimited
162 | > filesize unlimited unlimited
163 | > data unlimited unlimited
164 | > stack 8388608 67104768
165 | > core 0 unlimited
166 | > rss unlimited unlimited
167 | > memlock unlimited unlimited
168 | > maxproc 709 1064
169 | > maxfiles 10240 10240
170 | > ```
171 |
172 | 2. Edit \(or create\) `/etc/launchd.conf` and increase the limits. Add lines that look like the following \(using values appropriate to your environment\):
173 |
174 | > ```text
175 | > limit maxfiles 16384 32768
176 | > ```
177 |
178 | 3. Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the `launchctl limit` command:
179 |
180 | > ```text
181 | > launchctl limit
182 | > ```
183 | >
184 | > The response output should look something like this:
185 | >
186 | > ```text
187 | > cpu unlimited unlimited
188 | > filesize unlimited unlimited
189 | > data unlimited unlimited
190 | > stack 8388608 67104768
191 | > core 0 unlimited
192 | > rss unlimited unlimited
193 | > memlock unlimited unlimited
194 | > maxproc 709 1064
195 | > maxfiles 16384 32768
196 | > ```
197 |
198 | **Attributions**
199 |
200 | This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.
201 |
202 |
--------------------------------------------------------------------------------
/misc/loadtesting.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Loadtesting VerneMQ with vmq_mzbench
3 | ---
4 |
5 | # Loadtesting VerneMQ
6 |
7 | You can loadtest VerneMQ with our [vmq\_mzbench tool](https://github.com/vernemq/vmq_mzbench). It is based on Machinezone's very powerful [MZBench system](https://github.com/mzbench/mzbench) and lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles \(and much more\) in a formal way, and let vmq\_mzbench fail automatically if it can't meet them.
8 |
9 | If you have an AWS account, vmq\_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.
10 |
11 | ## 1. Install MZBench
12 |
13 | Please follow the [MZBench installation guide](https://mzbench.github.io/mzbench/#installation)
14 |
15 | ## 2. Install vmq\_mzbench
16 |
17 | Actually, you don't even have to install vmq\_mzbench if you don't want to. Your scenario file will automatically fetch vmq\_mzbench for any test you do. vmq\_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.
18 |
19 | In case you still want to have `vmq_mzbench` on your local machine, go through the following steps:
20 |
21 | ```text
22 | git clone git://github.com/vernemq/vmq_mzbench.git
23 | cd vmq_mzbench
24 | ./rebar get-deps
25 | ./rebar compile
26 | ```
27 |
28 | To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:
29 |
30 | ```erlang
31 | {make_install, [
32 | {rsync, "/path/to/your/installation/vmq_mzbench/"},
33 | {exclude, "deps"}]},
34 | ```
35 |
36 | If you'd rather let the script itself fetch vmq\_mzbench, you can point it to GitHub:
37 |
38 | ```erlang
39 | {make_install, [
40 | {git, "git://github.com/vernemq/vmq_mzbench.git"}]},
41 | ```
42 |
43 | ## 3. Write vmq\_mzbench scenario files
44 |
45 | {% hint style="info" %}
46 | MZBench recently switched from an Erlang-styled Scenario DSL to a more python-like DSL dubbed BDL \(Benchmark Definition Language\). Have a look at the [BDL examples](https://github.com/mzbench/mzbench/tree/master/examples.bdl) on Github.
47 | {% endhint %}
48 |
49 | You can familiarize yourself quickly with [MZBench's guide](https://mzbench.github.io/mzbench/scenarios/spec/) on writing loadtest scenarios.
50 |
51 | There's not much to learn: just make sure you understand how pools and loops work. Then you can add the vmq\_mzbench statement functions to the mix and define actual loadtest scenarios.
52 |
53 | Here's a list of the most important vmq\_mzbench statement functions you can use in MQTT scenario files:
54 |
55 | * `random_client_id(State, Meta, I)`: Create a random client Id of length I
56 | * `fixed_client_id(State, Meta, Name, Id)`: Create a deterministic client Id with schema Name ++ "-" ++ Id
57 | * `worker_id(State, Meta)`: Get the internal, sequential worker Id
58 | * `client(State, Meta)`: Get the client Id you set yourself during connection setup with the option {t, client, "client"}
59 | * `connect(State, Meta, ConnectOpts)`: Connect to the broker with the options given in ConnectOpts
60 | * `disconnect(State, Meta)`: Disconnect normally
61 | * `subscribe(State, Meta, Topic, QoS)`: Subscribe to Topic with Quality of Service QoS
62 | * `subscribe_to_self(State, _Meta, TopicPrefix, Qos)`: Subscribe to an exclusive topic, for 1:1 testing
63 | * `unsubscribe(State, Meta, Topic)`: Unsubscribe from Topic
64 | * `publish(State, Meta, Topic, Payload, QoS)`: Publish a message with binary Payload to Topic with QoS
65 | * `publish(State, Meta, Topic, Payload, QoS, RetainFlag)`: Publish a message with binary Payload to Topic with QoS and RetainFlag
66 | * `publish_to_self(State, Meta, TopicPrefix, Payload, Qos)`: Publish a payload to an exclusive topic, for 1:1 testing
67 |
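As a purely illustrative sketch of how these statements fit together \(hypothetical host, topic and rates; the exact tuple syntax depends on your MZBench version, so consult the vmq\_mzbench repository for working examples\), a pool of workers that connect, subscribe and publish might look roughly like this in the older Erlang-style DSL:

```erlang
[{pool, [{size, 100},
         {worker_type, mqtt_worker}],
  [{connect, [{host, "localhost"},
              {port, 1883},
              {client, {random_client_id, 10}},
              {clean_session, true},
              {keepalive_interval, 60}]},
   {subscribe, "bench/topic", 1},
   %% publish for one minute at 10 messages per second
   {loop, [{time, {1, min}},
           {rate, {10, rps}}],
    [{publish, "bench/topic", <<"hello">>, 1}]},
   {disconnect}]}].
```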
68 |
69 | It's easy to add more statement functions to the MQTT worker if needed. For a full list of the exported statement functions, we encourage you to have a look at the [MQTT worker](https://github.com/vernemq/vmq_mzbench/blob/master/src/mqtt_worker.erl) code directly.
70 |
71 |
--------------------------------------------------------------------------------
/misc/not-a-tuning-guide.md:
--------------------------------------------------------------------------------
1 | # Not a tuning guide
2 |
3 | ## General relation to OS configuration values
4 |
5 | You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our [guide here](change-open-file-limits.md). Second, when you run into performance problems, don't forget to check the [settings in the `vernemq.conf` file](../configuration/introduction.md). \(Can't open more than 10k connections? Well, is the listener configured to open more than 10k?\)
6 |
7 | ## TCP buffer sizes
8 |
9 | This is the number one topic to look at if you need to keep an eye on RAM usage.
10 |
11 | Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use.
12 |
13 | VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:
14 |
15 | `val(buffer) >= max(val(sndbuf),val(recbuf))`
16 |
17 | Those values correspond to `net.ipv4.tcp_wmem` and `net.ipv4.tcp_rmem` in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings \(Debian example\):
18 |
19 | ```bash
20 | sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
21 | sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
22 |
23 | # Nope, these values are not recommendations!
24 | # You really need to decide yourself.
25 | ```
26 |
27 | This would result in a 32KB application buffer for every connection. On a multi-purpose server where you install VerneMQ as a test, you might not want to change your OS's TCP settings, of course. In that case, you can still configure the buffer sizes manually for VerneMQ by using the `advanced.config` file.
28 |
29 | ## The advanced.config file
30 |
31 | The `advanced.config` file is a supplementary configuration file that sits in the same directory as the `vernemq.conf`. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an `advanced.config` file:
32 |
33 | ```erlang
34 | [{vmq_server, [
35 | {tcp_listen_options,
36 | [{sndbuf, 4096},
37 | {recbuf, 4096}]}]}].
38 | ```
39 |
40 | ## The vm.args file
41 |
42 | For very advanced & custom configurations, you can add a `vm.args` file to the same directory where the `vernemq.conf` file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang specific parameters you might have configured via the `vernemq.conf`. Normally, VerneMQ auto-generates a `vm.args` file for every boot in `/var/lib/vernemq/generated.configs/` \(Debian package example\) from `vernemq.conf` and other potential configuration sources.
43 |
44 | {% hint style="info" %}
45 | A manually created `vm.args` is not supplementary; it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this is to copy and extend the auto-generated file.
46 | {% endhint %}
47 |
48 | This is what a `vm.args` might look like:
49 |
50 | ```erlang
51 | +P 256000
52 | -env ERL_MAX_ETS_TABLES 256000
53 | -env ERL_CRASH_DUMP /erl_crash.dump
54 | -env ERL_FULLSWEEP_AFTER 0
55 | -env ERL_MAX_PORTS 262144
56 | +A 64
57 | -setcookie vmq # Important: Use your own private cookie...
58 | -name VerneMQ@127.0.0.1
59 | +K true
60 | +sbt db
61 | +sbwt very_long
62 | +swt very_low
63 | +sub true
64 | +Mulmbcs 32767
65 | +Mumbcgs 1
66 | +Musmbcs 2047
67 | # Nope, these values are not recommendations!
68 | # You really need to decide yourself, again ;)
69 | ```
70 |
71 | ## A note on TLS
72 |
73 | Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.
74 |
75 | TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state \(> 30KB\).
76 |
77 | Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.
78 |
79 | Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.
80 |
81 | {% hint style="info" %}
82 | The Erlang TLS implementation is rather picky on certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the `openssl s_client` command.
83 | {% endhint %}
84 |
85 |
--------------------------------------------------------------------------------
/monitoring/graphite.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Description and Configuration of the Graphite exporter
3 | ---
4 |
5 | # Graphite
6 |
7 | The graphite exporter reports the broker metrics at a fixed interval \(defined in milliseconds\) to a graphite server. The necessary configuration is done inside the `vernemq.conf`.
8 |
9 | ```text
10 | graphite_enabled = on
11 | graphite_host = carbon.hostedgraphite.com
12 | graphite_port = 2003
13 | graphite_interval = 20000
14 | graphite_api_key = YOUR-GRAPHITE-API-KEY
16 | ```
17 |
18 | You can further tune the connection to the Graphite server:
19 |
20 | ```text
21 | # set the connect timeout (defaults to 5000 ms)
22 | graphite_connect_timeout = 10000
23 |
24 | # set a reconnect timeout (default to 15000 ms)
25 | graphite_reconnect_timeout = 10000
26 |
27 | # set a custom graphite prefix (defaults to '')
28 | graphite_prefix = vernemq
29 | ```
30 |
31 | {% hint style="info" %}
32 | The above configuration parameters can be changed at runtime using the `vmq-admin` script.
33 | Usage: `vmq-admin set <setting>=<value> ... [[--node | -n] <node> | --all]`
34 | Example: `vmq-admin set graphite_interval=20000 graphite_port=2003 -n VerneMQ@127.0.0.1`
35 | {% endhint %}
36 |
37 |
--------------------------------------------------------------------------------
/monitoring/health-check.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: The VerneMQ health checker
3 | ---
4 |
5 | # Health Checker
6 |
7 | A simple way to gauge the health of a VerneMQ cluster is to query the `/health` path on the [HTTP listener](../configuration/http-listeners.md).
8 |
9 | The health check will return **200** when VerneMQ is accepting connections and is joined with the cluster \(for clustered setups\). **503** will be returned in case either of those two conditions is not met.
10 | In addition to the simple `/health` path, the following options are available as well:
11 |
12 | - `/health/ping`: Cowboy (i.e. VerneMQ) is up.
13 | - `/health/listeners`: will fail if any of the configured listeners is down or suspended
14 | - `/health/listeners_full_cluster`: will fail if any listener is down or any of the cluster nodes is offline. (You probably don't want to base automated actions on this status.)
15 |
16 | With the `ping` or `listeners` option, you can configure a health check for a single node, even if it is part of a cluster.
17 |
18 | If you want to configure any automated actions based on the health check results, you need to choose an appropriate health check path. For example, you should not use the `/health` check (checking for full cluster consistency) to automatically restart a single node. This is of special importance for Kubernetes deployments.
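
For example, you can probe the endpoint with `curl` and print only the HTTP status code (assuming the default HTTP listener on port 8888 of the local node):

```text
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8888/health
# 200 -> healthy, 503 -> unhealthy
```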
19 |
--------------------------------------------------------------------------------
/monitoring/introduction.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Description and Configuration of the built-in Monitoring mechanism
3 | ---
4 |
5 | # Introduction
6 |
7 | VerneMQ can be monitored in several ways. We implemented native support for [Graphite](https://graphiteapp.org/), [MQTT $SYS tree](systree.md), and [Prometheus](http://prometheus.io).
8 |
9 | The metrics are also available via the command line tool:
10 |
11 | ```text
12 | vmq-admin metrics show
13 | ```
14 |
15 | Or with:
16 |
17 | ```text
18 | vmq-admin metrics show -d
19 | ```
20 |
21 | This will output the metrics together with a short description of what each metric is about. An example looks like:
22 |
23 | ```text
24 | # The number of AUTH packets received.
25 | counter.mqtt_auth_received = 0
26 |
27 | # The number of times a MQTT queue process has been initialized from offline storage.
28 | counter.queue_initialized_from_storage = 0
29 |
30 | # The number of PUBLISH packets sent.
31 | counter.mqtt_publish_sent = 10
32 |
33 | # The number of bytes used for storing retained messages.
34 | gauge.retain_memory = 21184
35 | ```
36 |
37 | Notice that the metrics:
38 |
39 | ```text
40 | mqtt_connack_not_authorized_sent
41 | mqtt_connack_bad_credentials_sent
42 | mqtt_connack_server_unavailable_sent
43 | mqtt_connack_identifier_rejected_sent
44 | mqtt_connack_unacceptable_protocol_sent
45 | mqtt_connack_accepted_sent
46 | ```
47 |
48 | are no longer used \(always 0\) and will be removed in the future. They were replaced with `mqtt_connack_sent` using the `return_code` label. For MQTT 5.0 the `reason_code` label is used instead.
49 |
50 | The command line output is aggregated by default, but details for a label can be shown as well, for example all metrics with the `not_authorized` label:
51 |
52 | ```text
53 | vmq-admin metrics show --return_code=not_authorized
54 | counter.mqtt_connack_sent = 0
55 | ```
56 |
57 | All available labels can be shown using `vmq-admin metrics show --help`.
58 |
59 |
--------------------------------------------------------------------------------
/monitoring/netdata.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Netdata Metrics
3 | ---
4 |
5 | # Netdata
6 |
7 | A great way to monitor VerneMQ is to use [Netdata](https://github.com/netdata/netdata) or [Netdata Cloud.](https://www.netdata.cloud/) Netdata uses VerneMQ in its Netdata Cloud service, and has developed full integration with VerneMQ.
8 |
9 | This means that you have one of the best monitoring tools ready for VerneMQ. Netdata will show you all the VerneMQ metrics in a realtime dashboard.
10 |
11 | When Netdata runs on the same node as VerneMQ it will automatically discover the VerneMQ node.
12 |
13 | Learn how to set up Netdata for VerneMQ with the following guide:
14 |
15 | [https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vernemq](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vernemq)
16 |
17 |
--------------------------------------------------------------------------------
/monitoring/prometheus.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Description and Configuration of the Prometheus exporter
3 | ---
4 |
5 | # Prometheus
6 |
7 | The Prometheus exporter is enabled by default and installs an HTTP handler on `http://localhost:8888/metrics`. To read more about configuring the HTTP listener, see [HTTP Listener Configuration](../configuration/http-listeners.md).
8 |
9 | ## Example Scrape Config
10 |
11 | Add the following configuration to the `scrape_configs` section inside `prometheus.yml` of your Prometheus server.
12 |
13 | ```yaml
14 | # A scrape configuration containing exactly one endpoint to scrape:
15 | # Here it's Prometheus itself.
16 | scrape_configs:
17 | - job_name: 'vernemq'
18 | scrape_interval: 5s
19 | scrape_timeout: 5s
20 | static_configs:
21 | - targets: ['localhost:8888']
22 | ```
23 |
24 | This tells Prometheus to scrape the VerneMQ metrics endpoint every 5 seconds.
25 |
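Building on this scrape job, a minimal alerting rule could notify you when the VerneMQ metrics endpoint stops responding. This is an illustrative sketch using standard Prometheus alerting syntax; adapt the job name, duration, and labels to your environment:

```yaml
groups:
  - name: vernemq
    rules:
      - alert: VerneMQMetricsEndpointDown
        expr: up{job="vernemq"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "VerneMQ metrics endpoint on {{ $labels.instance }} is down"
```
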
26 | Please follow the documentation on the [Prometheus](http://prometheus.io) website to properly configure the metrics scraping as well as how to access those metrics and configure alarms and graphs.
27 |
28 |
--------------------------------------------------------------------------------
/monitoring/status.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: The VerneMQ Status Page
3 | ---
4 |
5 | # Status Page
6 |
7 | VerneMQ comes with a built-in Status Page that is enabled by default and is available on `http://localhost:8888/status`, see [HTTP listeners](../configuration/http-listeners.md).
8 |
9 | The Status Page is a simple overview of the cluster and the individual nodes in the cluster as seen below. Note that while the Status Page is running on each node of the cluster, it's enough to look at one of them to get a quick status of your cluster.
10 |
11 | The Status Page has the following sections:
12 |
13 | - Issues (Warnings on netsplits, etc)
14 | - Cluster Overview
15 | - Node Status
16 |
17 | The Status Page will automatically refresh itself every 10 seconds, and try to calculate rates in JavaScript, based on that reload window. Therefore, the displayed rates might be slightly inaccurate.
18 | The Status Page should not be considered a replacement for a metrics system. Running in production, you certainly want to hook up VerneMQ to a metrics system like Prometheus.
19 |
20 | 
21 |
22 |
--------------------------------------------------------------------------------
/monitoring/systree.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: Description and Configuration of the $SYSTree Monitoring Feature
3 | ---
4 |
5 | # $SYSTree
6 |
7 | The systree functionality is enabled by default and reports the broker metrics at a fixed interval defined in the `vernemq.conf`. The metrics defined [here](introduction.md) are transformed to MQTT topics, e.g. `mqtt_publish_received` is transformed to `$SYS/<nodename>/mqtt/publish/received`, where `<nodename>` is your node's name as configured in the `vernemq.conf`. To find it, you can grep the file: `grep nodename vernemq.conf`
8 |
9 | The complete list of metrics can be found [here.](introduction.md)
10 |
11 | ```text
12 | systree_interval = 20000
13 | ```
14 |
15 | This option defaults to `20000` milliseconds.
16 |
17 | If the systree feature is not required, it can be disabled in `vernemq.conf`:
18 |
19 | ```text
20 | systree_enabled = off
21 | ```
22 |
23 | {% hint style="success" %}
24 | The feature and the interval can be changed at runtime using the `vmq-admin` script.
25 | Usage: `vmq-admin set <setting>=<value> ... [[--node | -n] <node> | --all]`
26 | Example: `vmq-admin set systree_interval=60000 -n VerneMQ@127.0.0.1`
27 | {% endhint %}
28 |
29 | Examples:
30 |
31 | ```text
32 | mosquitto_sub -t '$SYS/<nodename>/#' -u <username> -P <password> -d
33 | ```
34 |
35 |
--------------------------------------------------------------------------------
/plugindevelopment/boilerplate.md:
--------------------------------------------------------------------------------
1 | # Erlang Boilerplate
2 |
3 | We recommend using the `rebar3` toolchain to generate the basic Erlang OTP application boilerplate and start from there.
4 |
5 | ```text
6 | rebar3 new app name="myplugin" desc="this is my first VerneMQ plugin"
7 | ===> Writing myplugin/src/myplugin_app.erl
8 | ===> Writing myplugin/src/myplugin_sup.erl
9 | ===> Writing myplugin/src/myplugin.app.src
10 | ===> Writing myplugin/rebar.config
11 | ===> Writing myplugin/.gitignore
12 | ===> Writing myplugin/LICENSE
13 | ===> Writing myplugin/README.md
14 | ```
15 |
16 | Change the `rebar.config` file to include the `vernemq_dev` dependency:
17 |
18 | ```erlang
19 | {erl_opts, [debug_info]}.
20 | {deps, [{vernemq_dev,
21 | {git, "git://github.com/vernemq/vernemq_dev.git", {branch, "master"}}}
22 | ]}.
23 | ```
24 |
25 | Compile the application; this will automatically fetch `vernemq_dev`:
26 |
27 | ```text
28 | rebar3 compile
29 | ===> Verifying dependencies...
30 | ===> Fetching vernemq_dev ({git,
31 | "git://github.com/vernemq/vernemq_dev.git",
32 | {branch,"master"}})
33 | ===> Compiling vernemq_dev
34 | ===> Compiling myplugin
35 | ```
36 |
37 | Now you're ready to implement the hooks. Don't forget to add the proper `vmq_plugin_hooks` entries to your `src/myplugin.app.src` file.
38 |
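As a hedged sketch \(hypothetical module name; the hook name and arity must match one of the hook behaviours in `vernemq_dev`\), such an entry could look like this:

```erlang
{env, [
    {vmq_plugin_hooks, [
        %% {Module, Function, Arity, Options}
        {myplugin, auth_on_register, 5, []}
    ]}
]}
```
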
39 | For a complete example, see the [vernemq\_demo\_plugin](https://github.com/vernemq/vernemq_demo_plugin).
40 |
41 |
--------------------------------------------------------------------------------
/plugindevelopment/enhancedauthflow.md:
--------------------------------------------------------------------------------
1 | # Enhanced Auth Flow
2 |
3 | VerneMQ supports [enhanced authentication](http://docs.oasis-open.org/mqtt/mqtt/v5.0/cs02/mqtt-v5.0-cs02.html#_Toc514345528) flows or SASL style authentication for MQTT 5.0 sessions. The enhanced authentication mechanism can be used for initial authentication when the client connects or to re-authenticate clients at a later point.
4 |
5 | 
6 |
7 | The `on_auth_m5` hook allows the plugin to implement SASL style authentication flows by either accepting, rejecting \(disconnecting the client\) or continue the flow. The `on_auth_m5` hook is specified in the Erlang behaviour [on\_auth\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_auth_m5_hook.erl) in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
8 |
9 |
--------------------------------------------------------------------------------
/plugindevelopment/introduction.md:
--------------------------------------------------------------------------------
1 | ---
2 | description: >-
3 | Learn how to implement VerneMQ Plugins for customizing many aspects of how
4 | VerneMQ deals with client connections, subscriptions, and message flows.
5 | ---
6 |
7 | # Introduction
8 |
9 | VerneMQ is implemented in [Erlang OTP](https://www.erlang.org/) and therefore runs on top of the Erlang VM. For this reason *native* plugins have to be developed in a programming language that runs on the Erlang VM. The most popular choice is obviously the Erlang programming language itself, but Elixir or Lisp Flavoured Erlang \(LFE\) could be used too.
10 | That said, all the plugin hooks are also exposed over (a subset of) Lua, and over WebHooks. This allows you to implement a VerneMQ plugin by simply implementing a WebHook endpoint, using any programming language you like. You can also implement a VerneMQ plugin as a Lua script.
11 |
12 | {% hint style="danger" %}
13 | Be aware that in VerneMQ a plugin does NOT run in a sandboxed environment and misbehaviour could seriously harm the system \(e.g. performance degradation, reduced availability as well as consistency, and message loss\). Get in touch with us in case you require a review of your plugin.
14 | {% endhint %}
15 |
16 | This guide explains the different flows that expose different hooks to be used for custom plugins. It also describes the code structure a plugin must comply with in order to be successfully loaded and started by the VerneMQ plugin mechanism.
17 |
18 | All the hooks that are currently exposed fall into one of three categories.
19 |
20 | 
21 |
22 | 1. Hooks that allow you to change the protocol flow. An example could be to authenticate a client using the `auth_on_register` hook.
23 | 2. Hooks that inform you about a certain action; these could be used, for example, to implement a custom logging or audit plugin.
24 | 3. Hooks that are called given a certain condition.
25 |
26 | Notice that some hooks come in two variants, for example the `auth_on_register` and the `auth_on_register_m5` hooks. The `_m5` postfix refers to the fact that this hook is only invoked in an MQTT 5.0 session context, whereas the other is invoked in an MQTT 3.1/3.1.1 session context.
27 |
28 | Before going into the details, let's give a quick intro to the VerneMQ plugin system.
29 |
30 | ## Plugin System
31 |
32 | The VerneMQ plugin system allows you to load, unload, start and stop plugins during runtime, and you can even upgrade a plugin during runtime. To make this work it is required that the plugin is an OTP application and strictly follows the rules of implementing the Erlang OTP application behaviour. It is recommended to use the `rebar3` toolchain to compile the plugin. VerneMQ comes with built-in support for the directory structure used by `rebar3`.
33 |
34 | Every plugin has to describe the hooks it is implementing as part of its application environment file. The `vmq_acl` plugin for instance comes with the application environment file below:
35 |
36 | ```erlang
37 | {application, vmq_acl,
38 | [
39 | {description, "Simple File based ACL for VerneMQ"},
40 | {vsn, git},
41 | {registered, []},
42 | {applications, [
43 | kernel,
44 | stdlib,
45 | clique
46 | ]},
47 | {mod, { vmq_acl_app, []}},
48 | {env, [
49 | {file, "priv/test.acl"},
50 | {interval, 10},
51 | {vmq_config_enabled, true},
52 | {vmq_plugin_hooks, [
53 | {vmq_acl, change_config, 1, [internal]},
54 | {vmq_acl, auth_on_publish, 6, []},
55 | {vmq_acl, auth_on_subscribe, 3, []}
56 | ]}
57 | ]}
58 | ]}.
59 | ```
60 |
61 | Lines 6 to 10 instruct the plugin system to ensure that those dependent applications are loaded and started. If you're using third party dependencies, make sure that they are available in compiled form and part of the plugin load path. Lines 16 to 20 allow the plugin system to compile the plugin rules. Yes, you've heard correctly. The rules are compiled into Erlang VM code to make sure the lookup and execution of plugin code is as fast as possible. Some hooks are used internally, such as `change_config/1`; we'll describe those at some other point.
62 |
63 | The environment value for `vmq_plugin_hooks` is a list of hooks. A hook is specified by `{Module, Function, Arity, Options}`.
64 |
65 | To streamline the plugin development we provide a different Erlang behaviour for every hook a plugin implements. Those behaviours are part of the `vernemq_dev` library application, which you should add as a dependency to your plugin. `vernemq_dev` also comes with a header file that contains all the type definitions used by the hooks.
66 |
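To illustrate, a minimal sketch of a plugin module \(hypothetical module name\) implementing the purely informational `on_register` hook could look like this:

```erlang
-module(myplugin).
-behaviour(on_register_hook).

-export([on_register/3]).

%% Informational hook: called for every successfully authenticated client.
on_register(Peer, SubscriberId, UserName) ->
    error_logger:info_msg("client ~p (user ~p) connected from ~p~n",
                          [SubscriberId, UserName, Peer]),
    ok.
```
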
67 | ### Chaining
68 |
69 | It is possible to have multiple plugins serving the same hook. Depending on the hook, the plugin chain is used differently. The most elaborate chains can be found for the hooks that deal with authentication and authorization flows. We also call them _conditional chains_ as a plugin can give control away to the next plugin in the chain. The image shows a sample plugin chain for the `auth_on_register` hook.
70 |
71 | 
72 |
73 | Most hooks don't require conditions and are mainly used as event handlers. In this case all plugins in a chain are called. An example for such a hook would be the `on_register` hook.
74 |
75 | 
76 |
77 | A rather specific case is the need to call only one plugin instead of iterating through the whole chain. VerneMQ uses such hooks for its pluggable message storage system.
78 |
79 | 
80 |
81 | Unless you're implementing your custom message storage backend, you probably won't need this style of hook.
82 |
83 | {% hint style="info" %}
84 | The position in the plugin call chain is currently implicitly given by the order the plugins have been started.
85 | {% endhint %}
86 |
87 | ### Startup
88 |
89 | The plugin mechanism uses the application environment file to infer the applications that it has to load and start prior to starting the plugin itself. It internally uses the `application:ensure_all_started/1` function call to start the plugin. If your setup is more complex you could override this behaviour by implementing a custom `start/0` function inside a module that's named after your plugin.
90 |
91 | ### Teardown
92 |
93 | The plugin mechanism uses `application:stop/1` to stop and unload the plugin. This won't stop the dependent application started at startup. If you rely on third party applications that aren't started as part of the VerneMQ release, e.g. a database driver, you can implement a custom `stop/0` function inside a module that's named after your plugin and properly stop the driver there.
94 |
95 | ## Public Type Specs
96 |
97 | The `vmq_types.hrl` exposes all the type specs used by the hooks. The following types are used by the plugin system:
98 |
99 | ```erlang
100 | -type peer() :: {inet:ip_address(), inet:port_number()}.
101 | -type username() :: binary() | undefined.
102 | -type password() :: binary() | undefined.
103 | -type client_id() :: binary().
104 | -type mountpoint() :: string().
105 | -type subscriber_id() :: {mountpoint(), client_id()}.
106 | -type reg_view() :: atom().
107 | -type topic() :: [binary()].
108 | -type qos() :: 0 | 1 | 2.
109 | -type routing_key() :: [binary()].
110 | -type payload() :: binary().
111 | -type flag() :: boolean().
112 | ```
113 |
114 |
--------------------------------------------------------------------------------
/plugindevelopment/publishflow.md:
--------------------------------------------------------------------------------
1 | # Publish Flow
2 |
3 | In this section the publish flow is described. VerneMQ provides multiple hooks throughout the flow of a message. The most important ones are the `auth_on_publish` and `auth_on_publish_m5` hooks which act as an application level firewall granting or rejecting publish messages.
4 |
5 | 
6 |
7 | ## auth\_on\_publish and auth\_on\_publish\_m5
8 |
9 | The `auth_on_publish` and `auth_on_publish_m5` hooks allow your plugin to grant or reject publish requests sent by a client. They also make it possible to rewrite the publish topic, payload, QoS, or retain flag and, in the case of `auth_on_publish_m5`, the properties. The `auth_on_publish` hook is specified in the Erlang behaviour [auth\_on\_publish\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_publish_hook.erl) and the `auth_on_publish_m5` hook in the [auth\_on\_publish\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_publish_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
10 |
11 | Every plugin that implements the `auth_on_publish` or `auth_on_publish_m5` hook is part of a conditional plugin chain. For this reason we allow the hook to return different values. In case the plugin can't validate the publish message it is best to return `next` as this would allow subsequent plugins in the chain to validate the request. If no plugin is able to validate the request it gets automatically rejected.
12 |
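As an illustrative sketch \(hypothetical topic; the complete set of allowed return values is specified in the `auth_on_publish_hook` behaviour\), a plugin might reject publishes to a reserved topic and otherwise hand over to the next plugin in the chain:

```erlang
%% reject anything published under the reserved topic prefix
auth_on_publish(_UserName, _SubscriberId, _QoS, [<<"reserved">> | _], _Payload, _IsRetain) ->
    {error, not_authorized};
%% we can't decide here: let the next plugin in the chain validate the request
auth_on_publish(_UserName, _SubscriberId, _QoS, _Topic, _Payload, _IsRetain) ->
    next.
```
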
13 | ## on\_publish and on\_publish\_m5
14 |
15 | The `on_publish` and `on_publish_m5` hooks allow your plugin to get informed about an authorized publish message. The hook is specified in the Erlang behaviour [on\_publish\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_publish_hook.erl) and the `on_publish_m5` hook in the [on\_publish\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_publish_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
16 |
17 | ## on\_offline\_message
18 |
19 | The `on_offline_message` hook allows your plugin to get notified about a newly queued message for a client that is currently offline. The hook is specified in the Erlang behaviour [on\_offline\_message\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_offline_message_hook.erl) available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
20 |
21 | ## on\_deliver and on\_deliver\_m5
22 |
23 | The `on_deliver` and `on_deliver_m5` hooks allow your plugin to get informed about outgoing publish messages, but also allows you to rewrite topic and payload of the outgoing message. The hook is specified in the Erlang behaviour [on\_deliver\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_deliver_hook.erl) and the `on_deliver_m5` hook in the [on\_deliver\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_deliver_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
24 |
25 | Every plugin that implements the `on_deliver` or `on_deliver_m5` hook is part of a conditional plugin chain, although NO verdict is required in this case. The message gets delivered in any case. If your plugin uses this hook to rewrite the message the plugin system stops evaluating subsequent plugins in the chain.
26 |
27 |
--------------------------------------------------------------------------------
/plugindevelopment/sessionlifecycle.md:
--------------------------------------------------------------------------------
1 | # Session lifecycle
2 |
3 | VerneMQ provides multiple hooks throughout the lifetime of a session. The most important ones are the `auth_on_register` and `auth_on_register_m5` hooks which act as an application level firewall granting or rejecting new clients.
4 |
5 | 
6 |
7 | ## auth\_on\_register and auth\_on\_register\_m5
8 |
9 | The `auth_on_register` and `auth_on_register_m5` hooks allow your plugin to grant or reject new client connections. Moreover it lets you exert fine grained control over the configuration of the client session. The `auth_on_register` hook is specified in the Erlang behaviour [auth\_on\_register\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_register_hook.erl) and the `auth_on_register_m5` hook in the [auth\_on\_register\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_register_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
10 |
11 | Every plugin that implements the `auth_on_register` or `auth_on_register_m5` hook is part of a conditional plugin chain. For this reason we allow the hook to return different values depending on how the plugin grants or rejects this client. In case the plugin doesn't know the client it is best to return `next` as this would allow subsequent plugins in the chain to validate this client. If no plugin is able to validate the client it gets automatically rejected.
12 |
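As a sketch \(the exact return values are specified in the `auth_on_register_hook` behaviour\), a plugin could reject anonymous clients and defer all other decisions to the next plugin in the chain:

```erlang
%% no username supplied: reject the connection
auth_on_register(_Peer, _SubscriberId, undefined, _Password, _CleanSession) ->
    {error, invalid_credentials};
%% we can't decide here: let the next plugin in the chain try
auth_on_register(_Peer, _SubscriberId, _UserName, _Password, _CleanSession) ->
    next.
```
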
13 | ## on\_auth\_m5
14 |
15 | The `on_auth_m5` hook allows your plugin to implement MQTT enhanced authentication, see [Enhanced Authentication Flow](enhancedauthflow.md).
16 |
17 | ## on\_register and on\_register\_m5
18 |
19 | The `on_register` and `on_register_m5` hooks allow your plugin to get informed about a newly authenticated client. The hook is specified in the Erlang behaviour [on\_register\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_register_hook.erl) and the [on\_register\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_register_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
20 |
21 | ## on\_client\_wakeup
22 |
23 | Once a new client has been successfully authenticated and the above described hooks have been called, the client attaches to its queue. If it is a returning client using `clean_session=false` or if the client had previous sessions in the cluster, this process could take a while. \(As offline messages are migrated to a new node, existing sessions are disconnected\). The [on\_client\_wakeup](https://github.com/vernemq/vernemq_dev/blob/master/src/on_client_wakeup_hook.erl) hook is called at the point where a queue has been successfully instantiated, possible offline messages migrated, and potential duplicate sessions have been disconnected. In other words: when the client has reached a completely initialized, normal state for accepting messages. The hook is specified in the Erlang behaviour `on_client_wakeup_hook` available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
24 |
25 | ## on\_client\_offline
26 |
27 | This hook is called if an MQTT 3.1/3.1.1 client using `clean_session=false` or an MQTT 5.0 client with a non-zero `session_expiry_interval` closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour [on\_client\_offline\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_client_offline_hook.erl) available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
28 |
29 | ## on\_client\_gone
30 |
31 | This hook is called if an MQTT 3.1/3.1.1 client using `clean_session=true` or an MQTT 5.0 client with the `session_expiry_interval` set to zero closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour [on\_client\_gone\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_client_gone_hook.erl) available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
32 |
33 |
--------------------------------------------------------------------------------
/plugindevelopment/subscribeflow.md:
--------------------------------------------------------------------------------
1 | # Subscribe Flow
2 |
3 | In this section the subscription flow is described. VerneMQ provides several hooks to intercept the subscription flow. The most important ones are the `auth_on_subscribe` and `auth_on_subscribe_m5` hooks which act as an application level firewall granting or rejecting subscribe requests.
4 |
5 | 
6 |
7 | ## auth\_on\_subscribe and auth\_on\_subscribe\_m5
8 |
9 | The `auth_on_subscribe` and `auth_on_subscribe_m5` hooks allow your plugin to grant or reject subscribe requests sent by a client. They also make it possible to rewrite the subscribe topic and QoS. The `auth_on_subscribe` hook is specified in the Erlang behaviour [auth\_on\_subscribe\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_subscribe_hook.erl) and the `auth_on_subscribe_m5` hook in the [auth\_on\_subscribe\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/auth_on_subscribe_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
10 |
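As an illustrative sketch \(assumed callback shape per the `auth_on_subscribe_hook` behaviour, where `Topics` is a list of `{Topic, QoS}` pairs\), a plugin could cap every granted subscription at QoS 1:

```erlang
%% downgrade every requested subscription to at most QoS 1
auth_on_subscribe(_UserName, _SubscriberId, Topics) ->
    {ok, [{Topic, min(QoS, 1)} || {Topic, QoS} <- Topics]}.
```
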
11 | ## on\_subscribe and on\_subscribe\_m5
12 |
13 | The `on_subscribe` and `on_subscribe_m5` hooks allow your plugin to get informed about an authorized subscribe request. The `on_subscribe` hook is specified in the Erlang behaviour [on\_subscribe\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_subscribe_hook.erl) and the `on_subscribe_m5` hook in the [on\_subscribe\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_subscribe_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
14 |
15 | ## on\_unsubscribe and on\_unsubscribe\_m5
16 |
17 | The `on_unsubscribe` and `on_unsubscribe_m5` hooks allow your plugin to get informed about an unsubscribe request. They also allow you to rewrite the unsubscribe topic if required. The `on_unsubscribe` hook is specified in the Erlang behaviour [on\_unsubscribe\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_unsubscribe_hook.erl) and the `on_unsubscribe_m5` hook in the [on\_unsubscribe\_m5\_hook](https://github.com/vernemq/vernemq_dev/blob/master/src/on_unsubscribe_m5_hook.erl) behaviour available in the [vernemq\_dev](https://github.com/vernemq/vernemq_dev) repo.
18 |
19 |
--------------------------------------------------------------------------------