├── .gitignore ├── .gitpod.yml ├── LICENSE ├── README.md ├── dashboard └── grafana │ ├── README.md │ ├── env_variables │ └── home_dir │ ├── etc_grafana │ ├── grafana.ini │ └── provisioning │ │ ├── dashboards │ │ └── qdb-dashboard-provider.yml │ │ ├── datasources │ │ └── questdb.yml │ │ └── plugins │ │ └── questdb-questdb-datasource.yml │ └── var_lib_grafana │ └── dashboards │ └── DeviceData-QuestDBDemo.json ├── demo_queries.md ├── docker-compose.yml ├── energy_2018.csv ├── ingestion ├── go │ ├── README.md │ └── tsbs_send │ │ ├── Dockerfile │ │ ├── go.mod │ │ └── src │ │ └── main_orig.go ├── java │ ├── README.md │ └── tsbs_send │ │ ├── pom.xml │ │ └── src │ │ └── main │ │ └── java │ │ └── io │ │ └── questdb │ │ └── samples │ │ └── ilp_ingestion │ │ ├── IlpCryptoSender.java │ │ └── IlpSender.java └── python │ ├── README.md │ └── tsbs_send │ ├── Dockerfile │ ├── app_monitoring_ingestion.py │ ├── ilp_http_ingestion.py │ ├── ilp_ingestion.py │ ├── requirements.txt │ ├── ticker_ingestion.py │ └── ticker_names.txt ├── loading_and_querying_data.md └── trips.csv /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled class file 2 | *.class 3 | 4 | # Log file 5 | *.log 6 | 7 | # BlueJ files 8 | *.ctxt 9 | 10 | # Mobile Tools for Java (J2ME) 11 | .mtj.tmp/ 12 | 13 | # Package Files # 14 | target/ 15 | *.jar 16 | *.war 17 | *.nar 18 | *.ear 19 | *.zip 20 | *.tar.gz 21 | *.rar 22 | 23 | # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml 24 | hs_err_pid* 25 | 26 | 27 | __debug_bin 28 | go.sum 29 | 30 | .DS_Store 31 | .idea 32 | ilp_ingestion.iml 33 | .vscode/ 34 | 35 | dashboard/grafana/home_dir/var_lib_grafana/* 36 | !dashboard/grafana/home_dir/var_lib_grafana/dashboards/ 37 | -------------------------------------------------------------------------------- /.gitpod.yml: -------------------------------------------------------------------------------- 1 | tasks: 2 | - name: open_dashboard 3 | command: | 4 | gp ports await 3000 && sleep 5 && gp preview --external $(gp url 3000)/d/qdb-ilp-demo/device-data-questdb-demo?orgId=1&refresh=5s 5 | - name: open_questdb_console 6 | command: | 7 | gp ports await 9000 && sleep 2 && gp preview --external $(gp url 9000) 8 | - name: start_demo 9 | command: | 10 | DOCKER_COMPOSE_USER_ID=`id -u` docker-compose up 11 | 12 | 13 | ports: 14 | - name: QuestDB Web Console 15 | port: 9000 16 | visibility: private 17 | onOpen: ignore 18 | - name: QuestDB PostgreSQL Wire protocol 19 | port: 8812 20 | visibility: private 21 | onOpen: ignore 22 | - name: QuestDB ILP ingestion protocol 23 | port: 9009 24 | visibility: private 25 | onOpen: ignore 26 | - name: QuestDB metrics and healthcheck server 27 | port: 9003 28 | visibility: private 29 | onOpen: ignore 30 | - name: Grafana 31 | port: 3000 32 | visibility: private 33 | onOpen: ignore 34 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # questdb-quickstart 2 | 3 | QuestDB is an Apache 2.0 licensed time-series database for high throughput ingestion and fast SQL queries with operational simplicity. To learn more about QuestDB visit https://questdb.io 4 | 5 | This is a quickstart covering: 6 | 7 | * Getting started with QuestDB on docker 8 | * Loading data using a CSV 9 | * Using the web console for interactive queries 10 | * Ingesting real-time data using the official clients in Go, Java, or Python 11 | * Real-time dashboards with Grafana and QuestDB using the PostgreSQL connector 12 | 13 | For a video walkthrough, please visit: 14 | [![For a video walkthrough, please visit](https://img.youtube.com/vi/r8zE1JNuqyA/maxresdefault.jpg)](https://youtu.be/r8zE1JNuqyA) 15 | 16 | # Deploying the demo 17 | 18 | This quickstart requires starting or deploying QuestDB, Grafana, and some ingestion scripts. You have three ways of setting this up: 19 | 20 | * Fully managed cloud-based installation (requires creating a free account on gitpod) using the Gitpod link in the next section. Recommended for quick low-friction demo 21 | * Local installation using docker-compose. Recommended for quick low-friction demo, as long as you have docker/docker-compose installed locally and are comfortable using them 22 | * Local installation using docker but doing step-by-step. Recommended to learn more about the details and how everything fits together 23 | 24 | 25 | After you install with your preferred method (instructions below) you can proceed to [loading and querying data](./loading_and_querying_data.md) 26 | 27 | ## Fully managed deployment using Gitpod 28 | 29 | When you click the button below, gitpod will provision an environment with questdb, a python script generating demo data, 30 | and a grafana dashboard for visualisation. On finishing (typically about one minute), gitpod will try to open two new 31 | tabs, one with the grafana dashboard, one with the QuestDB web interface. When opening the grafana dashboard, 32 | user is "demo" and password is "quest". 33 | 34 | Click the button below to start a new development environment: 35 | 36 | [![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/questdb/questdb-quickstart) 37 | 38 | Note: If you already have a gitpod account, you will need to log in when launching the deployment. If you don't have a gitpod 39 | account, you can create one for free when launching the deployment. If your browser is blocking pop-ups, you will 40 | need to click on the alert icon on the navigation bar to open the two links after the deployment is complete. 41 | 42 | If you want to explore loading batch data using the web interface or the REST API, please visit [loading and querying data](./loading_and_querying_data.md). 43 | Note you will need to use the endpoint provided by gitpod, rather than the default http://localhost:9000. 
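
If you would rather script those batch loads and queries than click through the web console, QuestDB also exposes them over HTTP. The sketch below is only an illustration: it assumes the Python `requests` package is installed (it is not part of this repository's requirements) and uses the `trips.csv` file from the repository root. When running on Gitpod, replace `http://localhost:9000` with the endpoint Gitpod assigns to port 9000.

```python
# Minimal sketch of QuestDB's HTTP API from Python.
# Assumes the `requests` package is installed; adjust QUESTDB_URL to the
# endpoint provided by Gitpod if you are not running locally.
import requests

QUESTDB_URL = "http://localhost:9000"

# Run a SQL query through the /exec endpoint; the response is JSON with
# "columns", "dataset" and "count" fields.
resp = requests.get(
    f"{QUESTDB_URL}/exec",
    params={"query": "SELECT count() FROM ilp_test"},
)
resp.raise_for_status()
print(resp.json()["dataset"])

# Upload a CSV through the /imp endpoint; QuestDB creates the table
# (named after the file by default) if it does not exist yet.
with open("trips.csv", "rb") as csv_file:
    resp = requests.post(f"{QUESTDB_URL}/imp", files={"data": csv_file})
print(resp.text)
```

The `/exec` response is plain JSON, so it is easy to wire into scripts or notebooks.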
44 |
45 | ## Docker-compose local deployment
46 |
47 | Docker compose will provision an environment with questdb, a python script generating demo data,
48 | and a grafana dashboard for visualisation. The whole process will take about 2-3 minutes, depending on your internet speed
49 | for downloading the container images, and on how quickly your machine can build a python-based docker image.
50 |
51 | ```
52 | git clone https://github.com/questdb/questdb-quickstart.git
53 | cd questdb-quickstart
54 | docker-compose up
55 | ```
56 |
57 | The grafana web interface will be available at http://localhost:3000/d/qdb-ilp-demo/device-data-questdb-demo?orgId=1&refresh=5s.
58 | The user is "demo" and the password is "quest".
59 |
60 | The QuestDB console is available at http://localhost:9000
61 |
62 | If you want to explore loading batch data using the web interface or the REST API, please visit [loading and querying data](./loading_and_querying_data.md).
63 |
64 | Stop the demo via:
65 |
66 | ```
67 | docker-compose down
68 | ```
69 |
70 | ## Local docker based deployment
71 |
72 | The local deployment has four steps: starting QuestDB, loading batch data, ingesting real-time data, and creating dashboards with Grafana.
73 |
74 | ### Starting QuestDB
75 |
76 | There are [many ways to install QuestDB](https://questdb.io/docs/get-started/docker/), but I am choosing docker for portability. Note I won't be using a docker volume, so the data will be ephemeral. Check the QuestDB docs to see how to start with a persistent directory.
77 |
78 | ```docker run --add-host=host.docker.internal:host-gateway -p 9000:9000 -p 9009:9009 -p 8812:8812 -p 9003:9003 questdb/questdb:latest```
79 |
80 | Port 9000 serves the web console, port 9009 is for streaming ingestion, port 8812 is for PostgreSQL-protocol reads or writes, and port 9003 is a monitoring/metrics endpoint.
81 |
82 | ### Importing batch data
83 |
84 | If you want to explore loading batch data using the web interface or the REST API, please visit [loading and querying data](./loading_and_querying_data.md).
85 | If you are only interested in streaming data you can skip this step.
86 |
87 | ### Ingesting real-time data using the official clients in Go, Java, or Python
88 |
89 | We will generate simulated IoT data and use the QuestDB client libraries to ingest it in real time into QuestDB.
90 | Depending on your language of choice, follow the instructions at
91 | * https://github.com/javier/questdb-quickstart/tree/main/ingestion/go
92 | * https://github.com/javier/questdb-quickstart/tree/main/ingestion/java
93 | * https://github.com/javier/questdb-quickstart/tree/main/ingestion/python
94 |
95 | ### Real-time dashboards with Grafana and QuestDB using the PostgreSQL connector
96 |
97 | Please follow the instructions at https://github.com/javier/questdb-quickstart/tree/main/dashboard/grafana
98 |
99 |
100 |
101 |
--------------------------------------------------------------------------------
/dashboard/grafana/README.md:
--------------------------------------------------------------------------------
1 | # Building a Grafana dashboard
2 |
3 | We will use Grafana, a popular open source tool for dashboards, to connect to your QuestDB instance and display near real-time dashboards. We will use the QuestDB data source plugin for Grafana, which talks to QuestDB over the PostgreSQL wire protocol.
4 |
5 | ## Starting grafana via docker
6 |
7 | Even if you already have a working grafana installation, it is recommended to start a new one via docker, as the process
8 | will provision sample connections and dashboards. If you prefer to provision a data source and a dashboard by hand,
9 | skip this section and proceed to "Manual Provisioning".
10 |
11 | Make sure you are in the `dashboard/grafana` directory before starting the grafana container. On starting, the contents
12 | of the `./home_dir` directory will be used for provisioning sample dashboards, so it is important you start from the right
13 | relative directory.
14 |
15 | _Note: The following command uses the `env_variables` file to configure your grafana. It uses the defaults for a brand-new
16 | local installation of QuestDB. If you are not running QuestDB on the local host, you need to edit that file and change
17 | host/port/user/password as needed. If you are using QuestDB Cloud, make sure you change the value of the
18 | `QDB_SSL_MODE` variable in that file from `disable` to `require`._
19 |
20 | ```shell
21 | docker run -d -p 3000:3000 --name=grafana-quickstart --user "$(id -u)" --volume "$PWD/home_dir/var_lib_grafana:/var/lib/grafana" --volume "$PWD/home_dir/etc_grafana:/etc/grafana/" --env-file ./env_variables grafana/grafana-oss
22 | ```
23 |
24 | You can now navigate to [http://localhost:3000/d/qdb-ilp-demo/device-data-questdb-demo?orgId=1&refresh=5s](http://localhost:3000/d/qdb-ilp-demo/device-data-questdb-demo?orgId=1&refresh=5s)
25 | using the user `demo` and password `quest` to see a live grafana dashboard. Assuming you are running the IoT example
26 | provided for Python, Go, or Java, you should see the charts being updated every 5 seconds.
27 |
28 |
29 | Feel free to explore Grafana. Both this sample dashboard and a datasource named `qdb` are automatically created.
30 |
31 | When you want to stop your docker container you can run:
32 |
33 | `docker stop grafana-quickstart`
34 |
35 | And if you want to remove the docker container (and optionally the image) from your drive, execute:
36 |
37 | ```shell
38 | docker rm grafana-quickstart
39 | docker rmi grafana/grafana-oss
40 | ```
41 |
42 | The rest of this README will explain how to manually provision your Grafana.
43 |
44 | ## Manual provisioning
45 |
46 | ### Creating the connection to QuestDB
47 |
48 | In your grafana, find the data sources icon on the left menu and add a new one. Choose the QuestDB type. Note that if you didn't
49 | start via Docker, you might have to first search for the QuestDB type and install it from the "Add New Connection" option in the UI.
50 |
51 | You can enter any name you want, for example `qdb`. For the host you need to enter:
52 | * when running grafana locally (no docker) on the same machine where QuestDB is running: `localhost:8812`
53 | * when running grafana via docker, on the same machine where QuestDB is running: `host.docker.internal:8812`
54 | * when running QuestDB remotely, make sure you enter host and port accordingly.
55 |
56 | For the user choose `admin`, password `quest`. Those are the default values for a fresh QuestDB installation.
57 |
58 | Out of the box, QuestDB Open Source does not support TLS/SSL, so we need to select `disable` for TLS/SSL Mode, unless you are using
59 | QuestDB Enterprise, in which case you need to select `require`.
60 |
61 | Scroll down to the bottom of the screen and click on `Save & Test`. You should see the connection is working.
62 |
63 | ### Importing the dashboard
64 |
65 | From here on, you could create your own dashboards and charts by following [this post](https://questdb.io/blog/time-series-monitoring-dashboard-grafana-questdb/).
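
If you plan to build your own panels, it can help to prototype the query outside Grafana first, against the same connection details the data source uses. The sketch below is only illustrative: it assumes the Python `psycopg` (v3) package is installed and QuestDB is running locally with the defaults above, and it mirrors the kind of `SAMPLE BY` aggregation the sample dashboard panels run against the `ilp_test` table.

```python
# Quick sanity check of the same connection Grafana's data source uses:
# PostgreSQL wire protocol on port 8812, user admin, password quest.
# Assumes the `psycopg` (v3) package is installed; illustrative sketch only.
import psycopg

CONN_INFO = "host=localhost port=8812 user=admin password=quest dbname=qdb"

with psycopg.connect(CONN_INFO, autocommit=True) as conn:
    with conn.cursor() as cur:
        # Same style of SAMPLE BY aggregation the dashboard panels run.
        cur.execute(
            "SELECT timestamp, device_type, avg(measure1), avg(measure2) "
            "FROM ilp_test SAMPLE BY 30s"
        )
        for row in cur.fetchmany(5):
            print(row)
```
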
66 | 67 | However, if you prefer it, you can import the sample dashboard located at `/dashboard/grafana/home_dir/data/dashboards/DeviceData-QuestDBDemo.json` 68 | in this repository. To import it, find the `dashboard>import` option on the left menu of your local grafana, and then 69 | select the json file. The dashboard will try to use a connection named `qdb`. 70 | 71 | -------------------------------------------------------------------------------- /dashboard/grafana/env_variables: -------------------------------------------------------------------------------- 1 | QDB_CLIENT_HOST=host.docker.internal 2 | QDB_CLIENT_PORT=8812 3 | QDB_CLIENT_USER=admin 4 | QDB_CLIENT_PASSWORD=quest 5 | 6 | # use the value "disable" for local installations, and "require" for QuestDB Cloud 7 | QDB_SSL_MODE=disable 8 | 9 | -------------------------------------------------------------------------------- /dashboard/grafana/home_dir/etc_grafana/grafana.ini: -------------------------------------------------------------------------------- 1 | [security] 2 | admin_user=demo 3 | admin_password=quest 4 | 5 | -------------------------------------------------------------------------------- /dashboard/grafana/home_dir/etc_grafana/provisioning/dashboards/qdb-dashboard-provider.yml: -------------------------------------------------------------------------------- 1 | apiVersion: 1 2 | 3 | providers: 4 | - name: 'questdb sample dashboards' 5 | type: file 6 | disableDeletion: false 7 | updateIntervalSeconds: 1000 8 | options: 9 | path: /var/lib/grafana/dashboards 10 | 11 | -------------------------------------------------------------------------------- /dashboard/grafana/home_dir/etc_grafana/provisioning/datasources/questdb.yml: -------------------------------------------------------------------------------- 1 | apiVersion: 1 2 | 3 | # list of datasources that should be deleted from the database 4 | deleteDatasources: 5 | - name: qdb 6 | orgId: 1 7 | 8 | datasources: 9 | - name: qdb 10 | type: questdb-questdb-datasource 11 | isDefault: true 12 | secureJsonData: 13 | password: ${QDB_CLIENT_PASSWORD} 14 | jsonData: 15 | server: ${QDB_CLIENT_HOST} 16 | port: ${QDB_CLIENT_PORT} 17 | username: ${QDB_CLIENT_USER} 18 | tlsMode: ${QDB_SSL_MODE} # disable/require/verify-ca/verify-full 19 | # timeout: 20 | # queryTimeout: 21 | maxOpenConnections: 100 22 | maxIdleConnections: 100 23 | maxConnectionLifetime: 14400 24 | -------------------------------------------------------------------------------- /dashboard/grafana/home_dir/etc_grafana/provisioning/plugins/questdb-questdb-datasource.yml: -------------------------------------------------------------------------------- 1 | apiVersion: 1 2 | 3 | apps: 4 | # the type of app, plugin identifier. 
Required 5 | - type: questdb-questdb-datasource 6 | disabled: false 7 | -------------------------------------------------------------------------------- /dashboard/grafana/home_dir/var_lib_grafana/dashboards/DeviceData-QuestDBDemo.json: -------------------------------------------------------------------------------- 1 | { 2 | "annotations": { 3 | "list": [ 4 | { 5 | "builtIn": 1, 6 | "datasource": { 7 | "type": "grafana", 8 | "uid": "-- Grafana --" 9 | }, 10 | "enable": true, 11 | "hide": true, 12 | "iconColor": "rgba(0, 211, 255, 1)", 13 | "name": "Annotations & Alerts", 14 | "target": { 15 | "limit": 100, 16 | "matchAny": false, 17 | "tags": [], 18 | "type": "dashboard" 19 | }, 20 | "type": "dashboard" 21 | } 22 | ] 23 | }, 24 | "description": "Sending synthetic data from sensors", 25 | "editable": true, 26 | "fiscalYearStartMonth": 0, 27 | "graphTooltip": 0, 28 | "id": 1, 29 | "links": [], 30 | "liveNow": false, 31 | "panels": [ 32 | { 33 | "collapsed": false, 34 | "gridPos": { 35 | "h": 1, 36 | "w": 24, 37 | "x": 0, 38 | "y": 0 39 | }, 40 | "id": 11, 41 | "panels": [], 42 | "title": "Measure1 and Measure2 samples", 43 | "type": "row" 44 | }, 45 | { 46 | "datasource": "qdb", 47 | "fieldConfig": { 48 | "defaults": { 49 | "color": { 50 | "mode": "palette-classic" 51 | }, 52 | "custom": { 53 | "axisLabel": "", 54 | "axisPlacement": "auto", 55 | "barAlignment": 0, 56 | "drawStyle": "line", 57 | "fillOpacity": 0, 58 | "gradientMode": "none", 59 | "hideFrom": { 60 | "legend": false, 61 | "tooltip": false, 62 | "viz": false 63 | }, 64 | "lineInterpolation": "smooth", 65 | "lineStyle": { 66 | "fill": "solid" 67 | }, 68 | "lineWidth": 1, 69 | "pointSize": 4, 70 | "scaleDistribution": { 71 | "type": "linear" 72 | }, 73 | "showPoints": "always", 74 | "spanNulls": true, 75 | "stacking": { 76 | "group": "A", 77 | "mode": "none" 78 | }, 79 | "thresholdsStyle": { 80 | "mode": "off" 81 | } 82 | }, 83 | "mappings": [], 84 | "thresholds": { 85 | "mode": "absolute", 86 | "steps": [ 87 | { 88 | "color": "green", 89 | "value": null 90 | }, 91 | { 92 | "color": "red", 93 | "value": 80 94 | } 95 | ] 96 | } 97 | }, 98 | "overrides": [] 99 | }, 100 | "gridPos": { 101 | "h": 10, 102 | "w": 9, 103 | "x": 0, 104 | "y": 1 105 | }, 106 | "id": 4, 107 | "options": { 108 | "legend": { 109 | "calcs": [], 110 | "displayMode": "list", 111 | "placement": "bottom" 112 | }, 113 | "tooltip": { 114 | "mode": "single", 115 | "sort": "none" 116 | } 117 | }, 118 | "targets": [ 119 | { 120 | "datasource": "qdb", 121 | "format": 1, 122 | "group": [], 123 | "metricColumn": "measure1", 124 | "rawQuery": true, 125 | "rawSql": "SELECT\n timestamp AS \"time\", device_type,\n avg(measure1) AS metric,\n avg(measure2) as m2\nFROM ilp_test\nWHERE\n $__timeFilter(timestamp)\nsample by 30s \nORDER BY 1,2", 126 | "refId": "A", 127 | "select": [ 128 | [ 129 | { 130 | "params": [ 131 | "duration_ms" 132 | ], 133 | "type": "column" 134 | }, 135 | { 136 | "params": [ 137 | "avg" 138 | ], 139 | "type": "aggregate" 140 | }, 141 | { 142 | "params": [ 143 | "duration_ms" 144 | ], 145 | "type": "alias" 146 | } 147 | ] 148 | ], 149 | "table": "ilp_test", 150 | "timeColumn": "timestamp", 151 | "where": [ 152 | { 153 | "name": "$__timeFilter", 154 | "params": [], 155 | "type": "macro" 156 | } 157 | ] 158 | } 159 | ], 160 | "title": "Measures sampled by 30 seconds", 161 | "type": "timeseries" 162 | }, 163 | { 164 | "datasource": "qdb", 165 | "fieldConfig": { 166 | "defaults": { 167 | "color": { 168 | "mode": "palette-classic" 169 | }, 170 | "custom": { 
171 | "axisLabel": "", 172 | "axisPlacement": "auto", 173 | "barAlignment": 0, 174 | "drawStyle": "points", 175 | "fillOpacity": 0, 176 | "gradientMode": "none", 177 | "hideFrom": { 178 | "legend": false, 179 | "tooltip": false, 180 | "viz": false 181 | }, 182 | "lineInterpolation": "smooth", 183 | "lineStyle": { 184 | "fill": "solid" 185 | }, 186 | "lineWidth": 1, 187 | "pointSize": 4, 188 | "scaleDistribution": { 189 | "type": "linear" 190 | }, 191 | "showPoints": "always", 192 | "spanNulls": true, 193 | "stacking": { 194 | "group": "A", 195 | "mode": "normal" 196 | }, 197 | "thresholdsStyle": { 198 | "mode": "off" 199 | } 200 | }, 201 | "mappings": [], 202 | "thresholds": { 203 | "mode": "absolute", 204 | "steps": [ 205 | { 206 | "color": "green", 207 | "value": null 208 | }, 209 | { 210 | "color": "red", 211 | "value": 80 212 | } 213 | ] 214 | } 215 | }, 216 | "overrides": [] 217 | }, 218 | "gridPos": { 219 | "h": 10, 220 | "w": 9, 221 | "x": 9, 222 | "y": 1 223 | }, 224 | "id": 7, 225 | "options": { 226 | "legend": { 227 | "calcs": [], 228 | "displayMode": "list", 229 | "placement": "bottom" 230 | }, 231 | "tooltip": { 232 | "mode": "single", 233 | "sort": "none" 234 | } 235 | }, 236 | "targets": [ 237 | { 238 | "datasource": "qdb", 239 | "format": 1, 240 | "group": [], 241 | "metricColumn": "measure1", 242 | "rawQuery": true, 243 | "rawSql": "SELECT\n timestamp AS \"time\", device_type,\n avg(measure1) AS metric,\n avg(measure2) as m2\nFROM ilp_test\nWHERE\n $__timeFilter(timestamp)\nsample by 1m \nORDER BY 1,2", 244 | "refId": "A", 245 | "select": [ 246 | [ 247 | { 248 | "params": [ 249 | "duration_ms" 250 | ], 251 | "type": "column" 252 | }, 253 | { 254 | "params": [ 255 | "avg" 256 | ], 257 | "type": "aggregate" 258 | }, 259 | { 260 | "params": [ 261 | "duration_ms" 262 | ], 263 | "type": "alias" 264 | } 265 | ] 266 | ], 267 | "table": "ilp_test", 268 | "timeColumn": "timestamp", 269 | "where": [ 270 | { 271 | "name": "$__timeFilter", 272 | "params": [], 273 | "type": "macro" 274 | } 275 | ] 276 | } 277 | ], 278 | "title": "Measures sampled by minute, stacked", 279 | "type": "timeseries" 280 | }, 281 | { 282 | "collapsed": false, 283 | "gridPos": { 284 | "h": 1, 285 | "w": 24, 286 | "x": 0, 287 | "y": 11 288 | }, 289 | "id": 9, 290 | "panels": [], 291 | "title": "Speed and location", 292 | "type": "row" 293 | }, 294 | { 295 | "datasource": "qdb", 296 | "fieldConfig": { 297 | "defaults": { 298 | "color": { 299 | "mode": "thresholds" 300 | }, 301 | "custom": { 302 | "align": "auto", 303 | "displayMode": "auto", 304 | "inspect": false 305 | }, 306 | "mappings": [], 307 | "thresholds": { 308 | "mode": "absolute", 309 | "steps": [ 310 | { 311 | "color": "green", 312 | "value": null 313 | }, 314 | { 315 | "color": "red", 316 | "value": 80 317 | } 318 | ] 319 | } 320 | }, 321 | "overrides": [] 322 | }, 323 | "gridPos": { 324 | "h": 7, 325 | "w": 3, 326 | "x": 0, 327 | "y": 12 328 | }, 329 | "id": 6, 330 | "options": { 331 | "footer": { 332 | "fields": "", 333 | "reducer": [ 334 | "sum" 335 | ], 336 | "show": false 337 | }, 338 | "showHeader": true 339 | }, 340 | "pluginVersion": "9.0.0", 341 | "targets": [ 342 | { 343 | "datasource": "qdb", 344 | "format": 1, 345 | "group": [], 346 | "metricColumn": "none", 347 | "rawQuery": true, 348 | "rawSql": "SELECT\n device_type,\n avg(speed) as avg_speed\nFROM ilp_test\nWHERE\n $__timeFilter(timestamp)\nORDER BY 1", 349 | "refId": "A", 350 | "select": [ 351 | [ 352 | { 353 | "params": [ 354 | "value" 355 | ], 356 | "type": "column" 357 | } 358 | ] 
359 | ], 360 | "table": "ilp_test", 361 | "timeColumn": "time", 362 | "where": [ 363 | { 364 | "name": "$__timeFilter", 365 | "params": [], 366 | "type": "macro" 367 | } 368 | ] 369 | } 370 | ], 371 | "title": "Average Speeds", 372 | "type": "table" 373 | }, 374 | { 375 | "datasource": "qdb", 376 | "fieldConfig": { 377 | "defaults": { 378 | "color": { 379 | "mode": "thresholds" 380 | }, 381 | "mappings": [], 382 | "thresholds": { 383 | "mode": "absolute", 384 | "steps": [ 385 | { 386 | "color": "green", 387 | "value": null 388 | }, 389 | { 390 | "color": "red", 391 | "value": 80 392 | } 393 | ] 394 | } 395 | }, 396 | "overrides": [] 397 | }, 398 | "gridPos": { 399 | "h": 7, 400 | "w": 4, 401 | "x": 3, 402 | "y": 12 403 | }, 404 | "id": 14, 405 | "options": { 406 | "colorMode": "value", 407 | "graphMode": "area", 408 | "justifyMode": "auto", 409 | "orientation": "auto", 410 | "reduceOptions": { 411 | "calcs": [ 412 | "lastNotNull" 413 | ], 414 | "fields": "", 415 | "values": false 416 | }, 417 | "textMode": "auto" 418 | }, 419 | "pluginVersion": "9.0.0", 420 | "targets": [ 421 | { 422 | "datasource": "qdb", 423 | "format": 1, 424 | "group": [], 425 | "metricColumn": "none", 426 | "rawQuery": true, 427 | "rawSql": "WITH in_interval AS (\nSELECT count(*) as total_in_interval\nFROM ilp_test\nWHERE\n $__timeFilter(timestamp)\n ), absolute_total AS (\n SELECT count(*) as total_seen\nFROM ilp_test\n )\n select * from in_interval cross join absolute_total\n \n", 428 | "refId": "A", 429 | "select": [ 430 | [ 431 | { 432 | "params": [ 433 | "value" 434 | ], 435 | "type": "column" 436 | } 437 | ] 438 | ], 439 | "timeColumn": "time", 440 | "where": [ 441 | { 442 | "name": "$__timeFilter", 443 | "params": [], 444 | "type": "macro" 445 | } 446 | ] 447 | } 448 | ], 449 | "title": "Number of data points received", 450 | "type": "stat" 451 | }, 452 | { 453 | "datasource": {}, 454 | "fieldConfig": { 455 | "defaults": { 456 | "color": { 457 | "mode": "fixed" 458 | }, 459 | "custom": { 460 | "hideFrom": { 461 | "legend": false, 462 | "tooltip": false, 463 | "viz": false 464 | } 465 | }, 466 | "mappings": [ 467 | { 468 | "options": { 469 | "blue": { 470 | "color": "dark-blue", 471 | "index": 0, 472 | "text": "blue" 473 | }, 474 | "green": { 475 | "color": "dark-green", 476 | "index": 1, 477 | "text": "green" 478 | }, 479 | "red": { 480 | "color": "dark-red", 481 | "index": 2, 482 | "text": "red" 483 | }, 484 | "yellow": { 485 | "color": "dark-yellow", 486 | "index": 3, 487 | "text": "yellow" 488 | } 489 | }, 490 | "type": "value" 491 | } 492 | ], 493 | "max": -3, 494 | "thresholds": { 495 | "mode": "absolute", 496 | "steps": [ 497 | { 498 | "color": "green", 499 | "value": null 500 | } 501 | ] 502 | }, 503 | "unit": "velocityms" 504 | }, 505 | "overrides": [] 506 | }, 507 | "gridPos": { 508 | "h": 15, 509 | "w": 11, 510 | "x": 7, 511 | "y": 12 512 | }, 513 | "id": 2, 514 | "options": { 515 | "basemap": { 516 | "config": {}, 517 | "name": "Layer 0", 518 | "type": "osm-standard" 519 | }, 520 | "controls": { 521 | "mouseWheelZoom": true, 522 | "showAttribution": true, 523 | "showDebug": false, 524 | "showScale": false, 525 | "showZoom": true 526 | }, 527 | "layers": [ 528 | { 529 | "config": { 530 | "showLegend": false, 531 | "style": { 532 | "color": { 533 | "field": "device_type", 534 | "fixed": "dark-green" 535 | }, 536 | "opacity": 1, 537 | "rotation": { 538 | "field": "avg_m1", 539 | "fixed": 0, 540 | "max": 360, 541 | "min": -360, 542 | "mode": "mod" 543 | }, 544 | "size": { 545 | "fixed": 30, 546 | "max": 
15, 547 | "min": 2 548 | }, 549 | "symbol": { 550 | "fixed": "img/icons/marker/plane.svg", 551 | "mode": "fixed" 552 | }, 553 | "text": { 554 | "field": "device_type", 555 | "fixed": "", 556 | "mode": "field" 557 | }, 558 | "textConfig": { 559 | "fontSize": 20, 560 | "offsetX": 20, 561 | "offsetY": 0, 562 | "textAlign": "left", 563 | "textBaseline": "middle" 564 | } 565 | } 566 | }, 567 | "location": { 568 | "mode": "auto" 569 | }, 570 | "name": "Layer 2", 571 | "tooltip": true, 572 | "type": "markers" 573 | } 574 | ], 575 | "view": { 576 | "id": "coords", 577 | "lat": 43.375762, 578 | "lon": 9.267784, 579 | "zoom": 1.9 580 | } 581 | }, 582 | "pluginVersion": "9.0.0", 583 | "targets": [ 584 | { 585 | "datasource": "qdb", 586 | "format": 1, 587 | "group": [], 588 | "metricColumn": "none", 589 | "rawQuery": true, 590 | "rawSql": "WITH average_vals AS (\n select device_type, avg(duration_ms) as avg_duration_ms, avg(speed) as avg_speed, avg(measure1) as avg_m1, avg(measure2) as avg_m2 from ilp_test \n WHERE\n $__timeFilter(timestamp)\n), latest_seen AS (\nselect timestamp, device_type, lat, lon from 'ilp_test' latest on timestamp partition by device_type)\nSelect timestamp as `time`, latest_seen.device_type as device_type, lat, lon, avg_duration_ms, avg_speed, avg_m1, avg_m2\nfrom latest_seen JOIN average_vals ON (device_type)\n\n", 591 | "refId": "A", 592 | "select": [ 593 | [ 594 | { 595 | "params": [ 596 | "value" 597 | ], 598 | "type": "column" 599 | } 600 | ] 601 | ], 602 | "timeColumn": "time", 603 | "where": [ 604 | { 605 | "name": "$__timeFilter", 606 | "params": [], 607 | "type": "macro" 608 | } 609 | ] 610 | } 611 | ], 612 | "title": "Last seen position by device type", 613 | "type": "geomap" 614 | }, 615 | { 616 | "datasource": { 617 | "type": "postgres", 618 | "uid": "P0F15568B0DD880D0" 619 | }, 620 | "fieldConfig": { 621 | "defaults": { 622 | "color": { 623 | "mode": "palette-classic" 624 | }, 625 | "mappings": [], 626 | "thresholds": { 627 | "mode": "percentage", 628 | "steps": [ 629 | { 630 | "color": "green", 631 | "value": null 632 | }, 633 | { 634 | "color": "red", 635 | "value": 80 636 | } 637 | ] 638 | } 639 | }, 640 | "overrides": [] 641 | }, 642 | "gridPos": { 643 | "h": 8, 644 | "w": 7, 645 | "x": 0, 646 | "y": 19 647 | }, 648 | "id": 12, 649 | "options": { 650 | "orientation": "auto", 651 | "reduceOptions": { 652 | "calcs": [ 653 | "mean" 654 | ], 655 | "fields": "", 656 | "values": false 657 | }, 658 | "showThresholdLabels": false, 659 | "showThresholdMarkers": true 660 | }, 661 | "pluginVersion": "9.0.0", 662 | "targets": [ 663 | { 664 | "datasource": "qdb", 665 | "format": 1, 666 | "group": [], 667 | "metricColumn": "none", 668 | "rawQuery": true, 669 | "rawSql": " select device_type, avg(duration_ms) as avg_duration_ms, avg(speed) as avg_speed, avg(measure1) as avg_m1, avg(measure2) as avg_m2 from ilp_test \n WHERE\n $__timeFilter(timestamp)\n\n", 670 | "refId": "A", 671 | "select": [ 672 | [ 673 | { 674 | "params": [ 675 | "value" 676 | ], 677 | "type": "column" 678 | } 679 | ] 680 | ], 681 | "timeColumn": "time", 682 | "where": [ 683 | { 684 | "name": "$__timeFilter", 685 | "params": [], 686 | "type": "macro" 687 | } 688 | ] 689 | } 690 | ], 691 | "title": "Mean values", 692 | "type": "gauge" 693 | } 694 | ], 695 | "refresh": "5s", 696 | "schemaVersion": 36, 697 | "style": "dark", 698 | "tags": [], 699 | "templating": { 700 | "list": [] 701 | }, 702 | "time": { 703 | "from": "now-5m", 704 | "to": "now" 705 | }, 706 | "timepicker": {}, 707 | "timezone": "", 708 | 
"title": "Device Data - QuestDB Demo", 709 | "uid": "qdb-ilp-demo", 710 | "version": 2, 711 | "weekStart": "" 712 | } 713 | -------------------------------------------------------------------------------- /demo_queries.md: -------------------------------------------------------------------------------- 1 | # Running the demo Queries 2 | 3 | If you want to run some interesting queries on top of pre-existing demo datasets, you can head to [QuestDB live demo](https://demo.questdb.io/) and just click on the top where it says 'Example Queries'. The `trips` dataset at that live demo has over 1.6 billion rows. All the datasets at the demo site are static, except for the `trades` table, which pulls crypto prices from Coinbase's API every second or so. 4 | 5 | 6 | ## Performance 7 | 8 | We start with number of records 9 | 10 | ``` 11 | select count() from trips: 12 | ``` 13 | 14 | Ask how long they think it’d take and run the next query 15 | 16 | ``` 17 | select count(), avg(fare_amount) from trips; 18 | ``` 19 | 20 | We explain it was ~500ms. Not too bad, but this was a full scan, so no time-series in place. Let’s filter by time 21 | 22 | ``` 23 | select count(), avg(fare_amount) from trips where pickup_datetime IN '2018'; 24 | ``` 25 | 26 | ``` 27 | select count(), avg(fare_amount) from trips where pickup_datetime IN '2018-06'; 28 | ``` 29 | 30 | Cool. See how we are getting faster. That’s for performance. Now we switch to business functionality. Lets run this a couple of times to show we are getting new data about every second 31 | 32 | ## Sample By and interpolation, to group rows by time and to work with missing data 33 | 34 | ``` 35 | select count() from trades; 36 | ``` 37 | 38 | A common business metric would be the volume weighted average every 15 minutes. This is how you can downsample data, by the way, if you just INSERT INTO a new table the result 39 | 40 | ``` 41 | SELECT 42 | timestamp, 43 | sum(price * amount) / sum(amount) AS vwap_price, 44 | sum(amount) AS volume 45 | FROM trades 46 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 47 | SAMPLE BY 15m ALIGN TO CALENDAR; 48 | ``` 49 | 50 | Let’s see what happens if I go down to one second. I can see some seconds are missing! It seems we have some ingestion gaps. Explain conventional databases can only show you what’s inside, but cannot show you what’s NOT inside. But that’s actually super good insights for many use cases. In sensor data you need that all the time 51 | 52 | ``` 53 | SELECT 54 | timestamp, 55 | sum(price * amount) / sum(amount) AS vwap_price, 56 | sum(amount) AS volume 57 | FROM trades 58 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 59 | SAMPLE BY 1s ALIGN TO CALENDAR; 60 | ``` 61 | 62 | So. Introduce interpolate. You might mention linear and prev, but for identifying gaps I want to use the NULL one 63 | 64 | ``` 65 | SELECT 66 | timestamp, 67 | sum(price * amount) / sum(amount) AS vwap_price, 68 | sum(amount) AS volume 69 | FROM trades 70 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 71 | SAMPLE BY 1s FILL(NULL) ALIGN TO CALENDAR 72 | ``` 73 | 74 | So I can now see all the rows, including those with no values. How can I see only the anomalies? 
Easy, as we support SQL and that’s straightforward 75 | 76 | ``` 77 | with sampled as ( 78 | SELECT 79 | timestamp, 80 | sum(price * amount) / sum(amount) AS vwap_price, 81 | sum(amount) AS volume 82 | FROM trades 83 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 84 | SAMPLE BY 1s FILL(NULL) ALIGN TO CALENDAR 85 | ) select * from sampled where vwap_price IS NULL 86 | ``` 87 | 88 | We can see if we increase the sample from 1s to 5s, the number of gaps gets smaller, and eventually if we keep increasing, no gaps. A lot of real business cool use cases here 89 | 90 | ``` 91 | with sampled as ( 92 | SELECT 93 | timestamp, 94 | sum(price * amount) / sum(amount) AS vwap_price, 95 | sum(amount) AS volume 96 | FROM trades 97 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 98 | SAMPLE BY 5s FILL(NULL) ALIGN TO CALENDAR 99 | ) select * from sampled where vwap_price IS NULL 100 | ``` 101 | 102 | ``` 103 | with sampled as ( 104 | SELECT 105 | timestamp, 106 | sum(price * amount) / sum(amount) AS vwap_price, 107 | sum(amount) AS volume 108 | FROM trades 109 | WHERE symbol = 'BTC-USD' AND timestamp > dateadd('d', -1, now()) 110 | SAMPLE BY 10s FILL(NULL) ALIGN TO CALENDAR 111 | ) select * from sampled where vwap_price IS NULL 112 | ``` 113 | 114 | ## Powerful time semantics using IN 115 | 116 | Next, we run the same query but in small steps. We start asking it would be cool to know how many trades we had at one particular second, and we have our handy IN syntax 117 | 118 | ``` 119 | select timestamp, count() from 'trades' 120 | where timestamp IN '2022-10-28T23:59:58' 121 | ``` 122 | 123 | That’s a lot of detail. I want just the count per second, not for each timestamp, so SAMPLE BY again 124 | 125 | ``` 126 | select timestamp, count() from 'trades' 127 | where timestamp IN '2022-10-28T23:59:58' 128 | sample by 1s ALIGN TO CALENDAR 129 | ``` 130 | 131 | Even better, we might want to see what happened in the 2 seconds before and after the top of midnight 132 | 133 | ``` 134 | select timestamp, count() from ‘trades’ 135 | where timestamp IN '2022-10-28T23:59:58;4s' 136 | sample by 1s ALIGN TO CALENDAR 137 | ``` 138 | 139 | And what if we want what happened around those seconds for that day and the 7 days afterwards? Yep, more IN syntax. Doing this with other DB would be clunky 140 | 141 | ``` 142 | select timestamp, count() from 'trades' 143 | where timestamp IN '2022-10-28T23:59:58;4s;1d;7' 144 | sample by 1s ALIGN TO CALENDAR 145 | ``` 146 | 147 | ## Joining two tables by closer timestamp 148 | 149 | Last bit. ASOF JOIN. 
Weather dataset and trips join at midnight 150 | 151 | ``` 152 | SELECT 153 | timestamp as weather_timestamp, pickup_datetime, fare_amount, tempF, windDir 154 | FROM 155 | ( 156 | select * from trips WHERE pickup_datetime in '2018-06-01' 157 | ) ASOF JOIN weather; 158 | ``` 159 | 160 | And now we change the IN to a different time so we see we are matching a different weather record 161 | 162 | ``` 163 | SELECT 164 | timestamp as weather_timestamp, pickup_datetime, fare_amount, tempF, windDir 165 | FROM 166 | ( 167 | select * from trips WHERE pickup_datetime in '2018-06-01T00:55' 168 | ) ASOF JOIN weather; 169 | ``` 170 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | questdb: 5 | image: questdb/questdb 6 | container_name: questdb_quickstart 7 | restart: always 8 | ports: 9 | - "8812:8812" 10 | - "9000:9000" 11 | - "9009:9009" 12 | - "9003:9003" 13 | extra_hosts: 14 | - "host.docker.internal:host-gateway" 15 | 16 | grafana: 17 | image: grafana/grafana-oss 18 | container_name: questdb_quickstart_grafana 19 | restart: always 20 | user: "${DOCKER_COMPOSE_USER_ID:-}" 21 | ports: 22 | - 3000:3000 23 | extra_hosts: 24 | - "host.docker.internal:host-gateway" 25 | volumes: 26 | - ./dashboard/grafana/home_dir/var_lib_grafana:/var/lib/grafana/ 27 | - ./dashboard/grafana/home_dir/etc_grafana:/etc/grafana/ 28 | environment: 29 | - GF_INSTALL_PLUGINS=questdb-questdb-datasource 30 | - QDB_CLIENT_HOST=${QDB_CLIENT_HOST:-host.docker.internal} 31 | - QDB_CLIENT_PORT=${QDB_CLIENT_PORT:-8812} 32 | - QDB_CLIENT_USER=${QDB_CLIENT_USER:-admin} 33 | - QDB_CLIENT_PASSWORD=${QDB_CLIENT_PASSWORD:-quest} 34 | # use the value "disable" for local installations, and "require" for QuestDB Cloud 35 | - QDB_SSL_MODE=${QDB_SSL_MODE:-disable} 36 | 37 | ilp_ingestion: 38 | build: ./ingestion/python/tsbs_send 39 | container_name: questdb_quickstart_ingestion 40 | depends_on: 41 | - questdb 42 | extra_hosts: 43 | - "host.docker.internal:host-gateway" 44 | environment: 45 | - QDB_CLIENT_HOST=${QDB_CLIENT_HOST:-host.docker.internal} 46 | - QDB_CLIENT_PORT=${QDB_CLIENT_PORT:-9009} 47 | - QDB_CLIENT_TLS=${QDB_CLIENT_TLS:-False} 48 | - QDB_CLIENT_AUTH_KID=${QDB_CLIENT_AUTH_KID:-} 49 | - QDB_CLIENT_AUTH_D=${QDB_CLIENT_AUTH_D:-} 50 | - QDB_CLIENT_AUTH_X=${QDB_CLIENT_AUTH_X:-} 51 | - QDB_CLIENT_AUTH_Y=${QDB_CLIENT_AUTH_Y:-} 52 | 53 | 54 | 55 | -------------------------------------------------------------------------------- /ingestion/go/README.md: -------------------------------------------------------------------------------- 1 | # Ingesting data using go 2 | 3 | For ingesting data we are using the official QuestDB Go client. 4 | 5 | The demo program will generate random data simulating IoT sensor data into a table named "ilp_test". Note we don't need to create the table beforehand, as QuestDB will automatically create a table, if it doesn't already exist, when we start sending data. 6 | 7 | This demo will generate and ingest 100,000 events in batches of 100 events every 500 milliseconds. You can interrupt the program at any point while executing without any side effects. 
8 | 9 | ## Getting the dependencies 10 | 11 | Change directory to `tsbs_send` 12 | This will download the go questdb package 13 | 14 | `go mod download github.com/questdb/go-questdb-client` 15 | 16 | ### Configuration 17 | 18 | It defaults to localhost with all the QuestDB defaults, but can be adapted to use different credentials (or to run with TLS if using the QuestDB Cloud) by setting these environment variables: 19 | * QDB_CLIENT_HOST, defaults to 'localhost' 20 | * QDB_CLIENT_PORT, defaults to 9009 21 | * QDB_CLIENT_TLS, defaults to False 22 | * QDB_CLIENT_AUTH_KID, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 23 | * QDB_CLIENT_AUTH_D, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 24 | 25 | ## Running the program 26 | 27 | `go run src/main_orig.go` 28 | 29 | ## Validating we ingested some data 30 | 31 | Go to the webconsole (http://localhost:9000 if running locally) and execute this query 32 | 33 | `Select * from ilp_test` 34 | 35 | Then 36 | 37 | `Select count() from ilp_test` 38 | 39 | You can leave the go program running while you proceed to the last step of this quickstart and visualise your data on a dashboard. 40 | -------------------------------------------------------------------------------- /ingestion/go/tsbs_send/Dockerfile: -------------------------------------------------------------------------------- 1 | # syntax=docker/dockerfile:1 2 | FROM golang:1.20-alpine 3 | RUN apk --no-cache add curl 4 | WORKDIR /app 5 | COPY go.mod ./ 6 | RUN go mod download 7 | RUN go mod download github.com/questdb/go-questdb-client 8 | COPY src/*.go ./ 9 | CMD while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' host.docker.internal:19003)" != "200" ]]; do sleep 1; done;go run main_orig.go 10 | -------------------------------------------------------------------------------- /ingestion/go/tsbs_send/go.mod: -------------------------------------------------------------------------------- 1 | module tsbs_send 2 | 3 | go 1.18 4 | 5 | require ( 6 | github.com/go-sql-driver/mysql v1.6.0 // indirect 7 | github.com/jackc/chunkreader/v2 v2.0.1 // indirect 8 | github.com/jackc/pgconn v1.12.1 // indirect 9 | github.com/jackc/pgio v1.0.0 // indirect 10 | github.com/jackc/pgpassfile v1.0.0 // indirect 11 | github.com/jackc/pgproto3/v2 v2.3.0 // indirect 12 | github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b // indirect 13 | github.com/jackc/pgtype v1.11.0 // indirect 14 | github.com/jackc/pgx/v4 v4.16.1 // indirect 15 | github.com/jinzhu/inflection v1.0.0 // indirect 16 | github.com/jinzhu/now v1.1.5 // indirect 17 | github.com/mmcloughlin/spherand v0.0.0-20200201191112-cd5c4c9261aa // indirect 18 | github.com/questdb/go-questdb-client v0.0.0-20220912094445-fa4d7bd7b59e // indirect 19 | golang.org/x/crypto v0.0.0-20220214200702-86341886e292 // indirect 20 | golang.org/x/text v0.3.7 // indirect 21 | gorm.io/datatypes v1.0.7 // indirect 22 | gorm.io/driver/mysql v1.3.2 // indirect 23 | gorm.io/driver/postgres v1.3.8 // indirect 24 | gorm.io/gorm v1.23.8 // indirect 25 | ) 26 | -------------------------------------------------------------------------------- /ingestion/go/tsbs_send/src/main_orig.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "context" 5 | "log" 6 | "math/rand" 7 | "time" 8 | "os" 9 | 10 | qdb "github.com/questdb/go-questdb-client" 11 | ) 12 | 13 | func getEnv(key, fallback 
string) string { 14 | if value, ok := os.LookupEnv(key); ok { 15 | return value 16 | } 17 | return fallback 18 | } 19 | 20 | func getBoolEnv(key string) bool { 21 | if value, ok := os.LookupEnv(key); ok { 22 | return (value == "TRUE" || value == "True" || value == "true" || value == "1" || value == "t") 23 | } 24 | return false 25 | } 26 | 27 | func main() { 28 | opts := make([]qdb.LineSenderOption, 0, 4) 29 | 30 | host := getEnv("QDB_CLIENT_HOST", "localhost") 31 | port := getEnv("QDB_CLIENT_PORT", "9009") 32 | opts = append(opts, qdb.WithAddress(host + ":" + port) ) 33 | 34 | auth_kid := getEnv("QDB_CLIENT_AUTH_KID","") 35 | auth_d := getEnv("QDB_CLIENT_AUTH_D","") 36 | log.Printf("kid %s d %s ", auth_kid, auth_d) 37 | if (auth_kid != "" && auth_d != "") { 38 | log.Printf("kid %s d %s ", auth_kid, auth_d) 39 | opts = append(opts, qdb.WithAuth(auth_kid, auth_d)) 40 | } 41 | 42 | if getBoolEnv("QDB_CLIENT_TLS") { 43 | log.Printf("with TLS") 44 | opts = append(opts, qdb.WithTls()) 45 | } 46 | 47 | ctx := context.TODO() 48 | sender, err := qdb.NewLineSender( 49 | ctx, 50 | opts... 51 | ) 52 | if err != nil { 53 | log.Fatal(err) 54 | } 55 | defer sender.Close() 56 | 57 | var ( 58 | device_types = []string{"blue", "red", "green", "yellow"} 59 | ) 60 | 61 | iter := 10000 62 | batch := 100 63 | 64 | ts := 1 65 | delay_ms := 500 66 | 67 | min_lat := 19.50139 68 | max_lat := 64.85694 69 | min_lon := -161.75583 70 | max_lon := -68.01197 71 | 72 | for it := 0; it < iter; it++ { 73 | for i := 0; i < batch; i++ { 74 | //t := time.Now() 75 | err = sender. 76 | Table("ilp_test"). 77 | Symbol("device_type", device_types[rand.Intn(len(device_types))]). 78 | Int64Column("duration_ms", int64(rand.Intn(4000))). 79 | Float64Column("lat", rand.Float64()*(max_lat-min_lat)). 80 | Float64Column("lon", rand.Float64()*(max_lon-min_lon)). 81 | Int64Column("measure1", int64(rand.Int31())). 82 | Int64Column("measure2", int64(rand.Int31())). 83 | Int64Column("speed", int64(rand.Intn(100))). 84 | AtNow(ctx) 85 | //At(ctx, t.UnixNano()) 86 | if err != nil { 87 | log.Fatal(err) 88 | } 89 | ts += 1 90 | } 91 | err = sender.Flush(ctx) 92 | if err != nil { 93 | log.Fatal(err) 94 | } 95 | wrote := int64(batch) 96 | log.Printf("wrote %d rows", wrote) 97 | 98 | if delay_ms > 0 { 99 | log.Printf("sleeping %d milliseconds", delay_ms) 100 | time.Sleep(time.Duration(delay_ms) * time.Millisecond) 101 | } 102 | } 103 | log.Printf("Summary: %d rows sent", iter*batch) 104 | } 105 | -------------------------------------------------------------------------------- /ingestion/java/README.md: -------------------------------------------------------------------------------- 1 | # Ingesting data using JAVA 2 | 3 | For ingesting data we are adding QuestDB as a dependency. 4 | 5 | The demo program will generate random data simulating IoT sensor data into a table named "ilp_test". Note we don't need to create the table beforehand, as QuestDB will automatically create a table, if it doesn't already exist, when we start sending data. 6 | 7 | This demo will generate and ingest 1,000,000 events in batches of 100 events every 500 milliseconds. You can interrupt the program at any point while executing without any side effects. 
8 | 9 | ## Getting the dependencies 10 | 11 | Change directory to `tsbs_send` 12 | 13 | Generate the `jar` and dependencies with `maven` 14 | 15 | `mvn clean package` 16 | 17 | ## Running the program 18 | 19 | `java -jar target/ilp_ingestion-1.0-SNAPSHOT.jar` 20 | 21 | ## Validating we ingested some data 22 | 23 | Go to the webconsole (http://localhost:9000 if running locally) and execute this query 24 | 25 | `Select * from ilp_test` 26 | 27 | Then 28 | 29 | `Select count() from ilp_test` 30 | 31 | You can leave the JAVA program running while you proceed to the last step of this quickstart and visualise your data on a dashboard. 32 | -------------------------------------------------------------------------------- /ingestion/java/tsbs_send/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 5 | 4.0.0 6 | 7 | org.example 8 | ilp_ingestion 9 | 1.0-SNAPSHOT 10 | 11 | ilp_ingestion 12 | 13 | http://www.example.com 14 | 15 | 16 | UTF-8 17 | 1.7 18 | 1.7 19 | 20 | 21 | 22 | 23 | junit 24 | junit 25 | 4.11 26 | test 27 | 28 | 29 | 30 | 31 | org.questdb 32 | questdb 33 | 7.1.2 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | maven-clean-plugin 45 | 3.1.0 46 | 47 | 48 | 49 | maven-resources-plugin 50 | 3.0.2 51 | 52 | 53 | maven-compiler-plugin 54 | 3.8.0 55 | 56 | 57 | maven-surefire-plugin 58 | 2.22.1 59 | 60 | 61 | maven-jar-plugin 62 | 3.0.2 63 | 64 | 65 | maven-install-plugin 66 | 2.5.2 67 | 68 | 69 | maven-deploy-plugin 70 | 2.8.2 71 | 72 | 73 | 74 | maven-site-plugin 75 | 3.7.1 76 | 77 | 78 | maven-project-info-reports-plugin 79 | 3.0.0 80 | 81 | 82 | 83 | 84 | 85 | org.apache.maven.plugins 86 | maven-compiler-plugin 87 | 88 | 8 89 | 8 90 | 91 | 92 | 93 | org.apache.maven.plugins 94 | maven-dependency-plugin 95 | 96 | 97 | copy-dependencies 98 | prepare-package 99 | 100 | copy-dependencies 101 | 102 | 103 | ${project.build.directory}/${project.build.finalName}.lib 104 | 105 | 106 | 107 | 108 | 109 | org.apache.maven.plugins 110 | maven-jar-plugin 111 | 112 | 113 | 114 | true 115 | ${project.build.finalName}.lib/ 116 | io.questdb.samples.ilp_ingestion.IlpSender 117 | true 118 | 119 | 120 | ${buildNumber} 121 | 122 | 123 | 124 | 125 | 126 | org.apache.maven.plugins 127 | maven-compiler-plugin 128 | 129 | 1.8 130 | 1.8 131 | 132 | 133 | 134 | 135 | 136 | -------------------------------------------------------------------------------- /ingestion/java/tsbs_send/src/main/java/io/questdb/samples/ilp_ingestion/IlpCryptoSender.java: -------------------------------------------------------------------------------- 1 | package io.questdb.samples.ilp_ingestion; 2 | 3 | import io.questdb.client.Sender; 4 | 5 | import java.util.Random; 6 | import java.util.concurrent.ThreadLocalRandom; 7 | 8 | 9 | public class IlpCryptoSender { 10 | static final double MAX_PRICE = 21000.0, MIN_PRICE = 1200.0; 11 | static final String[] venues = {"CBS", "FUS", "LMX", "BTS"}, sides = {"BUY", "SELL"}, 12 | instruments = {"ETH-USD", "BTC-USD"}; 13 | 14 | public static void main(String[] args) { 15 | Random random = new Random(); 16 | int max_items = 100000, batch = 100, delay_ms = 500; 17 | 18 | double price = ThreadLocalRandom.current().nextDouble(11000, 15000); 19 | try (Sender sender = Sender.builder() 20 | .address("localhost:9009") 21 | .build()) { 22 | 23 | for (int i = 1; i <= max_items; i++) { 24 | price += ThreadLocalRandom.current().nextDouble(-0.11, 0.11); 25 | if (price >= MAX_PRICE) { 26 | price = MAX_PRICE; 27 | } else if (price <= MIN_PRICE) { 28 | price = 
MIN_PRICE; 29 | } 30 | 31 | sender.table("prices") 32 | .symbol("venue", venues[random.nextInt(venues.length)]) 33 | .symbol("instrument_key", instruments[random.nextInt(instruments.length)]) 34 | .symbol("side", sides[random.nextInt(sides.length)]) 35 | .doubleColumn("qty", ThreadLocalRandom.current().nextDouble(2800)) 36 | .doubleColumn("price", price) 37 | .atNow(); 38 | if (i % batch == 0) { 39 | sender.flush(); 40 | System.out.println("total rows written so far: " + i); 41 | Thread.sleep(delay_ms); 42 | 43 | } 44 | } 45 | 46 | } catch (InterruptedException e) { 47 | e.printStackTrace(); 48 | } 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /ingestion/java/tsbs_send/src/main/java/io/questdb/samples/ilp_ingestion/IlpSender.java: -------------------------------------------------------------------------------- 1 | package io.questdb.samples.ilp_ingestion; 2 | 3 | import io.questdb.client.Sender; 4 | 5 | import java.util.Random; 6 | 7 | 8 | public class IlpSender { 9 | static final String[] deviceTypes = {"blue", "red", "green", "yellow"}; 10 | static final Double min_lat = 19.50139, max_lat = 64.85694, min_lon = -161.75583, max_lon = -68.01197; 11 | 12 | public static void main(String[] args) { 13 | Random random = new Random(); 14 | int max_items = 1000000, batch = 100, delay_ms = 500; 15 | 16 | try (Sender sender = Sender.builder() 17 | .address("localhost:9009") 18 | .build()) { 19 | 20 | for (int i = 1; i <= max_items; i++) { 21 | sender.table("ilp_test") 22 | .symbol("device_type", deviceTypes[random.nextInt(deviceTypes.length)]) 23 | .longColumn("duration_ms", random.nextInt(4000)) 24 | .doubleColumn("lat", random.nextDouble() * (max_lat - min_lat)) 25 | .doubleColumn("lon", random.nextDouble() * (max_lon - min_lon)) 26 | .longColumn("measure1", random.nextInt(Integer.MAX_VALUE)) 27 | .longColumn("measure2", random.nextInt(Integer.MAX_VALUE)) 28 | .longColumn("speed", random.nextInt(100)) 29 | .atNow(); 30 | if (i % batch == 0) { 31 | sender.flush(); 32 | Thread.sleep(delay_ms); 33 | } 34 | } 35 | 36 | } catch (InterruptedException e) { 37 | e.printStackTrace(); 38 | } 39 | } 40 | } 41 | -------------------------------------------------------------------------------- /ingestion/python/README.md: -------------------------------------------------------------------------------- 1 | # Ingesting data using Python 2 | 3 | We provide three different scripts to ingest data into QuestDB: 4 | 5 | * `ilp_ingestion.py`: simulates IoT data using the ILP protocol. It only requires the questdb library. This script is also available in the Go and JAVA samples. The 6 | demo Grafana dashboard provided in this quickstart works on this table 7 | * `app_monitoring_ingestion.py`: simulates app observability data using the PostgreSQL protocol. It requires the psycopg and the faker libraries 8 | * `ticker_ingestion.py`: reads real live data from Yahoo Finance and inserts into QuestDB using the PostgreSQL protocol. It requires the psycopg and the yliveticker libraries 9 | 10 | You can install each dependency manually using `pip`, or just all of them via 11 | 12 | `cd tsbs_send` 13 | 14 | `pip install -r requirements.txt` 15 | 16 | ## IoT simulated data: ilp_ingestion.py demo 17 | 18 | The demo program will generate random data simulating IoT sensor data into a table named "ilp_test". Note we don't need to create the table beforehand, as QuestDB will automatically create a table, if it doesn't already exist, when we start sending data. 
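The core of the script is a small loop around the `questdb` client's `Sender`. A condensed sketch of that loop (column list trimmed, adapted from `tsbs_send/ilp_ingestion.py` in this folder):

```python
# Condensed sketch of the ingestion loop in tsbs_send/ilp_ingestion.py
# (trimmed column list; the table is created automatically on first write).
import random
from questdb.ingress import Sender, TimestampNanos

with Sender('localhost', 9009) as sender:      # ILP over TCP, default port
    for _ in range(100):                       # one batch of 100 rows
        sender.row(
            'ilp_test',
            symbols={'device_type': random.choice(['blue', 'red', 'green', 'yellow'])},
            columns={'duration_ms': random.randint(0, 4000),
                     'speed': random.randint(0, 100)},
            at=TimestampNanos.now())
    sender.flush()                             # the real script flushes once per batch, then sleeps
```

The full script wraps this in an outer loop so it keeps sending batches, and it reads its connection settings from the environment variables listed under Configuration below.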
19 | 20 | This demo will generate and ingest 1,000,000 events in batches of 100 events every 500 milliseconds. You can interrupt the program at any point while executing without any side effects. 21 | 22 | ### Configuration 23 | 24 | It defaults to localhost with all the QuestDB defaults, but can be adapted to use different credentials (or to run with TLS if using the QuestDB Cloud) by setting these environment variables: 25 | * QDB_CLIENT_HOST, defaults to 'localhost' 26 | * QDB_CLIENT_PORT, defaults to 9009 27 | * QDB_CLIENT_TLS, defaults to False 28 | * QDB_CLIENT_AUTH_KID, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 29 | * QDB_CLIENT_AUTH_D, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 30 | * QDB_CLIENT_AUTH_X, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 31 | * QDB_CLIENT_AUTH_Y, no default. Only used for authenticated ILP. You can find this param on your QuestDB Cloud instance console 32 | 33 | ### Running the program 34 | 35 | `python ilp_ingestion.py` 36 | 37 | ### Validating we ingested some data 38 | 39 | Go to the web console (http://localhost:9000 if running locally) and execute this query: 40 | 41 | `Select * from ilp_test` 42 | 43 | Then 44 | 45 | `Select count() from ilp_test` 46 | 47 | You can leave the Python program running while you proceed to the last step of this quickstart and visualise your data on a dashboard. 48 | 49 | ## App observability simulated data: app_monitoring_ingestion.py demo 50 | 51 | This demo program will generate random data (using the Faker library) simulating application usage, and ingest it into a table named "app_monitor". 52 | Data is ingested using the pg_wire protocol. Note we don't need to create the table beforehand, as the script starts with a call to `CREATE TABLE IF NOT EXISTS`. 53 | 54 | This demo will generate one row every 100 milliseconds until stopped. You can interrupt the program at any point while executing without any side effects. 55 | 56 | ### Configuration 57 | 58 | It defaults to localhost with all the QuestDB defaults, but can be adapted to use different credentials (including QuestDB Cloud) by setting these environment variables: 59 | * QDB_CLIENT_PG_HOST, defaults to '127.0.0.1' 60 | * QDB_CLIENT_PG_PORT, defaults to 8812 61 | * QDB_CLIENT_PG_USER, defaults to 'admin' 62 | * QDB_CLIENT_PG_PASSWORD, defaults to 'quest' 63 | 64 | ### Running the program 65 | 66 | `python app_monitoring_ingestion.py` 67 | 68 | ### Validating we ingested some data 69 | 70 | Go to the web console (http://localhost:9000 if running locally) and execute this query: 71 | 72 | `Select * from app_monitor` 73 | 74 | Then 75 | 76 | `Select count() from app_monitor` 77 | 78 | 79 | 80 | ## Live financial data from Yahoo Finance: ticker_ingestion.py demo 81 | 82 | This demo program will read live data (using the yliveticker library) from the Yahoo Finance websocket and ingest it into a table named "live_ticker". 83 | Data is ingested using the pg_wire protocol. Note we don't need to create the table beforehand, as the script starts with a call to `CREATE TABLE IF NOT EXISTS`. 84 | 85 | This demo will generate data as it comes (volume depends on market hours, but at most it generates a few records per second) until stopped. 86 | You can interrupt the program at any point while executing without any side effects.
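Internally the script registers a callback with `yliveticker` and writes every websocket message straight into QuestDB over the PostgreSQL wire protocol with `psycopg`. A condensed sketch of that callback (column list trimmed, adapted from `tsbs_send/ticker_ingestion.py`; the real script reads its ticker list from `ticker_names.txt` and its connection settings from the environment variables listed under Configuration below):

```python
# Condensed sketch of tsbs_send/ticker_ingestion.py: each Yahoo Finance
# websocket message becomes one row, inserted via the PostgreSQL protocol.
import psycopg as pg
import yliveticker

CONN_STR = 'user=admin password=quest host=127.0.0.1 port=8812 dbname=qdb'

def on_new_row(ws, msg):
    msg['timestamp'] = msg['timestamp'] * 1000          # milliseconds -> microseconds
    with pg.connect(CONN_STR, autocommit=True) as connection:
        with connection.cursor() as cur:
            # trimmed column list; the real script inserts every field of the message
            cur.execute(
                'INSERT INTO live_ticker(timestamp, id, price) '
                'VALUES (%(timestamp)s, %(id)s, %(price)s);', msg)

yliveticker.YLiveTicker(on_ticker=on_new_row, ticker_names=['BTC-USD'])   # single ticker for brevity
```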
87 | 88 | ### Configuration 89 | 90 | It defaults to localhost with all the QuestDB defaults, but can be adapted to use different credentials (including QuestDB Cloud) by setting these environment variables: 91 | * QDB_CLIENT_PG_HOST, defaults to '127.0.0.1' 92 | * QDB_CLIENT_PG_PORT, defaults to 8812 93 | * QDB_CLIENT_PG_USER, defaults to 'admin' 94 | * QDB_CLIENT_PG_PASSWORD, defaults to 'quest' 95 | 96 | The program will read the file `ticker_names.txt` on start, and will monitor all the tickers there. By default it is monitoring exchange between some currencies, 97 | and some tickers in US, India, Japan, and Spanish markets, so there is some activity most of the time independently of market opening/close hours. 98 | If you want to change the file or to add your own tickets, you can get the ticker name from the `https://finance.yahoo.com/` website. Notice the ticker needs to 99 | have the exact format from Yahoo's website (including any preffixes or suffixes) so we can match it. You need to add a ticker per line with no spaces before or after. 100 | 101 | ### Running the program 102 | 103 | `python ticker_ingestion.py` 104 | 105 | ### Validating we ingested some data 106 | 107 | Go to the webconsole (http://localhost:9000 if running locally) and execute this query 108 | 109 | `Select * from live_ticker` 110 | 111 | Then 112 | 113 | `Select count() from live_ticker` 114 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/Dockerfile: -------------------------------------------------------------------------------- 1 | # syntax=docker/dockerfile:1 2 | FROM python:3.7-alpine 3 | RUN apk add --no-cache gcc musl-dev linux-headers curl 4 | COPY ilp_ingestion.py . 5 | COPY requirements.txt . 6 | RUN pip install -r requirements.txt 7 | CMD while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' host.docker.internal:19003)" != "200" ]]; do sleep 1; done;python ./ilp_ingestion.py 8 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/app_monitoring_ingestion.py: -------------------------------------------------------------------------------- 1 | import psycopg as pg 2 | from faker import Faker 3 | from faker.providers import DynamicProvider 4 | from faker.providers import internet 5 | import os 6 | import sys 7 | import time 8 | 9 | HOST = os.getenv('QDB_CLIENT_PG_HOST', '127.0.0.1') 10 | PORT = os.getenv('QDB_CLIENT_PG_PORT', 8812) 11 | PG_USER = os.getenv('QDB_CLIENT_PG_USER', 'admin') 12 | PG_PASSWORD = os.getenv('QDB_CLIENT_PG_PASSWORD', 'quest') 13 | DELAY = 0.1 14 | 15 | 16 | def get_fake_generator(): 17 | app_actions_provider = DynamicProvider( 18 | provider_name="app_action", 19 | elements=[ 20 | "/login", 21 | "/stock/buy", 22 | "/stock/sell", 23 | "/stock/check", 24 | "/user/profile", 25 | "/file/download" 26 | ], 27 | ) 28 | fake_generator = Faker() 29 | fake_generator.add_provider(app_actions_provider) 30 | fake_generator.add_provider(internet) 31 | 32 | return fake_generator 33 | 34 | def fake_row(fake_generator): 35 | row=dict() 36 | profile = fake_generator.profile() 37 | row['name'] = profile['name'] 38 | row['username'] = profile['username'] 39 | row["email"] = profile['mail'] 40 | row['company'] = profile['company'] 41 | row['app_action'] = fake_generator.app_action() 42 | row['credit_card_provider'] = None 43 | if row['app_action'] == "/stock/buy": 44 | row['credit_card_provider'] = fake_generator.credit_card_provider() 45 | row['file_name'] = None 46 | if 
row['app_action'] == "/file/download": 47 | row['file_name'] = fake_generator.file_path(extension='pdf') 48 | row['ip']=fake_generator.ipv4_private() 49 | row['method']=fake_generator.http_method() 50 | row['user_agent']=fake_generator.user_agent() 51 | row['country_code'] = fake_generator.country_code() 52 | row['time_ms'] = fake_generator.pyint(min_value=20, max_value=1500) 53 | row['error'] = fake_generator.pybool(truth_probability=8) 54 | row['timestamp'] = timestamp = time.time_ns() // 1000 55 | return row 56 | 57 | if __name__ == '__main__': 58 | conn_str = f'user={PG_USER} password={PG_PASSWORD} host={HOST} port={PORT} dbname=qdb' 59 | with pg.connect(conn_str, autocommit=True) as connection: 60 | with connection.cursor() as cur: 61 | cur.execute(''' 62 | CREATE TABLE IF NOT EXISTS app_monitor( 63 | timestamp TIMESTAMP, 64 | name STRING, 65 | username SYMBOL capacity 256 CACHE, 66 | email STRING, 67 | company STRING, 68 | app_action SYMBOL capacity 10 CACHE, 69 | method SYMBOL capacity 10 CACHE, 70 | time_ms LONG, 71 | credit_card_provider SYMBOL capacity 10 CACHE, 72 | file_name STRING, 73 | ip STRING, 74 | country_code SYMBOL capacity 150 CACHE, 75 | user_agent SYMBOL capacity 50 CACHE 76 | ) TIMESTAMP (timestamp) PARTITION BY DAY WAL; 77 | ''' 78 | ) 79 | 80 | fg = get_fake_generator() 81 | while True: 82 | row=fake_row(fg) 83 | cur.execute(''' 84 | INSERT INTO app_monitor( 85 | timestamp, 86 | name, 87 | username, 88 | email, 89 | company, 90 | app_action, 91 | method, 92 | time_ms, 93 | credit_card_provider, 94 | file_name, 95 | ip, 96 | country_code, 97 | user_agent 98 | ) 99 | VALUES ( 100 | %(timestamp)s, 101 | %(name)s, 102 | %(username)s, 103 | %(email)s, 104 | %(company)s, 105 | %(app_action)s, 106 | %(method)s, 107 | %(time_ms)s, 108 | %(credit_card_provider)s, 109 | %(file_name)s, 110 | %(ip)s, 111 | %(country_code)s, 112 | %(user_agent)s 113 | ) 114 | ''' 115 | , row 116 | ) 117 | sys.stdout.write(f'sent : {row}\n') 118 | time.sleep(DELAY) 119 | 120 | 121 | 122 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/ilp_http_ingestion.py: -------------------------------------------------------------------------------- 1 | from questdb.ingress import Sender, IngressError, TimestampNanos, Protocol 2 | import os 3 | import sys 4 | import random 5 | import time 6 | 7 | CONF_STRING=os.getenv('QDB_CLIENT_CONF', 'http::addr=localhost:9000;username=admin;password=quest;') 8 | 9 | DEVICE_TYPES = ["blue", "red", "green", "yellow"] 10 | ITER = 10000 11 | BATCH = 10000 12 | DELAY = 0 13 | MIN_LAT = 19.50139 14 | MAX_LAT = 64.85694 15 | MIN_LON = -161.75583 16 | MAX_LON = -68.01197 17 | 18 | 19 | def send(conf: str = CONF_STRING): 20 | try: 21 | with Sender.from_conf(conf) as sender: 22 | for it in range(ITER): 23 | for i in range(BATCH): 24 | sender.row( 25 | 'ilp_test', 26 | symbols={'device_type': random.choice(DEVICE_TYPES)}, 27 | columns={ 28 | 'duration_ms': random.randint(0, 4000), 29 | "lat": random.uniform(MIN_LAT, MAX_LAT), 30 | "lon": random.uniform(MIN_LON, MAX_LON), 31 | "measure1": random.randint(-2147483648, 2147483647), 32 | "measure2": random.randint(-2147483648, 2147483647), 33 | "speed": random.randint(0, 100) 34 | }, 35 | at=TimestampNanos.now()) 36 | sys.stdout.write(f'added : {BATCH} rows\n') 37 | time.sleep(DELAY) 38 | except IngressError as e: 39 | sys.stderr.write(f'Got error: {e}') 40 | 41 | 42 | if __name__ == '__main__': 43 | sys.stdout.write(f'Ingestion started. 
Connecting to {CONF_STRING}\n') 44 | send() 45 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/ilp_ingestion.py: -------------------------------------------------------------------------------- 1 | from questdb.ingress import Sender, IngressError, TimestampNanos 2 | import os 3 | import sys 4 | import random 5 | import time 6 | 7 | HOST = os.getenv('QDB_CLIENT_HOST', 'localhost') 8 | PORT = os.getenv('QDB_CLIENT_PORT', 9009) 9 | TLS = os.getenv('QDB_CLIENT_TLS', "False" ).lower() in ('true', '1', 't') 10 | AUTH_KID = os.getenv('QDB_CLIENT_AUTH_KID', '') 11 | AUTH_D = os.getenv('QDB_CLIENT_AUTH_D', '') 12 | AUTH_X = os.getenv('QDB_CLIENT_AUTH_X', '') 13 | AUTH_Y = os.getenv('QDB_CLIENT_AUTH_Y', '') 14 | 15 | DEVICE_TYPES = ["blue", "red", "green", "yellow"] 16 | ITER = 10000 17 | BATCH = 100 18 | DELAY = 0.5 19 | MIN_LAT = 19.50139 20 | MAX_LAT = 64.85694 21 | MIN_LON = -161.75583 22 | MAX_LON = -68.01197 23 | 24 | 25 | def send(host: str = HOST, port: int = PORT): 26 | try: 27 | auth = None 28 | if AUTH_KID and AUTH_D and AUTH_X and AUTH_Y: 29 | sys.stdout.write(f'Ingestion using credentials\n') 30 | auth = ( AUTH_KID, AUTH_D, AUTH_X, AUTH_Y ) 31 | with Sender(host, port, auth=auth, tls=TLS) as sender: 32 | for it in range(ITER): 33 | for i in range(BATCH): 34 | sender.row( 35 | 'ilp_test', 36 | symbols={'device_type': random.choice(DEVICE_TYPES)}, 37 | columns={ 38 | 'duration_ms': random.randint(0, 4000), 39 | "lat": random.uniform(MIN_LAT, MAX_LAT), 40 | "lon": random.uniform(MIN_LON, MAX_LON), 41 | "measure1": random.randint(-2147483648, 2147483647), 42 | "measure2": random.randint(-2147483648, 2147483647), 43 | "speed": random.randint(0, 100) 44 | }, 45 | at=TimestampNanos.now()) 46 | sys.stdout.write(f'sent : {BATCH} rows\n') 47 | sender.flush() 48 | time.sleep(DELAY) 49 | except IngressError as e: 50 | sys.stderr.write(f'Got error: {e}') 51 | 52 | 53 | if __name__ == '__main__': 54 | sys.stdout.write(f'Ingestion started. 
Connecting to {HOST} {PORT}\n') 55 | send() 56 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/requirements.txt: -------------------------------------------------------------------------------- 1 | questdb 2 | yliveticker 3 | psycopg[binary] 4 | faker 5 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/ticker_ingestion.py: -------------------------------------------------------------------------------- 1 | import psycopg as pg 2 | import yliveticker 3 | import os 4 | import sys 5 | 6 | HOST = os.getenv('QDB_CLIENT_PG_HOST', '127.0.0.1') 7 | PORT = os.getenv('QDB_CLIENT_PG_PORT', 8812) 8 | PG_USER = os.getenv('QDB_CLIENT_PG_USER', 'admin') 9 | PG_PASSWORD = os.getenv('QDB_CLIENT_PG_PASSWORD', 'quest') 10 | 11 | def on_new_row(ws, msg): 12 | with pg.connect(yliveticker.conn_str, autocommit=True) as connection: 13 | msg['timestamp'] = msg['timestamp'] * 1000 14 | with connection.cursor() as cur: 15 | cur.execute(''' 16 | INSERT INTO live_ticker( 17 | timestamp, 18 | id, 19 | exchange, 20 | quoteType, 21 | price, 22 | marketHours, 23 | changePercent, 24 | dayVolume, 25 | change, 26 | priceHint 27 | ) 28 | VALUES( 29 | %(timestamp)s, 30 | %(id)s, 31 | %(exchange)s, 32 | %(quoteType)s, 33 | %(price)s, 34 | %(marketHours)s, 35 | %(changePercent)s, 36 | %(dayVolume)s , 37 | %(change)s, 38 | %(priceHint)s 39 | ); 40 | ''' 41 | , msg 42 | ) 43 | sys.stdout.write(f'sent : {msg}\n') 44 | 45 | def get_ticker_names(): 46 | with open('ticker_names.txt') as f: 47 | lines = f.read().splitlines() 48 | return list(set(lines)) 49 | 50 | 51 | if __name__ == '__main__': 52 | yliveticker.conn_str = f'user={PG_USER} password={PG_PASSWORD} host={HOST} port={PORT} dbname=qdb' 53 | with pg.connect(yliveticker.conn_str, autocommit=True) as connection: 54 | with connection.cursor() as cur: 55 | cur.execute(''' 56 | CREATE TABLE IF NOT EXISTS live_ticker( 57 | timestamp TIMESTAMP, 58 | 'id' SYMBOL capacity 256 CACHE, 59 | exchange SYMBOL capacity 256 CACHE, 60 | quoteType LONG, 61 | price DOUBLE, 62 | marketHours LONG, 63 | changePercent DOUBLE, 64 | dayVolume DOUBLE, 65 | change DOUBLE, 66 | priceHint LONG 67 | ) TIMESTAMP (timestamp) PARTITION BY DAY WAL; 68 | ''' 69 | ) 70 | 71 | yliveticker.YLiveTicker(on_ticker=on_new_row, ticker_names=get_ticker_names()) 72 | -------------------------------------------------------------------------------- /ingestion/python/tsbs_send/ticker_names.txt: -------------------------------------------------------------------------------- 1 | BTC=X 2 | ^GSPC 3 | ^DJI 4 | ^IXIC 5 | ^RUT 6 | CL=F 7 | GC=F 8 | SI=F 9 | EURUSD=X 10 | ^TNX 11 | ^VIX 12 | GBPUSD=X 13 | JPY=X 14 | BTC-USD 15 | ^CMC200 16 | ^FTSE 17 | ^N225 18 | HDFC.NS 19 | ADANIENT.NS 20 | APOLLOHOSP.NS 21 | ASIANPAINT.NS 22 | AXISBANK.NS 23 | BRITANNIA.NS 24 | COALINDIA.NS 25 | HEROMOTOCO.NS 26 | SUNPHARMA.NS 27 | TATACONSUM.NS 28 | TATASTEEL.NS 29 | TITAN.NS 30 | AENA.MC 31 | ANA.MC 32 | AMS.MC 33 | SAB.MC 34 | BKT.MC 35 | FER.MC 36 | IDR.MC 37 | TEF.MC 38 | MEL.MC 39 | 4151.T 40 | 4502.T 41 | 8306.T 42 | 4755.T 43 | 9501.T 44 | 7203.T 45 | 8411.T 46 | 9613.T 47 | 9434.T 48 | 9432.T 49 | 9433.T 50 | 9984.T 51 | 8630.T 52 | 2501.T 53 | 3086.T 54 | 9766.T 55 | 3863.T 56 | 4021.T 57 | 5101.T 58 | 5232.T 59 | 5332.T 60 | 5406.T 61 | 5801.T 62 | 7012.T 63 | 7951.T 64 | 9107.T 65 | 9201.T 66 | 9532.T 67 | MULN 68 | MELI 69 | CRBU 70 | AMZN 71 | GOOG 72 | GENI 73 | JBLU 74 | TSLA 75 | RIVN 76 | CCL 77 | PLTR 78 
| MARA 79 | META 80 | F 81 | SOFI 82 | BAC 83 | AAPL 84 | RIOT 85 | NVDA 86 | AMD 87 | OPEN 88 | MSFT 89 | T 90 | XPEV 91 | AAL 92 | ZGN 93 | CXMSF 94 | WMG 95 | DOCN 96 | PSNY 97 | -------------------------------------------------------------------------------- /loading_and_querying_data.md: -------------------------------------------------------------------------------- 1 | # Loading and Querying data 2 | 3 | Make sure you have a working QuestDB installation as explained in the [README](./README.md) 4 | 5 | ## Loading data using a CSV 6 | 7 | There are different ways of loading CSV data into QuestDB. I am showing here the simplest one, recommended only for small/medium files with rows sorted by timestamp. 8 | 9 | Go to the URL of the web console, which runs on port 9000. If you are running this on your machine, it should be running at http://localhost:9000 10 | 11 | You will see some icons at the left bar. Choose the one with an arrow pointing up; when you hover over it, it will read "import". Click to browse, or drag in the provided demo file `trips.csv` from the [root of this repository](./trips.csv). After a few seconds, your file should be loaded. 12 | 13 | Go back to the web console main screen by clicking the `` icon on the left menu bar. 14 | 15 | 16 | ## Using the web console for interactive queries 17 | 18 | If the name `trips.csv` is not showing in the `tables` section, click the reload icon (a circle formed by two arrows) at the top left. 19 | 20 | You can now click on the table name to see the auto-discovered schema. 21 | 22 | Run your first query by writing `select * from 'trips.csv'` in the editor and clicking Run. 23 | 24 | The data we loaded represents real taxi rides in the city of New York in January 2018. It is a very small dataset with only 999 rows for demo purposes. 25 | 26 | Since the name of the table is not great, let's rename it by running this SQL statement: 27 | 28 | `rename table 'trips.csv' to trips_2018` 29 | 30 | And now we can run queries like 31 | 32 | `select count() from trips_2018` 33 | 34 | You can find the complete SQL reference for QuestDB (including time-series extensions) at [the docs](https://questdb.io/docs/concept/sql-execution-order/) 35 | 36 | If you want to run some interesting queries on top of larger demo datasets, you can head to the [QuestDB live demo](https://demo.questdb.io/) and just click on the top where it says 'Example Queries'. The `trips` dataset at that live demo has over 1.6 billion rows. All the datasets at the demo site are static, except for the `trades` table, which pulls crypto prices from Coinbase's API every second or so. 37 | 38 | I have compiled some of the queries you can run on the demo dataset in [this markdown file](./demo_queries.md) 39 | 40 | ## Loading CSV data using the API 41 | 42 | We can also load CSV files using the API. In this case, we can add schema details (for every column or just specific ones), 43 | and table details, such as the name, or the partitioning strategy. 44 | 45 | In this repository, I am providing a dataset with energy consumption and forecast data in 15-minute intervals for a few 46 | European countries. This file is a subset of [the original](https://data.open-power-system-data.org/time_series/2020-10-06) 47 | and contains data only for 2018 (205,189 rows).
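The import goes through QuestDB's HTTP `/imp` endpoint, posting the CSV as multipart form data together with a partial schema hint for the timestamp column. As an alternative to the curl command below, a roughly equivalent Python sketch (using the `requests` library; this helper snippet is not part of the repository) would be:

```python
# Sketch only: the same import as the curl command below, done from Python
# with the 'requests' library.
import json
import requests

schema = [{"name": "timestamp", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"}]
params = {"overwrite": "false", "name": "energy_2018",
          "timestamp": "timestamp", "partitionBy": "MONTH"}

with open("energy_2018.csv", "rb") as csv_file:
    # QuestDB's /imp endpoint expects the schema part before the data part
    response = requests.post(
        "http://localhost:9000/imp",
        params=params,
        files={"schema": (None, json.dumps(schema)),
               "data": ("energy_2018.csv", csv_file)})
print(response.text)
```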
48 | 49 | Import using curl: 50 | 51 | ``` 52 | curl -F schema='[{"name":"timestamp", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"}]' -F data=@energy_2018.csv 'http://localhost:9000/imp?overwrite=false&name=energy_2018&timestamp=timestamp&partitionBy=MONTH' 53 | ``` 54 | 55 | Navigate to the [QuestDB Web Console](http://localhost:9000) and explore the table we just created. 56 | 57 | 58 | 59 | --------------------------------------------------------------------------------