├── .github ├── ISSUE_TEMPLATE │ ├── bug_report.md │ ├── content_change.md │ └── update.md ├── PULL_REQUEST_TEMPLATE │ ├── content_update.md │ ├── correction.md │ ├── new_page.md │ └── pull_request_template.md └── pull_request_template.md ├── .gitignore ├── NOTICE ├── README.md ├── api.md ├── development.md ├── development └── contribute-to-docs.md ├── faq.md ├── getting-started.md ├── getting-started ├── cloud-multi-node.md ├── configuring.md ├── creating-hypertables.md ├── exploring-cloud.md ├── exploring-forge.md ├── exploring-labs.md ├── forge-configuration.md ├── forge-multi-node.md ├── forge-resize.md ├── install-psql-tutorial.md ├── installation-apt-debian.md ├── installation-apt-ubuntu.md ├── installation-docker.md ├── installation-grafana.md ├── installation-homebrew.md ├── installation-source-windows.md ├── installation-source.md ├── installation-timescale-cloud.md ├── installation-timescale-forge.md ├── installation-ubuntu-ami.md ├── installation-windows.md ├── installation-yum.md ├── installation.md ├── migrating-data.md ├── multi-node-self-managed.md ├── setup-multi-node-basic.md ├── setup-multi-node.md └── setup.md ├── guc.md ├── integration-tools.md ├── introduction.md ├── introduction ├── architecture.md ├── data-model.md ├── time-series-data.md ├── timescaledb-vs-nosql.md └── timescaledb-vs-postgres.md ├── main.md ├── multinode └── bootstrapping.md ├── page-index └── page-index.js ├── release-notes.md ├── release-notes └── changes-in-timescaledb-2.md ├── starting-from-scratch.md ├── tutorials.md ├── tutorials ├── analyze-cryptocurrency-data.md ├── clustering.md ├── continuous-aggs-tutorial.md ├── getting-started-with-promscale.md ├── other-sample-datasets.md ├── outflux.md ├── prometheus-adapter.md ├── promscale-benefits.md ├── promscale-how-it-works.md ├── promscale-install.md ├── promscale-run-queries.md ├── quickstart-go.md ├── quickstart-node.md ├── quickstart-python.md ├── quickstart-ruby.md ├── replication.md ├── telegraf-output-plugin.md ├── tutorial-forecasting.md ├── tutorial-grafana-alerts.md ├── tutorial-grafana-dashboards.md ├── tutorial-grafana-geospatial.md ├── tutorial-grafana-variables.md ├── tutorial-grafana.md ├── tutorial-hello-timescale.md ├── tutorial-howto-monitor-django-prometheus.md ├── tutorial-howto-simulate-iot-sensor-data.md ├── tutorial-howto-visualize-missing-data-grafana.md ├── tutorial-setting-up-timescale-cloud-endpoint-for-prometheus.md ├── tutorial-setup-timescale-prometheus.md ├── tutorial-use-timescale-prometheus-grafana.md └── visualizing-time-series-data-in-tableau.md ├── update-timescaledb.md ├── update-timescaledb ├── update-docker.md ├── update-tsdb-1.md ├── update-tsdb-2.md └── upgrade-pg.md ├── using-timescaledb.md └── using-timescaledb ├── actions.md ├── alerting.md ├── backup.md ├── compression.md ├── continuous-aggregates.md ├── data-retention.md ├── data-tiering.md ├── distributed-hypertables.md ├── hypertables.md ├── ingesting-data.md ├── limitations.md ├── reading-data.md ├── schema-management.md ├── telemetry.md ├── tooling.md ├── troubleshooting.md ├── update-db.md ├── visualizing-data.md └── writing-data.md /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Bug report 3 | about: a bug/typo found on docs 4 | title: "[Bug/Typo]" 5 | labels: 'bug' 6 | assignees: '' 7 | 8 | --- 9 | **Add the appropriate label(s) -->** 10 | 11 | **Describe the bug** 12 | 13 | **Location** 14 | 15 | 16 | **Screenshots** 17 | 18 | 19 | **Device and browser** 
20 | 21 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/content_change.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Request change 3 | about: Request a change in content/appearance/functionality 4 | title: "[Change]" 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | **Change in content, appearance, or functionality?** 10 | Add the appropriate label(s) -->> 11 | 12 | **Describe the change** 13 | 14 | 15 | **Location(s)** 16 | 18 | 19 | **How soon is this needed?** 20 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/update.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Update 3 | about: Request an update 4 | title: "[Update]" 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | **Add the appropriate label -->** 10 | 11 | **Describe the update** 12 | 13 | 14 | **Location(s)** 15 | 16 | 17 | **How soon is this needed?** 18 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE/content_update.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Content update 3 | about: updating docs content 4 | title: "[Update]" 5 | labels: 'add-to-branches', 'update content' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Add label for the earliest DB version your changes apply to -->>** 11 | 12 | **If you are changing page names or in-page anchors, you must update the 13 | `page-index.js` file** 14 | 15 | **The default reviewers (CODEOWNERS) are added to each PR. They will 16 | primarily be reviewing on formatting, not content. You only need ONE of them 17 | to approve in order to merge your PR** 18 | 19 | **Please add at least one content reviewer. If you don't add an 20 | additional reviewer with knowledge about the area you are writing about, 21 | one of the codeowners may add one** 22 | 23 | **If this is in response to a posted GitHub issue, please add "Fixes " 24 | below so that it's referenced (i.e. "Fixes #122")** 25 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE/correction.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Correction 3 | title: "[Correction]" 4 | labels: 'add-to-branches', 'bug' 5 | assignees: '' 6 | 7 | --- 8 | 9 | **Add label for the earliest DB version your changes apply to -->>** 10 | 11 | **The default reviewers (CODEOWNERS) are added to each PR. They will 12 | primarily be reviewing on formatting, not content. You only need ONE of them 13 | to approve in order to merge your PR** 14 | 15 | **Reviewers may add comments to your PR without approving or requesting changes. 16 | This is to avoid additional request/review cycles for simple changes. 17 | You still need to address these changes before your PR is ready to merge** 18 | 19 | **If this is in response to a posted GitHub issue, please add "Fixes " 20 | below so that it's referenced (i.e. 
"Fixes #122")** 21 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE/new_page.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: New docs page 3 | about: updating docs content 4 | title: "[New]" 5 | labels: 'add-to-branches', 'new page' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Add label for the earliest DB version your changes apply to (e.g. `1.7`)-->>** 11 | 12 | **Make sure to update the `page-index.js` file to include your new page 13 | (otherwise it won't be visible)** 14 | 15 | **The default reviewers (CODEOWNERS) are added to each PR. They will 16 | primarily be reviewing on formatting, not content. You only need ONE of them 17 | to approve in order to merge your PR** 18 | 19 | **Please add at least one content reviewer with knowledge about the area you 20 | are writing about. If you don't add an additional reviewer, one of the 21 | codeowners may add one** 22 | 23 | **If this is in response to a posted GitHub issue, please add "Fixes " 24 | below so that it's referenced (i.e. "Fixes #122")** 25 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE/pull_request_template.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Pull request 3 | about: updating docs content 4 | title: "" 5 | labels: 'add-to-branches' 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Add label for the earliest DB version your changes apply to -->>** 11 | 12 | **If you are changing page names or in-page anchors, you must update the 13 | `page-index.js` file** 14 | 15 | **The default reviewers (CODEOWNERS) are added to each PR. They will 16 | primarily be reviewing on formatting, not content. You only need ONE of them 17 | to approve in order to merge your PR** 18 | 19 | **If you are making simple corrections, reviewers may add comments to your PR without approving or requesting changes. 20 | This is to avoid additional request/review cycles for simple changes. 21 | You still need to address these changes before your PR is ready to merge** 22 | 23 | **Please add at least one content reviewer. If you don't add an 24 | additional reviewer with knowledge about the area you are writing about, 25 | one of the codeowners may add one** 26 | 27 | **If this is in response to a posted GitHub issue, please add "Fixes " 28 | below so that it's referenced (i.e. "Fixes #122")** 29 | -------------------------------------------------------------------------------- /.github/pull_request_template.md: -------------------------------------------------------------------------------- 1 | PR instructions 2 | 3 | 1. Describe your PR right below: 4 | 5 | 6 | 7 | --- 8 | 2. Add label for the earliest DB version your changes apply to -->> 9 | 10 | 3. If you are changing page names or in-page anchors, you must update the 11 | `page-index.js` file 12 | 13 | 4. Please add at least one content reviewer. If you don't add an 14 | additional reviewer with knowledge about the area you are writing about, 15 | one of the codeowners may add one 16 | 17 | 5. If this is in response to a posted GitHub issue, please add "Fixes " 18 | below so that it's referenced (i.e. "Fixes #122") 19 | 20 | 5. If you are making simple corrections, reviewers may add comments to your 21 | PR without officially requesting changes. This is to avoid additional 22 | request/review cycles for simple changes. Make sure to address these 23 | comments (i.e. 
with changes) before your PR is ready to merge 24 | 25 | 6. Feel free to delete these instructions in your PR message after everything 26 | else is filled out 27 | --- 28 | Additional notes: 29 | 30 | The default reviewers (CODEOWNERS) are added to each PR. They will 31 | primarily be reviewing on formatting, not content. You only need ONE of them 32 | to approve in order to merge your PR 33 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | .DS_Store -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | TimescaleDB (TM) Documentation 2 | 3 | Copyright (c) 2017-2021 Timescale, Inc. All Rights Reserved. 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # README # 2 | 3 | ***THIS REPOSITORY IS DEPRECATED. FOR DOCS CHANGES, USE https://github.com/timescale/docs INSTEAD*** 4 | 5 | This is the source for content for docs.timescale.com. 6 | The docs site uses this repo as a submodule and converts the files directly into 7 | pages using a bash script and markdown parser. 8 | 9 | All files are written in standard markdown. 10 | 11 | ## Contributing 12 | 13 | We welcome and appreciate any help the community can provide to make 14 | TimescaleDB's documentation better! 15 | 16 | You can help either by opening an 17 | [issue](https://github.com/timescale/docs.timescale.com-content/issues) with 18 | any suggestions or bug reports, or by forking this repository, making your own 19 | contribution, and submitting a pull request. 20 | 21 | Before we accept any contributions, Timescale contributors need to 22 | sign the [Contributor License Agreement](https://cla-assistant.io/timescale/docs.timescale.com-content) (CLA). 23 | By signing a CLA, we can ensure that the community is free and confident in its 24 | ability to use your contributions. 25 | 26 | ## Docs versions 27 | 28 | There is a version of the docs for each supported version of the database, stored in 29 | a separate git branch. Our docs site parses those branches to allow users to choose 30 | what version of the docs they want to see. When submitting pull requests, you should determine 31 | what versions of the docs your changes will apply to and attach a label to the pull request 32 | that denotes the earliest version that your changes should apply to (`0.9`, `0.10`, `1.0`, etc.) 33 | The admin for the docs will use that as a guide when updating version branches. 34 | 35 | ### A note on page links 36 | 37 | None of the internal page links within these files will work on GitHub. They are designed to function within the code for the documentation site at [docs.timescale.com](http://docs.timescale.com). All external links should work. 38 | 39 | ### A note on anchors 40 | 41 | If you want to link to a specific part of the page from the docs sidebar, you 42 | need to place a special anchor `[](anchor_name)`. 43 | 44 | **Your anchor name must be unique** in order for the highlight scrolling to work properly. 45 | 46 | ### A note on code blocks 47 | When showing commands being entered from a command line, do not include a 48 | character for the prompt. 
Do this: 49 | 50 | ```bash 51 | some_command 52 | ``` 53 | 54 | instead of this: 55 | ```bash 56 | $ some_command 57 | ``` 58 | 59 | or this: 60 | ```bash 61 | > some_command 62 | ``` 63 | 64 | Otherwise the code highlighter may be disrupted. 65 | 66 | ### General formatting conventions 67 | 68 | To maintain consistency, please follow these general rules. 69 | 1. Make sure to add line breaks to your paragraphs so that your PRs are readable 70 | in the browser. 71 | 1. All links should be reference-style links where the link address is at the 72 | bottom of the page. The only exceptions are links to anchors on the same page 73 | as the link itself. 74 | 1. All functions, commands and standalone function arguments (ex. `SELECT`, 75 | `time_bucket`) should be set as inline code within backticks ("\`command\`"). 76 | 1. Functions should not be written with parentheses unless the function is 77 | being written with arguments within the parentheses. 78 | 1. "PostgreSQL" is the way to write the elephant database name, rather than 79 | "Postgres". "TimescaleDB" refers to the database, "Timescale" refers to the 80 | company. 81 | 1. Use single quotes when referring to the object of a user interface action. 82 | For example: Click 'Get started' to proceed with the tutorial. 83 | 84 | ### Special rules 85 | There are some custom modifications to the markdown parser to allow for special 86 | formatting within the docs. 87 | 88 | + Adding `sss ` to the start of every list item in an ordered list will result in 89 | a switch to "steps" formatting which is used to denote instructional steps, as 90 | for a tutorial. 91 | + Adding `>:TIP: ` to the start of a blockquote (using '>') will create a "tip" callout. 92 | + Adding `>:WARNING: ` to the start of a blockquote (using '>') will create a "warning" callout. 93 | + Adding `>:TOPLIST: ` as the first line of a blockquote (using '>') will 94 | create a fixed right-oriented box, useful for a table of contents or list of 95 | functions, etc. See the FAQ page (faq.md) for an example. 96 | - The first headline in the toplist will act as the title and will be separated from the remainder of the content stylewise (on the FAQ page, it's the headline "Questions"). 97 | - Everything else acts as a normal blockquote does. 98 | + Adding a text free link to a header with a text address (Ex. `## Important Header [](indexing)`) will create an anchor icon that links to that header with the hash name of the text. 99 | + Adding `:FOOTER_LINK: ` to the start of a paragraph(line) will format it as a "footer link". 100 | + Adding `:DOWNLOAD_LINK: ` to the start of a link will append a 'download link' icon to the end of the link inline. 101 | + Adding `x.y.z` anywhere in the text will be replaced by the version number of the branch. Ex. `look at file foo-x.y.z` >> `look at file foo-0.4.2`. 102 | + Adding `:pg_version:` to text displayed in an installation section (i.e. any page with a filename beginning `installation-`) will display the PostgreSQL version number. This is primarily to be used for displayed filenames in install instructions that need to be modular based on the version. 103 | + Designating functions 104 | + Adding `:community_function:` to a header (for example, in the api section) adds decorator text "community function". 105 | 106 | _Make sure to include the space after the formatting command!_ 107 | 108 | **Warning**: Note the single space required in the special formats before adding 109 | normal text. 
Adding ':TIP:' or ':WARNING:' to the start of any standard paragraph will 110 | result in non-optimal html. The characters will end up on the outside of the 111 | paragraph tag. This is due to the way that the markdown parser interprets 112 | blockquotes with the new modifications. 113 | This will be fixed in future versions if it becomes a big issue, but we don't 114 | anticipate that. 115 | 116 | ### Editing the API section 117 | 118 | There is a specific format for the API section which consists of: 119 | - **Function name** with empty parentheses (if function takes arguments). Ex. `add_dimension()` 120 | - A brief, specific description of the function 121 | - Any warnings necessary 122 | - **Required Arguments** 123 | - A table with columns for "Name" and "Description" 124 | - **Optional Arguments** 125 | - A table with columns for "Name" and "Description" 126 | - Any specific instructions about the arguments, including valid types 127 | - **Sample Usage** 128 | - One or two literal examples of the function being used to demonstrate argument syntax. 129 | 130 | See the API file to get an idea. 131 | -------------------------------------------------------------------------------- /development.md: -------------------------------------------------------------------------------- 1 |

# Development

2 | 3 | 17 | -------------------------------------------------------------------------------- /getting-started.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | TimescaleDB is PostgreSQL for time-series data. TimescaleDB provides all 4 | the benefits of PostgreSQL, including: 5 | 6 | - Ability to coexist with other TimescaleDB databases and PostgreSQL databases on a PostgreSQL server 7 | - Full SQL as its primary interface language 8 | - All the standard database objects (like tables, indexes, triggers, and more) 9 | - Ability to use the entire PostgreSQL ecosystem of third-party tools 10 | 11 | The way the database accomplishes this synchronicity is through its packaging 12 | as a PostgreSQL extension, whereby a standard PostgreSQL database is 13 | transformed into a TimescaleDB database. 14 | 15 | hierarchy illustration 16 | 17 | But TimescaleDB improves upon PostgreSQL for handling time-series data. 18 | These advantages are most easily seen when interacting with 19 | [hypertables][hypertables], which behave like normal tables yet maintain 20 | high performance even while scaling storage to normally prohibitive amounts of data. 21 | Hypertables can engage in normal table operations, including JOINs with standard 22 | tables. 23 | 24 | If you know PostgreSQL, you are 90% of the way to knowing TimescaleDB. If 25 | you want to learn more, here are some additional resources: 26 | 27 | - [Learn more about time-series data][time-series-data] and how you can best use it for your applications. 28 | - Learn more about [the TimescaleDB architecture][timescaledb-architecture]. 29 | - Learn more about [the unique features of TimescaleDB][using-timescaledb]. 30 | 31 | ### Options for installing TimescaleDB 32 | 33 | The best way to get TimescaleDB is through our hosted offering. You can 34 | [try TimecaleDB for free][try-for-free] and get started in seconds. Hosted 35 | TimescaleDB lets you focus on your workloads while we handle the operations 36 | and management of your critical time-series data. TimescaleDB is available in 37 | the three top cloud providers (Amazon Web Services, Microsoft Azure, and 38 | Google Cloud Platform) across 75+ regions and over 2000 different configurations. 39 | 40 | You can also [install TimescaleDB][install-timescale] on your desktop or 41 | self-managed in your own infrastructure for free. 42 | 43 | ### Getting familiar with TimescaleDB 44 | 45 | There are a lot of things TimescaleDB can do for you and your time-series data. 
46 | Here are some of our favorite features, along with links to learn more: 47 | 48 | - [Migrating your data to a hypertable][migrate] (optional) 49 | - Analyze your data using advanced [time-series analytical functions][time-series-functions] (e.g., gap filling, LOCF, interpolation) 50 | - [Native compression][using-compression] can reduce storage by up to 90%, saving you a significant amount of money on your time-series deployment 51 | - [Continuous aggregates][using-continuous-aggregates] automatically calculate the results of a query in the background and materialize the results 52 | - [Data retention][using-data-retention] policies allow you to decide how long raw data is kept, separately from data rollups stored in Continuous Aggregates 53 | - Achieve petabyte scale with [Multi-node][multinode] (distributed hypertables) 54 | - Support for [high cardinality][high-cardinality] datasets 55 | - Support for all PostgreSQL extensions, such as [PostGIS][hello-timescale] 56 | - Compatibility with [Grafana][grafana-tutorials], [Tableau][tableau-tutorials], and most visualization tools 57 | - Support for [VPC Peering][vpc-peering] (in Timescale Cloud) 58 | - [SSL Support for database connections][ssl-support] and better security 59 | 60 | ### Using TimescaleDB 61 | 62 | The best way to gain familiarity with TimescaleDB is to use it. The following 63 | tutorials (complete with sample data) will help you learn how to harness the 64 | power of your time-series data and give you a guided tour of TimescaleDB. 65 | 66 | - Start with [Hello Timescale][hello-timescale], our 20-minute guided tour of TimescaleDB 67 | - Many people use visualization tools with their time-series data, and our [Grafana tutorials][grafana-tutorials] will walk you through these steps 68 | - We’ve also built [other tutorials][all-tutorials] for language-specific developers, data migration, and more 69 | 70 | ### Need help? 
71 | 72 | Our world-class support team is here to support you through multiple channels: 73 | 74 | - Join our [Community Slack][slack-community] and get to know your fellow time-series developers 75 | - Consider [paid support options][paid-support] for a deeper relationship with Timescale engineers 76 | - Join our [worldwide TimescaleDB community][community-options] and stay on top of the latest developments in time-series data 77 | 78 | 79 | [time-series-data]: /introduction/time-series-data 80 | [timescaledb-architecture]: /introduction/architecture 81 | [hypertables]: /introduction/architecture#hypertables 82 | [using-timescaledb]: /using-timescaledb 83 | [try-for-free]: https://www.timescale.com/timescale-signup 84 | [install-timescale]: /getting-started/installation 85 | [migrate]: /getting-started/migrating-data 86 | [time-series-functions]: https://blog.timescale.com/blog/sql-functions-for-time-series-analysis/ 87 | [using-compression]: /using-timescaledb/compression 88 | [using-continuous-aggregates]: /using-timescaledb/continuous-aggregates 89 | [using-data-retention]: /using-timescaledb/data-retention 90 | [multinode]: /getting-started/setup-multi-node-basic 91 | [high-cardinality]: https://blog.timescale.com/blog/what-is-high-cardinality-how-do-time-series-databases-influxdb-timescaledb-compare/ 92 | [vpc-peering]: https://kb.timescale.cloud/en/articles/2752394-using-vpc-peering 93 | [ssl-support]: https://kb.timescale.cloud/en/articles/2752457-ssl-tls-certificates 94 | [hello-timescale]: /tutorials/tutorial-hello-timescale 95 | [grafana-tutorials]: /tutorials/tutorial-grafana 96 | [tableau-tutorials]: /tutorials/visualizing-time-series-data-in-tableau 97 | [telegraf-tutorials]: /tutorials/telegraf-output-plugin 98 | [all-tutorials]: /tutorials 99 | [slack-community]: https://slack.timescale.com/ 100 | [paid-support]: https://www.timescale.com/support 101 | [community-options]: https://www.timescale.com/community 102 | -------------------------------------------------------------------------------- /getting-started/creating-hypertables.md: -------------------------------------------------------------------------------- 1 | # Creating Hypertables 2 | 3 | The primary point of interaction with your data is a hypertable, 4 | the abstraction of a single continuous table across all space and time intervals, such that one can query it via vanilla SQL. 5 | 6 | >:TIP: First make sure that you have properly [installed][] **AND [setup][]** TimescaleDB within your PostgreSQL instance. 7 | 8 | ### Creating a (Hyper)table [](create-hypertable) 9 | To create a hypertable, you start with a regular SQL table, and then convert 10 | it into a hypertable via the function [`create_hypertable`][create_hypertable]. 11 | 12 | The following example creates a hypertable for tracking 13 | temperature and humidity across a collection of devices over time. 14 | 15 | ```sql 16 | -- We start by creating a regular SQL table 17 | 18 | CREATE TABLE conditions ( 19 | time TIMESTAMPTZ NOT NULL, 20 | location TEXT NOT NULL, 21 | temperature DOUBLE PRECISION NULL, 22 | humidity DOUBLE PRECISION NULL 23 | ); 24 | ``` 25 | 26 | Next, transform it into a hypertable with `create_hypertable`: 27 | 28 | ```sql 29 | -- This creates a hypertable that is partitioned by time 30 | -- using the values in the `time` column. 
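-- (Optional) A chunk time interval can also be set explicitly at creation time, for example:
--   SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');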
31 | 32 | SELECT create_hypertable('conditions', 'time'); 33 | ``` 34 | 35 | >:TIP: The 'time' column used in the `create_hypertable` function supports 36 | timestamp, date, or integer types, so you can use a parameter that is not 37 | explicitly time-based, as long as it can increment. For example, a 38 | monotonically increasing id would work. You must specify a chunk time interval 39 | when creating a hypertable if you use a monotonically increasing id. 40 | 41 | >:TIP: If you want to use [distributed hypertables][create_distributed_hypertable] in a multinode 42 | TimescaleDB setup, refer to the [scaling out][scaling-out] section for more information. 43 | 44 | ### Inserting & Querying [](inserting-querying) 45 | Inserting data into the hypertable is done via normal SQL `INSERT` commands, 46 | e.g. using millisecond timestamps: 47 | ```sql 48 | INSERT INTO conditions(time, location, temperature, humidity) 49 | VALUES (NOW(), 'office', 70.0, 50.0); 50 | ``` 51 | 52 | Similarly, querying data is done via normal SQL `SELECT` commands. 53 | ```sql 54 | SELECT * FROM conditions ORDER BY time DESC LIMIT 100; 55 | ``` 56 | 57 | SQL `UPDATE` and `DELETE` commands also work as expected. For more 58 | examples of using TimescaleDB's standard SQL interface, see our 59 | [use pages][]. 60 | 61 | [installed]: /getting-started/installation 62 | [setup]: /getting-started/setup 63 | [create_hypertable]: /api#create_hypertable 64 | [use pages]: /using-timescaledb 65 | [create_distributed_hypertable]: /api#create_distributed_hypertable 66 | [scaling-out]: /getting-started/scaling-out 67 | -------------------------------------------------------------------------------- /getting-started/exploring-forge.md: -------------------------------------------------------------------------------- 1 | # Exploring Timescale Forge 2 | 3 | Welcome to Timescale Forge! Timescale Forge combines the power and reliability 4 | of TimescaleDB with a fully-managed, cloud-native experience that is easy to 5 | start and less expensive to operate. 6 | 7 | This tutorial will walk you through setting up your Timescale Forge account and 8 | completing your first tutorial project. 9 | 10 | ### Step 1: Create a Timescale Forge account [](step1-create-account) 11 | 12 | Sign up for Timescale Forge by visiting [forge.timescale.com][forge-signup]. 13 | 14 | Provide your full name, email address, and a strong password to start: 15 | 16 | Sign up for Timescale Forge 17 | 18 | You will need to confirm your account by clicking the link you receive via 19 | email. If you do not receive this link, please first check your spam folder 20 | and, failing that, please [contact us][contact-timescale]. 21 | 22 | ### Step 2: Create your first service [](step2-create-service) 23 | 24 | After you complete account verification, you can visit the 25 | [Timescale Forge console][forge-console] and login with your credentials. 26 | 27 | To begin, click 'Create service'. 28 | 29 | Set up a Timescale Forge service 30 | 31 | 1. First, supply your service name (e.g., `acmecorp-test` or `acmecorp-dev`). 32 | 1. Next, choose your CPU and memory configuration, from (0.25 CPU, 1GB RAM) to 33 | (8 CPU, 32 GB RAM). 34 | 1. Select your storage requirements, from 10 GB to 4 TB. Note that with TimescaleDB 35 | compression, this is typically equivalent to 170 GB to 67+ TB of uncompressed 36 | storage (although compression rates can vary based on your data). 37 | 1. Note the estimated cost of running your chosen configuration. 
Feel free to 38 | [contact us][contact-timescale] if you would like to discuss pricing and 39 | configuration options best suited for your use case. 40 | 1. Click 'Create service' once your configuration is complete. 41 | 42 | >:TIP:Don't worry if too much about the size settings that you choose initially. With Timescale Forge, 43 | it's easy to modify both the compute (CPU/Memory) and storage associated with the service 44 | that you just created. As you get to know TimescaleDB and how your data processing needs vary, 45 | it's easy to [right-size your service with a few clicks](#forge-resize)! 46 | 47 | After you select 'Create service', you will see confirmation of your service account and 48 | password information. You should save the information in this confirmation screen in 49 | a safe place: 50 | 51 | View Timescale Forge service information 52 | 53 | >:WARNING: If you forget your password in the future, you can reset your password from the *service dashboard*. 54 | 55 | It will take a couple minutes for your service to be provisioned. When your database is 56 | ready for connection, you should see a green `Running` label above the service in the 57 | service dashboard. 58 | 59 | View all Timescale Forge services 60 | 61 | Select any service to view *service details*. You can obtain connection, 62 | configuration, and utilization information. In addition, you can reset the 63 | password for your service, power down or power up any service (which stops 64 | or starts your compute, although your storage persists), or delete 65 | a service altogether. 66 | 67 | View Timescale Forge service information 68 | 69 | ### Step 3: Complete your first tutorial [](step3-tutorial) 70 | 71 | Congratulations! You are now up and running with Timescale Forge. In order to 72 | familiarize yourself with the features and capabilities of the product, we 73 | recommend that you complete the [Hello, Timescale!][hello-timescale] tutorial. 74 | 75 | To simplify operations with TimescaleDB, each Timescale Forge service comprises a 76 | single "database" per PostgreSQL terminology, and all Timescale Forge services 77 | come with TimescaleDB already installed. So skip the `CREATE DATABASE` step 78 | and the "adding the TimescaleDB extension" step of the tutorial and 79 | jump right to the "Define your data schema" section of the [Hello, Timescale!][hello-timescale] 80 | tutorial. Wherever the instructions indicate that you should use the `nyc_data` 81 | database, use `tsdb` instead. `tsdb` is the default database name for every 82 | Timescale Forge service. And if you need another database, it's just a click away. 83 | 84 | ### Step 4: Learn more about TimescaleDB 85 | 86 | Read about TimescaleDB features in our documentation: 87 | 88 | - Create your first ”[hypertable][hypertable-info]”. 89 | - Run your first query using [time_bucket()][time-bucket-info]. 90 | - Trying more advanced time-series functions, starting with [gap filling][gap-filling-info] or [real-time aggregates][aggregates-info]. 91 | 92 | ### Step 5: Keep testing during your free trial and enter your billing information when you’re ready 93 | 94 | You’re now on your way to a great start with Timescale! 95 | 96 | You will have an unthrottled, 30-day free trial with Timescale Forge to 97 | continue to test your use case. Before the end of your trial, we encourage you 98 | to add your credit card information. This will ensure a smooth transition after 99 | your trial period concludes. 
100 | 101 | ### Summary 102 | 103 | We’re excited to play a small part in helping you build a best-in-class 104 | time-series application or monitoring tool. If you have any questions, please 105 | feel free to [join our community Slack group][slack-info] 106 | or [contact us][contact-timescale] directly. 107 | 108 | Now, it's time to forge! 109 | 110 | ## Advanced configuration and Multi-node setup 111 | Timescale Forge is a versatile hosting service that provides a growing list of 112 | advanced features for your PostgreSQL and time-series data workloads. 113 | 114 | Please see additional documentation on how to: 115 | * [Resize compute and storage][resize] at any time! 116 | * [Customize your database configuration][configuration] easily! 117 | * [Create a TimescaleDB multi-node cluster][multi-node] in Timescale Forge! 118 | 119 | [forge-signup]: https://forge.timescale.com 120 | [billing-info]: /forge/managing-billing-payments 121 | [slack-info]: https://slack-login.timescale.com 122 | [install-psql]: /getting-started/install-psql-tutorial 123 | [hello-timescale]: /tutorials/tutorial-hello-timescale 124 | [forge-console]: https://console.forge.timescale.com/login 125 | [contact-timescale]: https://www.timescale.com/contact 126 | [hypertable-info]: https://docs.timescale.com/latest/using-timescaledb/hypertables 127 | [time-bucket-info]: https://docs.timescale.com/latest/using-timescaledb/reading-data#time-bucket 128 | [gap-filling-info]: https://docs.timescale.com/latest/using-timescaledb/reading-data#gap-filling 129 | [aggregates-info]: https://docs.timescale.com/latest/tutorials/continuous-aggs-tutorial 130 | [resize]: /getting-started/exploring-forge/forge-resize 131 | [configuration]: /getting-started/exploring-forge/forge-configuration 132 | [multi-node]: /getting-started/exploring-forge/forge-multi-node 133 | -------------------------------------------------------------------------------- /getting-started/exploring-labs.md: -------------------------------------------------------------------------------- 1 | # Setup [](setup) 2 | Now that you’ve been invited to Timescale Labs, you’re ready to work 3 | with some data. The first thing to do is to create a new database. 4 | Timescale Labs is a PostgreSQL database with TimescaleDB extensions 5 | already installed. Timescale Labs automatically provisions and manages 6 | a TimescaleDB instance in Amazon Web Services. 7 | 8 | >:WARNING: Timescale Labs is in private alpha. [Request an invite][invite-request] . We will keep this document updated as we add features, but the screenshots and instructions may be temporarily out of date. 9 | 10 | ## Sign in to Timescale Labs [](signin) 11 | Sign in to Timescale Labs and you’ll see that a `timescale_demo` 12 | service is automatically created for you. Click on the service 13 | and you’ll see a screen similar to this: 14 | 15 | timescale labs config screen 16 | 17 | ## Connect to your Timescale Labs database [](connect) 18 | To connect to the database, you’ll need to make sure the `psql` 19 | utility is installed on your command line. Follow the instructions for 20 | your platform in order to 21 | [setup the psql command-line utility][setup-psql]). 22 | 23 | Take note of the hostname and password (blacked out in the screenshot 24 | above) for your Timescale Labs instance. 25 | 26 | Connecting to your Timescale database is as easy as copying and pasting 27 | your service URL. 
Note that the link you copied from the start screen 28 | will look different than the one below; `example-password`, `example-host`, 29 | and `example-port` will be replaced with actual values for your instance. 30 | 31 | ```bash 32 | psql postgres://tsdbadmin:example-password@example-host:example-port/tsdb?sslmode=require 33 | ``` 34 | 35 | ## Upload data to Timescale Labs [](upload) 36 | Suppose you want to setup schema and upload data to your Timescale database, 37 | as described in the 38 | [Sample Datasets tutorial][sample-datasets-tutorials]. 39 | 40 | You can experiment with a Device Ops dataset, called 41 | [:DOWNLOAD_LINK: `devices_small`][devices-small-dataset] 42 | (representing metrics collected from mobile devices, like CPU, memory, network 43 | etc) and/or a Weather dataset, called 44 | [:DOWNLOAD_LINK: `weather_small`][weather-small-dataset] 45 | (representing temperature and humidity data from a variety of locations). 46 | 47 | For the Device Ops dataset, you’d first set up your schema: 48 | ```bash 49 | psql -x "postgres://tsdbadmin:example-password@example-host:example-port/tsdb?sslmode=require" < devices.sql 50 | ``` 51 | 52 | Enter your password when prompted, then connect to your database with psql 53 | and copy in data to the appropriate tables: 54 | 55 | ```bash 56 | \COPY readings FROM devices_small_readings.csv CSV 57 | \COPY device_info FROM devices_small_device_info.csv CSV 58 | ``` 59 | 60 | Similarly, for the Weather dataset, you’d set up your schemas as follows: 61 | 62 | ```bash 63 | psql -x "postgres://tsdbadmin:example-password@example-host:example-port/tsdb?sslmode=require" < weather.sql 64 | ``` 65 | 66 | And then connect to psql and copy in data to the appropriate tables: 67 | 68 | ```bash 69 | \COPY conditions FROM weather_small_conditions.csv CSV 70 | \COPY locations FROM weather_small_locations.csv CSV 71 | ``` 72 | 73 | # Next Steps [](nextsteps) 74 | To get the most out of your Timescale Labs experience, follow the 75 | [Device Ops tutorial][device-ops-tutorial] or the 76 | [Weather data tutorial][device-weather-tutorial]. 77 | We've designed TimescaleDB with simplicity in mind, and these tutorials will 78 | get you up and running quickly. 79 | 80 | If you have questions about Timescale Labs, want to provide 81 | feedback, or need help with more advanced setup requirements, please reach 82 | out to us [on our community Slack](https://timescaledb.slack.com/archives/CRG0JJ6AF). 83 | 84 | Remember, Timescale Labs is in private alpha. Don’t be shy! 85 | Tell us what you want to see us build next. 
86 | 87 | [invite-request]: https://labs.timescale.com/ 88 | [setup-psql]: https://blog.timescale.com/tutorials/how-to-install-psql-on-mac-ubuntu-debian-windows/ 89 | [sample-datasets-tutorials]: /tutorials/other-sample-datasets 90 | [devices-small-dataset]: https://timescaledata.blob.core.windows.net/datasets/devices_small.tar.gz 91 | [weather-small-dataset]: https://timescaledata.blob.core.windows.net/datasets/weather_small.tar.gz 92 | [device-ops-tutorial]: /tutorials/other-sample-datasets#in-depth-devices 93 | [device-weather-tutorial]: /tutorials/other-sample-datasets#in-depth-weather) -------------------------------------------------------------------------------- /getting-started/forge-configuration.md: -------------------------------------------------------------------------------- 1 | # Customize database configuration in Timescale Forge 2 | 3 | Timescale Forge allows you to customize many TimescaleDB and PostgreSQL configuration 4 | options for each Service individually. Most configuration values for a Service 5 | are initially set in accordance with best practices given the compute and storage 6 | settings of the Service. Any time you increase or decrease the compute for a Service 7 | the most essential values are set to reflect the size of the new Service. 8 | 9 | There are times, however, when your specific workload may require tuning some of 10 | the many available TimescaleDB and PostgreSQL parameters. By providing the ability 11 | to tune various runtime settings, Timescale Forge provides the balance and flexibility you need when running your workloads 12 | in our hosted environment. 13 | 14 | >:WARNING: Modifications of most parameters can be applied without restarting 15 | the Timescale Forge Service. However, as when modifying the compute resources 16 | of a running Service, some settings will require that a restart be performed, 17 | resulting in some brief downtime (usually about 30 seconds). 18 | 19 | ### Step 1: View Service operation details [](service-details) 20 | To modify configuration parameters, first select the Service that 21 | you want to modify. This will display the _service details_ which list tabs 22 | across the top: Overview, Operations, Metrics, Logs, and Settings. 23 | 24 | Select **_Settings_**. 25 | 26 | View Timescale Forge service operational information 27 | 28 | ### Step 2: Modify basic parameters [](basic-parameters) 29 | Under the Settings tab, you can modify a limited set of the parameters that are 30 | most often modified in a TimescaleDB or PostgreSQL instance. To modify a 31 | configured value, simply click into on the **_value_** that you would like to 32 | change. This will reveal an editable field to apply your change. Clicking anywhere 33 | outside of that field will save the value to be applied. 34 | 35 | View Timescale Forge service settings modification 36 | 37 | ### Step 3: Apply configuration changes [](apply-changes) 38 | Once you have modified the basic configuration parameters that you would like to 39 | change, click the **Apply Changes** button. For some changes, such as `timescaledb.max_background_workers` 40 | (pictured below), the Service needs to be restarted. Therefore, the 41 | button will read **Restart and apply changes**. 42 | 43 | View Timescale Forge service apply settings parameter changes 44 | 45 | Regardless of whether the Service needs to be restarted or not, a confirmation 46 | dialog will be displayed which lists the parameters that will be modified. 
Click 47 | **Confirm** to apply the changes (and restart if necessary). 48 | 49 | View Timescale Forge service configuration changes confirmation dialog 50 | 51 | 52 | ## Configuring Advanced Parameters [](advanced-parameters) 53 | It is also possible to configure a wide variety of Service database parameters 54 | by flipping the **Show advanced parameters** toggle in the upper-right corner 55 | of the **Settings** tab. 56 | 57 | View Timescale Forge service configuration changes confirmation dialog 58 | 59 | Once toggled, a scrollable (and searchable) list of configurable parameters will 60 | be displayed. 61 | 62 | View Timescale Forge service configuration changes confirmation dialog 63 | 64 | As with the basic database configuration parameters, any changes will be highlighted 65 | and the **Apply changes** (or **Restart and apply changes**) button will be 66 | available to click, prompting you to confirm any changes before the Service is 67 | modified. 68 | -------------------------------------------------------------------------------- /getting-started/forge-resize.md: -------------------------------------------------------------------------------- 1 | # Resizing Compute and Storage in Timescale Forge 2 | 3 | Timescale Forge allows you to resize compute (CPU/RAM) and storage independently 4 | at any time. This is extremely useful when users have a need to increase storage 5 | (for instance) but not compute. The Timescale Forge console makes this very easy 6 | to do for any service. 7 | 8 | Before you modify the compute or storage settings for a Forge Service, please 9 | note the following limitations and when a change to these settings will result in momentary downtime. 10 | 11 | **Storage**: Storage changes are applied with no downtime, typically available 12 | within a few seconds. Other things to note about storage changes: 13 | * At the current time, storage can only be _increased_ in size. 14 | * Storage size changes can only be made once every six (6) hours. 15 | * Storage can be modified in various increments between 25GB and 4TB. 16 | 17 | **Compute**: Modifications to the compute size of your service (increases or 18 | decreases) can be applied at any time, however, please note the following: 19 | * **_There will be momentary downtime_** while the compute settings are applied. 20 | In most cases, this downtime will be less than 30 seconds. 21 | * Because there will be an interruption to your service, you should plan 22 | accordingly to have the settings applied at an appropriate service window. 23 | 24 | ## Step 1: View Service operation details [](service-details) 25 | To modify the compute or storage of your Service, first select the Service that 26 | you want to modify. This will display the _service details_ which list four tabs 27 | across the top: Overview, Operations, Metrics, and Logs. 28 | 29 | Select **_Operations_**. 30 | 31 | View Timescale Forge service operational information 32 | 33 | ## Step 2: Display the current Service Resources [](service-resources) 34 | Under the Operations tab, you can perform the same **Basic** operations as before 35 | (Reset password, Pause service, Delete service). There is now a second, advanced 36 | section on the left labeled **Resources**. Selecting this option displays the 37 | current resource settings for the Service. 
38 | 39 | View Timescale Forge service resource information 40 | 41 | ## Step 3: Modify Service resources [](modify-resources) 42 | Once you have navigated to the current Service resources, it's easy to modify 43 | either the compute (CPU/Memory) or disk size. As you modify either setting, 44 | notice that the current and new hourly charges are displayed in real-time 45 | so that it's easy to verify how these changes will impact your costs. 46 | 47 | As noted above, changes to disk size will not cause any downtime. However, 48 | the platform currently only supports _increasing_ disk size (not decreasing it), 49 | and you can increase disk size once every six (6) hours. 50 | 51 | When you're satisfied with the changes, click **Apply** (storage resizes only) or **Apply and Restart** (when modifying compute resources). 52 | 53 | View Timescale Forge service apply resize 54 | -------------------------------------------------------------------------------- /getting-started/install-psql-tutorial.md: -------------------------------------------------------------------------------- 1 | # Tutorial: How to install psql on Mac, Ubuntu, Debian, Windows 2 | 3 | ### Introduction 4 | `psql` is the standard command line interface for interacting with a PostgreSQL 5 | or TimescaleDB instance. Here we explain how to install `psql` on various platforms. 6 | 7 | ### Before you start 8 | Before you start, you should confirm that you don’t already have `psql` installed. 9 | In fact, if you’ve ever installed Postgres or TimescaleDB before, you likely already 10 | have `psql` installed. 11 | 12 | ```bash 13 | psql --version 14 | ``` 15 | 16 | ### Install on macOS using Homebrew 17 | First, install the [Brew Package Manager][brew-package-manager]. Homebrew simplifies 18 | the installation of software on macOS. 19 | 20 | Second, update `brew`. From your command line, run the following commands: 21 | 22 | ```bash 23 | brew doctor 24 | brew update 25 | brew install libpq 26 | ``` 27 | 28 | Finally, create a symbolic link to `psql` (and other `libpq` tools) into `/usr/local/bin` 29 | so that you can reach it from any command on the macOS Terminal. 30 | 31 | ```bash 32 | brew link --force libpq ail 33 | ``` 34 | 35 | ### Install on Ubuntu 16.04,18.04 and Debian 9,10 36 | Install on Ubuntu and Debian using the `apt` package manager: 37 | 38 | ```bash 39 | sudo apt-get update 40 | sudo apt-get install postgresql-client 41 | ``` 42 | 43 | >:TIP: This only installs the `psql` client and not the PostgreSQL database. 44 | 45 | ### Install on Windows 10 46 | We recommend using the installer from [PostgreSQL.org][windows-installer]. 47 | 48 | ### Last step: Connect to your PostgreSQL server 49 | Let’s confirm that `psql` is installed: 50 | 51 | ```bash 52 | psql --version 53 | ``` 54 | 55 | Now, in order to connect to your PostgreSQL server, you’ll need the following 56 | connection parameters: 57 | - Hostname 58 | - Port 59 | - Username 60 | - Password 61 | - Database name 62 | 63 | There are two ways to use these parameters to connect to your PostgreSQL database. 64 | 65 | #### Option 1: Supply parameters at the command line 66 | In this method, use parameter flags on the command line to supply the required 67 | information to connect to a PostgreSQL database: 68 | 69 | ```bash 70 | psql -h HOSTNAME -p PORT -U USERNAME -W -d DATABASENAME 71 | ``` 72 | 73 | Once you run that command, the prompt will ask you for your password. (This is the purpose 74 | of the `-W` flag.) 
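For example, with placeholder values substituted for your own connection details (the
hostname below is purely illustrative; `tsdbadmin` and `tsdb` are the defaults used by
Timescale's hosted services):

```bash
# Replace the placeholder host, port, user, and database with your own values
psql -h mydb.example.com -p 5432 -U tsdbadmin -W -d tsdb
```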
75 | 76 | #### Option 2: Use a service URI 77 | The Service URI begins with `postgres://`. 78 | 79 | ```bash 80 | psql postgres://[USERNAME]:[PASSWORD]@[HOSTNAME]:[PORT]/[DATABASENAME]?sslmode=require 81 | ``` 82 | 83 | ### Fun things to do with psql 84 | 85 | #### Common psql commands 86 | Here is a table of common commands you'll find yourself using a lot: 87 | 88 | | Command | Actions | 89 | |---------------|------------------------------------------| 90 | |`\l` | List available databases | 91 | |`\c dbname` | Connect to a new database | 92 | |`\dt` | List available tables | 93 | |`\d tablename` | Describe the details of given table | 94 | |`\dn` | List all schemas in the current database | 95 | |`\df` | List functions in the current database | 96 | |`\h` | Get help on syntax of SQL commands | 97 | |`\?` | Lists all `psql` slash commands | 98 | |`\set` | System variables list | 99 | |`\timing` | Shows how long a query took to execute | 100 | |`\x` | Show expanded query results | 101 | |`\q` | Quit `psql` | 102 | 103 | #### Save results of a query to a comma-separated file 104 | You may often find yourself running SQL queries with lengthy results. You can save these 105 | results to a comma-separated file (CSV) using the `COPY` command: 106 | 107 | ```sql 108 | \copy (SELECT * FROM ...) TO '/tmp/myoutput.csv' (format CSV); 109 | ``` 110 | 111 | You would then be able to open `/tmp/myoutput.csv` using any spreadsheet or similar 112 | program that reads CSV files. 113 | 114 | #### Edit a SQL query in an editor 115 | Sometimes you may find yourself writing a lengthy query such as this one from our 116 | [Hello Timescale!][hello-timescale] tutorial: 117 | 118 | ```sql 119 | -- For each airport: num trips, avg trip duration, avg cost, avg tip, avg distance, min distance, max distance, avg number of passengers 120 | SELECT rates.description, COUNT(vendor_id) AS num_trips, 121 | AVG(dropoff_datetime - pickup_datetime) AS avg_trip_duration, AVG(total_amount) AS avg_total, 122 | AVG(tip_amount) AS avg_tip, MIN(trip_distance) AS min_distance, AVG (trip_distance) AS avg_distance, MAX(trip_distance) AS max_distance, 123 | AVG(passenger_count) AS avg_passengers 124 | FROM rides 125 | JOIN rates ON rides.rate_code = rates.rate_code 126 | WHERE rides.rate_code IN (2,3) AND pickup_datetime < '2016-02-01' 127 | GROUP BY rates.description 128 | ORDER BY rates.description; 129 | ``` 130 | 131 | It would be pretty common to make an error the first couple of times you attempt to 132 | write something that long in SQL. Instead of re-typing every line or character, 133 | you can launch a `vim` editor using the `\e` command. Your previous command can 134 | then be edited, and once you save ("Escape-Colon-W-Q") your edits, the command will 135 | appear in the buffer. You will be able to get back to it by pressing the up arrow 136 | in your Terminal window. 137 | 138 | Congrats! Now you have connected via `psql`. 139 | 140 | [brew-package-manager]: https://brew.sh/ 141 | [windows-installer]: https://www.postgresql.org/download/windows/ 142 | [hello-timescale]: /tutorials/tutorial-hello-timescale -------------------------------------------------------------------------------- /getting-started/installation-apt-debian.md: -------------------------------------------------------------------------------- 1 | ## apt Installation (Debian) [](installation-apt-debian) 2 | 3 | This will install TimescaleDB via `apt` on Debian distros. 
4 | 5 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 6 | 7 | #### Prerequisites 8 | 9 | - Debian 9 (stretch) or 10 (buster) 10 | 11 | #### Build & Install 12 | 13 | >:WARNING: If you have another PostgreSQL installation not via `apt`, 14 | this will likely cause problems. 15 | If you wish to maintain your current version of PostgreSQL outside 16 | of `apt`, we recommend installing from source. Otherwise, please be 17 | sure to remove non-`apt` installations before using this method. 18 | 19 | **If you don't already have PostgreSQL installed**, add PostgreSQL's third 20 | party repository to get the latest PostgreSQL packages: 21 | ```bash 22 | # `lsb_release -c -s` should return the correct codename of your OS 23 | echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -c -s)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list 24 | wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - 25 | sudo apt-get update 26 | ``` 27 | 28 | Add TimescaleDB's third party repository and install TimescaleDB, 29 | which will download any dependencies it needs from the PostgreSQL repo: 30 | ```bash 31 | # Add our repository 32 | sudo sh -c "echo 'deb https://packagecloud.io/timescale/timescaledb/debian/ `lsb_release -c -s` main' > /etc/apt/sources.list.d/timescaledb.list" 33 | wget --quiet -O - https://packagecloud.io/timescale/timescaledb/gpgkey | sudo apt-key add - 34 | sudo apt-get update 35 | 36 | # Now install appropriate package for PG version 37 | sudo apt-get install timescaledb-2-postgresql-:pg_version: 38 | ``` 39 | 40 | #### Upgrading from TimescaleDB 1.x 41 | If you are upgrading from TimescaleDB 1.x, the `apt` package will first 42 | uninstall the previous version of TimescaleDB and then install the latest TimescaleDB 2.0 43 | binaries. The feedback in your terminal should look similar to the following: 44 | 45 | ```bash 46 | Reading package lists... Done 47 | Building dependency tree 48 | Reading state information... Done 49 | The following additional packages will be installed: 50 | timescaledb-2-loader-postgresql-12 51 | The following packages will be REMOVED: 52 | timescaledb-loader-postgresql-12 timescaledb-postgresql-12 53 | The following NEW packages will be installed: 54 | timescaledb-2-loader-postgresql-12 timescaledb-2-postgresql-12 55 | 0 upgraded, 2 newly installed, 2 to remove and 11 not upgraded. 56 | Need to get 953 kB of archives. 57 | After this operation, 1314 kB of additional disk space will be used. 58 | Do you want to continue? [Y/n] 59 | ``` 60 | 61 | Once you confirm and install the newest binary package, perform the 62 | EXTENSION update as discussed in [Updating Timescale to 2.0][update-tsdb-2]. 63 | 64 | #### Configure your database 65 | 66 | There are a [variety of settings that can be configured][config] for your 67 | new database. At a minimum, you will need to update your `postgresql.conf` 68 | file to include our library in the parameter `shared_preload_libraries`. 69 | The easiest way to get started is to run `timescaledb-tune`, which is 70 | installed by default when using `apt`: 71 | ```bash 72 | sudo timescaledb-tune 73 | ``` 74 | 75 | This will ensure that our extension is properly added to the parameter 76 | `shared_preload_libraries` as well as offer suggestions for tuning memory, 77 | parallelism, and other settings. 
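If you prefer to set this by hand instead of running `timescaledb-tune`, the essential
change is a single line in `postgresql.conf`. A minimal sketch follows; the file path is
the typical Debian layout and may differ on your system, and if you already preload other
libraries you should edit the existing line rather than append:

```bash
# Add timescaledb to shared_preload_libraries
# (edit the existing line instead if other libraries are already preloaded)
echo "shared_preload_libraries = 'timescaledb'" | sudo tee -a /etc/postgresql/:pg_version:/main/postgresql.conf
```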
78 | 79 | To get started you'll now need to restart PostgreSQL: 80 | ```bash 81 | # Restart PostgreSQL instance 82 | sudo service postgresql restart 83 | ``` 84 | 85 | [config]: /getting-started/configuring 86 | [contact]: https://www.timescale.com/contact 87 | [slack]: https://slack.timescale.com/ 88 | [multi-node-basic]: /getting-started/setup-multi-node-basic 89 | [update-tsdb-2]: /update-timescaledb/update-tsdb-2 90 | -------------------------------------------------------------------------------- /getting-started/installation-apt-ubuntu.md: -------------------------------------------------------------------------------- 1 | ## apt Installation (Ubuntu) [](installation-apt-ubuntu) 2 | 3 | This will install TimescaleDB via `apt` on Ubuntu distros. 4 | 5 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 6 | 7 | #### Prerequisites 8 | 9 | - Ubuntu 18.04 or later, except obsoleted versions. 10 | Check [releases.ubuntu.com][ubuntu-releases] for list of 11 | non-obsolete releases. 12 | 13 | #### Build & Install 14 | 15 | >:WARNING: If you have another PostgreSQL installation not via `apt`, 16 | this will likely cause problems. 17 | If you wish to maintain your current version of PostgreSQL outside 18 | of `apt`, we recommend installing from source. Otherwise, please be 19 | sure to remove non-`apt` installations before using this method. 20 | 21 | **If you don't already have PostgreSQL installed**, add PostgreSQL's third 22 | party repository to get the latest PostgreSQL packages (if you are using Ubuntu older than 19.04): 23 | ```bash 24 | # `lsb_release -c -s` should return the correct codename of your OS 25 | echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -c -s)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list 26 | wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - 27 | sudo apt-get update 28 | ``` 29 | 30 | Add TimescaleDB's third party repository and install TimescaleDB, 31 | which will download any dependencies it needs from the PostgreSQL repo: 32 | ```bash 33 | # Add our PPA 34 | sudo add-apt-repository ppa:timescale/timescaledb-ppa 35 | sudo apt-get update 36 | 37 | # Now install appropriate package for PG version 38 | sudo apt install timescaledb-2-postgresql-:pg_version: 39 | ``` 40 | 41 | #### Upgrading from TimescaleDB 1.x 42 | If you are upgrading from TimescaleDB 1.x, the `apt` package will first 43 | uninstall the previous version of TimescaleDB and then install the latest TimescaleDB 2.0 44 | binaries. The feedback in your terminal should look similar to the following: 45 | 46 | ```bash 47 | Reading package lists... Done 48 | Building dependency tree 49 | Reading state information... Done 50 | The following additional packages will be installed: 51 | timescaledb-2-loader-postgresql-12 52 | The following packages will be REMOVED: 53 | timescaledb-loader-postgresql-12 timescaledb-postgresql-12 54 | The following NEW packages will be installed: 55 | timescaledb-2-loader-postgresql-12 timescaledb-2-postgresql-12 56 | 0 upgraded, 2 newly installed, 2 to remove and 11 not upgraded. 57 | Need to get 953 kB of archives. 58 | After this operation, 1314 kB of additional disk space will be used. 59 | Do you want to continue? [Y/n] 60 | ``` 61 | 62 | Once you confirm and install the newest binary package, you must still perform the 63 | EXTENSION update as discussed in [Updating Timescale to 2.0][update-tsdb-2]. 
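For reference, the extension upgrade itself is a single SQL command run in each database
that uses TimescaleDB (see the linked update guide for the full procedure and caveats):

```sql
-- Run in every database that has the timescaledb extension installed
ALTER EXTENSION timescaledb UPDATE;
```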
64 | 65 | #### Configure your database 66 | 67 | There are a [variety of settings that can be configured][config] for your 68 | new database. At a minimum, you will need to update your `postgresql.conf` 69 | file to include our library in the parameter `shared_preload_libraries`. 70 | The easiest way to get started is to run `timescaledb-tune`, which is 71 | installed by default when using `apt`: 72 | ```bash 73 | sudo timescaledb-tune 74 | ``` 75 | 76 | This will ensure that our extension is properly added to the parameter 77 | `shared_preload_libraries` as well as offer suggestions for tuning memory, 78 | parallelism, and other settings. 79 | 80 | To get started you'll now need to restart PostgreSQL: 81 | ```bash 82 | # Restart PostgreSQL instance 83 | sudo service postgresql restart 84 | ``` 85 | 86 | [ubuntu-releases]: http://releases.ubuntu.com/ 87 | [config]: /getting-started/configuring 88 | [contact]: https://www.timescale.com/contact 89 | [slack]: https://slack.timescale.com/ 90 | [multi-node-basic]: /getting-started/setup-multi-node-basic 91 | [update-tsdb-2]: /update-timescaledb/update-tsdb-2 92 | -------------------------------------------------------------------------------- /getting-started/installation-docker.md: -------------------------------------------------------------------------------- 1 | ## Docker Hub [](docker) 2 | 3 | #### Quick start 4 | 5 | Start a TimescaleDB instance, pulling our Docker image from [Docker Hub][] if it has not been already installed: 6 | 7 | ```bash 8 | docker run -d --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb:x.y.z-pg:pg_version: 9 | ``` 10 | 11 | >:WARNING: The -p flag binds the container port to the host port, meaning 12 | anything that can access the host port will be able to access your TimescaleDB 13 | container. This can be particularly dangerous if you do not set a PostgreSQL 14 | password at runtime using the `POSTGRES_PASSWORD` environment variable as we 15 | do in the above command. Without that variable, the Docker container will 16 | disable password checks for all database users. If you want to access the 17 | container from the host but avoid exposing it to the outside world, you can 18 | explicitly have it bind to 127.0.0.1 instead of the public interface by using 19 | `-p 127.0.0.1:5432:5432`. 20 | > 21 | >Otherwise, you'll want to ensure that your host box is adequately locked down 22 | through security groups, IP Tables, or whatever you're using for access 23 | control. Note also that Docker binds the container by modifying your Linux IP 24 | Tables. For systems that use Linux UFW (Uncomplicated Firewall) for security 25 | rules, this means that Docker will potentially override any UFW settings that 26 | restrict the port you are binding to. If you are relying on UFW rules for 27 | network security, consider adding `DOCKER_OPTS="--iptables=false"` to 28 | `/etc/default/docker` to prevent Docker from overwriting IP Tables. 29 | See [this writeup on the vulnerability][docker-vulnerability] 30 | for more details. 31 | 32 | If you have PostgreSQL client tools (e.g., `psql`) installed locally, 33 | you can use those to access the TimescaleDB docker instance. 
Otherwise, 34 | and probably simpler given default PostgreSQL access-control settings, 35 | you can connect using the instance's version of `psql` within the 36 | container (NOTE: for Windows this is _necessary_): 37 | 38 | ```bash 39 | docker exec -it timescaledb psql -U postgres 40 | ``` 41 | 42 | #### More detailed instructions 43 | 44 | Our Docker image is derived from the [official PostgreSQL image][official-image] 45 | and includes [alpine Linux][] as its OS. 46 | 47 | While the above `run` command will pull the Docker image on demand, 48 | you can also -- and for upgrades, **need to** -- explicitly pull our image from [Docker Hub][]: 49 | 50 | ```bash 51 | docker pull timescale/timescaledb:x.y.z-pg:pg_version: 52 | ``` 53 | 54 | When running a Docker image, if one prefers to store the data in a 55 | host directory or wants to run the docker image on top of an existing 56 | data directory, then you can also specify a directory where a data 57 | volume should be stored/mounted via the `-v` flag. In particular, the 58 | above `docker run` command should now include some additional argument 59 | such as `-v /your/data/dir:/var/lib/postgresql/data`. 60 | 61 | Note that creating a new container (`docker run`) will also create a new 62 | volume unless an existing data volume is reused by reference via the 63 | -v parameter (e.g., `-v VOLUME_ID:/var/lib/postgresql/data`). Existing 64 | containers can be stopped (`docker stop`) and started again (`docker 65 | start`) while retaining their volumes and data. Even if a docker 66 | container is deleted (`docker rm`) its data volume persists on disk 67 | until explicitly removed. Use `docker volume ls` to list the existing 68 | docker volumes. 69 | ([More information on data volumes][docker-data-volumes]) 70 | 71 | >:TIP: Our standard binary releases are licensed under the Timescale License, 72 | which allows to use all our capabilities. 73 | If you want to use a version that contains _only_ Apache 2.0 licensed 74 | code, you should pull the tag `x.y.z-pg:pg_version:-oss`. 75 | 76 | ## Prebuilt with PostGIS [](postgis-docker) 77 | 78 | We have also published a Docker image that comes prebuilt with 79 | PostGIS. This image is published under the 80 | name `timescale/timescaledb-postgis` rather than `timescale/timescaledb`. 81 | To download and run this image, follow the same instructions as above, 82 | but use this image name instead. 83 | 84 | Then just add the extension from the `psql` command line: 85 | ```bash 86 | CREATE EXTENSION postgis; 87 | ``` 88 | For more instructions on using PostGIS, [see our tutorial][tutorial-postgis]. 89 | 90 | 91 | 92 | [Docker Hub]: https://hub.docker.com/r/timescale/timescaledb/ 93 | [docker-vulnerability]: https://www.techrepublic.com/article/how-to-fix-the-docker-and-ufw-security-flaw 94 | [official-image]: https://github.com/docker-library/postgres/ 95 | [alpine Linux]: https://alpinelinux.org/ 96 | [docker-data-volumes]: https://docs.docker.com/storage/volumes/ 97 | [tutorial-postgis]: http://docs.timescale.com/tutorials/tutorial-hello-nyc#tutorial-postgis 98 | 99 | -------------------------------------------------------------------------------- /getting-started/installation-grafana.md: -------------------------------------------------------------------------------- 1 | # Installing Grafana for use with TimescaleDB 2 | 3 | ### Pre-requisites 4 | 5 | You will need to [setup an instance of TimescaleDB][install-timescale]. 
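Before wiring up Grafana, it can help to confirm that your TimescaleDB instance is reachable with ordinary client tools. The connection string below is purely illustrative; substitute your own host, port, user, password, and database name:

```bash
# Hypothetical connection details; replace every part with your own
psql "postgres://tsdbadmin:mypassword@hostname.timescaledb.io:19660/defaultdb?sslmode=require"
```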
6 | 7 | ### Options for installing Grafana 8 | 9 | The easiest option for installing Grafana is to use Timescale Cloud. Alternatively, 10 | you can setup your own instance of Grafana. 11 | 12 | #### Installing Grafana with Timescale Cloud 13 | 14 | If you’re using Timescale Cloud, you can setup a Grafana Metrics Dashboard 15 | from the **Create Service** flow. 16 | 17 | Create a new Grafana service 18 | 19 | #### Installing your own managed instance of Grafana 20 | 21 | You can setup [Grafana][grafana-install] from the Grafana website. Once completed, 22 | follow the rest of the instructions below. 23 | 24 | ### Connecting Grafana to your TimescaleDB instance 25 | 26 | Next, you need to configure Grafana to connect to your TimescaleDB 27 | instance. 28 | 29 | Start by selecting 'Add Data Source' and choosing the 'PostgreSQL' option 30 | in the SQL group: 31 | 32 | Adding Postgres to Grafana 33 | 34 | In the configuration screen, supply the `Host`, `Database`, `User`, and `Password` for 35 | your TimescaleDB instance. 36 | 37 | >:TIP: Don’t forget to add the port number after your host URI. For example, `hostname.timescaledb.io:19660`. 38 | 39 | ### Enable TimescaleDB within Grafana 40 | 41 | Since we will be connecting to a TimescaleDB instance for this 42 | tutorial, we will also want to check the option for 'TimescaleDB' in the 43 | 'PostgreSQL details' section of the PostgreSQL configuration screen. 44 | 45 | ### Wrapping up 46 | 47 | You should also change the 'Name' of the database to something descriptive. This is 48 | optional, but will inform others who use your Grafana dashboard what this data source 49 | contains. 50 | 51 | Once done, click 'Save & Test'. You should receive confirmation that your database 52 | connection is working. 53 | 54 | Test your Grafana database connection 55 | 56 | [install-timescale]: /getting-started/installation 57 | [grafana-install]: https://www.grafana.com 58 | -------------------------------------------------------------------------------- /getting-started/installation-homebrew.md: -------------------------------------------------------------------------------- 1 | ## Homebrew [](homebrew) 2 | 3 | This will install both TimescaleDB *and* PostgreSQL via Homebrew. 4 | 5 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 6 | 7 | #### Prerequisites 8 | 9 | - [Homebrew][] 10 | 11 | #### Build & Install 12 | 13 | >:WARNING: If you have another PostgreSQL installation 14 | (such as through Postgres.app), the following instructions will 15 | cause problems. If you wish to maintain your current version of PostgreSQL 16 | outside of Homebrew we recommend installing from source. Otherwise please be 17 | sure to remove non-Homebrew installations before using this method. 18 | 19 | ```bash 20 | # Add our tap 21 | brew tap timescale/tap 22 | 23 | # To install 24 | brew install timescaledb 25 | 26 | # Post-install to move files to appropriate place 27 | /usr/local/bin/timescaledb_move.sh 28 | ``` 29 | 30 | #### Configure your database 31 | 32 | There are a [variety of settings that can be configured][config] for your 33 | new database. At a minimum, you will need to update your `postgresql.conf` 34 | file to include our library in the parameter `shared_preload_libraries`. 
35 | The easiest way to get started is to run `timescaledb-tune`, which is 36 | installed as a dependency when you install via Homebrew: 37 | ```bash 38 | timescaledb-tune 39 | ``` 40 | 41 | This will ensure that our extension is properly added to the parameter 42 | `shared_preload_libraries` as well as offer suggestions for tuning memory, 43 | parallelism, and other settings. 44 | 45 | To get started you'll now need to restart PostgreSQL and add 46 | a `postgres` superuser (used in the rest of the docs): 47 | 48 | ```bash 49 | # Restart PostgreSQL instance 50 | brew services restart postgresql 51 | 52 | # Add a superuser postgres: 53 | createuser postgres -s 54 | ``` 55 | 56 | >:TIP: Our standard binary releases are licensed under the Timescale License, 57 | which allows to use all our capabilities. 58 | If you want to use a version that contains _only_ Apache 2.0 licensed 59 | code, you should use `brew install timescaledb --with-oss-only`. 60 | 61 | [config]: /getting-started/configuring 62 | [Homebrew]: https://brew.sh/ 63 | [contact]: https://www.timescale.com/contact 64 | [slack]: https://slack.timescale.com/ 65 | -------------------------------------------------------------------------------- /getting-started/installation-source-windows.md: -------------------------------------------------------------------------------- 1 | ## From Source (Windows) [](installation-source) 2 | 3 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 4 | 5 | #### Prerequisites 6 | 7 | - A standard **PostgreSQL :pg_version: 64-bit** installation 8 | - Visual Studio 2017 (with [CMake][] and Git components) 9 | **or** Visual Studio 2015/2016 (with [CMake][] version 3.11+ and Git components) 10 | - Make sure all relevant binaries are in your PATH: `pg_config` and `cmake` 11 | 12 | #### Build & Install with Local PostgreSQL 13 | >:TIP: It is **highly recommended** that you checkout the latest 14 | tagged commit to build from (see the repo's [Releases][github-releases] page for that) 15 | 16 | Clone the repository from [GitHub][github-timescale]: 17 | 18 | ```bash 19 | git clone https://github.com/timescale/timescaledb.git 20 | cd timescaledb 21 | git checkout # e.g., git checkout x.y.z 22 | ``` 23 | 24 | If you are using Visual Studio 2017 with the CMake and Git components, 25 | you should be able to open the folder in Visual Studio, which will take 26 | care of the rest. 27 | 28 | If you are using an earlier version of Visual Studio: 29 | 30 | >:WARNING: This install step has to be made as admin. 31 | 32 | ```bash 33 | # Bootstrap the build system 34 | bootstrap.bat 35 | 36 | # To build the extension from command line 37 | cmake --build ./build --config Release 38 | 39 | # To install 40 | cmake --build ./build --config Release --target install 41 | 42 | # Alternatively, open build/timescaledb.sln in Visual Studio and build, 43 | # then open & build build/INSTALL.vcxproj 44 | ``` 45 | 46 | #### Update `postgresql.conf` 47 | 48 | You will need to edit your `postgresql.conf` file to include 49 | the TimescaleDB library, and then restart PostgreSQL. First, locate your 50 | `postgresql.conf` file: 51 | 52 | ```bash 53 | psql -d postgres -c "SHOW config_file;" 54 | ``` 55 | 56 | Then modify `postgresql.conf` to add the required library. Note that 57 | the `shared_preload_libraries` line is commented out by default. 58 | Make sure to uncomment it when adding our library. 
59 | 60 | ```bash 61 | shared_preload_libraries = 'timescaledb' 62 | ``` 63 | >:TIP: If you have other libraries you are preloading, they should be comma separated. 64 | 65 | Then, restart the PostgreSQL instance. 66 | 67 | #### Updating from TimescaleDB 1.x to 2.0 68 | Once the latest TimescaleDB 2.0 are installed, you can update the EXTENSION 69 | in your database as discussed in [Updating Timescale to 2.0][update-tsdb-2]. 70 | 71 | 72 | >:TIP: Our standard binary releases are licensed under the Timescale License, 73 | which allows to use all our capabilities. 74 | To build a version of this software that contains 75 | source code that is only licensed under Apache License 2.0, pass `-DAPACHE_ONLY=1` 76 | to `bootstrap`. 77 | 78 | [CMake]: https://cmake.org/ 79 | [github-releases]: https://github.com/timescale/timescaledb/releases 80 | [github-timescale]: https://github.com/timescale/timescaledb 81 | [update-tsdb-2]: /update-timescaledb/update-tsdb-2 82 | -------------------------------------------------------------------------------- /getting-started/installation-source.md: -------------------------------------------------------------------------------- 1 | ## From Source [](installation-source) 2 | 3 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 4 | 5 | #### Prerequisites 6 | 7 | - A standard **PostgreSQL :pg_version:** installation with development environment (header files) (see https://www.postgresql.org/download/ for the appropriate package) 8 | - C compiler (e.g., gcc or clang) 9 | - [CMake][] version 3.11 or greater 10 | 11 | #### Build & Install with Local PostgreSQL 12 | >:TIP: It is **highly recommended** that you checkout the latest 13 | tagged commit to build from (see the repo's [Releases][github-releases] page for that) 14 | 15 | Clone the repository from [GitHub][github-timescale]: 16 | ```bash 17 | git clone https://github.com/timescale/timescaledb.git 18 | cd timescaledb 19 | git checkout # e.g., git checkout x.y.z 20 | 21 | # Bootstrap the build system 22 | ./bootstrap 23 | 24 | # To build the extension 25 | cd build && make 26 | 27 | # To install 28 | make install 29 | ``` 30 | 31 | >:WARNING: Our build scripts use `pg_config` to find out where PostgreSQL 32 | stores its extension files. If you have two versions of PostgreSQL 33 | installed, use `pg_config` to find out which version TimescaleDB was 34 | installed with. 35 | 36 | #### Update `postgresql.conf` 37 | 38 | You will need to edit your `postgresql.conf` file to include 39 | the TimescaleDB library, and then restart PostgreSQL. First, locate your 40 | `postgresql.conf` file: 41 | 42 | ```bash 43 | psql -d postgres -c "SHOW config_file;" 44 | ``` 45 | 46 | Then modify `postgresql.conf` to add the required library. Note that 47 | the `shared_preload_libraries` line is commented out by default. 48 | Make sure to uncomment it when adding our library. 49 | 50 | ```bash 51 | shared_preload_libraries = 'timescaledb' 52 | ``` 53 | >:TIP: If you have other libraries you are preloading, they should be comma separated. 54 | 55 | Then, restart the PostgreSQL instance. 56 | 57 | >:TIP: Our standard binary releases are licensed under the Timescale License, 58 | which allows to use all our capabilities. 59 | To build a version of this software that contains 60 | source code that is only licensed under Apache License 2.0, pass `-DAPACHE_ONLY=1` 61 | to `bootstrap`. 
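In practice, the Apache-2.0-only build only changes the bootstrap step; a sketch of the same sequence as above:

```bash
# Bootstrap with the Apache-2.0-only flag, then build and install as before
./bootstrap -DAPACHE_ONLY=1
cd build && make
make install
```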
62 | 63 | [CMake]: https://cmake.org/ 64 | [github-releases]: https://github.com/timescale/timescaledb/releases 65 | [github-timescale]: https://github.com/timescale/timescaledb 66 | -------------------------------------------------------------------------------- /getting-started/installation-timescale-cloud.md: -------------------------------------------------------------------------------- 1 | ## Installation (Timescale Cloud) [](installation-timescale-cloud) 2 | 3 | Timescale Cloud is a Database as a Service (DBaaS) offering that provides 4 | an easy way for you to store and analyze time-series. 5 | Powered by TimescaleDB, you can create database instances in the cloud 6 | and automate many of your most common operational tasks. 7 | 8 | You can register for a Timescale Cloud account on the 9 | [sign up][sign-up] page. Once you have a login, you can access 10 | the Timescale Cloud [portal][portal]. 11 | 12 | After you create an account and login for the first time, 13 | a default project is created for you. In this project is where 14 | you create your first TimescaleDB service. 15 | 16 | >:TIP: Timescale Cloud automatically gives you access to all the features 17 | and capabilities in [TimescaleDB][timescale-features]. 18 | 19 | Now that your database account is setup, it's time to 20 | [setup Timescale Cloud][timescale-cloud-setup]. 21 | 22 | --- 23 | 24 | [sign-up]: https://www.timescale.com/cloud-signup 25 | [portal]: http://portal.timescale.cloud 26 | [timescale-features]: https://www.timescale.com/products 27 | [timescale-cloud-setup]: /getting-started/exploring-cloud 28 | [intercom]: https://kb.timescale.cloud/ 29 | [contact]: https://www.timescale.com/contact 30 | [slack]: https://slack.timescale.com/ 31 | -------------------------------------------------------------------------------- /getting-started/installation-timescale-forge.md: -------------------------------------------------------------------------------- 1 | ## Installation (Timescale Forge) [](installation-timescale-forge) 2 | 3 | Timescale Forge is a time-series platform that provides 4 | a cloud-native experience for storing and analyzing time-series. 5 | Powered by TimescaleDB, you can create database instances in the cloud 6 | and automate many of your most common operational tasks. 7 | 8 | You can register for a Timescale Forge account on the 9 | [sign up][sign-up] page. 10 | 11 | >:TIP: Timescale Forge automatically gives you access to all the features 12 | and capabilities in [TimescaleDB][timescale-features]. 13 | 14 | Now that your database account is set up, it's time to 15 | [set up Timescale Forge][timescale-forge-setup]. 16 | 17 | --- 18 | 19 | [sign-up]: https://forge.timescale.com/signup 20 | [timescale-features]: https://www.timescale.com/products 21 | [timescale-forge-setup]: /getting-started/exploring-forge 22 | [contact]: https://www.timescale.com/contact 23 | [slack]: https://slack.timescale.com/ 24 | -------------------------------------------------------------------------------- /getting-started/installation-ubuntu-ami.md: -------------------------------------------------------------------------------- 1 | ## Installing from an Amazon AMI (Ubuntu) [](installation-ubuntu-ami) 2 | 3 | TimescaleDB is currently available as an Ubuntu 18.04 Amazon EBS-backed AMI. AMIs are 4 | distributed by region, and our AMI is currently available in US and EU 5 | regions. Note that this image is built to use an EBS attached volume 6 | rather than the default disk that comes with EC2 instances. 
7 | 8 | See below for the image id corresponding to each region for the most recent TimescaleDB version: 9 | 10 | Region | Image ID 11 | --- | --- 12 | us-east-1 (North Virginia) | ami-0100e6f7324a8c63e 13 | us-east-2 (Ohio) | ami-005e78f98120d5a02 14 | us-west-1 (North California) | ami-061ba468b6dc0037b 15 | us-west-2 (Oregon) | ami-0861e7f40fe2da511 16 | eu-central-1 (Germany) | ami-063ad7be933384642 17 | eu-north-1 (Sweden) | ami-03d77a6bf633b11ca 18 | eu-west-1 (Ireland) | ami-01c4d97c5b5e3535d 19 | eu-west-2 (England) | ami-09fef3b40e8b792db 20 | eu-west-3 (France) | ami-0f8fa46ab84a12fea 21 | 22 | To launch the AMI, go to the `AMIs` section of your AWS EC2 Dashboard run the following steps: 23 | 24 | * Select `Public Images` under the dropdown menu. 25 | * Filter the image id by the image id for your region and select the image. 26 | * Click the `Launch` button. 27 | 28 | You can also use the image id to build an instance using Cloudformation, Terraform, 29 | the AWS CLI, or any other AWS deployment tool that supports building from public AMIs. 30 | 31 | TimescaleDB is installed on the AMI, but you will still need to follow the steps for 32 | initializing a database with the TimescaleDB extension. See our [setup][] section for details. 33 | Depending on your user/permission needs, you will also need to set up a postgres superuser for your 34 | database by following these [postgres instructions][]. Another possibility is using the operating 35 | system's `ubuntu` user and modifying the [pg_hba][]. 36 | 37 | >:WARNING: AMIs do not know what instance type you are using beforehand. Therefore 38 | the PostgreSQL configuration (postgresql.conf) that comes with our AMI uses the default 39 | settings, which are not optimal for most systems. Our AMI is packaged with `timescaledb-tune`, 40 | which you can use to tune postgresql.conf based on the underlying system resources of your instance. 41 | See our [configuration][] section for details. 42 | 43 | >:TIP: These AMIs are made for EBS attached volumes. This allows for snapshots, protection of 44 | data if the EC2 instance goes down, and dynamic IOPS configuration. You should choose an 45 | EC2 instance type that is optimized for EBS attached volumes. For information on choosing the right 46 | EBS optimized EC2 instance type, see the AWS [instance configuration page][]. 47 | 48 | [setup]: /getting-started/setup 49 | [postgres instructions]: https://www.postgresql.org/docs/current/sql-createrole.html 50 | [pg_hba]: https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html 51 | [configuration]: /getting-started/configuring 52 | [instance configuration page]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-ec2-config.html 53 | [contact]: https://www.timescale.com/contact 54 | [slack]: https://slack.timescale.com/ 55 | -------------------------------------------------------------------------------- /getting-started/installation-windows.md: -------------------------------------------------------------------------------- 1 | ## Windows ZIP Installer [](installation-windows) 2 | 3 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 4 | 5 | #### Prerequisites 6 | 7 | - [Visual C++ Redistributable for Visual Studio 2015][c_plus] (included in VS 2015 and later) 8 | - A standard **PostgreSQL :pg_version: 64-bit** installation 9 | - Make sure all relevant binaries are in your PATH: (use [pg_config][]) 10 | - Installation must be performed from an account with admin privileges 11 | 12 | #### Build & Install 13 | 14 | 1. 
Download the [.zip file for your PostgreSQL version][windows-dl]. 15 | 16 | 1. Extract the zip file locally 17 | 18 | 1. Run `setup.exe`, making sure that PostgreSQL is not currently running 19 | 20 | 1. If successful, a `cmd.exe` window will pop open and you will see the following: 21 | ```bash 22 | TimescaleDB installation completed succesfully. 23 | Press ENTER/Return key to close... 24 | ``` 25 | Go ahead and press ENTER to close the window. 26 | 27 | #### Updating from TimescaleDB 1.x to 2.0 28 | Once the latest TimescaleDB 2.0 binaries are installed, you can update the EXTENSION 29 | in your database as discussed in [Updating Timescale to 2.0][update-tsdb-2]. 30 | 31 | #### Configure your database 32 | 33 | There are a [variety of settings that can be configured][config] for your 34 | new database. At a minimum, you will need to update your `postgresql.conf` 35 | file to include our library in the parameter `shared_preload_libraries`. 36 | If you ran `timescaledb-tune` during the install, you are already done. 37 | If you did not, you can re-run the installer. 38 | 39 | This will ensure that our extension is properly added to the parameter 40 | `shared_preload_libraries` as well as offer suggestions for tuning memory, 41 | parallelism, and other settings. 42 | 43 | Then, restart the PostgreSQL instance. 44 | 45 | >:TIP: Our standard binary releases are licensed under the Timescale License, 46 | which allows you to use all of our capabilities. 47 | To build a version of this software that contains 48 | source code that is only licensed under Apache License 2.0, pass `-DAPACHE_ONLY=1` 49 | to `bootstrap`. 50 | 51 | [c_plus]: https://www.microsoft.com/en-us/download/details.aspx?id=48145 52 | [pg_config]: https://www.postgresql.org/docs/10/static/app-pgconfig.html 53 | [windows-dl]: https://timescalereleases.blob.core.windows.net/windows/timescaledb-postgresql-:pg_version:_x.y.z-windows-amd64.zip 54 | [config]: /getting-started/configuring 55 | [contact]: https://www.timescale.com/contact 56 | [slack]: https://slack.timescale.com/ 57 | [update-tsdb-2]: /update-timescaledb/update-tsdb-2 58 | -------------------------------------------------------------------------------- /getting-started/installation-yum.md: -------------------------------------------------------------------------------- 1 | ## yum Installation [](installation-yum) 2 | 3 | This will install both TimescaleDB *and* PostgreSQL via `yum` 4 | (or `dnf` on Fedora). 5 | 6 | **Note: TimescaleDB requires PostgreSQL 11, 12 or 13.** 7 | 8 | #### Prerequisites 9 | 10 | - RHEL/CentOS 7 (or Fedora equivalent) or later 11 | 12 | #### Build & Install 13 | 14 | >:WARNING: If you have another PostgreSQL installation not 15 | via `yum`, this will likely cause problems. 16 | If you wish to maintain your current version of PostgreSQL outside of `yum`, 17 | we recommend installing from source. Otherwise, please be 18 | sure to remove non-`yum` installations before using this method.
19 | 20 | You'll need to [download the correct PGDG from PostgreSQL][pgdg] for 21 | your operating system and architecture and install it: 22 | ```bash 23 | # Download PGDG: 24 | sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-$(rpm -E %{rhel})-x86_64/pgdg-redhat-repo-latest.noarch.rpm 25 | 26 | ``` 27 | 28 | Add TimescaleDB's third party repository and install TimescaleDB, 29 | which will download any dependencies it needs from the PostgreSQL repo: 30 | ```bash 31 | # Add timescaledb repo 32 | sudo tee /etc/yum.repos.d/timescale_timescaledb.repo <:TIP:Consider following our how-to, in order to [setup and explore a multi-node cluster in Timescale Forge][multinode-on-forge], our fully managed database service. [Sign-up for your free][sign-up], 30-day trial and get 4 | started today! 5 | 6 | A multi-node TimescaleDB implementation consists of: 7 | - One access node to handle ingest, data routing and act as an entry 8 | point for user access; 9 | - One or more data nodes to store and organize distributed data. 10 | 11 | All nodes begin as standalone TimescaleDB instances, i.e., hosts with 12 | a running PostgreSQL server and a loaded TimescaleDB extension. This 13 | is assumed for "access node" and "data node" in the instructions. More 14 | detail on the architecture can be found in the 15 | [Architecture][architecture] section. 16 | 17 | TimescaleDB multi-node can be created as part of a self-managed deployment 18 | or (coming soon) as a managed cloud deployment. In order to set up a 19 | self-managed cluster, including how to configure the nodes for secure 20 | communication and creating users/roles across servers, please follow 21 | [these instructions][advanced setup] before proceeding. 22 | 23 | In the case of Timescale Cloud and Forge, the created services already contain 24 | PostgreSQL with TimescaleDB loaded and the created user `tsdbadmin` as superuser. 25 | In this case, all you will need to do is decide which service should be the access 26 | node, and follow the instructions in the next section. More information will be 27 | forthcoming as TimescaleDB multi-node is made available on these cloud platforms. 28 | 29 | ## Initialize data nodes from the access node [](init_data_nodes_on_access_node) 30 | 31 | Once logged in on the access node, it is necessary to add data nodes 32 | to the local database before creating distributed hypertables. This 33 | will make the data node available for use by distributed hypertables 34 | and the access node will also connect to the data node and initialize 35 | it. 36 | 37 | While connected to the access node as a superuser (e.g., via `psql`), 38 | use the command: 39 | 40 | ```sql 41 | SELECT add_data_node('example_node_name', host => 'example_host_address'); 42 | ``` 43 | 44 | `example_node_name` should be a unique name for the 45 | node. `example_host_address` is the host name or IP address of the 46 | data node. You can specify a password to authenticate with using the 47 | optional `password` parameter. But this is only necessary if password 48 | authentication is used to connect to data nodes and the password is 49 | not provided through other means (e.g., a local password file). See 50 | the [`add_data_node`][add_data_node] API reference documentation for 51 | detailed information about this command. 
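For example, attaching a second data node while supplying the password inline might look like the following (the node name, host, and password are placeholders):

```sql
SELECT add_data_node('dn2', host => 'dn2.example.com', password => 'example-password');
```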
52 | 53 | You can now create distributed hypertables using 54 | `create_distributed_hypertable`, but note that, in order to create and 55 | use distributed hypertables as a non-superuser, the user role needs to 56 | exist on all data nodes and have the correct privileges to use the 57 | data nodes on the access node. Please refer to the next section for 58 | instructions on how to create roles on all data nodes and grant data 59 | node privileges. 60 | 61 | ## (Optional) Add roles to all data nodes [](add-roles-to-data-nodes) 62 | 63 | When you add a role on the access node it will not be automatically 64 | created on the data nodes. Therefore, the role must also be created on 65 | the data nodes before it can be used to create and query distributed 66 | hypertables: 67 | 68 | ```sql 69 | CREATE ROLE testrole; 70 | CALL distributed_exec($$ CREATE ROLE testrole WITH LOGIN $$); 71 | ``` 72 | 73 | Note that, depending on how the access node authenticates with the data 74 | nodes, the new role might need to be configured with, e.g., a 75 | password. Please refer to the following sections for more specific 76 | instructions depending on the authentication mechanism you are using: 77 | - Adding roles using [trust authentication][trust_role_setup] 78 | - Adding roles using [password authentication][password_role_setup] 79 | - Adding roles using [certificate authentication][certificate_role_setup] 80 | 81 | Finally, grant data node privileges to the user role: 82 | 83 | 84 | ```sql 85 | GRANT USAGE ON FOREIGN SERVER , , ... TO testrole; 86 | ``` 87 | 88 | >:TIP: It is possible to grant data node usage to `PUBLIC`, in which 89 | >case all user roles (including future ones) will be able to use the 90 | >specified data nodes. 91 | 92 | ## Maintenance tasks [](multi-node-maintenance) 93 | 94 | It is highly recommended that the access node is configured to run a 95 | maintenance job that regularly "heals" any non-completed distributed 96 | transactions. A distributed transaction ensures atomic execution 97 | across multiple data nodes and can remain in a non-completed state in 98 | case a data node reboots or experiences temporary issues. The access 99 | node keeps a log of distributed transactions so that nodes that 100 | haven't yet completed their part of the distributed transaction can 101 | later complete it at the access node's request. The log requires 102 | regular cleanup to "garbage collect" transactions that have completed 103 | and heal those that haven't. The maintenance job can be run as a 104 | user-defined action (custom job): 105 | 106 | 107 | ```sql 108 | CREATE OR REPLACE PROCEDURE data_node_maintenance(job_id int, config jsonb) 109 | LANGUAGE SQL AS 110 | $$ 111 | SELECT _timescaledb_internal.remote_txn_heal_data_node(fs.oid) 112 | FROM pg_foreign_server fs, pg_foreign_data_wrapper fdw 113 | WHERE fs.srvfdw = fdw.oid 114 | AND fdw.fdwname = 'timescaledb_fdw'; 115 | $$; 116 | 117 | SELECT add_job('data_node_maintenance', '5m'); 118 | ``` 119 | 120 | It is also possible to schedule this job to run from outside the 121 | database, e.g, via a CRON job. Note that the job must be scheduled 122 | separately for each database that contains distributed hypertables. 123 | 124 | --- 125 | ## Next steps 126 | To start using the database, see the page on [distributed hypertables][]. 127 | 128 | To further configure the system (set up secure node-to-node communication, add 129 | additional users/roles) see [advanced setup][]. 
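As a preview of that next step, creating a distributed hypertable broadly mirrors creating a regular one, using `create_distributed_hypertable`; the sketch below reuses the `conditions` schema that appears elsewhere in these docs:

```sql
CREATE TABLE conditions (
  time        TIMESTAMPTZ      NOT NULL,
  location    TEXT             NOT NULL,
  temperature DOUBLE PRECISION NULL
);

-- Partition by time, and additionally by location across the attached data nodes
SELECT create_distributed_hypertable('conditions', 'time', 'location');
```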
130 | 131 | All functions for modifying the node network are described in the API 132 | docs: 133 | - [add_data_node][] 134 | - [attach_data_node][] 135 | - [delete_data_node][] 136 | - [detach_data_node][] 137 | - [distributed_exec][] 138 | 139 | [architecture]: /introduction/architecture#single-node-vs-clustering 140 | [install]: /getting-started/installation 141 | [setup]: /getting-started/setup 142 | [advanced setup]: /getting-started/setup-multi-node-basic/multi-node-self-managed 143 | [trust_role_setup]: /getting-started/setup-multi-node-basic/multi-node-self-managed#multi-node-auth-trust-roles 144 | [password_role_setup]: /getting-started/setup-multi-node-basic/multi-node-self-managed#multi-node-auth-password-roles 145 | [certificate_role_setup]: /getting-started/setup-multi-node-basic/multi-node-self-managed#multi-node-auth-certificate-roles 146 | [postgresql-hba]: https://www.postgresql.org/docs/current/auth-pg-hba-conf.html 147 | [max_prepared_transactions]: https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS 148 | [distributed hypertables]: /using-timescaledb/distributed-hypertables 149 | [add_data_node]: /api#add_data_node 150 | [attach_data_node]: /api#attach_data_node 151 | [delete_data_node]: /api#delete_data_node 152 | [detach_data_node]: /api#detach_data_node 153 | [distributed_exec]: /api#distributed_exec 154 | [multinode-on-forge]: /getting-started/exploring-forge/forge-multi-node 155 | [sign-up]: https://forge.timescale.com/signup 156 | -------------------------------------------------------------------------------- /getting-started/setup.md: -------------------------------------------------------------------------------- 1 | # Setup 2 | 3 | Ok, you have [installed][] TimescaleDB, and now you are ready to work with some 4 | data. The first thing to do is to create a new empty database or convert an 5 | existing PostgreSQL database to use TimescaleDB. 6 | 7 | >:TIP: If you are planning on doing any performance testing on TimescaleDB, we 8 | strongly recommend that you [configure][] TimescaleDB properly. 9 | 10 | setup illustration 11 | 12 | First connect to the PostgreSQL instance: 13 | 14 | ```bash 15 | # Connect to PostgreSQL, using a superuser named 'postgres' 16 | psql -U postgres -h localhost 17 | ``` 18 | 19 | Now create a new empty database (skip this if you already have a database): 20 | 21 | ```sql 22 | -- Create the database, let's call it 'tutorial' 23 | CREATE database tutorial; 24 | ``` 25 | 26 | >:WARNING: Starting in v0.12.0, TimescaleDB enables [telemetry reporting][] 27 | by default. You can opt-out by following the instructions detailed 28 | in our [telemetry documentation][]. However, please do note that telemetry is 29 | anonymous, and by keeping it on, you help us [improve our product][]. 30 | 31 | Lastly add TimescaleDB: 32 | 33 | ```sql 34 | -- Connect to the database 35 | \c tutorial 36 | 37 | -- Extend the database with TimescaleDB 38 | CREATE EXTENSION IF NOT EXISTS timescaledb; 39 | ``` 40 | 41 | >:TIP: If you want to install a version that is not the most 42 | recent available on your system you can specify the version like so: 43 | `CREATE EXTENSION timescaledb VERSION '1.7.4';` 44 | 45 | _That's it!_ Connecting to the new database is as simple as: 46 | 47 | ```bash 48 | psql -U postgres -h localhost -d tutorial 49 | ``` 50 | 51 | --- 52 | 53 | From here, you will create a TimescaleDB hypertable using one of the 54 | following options: 55 | 56 | 1. 
**[Start from scratch][start-scratch]**: You don't currently have 57 | any data, and just want to create an empty hypertable for inserting 58 | data. 59 | 1. **[Migrate from PostgreSQL][migrate-postgres]**: You are currently 60 | storing time-series data in a PostgreSQL database, and want to move this data 61 | to a TimescaleDB hypertable. 62 | 63 | --- 64 | 65 | [installed]: /getting-started/installation 66 | [configure]: /getting-started/configuring 67 | [telemetry reporting]: /api#get_telemetry_report 68 | [telemetry documentation]: /using-timescaledb/telemetry 69 | [improve our product]: https://www.timescale.com/blog/why-introduced-telemetry-in-timescaledb-2ed11014d95d/ 70 | [start-scratch]: /getting-started/creating-hypertables 71 | [migrate-postgres]: /getting-started/migrating-data 72 | -------------------------------------------------------------------------------- /guc.md: -------------------------------------------------------------------------------- 1 | # TimescaleDB Configuration 2 | 3 | ## Hypertables [](hypertables) 4 | 5 | #### timescaledb.enable_constraint_aware_append (bool) [](#enable_constraint_aware_append) 6 | 7 | Enable constraint exclusion at execution time. It is by default enabled. 8 | 9 | #### timescaledb.enable_ordered_append (bool) [](#enable_ordered_append) 10 | 11 | Enable ordered append optimization for queries that are ordered by the 12 | time dimension. It is by default enabled. 13 | 14 | #### timescaledb.enable_chunk_append (bool) [](#enable_chunk_append) 15 | 16 | Enable chunk append node. It is by default enabled. 17 | 18 | #### timescaledb.enable_parallel_chunk_append (bool) [](#enable_parallel_chunk_append) 19 | 20 | Enable parallel aware chunk append node. It is by default enabled. 21 | 22 | #### timescaledb.enable_runtime_exclusion (bool) [](#enable_runtime_exclusion) 23 | 24 | Enable runtime chunk exclusion in chunk append node. It is by default enabled. 25 | 26 | #### timescaledb.enable_constraint_exclusion (bool) [](#enable_constraint_exclusion) 27 | 28 | Enable planner constraint exclusion. It is by default enabled. 29 | 30 | ## Compression [](compression) 31 | 32 | #### timescaledb.enable_transparent_decompression (bool) [](#enable_transparent_decompression) 33 | 34 | Enable transparent decompression when querying hypertable. It is by default enabled. 35 | 36 | ## Continuous Aggregates [](continuous-aggregates) 37 | 38 | #### timescaledb.enable_cagg_reorder_groupby (bool) [](#enable_cagg_reorder_groupby) 39 | 40 | Enable group-by clause reordering for continuous aggregates. It is by default enabled. 41 | 42 | ## Policies [](policies) 43 | 44 | #### timescaledb.max_background_workers (int) [](#max_background_workers) 45 | 46 | Max background worker processes allocated to TimescaleDB. Set to at 47 | least 1 + number of databases in Postgres instance to use background 48 | workers. Default value is 8. 49 | 50 | ## Distributed Hypertables [](multinode) 51 | 52 | #### timescaledb.enable_2pc (bool) [](#enable_2pc) 53 | 54 | Enables two-phase commit for distributed hypertables. If disabled, it 55 | will use a one-phase commit instead, which is faster but can result in 56 | inconsistent data. It is by default enabled. 57 | 58 | #### timescaledb.enable_per_data_node_queries (bool) [](#enable_per_data_node_queries) 59 | 60 | If enabled, TimescaleDB will combine different chunks belonging to the 61 | same hypertable into a single query per data node. It is by default enabled. 
62 | 63 | #### timescaledb.max_insert_batch_size (int) [](#max_insert_batch_size) 64 | 65 | When acting as a access node, TimescaleDB splits batches of inserted 66 | tuples across multiple data nodes. It will batch up to 67 | `max_insert_batch_size` tuples per data node before flushing. Setting 68 | this to 0 disables batching, reverting to tuple-by-tuple inserts. The 69 | default value is 1000. 70 | 71 | #### timescaledb.enable_connection_binary_data (bool) [](#enable_connection_binary_data) 72 | 73 | Enables binary format for data exchanged between nodes in the 74 | cluster. It is by default enabled. 75 | 76 | #### timescaledb.enable_client_ddl_on_data_nodes (bool) [](#enable_client_ddl_on_data_nodes) 77 | 78 | Enables DDL operations on data nodes by a client and do not restrict 79 | execution of DDL operations only by access node. It is by default disabled. 80 | 81 | #### timescaledb.enable_async_append (bool) [](#enable_async_append) 82 | 83 | Enables optimization that runs remote queries asynchronously across 84 | data nodes. It is by default enabled. 85 | 86 | #### timescaledb.enable_remote_explain (bool) [](#enable_remote_explain) 87 | 88 | Enable getting and showing `EXPLAIN` output from remote nodes. This 89 | will require sending the query to the data node, so it can be affected 90 | by the network connection and availability of data nodes. It is by default disabled. 91 | 92 | #### timescaledb.remote_data_fetcher (enum) [](#remote_data_fetcher) 93 | 94 | Pick data fetcher type based on type of queries you plan to run, which 95 | can be either `rowbyrow` or `cursor`. The default is `rowbyrow`. 96 | 97 | #### timescaledb.ssl_dir (string) [](#ssl_dir) 98 | 99 | Specifies the path used to search user certificates and keys when 100 | connecting to data nodes using certificate authentication. Defaults to 101 | `timescaledb/certs` under the PostgreSQL data directory. 102 | 103 | #### timescaledb.passfile (string) [](#passfile) 104 | 105 | Specifies the name of the file where passwords are stored and when 106 | connecting to data nodes using password authentication. 107 | 108 | ## Administration [](administration) 109 | 110 | #### timescaledb.restoring (bool) [](#restoring) 111 | 112 | Set TimescaleDB in restoring mode. It is by default disabled. 113 | 114 | #### timescaledb.license (string) [](#license) 115 | 116 | TimescaleDB license type. Determines which features are enabled. The 117 | variable value defaults to `timescale`. 118 | 119 | ## timescaledb.telemetry_level (enum) [](#telemetry_level) 120 | 121 | Telemetry settings level. Level used to determine which telemetry to 122 | send. Can be set to `off` or `basic`. Defaults to `basic`. 123 | 124 | #### timescaledb.last_tuned (string) [](#last_tuned) 125 | 126 | Records last time `timescaledb-tune` ran. 127 | 128 | #### timescaledb.last_tuned_version (string) [](#last_tuned_version) 129 | 130 | Version of `timescaledb-tune` used to tune when it ran. 131 | 132 | -------------------------------------------------------------------------------- /integration-tools.md: -------------------------------------------------------------------------------- 1 | # *** Integration Tools 2 | 3 | ## REST API Connectors [](rest-api-connector) 4 | 5 | - [postgREST](https://github.com/begriffs/postgrest) 6 | - [pSQL API](https://github.com/QBisConsult/psql-api) 7 | 8 | ## Administration Tools [](administration-tools) 9 | 10 | - [pgAdmin](https://www.pgadmin.org/) 11 | 12 | The most popular administration tool for PostgreSQL. 
13 | 14 | - [pgStudio](http://www.postgresqlstudio.org/) 15 | - [Postbird](http://paxa.github.io/postbird/) 16 | - [SQuirrel SQL](http://www.squirrelsql.org/) 17 | - [Dbglass](https://github.com/web-pal/DBGlass/) 18 | 19 | A tool that avoids the need to know SQL. Applies filters to data while 20 | hiding the relevant SQL queries. 21 | 22 | PostgreSQL list of 23 | [Administration tools](https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools) 24 | 25 | ## Visualization Tools [](visualization-tools) 26 | 27 | - [SQLPad](https://rickbergfalk.github.io/sqlpad/) 28 | 29 | An easy to use and powerful open source javascript viz tool that uses D3 to 30 | render graphs. Requires nodejs installation. 31 | -------------------------------------------------------------------------------- /introduction.md: -------------------------------------------------------------------------------- 1 | # TimescaleDB Overview 2 | 3 | TimescaleDB is an open-source time-series database optimized for fast 4 | ingest and complex queries. It speaks "full SQL" and is 5 | correspondingly easy to use like a traditional relational database, 6 | yet scales in ways previously reserved for NoSQL databases. 7 | 8 | Compared to the trade-offs demanded by these two alternatives 9 | (relational vs. NoSQL), TimescaleDB offers the best of both 10 | worlds **for time-series data:** 11 | 12 | ## Easy to Use 13 | 14 | - **Full SQL interface** for all SQL natively supported by 15 | PostgreSQL (including secondary indexes, non-time based aggregates, 16 | sub-queries, JOINs, window functions). 17 | - **Connects** to any client or tool that speaks PostgreSQL, no changes needed. 18 | - **Time-oriented** features, API functions, and optimizations. 19 | - Robust support for **Data retention policies**. 20 | 21 | 22 | ## Scalable 23 | 24 | - **Transparent time/space partitioning** for both scaling up (single node) 25 | and scaling out (forthcoming). 26 | - **High data write rates** (including batched commits, in-memory 27 | indexes, transactional support, support for data backfill). 28 | - **Right-sized chunks** (two-dimensional data partitions) on single nodes to 29 | ensure fast ingest even at large data sizes. 30 | - **Parallelized operations** across chunks and servers. 31 | 32 | ## Reliable 33 | 34 | - **Engineered up** from PostgreSQL, packaged as an extension. 35 | - **Proven foundations** benefiting from 20+ years of PostgreSQL 36 | research (including streaming replication, backups). 37 | - **Flexible management options** (compatible with existing PostgreSQL 38 | ecosystem and tooling). 39 | 40 | The rest of this section describes the design and motivation around the TimescaleDB 41 | architecture, including why time-series data is different, and how we leverage 42 | its characteristics when building TimescaleDB. 43 | 44 | **Next:** In part to understand TimescaleDB's design choices, let us ask: [What is time-series data?][time-series-data] 45 | 46 | ## Download the Guide 47 | If you want a quick visual intro to TimescaleDB, click on the image below to download the starter guide. 
48 | 49 | [starter guide][starter-guide] 52 | 53 | [time-series-data]: /introduction/time-series-data 54 | [starter-guide]: https://assets.timescale.com/resources/TimescaleDB_Starter_Guide.pdf 55 | 56 | -------------------------------------------------------------------------------- /introduction/data-model.md: -------------------------------------------------------------------------------- 1 | # Data Model 2 | 3 | As a relational database supporting full SQL, TimescaleDB supports flexible data models 4 | that can be optimized for different use cases. This makes TimescaleDB somewhat different from 5 | most other time-series databases, which typically use "narrow-table" models. 6 | 7 | Specifically, TimescaleDB can support both wide-table and narrow-table models. Here, we discuss 8 | the different performance trade-offs and implications of these two models using an Internet of 9 | Things (IoT) example. 10 | 11 | Imagine a distributed group of 1,000 IoT devices designed to collect 12 | environmental data at various intervals. This data could include: 13 | 14 | - **Identifiers:** `device_id`, `timestamp` 15 | - **Metadata:** `location_id`, `dev_type`, `firmware_version`, `customer_id` 16 | - **Device metrics:** `cpu_1m_avg`, `free_mem`, `used_mem`, `net_rssi`, `net_loss`, `battery` 17 | - **Sensor metrics:** `temperature`, `humidity`, `pressure`, `CO`, `NO2`, `PM10` 18 | 19 | For example, your incoming data may look like this: 20 | 21 | timestamp | device_id | cpu_1m_avg | free_mem | temperature | location_id | dev_type 22 | ---:|---:|---:|---:|---:|---:|---: 23 | 2017-01-01 01:02:00 | abc123 | 80 | 500MB | 72 | 335 | field 24 | 2017-01-01 01:02:23 | def456 | 90 | 400MB | 64 | 335 | roof 25 | 2017-01-01 01:02:30 | ghi789 | 120 | 0MB | 56 | 77 | roof 26 | 2017-01-01 01:03:12 | abc123 | 80 | 500MB | 72 | 335 | field 27 | 2017-01-01 01:03:35 | def456 | 95 | 350MB | 64 | 335 | roof 28 | 2017-01-01 01:03:42 | ghi789 | 100 | 100MB | 56 | 77 | roof 29 | 30 | 31 | Now, let's look at various ways to model this data. 32 | 33 | ## Narrow-table Model 34 | 35 | Most time-series databases would represent this data in the following way: 36 | - Represent each metric as a separate entity (e.g., represent `cpu_1m_avg` 37 | and `free_mem` as two different things) 38 | - Store a sequence of "time", "value" pairs for that metric 39 | - Represent the metadata values as a "tag-set" associated with that 40 | metric/tag-set combination 41 | 42 | In this model, each metric/tag-set combination is considered an individual 43 | "time series" containing a sequence of time/value pairs. 44 | 45 | Using our example above, this approach would result in 9 different "time series", each of which is defined by a unique set of tags. 46 | ``` 47 | 1. {name: cpu_1m_avg, device_id: abc123, location_id: 335, dev_type: field} 48 | 2. {name: cpu_1m_avg, device_id: def456, location_id: 335, dev_type: roof} 49 | 3. {name: cpu_1m_avg, device_id: ghi789, location_id: 77, dev_type: roof} 50 | 4. {name: free_mem, device_id: abc123, location_id: 335, dev_type: field} 51 | 5. {name: free_mem, device_id: def456, location_id: 335, dev_type: roof} 52 | 6. {name: free_mem, device_id: ghi789, location_id: 77, dev_type: roof} 53 | 7. {name: temperature, device_id: abc123, location_id: 335, dev_type: field} 54 | 8. {name: temperature, device_id: def456, location_id: 335, dev_type: roof} 55 | 9. 
{name: temperature, device_id: ghi789, location_id: 77, dev_type: roof} 56 | ``` 57 | The number of such time series scales with the cross-product of the 58 | cardinality of each tag, i.e., (# names) × (# device ids) × (# 59 | location ids) × (device types). Some time-series databases struggle as 60 | cardinality increases, ultimately limiting the number of device types and devices 61 | you can store in a single database. 62 | 63 | TimescaleDB supports narrow models and does not suffer from the same cardinality limitations 64 | as other time-series databases do. A narrow model makes sense if you collect each metric 65 | independently. It allows you to add new metrics as you go by adding a new tag without 66 | requiring a formal schema change. 67 | 68 | However, a narrow model is not as performant if you are collecting many metrics with the 69 | same timestamp, since it requires writing a timestamp for each metric. This ultimately 70 | results in higher storage and ingest requirements. Further, queries that correlate different metrics 71 | are also more complex, since each additional metric you want to correlate requires another 72 | JOIN. If you typically query multiple metrics together, it is both faster and easier to store them 73 | in a wide table format, which we will cover in the following section. 74 | 75 | ## Wide-table Model 76 | 77 | TimescaleDB easily supports wide-table models. Queries across multiple metrics are 78 | easier in this model, since they do not require JOINs. Also, ingest is faster 79 | since only one timestamp is written for multiple metrics. 80 | 81 | A typical wide-table model would match 82 | a typical data stream in which multiple metrics are collected at a given timestamp: 83 | 84 | timestamp | device_id | cpu_1m_avg | free_mem | temperature | location_id | dev_type 85 | ---:|---:|---:|---:|---:|---:|---: 86 | 2017-01-01 01:02:00 | abc123 | 80 | 500MB | 72 | 42 | field 87 | 2017-01-01 01:02:23 | def456 | 90 | 400MB | 64 | 42 | roof 88 | 2017-01-01 01:02:30 | ghi789 | 120 | 0MB | 56 | 77 | roof 89 | 2017-01-01 01:03:12 | abc123 | 80 | 500MB | 72 | 42 | field 90 | 2017-01-01 01:03:35 | def456 | 95 | 350MB | 64 | 42 | roof 91 | 2017-01-01 01:03:42 | ghi789 | 100 | 100MB | 56 | 77 | roof 92 | 93 | Here, each row is a new reading, with a set of measurements and metadata at a 94 | given time. This allows us to preserve relationships within the data, and 95 | ask more interesting or exploratory questions than before. 96 | 97 | Of course, this is not a new format: it's what one would commonly find within 98 | a relational database. 99 | 100 | ## JOINs with Relational Data 101 | 102 | TimescaleDB's data model also has another similarity with relational 103 | databases: it supports JOINs. Specifically, one can store additional 104 | metadata in a secondary table, and then utilize that data at query time. 105 | 106 | In our example, one could have a separate locations table, 107 | mapping `location_id` to additional metadata for that location. For example: 108 | 109 | location_id | name | latitude | longitude | zip_code | region 110 | ---:|---:|---:|---:|---:|---: 111 | 42 | Grand Central Terminal | 40.7527° N | 73.9772° W | 10017 | NYC 112 | 77 | Lobby 7 | 42.3593° N | 71.0935° W | 02139 | Massachusetts 113 | 114 | Then at query time, by joining our two tables, one could ask questions 115 | like: what is the average `free_mem` of our devices in `zip_code` 10017? 
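Expressed in SQL, that question might be answered like this (the readings table name `measurements` is illustrative, and `free_mem` is assumed to be stored as a plain number rather than the abbreviated `500MB`-style values shown above):

```sql
-- Average free memory across devices located in zip code 10017
SELECT AVG(m.free_mem)
FROM measurements m
JOIN locations l ON m.location_id = l.location_id
WHERE l.zip_code = '10017';
```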
116 | 117 | Without joins, one would need to denormalize their data and store 118 | all metadata with each measurement row. This creates data bloat, 119 | and makes data management more difficult. 120 | 121 | With joins, one can store metadata independently, and update mappings 122 | more easily. 123 | 124 | For example, if we wanted 125 | to update our "region" for `location_id` 77 (e.g., from "Massachusetts" 126 | to "Boston"), we can make this change without having to go back and 127 | overwrite historical data. 128 | 129 | 130 | **Next:** [How is TimescaleDB's architecture different?][architecture] 131 | 132 | [architecture]: /introduction/architecture 133 | -------------------------------------------------------------------------------- /introduction/time-series-data.md: -------------------------------------------------------------------------------- 1 | # What Is Time-series Data? 2 | 3 | What is this "time-series data" that we keep talking about, and how and why is 4 | it different from other data? 5 | 6 | Many applications or databases actually take an overly narrow view, and equate 7 | time-series data with something like server metrics of a specific form: 8 | 9 | ```bash 10 | Name: CPU 11 | 12 | Tags: Host=MyServer, Region=West 13 | 14 | Data: 15 | 2017-01-01 01:02:00 70 16 | 2017-01-01 01:03:00 71 17 | 2017-01-01 01:04:00 72 18 | 2017-01-01 01:05:01 68 19 | ``` 20 | 21 | But in fact, in many monitoring applications, different metrics are often 22 | collected together (e.g., CPU, memory, network statistics, battery life). So, it 23 | does not always make sense to think of each metric separately. Consider this 24 | alternative "wider" data model that maintains the correlation between metrics 25 | collected at the same time. 26 | 27 | ```bash 28 | Metrics: CPU, free_mem, net_rssi, battery 29 | 30 | Tags: Host=MyServer, Region=West 31 | 32 | Data: 33 | 2017-01-01 01:02:00 70 500 -40 80 34 | 2017-01-01 01:03:00 71 400 -42 80 35 | 2017-01-01 01:04:00 72 367 -41 80 36 | 2017-01-01 01:05:01 68 750 -54 79 37 | ``` 38 | 39 | 40 | This type of data belongs in a much **broader** category, 41 | whether temperature 42 | readings from a sensor, the price of a stock, the status of a machine, 43 | or even the number of logins to an app. 44 | 45 | **Time-series data is data that 46 | collectively represents how a system, process, or behavior changes 47 | over time.** 48 | 49 | 50 | ## Characteristics of Time-series Data [](characteristics) 51 | 52 | If you look closely at how it’s produced and ingested, there are important 53 | characteristics that time-series databases like TimescaleDB typically leverage: 54 | 55 | - **Time-centric**: Data records always have a timestamp. 56 | - **Append-only**: Data is almost solely append-only (INSERTs). 57 | - **Recent**: New data is typically about recent time intervals, and we 58 | more rarely make updates or backfill missing data about old intervals. 59 | 60 | The frequency or regularity of data is less important though; it can be 61 | collected every millisecond or hour. It can also be collected at regular or 62 | irregular intervals (e.g., when some *event* happens, as opposed to at 63 | pre-defined times). 64 | 65 | But haven't databases long had time fields? A key difference between 66 | time-series data (and the databases that support them), compared to other 67 | data like standard relational "business" data, is that **changes to the 68 | data are inserts, not overwrites**. 
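To make that difference concrete: recording a new observation appends a row and leaves earlier rows untouched. A minimal sketch, using an illustrative schema:

```sql
-- Each new reading is an INSERT; nothing is overwritten
INSERT INTO conditions (time, location, temperature, humidity)
VALUES (now(), 'garage', 68.5, 54.0);
```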
69 | 70 | ## Time-series Data Is Everywhere [](is-everywhere) 71 | 72 | Time-series data is everywhere, but there are environments where it is especially 73 | being created in torrents. 74 | 75 | - **Monitoring computer systems**: VM, server, container metrics (CPU, free memory, net/disk IOPs), 76 | service and application metrics (request rates, request latency). 77 | 78 | - **Financial trading systems**: Classic securities, newer cryptocurrencies, 79 | payments, transaction events. 80 | 81 | - **Internet of Things**: Data from sensors on industrial machines and equipment, 82 | wearable devices, vehicles, physical containers, pallets, 83 | consumer devices for smart homes, etc. 84 | 85 | - **Eventing applications**: User/customer interaction data like clickstreams, 86 | pageviews, logins, signups, etc. 87 | 88 | - **Business intelligence**: Tracking key metrics and the overall health of the business. 89 | 90 | - **Environmental monitoring**: Temperature, humidity, pressure, pH, pollen count, 91 | air flow, carbon monoxide (CO), nitrogen dioxide (NO2), particulate matter (PM10). 92 | 93 | - (and more) 94 | 95 | **Next:** [TimescaleDB's data model][data-model] 96 | 97 | [data-model]: /introduction/data-model 98 | -------------------------------------------------------------------------------- /introduction/timescaledb-vs-nosql.md: -------------------------------------------------------------------------------- 1 | # Why Use TimescaleDB over NoSQL? 2 | 3 | Compared to general NoSQL databases (e.g., MongoDB, Cassandra) or even 4 | more specialized time-oriented ones (e.g., InfluxDB, KairosDB), 5 | TimescaleDB provides both qualitative and quantitative differences: 6 | 7 | - **Normal SQL**: TimescaleDB gives you the power of standard SQL 8 | queries on time-series data, even at scale. Most (all?) NoSQL 9 | databases require learning either a new query language or using 10 | something that's at best "SQL-ish" (which still breaks compatibility 11 | with existing tools). 12 | - **Operational simplicity**: With TimescaleDB, you only need to manage one 13 | database for your relational and time-series data. Otherwise, users 14 | often need to silo data into two databases: a "normal" relational 15 | one, and a second time-series one. 16 | - **JOINs** can be performed across relational and time-series data. 17 | - **Query performance** is faster for a varied set 18 | of queries. More complex queries are often slow or full table scans 19 | on NoSQL databases, while some databases can't even support many 20 | natural queries. 21 | - **Manage like PostgreSQL** and inherit its support for varied datatypes and 22 | indexes (B-tree, hash, range, BRIN, GiST, GIN). 23 | - **Native support for geospatial data**: Data stored in TimescaleDB 24 | can leverage PostGIS's geometric datatypes, indexes, and queries. 25 | - **Third-party tools**: TimescaleDB supports anything that speaks 26 | SQL, including BI tools like Tableau. 27 | -------------------------------------------------------------------------------- /main.md: -------------------------------------------------------------------------------- 1 |

# TimescaleDB Documentation

2 | 3 | 23 | 43 | -------------------------------------------------------------------------------- /starting-from-scratch.md: -------------------------------------------------------------------------------- 1 | # Starting from an Empty Database 2 | 3 | One of the core ideas of our time-series database is the time-series optimized data 4 | table we call a **hypertable**. 5 | 6 | ### Creating a (Hyper)table 7 | >:TIP: First make sure that you have properly [installed][] **AND [setup][]** TimescaleDB within your PostgreSQL instance. 8 | 9 | To create a hypertable, you start with a regular SQL table, and then convert 10 | it into a hypertable via the function `create_hypertable()` ([API reference][]). 11 | 12 | The following example creates a hypertable for tracking 13 | temperature and humidity across a collection of devices over time. 14 | 15 | ```sql 16 | -- We start by creating a regular SQL table 17 | 18 | CREATE TABLE conditions ( 19 | time TIMESTAMPTZ NOT NULL, 20 | location TEXT NOT NULL, 21 | temperature DOUBLE PRECISION NULL, 22 | humidity DOUBLE PRECISION NULL 23 | ); 24 | ``` 25 | 26 | Next, transform it into a hypertable with `create_hypertable()`: 27 | 28 | ```sql 29 | -- This creates a hypertable that is partitioned by time 30 | -- using the values in the `time` column. 31 | 32 | SELECT create_hypertable('conditions', 'time'); 33 | 34 | -- OR you can additionally partition the data on another 35 | -- dimension (what we call 'space partitioning'). 36 | -- E.g., to partition `location` into 4 partitions: 37 | 38 | SELECT create_hypertable('conditions', 'time', 'location', 4); 39 | ``` 40 | 41 | For more information about how to choose the appropriate partitioning 42 | for your data, see our [best practices discussion][]. 43 | 44 | **Next let's learn how to create and work with a [hypertable][], the primary 45 | point of interaction for TimescaleDB.** 46 | 47 | [installed]: /getting-started/installation 48 | [setup]: /getting-started/setup 49 | [hypertable]: /getting-started/basic-operations 50 | [best practices discussion]: /api/api-timescaledb#create_hypertable-best-practices 51 | [API Reference]: /api/api-timescaledb 52 | -------------------------------------------------------------------------------- /tutorials.md: -------------------------------------------------------------------------------- 1 | # Tutorials 2 | We've created a host of code-focused tutorials that will help you get 3 | started with *TimescaleDB*. 4 | 5 | Most of these tutorials require a working [installation of TimescaleDB][install-timescale]. 6 | 7 | ### Common scenarios for using TimescaleDB 8 | 9 | - **[Start Here - Hello Timescale][Hello Timescale]**: If you are new to TimescaleDB 10 | or even SQL, check out our tutorial with NYC taxicab data to get an idea of the 11 | capabilities our database has to offer. 12 | - **[Time-series Forecasting][Forecasting]**: Use R, Apache MADlib and Python to perform 13 | data analysis and make forecasts on your data. 14 | - **[Analyze Cryptocurrency Data][Crypto]**: Use TimescaleDB to analyze historic cryptocurrency data. Learn how to build your own schema, ingest data, and analyze information in TimescaleDB. 15 | 16 | ### How to use specific TimescaleDB features 17 | 18 | - **[Scaling out TimescaleDB][Clustering]**: Distribute data across multiple nodes to 19 | scale out your TimescaleDB cluster. 20 | - **[Replication][]**: TimescaleDB takes advantage of well established PostgreSQL methods for replication. 
Here we provide a detailed guide along with additional resources for setting up streaming replicas. 21 | - **[Continuous Aggregates][]**: Getting started with continuous aggregates. 22 | 23 | ### Integrating with Prometheus 24 | 25 | - **[Setup a Prometheus endpoint to monitor Timescale Cloud][prometheus-tsc-endpoint]**: Configure Prometheus to collect monitoring data about your Timescale Cloud instance. 26 | - **[Monitoring Django with Prometheus][monitor-django-prometheus]**: 27 | Learn how to monitor your Django application using Prometheus. 28 | 29 | ### Integrating with Grafana 30 | 31 | - **[Creating a Grafana dashboard and panel][tutorial-grafana-dashboards]**: Basic tutorial on using Grafana to visualize data in TimescaleDB. 32 | - **[Visualize Geospatial data in Grafana][tutorial-grafana-geospatial]**: Use the Grafana WorldMap visualization to view your TimescaleDB data. 33 | - **[Use Grafana variables][tutorial-grafana-variables]**: Filter and customize your Grafana visualizations. 34 | - **[Visualizing Missing Data with Grafana][tutorial-grafana-missing-data]**: Learn how to visualize and aggregate missing time-series data in Grafana. 35 | - **[Setting up Grafana alerts][tutorial-grafana-alerts]**: Configure Grafana to alert you in Slack, PagerDuty, and more. 36 | 37 | ### Integrating with other products 38 | 39 | - **[Collecting metrics with Telegraf][Telegraf Output Plugin]**: Collecting metrics with the PostgreSQL and TimescaleDB output plugin for Telegraf. 40 | - **[Visualize Time-Series Data using Tableau][Tableau]**: Learn how to configure Tableau to connect to TimescaleDB and visualize your time-series data. 41 | - **[Migrate from InfluxDB with Outflux][Outflux]**: Use our open-source migration tool to transfer your data from InfluxDB to TimescaleDB. 42 | 43 | ### Language quick-starts 44 | 45 | - **[Node and TimescaleDB][node-quickstart]**: A quick start guide for Node developers looking to use TimescaleDB. 46 | - **[Python and TimescaleDB][python-quickstart]**: A quick start guide for Python developers looking to use TimescaleDB. 47 | - **[Ruby on Rails and TimescaleDB][ruby-quickstart]**: A quick start guide for Ruby on Rails developers looking to use TimescaleDB. 48 | - **[Golang and TimescaleDB][go-quickstart]**: A quick start guide for Golang developers looking to use TimescaleDB. 49 | 50 | ### Additional resources 51 | 52 | - **[Sample data sets][Data Sets]**: And if you want to explore on your own 53 | with some sample data, we have some ready-made data sets for you to explore. 54 | - **[Simulate IoT Sensor Data][simul-iot-data]**: Simulate a basic IoT sensor dataset 55 | on PostgreSQL or TimescaleDB. 56 | - **[psql installation][psql]**: `psql` is a terminal-based front-end for PostgreSQL. 57 | Learn how to install `psql` on Mac, Ubuntu, Debian, Windows, 58 | and pick up some valuable `psql` tips and tricks along the way. 
59 | 60 | [Hello Timescale]: /tutorials/tutorial-hello-timescale 61 | [Forecasting]: /tutorials/tutorial-forecasting 62 | [Replication]: /tutorials/replication 63 | [Clustering]: /tutorials/clustering 64 | [Continuous Aggregates]: /tutorials/continuous-aggs-tutorial 65 | [Outflux]: /tutorials/outflux 66 | [Grafana]: /tutorials/tutorial-grafana 67 | [Telegraf Output Plugin]: /tutorials/telegraf-output-plugin 68 | [Data Sets]: /tutorials/other-sample-datasets 69 | [install-timescale]: /getting-started/installation 70 | [psql]: /getting-started/install-psql-tutorial 71 | [Crypto]: /tutorials/analyze-cryptocurrency-data 72 | [Tableau]: /tutorials/visualizing-time-series-data-in-tableau 73 | [prometheus-tsc-endpoint]: /tutorials/tutorial-setting-up-timescale-cloud-endpoint-for-prometheus 74 | [monitor-django-prometheus]: /tutorials/tutorial-howto-monitor-django-prometheus 75 | [tutorial-grafana-dashboards]: /tutorials/tutorial-grafana-dashboards 76 | [tutorial-grafana-geospatial]: /tutorials/tutorial-grafana-geospatial 77 | [tutorial-grafana-variables]: /tutorials/tutorial-grafana-variables 78 | [tutorial-grafana-missing-data]: /tutorials/tutorial-howto-visualize-missing-data-grafana 79 | [tutorial-grafana-alerts]: /tutorials/tutorial-grafana-alerts 80 | [node-quickstart]: /tutorials/quickstart-node 81 | [python-quickstart]: /tutorials/quickstart-python 82 | [ruby-quickstart]: /tutorials/quickstart-ruby 83 | [go-quickstart]: /tutorials/quickstart-go 84 | [simul-iot-data]: /tutorials/tutorial-howto-simulate-iot-sensor-data 85 | 86 | -------------------------------------------------------------------------------- /tutorials/getting-started-with-promscale.md: -------------------------------------------------------------------------------- 1 | # Getting Started with Prometheus and TimescaleDB using Promscale 2 | 3 | ## Introduction 4 | [Prometheus][prometheus-webpage] is an open-source systems monitoring and alerting toolkit that can be used to easily and cost-effectively monitor infrastructure and applications. 5 | Over the past few years, Prometheus has emerged as the monitoring solution for modern software systems. 6 | The key to Prometheus’ success is its pull-based architecture and service discovery, which is able to seamlessly monitor modern, dynamic systems in which (micro-)services startup and shutdown frequently. 7 | 8 | ### The Problem: Prometheus is not designed for analytics 9 | As organizations use Prometheus to collect data from more and more of their infrastructure, the benefits from mining this data also increase. Analytics becomes critical for auditing, reporting, capacity planning, prediction, root-cause analysis, and more. Prometheus's architectural philosophy is one of simplicity and extensibility. Accordingly, it does not itself provide durable, highly-available long-term storage or advanced analytics, but relies on other projects to implement this functionality. 10 | 11 | There are existing ways to durably store Prometheus data, but while these options are useful for long-term storage, they only support the Prometheus data model and query model (limited to the PromQL query language). While these work extremely well for the simple, fast analyses found in dashboarding, alerting, and monitoring, they fall short for more sophisticated analysis capabilities, or for the ability to enrich their dataset with other sources needed for insight-generating cross-cutting analysis. 
12 | 13 | ### Solution: Promscale scales and augments Prometheus for long-term storage and analytics 14 | [Promscale][promscale-github] is an open-source long-term store for Prometheus data, designed for analytics. It is a horizontally scalable and operationally mature platform for Prometheus data. Promscale offers the combined power of PromQL and SQL, enabling you to ask any question, create any dashboard, and achieve greater visibility into your systems. 15 | 16 | Promscale is built on top of TimescaleDB, the leading relational database for time-series. Promscale also supports native compression, handles high-cardinality, provides rock-solid reliability, and more. Furthermore, it offers other native time-series capabilities, such as data retention policies, continuous aggregate views, downsampling, data gap-filling, and interpolation. It is already natively supported by Grafana via the Prometheus and PostgreSQL/TimescaleDB data sources. 17 | 18 | > :TIP: For an overview of Promscale, see this short introductory video: [Intro to Promscale - Advanced analytics for Prometheus][promscale-intro-video]. 19 | 20 | ## Roadmap 21 | In this tutorial you will learn: 22 | 1. [The benefits of using Promscale to store and analyze Prometheus metrics][promscale-benefits] 23 | 2. [How Promscale works][promscale-how-it-works] 24 | 3. [How to install Prometheus, Promscale and TimescaleDB][promscale-install] 25 | 4. [How to run queries in PromQL and SQL against Promscale][promscale-run-queries] 26 | 27 | Let's get started with the first section, [Why use Promscale & TimescaleDB to store Prometheus metrics?][promscale-benefits] 28 | 29 | [prometheus-webpage]:https://prometheus.io 30 | [promscale-blog]: https://blog.timescale.com/blog/promscale-analytical-platform-long-term-store-for-prometheus-combined-sql-promql-postgresql/ 31 | [promscale-readme]: https://github.com/timescale/promscale/blob/master/README.md 32 | [design-doc]: https://tsdb.co/prom-design-doc 33 | [promscale-github]: https://github.com/timescale/promscale#promscale 34 | [promscale-extension]: https://github.com/timescale/promscale_extension#promscale-extension 35 | [promscale-helm-chart]: https://github.com/timescale/promscale/tree/master/helm-chart 36 | [tobs-github]: https://github.com/timescale/tobs 37 | [promscale-baremetal-docs]: https://github.com/timescale/promscale/blob/master/docs/bare-metal-promscale-stack.md#deploying-promscale-on-bare-metal 38 | [Prometheus]: https://prometheus.io/ 39 | [timescaledb vs]: /introduction/timescaledb-vs-postgres 40 | [prometheus storage docs]: https://prometheus.io/docs/prometheus/latest/storage/ 41 | [prometheus lts]: https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage 42 | [prometheus-federation]: https://prometheus.io/docs/prometheus/latest/federation/ 43 | [docker-pg-prom-timescale]: https://hub.docker.com/r/timescale/pg_prometheus 44 | [postgresql adapter]: https://github.com/timescale/prometheus-postgresql-adapter 45 | [Prometheus native format]: https://prometheus.io/docs/instrumenting/exposition_formats/ 46 | [docker]: https://docs.docker.com/install 47 | [docker image]: https://hub.docker.com/r/timescale/prometheus-postgresql-adapter 48 | [Node Exporter]: https://github.com/prometheus/node_exporter 49 | [first steps]: https://prometheus.io/docs/introduction/first_steps/#configuring-prometheus 50 | [for example]: https://www.zdnet.com/article/linux-meltdown-patch-up-to-800-percent-cpu-overhead-netflix-tests-show/ 51 | [promql-functions]: 
https://prometheus.io/docs/prometheus/latest/querying/functions/ 52 | [promscale-intro-video]: https://youtube.com/playlist?list=PLsceB9ac9MHTrmU-q7WCEvies-o7ts3ps 53 | [Writing to Promscale]: https://github.com/timescale/promscale/blob/master/docs/writing_to_promscale.md 54 | [Node Exporter Github]: https://github.com/prometheus/node_exporter#node-exporter 55 | [promscale-github-installation]: https://github.com/timescale/promscale#-choose-your-own-installation-adventure 56 | [promscale-docker-image]: https://hub.docker.com/r/timescale/promscale 57 | [psql docs]: https://www.postgresql.org/docs/13/app-psql.html 58 | [an Luu's post on SQL query]: https://danluu.com/metrics-analytics/ 59 | [grafana-homepage]:https://grafana.com 60 | [promlens-homepage]: https://promlens.com 61 | [multinode-blog]:https://blog.timescale.com/blog/timescaledb-2-0-a-multi-node-petabyte-scale-completely-free-relational-database-for-time-series/ 62 | [grafana-docker]: https://grafana.com/docs/grafana/latest/installation/docker/#install-official-and-community-grafana-plugins 63 | [timescaledb-multinode-docs]:https://docs.timescale.com/latest/getting-started/setup-multi-node-basic 64 | [timescale-analytics]:https://github.com/timescale/timescale-analytics 65 | [hello-timescale]:https://docs.timescale.com/latest/tutorials/tutorial-hello-timescale 66 | [promscale-docker-compose]: https://github.com/timescale/promscale/blob/master/docker-compose/docker-compose.yaml 67 | [promscale-benefits]: /tutorials/getting-started-with-promscale/promscale-benefits 68 | [promscale-how-it-works]: /tutorials/getting-started-with-promscale/promscale-how-it-works 69 | [promscale-install]: /tutorials/getting-started-with-promscale/promscale-install 70 | [promscale-run-queries]: /tutorials/getting-started-with-promscale/promscale-run-queries -------------------------------------------------------------------------------- /tutorials/tutorial-grafana-dashboards.md: -------------------------------------------------------------------------------- 1 | # Creating a Grafana Dashboard and Panel 2 | 3 | Grafana is organized into ‘Dashboards’ and ‘Panels’. A dashboard represents a view 4 | onto the performance of a system, and each dashboard consists of one or more panels, 5 | which represents information about a specific metric related to that system. 6 | 7 | In this tutorial, you'll build a simple dashboard, connect it to TimescaleDB, and visualize 8 | data. 9 | 10 | ### Pre-requisites 11 | 12 | To complete this tutorial, you will need a cursory knowledge of the Structured Query 13 | Language (SQL). The tutorial will walk you through each SQL command, but it will be 14 | helpful if you've seen SQL before. 15 | 16 | * To start, [install TimescaleDB][install-timescale]. 17 | * Next [setup Grafana][install-grafana]. 18 | 19 | Once your installation of TimescaleDB and Grafana are complete, ingest the data found 20 | in the [Hello, Timescale!][hello-timescale] tutorial and configure Grafana to connect 21 | to that database. Be sure to follow the full tutorial if you’re interested in background 22 | on how to use TimescaleDB. 23 | 24 | ### Build a new dashboard 25 | 26 | We will start by creating a new dashboard. In the far left of the Grafana user 27 | interface, you’ll see a '+' icon. If you hover over it, you’ll see a 'Create' menu, 28 | within which is a 'Dashboard' option. Select that 'Dashboard' option. 29 | 30 | After creating a new dashboard, you’ll see a 'New Panel' screen, with options 31 | for 'Add Query' and 'Choose Visualization'. 
In the future, if you already have a 32 | dashboard with panels, you can click on the '+' icon at the **top** of the Grafana user 33 | interface, which will enable you to add a panel to an existing dashboard. 34 | 35 | To proceed with our tutorial, let’s add a new visualization by clicking on the 'Choose 36 | Visualization' option. 37 | 38 | At this point, you’ll have several options for different Grafana visualizations. We will 39 | choose the first option, the 'Graph' visualization. 40 | 41 | Grafana visualizations to choose from 42 | 43 | There are multiple ways to configure our panel, but we will accept all the defaults 44 | and create a simple 'Lines' graph. 45 | 46 | In the far left section of the Grafana user interface, select the 'Queries' tab. 47 | 48 | How to create a new Grafana query 49 | 50 | Instead of using the Grafana query builder, we will edit our query directly. In the 51 | view, click on the 'Edit SQL' button at the bottom. 52 | 53 | Edit custom SQL queries in Grafana 54 | 55 | Before we can begin authoring our query, we also want to set the Query database to the New 56 | York City taxi cab datasource we connected to earlier: 57 | 58 | Switching data sources in Grafana 59 | 60 | ### Visualize metrics stored in TimescaleDB 61 | 62 | Let’s start by creating a visualization that answers the question **How many rides took place on each day?** 63 | from the [Hello, Timescale!][hello-timescale] tutorial. 64 | 65 | From the tutorial, you can see the standard SQL syntax for our query: 66 | 67 | ```sql 68 | SELECT date_trunc('day', pickup_datetime) AS day, 69 | COUNT(*) 70 | FROM rides 71 | GROUP BY day 72 | ORDER BY day; 73 | ``` 74 | 75 | We will need to alter this query to support Grafana’s unique query syntax. 76 | 77 | #### Modifying the SELECT statement 78 | 79 | First, we will modify the `date_trunc` function to use the TimescaleDB `time_bucket` 80 | function. You can consult the TimescaleDB [API Reference on time_bucket][time-bucket-reference] 81 | for more information on how to use it properly. 82 | 83 | Let’s examine the `SELECT` portion of this query. First, we will bucket our results into 84 | one day groupings using the `time_bucket` function. If you set the 'Format' of a Grafana 85 | panel to be 'Time series', for use in Graph panel for example, then the query must return 86 | a column named `time` that returns either a SQL `datetime` or any numeric datatype 87 | representing a Unix epoch. 88 | 89 | So, part 1 of this new query is modified so that the output of the `time_bucket` grouping 90 | is labeled `time` as Grafana requires, while part 2 is unchanged: 91 | 92 | ```sql 93 | SELECT 94 | --1-- 95 | time_bucket('1 day', pickup_datetime) AS "time", 96 | --2-- 97 | COUNT(*) 98 | FROM rides 99 | ``` 100 | 101 | #### The Grafana \_\_timeFilter function 102 | 103 | Grafana time-series panels include a tool that enables the end-user to filter on a given 104 | time range. A “time filter,” if you will. Not surprisingly, Grafana has a way to link the 105 | user interface construct in a Grafana panel with the query itself. In this case, 106 | the `$__timefilter()` function. 107 | 108 | In the modified query below, we will use the `$__timefilter()` function 109 | to set the `pickup_datetime` column as the filtering range for our visualizations. 
110 | 111 | ```sql 112 | SELECT 113 | --1-- 114 | time_bucket('1 day', pickup_datetime) AS "time", 115 | --2-- 116 | COUNT(*) 117 | FROM rides 118 | WHERE $__timeFilter(pickup_datetime) 119 | ``` 120 | 121 | #### Referencing elements in our query 122 | 123 | Finally, we want to group our visualization by the time buckets we’ve selected, 124 | and we want to order the results by the time buckets as well. So, our `GROUP BY` 125 | and `ORDER BY` statements will reference `time`. 126 | 127 | With these changes, this is our final Grafana query: 128 | 129 | ```sql 130 | SELECT 131 | --1-- 132 | time_bucket('1 day', pickup_datetime) AS time, 133 | --2-- 134 | COUNT(*) 135 | FROM rides 136 | WHERE $__timeFilter(pickup_datetime) 137 | GROUP BY time 138 | ORDER BY time 139 | ``` 140 | 141 | When we visualize this query in Grafana, we see the following: 142 | 143 | Visualizing time-series data in Grafana 144 | 145 | >:TIP: Remember to set the time filter in the upper right corner of your Grafana dashboard. If you're using the pre-built sample dataset for this example, you will want to set your time filter around January 1st, 2016. 146 | 147 | Currently, the data is bucketed into 1 day groupings. Adjust the `time_bucket` 148 | function to be bucketed into 5 minute groupings instead and compare the graphs: 149 | 150 | ```sql 151 | SELECT 152 | --1-- 153 | time_bucket('5m', pickup_datetime) AS time, 154 | --2-- 155 | COUNT(*) 156 | FROM rides 157 | WHERE $__timeFilter(pickup_datetime) 158 | GROUP BY time 159 | ORDER BY time 160 | ``` 161 | 162 | When we visualize this query, it will look like this: 163 | 164 | Visualizing time-series data in Grafana 165 | 166 | ### Summary 167 | 168 | Complete your Grafana knowledge by following [all the TimescaleDB + Grafana tutorials][tutorial-grafana]. 169 | 170 | [install-timescale]: /getting-started/installation 171 | [install-grafana]: /getting-started/installation-grafana 172 | [hello-timescale]: /tutorials/tutorial-hello-timescale 173 | [time-bucket-reference]: /api#time_bucket 174 | [tutorial-grafana]: /tutorials/tutorial-grafana -------------------------------------------------------------------------------- /tutorials/tutorial-grafana-geospatial.md: -------------------------------------------------------------------------------- 1 | # Use Grafana to Visualize Geospatial Data Stored in TimescaleDB 2 | 3 | Grafana includes a WorldMap visualization that will help you see geospatial data overlaid 4 | atop a map of the world. This can be helpful to understand how data changes based on 5 | its location. 6 | 7 | ### Pre-requisites 8 | 9 | To complete this tutorial, you will need a cursory knowledge of the Structured Query 10 | Language (SQL). The tutorial will walk you through each SQL command, but it will be 11 | helpful if you've seen SQL before. 12 | 13 | * To start, [install TimescaleDB][install-timescale]. 14 | * Next [setup Grafana][install-grafana]. 15 | 16 | Once your installation of TimescaleDB and Grafana are complete, ingest the data found 17 | in the [Hello, Timescale!][hello-timescale] tutorial and configure Grafana to connect 18 | to that database. Be sure to follow the full tutorial if you’re interested in background 19 | on how to use TimescaleDB. 20 | 21 | >:TIP: Be sure to pay close attention to the [geospatial query portion][hello-timescale-geospatial] of the tutorial and complete those steps. 22 | 23 | ### Build a geospatial query 24 | 25 | The NYC Taxi Cab data also contains the location of each ride pickup. In the 26 | [Hello, Timescale! 
Tutorial][hello-timescale], we examined rides that originated 27 | near Times Square. Let’s build on that query and 28 | **visualize rides whose distance traveled was greater than five miles in Manhattan**. 29 | 30 | We can do this in Grafana using the 'Worldmap Panel'. We will start by creating a 31 | new panel, selecting 'New Visualization', and selecting the 'Worldmap Panel'. 32 | 33 | Once again, we will edit our query directly. In the Query screen, be sure 34 | to select your NYC Taxicab Data as the data source. In the 'Format as' dropdown, 35 | select 'Table'. Click on 'Edit SQL' and enter the following query in the text window: 36 | 37 | ```sql 38 | SELECT time_bucket('5m', rides.pickup_datetime) AS time, 39 | rides.trip_distance AS value, 40 | rides.pickup_latitude AS latitude, 41 | rides.pickup_longitude AS longitude 42 | FROM rides 43 | WHERE $__timeFilter(rides.pickup_datetime) AND 44 | ST_Distance(pickup_geom, 45 | ST_Transform(ST_SetSRID(ST_MakePoint(-73.9851,40.7589),4326),2163) 46 | ) < 2000 47 | GROUP BY time, 48 | rides.trip_distance, 49 | rides.pickup_latitude, 50 | rides.pickup_longitude 51 | ORDER BY time 52 | LIMIT 500; 53 | ``` 54 | 55 | Let’s dissect this query. First, we’re looking to plot rides with visual markers that 56 | denote the trip distance. Trips with longer distances will get different visual treatments 57 | on our map. We will use the `trip_distance` as the value for our plot. We will store 58 | this result in the `value` field. 59 | 60 | In the second and third lines of the `SELECT` statement, we are using the `pickup_longitude` 61 | and `pickup_latitude` fields in the database and mapping them to variables `longitude` 62 | and `latitude`, respectively. 63 | 64 | In the `WHERE` clause, we are applying a geospatial boundary to look for trips within 65 | 2000m of Times Square. 66 | 67 | Finally, in the `GROUP BY` clause, we supply the `trip_distance` and location variables 68 | so that Grafana can plot data properly. 69 | 70 | >:WARNING: This query may take a while, depending on the speed of your Internet connection. This is why we’re using the `LIMIT` statement for demonstration purposes. 71 | 72 | ### Configure the Worldmap Grafana panel 73 | 74 | Now let’s configure our Worldmap visualization. Select the 'Visualization' tab in the far 75 | left of the Grafana user interface. You’ll see options for 'Map Visual Options', 'Map Data Options', 76 | and more. 77 | 78 | First, make sure the 'Map Data Options' are set to 'table' and 'current'. Then in 79 | the 'Field Mappings' section. We will set the 'Table Query Format' to be ‘Table’. 80 | We can map the 'Latitude Field' to our `latitude` variable, the 'Longitude Field' to 81 | our `longitude` variable, and the 'Metric' field to our `value` variable. 82 | 83 | In the 'Map Visual Options', set the 'Min Circle Size' to 1 and the 'Max Circle Size' to 5. 84 | 85 | In the 'Threshold Options' set the 'Thresholds' to '2,5,10'. This will auto configure a set 86 | of colors. Any plot whose `value` is below 2 will be a color, any `value` between 2 and 5 will 87 | be another color, any `value` between 5 and 10 will be a third color, and any `value` over 10 88 | will be a fourth color. 
89 | 90 | Your configuration should look like this: 91 | 92 | Mapping Worldmap fields to query results in Grafana 93 | 94 | At this point, data should be flowing into our Worldmap visualization, like so: 95 | 96 | Visualizing time series data in PostgreSQL using the Grafana Worldmap 97 | 98 | You should be able to edit the time filter at the top of your visualization to see trip pickup data 99 | for different timeframes. 100 | 101 | ### Summary 102 | 103 | Complete your Grafana knowledge by following [all the TimescaleDB + Grafana tutorials][tutorial-grafana]. 104 | 105 | [install-timescale]: /getting-started/installation 106 | [install-grafana]: /getting-started/installation-grafana 107 | [hello-timescale]: /tutorials/tutorial-hello-timescale 108 | [hello-timescale-geospatial]: /tutorials/tutorial-hello-timescale#postgis 109 | [tutorial-grafana]: /tutorials/tutorial-grafana -------------------------------------------------------------------------------- /tutorials/tutorial-grafana.md: -------------------------------------------------------------------------------- 1 | # Getting Started with Grafana and TimescaleDB 2 | 3 | [Grafana][grafana-website] is an open source analytics and monitoring solution 4 | often used to visualize time-series data. In these tutorials, you’ll learn how to: 5 | 6 | - Setup Grafana and [TimescaleDB][install-timescale] 7 | - Use Grafana to visualize metrics stored in TimescaleDB 8 | - Visualize geospatial data using Grafana 9 | 10 | Follow these tutorials: 11 | 12 | - [Creating a Grafana dashboard and panel][tutorial-grafana-dashboards] to visualize data in TimescaleDB. 13 | - [Visualize Geospatial data in Grafana][tutorial-grafana-geospatial]. 14 | - [Use Grafana variables][tutorial-grafana-variables] to filter and customize your visualizations. 15 | - [Visualize missing data in Grafana][tutorial-grafana-missing-data] using TimescaleDB features. 16 | - [Setup Grafana alerts][tutorial-grafana-alerts] on time-series data using Slack, PagerDuty, and more. 17 | 18 | ### Pre-requisites for Grafana tutorials 19 | 20 | To complete these tutorials, you will need a cursory knowledge of the Structured Query 21 | Language (SQL). Each tutorial will walk you through each SQL command, but it will be 22 | helpful if you've seen SQL before. 23 | 24 | * To start, [install TimescaleDB][install-timescale]. 25 | * Next [setup Grafana][install-grafana]. 
26 | 27 | [install-timescale]: /getting-started/installation 28 | [install-grafana]: /getting-started/installation-grafana 29 | [hello-timescale]: /tutorials/tutorial-hello-timescale 30 | [tutorial-grafana-dashboards]: /tutorials/tutorial-grafana-dashboards 31 | [tutorial-grafana-geospatial]: /tutorials/tutorial-grafana-geospatial 32 | [tutorial-grafana-variables]: /tutorials/tutorial-grafana-variables 33 | [tutorial-grafana-missing-data]: /tutorials/tutorial-howto-visualize-missing-data-grafana 34 | [tutorial-grafana-alerts]: /tutorials/tutorial-grafana-alerts 35 | [grafana-website]: https://www.grafana.com 36 | -------------------------------------------------------------------------------- /tutorials/tutorial-setting-up-timescale-cloud-endpoint-for-prometheus.md: -------------------------------------------------------------------------------- 1 | # Tutorial: How to Set Up a Prometheus Endpoint for a Timescale Cloud Database 2 | 3 | You can get more insights into the performance of your Timescale Cloud 4 | database by monitoring it using [Prometheus][get-prometheus], a popular 5 | open-source metrics-based systems monitoring solution. This tutorial will 6 | take you through setting up a Prometheus endpoint for a database running 7 | in [Timescale Cloud][timescale-cloud]. To create a monitoring system to ingest and analyze 8 | Prometheus metrics from your Timescale Cloud instance, you can use [Promscale][promscale]! 9 | 10 | This will expose metrics from the [node_exporter][node-exporter-metrics] as well 11 | as [pg_stats][pg-stats-metrics] metrics. 12 | 13 | ### Prerequisites 14 | In order to proceed with this tutorial, you will need a Timescale Cloud database. 15 | To create one, see these instructions for how to 16 | [get started with Timescale Cloud][timescale-cloud-get-started] 17 | 18 | ### Step 1: Enable Prometheus Service Integration 19 | 20 | In the navigation bar, select 'Service Integrations'. Navigate to the service 21 | integrations, pictured below. 22 | 23 | Service Integrations Menu Option 24 | 25 | This will present you with the option to add a Prometheus integration point. 26 | Select the plus icon to add a new endpoint and give it a name of your choice. 27 | We’ve named ours `endpoint_dev`. 28 | 29 | Create a Prometheus endpoint on Timescale Cloud 30 | 31 | Furthermore, notice that you are given basic authentication information and a port number 32 | in order to access the service. This will be used when setting up your Prometheus 33 | installation, in the `prometheus.yml` configuration file. This will enable you to make 34 | this Timescale Cloud endpoint a target for Prometheus to scrape. 
35 | 36 | Here’s a sample configuration file you can use when you setup your Prometheus 37 | installation, substituting the target port, IP address, username, and password 38 | for those of your Timescale Cloud instance: 39 | 40 | ```yaml 41 | # prometheus.yml for monitoring a Timescale Cloud instance 42 | global: 43 | scrape_interval: 10s 44 | evaluation_interval: 10s 45 | scrape_configs: 46 | - job_name: prometheus 47 | scheme: https 48 | static_configs: 49 | - targets: ['{TARGET_IP}:{TARGET_PORT}'] 50 | tls_config: 51 | insecure_skip_verify: true 52 | basic_auth: 53 | username: {ENDPOINT_USERNAME} 54 | password: {ENDPOINT_PASSWORD} 55 | remote_write: 56 | - url: "http://{ADAPTER_IP}:9201/write" 57 | remote_read: 58 | - url: "http://{ADAPTER_IP}:9201/read" 59 | ``` 60 | 61 | ### Step 2: Associate Prometheus Endpoint with Managed Service 62 | 63 | Next, we want to associate our Prometheus endpoint with our Timescale 64 | Cloud service. Using the navigation menu, select the service we want to 65 | monitor and click the 'Overview' tab. 66 | 67 | Navigate down to the 'Service Integrations' section and click the 'Manage Integrations' button. 68 | 69 | Manage Service integrations on your managed service 70 | 71 | Find the Prometheus integration option and select 'Use Prometheus'. 72 | 73 | Select Prometheus integration to integrate with 74 | 75 | Next, select the endpoint name you created in Step 1 as the endpoint you’d like to use 76 | with this service and then click the 'Enable' button. It’s possible to use the same 77 | endpoint for multiple services or a separate one for services you’d like to keep apart. 78 | 79 | Select name of Prometheus endpoint to integrate with 80 | 81 | To check if this was successful, navigate back to the Service Integrations section of your 82 | managed service, and check if that “Active” flag appears, along with the name of the endpoint 83 | you associated the service with. 84 | 85 | Success! Active prometheus endpoint with name 86 | 87 | Congratulations, you have successfully set up a Prometheus endpoint on your managed 88 | service on Timescale Cloud! 89 | 90 | ### Next Steps 91 | 92 | Next, [use Promscale][promscale] with Timescale, Grafana, and Prometheus to ingest 93 | and analyze Prometheus metrics from your Timescale Cloud instance. 94 | 95 | 96 | [timescale-cloud]: https://www.timescale.com/products 97 | [timescale-cloud-install]: /getting-started/explore-cloud 98 | [get-prometheus]: https://prometheus.io 99 | [timescale-cloud-get-started]: /getting-started/exploring-cloud 100 | [pg-stats-metrics]: https://www.postgresql.org/docs/current/monitoring-stats.html 101 | [promscale]: https://github.com/timescale/timescale-prometheus 102 | [node-exporter-metrics]: https://github.com/prometheus/node_exporter 103 | -------------------------------------------------------------------------------- /tutorials/visualizing-time-series-data-in-tableau.md: -------------------------------------------------------------------------------- 1 | # Using Tableau to Visualize Data in TimescaleDB 2 | 3 | [Tableau][get-tableau] is a popular analytics platform that enables you to gain 4 | greater intelligence about your business. It is an ideal tool for visualizing 5 | data stored in [TimescaleDB][timescale-products]. 
6 | 7 | In this tutorial, we will cover: 8 | 9 | - Setting up Tableau to work with TimescaleDB 10 | - Running queries on TimescaleDB from within Tableau 11 | - Visualize data in Tableau 12 | 13 | ### Pre-requisites 14 | 15 | To complete this tutorial, you will need a cursory knowledge of the Structured Query 16 | Language (SQL). The tutorial will walk you through each SQL command, but it will be 17 | helpful if you've seen SQL before. 18 | 19 | To start, [install TimescaleDB][install-timescale]. Once your installation is complete, 20 | we can proceed to ingesting or creating sample data and finishing the tutorial. 21 | 22 | Also, [get a copy or license of Tableau][get-tableau]. 23 | 24 | You will also want to [complete the Cryptocurrency tutorial][crypto-tutorial], as it will 25 | setup and configure the data you need to complete the remainder of this 26 | tutorial. We will visualize many of the queries found at the end of the Cryptocurrency 27 | tutorial. 28 | 29 | ### Step 1: Setup Tableau to connect to TimescaleDB 30 | 31 | Locate the `host`, `port`, and `password` of your TimescaleDB instance. 32 | 33 | Connecting your TimescaleDB instance to Tableau takes just a few clicks, thanks to Tableau’s 34 | built-in Postgres connector. To connect to your database add a new connection and under the 35 | ‘to a server’ section, select PostgreSQL as the connection type. Then enter your database 36 | credentials. 37 | 38 | ### Step 2: Run a simple query in Tableau 39 | 40 | Let's use the built-in SQL editor in Tableau. To run a query, add custom SQL to your data source 41 | by dragging and dropping the “New Custom SQL” button (in the bottom left of the Tableau desktop 42 | user interface) to the place that says ‘Drag tables here’. 43 | 44 | A window will pop up, in which we can place a query. In this case, we will use the first 45 | query from the [Cryptocurrency Tutorial][crypto-tutorial]: 46 | 47 | ```sql 48 | SELECT time_bucket('7 days', time) AS period, 49 | last(closing_price, time) AS last_closing_price 50 | FROM btc_prices 51 | WHERE currency_code = 'USD' 52 | GROUP BY period 53 | ORDER BY period 54 | ``` 55 | 56 | You should see the same results in Tableau that you see when you run the query in the 57 | `psql` command line. 58 | 59 | Let's also name our data source 'btc_7_days', which you can see below. 60 | 61 | Using Tableau to view time-series data 62 | 63 | ### Step 3: Visualize data in Tableau 64 | 65 | Results in a table are only so useful, graphs are much better! So in our final 66 | step, let’s take our output from the previous step and turn it into an interactive 67 | graph in Tableau. 68 | 69 | To do this, create a new worksheet (or dashboard) and then select your desired data source 70 | (in our case ‘btc_7_days’), as shown below. 71 | 72 | New worksheet in Tableau to examine time-series data 73 | 74 | In the far left pane, you'll see a section Tableau calls 'Dimensions' and 'Measures'. 75 | Whenever you use Tableau, it will classify your fields as either dimensions or 76 | measures. A measure is a field that is a dependent variable, meaning its value is a 77 | function of one or more dimensions. For example, the price of an item on a given day 78 | is a measure based on which day is in question. A dimension, therefore, is an 79 | independent variable. In our example, the given day does not change based on 80 | any other value in our database. 81 | 82 | To put it in more direct terms, July 4, 1776 is still July 4, 1776, even if the 83 | price of tea skyrockets. 
However, the price of tea may change, depending on which 84 | day we are looking into. 85 | 86 | So, in our case, we want to move the dimension `period` into the Columns section of 87 | our worksheet, while we want to examine the measure `last_closing_price` depending 88 | on a given `period`. In Tableau, we can drag and drop these elements into the 89 | proper place, like so: 90 | 91 | New dimensions and measures in Tableau to examine time-series data 92 | 93 | Now this graph doesn’t quite have the level of fidelity we’re looking for because 94 | the data points are being grouped by year. To fix this, click on the drop down 95 | arrow on period and select 'exact date'. 96 | 97 | Analyze granular data in Tableau to examine time-series data 98 | 99 | Tableau is a powerful business intelligence tool and an ideal companion to data 100 | stored in TimescaleDB. We've only scratched the surface of the kinds of data 101 | you can visualize using Tableau. 102 | 103 | ### Conclusion 104 | 105 | In this tutorial, you learned how to setup Tableau to examine time-series data 106 | stored in TimescaleDB. 107 | 108 | Ready for more learning? Here’s a few suggestions: 109 | - [Time Series Forecasting using TimescaleDB, R, Apache MADlib and Python][time-series-forecasting] 110 | - [Continuous Aggregates][continuous-aggregates] 111 | - [Try Other Sample Datasets][other-samples] 112 | - [Migrate your own Data][migrate] 113 | 114 | [get-tableau]: https://www.tableau.com/products/trial 115 | [crypto-tutorial]: /tutorials/analyze-cryptocurrency-data 116 | [timescale-products]: https://www.timescale.com/products 117 | [install-timescale]: /getting-started/installation 118 | [time-series-forecasting]: /tutorials/tutorial-forecasting 119 | [continuous-aggregates]: /tutorials/continuous-aggs-tutorial 120 | [other-samples]: /tutorials/other-sample-datasets 121 | [migrate]: /getting-started/migrating-data 122 | -------------------------------------------------------------------------------- /update-timescaledb.md: -------------------------------------------------------------------------------- 1 | # Updating TimescaleDB versions [](update) 2 | 3 | This section describes how to upgrade between different versions of 4 | TimescaleDB. TimescaleDB supports **in-place updates only**: 5 | you don't need to dump and restore your data, and versions are published with 6 | automated migration scripts that convert any internal state if necessary. 7 | 8 | >:WARNING: There is currently no automated way to downgrade to an earlier release of TimescaleDB without setting up 9 | >a new instance of PostgreSQL with a previous release of TimescaleDB and then using `pg_restore` 10 | >from a backup. 11 | 12 | ### TimescaleDB Release Compatibility [](compatibility) 13 | 14 | TimescaleDB currently has three major release versions listed below. Please ensure that your version of 15 | PostgreSQL is supported with the extension version you want to install or update. 16 | 17 | TimescaleDB Release | Supported PostgreSQL Release 18 | --------------------|------------------------------- 19 | 1.7 | 9.6, 10, 11, 12 20 | 2.0 | 11, 12 21 | 2.1+ | 11, 12, 13 22 | 23 | >:TIP:If you need to upgrade PostgreSQL first, please see [our documentation][upgrade-pg]. 24 | 25 | ### Upgrade TimescaleDB 26 | 27 | To upgrade an existing TimescaleDB instance, follow the documentation below based on 28 | your current upgrade path. 
29 | 30 | **TimescaleDB 2.0**: [Updating TimescaleDB from 1.x to 2.0-RC1+][update-tsdb-2] 31 | 32 | **TimescaleDB 2.0 on Docker**: [Updating TimescaleDB on Docker from 1.7.4 to 2.0-RC1+][update-docker] 33 | 34 | **TimescaleDB 1.x**: [Updating TimescaleDB 1.x to 1.7.4][update-tsdb-1] 35 | 36 | 37 | [upgrade-pg]: /update-timescaledb/upgrade-pg 38 | [update-tsdb-1]: https://docs.timescale.com/v1.7/update-timescaledb/update-tsdb-1 39 | [update-tsdb-2]: /update-timescaledb/update-tsdb-2 40 | [update-docker]: /update-timescaledb/update-docker 41 | -------------------------------------------------------------------------------- /update-timescaledb/update-docker.md: -------------------------------------------------------------------------------- 1 | # Updating a TimescaleDB Docker installation 2 | 3 | The following steps should be taken with a docker 4 | installation to upgrade to the latest TimescaleDB version, while 5 | retaining data across the updates. 6 | 7 | The following instructions assume that your docker instance is named 8 | `timescaledb`. If not, replace this name with the one you use in the subsequent 9 | commands. 10 | 11 | #### Step 1: Pull new image [](update-docker-1) 12 | Install the current TimescaleDB 2.0 image: 13 | 14 | ```bash 15 | docker pull timescale/timescaledb:2.0.2-pg12 16 | ``` 17 | >:TIP: If you are using PostgreSQL 11 images, use the tag `2.0.2-pg11`. 18 | 19 | #### Step 2: Determine mount point used by old container [](update-docker-2) 20 | As you'll want to restart the new docker image pointing to a mount point 21 | that contains the previous version's data, we first need to determine 22 | the current mount point. 23 | 24 | There are two types of mounts. To find which mount type your old container is 25 | using you can run the following command: 26 | ```bash 27 | docker inspect timescaledb --format='{{range .Mounts }}{{.Type}}{{end}}' 28 | ``` 29 | This command will return either `volume` or `bind`, corresponding 30 | to the two options below. 31 | 32 | 1. [Volumes][volumes] -- to get the current volume name use: 33 | ```bash 34 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Name}}{{end}}' 35 | 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c 36 | ``` 37 | 38 | 2. [Bind-mounts][bind-mounts] -- to get the current mount path use: 39 | ```bash 40 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Source}}{{end}}' 41 | /path/to/data 42 | ``` 43 | 44 | #### Step 3: Stop old container [](update-docker-3) 45 | If the container is currently running, stop and remove it in order to connect 46 | the new one. 47 | 48 | ```bash 49 | docker stop timescaledb 50 | docker rm timescaledb 51 | ``` 52 | 53 | #### Step 4: Start new container [](update-docker-4) 54 | Launch a new container with the updated docker image, but pointing to 55 | the existing mount point. This will again differ by mount type. 56 | 57 | 1. For volume mounts you can use: 58 | ```bash 59 | docker run -v 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 60 | ``` 61 | 62 | 2. 
If using bind-mounts, you need to run: 63 | ```bash 64 | docker run -v /path/to/data:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 65 | ``` 66 | 67 | 68 | #### Step 5: Run ALTER EXTENSION [](update-docker-5) 69 | Finally, connect to this instance via `psql` (with the `-X` flag) and execute the `ALTER` command 70 | as above in order to update the extension to the latest version: 71 | 72 | ```bash 73 | docker exec -it timescaledb psql -U postgres -X 74 | 75 | # within the PostgreSQL instance 76 | ALTER EXTENSION timescaledb UPDATE; 77 | ``` 78 | 79 | You can then run the `\dx` command to make sure you have the 80 | latest version of TimescaleDB installed. 81 | 82 | [upgrade-pg]: /using-timescaledb/update-timescale/upgrade-pg 83 | [update-db-1]: /using-timescaledb/update-timescale/update-db-1 84 | [update-db-2]: /using-timescaledb/update-timescale/update-db-2 85 | [pg_upgrade]: https://www.postgresql.org/docs/current/static/pgupgrade.html 86 | [backup]: /using-timescaledb/backup 87 | [Install]: /getting-started/installation 88 | [telemetry]: /using-timescaledb/telemetry 89 | [volumes]: https://docs.docker.com/engine/admin/volumes/volumes/ 90 | [bind-mounts]: https://docs.docker.com/engine/admin/volumes/bind-mounts/ -------------------------------------------------------------------------------- /update-timescaledb/update-tsdb-1.md: -------------------------------------------------------------------------------- 1 | # Updating TimescaleDB 1.x [](update) 2 | 3 | Use these instructions to update TimescaleDB within the 1.x version. 4 | 5 | >:TIP:TimescaleDB 2.0 is currently available as a release candidate and we encourage 6 | >users to upgrade in testing environments to gain experience and provide feedback on 7 | >new and updated features. 8 | > 9 | >See [Changes in TimescaleDB 2.0][changes-in-2.0] for more information and links to installation 10 | >instructions 11 | 12 | ### TimescaleDB Release Compatibility 13 | 14 | TimescaleDB 1.x is currently supported by the following PostgreSQL releases. 15 | 16 | TimescaleDB Release | Supported PostgreSQL Release 17 | --------------------|------------------------------- 18 | 1.3 - 1.7.4 | 9.6, 10, 11, 12 19 | 20 | If you need to upgrade PostgreSQL first, please see [our documentation][upgrade-pg]. 21 | 22 | ### Update TimescaleDB 23 | 24 | Software upgrades use PostgreSQL's `ALTER EXTENSION` support to update to the 25 | latest version. TimescaleDB supports having different extension 26 | versions on different databases within the same PostgreSQL instance. This 27 | allows you to update extensions independently on different databases. The 28 | upgrade process involves three-steps: 29 | 30 | 1. We recommend that you perform a [backup][] of your database via `pg_dump`. 31 | 1. [Install][] the latest version of the TimescaleDB extension. 32 | 1. Execute the following `psql` command inside any database that you want to 33 | update: 34 | 35 | ```sql 36 | ALTER EXTENSION timescaledb UPDATE; 37 | ``` 38 | 39 | >:WARNING: When executing `ALTER EXTENSION`, you should connect using `psql` 40 | with the `-X` flag to prevent any `.psqlrc` commands from accidentally 41 | triggering the load of a previous TimescaleDB version on session startup. 42 | It must also be the first command you execute in the session. 43 | 44 | 45 | This will upgrade TimescaleDB to the latest installed version, even if you 46 | are several versions behind. 
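If you instead want to move to a specific installed version rather than the newest one, PostgreSQL's `ALTER EXTENSION` syntax also accepts an explicit target version (the version number below is only an example):

```sql
-- Update the extension to a specific installed version instead of the latest
ALTER EXTENSION timescaledb UPDATE TO '1.7.4';
```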
47 | 48 | After executing the command, the psql `\dx` command should show the latest version: 49 | 50 | ```sql 51 | \dx timescaledb 52 | 53 | Name | Version | Schema | Description 54 | -------------+---------+------------+--------------------------------------------------------------------- 55 | timescaledb | x.y.z | public | Enables scalable inserts and complex queries for time-series data 56 | (1 row) 57 | ``` 58 | 59 | ### Example: Migrating docker installations [](update-docker) 60 | 61 | As a more concrete example, the following steps should be taken with a docker 62 | installation to upgrade to the latest TimescaleDB version, while 63 | retaining data across the updates. 64 | 65 | The following instructions assume that your docker instance is named 66 | `timescaledb`. If not, replace this name with the one you use in the subsequent 67 | commands. 68 | 69 | #### Step 1: Pull new image [](update-docker-1) 70 | Install the latest TimescaleDB image: 71 | 72 | ```bash 73 | docker pull timescale/timescaledb:latest-pg12 74 | ``` 75 | >:TIP: If you are using PostgreSQL 11 images, use the tag `latest-pg11`. 76 | 77 | #### Step 2: Determine mount point used by old container [](update-docker-2) 78 | As you'll want to restart the new docker image pointing to a mount point 79 | that contains the previous version's data, we first need to determine 80 | the current mount point. 81 | 82 | There are two types of mounts. To find which mount type your old container is 83 | using you can run the following command: 84 | ```bash 85 | docker inspect timescaledb --format='{{range .Mounts }}{{.Type}}{{end}}' 86 | ``` 87 | This command will return either `volume` or `bind`, corresponding 88 | to the two options below. 89 | 90 | 1. [Volumes][volumes] -- to get the current volume name use: 91 | ```bash 92 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Name}}{{end}}' 93 | 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c 94 | ``` 95 | 96 | 2. [Bind-mounts][bind-mounts] -- to get the current mount path use: 97 | ```bash 98 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Source}}{{end}}' 99 | /path/to/data 100 | ``` 101 | 102 | #### Step 3: Stop old container [](update-docker-3) 103 | If the container is currently running, stop and remove it in order to connect 104 | the new one. 105 | 106 | ```bash 107 | docker stop timescaledb 108 | docker rm timescaledb 109 | ``` 110 | 111 | #### Step 4: Start new container [](update-docker-4) 112 | Launch a new container with the updated docker image, but pointing to 113 | the existing mount point. This will again differ by mount type. 114 | 115 | 1. For volume mounts you can use: 116 | ```bash 117 | docker run -v 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 118 | ``` 119 | 120 | 2. 
If using bind-mounts, you need to run: 121 | ```bash 122 | docker run -v /path/to/data:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 123 | ``` 124 | 125 | 126 | #### Step 5: Run ALTER EXTENSION [](update-docker-5) 127 | Finally, connect to this instance via `psql` (with the `-X` flag) and execute the `ALTER` command 128 | as above in order to update the extension to the latest version: 129 | 130 | ```bash 131 | docker exec -it timescaledb psql -U postgres -X 132 | 133 | # within the PostgreSQL instance 134 | ALTER EXTENSION timescaledb UPDATE; 135 | ``` 136 | 137 | You can then run the `\dx` command to make sure you have the 138 | latest version of TimescaleDB installed. 139 | 140 | [changes-in-2.0]: /v2.0/release-notes/changes-in-timescaledb-2 141 | [upgrade-pg]: /update-timescaledb/upgrade-pg 142 | [update-tsdb-1]: /update-timescaledb/update-db-1 143 | [update-tsdb-2]: /v2.0/update-timescaledb/update-db-2 144 | [pg_upgrade]: https://www.postgresql.org/docs/current/static/pgupgrade.html 145 | [backup]: /using-timescaledb/backup 146 | [Install]: /getting-started/installation 147 | [telemetry]: /using-timescaledb/telemetry 148 | [volumes]: https://docs.docker.com/engine/admin/volumes/volumes/ 149 | [bind-mounts]: https://docs.docker.com/engine/admin/volumes/bind-mounts/ 150 | -------------------------------------------------------------------------------- /update-timescaledb/upgrade-pg.md: -------------------------------------------------------------------------------- 1 | 2 | # Upgrade PostgreSQL 3 | 4 | Each release of TimescaleDB is compatible with specific versions of PostgreSQL. Over time we will add support 5 | for a newer version of PostgreSQL while simultaneously dropping support for an older versions. 6 | 7 | When the supported versions of PostgreSQL changes, you may need to upgrade the version of the **PostgreSQL instance** (e.g. from 10 to 12) before you can install the latest release of TimescaleDB. 8 | 9 | To upgrade PostgreSQL, you have two choices, as outlined in the PostgreSQL online documentation. 10 | 11 | ### Use `pg_upgrade` 12 | 13 | [`pg_upgrade`][pg_upgrade] is a tool that avoids the need to dump all data and then import it 14 | into a new instance of PostgreSQL after a new version is installed. Instead, `pg_upgrade` allows you to 15 | retain the data files of your current PostgreSQL installation while binding the new PostgreSQL binary 16 | runtime to them. This is currently supported for all releases 8.4 and greater. 17 | 18 | ``` 19 | pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir" 20 | ``` 21 | 22 | ### Use `pg_dump` and `pg_restore` 23 | When `pg_upgrade` is not an option, such as moving data to a new physical instance of PostgreSQL, using the 24 | tried and true method of dumping all data in the database and then restoring into a database in the new instance 25 | is always supported with PostgreSQL and TimescaleDB. 26 | 27 | Please see our documentation on [Backup & Restore][backup] strategies for more information. 
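Whichever method you choose, it can help to first confirm the PostgreSQL version you are currently connected to, for example:

```sql
-- Reports the PostgreSQL server version for the current connection
SHOW server_version;
```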
28 | 29 | 30 | [pg_upgrade]: https://www.postgresql.org/docs/current/static/pgupgrade.html 31 | [backup]: /using-timescaledb/backup 32 | -------------------------------------------------------------------------------- /using-timescaledb.md: -------------------------------------------------------------------------------- 1 | # Using TimescaleDB 2 | 3 | TimescaleDB focuses on _simplicity_ for our users and how they can operate and 4 | manage their database, infrastructure, and applications, especially at scale. 5 | 6 | First and foremost, we developed TimescaleDB as an extension to PostgreSQL, 7 | rather than building a time-series database from scratch. We also chose not to introduce 8 | our own custom query language. Instead, TimescaleDB fully embraces SQL. 9 | 10 | TimescaleDB supports all SQL operations and queries one would expect out of PostgreSQL. 11 | This includes how tables are created, altered and deleted, how schemas are built and indexed, 12 | and how data is inserted and queried. Additionally, TimescaleDB adds necessary and useful 13 | functions for operational ease-of-use and analytical flexibility. In general, if you are 14 | familiar with SQL, TimescaleDB will be familiar to you. 15 | 16 | The most important design aspect for providing users with a simple interface to 17 | the database is the TimescaleDB hypertable, explained in our 18 | [database architecture][architecture] section. 19 | 20 | Essentially, hypertables abstract away the complexity of TimescaleDB's automatic 21 | partitioning, so users don't have to worry about managing any of the underlying 22 | chunks individually. Instead, users can focus on developing and interacting with their data as 23 | they would with regular tables within a PostgreSQL database. For advanced users, TimescaleDB is 24 | transparent about the presence of chunks and allows several ways to access them directly. 25 | This section covers all of the operations, and more, for using TimescaleDB. 26 | 27 | ## Clustering [](clustering) 28 | 29 | TimescaleDB also supports PostgreSQL's built-in replication functionality for 30 | high availability, redundancy, and sharding read queries. Hypertables are fully 31 | compatible with the PostgreSQL streaming replication protocol, as explained in our 32 | [streaming replication tutorial][replication]. Streaming replication setups can be 33 | further extended to offer high availability and failover using community tools like [patroni][patroni]. 34 | 35 | Write clustering for multi-node TimescaleDB deployments is now available with 36 | TimescaleDB 2.0. Read more about [multi-node capabilities][multi-node-basic] 37 | or join our #multinode channel in our [community Slack][slack] 38 | 39 | That being said, workloads that may require a multi-node deployment on NoSQL databases 40 | can often be handled by a single TimescaleDB instance with one or more read replicas. 41 | The power of using a relational database to handle production-level time series data 42 | is discussed in further detail in this [blog post][nosql-blog-post]. 
43 | 44 | If you're entirely new to PostgreSQL, here are some resources to help you get started: 45 | - [PostgreSQL Manuals][postgres-manuals] 46 | 47 | If you're entirely new to SQL, here are some resources to help you get started: 48 | - [Khan Academy: Intro to SQL][khanacademy] 49 | - [Tutorials Point: SQL Tutorial][tutorialspoint] 50 | - [Codecademy: Learn SQL][codecademy] 51 | 52 | 53 | [architecture]: /introduction/architecture 54 | [replication]: /tutorials/replication 55 | [patroni]: https://github.com/zalando/patroni 56 | [nosql-blog-post]: https://www.timescale.com/blog/time-series-data-why-and-how-to-use-a-relational-database-instead-of-nosql-d0cd6975e87c 57 | [creating-hypertables]: /using-timescaledb/hypertables 58 | [postgres-manuals]: https://www.postgresql.org/docs/manuals/ 59 | [tutorialspoint]: https://www.tutorialspoint.com/sql/ 60 | [khanacademy]: https://www.khanacademy.org/computing/computer-programming/sql 61 | [codecademy]: https://www.codecademy.com/learn/learn-sql 62 | [slack]: https://slack.timescale.com/ 63 | [multi-node-basic]: /getting-started/setup-multi-node-basic -------------------------------------------------------------------------------- /using-timescaledb/alerting.md: -------------------------------------------------------------------------------- 1 | # Alerting 2 | 3 | There are a variety of different alerting solutions you can use in conjunction with TimescaleDB that are part of the PostgreSQL ecosystem. Regardless of whether you are creating custom alerts embedded in your applications, or using third-party alerting tools to monitor event data across your organization, there are a wide selection of tools available. 4 | 5 | ## Grafana [](grafana) 6 | 7 | Grafana is a great way to visualize and explore time-series data and has a first-class integration with TimescaleDB. Beyond data visualization, Grafana also provides alerting functionality to keep you notified of anomalies. 8 | 9 | Within Grafana, you can [define alert rules][define alert rules] which are time-based thresholds for your dashboard data (e.g. “Average CPU usage greater than 80 percent for 5 minutes”). When those alert rules are triggered, Grafana will send a message via the chosen notification channel. Grafana provides integration with webhooks, email and more than a dozen external services including Slack and PagerDuty. 10 | 11 | To get started, first download and install [Grafana][Grafana-install]. Next, add a new [PostgreSQL datasource][PostgreSQL datasource] that points to your TimescaleDB instance. This data source was built by TimescaleDB engineers, and it is designed to take advantage of the database's time-series capabilities. From there, proceed to your dashboard and set up alert rules as described above. 12 | 13 | 14 | >:WARNING: Alerting is only available in Grafana v4.0 and above. 15 | 16 | ## Other Alerting Tools [](alerting-tools) 17 | 18 | TimescaleDB works with a variety of alerting tools within the PostgreSQL ecosystem. Users can use these tools to set up notifications about meaningful events that signify notable changes to the system. 
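Whichever tool you choose, an alert is ultimately driven by a query against your TimescaleDB instance. As a rough sketch (the `conditions` hypertable, its columns, and the threshold are hypothetical), a rule such as "average temperature above 40 degrees over the last five minutes" could be expressed as:

```sql
-- Devices whose average temperature exceeded 40 degrees during the last 5 minutes.
SELECT device, avg(temperature) AS avg_temp
FROM conditions
WHERE time > now() - INTERVAL '5 minutes'
GROUP BY device
HAVING avg(temperature) > 40
ORDER BY avg_temp DESC;
```

An alerting tool can run a query like this on a schedule and raise a notification whenever it returns rows.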
19 | 20 | Some popular alerting tools that work with TimescaleDB include: 21 | 22 | - DataDog: get started [here][datadog-install] 23 | - Nagios: get started [here][nagios-install] 24 | - Zabbix: get started [here][zabbix-install] 25 | 26 | 27 | [define alert rules]: https://grafana.com/docs/alerting/rules/ 28 | [Grafana-install]: https://grafana.com/get 29 | [PostgreSQL datasource]: https://grafana.com/docs/features/datasources/postgres/ 30 | [alert rules]: https://grafana.com/docs/alerting/rules/ 31 | [datadog-install]: https://docs.datadoghq.com/integrations/postgres/ 32 | [nagios-install]: https://www.nagios.com/solutions/postgres-monitoring/ 33 | [zabbix-install]: https://www.zabbix.com/documentation/current/manual/quickstart/notification 34 | -------------------------------------------------------------------------------- /using-timescaledb/data-retention.md: -------------------------------------------------------------------------------- 1 | # Data Retention [](data-retention) 2 | 3 | An intrinsic part of time-series data is that new data is accumulated 4 | and old data is rarely, if ever, updated and the relevance of the data 5 | diminishes over time. It is therefore often desirable to delete old 6 | data to save disk space. 7 | 8 | As an example, if you have a hypertable definition of `conditions` 9 | where you collect raw data into chunks of one day: 10 | 11 | ```sql 12 | CREATE TABLE conditions( 13 | time TIMESTAMPTZ NOT NULL, 14 | device INTEGER, 15 | temperature FLOAT 16 | ); 17 | 18 | SELECT * FROM create_hypertable('conditions', 'time', 19 | chunk_time_interval => INTERVAL '1 day'); 20 | ``` 21 | 22 | If you collect a lot of data and realize that you never actually use 23 | raw data older than 30 days, you might want to delete data older than 24 | 30 days from `conditions`. 25 | 26 | However, deleting large swaths of data from tables can be costly and 27 | slow if done row-by-row using the standard `DELETE` command. Instead, 28 | TimescaleDB provides a function `drop_chunks` that quickly drop data 29 | at the granularity of chunks without incurring the same overhead. 30 | 31 | For example: 32 | 33 | ```sql 34 | SELECT drop_chunks('conditions', INTERVAL '24 hours'); 35 | ``` 36 | 37 | This will drop all chunks from the hypertable `conditions` that _only_ 38 | include data older than this duration, and will _not_ delete any 39 | individual rows of data in chunks. 40 | 41 | For example, if one chunk has data more than 36 hours old, a second 42 | chunk has data between 12 and 36 hours old, and a third chunk has the 43 | most recent 12 hours of data, only the first chunk is dropped when 44 | executing `drop_chunks`. Thus, in this scenario, 45 | the `conditions` hypertable will still have data stretching back 36 hours. 46 | 47 | For more information on the `drop_chunks` function and related 48 | parameters, please review the [API documentation][drop_chunks]. 49 | 50 | ### Automatic Data Retention Policies [](retention-policy) 51 | 52 | TimescaleDB includes a background job scheduling framework for automating data 53 | management tasks, such as enabling easy data retention policies. 54 | 55 | To add such data retention policies, a database administrator can create, 56 | remove, or alter policies that cause `drop_chunks` to be automatically executed 57 | according to some defined schedule. 
58 | 59 | To add such a policy on a hypertable, so that chunks older than 24 60 | hours are continually deleted, simply execute the command: 61 | ```sql 62 | SELECT add_retention_policy('conditions', INTERVAL '24 hours'); 63 | ``` 64 | 65 | To subsequently remove the policy: 66 | ```sql 67 | SELECT remove_retention_policy('conditions'); 68 | ``` 69 | 70 | The scheduler framework also allows one to view scheduled jobs: 71 | ```sql 72 | SELECT * FROM timescaledb_information.job_stats; 73 | ``` 74 | 75 | For more information, please see the [API documentation][add_retention_policy]. 76 | 77 | ### Data Retention with Continuous Aggregates [](retention-with-aggregates) 78 | 79 | Extra care must be taken when using retention policies or `drop_chunks` calls on 80 | hypertables that have [continuous aggregates][continuous_aggregates] defined on 81 | them. Similar to a refresh of a materialized view, a refresh on a continuous aggregate 82 | will update the aggregate to reflect changes in the underlying source data. This means 83 | that any chunks that are dropped in the region still being refreshed by the 84 | continuous aggregate will cause the chunk data to disappear from the aggregate as 85 | well. If the intent is to keep the aggregate while dropping the underlying data, 86 | the interval being dropped should not overlap with the offsets for the continuous 87 | aggregate. 88 | 89 | As an example, let's add a continuous aggregate to our `conditions` hypertable: 90 | ```sql 91 | CREATE MATERIALIZED VIEW conditions_summary_daily (day, device, temp) 92 | WITH (timescaledb.continuous) AS 93 | SELECT time_bucket('1 day', time), device, avg(temperature) 94 | FROM conditions 95 | GROUP BY (1, 2); 96 | 97 | SELECT add_continuous_aggregate_policy('conditions_summary_daily', '7 days', '1 day', '1 day'); 98 | ``` 99 | 100 | This will create the `conditions_summary_daily` aggregate, which will store the daily 101 | temperature per device from our `conditions` table. However, we have a problem here 102 | if we're using our 24-hour retention policy from above, as our aggregate will capture 103 | changes to the data for up to seven days. As a result, we will update the aggregate 104 | when we drop the chunk from the table, and we'll ultimately end up with no data in our 105 | `conditions_summary_daily` table. 106 | 107 | We can fix this by replacing the `conditions` retention policy with one having a more 108 | suitable interval: 109 | ```sql 110 | SELECT remove_retention_policy('conditions'); 111 | SELECT add_retention_policy('conditions', INTERVAL '30 days'); 112 | ``` 113 | 114 | It's worth noting that continuous aggregates are also valid targets for `drop_chunks` 115 | and retention policies. To continue our example, we now have our `conditions` table 116 | holding the last 30 days' worth of data, and our `conditions_summary_daily` holding 117 | average daily values for an indefinite window after that.
The following will change 118 | this to also drop the aggregate data after 600 days: 119 | 120 | ```sql 121 | SELECT add_retention_policy('conditions_summary_daily', INTERVAL '600 days'); 122 | ``` 123 | 124 | [drop_chunks]: /api#drop_chunks 125 | [add_retention_policy]: /api#add_retention_policy 126 | [continuous_aggregates]: /using-timescaledb/continuous-aggregates 127 | -------------------------------------------------------------------------------- /using-timescaledb/data-tiering.md: -------------------------------------------------------------------------------- 1 | # Data Tiering 2 | 3 | TimescaleDB includes the ability to perform data tiering by moving chunks 4 | between PostgreSQL tablespaces. Tablespaces are locations on disk where 5 | PostgreSQL stores data files containing database objects, and each can be 6 | backed by a different class of storage. As data ages, you can add new 7 | tablespaces backed by a specified storage class and use the 8 | [`move_chunk`][api-move-chunk] API function to migrate data between these 9 | tablespaces. 10 | 11 | For example, we can attach multiple tablespaces to a single hypertable; in the 12 | following example, we use two tablespaces: 13 | 14 | 1. Tablespace `pg_default` is backed by faster, more expensive storage 15 | (SSDs) and is meant for recent chunks that are being actively written to and 16 | regularly queried. 17 | 18 | 1. Tablespace `history` is backed by slower, less expensive storage 19 | (HDDs) and is meant for older chunks that are more rarely queried. 20 | 21 | Taking a "data tiering" approach, as data ages, its corresponding chunks are 22 | moved from `pg_default` to `history`. This provides users with the ability to 23 | trade off storage performance for cost, and additional "tiers" of increasingly 24 | large/cheap/slow tablespaces may be employed when appropriate. Therefore, data 25 | tiering provides another mechanism, in addition to other TimescaleDB 26 | capabilities like compression and data retention, to help manage data storage 27 | costs. 28 | 29 | Using multiple tablespaces can also yield I/O performance benefits. With data 30 | tiering, you can isolate large scans of historical data away from the continual 31 | read/write workload against recent data (in the default tablespace). 32 | 33 | ## Creating a Tablespace 34 | 35 | The [`move_chunk`][api-move-chunk] function requires multiple tablespaces set up in PostgreSQL, so let's 36 | start with a quick review of how this works. 37 | 38 | First, add a storage mount that will serve as a home for your new tablespace. This 39 | process will differ based on how your database is deployed, but your system administrator 40 | should be able to arrange setting up the mount point. The key here is to provision 41 | your tablespace with storage that is appropriate for how its resident data will be used. 42 | 43 | To create a [tablespace][] in Postgres: 44 | 45 | ```sql 46 | CREATE TABLESPACE history 47 | OWNER postgres 48 | LOCATION '/mnt/history'; 49 | ``` 50 | 51 | Here we are creating a tablespace called `history` that will be 52 | owned by the default `postgres` user, using the storage mounted at `/mnt/history`. 53 | 54 | ## Move Chunks :community_function: [](move_chunks) 55 | 56 | Now that we have set up a new, empty tablespace, we can move individual chunks 57 | there from the default tablespace. The `move_chunk` command also allows you 58 | to move indexes belonging to those chunks to the secondary tablespace (or 59 | another one).
60 | 61 | In addition, the [`move_chunk`][api-move-chunk] function has the 62 | ability to "reorder" the chunk during the migration in order to enable faster 63 | queries. This behavior is similar to [`reorder_chunk`][api-reorder-chunk]; please 64 | see that documentation for more information. 65 | 66 | To determine which chunks to move, we can list chunks that fit a specific 67 | criteria. For example, to identify chunks older than two days: 68 | 69 | ```sql 70 | SELECT show_chunks('conditions', older_than => INTERVAL '2 days'); 71 | ``` 72 | 73 | We then can move `_timescaledb_internal._hyper_1_4_chunk` along with its index 74 | over to `history`, while reordering the chunk based on its time index: 75 | 76 | 77 | ```sql 78 | SELECT move_chunk( 79 | chunk => '_timescaledb_internal._hyper_1_4_chunk', 80 | destination_tablespace => 'history', 81 | index_destination_tablespace => 'history', 82 | reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx', 83 | verbose => TRUE 84 | ); 85 | ``` 86 | Once this successfully executes, we can verify that our chunk now lives on the 87 | `history` tablespace by querying `pg_tables` to list all of the chunks that 88 | are on `history`: 89 | 90 | ```sql 91 | SELECT tablename from pg_tables 92 | WHERE tablespace = 'history' and tablename like '_hyper_%_%_chunk'; 93 | ``` 94 | 95 | As you will see, the target chunk is now listed as residing on `history`; we 96 | can similarly validate the location of our index: 97 | 98 | ```sql 99 | SELECT indexname FROM pg_indexes WHERE tablespace = 'history'; 100 | ``` 101 | 102 | ## Additional data tiering examples [](other-examples) 103 | 104 | After moving a chunk to a slower tablespace, you may want to move a chunk back 105 | to the default, faster tablespace: 106 | 107 | ```sql 108 | SELECT move_chunk( 109 | chunk => '_timescaledb_internal._hyper_1_4_chunk', 110 | destination_tablespace => 'pg_default', 111 | index_destination_tablespace => 'pg_default', 112 | reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx' 113 | ); 114 | ``` 115 | 116 | Alternatively, you may decide to move a data chunk to your slower tablespace, 117 | but keep the chunk's indexes on the default, faster tablespace: 118 | 119 | ```sql 120 | SELECT move_chunk( 121 | chunk => '_timescaledb_internal._hyper_1_4_chunk', 122 | destination_tablespace => 'history', 123 | index_destination_tablespace => 'pg_default', 124 | reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx' 125 | ); 126 | ``` 127 | 128 | You could perform the opposite as well (keeping the data in `pg_default` but 129 | moving the index to `history`), or setup a third tablespace 130 | (`history_indexes`) and move the data to `history` and its corresponding 131 | indexes to `history_indexes`. 132 | 133 | Finally, with the introduction of user-exposed automation in TimescaleDB 2.0, 134 | you can use `move_chunk` within TimescaleDB's job scheduler framework. Please see 135 | our [Actions documentation][actions] for more information. 136 | 137 | [api-move-chunk]: /api#move_chunk 138 | [api-reorder-chunk]: /api#reorder_chunk 139 | [tablespace]: https://www.postgresql.org/docs/10/sql-createtablespace.html 140 | [actions]: /using-timescaledb/actions 141 | -------------------------------------------------------------------------------- /using-timescaledb/ingesting-data.md: -------------------------------------------------------------------------------- 1 | # Ingesting data 2 | 3 | TimescaleDB can support standard SQL inserts. 
Read more about how to use 4 | SQL to write data into TimescaleDB in our [Writing Data][writing-data] section. 5 | 6 | Users often choose to leverage existing third-party tools to build data ingest pipelines 7 | that increase ingest rates by performing batch writes into TimescaleDB, as opposed 8 | to inserting data one row or metric at a time. At a high level, TimescaleDB looks just 9 | like PostgreSQL, so any tool that can read and/or write to PostgreSQL also works with 10 | TimescaleDB. 11 | 12 | Below, we discuss some popular frameworks and systems used in conjunction with TimescaleDB. 13 | 14 | ## Prometheus [](prometheus) 15 | 16 | Prometheus is a popular tool used to monitor infrastructure metrics. It can scrape any 17 | endpoints that expose metrics in a Prometheus-compatible format. The metrics are stored in 18 | Prometheus and can be queried using PromQL. Prometheus itself is not built for long-term 19 | metrics storage, and instead supports a variety of remote storage solutions. 20 | 21 | We developed [Promscale][promscale-blog], which allows Prometheus to use TimescaleDB as a 22 | remote store for long-term metrics. Promscale supports both PromQL and SQL: PromQL queries 23 | can be directed to the Promscale endpoint or the Prometheus instance, and the [SQL 24 | API][promscale-sql] can be accessed by connecting to TimescaleDB directly. It also offers 25 | other native time-series capabilities, such as automatically [compressing your 26 | data][timescale-compression], retention policies, continuous aggregate views, 27 | downsampling, data gap-filling, and interpolation. It is already natively supported by 28 | Grafana via the [Prometheus][prometheus-grafana] and 29 | [PostgreSQL/TimescaleDB][postgres-grafana] data sources. 30 | 31 | Read more about Promscale and how we designed it to perform well in our [design 32 | doc][design-doc], or check out our [GitHub project][promscale-github]. 33 | 34 | ## PostgreSQL and TimescaleDB output plugin for Telegraf [](postgresql-and-timescaledb-output-plugin-for-telegraf) 35 | 36 | Telegraf is an agent that collects, processes, aggregates, and writes metrics. Since it is plugin-driven for both the 37 | collection and the output of data, it is easily extensible. In fact, it already contains over 200 plugins for gathering and 38 | writing different types of data. 39 | 40 | We wrote the PostgreSQL output plugin, which can also send data to a TimescaleDB hypertable. Telegraf handles 41 | batching, processing, and aggregating the data collected prior to inserting that data into TimescaleDB. 42 | 43 | 44 | >:WARNING: The [pull request][pull-request] is open and currently under review by the Telegraf developers, waiting to be 45 | merged. To give users the opportunity to try this functionality, we built [downloadable binaries][downloadable-binaries] of 46 | Telegraf with our plugin already included. 47 | 48 | The PostgreSQL plugin builds on the ease of use users get from Telegraf by handling schema generation and 49 | modification. This means that as metrics are collected by Telegraf, the plugin creates a table if it doesn’t exist and alters 50 | the table if the schema has changed. By default, the plugin leverages a [wide model][wide-model], which is typically the schema 51 | model that TimescaleDB users tend to choose when storing metrics. However, you can specify that you want to store metrics in a 52 | narrow model with a separate metadata table and foreign keys. You can also choose to use JSONB.
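To make the wide-versus-narrow distinction concrete, here is a rough sketch of what the two layouts might look like for CPU metrics. The table and column names are purely illustrative; the actual schema is generated and maintained by the plugin itself.

```sql
-- Wide model (default): one row per measurement, one column per collected field.
CREATE TABLE cpu (
    time         TIMESTAMPTZ NOT NULL,
    host         TEXT,
    usage_user   DOUBLE PRECISION,
    usage_system DOUBLE PRECISION
);
SELECT create_hypertable('cpu', 'time');

-- Narrow model: one row per metric value, with metadata factored out into a
-- separate table that is referenced by foreign key.
CREATE TABLE metric_labels (
    label_id SERIAL PRIMARY KEY,
    labels   JSONB
);

CREATE TABLE metrics (
    time     TIMESTAMPTZ NOT NULL,
    name     TEXT,
    value    DOUBLE PRECISION,
    label_id INTEGER REFERENCES metric_labels (label_id)
);
SELECT create_hypertable('metrics', 'time');
```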
53 | 54 | To get started with the PostgreSQL and TimescaleDB output plugin, visit the [tutorial][telegraf-tutorial]. 55 | 56 | ## PostgreSQL's Kafka connector [](postgresqls-kafka-connector) 57 | 58 | Another popular method of ingesting data into TimescaleDB is through the use of 59 | the [PostgreSQL connector with Kafka Connect][postgresql-connector-with-kafka-connect]. 60 | The connector is designed to work with [Kafka Connect][kafka-connect] and to be 61 | deployed to a Kafka Connect runtime service. Its purpose is to ingest change 62 | events from PostgreSQL databases (including TimescaleDB). 63 | 64 | The deployed connector will monitor one or more schemas within a TimescaleDB 65 | server and write all change events to Kafka topics, which can be independently 66 | consumed by one or more clients. Kafka Connect can be run in distributed mode to provide 67 | fault tolerance, ensuring that the connectors are running and continually keeping 68 | up with changes in the database. 69 | 70 | >:TIP: The PostgreSQL connector can also be used as a library without Kafka or 71 | Kafka Connect, enabling applications and services to directly connect to 72 | TimescaleDB and obtain the ordered change events. This approach requires the 73 | application to record the progress of the connector so that upon restart, 74 | the connector can continue where it left off. This approach may be useful for 75 | less critical use cases. However, for production use cases, it’s recommended 76 | that you use this connector with Kafka and Kafka Connect. 77 | 78 | To start using the PostgreSQL connector, visit the [GitHub page][github-debezium]. 79 | If you are interested in an alternative method to ingest data from Kafka into 80 | TimescaleDB, you can download the [StreamSets Data Collector][streamsets-data-collector] 81 | and get started with this [tutorial][tutorial-streamsets].
82 | 83 | 84 | [writing-data]: /using-timescaledb/writing-data 85 | [prometheus-grafana]: https://grafana.com/docs/grafana/latest/datasources/prometheus/ 86 | [postgres-grafana]: https://grafana.com/docs/grafana/latest/datasources/postgres/ 87 | [promscale-blog]: https://blog.timescale.com/blog/promscale-analytical-platform-long-term-store-for-prometheus-combined-sql-promql-postgresql/ 88 | [promscale-sql]: https://github.com/timescale/promscale/blob/master/docs/sql_schema.md 89 | [timescale-compression]: https://blog.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/ 90 | [grafana]: /using-timescaledb/visualizing-data#grafana 91 | [other-viz-tools]: /using-timescaledb/visualizing-data#other-viz-tools 92 | [pull-request]: https://github.com/influxdata/telegraf/pull/3428 93 | [downloadable-binaries]: https://docs.timescale.com/tutorials/telegraf-output-plugin#telegraf-installation 94 | [wide-model]: https://docs.timescale.com/introduction/data-model 95 | [telegraf-tutorial]: https://docs.timescale.com/tutorials/telegraf-output-plugin 96 | [postgresql-connector-with-kafka-connect]: https://github.com/debezium/debezium/tree/master/debezium-connector-postgres 97 | [kafka-connect]: http://kafka.apache.org/documentation.html#connect 98 | [github-debezium]: https://github.com/debezium/debezium/tree/master/debezium-connector-postgres 99 | [streamsets-data-collector]: https://streamsets.com/opensource 100 | [tutorial-streamsets]: https://streamsets.com/blog/ingesting-data-apache-kafka-timescaledb/ 101 | -------------------------------------------------------------------------------- /using-timescaledb/limitations.md: -------------------------------------------------------------------------------- 1 | # Limitations [](limitations) 2 | 3 | While TimescaleDB generally offers capabilities that go beyond what 4 | PostgreSQL offers, there are some limitations to using hypertables, 5 | and, in particular, distributed hypertables. This section documents 6 | the common limitations when using both regular and distributed 7 | hypertables. 8 | 9 | ## Hypertable Limitations [](hypertable-limitations) 10 | 11 | - Foreign key constraints referencing a hypertable are not supported. 12 | - Time dimensions (columns) used for partitioning cannot have NULL 13 | values. 14 | - Unique indexes must include all columns that are partitioning 15 | dimensions. 16 | - `UPDATE` statements that move values between partitions (chunks) are 17 | not supported. This includes upserts (`INSERT ... ON CONFLICT 18 | UPDATE`). 19 | 20 | ## Distributed Hypertable Limitations [](distributed-hypertable-limitations) 21 | 22 | All the limitations of regular hypertables also apply to distributed 23 | hypertables. In addition, the following limitations apply specifically 24 | to distributed hypertables: 25 | 26 | - Distributed scheduling of background jobs is not supported. Background jobs 27 | created on an access node are scheduled and executed on this access node 28 | without distributing the jobs to data nodes. 29 | - Continuous aggregates are not supported. 30 | - Compression policies are not supported. However, you can enable 31 | compression on the distributed hypertable and manually 32 | execute `compress_chunk`. 33 | - Reordering chunks is not supported. 34 | - Tablespaces cannot be attached to a distributed hypertable on the 35 | access node. It is still possible attach tablespaces on data nodes. 
36 | - Roles and permissions are assumed to be consistent across the nodes 37 | of a distributed database, but consistency is not enforced. 38 | - Joins on data nodes are not supported. Joining a distributed 39 | hypertable with another table requires the other table to reside on 40 | the access node. This also limits the performance of joins on 41 | distributed hypertables. 42 | - Tables referenced by foreign key constraints in a distributed 43 | hypertable must be present on the access node and all data 44 | nodes. This applies also to referenced values. 45 | - Parallel-aware scans and appends are not supported. 46 | - A consistent restore point for backup/restore across nodes is not 47 | natively provided; care must be taken when restoring individual 48 | backups to access and data nodes. 49 | - Native replication limitations are described [here][native-replication]. 50 | - User defined functions have to be manually installed on the data nodes 51 | so that the function definition is available on both access and data 52 | nodes. This is particularly relevant for functions that are 53 | registered with `set_integer_now_func`. 54 | 55 | Note that these limitations concern usage from the access node. Some 56 | currently unsupported features (like compression policy or 57 | continuous aggregates) might still work on individual data nodes, but 58 | such usage is neither tested nor officially supported. Future versions 59 | of TimescaleDB might remove some of these limitations. 60 | 61 | [native-replication]: /using-timescaledb/distributed-hypertables#native-replication 62 | -------------------------------------------------------------------------------- /using-timescaledb/telemetry.md: -------------------------------------------------------------------------------- 1 | # Telemetry and Version Checking 2 | We enable anonymous usage sharing to help us better 3 | understand and assist TimescaleDB users, as well as provide automated version 4 | checks. We emphasize that privacy of our users is paramount, so we do not 5 | collect any personally-identifying information. The following is an example of 6 | the JSON that is sent to our servers about a specific deployment: 7 | 8 | ```javascript 9 | { 10 | "db_uuid": "26917841-2fc0-48fd-b096-ba19b3fda98f", 11 | "license": { 12 | "edition": "community" 13 | }, 14 | "exported_db_uuid": "8dd4543c-f44e-43c9-a666-02d23bb09b90", 15 | "installed_time": "2000-04-17 10:56:59.427738-04", 16 | "last_tuned_time": "2001-02-03T04:05:06-0300", 17 | "last_tuned_version": "1.0.0", 18 | "install_method": "source", 19 | "os_name": "Linux", 20 | "os_release": "4.9.125-linuxkit", 21 | "os_version": "#1 SMP Fri Sep 7 08:20:28 UTC 2018", 22 | "os_name_pretty": "Debian GNU/Linux 8 (jessie)", 23 | "postgresql_version": "12.4", 24 | "timescaledb_version": "1.7.0", 25 | "build_architecture": "x86_64", 26 | "build_architecture_bit_size": "64", 27 | "build_os_name": "Linux", 28 | "build_os_version": "4.9.125-linuxkit", 29 | "data_volume": "65982148", 30 | "db_metadata":{ 31 | "promscale_version": "0.1.0", 32 | "promscale_commit_hash": "" 33 | }, 34 | "num_hypertables": "3", 35 | "num_continuous_aggs": "0", 36 | "num_reorder_policies": "1", 37 | "num_drop_chunks_policies": "2", 38 | "related_extensions":{ 39 | "pg_prometheus": "false", 40 | "PostGIS": "true", 41 | "promscale": "true" 42 | } 43 | } 44 | ``` 45 | 46 | In particular, the `UUID` fields contain no identifying information. 
47 | Both `UUID` fields are randomly generated by appropriately seeded 48 | random number generators. For full transparency, we expose a 49 | new API function, [`get_telemetry_report`][get_telemetry_report], that returns 50 | a text string of the exact JSON that is sent to our servers. 51 | 52 | Additionally any content of the table `_timescaledb_catalog.metadata` which has 53 | `include_in_telemetry` set to `true` and the value of `timescaledb_telemetry.cloud` 54 | will be included in the telemetry report. 55 | 56 | Notably, telemetry reports a different set of values depending on the license 57 | that your TimescaleDB instance is running under. If you are using OSS or Community, 58 | we only send an "edition" field, which could have a value of either "apache_only" or "community", 59 | as relevant. 60 | 61 | 62 | ## Version Checking 63 | The database sends telemetry reports periodically in the background. 64 | In response to the telemetry report, the database will receive the most recent 65 | version of TimescaleDB available for installation. This version will be 66 | recorded in the user’s server logs, along with any applicable out-of-date 67 | version warnings. While you do not have to update immediately to the newest 68 | release, many users have reported that performance issues or bugs 69 | automatically resolve after updating their version of TimescaleDB. 70 | 71 | ## Disabling Telemetry 72 | Although we invite our community to help us keep improving our 73 | product, we do understand when users would like to disable telemetry. Note that 74 | disabling telemetry also disables the version checking functionality. 75 | 76 | Telemetry is sent on a per-database basis, so users can disable telemetry for specific databases or for an entire instance. 77 | 78 | To turn off telemetry for an instance, simply include the following line 79 | in your `postgresql.conf` file: 80 | 81 | ``` 82 | timescaledb.telemetry_level=off 83 | ``` 84 | 85 | Alternatively, in a `psql` console, run: 86 | 87 | ``` 88 | ALTER [SYSTEM | DATABASE | USER] { *db_name* | *role_specification* } SET timescaledb.telemetry_level=off 89 | ``` 90 | 91 | If `ALTER DATABASE` is run, then this will disable telemetry for the specified 92 | database, but not for other databases in the instance. If `ALTER SYSTEM` is 93 | run, this will disable telemetry for the entire instance. 94 | Note that superuser privileges are necessary to run `ALTER SYSTEM`. 95 | 96 | After running the desired command, reload the new server configuration with `SELECT pg_reload_conf()` in order 97 | for the configuration changes to take effect. 98 | 99 | If at a later time you wish to re-enable version checking and telemetry, either 100 | include the following line in `postgresql.conf`: 101 | 102 | ``` 103 | timescaledb.telemetry_level=basic 104 | ``` 105 | 106 | or run the following command in psql: 107 | 108 | ``` 109 | ALTER [SYSTEM | DATABASE | USER] { *db_name* | *role_specification* } SET timescaledb.telemetry_level=basic 110 | ``` 111 | 112 | [get_telemetry_report]: /api#get_telemetry_report 113 | -------------------------------------------------------------------------------- /using-timescaledb/tooling.md: -------------------------------------------------------------------------------- 1 | # Tooling 2 | 3 | We’ve created several open-source tools to help users make the most out of their experience with TimescaleDB. 
4 | 5 | ## `timescaledb-tune` [](ts-tune) 6 | 7 | [`timescaledb-tune`][tstune] is a command-line tool that helps you tune and configure your TimescaleDB/PostgreSQL instances to leverage your existing hardware for better performance. It accomplishes this by adjusting the settings to match your system's CPU, memory resources, and PostgreSQL version. 8 | 9 | `timescaledb-tune` is packaged along with our binary releases as a dependency, so if you installed one of our binary releases (including Docker), you should have access to the tool. Alternatively, with a standard Go environment, you can `go get` the repository to install it. 10 | 11 | The tool will first analyze the existing `postgresql.conf` file to ensure that the TimescaleDB extension is appropriately installed, and then it will provide recommendations for memory, parallelism, WAL, and other settings. These changes are written to your `postgresql.conf` and will take effect on the next (re)start. If you are starting on fresh instance and don't feel the need to approve each group of changes, you can automatically accept and append the suggestions to the end of your `postgresql.conf`. 12 | 13 | For more information on how to get started with `timescaledb-tune`, visit the [GitHub repo][github-tstune]. 14 | 15 | ## `timescaledb-parallel-copy` [](ts-copy) 16 | 17 | [`timescaledb-parallel-copy`][tscopy] is a command-line program for parallelizing PostgreSQL's built-in COPY functionality for bulk inserting data into TimescaleDB. When getting started with TimescaleDB, we recommend this program as a good way to get better bulk insert performance. 18 | 19 | The purpose of this tool is to speed up large data migrations by running multiple `COPYs` concurrently. In addition to parallelizing the workload, the tool also offers flags to improve the copy experience. 20 | 21 | To get started with `timescaledb-parallel-copy`, visit the [GitHub repo][tscopy]. 22 | 23 | [tstune]: https://github.com/timescale/timescaledb-tune 24 | [github-tstune]: https://github.com/timescale/timescaledb-tune 25 | [tscopy]: https://github.com/timescale/timescaledb-parallel-copy 26 | -------------------------------------------------------------------------------- /using-timescaledb/troubleshooting.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting 2 | 3 | If you run into problems when using TimescaleDB, there are a few things that you 4 | can do. There are some solutions to common errors below as well as ways to output 5 | diagnostic information about your setup. If you need more guidance, you can join 6 | the support [slack group][slack] or post an issue on the TimescaleDB [github][]. 7 | 8 | ## Common Errors 9 | ### Error updating TimescaleDB when using a third-party PostgreSQL admin tool. 10 | 11 | The update command `ALTER EXTENSION timescaledb UPDATE` must be the first command 12 | executed upon connection to a database. Some admin tools execute command before 13 | this, which can disrupt the process. It may be necessary for you to manually update 14 | the database with `psql`. See our [update docs][update-db] for details. 15 | 16 | ### Log error: could not access file "timescaledb" [](access-timescaledb) 17 | 18 | If your PostgreSQL logs have this error preventing it from starting up, 19 | you should double check that the TimescaleDB files have been installed 20 | to the correct location. Our installation methods use `pg_config` to 21 | get PostgreSQL's location. 
However, if you have multiple versions of 22 | PostgreSQL installed on the same machine, the location `pg_config` 23 | points to may not be for the version you expect. To check which 24 | version TimescaleDB is using: 25 | ```bash 26 | $ pg_config --version 27 | PostgreSQL 12.3 28 | ``` 29 | 30 | If that is the correct version, double-check that the installation path is 31 | the one you'd expect. For example, for PostgreSQL 11.0 installed via 32 | Homebrew on macOS it should be `/usr/local/Cellar/postgresql/11.0/bin`: 33 | ```bash 34 | $ pg_config --bindir 35 | /usr/local/Cellar/postgresql/11.0/bin 36 | ``` 37 | 38 | If either of those outputs does not show the version you are expecting, you need 39 | to either (a) uninstall the incorrect version of PostgreSQL if you can or 40 | (b) update your `PATH` environment variable to have the correct 41 | path of `pg_config` listed first, i.e., by prepending the full path: 42 | ```bash 43 | $ export PATH=/usr/local/Cellar/postgresql/11.0/bin:$PATH 44 | ``` 45 | Then, reinstall TimescaleDB and it should find the correct installation 46 | path. 47 | 48 | ### ERROR: could not access file "timescaledb-\<version\>": No such file or directory [](alter-issue) 49 | 50 | If the error occurs immediately after updating your version of TimescaleDB and 51 | the file mentioned is from the previous version, it is probably due to an incomplete 52 | update process. Within the greater PostgreSQL server instance, each 53 | database that has TimescaleDB installed needs to be updated with the SQL command 54 | `ALTER EXTENSION timescaledb UPDATE;` while connected to that database. Otherwise, 55 | the database will be looking for the previous version of the timescaledb files. 56 | 57 | See [our update docs][update-db] for more info. 58 | 59 | --- 60 | 61 | ## Getting more information 62 | 63 | ### EXPLAINing query performance [](explain) 64 | 65 | PostgreSQL's EXPLAIN feature allows users to understand the underlying query 66 | plan that PostgreSQL uses to execute a query. There are multiple ways that 67 | PostgreSQL can execute a query: for example, a query might be fulfilled using a 68 | slow sequential scan or a much more efficient index scan. The choice of plan 69 | depends on what indexes are created on the table, the statistics that PostgreSQL 70 | has about your data, and various planner settings. The EXPLAIN output lets you 71 | know which plan PostgreSQL is choosing for a particular query. PostgreSQL has an 72 | [in-depth explanation][using explain] of this feature. 73 | 74 | To understand the query performance on a hypertable, we suggest first 75 | making sure that the planner statistics and table maintenance are up to date on the hypertable 76 | by running `VACUUM ANALYZE <hypertable>;`. Then, we suggest running the 77 | following version of EXPLAIN: 78 | 79 | ``` 80 | EXPLAIN (ANALYZE on, BUFFERS on) <original query>; 81 | ``` 82 | 83 | If you suspect that your performance issues are due to slow I/O from disk, you 84 | can get even more information by enabling the 85 | [track\_io\_timing][track_io_timing] variable with `SET track_io_timing = 'on';` 86 | before running the above EXPLAIN. 87 | 88 | When asking query-performance-related questions in our [support portal][] 89 | or via [slack][], providing the EXPLAIN output of a 90 | query is immensely helpful.
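Putting these steps together, a typical diagnostic session against a hypothetical `conditions` hypertable might look like the following; substitute your own table and whichever query is actually slow.

```sql
-- Refresh planner statistics and perform table maintenance first.
VACUUM ANALYZE conditions;

-- Optionally include per-query I/O timings in the plan output.
SET track_io_timing = 'on';

-- Capture the actual execution plan, run times, and buffer usage.
EXPLAIN (ANALYZE on, BUFFERS on)
SELECT time_bucket('1 hour', time) AS hour, avg(temperature)
FROM conditions
WHERE time > now() - INTERVAL '7 days'
GROUP BY 1
ORDER BY 1;
```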
91 | 92 | --- 93 | 94 | ## Dump TimescaleDB metadata [](dump-meta-data) 95 | 96 | To help when asking for support and reporting bugs, 97 | TimescaleDB includes a SQL script that outputs metadata 98 | from the internal TimescaleDB tables as well as version information. 99 | The script is available in the source distribution in `scripts/` 100 | but can also be [downloaded separately][]. 101 | To use it, run: 102 | 103 | ```bash 104 | psql [your connect flags] -d your_timescale_db < dump_meta_data.sql > dumpfile.txt 105 | ``` 106 | 107 | and then inspect `dumpfile.txt` before sending it together with a bug report or support question. 108 | 109 | [slack]: https://slack.timescale.com/ 110 | [github]: https://github.com/timescale/timescaledb/issues 111 | [update-db]: /update-timescaledb 112 | [using explain]: https://www.postgresql.org/docs/current/static/using-explain.html 113 | [track_io_timing]: https://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-IO-TIMING 114 | [downloaded separately]: https://raw.githubusercontent.com/timescale/timescaledb/master/scripts/dump_meta_data.sql 115 | [support portal]: https://www.timescale.com/support 116 | -------------------------------------------------------------------------------- /using-timescaledb/update-db.md: -------------------------------------------------------------------------------- 1 | # Updating software versions [](update) 2 | 3 | This section describes how to upgrade between different versions of 4 | TimescaleDB. Since version 0.1.0, TimescaleDB supports **in-place updates**: 5 | you don't need to dump and restore your data, and versions are published with 6 | automated migration scripts that convert any internal state if necessary. 7 | 8 | >:TIP: If you are looking to upgrade the version of the **PostgreSQL instance** (e.g. from 11 to 12) rather than the version of the TimescaleDB extension, you have two choices. Either use [`pg_upgrade`][pg_upgrade] with the command: 9 | > ``` 10 | > pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir 11 | > ``` 12 | > or [backup][] and then restore into a new version of the instance. 13 | 14 | ### Using ALTER EXTENSION 15 | 16 | Software upgrades use PostgreSQL's `ALTER EXTENSION` support to update to the 17 | latest version. Since 0.9.0, TimescaleDB supports having different extension 18 | versions on different databases within the same PostgreSQL instance. This 19 | allows you to update extensions independently on different databases. The 20 | upgrade process involves three steps: 21 | 22 | 1. Optionally, perform a [backup][] of your database via `pg_dump`. 23 | 1. [Install][] the latest version of the TimescaleDB extension. 24 | 1. Execute the following `psql` command inside any database that you want to 25 | update: 26 | 27 | ```sql 28 | ALTER EXTENSION timescaledb UPDATE; 29 | ``` 30 | 31 | >:WARNING: When executing `ALTER EXTENSION`, you should connect using `psql` 32 | with the `-X` flag to prevent any `.psqlrc` commands from accidentally 33 | triggering the load of a previous TimescaleDB version on session startup. 34 | It must also be the first command you execute in the session. 35 | 36 | 37 | >:WARNING: When upgrading from an old version of TimescaleDB before upgrading 38 | to version 0.12.0 or version 1.5.0, 39 | you will need to restart your database before calling `ALTER EXTENSION`. 40 | After upgrading to 1.6.1, you will need to restart the database 41 | before restoring a backup.
42 | Remember that restarting PostgreSQL is accomplished via different 43 | commands on different platforms: 44 | - Linux services: `sudo service postgresql restart` 45 | - Mac Homebrew: `brew services restart postgresql` 46 | - Docker: see below 47 | 48 | 49 | 50 | >:WARNING: If you are upgrading from a version before 0.11.0 make sure your 51 | root table does not contain data otherwise the update will fail. 52 | Data can be migrated as follows: 53 | ```sql 54 | BEGIN; 55 | SET timescaledb.restoring = 'off'; 56 | INSERT INTO hypertable SELECT * FROM ONLY hypertable; 57 | SET timescaledb.restoring = 'on'; 58 | TRUNCATE ONLY hypertable; 59 | SET timescaledb.restoring = 'off'; 60 | COMMIT; 61 | ``` 62 | 63 | This will upgrade TimescaleDB to the latest installed version, even if you 64 | are several versions behind. 65 | 66 | After executing the command, the psql `\dx` command should show the latest version: 67 | 68 | ```sql 69 | \dx timescaledb 70 | 71 | Name | Version | Schema | Description 72 | -------------+---------+------------+--------------------------------------------------------------------- 73 | timescaledb | x.y.z | public | Enables scalable inserts and complex queries for time-series data 74 | (1 row) 75 | ``` 76 | 77 | >:TIP: Beginning in v0.12.0, [telemetry][] reporting will also enable automatic 78 | >version checking. If you have enabled telemetry, TimescaleDB will 79 | >periodically notify you via server logs if there is a new version 80 | >of TimescaleDB available. 81 | 82 | ### Example: Migrating docker installations [](update-docker) 83 | 84 | As a more concrete example, the following steps should be taken with a docker 85 | installation to upgrade to the latest TimescaleDB version, while 86 | retaining data across the updates. 87 | 88 | The following instructions assume that your docker instance is named 89 | `timescaledb`. If not, replace this name with the one you use in the subsequent 90 | commands. 91 | 92 | #### Step 1: Pull new image [](update-docker-1) 93 | Install the latest TimescaleDB image: 94 | 95 | ```bash 96 | docker pull timescale/timescaledb:latest-pg12 97 | ``` 98 | >:TIP: If you are using PostgreSQL 11 images, use the tag `latest-pg11`. 99 | 100 | #### Step 2: Determine mount point used by old container [](update-docker-2) 101 | As you'll want to restart the new docker image pointing to a mount point 102 | that contains the previous version's data, we first need to determine 103 | the current mount point. 104 | 105 | There are two types of mounts. To find which mount type your old container is 106 | using you can run the following command: 107 | ```bash 108 | docker inspect timescaledb --format='{{range .Mounts }}{{.Type}}{{end}}' 109 | ``` 110 | This command will return either `volume` or `bind`, corresponding 111 | to the two options below. 112 | 113 | 1. [Volumes][volumes] -- to get the current volume name use: 114 | ```bash 115 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Name}}{{end}}' 116 | 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c 117 | ``` 118 | 119 | 2. [Bind-mounts][bind-mounts] -- to get the current mount path use: 120 | ```bash 121 | $ docker inspect timescaledb --format='{{range .Mounts }}{{.Source}}{{end}}' 122 | /path/to/data 123 | ``` 124 | 125 | #### Step 3: Stop old container [](update-docker-3) 126 | If the container is currently running, stop and remove it in order to connect 127 | the new one. 
128 | 129 | ```bash 130 | docker stop timescaledb 131 | docker rm timescaledb 132 | ``` 133 | 134 | #### Step 4: Start new container [](update-docker-4) 135 | Launch a new container with the updated docker image, but pointing to 136 | the existing mount point. This will again differ by mount type. 137 | 138 | 1. For volume mounts you can use: 139 | ```bash 140 | docker run -v 069ba64815f0c26783b81a5f0ca813227fde8491f429cf77ed9a5ae3536c0b2c:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 141 | ``` 142 | 143 | 2. If using bind-mounts, you need to run: 144 | ```bash 145 | docker run -v /path/to/data:/var/lib/postgresql/data -d --name timescaledb -p 5432:5432 timescale/timescaledb 146 | ``` 147 | 148 | 149 | #### Step 5: Run ALTER EXTENSION [](update-docker-5) 150 | Finally, connect to this instance via `psql` (with the `-X` flag) and execute the `ALTER` command 151 | as above in order to update the extension to the latest version: 152 | 153 | ```bash 154 | docker exec -it timescaledb psql -U postgres -X 155 | 156 | # within the PostgreSQL instance 157 | ALTER EXTENSION timescaledb UPDATE; 158 | ``` 159 | 160 | You can then run the `\dx` command to make sure you have the 161 | latest version of TimescaleDB installed. 162 | 163 | [pg_upgrade]: https://www.postgresql.org/docs/current/static/pgupgrade.html 164 | [backup]: /using-timescaledb/backup 165 | [Install]: /getting-started/installation 166 | [telemetry]: /using-timescaledb/telemetry 167 | [volumes]: https://docs.docker.com/engine/admin/volumes/volumes/ 168 | [bind-mounts]: https://docs.docker.com/engine/admin/volumes/bind-mounts/ 169 | -------------------------------------------------------------------------------- /using-timescaledb/visualizing-data.md: -------------------------------------------------------------------------------- 1 | # Visualizing data 2 | 3 | The time-series data stored in TimescaleDB can be easily displayed on graphs. 4 | TimescaleDB is compatible with visualization tools that work with PostgreSQL. This means that 5 | regardless of whether you are creating custom visualizations embedded in your applications or 6 | using off-the-shelf visualization tools to expose data across your business organization, you 7 | can choose from a wide selection of tools. 8 | 9 | ## Grafana [](grafana) 10 | 11 | Grafana is an open-source visualization tool popular in the DevOps monitoring space, 12 | although it can also be used across the organization to visualize time-series metrics. 13 | Getting started with Grafana is simple. Download and install [Grafana][grafana-install]. 14 | Then, add a new PostgreSQL data source that points to your TimescaleDB instance. 15 | Queries run through Grafana will continue to benefit from the performance improvements 16 | built into TimescaleDB. In fact, this data source was built by TimescaleDB engineers, 17 | and it is designed to take advantage of the databases' time-series capabilities. 18 | 19 | >:WARNING: Grafana expects data received to be ordered by time. When querying 20 | Grafana using SQL, you must include the `ORDER BY time` statement so that 21 | results are guaranteed to be ordered. Grafana draws the points as they appear 22 | in the returned query. If data comes in unordered, you may observe 23 | inconsistencies in both graphs and Grafana functions. 
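For example, a panel query against a hypothetical `conditions` hypertable might look like the following; the explicit `ORDER BY` on the bucketed time column is what guarantees the ordering Grafana expects. In practice you would usually constrain the time range with Grafana's dashboard time filter rather than a hard-coded interval.

```sql
SELECT
    time_bucket('1 minute', time) AS time,
    device,
    avg(temperature) AS avg_temp
FROM conditions
WHERE time > now() - INTERVAL '6 hours'
GROUP BY 1, 2
ORDER BY 1;
```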
24 | 25 | ## Other Visualization Tools [](other-viz-tools) 26 | 27 | TimescaleDB also works with popular visualization software solutions that allow 28 | users across your organization to analyze and visualize data. Users can use these 29 | platforms to run business intelligence reports, power machine learning models, and 30 | build custom dashboards. Many of these tools also allow you to embed dashboards 31 | into applications, making it quick and easy to offer analytical features to your users. 32 | 33 | Some popular visualization tools that work with TimescaleDB include: 34 | - Tableau: get started [here][tableau-install] 35 | - PowerBI: get started [here][powerbi-install] 36 | - Looker: get started [here][looker-install] 37 | - Periscope: get started [here][periscope-install] 38 | - Mode: read more [here][mode-install] 39 | - Chartio: read more [here][chartio-install] 40 | 41 | >:TIP: If it works with PostgreSQL, it works with TimescaleDB. TimescaleDB looks 42 | just like PostgreSQL on the outside, but offers optimizations built deep into the 43 | system that speed up time-series queries. 44 | 45 | [grafana-install]: https://grafana.com/get 46 | [tableau-install]: https://onlinehelp.tableau.com/current/pro/desktop/en-us/examples_postgresql.html 47 | [powerbi-install]: https://powerbi.microsoft.com/en-us/integrations/postgresql/ 48 | [looker-install]: https://docs.looker.com/setup-and-management/database-config/postgresql 49 | [periscope-install]: https://doc.periscopedata.com/article/connecting-to-periscope-menu#whitelisting 50 | [mode-install]: https://about.modeanalytics.com/postgres/ 51 | [chartio-install]: https://chartio.com/product/data-sources/postgresql/ 52 | --------------------------------------------------------------------------------