├── .gitignore ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── Dockerfile ├── LICENSE ├── Makefile ├── README.md ├── assets └── skor-arch.png ├── gen-triggers.py ├── k8s.yaml ├── sample.triggers.json ├── src ├── log.c ├── log.h ├── req.c └── skor.c └── tests ├── requirements.txt ├── run_tests.sh ├── schema.sql ├── test.py └── triggers.json /.gitignore: -------------------------------------------------------------------------------- 1 | build/ 2 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, gender identity and expression, level of experience, 9 | education, socio-economic status, nationality, personal appearance, race, 10 | religion, or sexual identity and orientation. 
11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. 
Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at support AT hasura DOT io. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Code of Conduct 2 | This project and everyone participating in it is governed by the [Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. 
3 | 4 | # Development Environment 5 | Make sure you have the following installed: 6 | - PostgreSQL 9+ 7 | - `gcc` 8 | - libcurl (`libcurl4-openssl-dev`) 9 | - libpq (`libpq-dev`) 10 | 11 | # Build 12 | Build the project using `make`: 13 | 14 | ```bash 15 | $ make 16 | ``` 17 | 18 | # Run 19 | Run the application with the arguments specifying database and webhook parameters: 20 | 21 | ```bash 22 | $ ./build/skor 'host=localhost port=5432 dbname=postgres user=postgres password=' http://localhost:5000 23 | ``` 24 | 25 | # Tests 26 | Tests have been written using Python 3. The webhook is a `python-flask` server. 27 | 28 | To run the tests, make sure you have Postgres running at `localhost:5432` and that the database doesn't already have a table named `test_table`. 29 | 30 | You can modify the Postgres credentials in the `test.py` file. 31 | 32 | Run the tests from the root directory as: 33 | 34 | ```bash 35 | $ python test.py 36 | ``` -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM debian:jessie-20180426 as skor-builder 2 | MAINTAINER vamshi@hasura.io 3 | 4 | RUN apt-get update && apt-get install -y build-essential pkgconf libcurl4-openssl-dev libpq-dev \ 5 | && rm -rf /var/lib/apt/lists/* 6 | 7 | COPY ./src /skor/src 8 | COPY Makefile /skor/ 9 | WORKDIR /skor 10 | RUN make 11 | 12 | FROM debian:jessie-20180426 13 | 14 | RUN apt-get update && apt-get install -y libcurl3 libpq5 \ 15 | && rm -rf /var/lib/apt/lists/* 16 | 17 | ENV DBNAME "postgres" 18 | ENV PGUSER "postgres" 19 | ENV PGPASS "''" 20 | ENV PGHOST "localhost" 21 | ENV PGPORT 5432 22 | ENV WEBHOOKURL "http://localhost:5000" 23 | ENV LOG_LEVEL "2" 24 | 25 | COPY --from=skor-builder /skor/build/skor /usr/bin/skor 26 | COPY Makefile /skor/ 27 | WORKDIR /skor 28 | 29 | CMD "skor" "host=${PGHOST} port=${PGPORT} dbname=${DBNAME} user=${PGUSER} password=${PGPASS}" "${WEBHOOKURL}"
"${LOG_LEVEL}" 30 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 
39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | Copyright 2018, Hasura Inc. 179 | 180 | Licensed under the Apache License, Version 2.0 (the "License"); 181 | you may not use this file except in compliance with the License. 
182 | You may obtain a copy of the License at 183 | 184 | http://www.apache.org/licenses/LICENSE-2.0 185 | 186 | Unless required by applicable law or agreed to in writing, software 187 | distributed under the License is distributed on an "AS IS" BASIS, 188 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 189 | See the License for the specific language governing permissions and 190 | limitations under the License. -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | project := skor 2 | current_dir := $(shell pwd) 3 | registry := hasura 4 | CPPFLAGS += $(shell pkg-config --cflags libpq) 5 | version := 0.2 6 | build_dir := $(current_dir)/build 7 | 8 | skor: src/skor.c src/req.c 9 | mkdir -p build 10 | c99 $(CPPFLAGS) -O3 -Wall -Wextra -o build/skor src/skor.c src/log.c -lpq -lcurl 11 | 12 | clean: 13 | rm -rf build 14 | 15 | image: 16 | docker build -t $(registry)/$(project):$(version) . 17 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Skor 2 | 3 | ## A new and improved version of Skor is now part of [Hasura GraphQL Engine](https://github.com/hasura/graphql-engine) 4 | 5 | A few months ago, we built the open-source [GraphQL Engine](https://github.com/hasura/graphql-engine) that gives you instant GraphQL APIs over any Postgres database. We have added all of Skor's existing features and more to make it production-ready: 6 | 7 | 1) Reliable: We capture every relevant action on the database as an event, even when Hasura is down! The events are delivered to your webhook as soon as possible with an at-least-once guarantee. 8 | 9 | 2) Scalable: What's more, it even scales horizontally. If you are processing millions of events, just add more instances of GraphQL Engine.
10 | 11 | 3) Use with Serverless: If you are using Skor, then avoid the pain of managing your webhook by moving to Serverless infrastructure. Check out these blog posts to get started. 12 | 13 | **Use [Hasura GraphQL Engine](https://github.com/hasura/graphql-engine) for production use cases** 14 | 15 | --- 16 | 17 | `skor` is a utility for Postgres which calls a webhook with row changes as JSON whenever an INSERT, UPDATE or DELETE event occurs on a particular table. 18 | You can drop the Docker image next to your Postgres database instance and configure a webhook that will be called. 19 | 20 | It works using a `pg_notify` trigger function and a tiny C program `skor` that listens to the notifications and calls the configured webhook with a JSON payload. 21 | 22 | ## When to use 23 | - When you want to trigger an action in an external application when a table row is modified. 24 | - When you want a lightweight notification system for changes in the database. 25 | - When you want to send the changes to a message queue such as AMQP, Kafka, etc. 26 | 27 | ## How it works 28 | A PostgreSQL stored procedure is set up as a trigger on the required table(s). This trigger uses PostgreSQL's NOTIFY to publish change events as JSON to a notification channel. `Skor` listens on this channel and, when a message is received, makes an HTTP POST call to the webhook with the JSON payload. The webhook can then decide what action to take on it. 29 | 30 | ![Skor Architecture Diagram](assets/skor-arch.png "Skor Architecture") 31 | 32 | 33 | ## Caveats 34 | - Events are only captured when skor is running. 35 | - If a call to the webhook fails, it is **not** retried. 36 | 37 | ## Getting started 38 | 39 | ### 1) Set up the triggers: 40 | 41 | We need to set up triggers on the tables that we are interested in. Create a `triggers.json` file (see [sample.triggers.json](sample.triggers.json)) with the required tables and events.
42 | 43 | Note: This command requires `python3`. 44 | 45 | ```bash 46 | $ ./gen-triggers.py triggers.json | psql -h localhost -p 5432 -U postgres -d postgres --single-transaction -- 47 | ``` 48 | 49 | ### 2) Run Skor: 50 | 51 | Run the skor Docker image (that has the `skor` binary baked in): 52 | 53 | ```bash 54 | $ docker run \ 55 | -e DBNAME="postgres" \ 56 | -e PGUSER="postgres" \ 57 | -e PGPASS="''" \ 58 | -e PGHOST="localhost" \ 59 | -e PGPORT=5432 \ 60 | -e WEBHOOKURL="http://localhost:5000/" \ 61 | --net host \ 62 | -it hasura/skor:v0.1.1 63 | ``` 64 | 65 | Make sure you use the appropriate database parameters and webhook URL above. 66 | 67 | ## Examples 68 | 69 | ### INSERT 70 | 71 | Query: 72 | ```sql 73 | INSERT INTO test_table(name) VALUES ('abc1'); 74 | ``` 75 | 76 | JSON webhook payload: 77 | 78 | ```json 79 | {"data": {"id": 1, "name": "abc1"}, "table": "test_table", "op": "INSERT"} 80 | ``` 81 | 82 | ### UPDATE 83 | 84 | Query: 85 | ```sql 86 | UPDATE test_table SET name = 'pqr1' WHERE id = 1; 87 | ``` 88 | 89 | JSON webhook payload: 90 | 91 | ```json 92 | {"data": {"id": 1, "name": "pqr1"}, "table": "test_table", "op": "UPDATE"} 93 | ``` 94 | 95 | ### DELETE 96 | 97 | Query: 98 | ```sql 99 | DELETE FROM test_table WHERE id = 1; 100 | ``` 101 | 102 | JSON webhook payload: 103 | 104 | ```json 105 | {"data": {"id": 1, "name": "pqr1"}, "table": "test_table", "op": "DELETE"} 106 | ``` 107 | 108 | ## Uninstalling 109 | 110 | To remove the skor related functions and triggers that were added to Postgres, run this in psql: 111 | 112 | ```sql 113 | DO $$DECLARE r record; 114 | BEGIN 115 | FOR r IN SELECT routine_schema, routine_name FROM information_schema.routines 116 | WHERE routine_name LIKE 'notify_skor%' 117 | LOOP 118 | EXECUTE 'DROP FUNCTION ' || quote_ident(r.routine_schema) || '.' 
|| quote_ident(r.routine_name) || ' CASCADE'; 119 | END LOOP; 120 | END$$; 121 | ``` 122 | 123 | ## Deploying Skor on Hasura 124 | 125 | The pre-built Docker image with the `skor` binary is available at `hasura/skor` and can be deployed as a microservice with the sample `k8s.yaml` in this repo. 126 | The webhook can be another microservice that exposes an endpoint. 127 | 128 | To learn more about deploying microservices on Hasura, check out the [documentation](https://docs.hasura.io/0.15/manual/microservices/index.html). 129 | 130 | 131 | ## Build Skor: 132 | 133 | ### Requirements: 134 | 135 | - PostgreSQL 9+ 136 | - `gcc` 137 | - libcurl (`libcurl4-openssl-dev`) 138 | - libpq (`libpq-dev`) 139 | 140 | 141 | ### Build: 142 | 143 | ```bash 144 | $ make 145 | ``` 146 | ### Run: 147 | 148 | ```bash 149 | $ ./build/skor 'host=localhost port=5432 dbname=postgres user=postgres password=' http://localhost:5000 150 | ``` 151 | 152 | ## Test 153 | 154 | 1. Install the requirements specified in `tests/requirements.txt` 155 | 2. The tests assume that you have a local Postgres instance at `localhost:5432` and a database called `skor_test` which can be accessed by an `admin` user. 156 | 3. Run skor on this database with the webhook URL set to `http://localhost:5000` 157 | 4. Run the `run_tests.sh` script in the `tests` directory. 158 | 159 | ## Contributing 160 | Contributions are welcome! 161 | 162 | Please check out the [contributing guide](CONTRIBUTING.md) to learn about setting up the development environment and building the project. Also look at the [issues](https://github.com/hasura/skor/issues) page and help us improve Skor!
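The example payloads above all share the same shape (`data`, `table`, `op`), so a webhook can simply dispatch on the `op` field. A minimal sketch in Python (the `route_event` helper and its return messages are hypothetical, not part of Skor):

```python
import json

def route_event(raw: str) -> str:
    """Parse a skor webhook payload and dispatch on the operation type."""
    event = json.loads(raw)
    op = event["op"]        # "INSERT", "UPDATE" or "DELETE"
    table = event["table"]  # source table name
    data = event["data"]    # the affected row as a JSON object
    if op == "INSERT":
        return "created row {} in {}".format(data["id"], table)
    if op == "UPDATE":
        return "updated row {} in {}".format(data["id"], table)
    if op == "DELETE":
        return "removed row {} from {}".format(data["id"], table)
    raise ValueError("unexpected op: " + op)

# The INSERT payload from the Examples section:
print(route_event('{"data": {"id": 1, "name": "abc1"}, "table": "test_table", "op": "INSERT"}'))
# -> created row 1 in test_table
```

A real webhook would wrap this in an HTTP handler (e.g. a Flask route) and replace the returned strings with whatever side effect is needed.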
163 | -------------------------------------------------------------------------------- /assets/skor-arch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hasura/skor/b3f02e3ec7202ae086c08e4f88c611821531af60/assets/skor-arch.png -------------------------------------------------------------------------------- /gen-triggers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import argparse 4 | import json 5 | 6 | dropPrevTriggers = """ 7 | DO $$DECLARE r record; 8 | BEGIN 9 | FOR r IN SELECT routine_schema, routine_name FROM information_schema.routines 10 | WHERE routine_name LIKE 'notify_skor%' 11 | LOOP 12 | EXECUTE 'DROP FUNCTION ' || quote_ident(r.routine_schema) || '.' || quote_ident(r.routine_name) || '() CASCADE'; 13 | END LOOP; 14 | END$$; 15 | """ 16 | 17 | functionTemplate = """ 18 | CREATE OR REPLACE FUNCTION {schema}.notify_skor_{table}_{event}() RETURNS trigger 19 | LANGUAGE plpgsql 20 | AS $$ 21 | DECLARE 22 | cur_rec record; 23 | BEGIN 24 | PERFORM pg_notify('skor', json_build_object( 25 | 'table', TG_TABLE_NAME, 26 | 'schema', TG_TABLE_SCHEMA, 27 | 'op', TG_OP, 28 | 'data', {data_expression} 29 | )::text); 30 | RETURN cur_rec; 31 | END; 32 | $$; 33 | DROP TRIGGER IF EXISTS notify_skor_{table}_{event} ON {schema}.{table}; 34 | CREATE TRIGGER notify_skor_{table}_{event} AFTER {event} ON {schema}.{table} FOR EACH ROW EXECUTE PROCEDURE {schema}.notify_skor_{table}_{event}(); 35 | """ 36 | 37 | def genSQL(tableConf): 38 | table = tableConf["table"] 39 | schema = tableConf.get("schema", "public") 40 | columns = tableConf.get("columns", "*") 41 | triggerConf = {} 42 | if type(columns) == dict: 43 | triggerConf = columns 44 | else: 45 | triggerConf['insert'] = columns 46 | triggerConf['update'] = columns 47 | triggerConf['delete'] = columns 48 | for op, columns in triggerConf.items(): 49 | opL = op.lower() 50 | if opL == 
'delete': 51 | recVar = 'OLD' 52 | else: 53 | recVar = 'NEW' 54 | if columns == "*": 55 | dataExp = "row_to_json({})".format(recVar) 56 | else: 57 | dataExp = "row_to_json((select r from (SELECT {}) as r))".format( 58 | ",".join(["{}.{}".format(recVar, col) for col in columns]) 59 | ) 60 | sql = functionTemplate.format( 61 | schema=schema, 62 | table=table, 63 | event=opL, 64 | data_expression=dataExp 65 | ) 66 | print(sql) 67 | 68 | if __name__ == "__main__": 69 | parser = argparse.ArgumentParser() 70 | parser.add_argument( 71 | 'conf', 72 | help="The JSON configuration for generating triggers (see sample.triggers.json)", 73 | type=argparse.FileType('r') 74 | ) 75 | args = parser.parse_args() 76 | print(dropPrevTriggers) 77 | for conf in json.load(args.conf): 78 | genSQL(conf) 79 | -------------------------------------------------------------------------------- /k8s.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | items: 3 | - apiVersion: extensions/v1beta1 4 | kind: Deployment 5 | metadata: 6 | creationTimestamp: null 7 | labels: 8 | app: '{{ microservice.name }}' 9 | hasuraService: custom 10 | name: '{{ microservice.name }}' 11 | namespace: '{{ cluster.metadata.namespaces.user }}' 12 | spec: 13 | replicas: 1 14 | strategy: {} 15 | template: 16 | metadata: 17 | creationTimestamp: null 18 | labels: 19 | app: '{{ microservice.name }}' 20 | spec: 21 | containers: 22 | - image: hasura/skor:v0.1.1 23 | imagePullPolicy: IfNotPresent 24 | name: '{{ microservice.name }}' 25 | ports: 26 | - containerPort: 8080 27 | protocol: TCP 28 | env: 29 | - name: PGHOST 30 | value: "postgres.hasura" 31 | - name: PGPORT 32 | value: "5432" 33 | - name: PGUSER 34 | valueFrom: 35 | secretKeyRef: 36 | name: hasura-secrets 37 | key: postgres.user 38 | - name: PGPASS 39 | valueFrom: 40 | secretKeyRef: 41 | name: hasura-secrets 42 | key: postgres.password 43 | - name: DBNAME 44 | value: "hasuradb" 45 | - name: WEBHOOKURL 46 | 
value: "http://pgwebhook.abash85.hasura-app.io" # replace with your webhook URL 47 | resources: {} 48 | securityContext: {} 49 | terminationGracePeriodSeconds: 0 50 | status: {} 51 | - apiVersion: v1 52 | kind: Service 53 | metadata: 54 | creationTimestamp: null 55 | labels: 56 | app: '{{ microservice.name }}' 57 | hasuraService: custom 58 | name: '{{ microservice.name }}' 59 | namespace: '{{ cluster.metadata.namespaces.user }}' 60 | spec: 61 | ports: 62 | - port: 80 63 | protocol: TCP 64 | targetPort: 8080 65 | selector: 66 | app: '{{ microservice.name }}' 67 | type: ClusterIP 68 | status: 69 | loadBalancer: {} 70 | kind: List 71 | metadata: {} 72 | -------------------------------------------------------------------------------- /sample.triggers.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "table": "tracks" 4 | }, 5 | { 6 | "table": "films", 7 | "columns": "*" 8 | }, 9 | { 10 | "table": "artists", 11 | "columns": ["id"] 12 | }, 13 | { 14 | "table": "genres", 15 | "columns": { 16 | "insert": "*", 17 | "update": ["id", "name"], 18 | "delete": ["id"] 19 | } 20 | } 21 | ] 22 | -------------------------------------------------------------------------------- /src/log.c: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017 rxi 3 | * 4 | * Permission is hereby granted, free of charge, to any person obtaining a copy 5 | * of this software and associated documentation files (the "Software"), to 6 | * deal in the Software without restriction, including without limitation the 7 | * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or 8 | * sell copies of the Software, and to permit persons to whom the Software is 9 | * furnished to do so, subject to the following conditions: 10 | * 11 | * The above copyright notice and this permission notice shall be included in 12 | * all copies or substantial portions of the Software. 
13 | * 14 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 | * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 | * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 17 | * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 18 | * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 19 | * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 20 | * IN THE SOFTWARE. 21 | */ 22 | 23 | #include <stdio.h> 24 | #include <stdlib.h> 25 | #include <stdarg.h> 26 | #include <string.h> 27 | #include <time.h> 28 | 29 | #include "log.h" 30 | 31 | static struct { 32 | void *udata; 33 | log_LockFn lock; 34 | FILE *fp; 35 | int level; 36 | int quiet; 37 | } L; 38 | 39 | 40 | static const char *level_names[] = { 41 | "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL" 42 | }; 43 | 44 | #ifdef LOG_USE_COLOR 45 | static const char *level_colors[] = { 46 | "\x1b[94m", "\x1b[36m", "\x1b[32m", "\x1b[33m", "\x1b[31m", "\x1b[35m" 47 | }; 48 | #endif 49 | 50 | 51 | static void lock(void) { 52 | if (L.lock) { 53 | L.lock(L.udata, 1); 54 | } 55 | } 56 | 57 | 58 | static void unlock(void) { 59 | if (L.lock) { 60 | L.lock(L.udata, 0); 61 | } 62 | } 63 | 64 | 65 | void log_set_udata(void *udata) { 66 | L.udata = udata; 67 | } 68 | 69 | 70 | void log_set_lock(log_LockFn fn) { 71 | L.lock = fn; 72 | } 73 | 74 | 75 | void log_set_fp(FILE *fp) { 76 | L.fp = fp; 77 | } 78 | 79 | 80 | void log_set_level(int level) { 81 | L.level = level; 82 | } 83 | 84 | 85 | void log_set_quiet(int enable) { 86 | L.quiet = enable ? 1 : 0; 87 | } 88 | 89 | 90 | void log_log(int level, const char *file, int line, const char *fmt, ...)
{ 91 |   if (level < L.level) { 92 |     return; 93 |   } 94 | 95 |   /* Acquire lock */ 96 |   lock(); 97 | 98 |   /* Get current time */ 99 |   time_t t = time(NULL); 100 |   struct tm *lt = localtime(&t); 101 | 102 |   /* Log to stderr */ 103 |   if (!L.quiet) { 104 |     va_list args; 105 |     char buf[16]; 106 |     buf[strftime(buf, sizeof(buf), "%H:%M:%S", lt)] = '\0'; 107 | #ifdef LOG_USE_COLOR 108 |     fprintf( 109 |       stderr, "%s %s%-5s\x1b[0m \x1b[90m%s:%d:\x1b[0m ", 110 |       buf, level_colors[level], level_names[level], file, line); 111 | #else 112 |     fprintf(stderr, "%s %-5s %s:%d: ", buf, level_names[level], file, line); 113 | #endif 114 |     va_start(args, fmt); 115 |     vfprintf(stderr, fmt, args); 116 |     va_end(args); 117 |     fprintf(stderr, "\n"); 118 |   } 119 | 120 |   /* Log to file */ 121 |   if (L.fp) { 122 |     va_list args; 123 |     char buf[32]; 124 |     buf[strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S", lt)] = '\0'; 125 |     fprintf(L.fp, "%s %-5s %s:%d: ", buf, level_names[level], file, line); 126 |     va_start(args, fmt); 127 |     vfprintf(L.fp, fmt, args); 128 |     va_end(args); 129 |     fprintf(L.fp, "\n"); 130 |   } 131 | 132 |   /* Release lock */ 133 |   unlock(); 134 | } 135 | -------------------------------------------------------------------------------- /src/log.h: -------------------------------------------------------------------------------- 1 | /** 2 |  * Copyright (c) 2017 rxi 3 |  * 4 |  * This library is free software; you can redistribute it and/or modify it 5 |  * under the terms of the MIT license. See `log.c` for details. 6 |  */ 7 | 8 | #ifndef LOG_H 9 | #define LOG_H 10 | 11 | #include <stdio.h> 12 | #include <stdarg.h> 13 | 14 | #define LOG_VERSION "0.1.0" 15 | 16 | typedef void (*log_LockFn)(void *udata, int lock); 17 | 18 | enum { LOG_TRACE, LOG_DEBUG, LOG_INFO, LOG_WARN, LOG_ERROR, LOG_FATAL }; 19 | 20 | #define log_trace(...) log_log(LOG_TRACE, __FILE__, __LINE__, __VA_ARGS__) 21 | #define log_debug(...) log_log(LOG_DEBUG, __FILE__, __LINE__, __VA_ARGS__) 22 | #define log_info(...)
log_log(LOG_INFO, __FILE__, __LINE__, __VA_ARGS__) 23 | #define log_warn(...)  log_log(LOG_WARN, __FILE__, __LINE__, __VA_ARGS__) 24 | #define log_error(...) log_log(LOG_ERROR, __FILE__, __LINE__, __VA_ARGS__) 25 | #define log_fatal(...) log_log(LOG_FATAL, __FILE__, __LINE__, __VA_ARGS__) 26 | 27 | void log_set_udata(void *udata); 28 | void log_set_lock(log_LockFn fn); 29 | void log_set_fp(FILE *fp); 30 | void log_set_level(int level); 31 | void log_set_quiet(int enable); 32 | 33 | void log_log(int level, const char *file, int line, const char *fmt, ...); 34 | 35 | #endif 36 | -------------------------------------------------------------------------------- /src/req.c: -------------------------------------------------------------------------------- 1 | /** 2 |  * Based off https://gist.github.com/leprechau/e6b8fef41a153218e1f4 3 |  */ 4 | 5 | /* standard includes */ 6 | #include <stdio.h> 7 | #include <stdlib.h> 8 | #include <string.h> 9 | #include <stdarg.h> 10 | 11 | /* the local logging library */ 12 | #include "log.h" 13 | 14 | /* libcurl (http://curl.haxx.se/libcurl/c) */ 15 | #include <curl/curl.h> 16 | 17 | /* holder for curl fetch */ 18 | struct curl_fetch_st { 19 |     char *payload; 20 |     size_t size; 21 | }; 22 | 23 | /* callback for curl fetch */ 24 | size_t curl_callback (void *contents, size_t size, size_t nmemb, void *userp) { 25 |     size_t realsize = size * nmemb;                             /* calculate buffer size */ 26 |     struct curl_fetch_st *p = (struct curl_fetch_st *) userp;   /* cast pointer to fetch struct */ 27 | 28 |     /* expand buffer via a temporary so the old block is not leaked on failure */ 29 |     char *tmp = (char *) realloc(p->payload, p->size + realsize + 1); 30 | 31 |     /* check buffer */ 32 |     if (tmp == NULL) { 33 |       /* this isn't good */ 34 |       log_error("failed to expand buffer in curl_callback"); 35 |       /* free old buffer */ 36 |       free(p->payload); p->payload = NULL; 37 |       /* returning 0 (anything other than realsize) makes libcurl abort the transfer */ 38 |       return 0; 39 |     } 40 |     p->payload = tmp; 41 |     /* copy contents to buffer */ 42 |     memcpy(&(p->payload[p->size]), contents, realsize); 43 | 44 |     /* set new buffer size */ 45 |     p->size += realsize; 46 | 47 |     /* ensure null termination */ 48
|     p->payload[p->size] = 0; 49 | 50 |     /* return size */ 51 |     return realsize; 52 | } 53 | 54 | /* fetch and return url body via curl */ 55 | CURLcode curl_fetch_url(CURL *ch, const char *url, struct curl_fetch_st *fetch) { 56 |     CURLcode rcode;                   /* curl result code */ 57 | 58 |     /* init payload with a single zero byte; curl_callback grows it as data arrives */ 59 |     fetch->payload = (char *) calloc(1, 1); 60 | 61 |     /* check payload */ 62 |     if (fetch->payload == NULL) { 63 |         /* log error */ 64 |         log_error("failed to allocate payload in curl_fetch_url"); 65 |         /* return error */ 66 |         return CURLE_FAILED_INIT; 67 |     } 68 | 69 |     /* init size */ 70 |     fetch->size = 0; 71 | 72 |     /* set url to fetch */ 73 |     curl_easy_setopt(ch, CURLOPT_URL, url); 74 | 75 |     /* set callback function */ 76 |     curl_easy_setopt(ch, CURLOPT_WRITEFUNCTION, curl_callback); 77 | 78 |     /* pass fetch struct pointer */ 79 |     curl_easy_setopt(ch, CURLOPT_WRITEDATA, (void *) fetch); 80 | 81 |     /* set default user agent */ 82 |     curl_easy_setopt(ch, CURLOPT_USERAGENT, "libcurl-agent/1.0"); 83 | 84 |     /* set timeout (long options must be passed as long) */ 85 |     curl_easy_setopt(ch, CURLOPT_TIMEOUT, 5L); 86 | 87 |     /* enable location redirects */ 88 |     curl_easy_setopt(ch, CURLOPT_FOLLOWLOCATION, 1L); 89 | 90 |     /* set maximum allowed redirects */ 91 |     curl_easy_setopt(ch, CURLOPT_MAXREDIRS, 1L); 92 | 93 |     /* fetch the url */ 94 |     rcode = curl_easy_perform(ch); 95 | 96 |     /* return */ 97 |     return rcode; 98 | } 99 | 100 | int call_webhook(char *url, char *j_data) { 101 |     CURL *ch;                               /* curl handle */ 102 |     CURLcode rcode;                         /* curl result code */ 103 | 104 |     struct curl_fetch_st curl_fetch;        /* curl fetch struct */ 105 |     struct curl_fetch_st *cf = &curl_fetch; /* pointer to fetch struct */ 106 |     struct curl_slist *headers = NULL;      /* http headers to send with request */ 107 | 108 |     /* init curl handle */ 109 |     if ((ch = curl_easy_init()) == NULL) { 110 |         /* log error */ 111 |         log_error("failed to create curl handle in call_webhook"); 112 |         /* return error */ 113 |         return -1; 114 |     } 115 | 116 |     /* set content
type */ 117 |     headers = curl_slist_append(headers, "Content-Type: application/json"); 118 | 119 |     /* set curl options */ 120 |     curl_easy_setopt(ch, CURLOPT_CUSTOMREQUEST, "POST"); 121 |     curl_easy_setopt(ch, CURLOPT_HTTPHEADER, headers); 122 |     curl_easy_setopt(ch, CURLOPT_POSTFIELDS, j_data); 123 | 124 |     /* fetch page and capture return code */ 125 |     rcode = curl_fetch_url(ch, url, cf); 126 | 127 |     /* the response code is only meaningful when the transfer succeeded */ 128 |     long response_code = 0; 129 |     if (rcode == CURLE_OK) curl_easy_getinfo(ch, CURLINFO_RESPONSE_CODE, &response_code); 130 | 131 |     /* cleanup curl handle */ 132 |     curl_easy_cleanup(ch); 133 | 134 |     /* free headers */ 135 |     curl_slist_free_all(headers); 136 | 137 |     /* check return code */ 138 |     if (rcode != CURLE_OK) { 139 |         /* log error */ 140 |         log_error("failed to send notification to webhook at %s - curl said: %s", 141 |                   url, curl_easy_strerror(rcode)); 142 |         /* free any partial payload and return an error */ 143 |         free(cf->payload); return -2; 144 |     } 145 | 146 |     /* check payload */ 147 |     if (cf->payload != NULL) { 148 |         /* print result */ 149 |         log_debug("webhook returned: %ld '%s'", response_code, cf->payload); 150 |         /* free payload */ 151 |         free(cf->payload); 152 |     } else { 153 |         /* error */ 154 |         log_error("failed to populate payload"); 155 |         /* return */ 156 |         return -3; 157 |     } 158 | 159 |     /* exit */ 160 |     return rcode; 161 | } 162 | -------------------------------------------------------------------------------- /src/skor.c: -------------------------------------------------------------------------------- 1 | /* skor.c 2 | 3 |    Waits for changes in the database and forwards them to 4 |    a webhook. 5 | 6 |    Usage as shown in help_msg().
7 | */ 8 | #include <stdio.h> 9 | #include <stdlib.h> 10 | #include <string.h> 11 | #include <errno.h> 12 | #include <libpq-fe.h>   /* libpq */ 13 | #include <curl/curl.h>  /* libcurl to send requests */ 14 | #include "req.c" 15 | 16 | /* closes the connection and exits */ 17 | static void clean_exit(PGconn *conn) { 18 |   PQfinish(conn); 19 |   exit(EXIT_FAILURE); 20 | } 21 | 22 | static void help_msg(const char *prog_name) { 23 |   fprintf(stderr, "usage: %s <conninfo> <webhook-url> [log_level(0-5)]\n", prog_name); 24 |   exit(EXIT_FAILURE); 25 | } 26 | 27 | int main(int argc, char *argv[]) { 28 |   /* default level INFO or greater */ 29 |   int log_level = 2; 30 |   PGconn *conn; 31 |   PGresult *res; 32 |   PGnotify *notify; 33 | 34 |   /* Check the arguments */ 35 |   if (argc < 3 || argc > 4 || strcmp(argv[1], "--help") == 0) 36 |     help_msg(argv[0]); 37 | 38 |   if (argc == 4) 39 |     log_level = atoi(argv[3]); 40 | 41 |   log_set_quiet(1); 42 |   log_set_level(log_level); 43 |   log_set_fp(stdout); 44 | 45 | 46 |   /* Establish a connection to the postgres database */ 47 |   conn = PQconnectdb(argv[1]); 48 | 49 |   /* Check to see that the backend connection was successfully made */ 50 |   if (PQstatus(conn) != CONNECTION_OK) { 51 |     log_fatal("connection to database failed: %s", 52 |               PQerrorMessage(conn)); 53 |     clean_exit(conn); 54 |   } 55 | 56 |   log_info("listening for notifications from postgres"); 57 | 58 |   /* Issue LISTEN command to enable notifications from the rule's NOTIFY. */ 59 |   res = PQexec(conn, "LISTEN skor"); 60 |   if (PQresultStatus(res) != PGRES_COMMAND_OK) { 61 |     log_fatal("LISTEN command failed: %s", PQerrorMessage(conn)); 62 |     PQclear(res); 63 |     clean_exit(conn); 64 |   } 65 |   /* Avoid leaks */ 66 |   PQclear(res); 67 | 68 |   /* Listen to notifications */ 69 |   while (1) 70 |   { 71 |     /* Sleep until something happens on the connection.
*/ 72 | int sock; 73 | fd_set input_mask; 74 | 75 | sock = PQsocket(conn); 76 | 77 | if (sock < 0) 78 | break; /* shouldn't happen */ 79 | 80 | FD_ZERO(&input_mask); 81 | FD_SET(sock, &input_mask); 82 | 83 | log_debug("waiting for data on socket"); 84 | fflush(stdout); 85 | if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0) { 86 | log_fatal("select() failed: %s", strerror(errno)); 87 | clean_exit(conn); 88 | } 89 | 90 | /* Now check for input */ 91 | PQconsumeInput(conn); 92 | while ((notify = PQnotifies(conn)) != NULL) { 93 | log_info("received notification : '%s'", notify->extra); 94 | if (call_webhook(argv[2], notify->extra) != 0) 95 | log_error("failed to send notification to the webhook"); 96 | else 97 | log_info("notification sent"); 98 | PQfreemem(notify); 99 | } 100 | } 101 | 102 | log_fatal("connection lost; probably the server exited?"); 103 | 104 | /* close the connection to the database and cleanup */ 105 | PQfinish(conn); 106 | 107 | exit(EXIT_FAILURE); 108 | } 109 | -------------------------------------------------------------------------------- /tests/requirements.txt: -------------------------------------------------------------------------------- 1 | psycopg2-binary 2 | sqlalchemy 3 | -------------------------------------------------------------------------------- /tests/run_tests.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env sh 2 | 3 | set -e 4 | 5 | psql -h 127.0.0.1 -p 5432 -d skor_test -U admin --single-transaction -f schema.sql 6 | ../gen-triggers.py triggers.json | psql -h 127.0.0.1 -p 5432 -d skor_test -U admin --single-transaction -- 7 | ./test.py 8 | -------------------------------------------------------------------------------- /tests/schema.sql: -------------------------------------------------------------------------------- 1 | drop table if exists skor_test_t1; 2 | create table skor_test_t1( 3 | c1 int, 4 | c2 text 5 | ); 6 | 7 | drop table if exists skor_test_t2; 8 | 
create table skor_test_t2( 9 | c1 int, 10 | c2 text 11 | ); 12 | 13 | drop table if exists skor_test_t3; 14 | create table skor_test_t3( 15 | c1 int, 16 | c2 text 17 | ); 18 | 19 | drop table if exists skor_test_t4; 20 | create table skor_test_t4( 21 | c1 int, 22 | c2 text, 23 | c3 text 24 | ); 25 | -------------------------------------------------------------------------------- /tests/test.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import socketserver 4 | import threading 5 | import http.server 6 | import json 7 | import queue 8 | from http import HTTPStatus 9 | 10 | from sqlalchemy import create_engine 11 | from sqlalchemy.schema import MetaData 12 | 13 | respQ = queue.Queue(maxsize=1) 14 | 15 | class WebhookHandler(http.server.BaseHTTPRequestHandler): 16 | def do_GET(self): 17 | self.send_response(HTTPStatus.OK) 18 | self.end_headers() 19 | def do_POST(self): 20 | contentLen = self.headers.get('Content-Length') 21 | reqBody = self.rfile.read(int(contentLen)) 22 | reqJson = json.loads(reqBody) 23 | self.log_message(json.dumps(reqJson)) 24 | self.send_response(HTTPStatus.NO_CONTENT) 25 | self.end_headers() 26 | respQ.put(reqJson) 27 | 28 | def startWebserver(): 29 | server_address = ('', 5000) 30 | httpd = http.server.HTTPServer(server_address, WebhookHandler) 31 | webServer = threading.Thread(target=httpd.serve_forever) 32 | webServer.start() 33 | return httpd, webServer 34 | 35 | def assertEvent(q, resp, timeout=5): 36 | evResp = q.get(timeout=timeout) 37 | return resp == evResp 38 | 39 | def t1Insert(meta): 40 | t = meta.tables['skor_test_t1'] 41 | return { 42 | "name": "t1: insert", 43 | "statement": t.insert().values(c1=1, c2='hello'), 44 | "resp": { 45 | 'table': 'skor_test_t1', 46 | 'schema': 'public', 47 | 'op': 'INSERT', 48 | 'data': {'c1': 1, 'c2': 'hello'} 49 | } 50 | } 51 | 52 | def t1Update(meta): 53 | t = meta.tables['skor_test_t1'] 54 | return { 55 | "name": "t1: update", 
56 | "statement": t.update().values(c2='world').where(t.c.c1 == 1), 57 | "resp": { 58 | 'table': 'skor_test_t1', 59 | 'schema': 'public', 60 | 'op': "UPDATE", 61 | 'data': {'c1': 1, 'c2': 'world'} 62 | } 63 | } 64 | 65 | def t1Delete(meta): 66 | t = meta.tables['skor_test_t1'] 67 | return { 68 | "name": "t1: delete", 69 | "statement": t.delete().where(t.c.c1 == 1), 70 | "resp": { 71 | 'table': 'skor_test_t1', 72 | 'schema': 'public', 73 | 'op': 'DELETE', 74 | 'data': {'c1': 1, 'c2': 'world'} 75 | } 76 | } 77 | 78 | def t3Insert(meta): 79 | t = meta.tables['skor_test_t3'] 80 | return { 81 | "name": "t3: insert", 82 | "statement": t.insert().values(c1=1, c2='hello'), 83 | "resp": { 84 | 'table': 'skor_test_t3', 85 | 'schema': 'public', 86 | 'op': 'INSERT', 87 | 'data': {'c1': 1} 88 | } 89 | } 90 | 91 | def t3Update(meta): 92 | t = meta.tables['skor_test_t3'] 93 | return { 94 | "name": "t3: update", 95 | "statement": t.update().values(c2='world').where(t.c.c1 == 1), 96 | "resp": { 97 | 'table': 'skor_test_t3', 98 | 'schema': 'public', 99 | 'op': "UPDATE", 100 | 'data': {'c1': 1} 101 | } 102 | } 103 | 104 | def t3Delete(meta): 105 | t = meta.tables['skor_test_t3'] 106 | return { 107 | "name": "t3: delete", 108 | "statement": t.delete().where(t.c.c1 == 1), 109 | "resp": { 110 | 'table': 'skor_test_t3', 111 | 'schema': 'public', 112 | 'op': 'DELETE', 113 | 'data': {'c1': 1} 114 | } 115 | } 116 | 117 | def t4Insert(meta): 118 | t = meta.tables['skor_test_t4'] 119 | return { 120 | "name": "t4: insert", 121 | "statement": t.insert().values(c1=1, c2='hello', c3='world'), 122 | "resp": { 123 | 'table': 'skor_test_t4', 124 | 'schema': 'public', 125 | 'op': 'INSERT', 126 | 'data': {'c1': 1, 'c2': 'hello', 'c3': 'world'} 127 | } 128 | } 129 | 130 | def t4Update(meta): 131 | t = meta.tables['skor_test_t4'] 132 | return { 133 | "name": "t4: update", 134 | "statement": t.update().values(c2='ahoy').where(t.c.c1 == 1), 135 | "resp": { 136 | 'table': 'skor_test_t4', 137 | 'schema': 
'public', 138 | 'op': "UPDATE", 139 | 'data': {'c1': 1, 'c2': 'ahoy'} 140 | } 141 | } 142 | 143 | def t4Delete(meta): 144 | t = meta.tables['skor_test_t4'] 145 | return { 146 | "name": "t4: delete", 147 | "statement": t.delete().where(t.c.c1 == 1), 148 | "resp": { 149 | 'table': 'skor_test_t4', 150 | 'schema': 'public', 151 | 'op': 'DELETE', 152 | 'data': {'c1': 1} 153 | } 154 | } 155 | 156 | tests = [ t1Insert, t1Update, t1Delete 157 | , t3Insert, t3Update, t3Delete 158 | , t4Insert, t4Update, t4Delete 159 | ] 160 | 161 | httpd, webServer = startWebserver() 162 | 163 | engine = create_engine('postgresql://admin@localhost:5432/skor_test') 164 | meta = MetaData() 165 | meta.reflect(bind=engine) 166 | 167 | conn = engine.connect() 168 | 169 | for t in tests: 170 | testParams = t(meta) 171 | print("-" * 20) 172 | print("Running Test: {}".format(testParams['name'])) 173 | stmt = testParams['statement'] 174 | conn.execute(stmt) 175 | print(stmt) 176 | resp = testParams['resp'] 177 | success = assertEvent(respQ, resp) 178 | res = "Succeeded" if success else "Failed" 179 | print("Test result: {}".format(res)) 180 | 181 | httpd.shutdown() 182 | webServer.join() 183 | -------------------------------------------------------------------------------- /tests/triggers.json: -------------------------------------------------------------------------------- 1 | [ 2 | { 3 | "table": "skor_test_t1" 4 | }, 5 | { 6 | "table": "skor_test_t2", 7 | "columns": "*" 8 | }, 9 | { 10 | "table": "skor_test_t3", 11 | "columns": ["c1"] 12 | }, 13 | { 14 | "table": "skor_test_t4", 15 | "columns": { 16 | "insert": "*", 17 | "update": ["c1", "c2"], 18 | "delete": ["c1"] 19 | } 20 | } 21 | ] 22 | --------------------------------------------------------------------------------
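The `sample.triggers.json` and `tests/triggers.json` files above accept three shapes for `"columns"`: omitted (or the string `"*"`) to notify on all columns, a list of column names, or a per-operation mapping keyed by `insert`/`update`/`delete`. As a minimal sketch of those rules (this validator is not part of the repo; `gen-triggers.py` is the real consumer of this format, and the `ALLOWED_OPS` set here is an assumption based only on the samples shown):

```python
# Illustrative validator for the triggers.json entries shown above.
ALLOWED_OPS = {"insert", "update", "delete"}  # assumed operation keys


def _valid_columns(cols):
    """A columns spec is the string "*" or a list of column-name strings."""
    if cols == "*":
        return True
    return isinstance(cols, list) and all(isinstance(c, str) for c in cols)


def validate_trigger(entry):
    """Return True if a single trigger entry is well-formed."""
    if not isinstance(entry, dict) or not isinstance(entry.get("table"), str):
        return False
    cols = entry.get("columns", "*")  # omitted columns mean "all columns"
    if isinstance(cols, dict):
        # per-operation mapping: each value is itself a columns spec
        return set(cols) <= ALLOWED_OPS and all(
            _valid_columns(v) for v in cols.values())
    return _valid_columns(cols)
```

Each of the four entry shapes in `sample.triggers.json` passes this check, while an entry missing `"table"` or using an unknown operation key does not.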