├── .env-example ├── .gitignore ├── LICENSE.txt ├── README.md ├── alembic.ini ├── bin └── datums ├── datums ├── __init__.py ├── migrations │ ├── README │ ├── env.py │ ├── script.py.mako │ └── versions │ │ ├── 2698789ba4a4_add_inland_water_to_placemark_reports_.py │ │ ├── 457bbf802239_add_foreign_key_constraints.py │ │ ├── 728b24c64cea_add_primary_key_constraints.py │ │ ├── 785ab1c2c255_delete_foreign_key_constraints.py │ │ ├── 8f22f932fa58_add_pressure_in_and_pressure_mb_to_.py │ │ ├── c984c6d45f23_rename_snapshot_tables_to_report_tables.py │ │ ├── e912ea8b3cb1_rename_snapshot_id_columns_to_report_id.py │ │ └── f24937651f71_add_altitude_reports_table.py ├── models │ ├── __init__.py │ ├── base.py │ ├── questions.py │ ├── reports.py │ └── responses.py └── pipeline │ ├── __init__.py │ ├── codec.py │ └── mappers.py ├── examples └── add_reports_from_yesterday.sh ├── images ├── data_model.png └── header.png ├── requirements.txt ├── setup.cfg ├── setup.py └── tests ├── __init__.py ├── test_codec.py ├── test_models.py └── test_pipeline.py /.env-example: -------------------------------------------------------------------------------- 1 | export DATABASE_URI=postgresql://@localhost:5432/datums -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | .env 3 | *.pyc 4 | *.swp 5 | *.sketch 6 | 7 | build/* 8 | datums.egg-info/* 9 | dist/* 10 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 Jane Stewart Adams 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![datums_header](images/header.png) 2 | 3 | [![Code Climate](https://codeclimate.com/github/thejunglejane/datums/badges/gpa.svg)](https://codeclimate.com/github/thejunglejane/datums) 4 | 5 | Datums is a PostgreSQL pipeline for [Reporter](http://www.reporter-app.com/). Datums will insert records from the Dropbox folder that contains your exported Reporter data[1](#notes) into PostgreSQL. 
6 | 7 | > ["Self-tracking is only useful if it leads to new self-knowledge and—ultimately—new action."](https://medium.com/buster-benson/how-i-track-my-life-7da6f22b8e2c) 8 | 9 | # Getting Started 10 | 11 | If you're upgrading from an earlier version of datums, skip ahead to [Migrating to v1.0.0](#migrating-to-v100) 12 | 13 | ## Create the Database 14 | 15 | To create the datums database, first ensure that you have PostgreSQL installed and that the server is running locally. Then create the database with 16 | ``` 17 | $ createdb datums --owner=username 18 | ``` 19 | where `username` is the output of `whoami`. You're free to name the database whatever you want, just make sure that when you declare the database URI below it points to the right database. 20 | 21 | ## Installation 22 | 23 | #### pip 24 | ``` 25 | $ pip install datums 26 | ``` 27 | 28 | datums relies on one environment variable, `DATABASE_URI`. I recommend creating a virtual environment in which this variable is stored, but you can also add it to your .bash_profile (or equivalent) to make it available in all sessions and environments. 29 | 30 | Run the following command inside a virtual environment to make `DATABASE_URI` accessible whenever the environment is active, or outside of a virtual environment to make `DATABASE_URI` accessible only in your current Terminal session. To make `DATABASE_URI` available in all Terminal sessions and environments, add the following line to your .bash_profile (or equivalent) 31 | ```bash 32 | export DATABASE_URI=postgresql://@localhost:5432/datums 33 | ``` 34 | 35 | #### GitHub 36 | 37 | Alternatively, you can clone this repository and run the setup script 38 | ``` 39 | $ git clone https://github.com/thejunglejane/datums.git 40 | $ cd datums 41 | $ python setup.py install 42 | ``` 43 | You can rename the .env-example file in the repository's root to .env and fill in the `DATABASE_URI` variable information. You'll need to source the .env after filling in your information for the variable to be accessible in your session. 44 | 45 | Note that the `DATABASE_URI` variable will only be available in your current Terminal session. If you would like to be able to use datums without sourcing the .env every time, I recommend creating a virtual environment in which this variable is stored or adding the variable to your .bash_profile (or equivalent). 46 | 47 | ### Set Up the Database 48 | 49 | You should now have both the `datums` executable and Python library installed and ready to use. 50 | 51 | Before adding any reports, you'll need to set up the database schema. The database schema is defined in the `models` module. Here's a picture 52 | 53 | ![data_model](images/data_model.png) 54 | 55 | You can set up the database from the command line or from Python. From the command line, execute `datums` with the `--setup` flag. 56 | ``` 57 | $ datums --setup 58 | ``` 59 | or, from Python 60 | ```python 61 | >>> from datums.models import base 62 | >>> base.database_setup(base.engine) 63 | ``` 64 | 65 | You can also tear down the database, if you ever need to. This will remove all the tables from the database, but won't delete the database itself. To tear down the database from the command line, include the `--teardown` flag 66 | ``` 67 | $ datums --teardown 68 | ``` 69 | or, from Python 70 | ```python 71 | >>> from datums.models import base 72 | >>> base.database_teardown(base.engine) 73 | ``` 74 | 75 | #### Migrating to v1.0.0 76 | 77 | ##### `alembic` 78 | v1.0.0 introduces some changes to the database schema and the datums data model.
To upgrade your existing datums database to a v1.0.0-compatible schema, a series of alembic migrations has been provided. To access these migrations, you will need to have the datums repository cloned to your local machine. If you've installed datums via pip, feel free to delete the cloned repository after you migrate your database, but remember to `pip install --upgrade datums` before trying to add more reports. 79 | 80 | To migrate your database, clone (or pull) this repository and run the setup script, then `cd` into the repository and run the migrations with 81 | 82 | ```bash 83 | /path/to/datums/ $ alembic upgrade head 84 | /path/to/datums/ $ datums --update "/path/to/reporter/folder/*.json" 85 | /path/to/datums/ $ datums --add "/path/to/reporter/folder/*.json" 86 | ``` 87 | 88 | After migrating, it's important to `--update` all reports to add the `pressure_in` and `pressure_mb` attributes to weather reports and the `inland_water` attribute to placemark reports. You can safely ignore the `UserWarning` that no `uniqueIdentifier` can be found for altitude reports; those altitude reports will be added when you `--add` in the next step. 89 | 90 | v1.0.0 adds support for altitude reports. After updating, you'll need to `--add` all your reports to capture altitude reports from before May 2015. They must be added instead of updated because altitude reports have not always had `uniqueIdentifiers`. Adding will allow datums to create UUIDs for these earlier altitude reports. If no UUID is found for an altitude report, datums cannot update or delete it. See [issue 29](https://github.com/thejunglejane/datums/issues/29) for more information. 91 | 92 | ##### Quick and Dirty 93 | Alternatively, you could just tear down your existing datums database and set up a new one. Make sure you tear down your database before upgrading datums. 94 | ```bash 95 | $ datums --teardown 96 | $ pip install --upgrade datums 97 | $ datums --setup 98 | $ datums --add "/path/to/reporter/folder/*.json" 99 | ``` 100 | 101 | # Adding, Updating, and Deleting 102 | The `pipeline` module allows you to add, update, and delete reports and questions. 103 | 104 | ### Definitions 105 | We should define a few terms before getting into how to use the pipeline. 106 | 107 | * A **reporter file** is a JSON file that contains all the **snapshot**s and all the **question**s for a given day. These files should be located in your Dropbox/Apps/Reporter-App folder. 108 | * A **snapshot** comprises a **report** and all the **response**s collected by Reporter when you make a report. 109 | * A **report** contains the information that the Reporter app automatically collects when you make a report, things like the weather, background noise, etc. 110 | * A **response** is the answer you enter for a question. 111 | 112 | Every **snapshot** will have one **report** and some number of **response**s associated with it, and every **reporter file** will have some number of **snapshot**s and some number of **question**s associated with it, depending on how many times you make reports throughout the day. 113 | 114 | If you add or delete questions from the Reporter app, different **reporter file**s will have different **question**s from day to day. When you add a new **reporter file**, first add the **question**s from that day. If there are no new **question**s, nothing will happen; if there is a new **question**, datums will add it to the database.
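To make these terms concrete, here is a rough sketch of the shape of a reporter file once it has been loaded with `json.load`. The key names are the ones datums reads (`questions`, `snapshots`, `responses`, `questionType`, `prompt`, `uniqueIdentifier`); the values shown are invented, and a real export nests many more fields (weather, location, audio, and so on) inside each snapshot.

```python
>>> import json
>>> with open('/path/to/file', 'r') as f:
...     day = json.load(f)
>>> day['questions'][0]              # a question: its type and prompt (illustrative values)
{u'questionType': 0, u'prompt': u'How are you feeling?'}
>>> snapshot = day['snapshots'][0]   # a snapshot: one report plus its responses
>>> snapshot['uniqueIdentifier']     # becomes the report's id in the database (made-up UUID)
u'6E26253A-41B6-4960-A0D3-3B9DBD71CBC6'
>>> len(snapshot['responses'])       # one response per question answered
3
```

The examples in the next sections do little more than iterate over `day['questions']` and `day['snapshots']` and hand each item to the appropriate pipeline class.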
115 | 116 | ## Adding questions, reports, and responses 117 | 118 | When you first set datums up, you'll probably want to add all the questions, reports, and responses in your Dropbox Reporter folder. 119 | 120 | #### Command Line 121 | To add all the Reporter files in your Dropbox Reporter folder from the command line, execute `datums` with the `--add` flag followed by the path to your Dropbox Reporter folder 122 | 123 | ``` 124 | $ datums --add "/path/to/reporter/folder/*.json" 125 | ``` 126 | Make sure you include the '*.json' at the end to exclude the extra files in that folder. 127 | 128 | To add the questions, reports, and responses from a single Reporter file, include the filepath after the `--add` flag instead of the directory's path 129 | ``` 130 | $ datums --add "/path/to/file" 131 | ``` 132 | 133 | #### Python 134 | You can add all the Reporter files or a single Reporter file from Python as well. 135 | 136 | ```python 137 | >>> from datums import pipeline 138 | >>> import glob 139 | >>> import json 140 | >>> import os 141 | >>> all_reporter_files = glob.glob(os.path.join('/path/to/reporter/folder/', '*.json')) 142 | >>> for file in all_reporter_files: 143 | ... with open(os.path.expanduser(file), 'r') as f: 144 | ... day = json.load(f) 145 | ... # Add questions first because responses need them 146 | ... for question in day['questions']: 147 | ... pipeline.QuestionPipeline(question).add() 148 | ... for snapshot in day['snapshots']: 149 | ... # Add report and responses 150 | ... pipeline.SnapshotPipeline(snapshot).add() 151 | ``` 152 | ```python 153 | >>> from datums import pipeline 154 | >>> import json 155 | >>> with open('/path/to/file', 'r') as f: 156 | ... day = json.load(f) 157 | >>> # Add questions first because responses need them 158 | >>> for question in day['questions']: 159 | ... pipeline.QuestionPipeline(question).add() 160 | >>> for snapshot in day['snapshots']: 161 | ... # Add report and responses 162 | ... pipeline.SnapshotPipeline(snapshot).add() 163 | ``` 164 | 165 | You can also add a single snapshot from a Reporter file, if you need/want to 166 | ```python 167 | >>> from datums import pipeline 168 | >>> import json 169 | >>> with open('/path/to/file', 'r') as f: 170 | ... day = json.load(f) 171 | >>> snapshot = day['snapshots'][n] # where n is the index of the snapshot 172 | >>> pipeline.SnapshotPipeline(snapshot).add() 173 | ``` 174 | 175 | ## Updating reports and responses 176 | 177 | If you make a change to one of your Reporter files, or if Reporter makes a change to one of those files, you can also update your reports and responses. If a new snapshot has been added to the file located at '/path/to/file', the update will create it in the database. 178 | 179 | #### Command Line 180 | 181 | To update all snapshots in all the files in your Dropbox Reporter folder 182 | 183 | ``` 184 | $ datums --update "/path/to/reporter/folder/*.json" 185 | ``` 186 | and to update all the snapshots in a single Reporter file 187 | ``` 188 | $ datums --update "/path/to/file" 189 | ``` 190 | 191 | #### Python 192 | From Python 193 | ```python 194 | >>> from datums import pipeline 195 | >>> import glob 196 | >>> import json 197 | >>> import os 198 | >>> all_reporter_files = glob.glob(os.path.join('/path/to/reporter/folder/', '*.json')) 199 | >>> for file in all_reporter_files: 200 | ... with open(os.path.expanduser(file), 'r') as f: 201 | ... day = json.load(f) 202 | ... for snapshot in day['snapshots']: 203 | ...
pipeline.SnapshotPipeline(snapshot).update() 204 | ``` 205 | ```python 206 | >>> from datums import pipeline 207 | >>> import json 208 | >>> with open('/path/to/file', 'r') as f: 209 | ... day = json.load(f) 210 | >>> for snapshot in day['snapshots']: 211 | ... pipeline.SnapshotPipeline(snapshot).update() 212 | ``` 213 | 214 | To update an individual snapshot within a Reporter file 215 | ```python 216 | >>> from datums import pipeline 217 | >>> import json 218 | >>> with open('/path/to/file', 'r') as f: 219 | ... day = json.load(f) 220 | >>> snapshot = day['snapshots'][n] # where n is the index of the snapshot 221 | >>> pipeline.SnapshotPipeline(snapshot).update() 222 | ``` 223 | #### Changing a Snapshot 224 | > While it is possible to change your response to a question from Python, it's not recommended. Datums won't overwrite the contents of your files, and you will lose the changes that you make the next time you update the snapshots in that file. If you make changes to a file itself, you may run into conflicts if Reporter tries to update that file. 225 | 226 | > If you do need to change your response to a question, I recommend that you do so from the Reporter app. The list icon in the top left corner will display all of your snapshots, and you can select a snapshot and make changes. If you have 'Save to Dropbox' enabled, the Dropbox file containing that snapshot will be updated when you save your changes; if you don't have 'Save to Dropbox' enabled, the file containing the snapshot will be updated the next time you export. Once the file is updated, you can follow the steps above to update the snapshots in that file in the database. 227 | 228 | ## Deleting reports and responses 229 | 230 | Deleting reports and responses from the database is much the same. Note that deleting a report will delete any responses included in the snapshot containing that report. 231 | 232 | #### Command Line 233 | You can delete all snapshots in your Dropbox Reporter folder with 234 | ``` 235 | $ datums --delete "/path/to/reporter/folder/*.json" 236 | ``` 237 | and the snapshots in a single file with 238 | ``` 239 | $ datums --delete "/path/to/file" 240 | ``` 241 | 242 | #### Python 243 | ```python 244 | >>> from datums import pipeline 245 | >>> import glob 246 | >>> import json 247 | >>> import os 248 | >>> all_reporter_files = glob.glob(os.path.join('/path/to/reporter/folder/', '*.json')) 249 | >>> for file in all_reporter_files: 250 | ... with open(os.path.expanduser(file), 'r') as f: 251 | ... day = json.load(f) 252 | ... for snapshot in day['snapshots']: 253 | ... pipeline.SnapshotPipeline(snapshot).delete() 254 | ``` 255 | ```python 256 | >>> from datums import pipeline 257 | >>> import json 258 | >>> with open('/path/to/file', 'r') as f: 259 | ... day = json.load(f) 260 | >>> for snapshot in day['snapshots']: 261 | ... pipeline.SnapshotPipeline(snapshot).delete() 262 | ``` 263 | 264 | To delete a single snapshot within a Reporter file 265 | ```python 266 | >>> from datums import pipeline 267 | >>> import json 268 | >>> with open('/path/to/file', 'r') as f: 269 | ... day = json.load(f) 270 | >>> snapshot = day['snapshots'][n] # where n is the index of the snapshot 271 | >>> pipeline.SnapshotPipeline(snapshot).delete() 272 | ``` 273 | 274 | ## Deleting questions 275 | 276 | You can also delete questions from the database. Note that this will delete any responses associated with the deleted question as well.
277 | 278 | ```python 279 | >>> from datums import pipeline 280 | >>> import json 281 | >>> with open('/path/to/file', 'r') as f: 282 | ... day = json.load(f) 283 | >>> question = day['questions'][n] # where n is the index of the question 284 | >>> pipeline.QuestionPipeline(question).delete() 285 | ``` 286 | 287 | # Notes 288 | 289 | 1. This version of datums only supports JSON exports. 290 | 2. Photo sets are not supported. 291 | 292 | # Licensing 293 | 294 | Datums is licensed under the MIT License, so please share, enjoy, and improve. 295 | 296 | Copyright (c) 2015 Jane Stewart Adams 297 | 298 | Permission is hereby granted, free of charge, to any person obtaining a copy 299 | of this software and associated documentation files (the "Software"), to deal 300 | in the Software without restriction, including without limitation the rights 301 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 302 | copies of the Software, and to permit persons to whom the Software is 303 | furnished to do so, subject to the following conditions: 304 | 305 | The above copyright notice and this permission notice shall be included in 306 | all copies or substantial portions of the Software. 307 | 308 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 309 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 310 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 311 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 312 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 313 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 314 | THE SOFTWARE. 315 | -------------------------------------------------------------------------------- /alembic.ini: -------------------------------------------------------------------------------- 1 | # A generic, single database configuration. 2 | 3 | [alembic] 4 | # path to migration scripts 5 | script_location = datums/migrations 6 | 7 | # template used to generate migration files 8 | # file_template = %%(rev)s_%%(slug)s 9 | 10 | # max length of characters to apply to the 11 | # "slug" field 12 | #truncate_slug_length = 40 13 | 14 | # set to 'true' to run the environment during 15 | # the 'revision' command, regardless of autogenerate 16 | # revision_environment = false 17 | 18 | # set to 'true' to allow .pyc and .pyo files without 19 | # a source .py file to be detected as revisions in the 20 | # versions/ directory 21 | # sourceless = false 22 | 23 | # version location specification; this defaults 24 | # to alembic/versions.
When using multiple version 25 | # directories, initial revisions must be specified with --version-path 26 | # version_locations = %(here)s/bar %(here)s/bat alembic/versions 27 | 28 | # the output encoding used when revision files 29 | # are written from script.py.mako 30 | # output_encoding = utf-8 31 | 32 | sqlalchemy.url = driver://user:pass@localhost/dbname 33 | 34 | 35 | # Logging configuration 36 | [loggers] 37 | keys = root,sqlalchemy,alembic 38 | 39 | [handlers] 40 | keys = console 41 | 42 | [formatters] 43 | keys = generic 44 | 45 | [logger_root] 46 | level = WARN 47 | handlers = console 48 | qualname = 49 | 50 | [logger_sqlalchemy] 51 | level = WARN 52 | handlers = 53 | qualname = sqlalchemy.engine 54 | 55 | [logger_alembic] 56 | level = INFO 57 | handlers = 58 | qualname = alembic 59 | 60 | [handler_console] 61 | class = StreamHandler 62 | args = (sys.stderr,) 63 | level = NOTSET 64 | formatter = generic 65 | 66 | [formatter_generic] 67 | format = %(levelname)-5.5s [%(name)s] %(message)s 68 | datefmt = %H:%M:%S 69 | -------------------------------------------------------------------------------- /bin/datums: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | from __future__ import with_statement 3 | 4 | import argparse 5 | import glob 6 | import json 7 | import sys 8 | import os 9 | 10 | from datums import __version__ 11 | from datums import pipeline 12 | from datums import models 13 | from datums.models import base 14 | 15 | '''bin/datums provides entry point main().''' 16 | 17 | 18 | def create_parser(): 19 | # Breaking out argument parsing for easier testing 20 | parser = argparse.ArgumentParser( 21 | prog='datums', description='PostgreSQL pipeline for Reporter.', 22 | usage='%(prog)s [options]') 23 | parser.add_argument('-V', '--version', action='store_true') 24 | parser.add_argument( 25 | '--setup', action='store_true', help='Setup the datums database') 26 | parser.add_argument( 27 | '--teardown', action='store_true', 28 | help='Tear down the datums database') 29 | parser.add_argument( 30 | '-A', '--add', help='Add the reports in file(s) specified') 31 | parser.add_argument( 32 | '-U', '--update', help='Update the reports in the file(s) specified') 33 | parser.add_argument( 34 | '-D', '--delete', help='Delete the reports in the file(s) specified') 35 | return parser 36 | 37 | 38 | def main(): 39 | '''Runs program and handles command line options.''' 40 | parser = create_parser() 41 | args = parser.parse_args() 42 | 43 | if args.version: 44 | print __version__ 45 | if args.setup: 46 | base.database_setup(base.engine) 47 | if args.teardown: 48 | base.database_teardown(base.engine) 49 | if args.add: 50 | files = glob.glob(os.path.expanduser(args.add)) 51 | for file in files: 52 | with open(file, 'r') as f: 53 | day = json.load(f) 54 | # Add questions first because responses need them 55 | for question in day['questions']: 56 | pipeline.QuestionPipeline(question).add() 57 | for snapshot in day['snapshots']: 58 | pipeline.SnapshotPipeline(snapshot).add() 59 | if args.update: 60 | files = glob.glob(os.path.expanduser(args.update)) 61 | for file in files: 62 | with open(file, 'r') as f: 63 | day = json.load(f) 64 | for snapshot in day['snapshots']: 65 | pipeline.SnapshotPipeline(snapshot).update() 66 | if args.delete: 67 | files = glob.glob(os.path.expanduser(args.delete)) 68 | for file in files: 69 | with open(file, 'r') as f: 70 | day = json.load(f) 71 | for snapshot in day['snapshots']: 72 | 
pipeline.SnapshotPipeline(snapshot).delete() 73 | 74 | 75 | if __name__ == '__main__': 76 | main() 77 | -------------------------------------------------------------------------------- /datums/__init__.py: -------------------------------------------------------------------------------- 1 | 2 | __version__ = '1.0.0' 3 | -------------------------------------------------------------------------------- /datums/migrations/README: -------------------------------------------------------------------------------- 1 | Generic single-database configuration. -------------------------------------------------------------------------------- /datums/migrations/env.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | from alembic import context 3 | from sqlalchemy import engine_from_config, pool 4 | from logging.config import fileConfig 5 | import os 6 | 7 | 8 | # this is the Alembic Config object, which provides 9 | # access to the values within the .ini file in use. 10 | config = context.config 11 | 12 | # Interpret the config file for Python logging. 13 | # This line sets up loggers basically. 14 | fileConfig(config.config_file_name) 15 | 16 | # add your model's MetaData object here 17 | # for 'autogenerate' support 18 | # from myapp import mymodel 19 | # target_metadata = mymodel.Base.metadata 20 | target_metadata = None 21 | 22 | # other values from the config, defined by the needs of env.py, 23 | # can be acquired: 24 | # my_important_option = config.get_main_option("my_important_option") 25 | # ... etc. 26 | 27 | 28 | def run_migrations_offline(): 29 | """Run migrations in 'offline' mode. 30 | 31 | This configures the context with just a URL 32 | and not an Engine, though an Engine is acceptable 33 | here as well. By skipping the Engine creation 34 | we don't even need a DBAPI to be available. 35 | 36 | Calls to context.execute() here emit the given string to the 37 | script output. 38 | 39 | """ 40 | url = os.environ['DATABASE_URI'] 41 | context.configure( 42 | url=url, target_metadata=target_metadata, literal_binds=True) 43 | 44 | with context.begin_transaction(): 45 | context.run_migrations() 46 | 47 | 48 | def run_migrations_online(): 49 | """Run migrations in 'online' mode. 50 | 51 | In this scenario we need to create an Engine 52 | and associate a connection with the context. 53 | 54 | """ 55 | alembic_config = config.get_section(config.config_ini_section) 56 | alembic_config['sqlalchemy.url'] = os.environ['DATABASE_URI'] 57 | 58 | connectable = engine_from_config( 59 | alembic_config, 60 | prefix='sqlalchemy.', 61 | poolclass=pool.NullPool) 62 | 63 | with connectable.connect() as connection: 64 | context.configure( 65 | connection=connection, 66 | target_metadata=target_metadata 67 | ) 68 | 69 | with context.begin_transaction(): 70 | context.run_migrations() 71 | 72 | if context.is_offline_mode(): 73 | run_migrations_offline() 74 | else: 75 | run_migrations_online() 76 | -------------------------------------------------------------------------------- /datums/migrations/script.py.mako: -------------------------------------------------------------------------------- 1 | """${message} 2 | 3 | Revision ID: ${up_revision} 4 | Revises: ${down_revision | comma,n} 5 | Create Date: ${create_date} 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 
10 | revision = ${repr(up_revision)} 11 | down_revision = ${repr(down_revision)} 12 | branch_labels = ${repr(branch_labels)} 13 | depends_on = ${repr(depends_on)} 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | ${imports if imports else ""} 18 | 19 | def upgrade(): 20 | ${upgrades if upgrades else "pass"} 21 | 22 | 23 | def downgrade(): 24 | ${downgrades if downgrades else "pass"} 25 | -------------------------------------------------------------------------------- /datums/migrations/versions/2698789ba4a4_add_inland_water_to_placemark_reports_.py: -------------------------------------------------------------------------------- 1 | """Add inland_water to placemark_reports table 2 | 3 | Revision ID: 2698789ba4a4 4 | Revises: 8f22f932fa58 5 | Create Date: 2016-01-25 23:11:18.515616 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 10 | revision = '2698789ba4a4' 11 | down_revision = '8f22f932fa58' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | from sqlalchemy import Column, String 17 | 18 | 19 | def upgrade(): 20 | op.add_column('placemark_reports', Column('inland_water', String)) 21 | 22 | 23 | def downgrade(): 24 | op.drop_column('placemark_reports', 'inland_water') 25 | -------------------------------------------------------------------------------- /datums/migrations/versions/457bbf802239_add_foreign_key_constraints.py: -------------------------------------------------------------------------------- 1 | """Add foreign key constraints 2 | 3 | Revision ID: 457bbf802239 4 | Revises: e912ea8b3cb1 5 | Create Date: 2016-01-25 23:04:37.492418 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 10 | revision = '457bbf802239' 11 | down_revision = 'e912ea8b3cb1' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | 18 | 19 | def upgrade(): 20 | op.create_foreign_key( 21 | constraint_name='audio_reports_report_id_fkey', 22 | source_table='audio_reports', referent_table='reports', 23 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 24 | 25 | op.create_foreign_key( 26 | constraint_name='location_reports_report_id_fkey', 27 | source_table='location_reports', referent_table='reports', 28 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 29 | 30 | op.create_foreign_key( 31 | constraint_name='placemark_reports_location_report_id_fkey', 32 | source_table='placemark_reports', referent_table='location_reports', 33 | local_cols=['location_report_id'], remote_cols=['id'], 34 | ondelete='CASCADE') 35 | 36 | op.create_foreign_key( 37 | constraint_name='weather_reports_report_id_fkey', 38 | source_table='weather_reports', referent_table='reports', 39 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 40 | 41 | op.create_foreign_key( 42 | constraint_name='responses_report_id_fkey', 43 | source_table='responses', referent_table='reports', 44 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 45 | 46 | 47 | def downgrade(): 48 | with op.batch_alter_table('audio_reports') as batch_op: 49 | batch_op.drop_constraint('audio_reports_report_id_fkey') 50 | 51 | with op.batch_alter_table('location_reports') as batch_op: 52 | batch_op.drop_constraint('location_reports_report_id_fkey') 53 | 54 | with op.batch_alter_table('placemark_reports') as batch_op: 55 | batch_op.drop_constraint('placemark_reports_location_report_id_fkey') 56 | 57 | with op.batch_alter_table('weather_reports') as batch_op: 58 | 
batch_op.drop_constraint('weather_reports_report_id_fkey') 59 | 60 | with op.batch_alter_table('responses') as batch_op: 61 | batch_op.drop_constraint('responses_report_id_fkey') 62 | -------------------------------------------------------------------------------- /datums/migrations/versions/728b24c64cea_add_primary_key_constraints.py: -------------------------------------------------------------------------------- 1 | """Add primary key constraints 2 | 3 | Revision ID: 728b24c64cea 4 | Revises: 785ab1c2c255 5 | Create Date: 2016-01-25 22:54:18.023243 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 10 | revision = '728b24c64cea' 11 | down_revision = '785ab1c2c255' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | 18 | 19 | def upgrade(): 20 | with op.batch_alter_table('audio_reports') as batch_op: 21 | batch_op.drop_constraint('audio_snapshots_pkey') 22 | batch_op.create_primary_key('audio_reports_pkey', columns=['id']) 23 | 24 | with op.batch_alter_table('location_reports') as batch_op: 25 | batch_op.drop_constraint('location_snapshots_pkey') 26 | batch_op.create_primary_key('location_reports_pkey', columns=['id']) 27 | 28 | with op.batch_alter_table('placemark_reports') as batch_op: 29 | batch_op.drop_constraint('placemark_snapshots_pkey') 30 | batch_op.create_primary_key('placemark_reports_pkey', columns=['id']) 31 | 32 | with op.batch_alter_table('weather_reports') as batch_op: 33 | batch_op.drop_constraint('weather_snapshots_pkey') 34 | batch_op.create_primary_key('weather_reports_pkey', columns=['id']) 35 | 36 | 37 | def downgrade(): 38 | with op.batch_alter_table('audio_reports') as batch_op: 39 | batch_op.drop_constraint('audio_reports_pkey') 40 | batch_op.create_primary_key('audio_snapshots_pkey', columns=['id']) 41 | 42 | with op.batch_alter_table('location_reports') as batch_op: 43 | batch_op.drop_constraint('location_reports_pkey') 44 | batch_op.create_primary_key('location_snapshots_pkey', columns=['id']) 45 | 46 | with op.batch_alter_table('placemark_reports') as batch_op: 47 | batch_op.drop_constraint('placemark_reports_pkey') 48 | batch_op.create_primary_key('placemark_snapshots_pkey', columns=['id']) 49 | 50 | with op.batch_alter_table('weather_reports') as batch_op: 51 | batch_op.drop_constraint('weather_reports_pkey') 52 | batch_op.create_primary_key('weather_snapshots_pkey', columns=['id']) 53 | 54 | -------------------------------------------------------------------------------- /datums/migrations/versions/785ab1c2c255_delete_foreign_key_constraints.py: -------------------------------------------------------------------------------- 1 | """Delete foreign key constraints 2 | 3 | Revision ID: 785ab1c2c255 4 | Revises: c984c6d45f23 5 | Create Date: 2016-01-25 22:52:32.756368 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 
10 | revision = '785ab1c2c255' 11 | down_revision = 'c984c6d45f23' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | 18 | 19 | def upgrade(): 20 | with op.batch_alter_table('audio_reports') as batch_op: 21 | batch_op.drop_constraint( 22 | 'audio_snapshots_snapshot_id_fkey') 23 | 24 | with op.batch_alter_table('location_reports') as batch_op: 25 | batch_op.drop_constraint( 26 | 'location_snapshots_snapshot_id_fkey') 27 | 28 | with op.batch_alter_table('placemark_reports') as batch_op: 29 | batch_op.drop_constraint( 30 | 'placemark_snapshots_location_snapshot_id_fkey') 31 | 32 | with op.batch_alter_table('weather_reports') as batch_op: 33 | batch_op.drop_constraint( 34 | 'weather_snapshots_snapshot_id_fkey') 35 | 36 | with op.batch_alter_table('responses') as batch_op: 37 | batch_op.drop_constraint('responses_snapshot_id_fkey') 38 | 39 | 40 | def downgrade(): 41 | op.create_foreign_key( 42 | constraint_name='audio_snapshots_snapshot_id_fkey', 43 | source_table='audio_reports', referent_table='reports', 44 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 45 | 46 | op.create_foreign_key( 47 | constraint_name='location_snapshots_snapshot_id_fkey', 48 | source_table='location_reports', referent_table='reports', 49 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 50 | 51 | op.create_foreign_key( 52 | constraint_name='placemark_reports_snapshots_snapshot_id_fkey', 53 | source_table='placemark_reports', referent_table='location_reports', 54 | local_cols=['location_snapshot_id'], remote_cols=['id'], 55 | ondelete='CASCADE') 56 | 57 | op.create_foreign_key( 58 | constraint_name='weather_snapshots_snapshot_id_fkey', 59 | source_table='weather_reports', referent_table='reports', 60 | local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') 61 | 62 | op.create_foreign_key( 63 | constraint_name='responses_snapshot_id_fkey', source_table='responses', 64 | referent_table='reports', local_cols=['report_id'], 65 | remote_cols=['id'], ondelete='CASCADE') 66 | -------------------------------------------------------------------------------- /datums/migrations/versions/8f22f932fa58_add_pressure_in_and_pressure_mb_to_.py: -------------------------------------------------------------------------------- 1 | """Add pressure_in and pressure_mb to weather_reports table 2 | 3 | Revision ID: 8f22f932fa58 4 | Revises: f24937651f71 5 | Create Date: 2016-01-25 23:09:20.325410 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 10 | revision = '8f22f932fa58' 11 | down_revision = 'f24937651f71' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | from sqlalchemy import Column, Numeric 17 | 18 | 19 | def upgrade(): 20 | op.add_column('weather_reports', Column('pressure_in', Numeric)) 21 | op.add_column('weather_reports', Column('pressure_mb', Numeric)) 22 | 23 | 24 | def downgrade(): 25 | op.drop_column('weather_reports', 'pressure_in') 26 | op.drop_column('weather_reports', 'pressure_mb') 27 | -------------------------------------------------------------------------------- /datums/migrations/versions/c984c6d45f23_rename_snapshot_tables_to_report_tables.py: -------------------------------------------------------------------------------- 1 | """Rename snapshot tables to report tables. 2 | 3 | Revision ID: c984c6d45f23 4 | Revises: 5 | Create Date: 2016-01-25 22:42:52.440714 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 
10 | revision = 'c984c6d45f23' 11 | down_revision = None 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | 18 | 19 | def upgrade(): 20 | op.rename_table('snapshots', 'reports') 21 | op.rename_table('audio_snapshots', 'audio_reports') 22 | op.rename_table('location_snapshots', 'location_reports') 23 | op.rename_table('placemark_snapshots', 'placemark_reports') 24 | op.rename_table('weather_snapshots', 'weather_reports') 25 | 26 | 27 | def downgrade(): 28 | op.rename_table('reports', 'snapshots') 29 | op.rename_table('audio_reports', 'audio_snapshots') 30 | op.rename_table('location_reports', 'location_snapshots') 31 | op.rename_table('placemark_reports', 'placemark_snapshots') 32 | op.rename_table('weather_reports', 'weather_snapshots') 33 | -------------------------------------------------------------------------------- /datums/migrations/versions/e912ea8b3cb1_rename_snapshot_id_columns_to_report_id.py: -------------------------------------------------------------------------------- 1 | """Rename snapshot_id columns to report_id 2 | 3 | Revision ID: e912ea8b3cb1 4 | Revises: 728b24c64cea 5 | Create Date: 2016-01-25 23:04:18.091843 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 10 | revision = 'e912ea8b3cb1' 11 | down_revision = '728b24c64cea' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | import sqlalchemy as sa 17 | 18 | 19 | def upgrade(): 20 | op.alter_column( 21 | 'audio_reports', 'snapshot_id', new_column_name='report_id') 22 | 23 | op.alter_column( 24 | 'location_reports', 'snapshot_id', new_column_name='report_id') 25 | 26 | op.alter_column( 27 | 'placemark_reports', 'location_snapshot_id', 28 | new_column_name='location_report_id') 29 | 30 | op.alter_column( 31 | 'weather_reports', 'snapshot_id', new_column_name='report_id') 32 | 33 | op.alter_column('responses', 'snapshot_id', new_column_name='report_id') 34 | 35 | 36 | def downgrade(): 37 | op.alter_column( 38 | 'audio_reports', 'report_id', new_column_name='snapshot_id') 39 | 40 | op.alter_column( 41 | 'location_reports', 'report_id', new_column_name='snapshot_id') 42 | 43 | op.alter_column( 44 | 'placemark_reports', 'report_id', new_column_name='snapshot_id') 45 | 46 | op.alter_column( 47 | 'weather_reports', 'report_id', new_column_name='snapshot_id') 48 | 49 | op.alter_column('responses', 'report_id', new_column_name='snapshot_id') 50 | -------------------------------------------------------------------------------- /datums/migrations/versions/f24937651f71_add_altitude_reports_table.py: -------------------------------------------------------------------------------- 1 | """Add altitude_reports table 2 | 3 | Revision ID: f24937651f71 4 | Revises: 457bbf802239 5 | Create Date: 2016-01-25 23:07:52.202616 6 | 7 | """ 8 | 9 | # revision identifiers, used by Alembic. 
10 | revision = 'f24937651f71' 11 | down_revision = '457bbf802239' 12 | branch_labels = None 13 | depends_on = None 14 | 15 | from alembic import op 16 | from sqlalchemy import Column, ForeignKey, Numeric 17 | from sqlalchemy_utils import UUIDType 18 | 19 | 20 | def upgrade(): 21 | op.create_table( 22 | 'altitude_reports', 23 | Column('id', UUIDType, primary_key=True), 24 | Column('floors_ascended', Numeric), 25 | Column('floors_descended', Numeric), 26 | Column('gps_altitude_from_location', Numeric), 27 | Column('gps_altitude_raw', Numeric), 28 | Column('pressure', Numeric), 29 | Column('pressure_adjusted', Numeric), 30 | Column('report_id', UUIDType, ForeignKey( 31 | 'reports.id', ondelete='CASCADE'), nullable=False)) 32 | 33 | 34 | def downgrade(): 35 | op.drop_table('altitude_reports') 36 | -------------------------------------------------------------------------------- /datums/models/__init__.py: -------------------------------------------------------------------------------- 1 | from questions import * 2 | from responses import * 3 | from reports import * 4 | 5 | '''SQLAlchemy models for this application.''' 6 | 7 | import base 8 | from base import session, engine 9 | -------------------------------------------------------------------------------- /datums/models/base.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import os 4 | from sqlalchemy import create_engine 5 | from sqlalchemy.ext.declarative import declarative_base 6 | from sqlalchemy.orm import sessionmaker, scoped_session 7 | 8 | 9 | # Initialize Base class 10 | Base = declarative_base() 11 | metadata = Base.metadata 12 | 13 | session_maker = sessionmaker() 14 | session = scoped_session(session_maker) 15 | 16 | engine = create_engine(os.environ['DATABASE_URI']) 17 | session.configure(bind=engine) 18 | 19 | 20 | def database_setup(engine): 21 | '''Set up the database. 22 | ''' 23 | metadata.create_all(engine) 24 | 25 | 26 | def database_teardown(engine): 27 | '''BURN IT ALL DOWN (╯°□°)╯︵ ┻━┻ 28 | ''' 29 | metadata.drop_all(engine) 30 | 31 | 32 | def _action_and_commit(obj, action): 33 | '''Adds/deletes the instance obj to/from the session based on the action. 34 | ''' 35 | action(obj) 36 | session.commit() 37 | 38 | 39 | class GhostBase(Base): 40 | 41 | '''The GhostBase class extends the declarative Base class.''' 42 | 43 | __abstract__ = True 44 | 45 | def __str__(self, attrs): 46 | return '''<{0}({1})>'''.format(self.__class__.__name__, ', '.join([ 47 | '='.join([attr, str(getattr(self, attr, ''))]) for attr in attrs])) 48 | 49 | @classmethod 50 | def _get_instance(cls, **kwargs): 51 | '''Returns the first instance of cls with attributes matching **kwargs. 52 | ''' 53 | return session.query(cls).filter_by(**kwargs).first() 54 | 55 | @classmethod 56 | def get_or_create(cls, **kwargs): 57 | ''' 58 | If a record matching the instance already exists in the database, then 59 | return it, otherwise create a new record. 60 | ''' 61 | q = cls._get_instance(**kwargs) 62 | if q: 63 | return q 64 | q = cls(**kwargs) 65 | _action_and_commit(q, session.add) 66 | return q 67 | 68 | # TODO (jsa): _traverse_report only needs to return the ID for an update 69 | @classmethod 70 | def update(cls, **kwargs): 71 | ''' 72 | If a record matching the instance id already exists in the database, 73 | update it. If a record matching the instance id does not already exist, 74 | create a new record. 
75 | ''' 76 | q = cls._get_instance(**{'id': kwargs['id']}) 77 | if q: 78 | for k, v in kwargs.items(): 79 | setattr(q, k, v) 80 | _action_and_commit(q, session.add) 81 | else: 82 | cls.get_or_create(**kwargs) 83 | 84 | @classmethod 85 | def delete(cls, **kwargs): 86 | ''' 87 | If a record matching the instance id exists in the database, delete it. 88 | ''' 89 | q = cls._get_instance(**kwargs) 90 | if q: 91 | _action_and_commit(q, session.delete) 92 | 93 | 94 | class ResponseClassLegacyAccessor(object): 95 | 96 | def __init__(self, response_class, column, accessor): 97 | self.response_class = response_class 98 | self.column = column 99 | self.accessor = accessor 100 | 101 | def _get_instance(self, **kwargs): 102 | '''Return the first existing instance of the response record. 103 | ''' 104 | return session.query(self.response_class).filter_by(**kwargs).first() 105 | 106 | def get_or_create_from_legacy_response(self, response, **kwargs): 107 | ''' 108 | If a record matching the instance already does not already exist in the 109 | database, then create a new record. 110 | ''' 111 | response_cls = self.response_class(**kwargs).get_or_create(**kwargs) 112 | if not getattr(response_cls, self.column): 113 | setattr(response_cls, self.column, self.accessor(response)) 114 | _action_and_commit(response_cls, session.add) 115 | 116 | def update(self, response, **kwargs): 117 | ''' 118 | If a record matching the instance already exists in the database, update 119 | it, else create a new record. 120 | ''' 121 | response_cls = self._get_instance(**kwargs) 122 | if response_cls: 123 | setattr(response_cls, self.column, self.accessor(response)) 124 | _action_and_commit(response_cls, session.add) 125 | else: 126 | self.get_or_create_from_legacy_response(response, **kwargs) 127 | 128 | def delete(self, response, **kwargs): 129 | ''' 130 | If a record matching the instance id exists in the database, delete it. 131 | ''' 132 | response_cls = self._get_instance(**kwargs) 133 | if response_cls: 134 | _action_and_commit(response_cls, session.delete) 135 | 136 | 137 | class LocationResponseClassLegacyAccessor(ResponseClassLegacyAccessor): 138 | 139 | def __init__( 140 | self, response_class, column, 141 | accessor, venue_column, venue_accessor): 142 | super(LocationResponseClassLegacyAccessor, self).__init__( 143 | response_class, column, accessor) 144 | self.venue_column = venue_column 145 | self.venue_accessor = venue_accessor 146 | 147 | def get_or_create_from_legacy_response(self, response, **kwargs): 148 | ''' 149 | If a record matching the instance already does not already exist in the 150 | database, then create a new record. 151 | ''' 152 | response_cls = self.response_class(**kwargs).get_or_create(**kwargs) 153 | if not getattr(response_cls, self.column): 154 | setattr(response_cls, self.column, self.accessor(response)) 155 | _action_and_commit(response_cls, session.add) 156 | if not getattr(response_cls, self.venue_column): 157 | setattr( 158 | response_cls, self.venue_column, self.venue_accessor(response)) 159 | _action_and_commit(response_cls, session.add) 160 | 161 | def update(self, response, **kwargs): 162 | ''' 163 | If a record matching the instance already exists in the database, update 164 | both the column and venue column attributes, else create a new record. 
165 | ''' 166 | response_cls = super( 167 | LocationResponseClassLegacyAccessor, self)._get_instance(**kwargs) 168 | if response_cls: 169 | setattr(response_cls, self.column, self.accessor(response)) 170 | setattr( 171 | response_cls, self.venue_column, self.venue_accessor(response)) 172 | _action_and_commit(response_cls, session.add) 173 | -------------------------------------------------------------------------------- /datums/models/questions.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from base import GhostBase, session 4 | from sqlalchemy import Column, ForeignKey 5 | from sqlalchemy import Integer, String 6 | from sqlalchemy.orm import relationship 7 | 8 | 9 | __all__ = ['Question'] 10 | 11 | 12 | class Question(GhostBase): 13 | 14 | __tablename__ = 'questions' 15 | 16 | id = Column(Integer, primary_key=True, unique=True) 17 | type = Column(Integer, nullable=False) # questionType 18 | prompt = Column(String, nullable=False) # questionPrompt 19 | 20 | responses = relationship('Response', cascade='save-update, merge, delete') 21 | 22 | def __str__(self): 23 | attrs = ['id', 'type', 'prompt'] 24 | super(Question, self).__str__(attrs) 25 | -------------------------------------------------------------------------------- /datums/models/reports.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from base import GhostBase 4 | from sqlalchemy import Column, ForeignKey 5 | from sqlalchemy import Integer, Numeric, String, DateTime, Boolean 6 | from sqlalchemy.orm import relationship, backref 7 | from sqlalchemy_utils import UUIDType 8 | 9 | 10 | __all__ = ['Report', 'AltitudeReport', 'AudioReport', 11 | 'LocationReport', 'PlacemarkReport', 'WeatherReport'] 12 | 13 | 14 | class Report(GhostBase): 15 | 16 | __tablename__ = 'reports' 17 | 18 | id = Column(UUIDType, primary_key=True) 19 | background = Column(Numeric) 20 | battery = Column(Numeric) 21 | connection = Column(Numeric) 22 | created_at = Column(DateTime(timezone=False)) 23 | draft = Column(Boolean) 24 | report_impetus = Column(Integer) 25 | section_identifier = Column(String) 26 | steps = Column(Integer) 27 | 28 | responses = relationship( 29 | 'Response', backref=backref('Report', order_by=id), passive_deletes=True) 30 | altitude_report = relationship( 31 | 'AltitudeReport', backref=backref('Report'), passive_deletes=True) 32 | audio_report = relationship( 33 | 'AudioReport', backref=backref('Report'), passive_deletes=True) 34 | location_report = relationship( 35 | 'LocationReport', backref=backref('Report'), passive_deletes=True) 36 | weather_report = relationship( 37 | 'WeatherReport', backref=backref('Report'), passive_deletes=True) 38 | 39 | def __str__(self): 40 | attrs = ['id', 'created_at', 'report_impetus', 'battery', 'steps', 41 | 'section_identifier', 'background', 'connection', 'draft'] 42 | super(Report, self).__str__(attrs) 43 | 44 | class AltitudeReport(GhostBase): 45 | 46 | __tablename__ = 'altitude_reports' 47 | 48 | id = Column(UUIDType, primary_key=True) 49 | floors_ascended = Column(Numeric) 50 | floors_descended = Column(Numeric) 51 | gps_altitude_from_location = Column(Numeric) 52 | gps_altitude_raw = Column(Numeric) 53 | pressure = Column(Numeric) 54 | pressure_adjusted = Column(Numeric) 55 | report_id = Column( 56 | UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) 57 | 58 | def __str__(self): 59 | attrs = ['id', 'report_id', 'floors_ascended', 'floors_descended', 'gps_altitude_from_location', 'gps_altitude_raw', 'pressure', 'pressure_adjusted'] 60 |
super(AltitudeReport, self).__str__(attrs) 61 | 62 | 63 | class AudioReport(GhostBase): 64 | 65 | __tablename__ = 'audio_reports' 66 | 67 | id = Column(UUIDType, primary_key=True) 68 | average = Column(Numeric) 69 | peak = Column(Numeric) 70 | report_id = Column( 71 | UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) 72 | 73 | def __str__(self): 74 | attrs = ['id', 'report_id', 'average', 'peak'] 75 | super(AudioReport, self).__str__(attrs) 76 | 77 | 78 | class LocationReport(GhostBase): 79 | 80 | __tablename__ = 'location_reports' 81 | 82 | id = Column(UUIDType, primary_key=True) 83 | altitude = Column(Numeric) 84 | course = Column(Numeric) 85 | created_at = Column(DateTime(timezone=False)) 86 | horizontal_accuracy = Column(Numeric) 87 | latitude = Column(Numeric) 88 | longitude = Column(Numeric) 89 | report_id = Column( 90 | UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) 91 | speed = Column(Numeric) 92 | vertical_accuracy = Column(Numeric) 93 | 94 | placemark = relationship('PlacemarkReport', backref=backref( 95 | 'location_reports', order_by=id), passive_deletes=True) 96 | 97 | def __str__(self): 98 | attrs = ['id', 'report_id', 'created_at', 'latitude', 99 | 'longitudue', 'altitude', 'speed', 'course', 100 | 'vertical_accuracy', 'horizontal_accuracy'] 101 | super(LocationReport, self).__str__(attrs) 102 | 103 | 104 | class PlacemarkReport(GhostBase): 105 | 106 | __tablename__ = 'placemark_reports' 107 | 108 | id = Column(UUIDType, primary_key=True) 109 | address = Column(String) 110 | city = Column(String) 111 | country = Column(String) 112 | county = Column(String) 113 | inland_water = Column(String) 114 | location_report_id = Column( 115 | UUIDType, ForeignKey('location_reports.id', ondelete='CASCADE'), 116 | nullable=False) 117 | neighborhood = Column(String) 118 | postal_code = Column(String) 119 | region = Column(String) 120 | state = Column(String) 121 | street_name = Column(String) 122 | street_number = Column(String) 123 | 124 | def __str__(self): 125 | attrs = ['id', 'location_report_id', 'street_number', 126 | 'street_name', 'address', 'neighborhood', 'city', 'county', 127 | 'state', 'country', 'postal_code', 'region'] 128 | super(PlacemarkReport, self).__str__(attrs) 129 | 130 | 131 | class WeatherReport(GhostBase): 132 | 133 | __tablename__ = 'weather_reports' 134 | 135 | id = Column(UUIDType, primary_key=True) 136 | dewpoint_celsius = Column(Numeric) 137 | feels_like_celsius = Column(Numeric) 138 | feels_like_fahrenheit = Column(Numeric) 139 | latitude = Column(Numeric) 140 | longitude = Column(Numeric) 141 | precipitation_in = Column(Numeric) 142 | precipitation_mm = Column(Numeric) 143 | pressure_in = Column(Numeric) 144 | pressure_mb = Column(Numeric) 145 | relative_humidity = Column(String) 146 | report_id = Column( 147 | UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) 148 | station_id = Column(String) 149 | temperature_celsius = Column(Numeric) 150 | temperature_fahrenheit = Column(Numeric) 151 | uv = Column(Numeric) 152 | visibility_km = Column(Numeric) 153 | visibility_mi = Column(Numeric) 154 | weather = Column(String) 155 | wind_degrees = Column(Integer) 156 | wind_direction = Column(String) 157 | wind_gust_kph = Column(Numeric) 158 | wind_gust_mph = Column(Numeric) 159 | wind_kph = Column(Numeric) 160 | wind_mph = Column(Numeric) 161 | 162 | def __str__(self): 163 | attrs = ['id', 'report_id', 'station_id', 'latitude', 164 | 'longitude', 'weather', 'temperature_fahrenheit', 165 | 
'temperature_celsius', 'feels_like_fahrenheit', 166 | 'feels_like_celsius', 'wind_direction', 'wind_degrees', 167 | 'wind_mph', 'wind_kph', 'wind_gust_mph', 'wind_gust_kph', 168 | 'relative_humidity', 'precipitation_in', 'precipitation_mm', 169 | 'dewpoint_celsius', 'visibility_mi', 'visibility_km', 'uv'] 170 | super(WeatherReport, self).__str__(attrs) 171 | -------------------------------------------------------------------------------- /datums/models/responses.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from base import GhostBase, ResponseClassLegacyAccessor 4 | from sqlalchemy import Column, ForeignKey 5 | from sqlalchemy import Boolean, Float, Integer, String 6 | from sqlalchemy.dialects import postgresql 7 | from sqlalchemy.orm import backref, relationship 8 | from sqlalchemy_utils import UUIDType 9 | 10 | 11 | __all__ = ['Response', 'BooleanResponse', 'NumericResponse', 'LocationResponse', 12 | 'MultiResponse', 'NoteResponse', 'PeopleResponse', 'TokenResponse'] 13 | 14 | 15 | class Response(GhostBase): 16 | 17 | __tablename__ = 'responses' 18 | 19 | id = Column(Integer, primary_key=True) 20 | question_id = Column( 21 | Integer, ForeignKey('questions.id', ondelete='CASCADE')) 22 | report_id = Column(UUIDType, ForeignKey('reports.id', ondelete='CASCADE')) 23 | type = Column(String) 24 | 25 | __mapper_args__ = { 26 | 'polymorphic_on': type 27 | } 28 | 29 | def __str__(self): 30 | attrs = ['id', 'report_id', 'question_id', 'type'] 31 | super(Response, self).__str__(attrs) 32 | 33 | 34 | class BooleanResponse(Response): 35 | 36 | boolean_response = Column(Boolean) 37 | 38 | __mapper_args__ = { 39 | 'polymorphic_identity': 'boolean', 40 | } 41 | 42 | 43 | class LocationResponse(Response): 44 | 45 | location_response = Column(String) 46 | venue_id = Column(String, nullable=True) 47 | 48 | __mapper_args__ = { 49 | 'polymorphic_identity': 'location', 50 | } 51 | 52 | 53 | class MultiResponse(Response): 54 | 55 | multi_response = Column(postgresql.ARRAY(String)) 56 | 57 | __mapper_args__ = { 58 | 'polymorphic_identity': 'multi', 59 | } 60 | 61 | 62 | class NoteResponse(Response): 63 | 64 | note_response = Column(String) 65 | 66 | __mapper_args__ = { 67 | 'polymorphic_identity': 'note', 68 | } 69 | 70 | 71 | class NumericResponse(Response): 72 | 73 | numeric_response = Column(Float) 74 | 75 | __mapper_args__ = { 76 | 'polymorphic_identity': 'numeric', 77 | } 78 | 79 | 80 | class PeopleResponse(Response): 81 | 82 | people_response = Column(postgresql.ARRAY(String)) 83 | 84 | __mapper_args__ = { 85 | 'polymorphic_identity': 'people', 86 | } 87 | 88 | 89 | class TokenResponse(Response): 90 | 91 | tokens_response = Column(postgresql.ARRAY(String)) 92 | 93 | __mapper_args__ = { 94 | 'polymorphic_identity': 'tokens', 95 | } 96 | -------------------------------------------------------------------------------- /datums/pipeline/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import codec 4 | import json 5 | import mappers 6 | import uuid 7 | import warnings 8 | from datums import models 9 | 10 | __all__ = ['codec', 'mappers'] 11 | 12 | 13 | class QuestionPipeline(object): 14 | 15 | def __init__(self, question): 16 | self.question = question 17 | 18 | self.question_dict = {'type': self.question['questionType'], 19 | 'prompt': self.question['prompt']} 20 | 21 | def add(self): 22 | models.Question.get_or_create(**self.question_dict) 23 | 24 | def 
update(self): 25 | models.Question.update(**self.question_dict) 26 | 27 | def delete(self): 28 | models.Question.delete(**self.question_dict) 29 | 30 | 31 | class ResponsePipeline(object): 32 | 33 | def __init__(self, response, report): 34 | self.response = response 35 | self.report = report 36 | 37 | self.accessor, self.ids = codec.get_response_accessor( 38 | self.response, self.report) 39 | 40 | def add(self): 41 | self.accessor.get_or_create_from_legacy_response( 42 | self.response, **self.ids) 43 | 44 | def update(self): 45 | self.accessor.update(self.response, **self.ids) 46 | 47 | def delete(self): 48 | self.accessor.delete(self.response, **self.ids) 49 | 50 | 51 | class ReportPipeline(object): 52 | 53 | def __init__(self, report): 54 | self.report = report 55 | 56 | def _report(self, action, key_mapper=mappers._report_key_mapper): 57 | '''Return the dictionary of **kwargs with the correct datums attribute 58 | names and data types for the top level of the report, and return the 59 | nested levels separately. 60 | ''' 61 | _top_level = [ 62 | k for k, v in self.report.items() if not isinstance(v, dict)] 63 | _nested_level = [ 64 | k for k, v in self.report.items() if isinstance(v, dict)] 65 | top_level_dict = {} 66 | nested_levels_dict = {} 67 | for key in _top_level: 68 | try: 69 | if key == 'date' or key == 'timestamp': 70 | item = mappers._key_type_mapper[key]( 71 | str(self.report[key]), **{'ignoretz': True}) 72 | else: 73 | item = mappers._key_type_mapper[key](str( 74 | self.report[key]) if key != 'draft' else self.report[key]) 75 | except KeyError: 76 | item = self.report[key] 77 | finally: 78 | try: 79 | top_level_dict[key_mapper[key]] = item 80 | except KeyError: 81 | warnings.warn(''' 82 | {0} is not currently supported by datums and will be ignored. 83 | Would you consider submitting an issue to add support? 84 | https://www.github.com/thejunglejane/datums/issues 85 | '''.format(key)) 86 | for key in _nested_level: 87 | nested_levels_dict[key] = self.report[key] 88 | # Add the parent report ID 89 | nested_levels_dict[key][ 90 | 'reportUniqueIdentifier'] = mappers._key_type_mapper[ 91 | 'uniqueIdentifier'](str(self.report['uniqueIdentifier'])) 92 | if key == 'placemark': 93 | # Add the parent location report UUID 94 | nested_levels_dict[key][ 95 | 'locationUniqueIdentifier'] = nested_levels_dict[key].pop( 96 | 'reportUniqueIdentifier') 97 | # Create UUID for altitude report if there is not one and the action 98 | # is get_or_create, else delete the altitude report from the nested 99 | # levels and warn that it will not be updated 100 | if 'uniqueIdentifier' not in nested_levels_dict[key]: 101 | if action.__func__.func_name == 'get_or_create': 102 | nested_levels_dict[key]['uniqueIdentifier'] = uuid.uuid4() 103 | else: 104 | del nested_levels_dict[key] 105 | warnings.warn(''' 106 | No uniqueIdentifier found for AltitudeReport in {0}. 107 | Existing altitude report will not be updated. 
108 | '''.format(self.report['uniqueIdentifier'])) 109 | return top_level_dict, nested_levels_dict 110 | 111 | def add(self, action=models.Report.get_or_create, 112 | key_mapper=mappers._report_key_mapper): 113 | top_level, nested_levels = self._report(action, key_mapper) 114 | action(**top_level) 115 | for nested_level in nested_levels: 116 | try: 117 | key_mapper = mappers._report_key_mapper[nested_level] 118 | except KeyError: 119 | key_mapper = mappers._report_key_mapper[ 120 | 'location'][nested_level] 121 | ReportPipeline(nested_levels[nested_level]).add( 122 | mappers._model_type_mapper[ 123 | nested_level].get_or_create, key_mapper) 124 | 125 | def update(self, action=models.Report.update, 126 | key_mapper=mappers._report_key_mapper): 127 | top_level, nested_levels = self._report(action, key_mapper) 128 | action(**top_level) 129 | for nested_level in nested_levels: 130 | try: 131 | key_mapper = mappers._report_key_mapper[nested_level] 132 | except KeyError: 133 | key_mapper = mappers._report_key_mapper[ 134 | 'location'][nested_level] 135 | ReportPipeline(nested_levels[nested_level]).update( 136 | mappers._model_type_mapper[nested_level].update, key_mapper) 137 | 138 | def delete(self): 139 | models.Report.delete(**{'id': mappers._key_type_mapper[ 140 | 'uniqueIdentifier'](str(self.report['uniqueIdentifier']))}) 141 | 142 | 143 | class SnapshotPipeline(object): 144 | 145 | def __init__(self, snapshot): 146 | self.snapshot = snapshot 147 | 148 | self.report = self.snapshot.copy() 149 | self.responses = self.report.pop('responses') 150 | 151 | _ = self.report.pop('photoSet', None) # TODO (jsa): add support 152 | 153 | def add(self): 154 | ReportPipeline(self.report).add() 155 | for response in self.responses: 156 | ResponsePipeline(response, self.report).add() 157 | 158 | def update(self): 159 | ReportPipeline(self.report).update() 160 | for response in self.responses: 161 | ResponsePipeline(response, self.report).update() 162 | 163 | def delete(self): 164 | ReportPipeline(self.report).delete() 165 | -------------------------------------------------------------------------------- /datums/pipeline/codec.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from datums import models 4 | 5 | 6 | def human_to_boolean(human): 7 | '''Convert a boolean string ('Yes' or 'No') to True or False. 8 | 9 | PARAMETERS 10 | ---------- 11 | human : list 12 | a list containing the "human" boolean string to be converted to 13 | a Python boolean object. If a non-list is passed, or if the list 14 | is empty, None will be returned. Only the first element of the 15 | list will be used. Anything other than 'Yes' will be considered 16 | False. 
17 | ''' 18 | if not isinstance(human, list) or len(human) == 0: 19 | return None 20 | if human[0].lower() == 'yes': 21 | return True 22 | return False 23 | 24 | token_accessor = models.base.ResponseClassLegacyAccessor( 25 | models.TokenResponse, 'tokens_response', 26 | (lambda x: [i['text'] for i in x.get('tokens', [])])) 27 | 28 | multi_accessor = models.base.ResponseClassLegacyAccessor( 29 | models.MultiResponse, 'multi_response', 30 | (lambda x: x.get('answeredOptions'))) 31 | 32 | boolean_accessor = models.base.ResponseClassLegacyAccessor( 33 | models.BooleanResponse, 'boolean_response', 34 | (lambda x: human_to_boolean(x.get('answeredOptions')))) 35 | 36 | location_accessor = models.base.LocationResponseClassLegacyAccessor( 37 | models.LocationResponse, 'location_response', 38 | (lambda x: x['locationResponse'].get( 39 | 'text') if x.get('locationResponse') else None), 40 | 'venue_id', (lambda x: x['locationResponse'].get( 41 | 'foursquareVenueId') if x.get('locationResponse') else None)) 42 | 43 | people_accessor = models.base.ResponseClassLegacyAccessor( 44 | models.PeopleResponse, 'people_response', 45 | (lambda x: [i['text'] for i in x.get('tokens', [])])) 46 | 47 | numeric_accessor = models.base.ResponseClassLegacyAccessor( 48 | models.NumericResponse, 'numeric_response', 49 | (lambda x: float( 50 | x.get('numericResponse')) if bool(x.get('numericResponse')) else None)) 51 | 52 | note_accessor = models.base.ResponseClassLegacyAccessor( 53 | models.NoteResponse, 'note_response', 54 | (lambda x: [i.get('text') for i in x.get('textResponses', [])])) 55 | 56 | 57 | def get_response_accessor(response, report): 58 | # Determine the question ID and response type based on the prompt 59 | question_id, response_type = models.session.query( 60 | models.Question.id, models.Question.type).filter( 61 | models.Question.prompt == response['questionPrompt']).first() 62 | 63 | ids = {'question_id': question_id, # set the question ID 64 | 'report_id': report['uniqueIdentifier']} # set the report ID 65 | 66 | # Dictionary mapping response type to response class, column, and accessor 67 | # mapper 68 | response_mapper = {0: token_accessor, 1: multi_accessor, 69 | 2: boolean_accessor, 3: location_accessor, 70 | 4: people_accessor, 5: numeric_accessor, 71 | 6: note_accessor} 72 | 73 | return response_mapper[response_type], ids 74 | -------------------------------------------------------------------------------- /datums/pipeline/mappers.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import uuid 4 | from dateutil.parser import parse 5 | from datums import models 6 | 7 | 8 | _model_type_mapper = { 9 | 'altitude': models.AltitudeReport, 10 | 'audio': models.AudioReport, 11 | 'location': models.LocationReport, 12 | 'placemark': models.PlacemarkReport, 13 | 'report': models.Report, 14 | 'weather': models.WeatherReport 15 | } 16 | 17 | _key_type_mapper = { 18 | 'date': parse, 19 | 'draft': bool, 20 | 'timestamp': parse, 21 | 'uniqueIdentifier': uuid.UUID 22 | } 23 | 24 | # TODO (jsa): just snakeify Reporter's attribute names and call it a day, other- 25 | # wise datums will break every time Reporter updates attributes 26 | _report_key_mapper = { 27 | 'altitude': { 28 | 'adjustedPressure': 'pressure_adjusted', 29 | 'floorsAscended': 'floors_ascended', 30 | 'floorsDescended': 'floors_descended', 31 | 'gpsAltitudeFromLocation': 'gps_altitude_from_location', 32 | 'gpsRawAltitude': 'gps_altitude_raw', 33 | 'pressure': 'pressure', 34 | 
'reportUniqueIdentifier': 'report_id', # added 35 | 'uniqueIdentifier': 'id' 36 | }, 37 | 'audio': { 38 | 'avg': 'average', 39 | 'peak': 'peak', 40 | 'reportUniqueIdentifier': 'report_id', # added 41 | 'uniqueIdentifier': 'id' 42 | }, 43 | 'background': 'background', 44 | 'battery': 'battery', 45 | 'connection': 'connection', 46 | 'date': 'created_at', 47 | 'draft': 'draft', 48 | 'location': { 49 | 'altitude': 'altitude', 50 | 'course': 'course', 51 | 'horizontalAccuracy': 'horizontal_accuracy', 52 | 'latitude': 'latitude', 53 | 'longitude': 'longitude', 54 | 'placemark': { 55 | # TODO (jsa): don't assume U.S. addresses 56 | 'administrativeArea': 'state', 57 | 'country': 'country', 58 | 'inlandWater': 'inland_water', 59 | 'locality': 'city', 60 | 'locationUniqueIdentifier': 'location_report_id', # added 61 | 'name': 'address', 62 | 'postalCode': 'postal_code', 63 | 'region': 'region', 64 | 'subAdministrativeArea': 'county', 65 | 'subLocality': 'neighborhood', 66 | 'subThoroughfare': 'street_number', 67 | 'thoroughfare': 'street_name', 68 | 'uniqueIdentifier': 'id' 69 | }, 70 | 'reportUniqueIdentifier': 'report_id', # added 71 | 'speed': 'speed', 72 | 'timestamp': 'created_at', 73 | 'uniqueIdentifier': 'id', 74 | 'verticalAccuracy': 'vertical_accuracy' 75 | }, 76 | 'reportImpetus': 'report_impetus', 77 | 'sectionIdentifier': 'section_identifier', 78 | 'steps': 'steps', 79 | 'uniqueIdentifier': 'id', 80 | 'weather': { 81 | 'dewpointC': 'dewpoint_celsius', 82 | 'feelslikeC': 'feels_like_celsius', 83 | 'feelslikeF': 'feels_like_fahrenheit', 84 | 'latitude': 'latitude', 85 | 'longitude': 'longitude', 86 | 'precipTodayIn': 'precipitation_in', 87 | 'precipTodayMetric': 'precipitation_mm', 88 | 'pressureIn': 'pressure_in', 89 | 'pressureMb': 'pressure_mb', 90 | 'relativeHumidity': 'relative_humidity', 91 | 'reportUniqueIdentifier': 'report_id', # added 92 | 'stationID': 'station_id', 93 | 'tempC': 'temperature_celsius', 94 | 'tempF': 'temperature_fahrenheit', 95 | 'uniqueIdentifier': 'id', 96 | 'uv': 'uv', 97 | 'visibilityKM': 'visibility_km', 98 | 'visibilityMi': 'visibility_mi', 99 | 'weather': 'weather', 100 | 'windDegrees': 'wind_degrees', 101 | 'windDirection': 'wind_direction', 102 | 'windGustKPH': 'wind_gust_kph', 103 | 'windGustMPH': 'wind_gust_mph', 104 | 'windKPH': 'wind_kph', 105 | 'windMPH': 'wind_mph' 106 | } 107 | } 108 | -------------------------------------------------------------------------------- /examples/add_reports_from_yesterday.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | REPORTER_PATH=$HOME/Dropbox/Apps/Reporter-App 3 | 4 | # Get yesterday's date 5 | YESTERDAY=$(date -v -1d +"%Y-%m-%d") 6 | # Add the reports from the file dated yesterday to the database 7 | datums --add $REPORTER_PATH/$YESTERDAY-reporter-export.json 8 | -------------------------------------------------------------------------------- /images/data_model.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thejunglejane/datums/2250b365e37ba952c2426edc615c1487afabae6e/images/data_model.png -------------------------------------------------------------------------------- /images/header.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/thejunglejane/datums/2250b365e37ba952c2426edc615c1487afabae6e/images/header.png -------------------------------------------------------------------------------- /requirements.txt: 
-------------------------------------------------------------------------------- 1 | SQLAlchemy==1.0.11 2 | SQLAlchemy-Utils==0.29.9 3 | datums==1.0.0 4 | funcsigs==0.4 5 | mock==1.3.0 6 | pbr==1.8.1 7 | psycopg2==2.6.1 8 | python-dateutil==2.4.2 9 | six==1.10.0 10 | wsgiref==0.1.2 11 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [metadata] 2 | description-file = README.md -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup 2 | 3 | from datums import __version__ 4 | 5 | 6 | def readme(): 7 | with open('README.md') as f: 8 | return f.read() 9 | 10 | setup( 11 | name = 'datums', 12 | packages = ['datums', 'datums.pipeline', 'datums.models'], 13 | version = __version__, 14 | scripts = ['bin/datums'], 15 | install_requires = [ 16 | 'alembic', 'sqlalchemy', 'sqlalchemy-utils', 'python-dateutil'], 17 | tests_require = ['mock'], 18 | description = 'A PostgreSQL pipeline for Reporter.', 19 | author = 'Jane Stewart Adams', 20 | author_email = 'jane@thejunglejane.com', 21 | license = 'MIT', 22 | url = 'https://github.com/thejunglejane/datums', 23 | download_url = 'https://github.com/thejunglejane/datums/tarball/{v}'.format( 24 | v=__version__) 25 | ) -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | __all__ = ['test_codec', 'test_models', 'test_pipeline'] -------------------------------------------------------------------------------- /tests/test_codec.py: -------------------------------------------------------------------------------- 1 | import mock 2 | import unittest 3 | from datums import models 4 | from datums.pipeline import codec 5 | from sqlalchemy.orm import query 6 | 7 | 8 | class TestModelsBase(unittest.TestCase): 9 | 10 | def setUp(self): 11 | self.response = {'questionPrompt': 'How anxious are you?', 12 | 'uniqueIdentifier': '5496B14F-EAF1-4EF7-85DB-9531FDD7DC17', 13 | 'numericResponse': '0'} 14 | self.report = { 15 | 'uniqueIdentifier': '1B7AADBF-C137-4F35-A099-D73ACE534CFC'} 16 | 17 | def tearDown(self): 18 | del self.response 19 | del self.report 20 | 21 | def test_human_to_boolean_none(self): 22 | '''Does human_to_boolean return None if a non-list or an empty list is 23 | passed? 24 | ''' 25 | self.assertIsNone(codec.human_to_boolean([])) 26 | self.assertIsNone(codec.human_to_boolean(3)) 27 | 28 | def test_human_to_boolean_true(self): 29 | '''Does human_to_boolean return True if a list containing 'Yes' is 30 | passed? 31 | ''' 32 | self.assertTrue(codec.human_to_boolean(['Yes'])) 33 | 34 | def test_human_to_boolean_false(self): 35 | '''Does human_to_boolean return False if a list containing 'No', or 36 | something else that is not 'Yes', is passed? 37 | ''' 38 | self.assertFalse(codec.human_to_boolean(['No'])) 39 | self.assertFalse(codec.human_to_boolean(['Foo'])) 40 | 41 | def test_human_to_boolean_true_followed_by_false(self): 42 | '''Does human_to_boolean return True if the first element of the list 43 | is 'Yes', ignoring other elements of the list? 
44 | ''' 45 | self.assertTrue(codec.human_to_boolean(['Yes', 'No'])) 46 | 47 | @mock.patch.object(query.Query, 'first') 48 | @mock.patch.object(query.Query, 'filter', return_value=query.Query( 49 | models.Question)) 50 | @mock.patch.object( 51 | models.session, 'query', return_value=query.Query(models.Question)) 52 | def test_get_response_accessor_valid_response_type( 53 | self, mock_session_query, mock_query_filter, mock_query_first): 54 | '''Does get_response_accessor() return the right response_mapper for 55 | the prompt in the response, as well as the question and report ID? 56 | ''' 57 | mock_query_first.return_value = (1, 5) 58 | mapper, ids = codec.get_response_accessor(self.response, self.report) 59 | mock_session_query.assert_called_once_with( 60 | models.Question.id, models.Question.type) 61 | self.assertTrue(mock_query_filter.called) 62 | self.assertEqual(mapper, codec.numeric_accessor) 63 | self.assertIsInstance(mapper, models.base.ResponseClassLegacyAccessor) 64 | self.assertDictEqual(ids, { 65 | 'question_id': 1, 'report_id': self.report['uniqueIdentifier']}) 66 | -------------------------------------------------------------------------------- /tests/test_models.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import mock 4 | import random 5 | import unittest 6 | from datums import models 7 | from sqlalchemy.orm import query 8 | 9 | 10 | class TestModelsBase(unittest.TestCase): 11 | 12 | def setUp(self): 13 | self.GhostBaseInstance = models.base.GhostBase() 14 | 15 | def tearDown(self): 16 | del self.GhostBaseInstance 17 | 18 | @mock.patch.object(models.base.metadata, 'create_all') 19 | def test_database_setup(self, mock_create_all): 20 | models.base.database_setup(models.engine) 21 | mock_create_all.assert_called_once_with(models.engine) 22 | 23 | @mock.patch.object(models.base.metadata, 'drop_all') 24 | def test_database_teardown(self, mock_drop_all): 25 | models.base.database_teardown(models.engine) 26 | mock_drop_all.assert_called_once_with(models.engine) 27 | 28 | @mock.patch.object(models.session, 'commit') 29 | @mock.patch.object(models.session, 'add') 30 | def test_action_and_commit_valid_kwargs( 31 | self, mock_session_add, mock_session_commit): 32 | '''Does the _action_and_commit() method commit the session if the 33 | kwargs are valid? 34 | ''' 35 | kwargs = {'section_identifier': 'bar'} 36 | obj = models.Report(**kwargs) 37 | setattr(self.GhostBaseInstance, 'section_identifier', 'bar') 38 | models.base._action_and_commit(obj, mock_session_add) 39 | mock_session_add.assert_called_once_with(obj) 40 | self.assertTrue(mock_session_commit.called) 41 | 42 | 43 | @mock.patch.object(query.Query, 'first') 44 | @mock.patch.object(query.Query, 'filter_by', return_value=query.Query( 45 | models.Report)) 46 | @mock.patch.object(models.session, 'query', return_value=query.Query( 47 | models.Report)) 48 | class TestGhostBase(unittest.TestCase): 49 | 50 | def setUp(self): 51 | self.GhostBaseInstance = models.base.GhostBase() 52 | 53 | def tearDown(self): 54 | del self.GhostBaseInstance 55 | 56 | def test_get_instance_exists( 57 | self, mock_session_query, mock_query_filter, mock_query_first): 58 | '''Does the _get_instance() method return an existing instance of the 59 | class? 
60 | ''' 61 | mock_query_first.return_value = models.base.GhostBase() 62 | self.assertIsInstance( 63 | self.GhostBaseInstance._get_instance( 64 | **{'foo': 'bar'}), self.GhostBaseInstance.__class__) 65 | mock_session_query.assert_called_once_with(models.base.GhostBase) 66 | mock_query_filter.assert_called_once_with(**{'foo': 'bar'}) 67 | self.assertTrue(mock_query_first.called) 68 | 69 | def test_get_instance_does_not_exist( 70 | self, mock_session_query, mock_query_filter, mock_query_first): 71 | '''Does the _get_instance() method return None if no instance of the 72 | class exists? 73 | ''' 74 | mock_query_first.return_value = None 75 | self.assertIsNone( 76 | self.GhostBaseInstance._get_instance(**{'foo': 'bar'})) 77 | mock_session_query.assert_called_once_with(models.base.GhostBase) 78 | mock_query_filter.assert_called_once_with(**{'foo': 'bar'}) 79 | self.assertTrue(mock_query_first.called) 80 | 81 | @mock.patch.object(models.session, 'add') 82 | def test_get_or_create_get(self, mock_session_add, mock_session_query, 83 | mock_query_filter, mock_query_first): 84 | '''Does the get_or_create() method return an instance of the class 85 | without adding it to the session if the instance already exists? 86 | ''' 87 | mock_query_first.return_value = True 88 | self.assertTrue(self.GhostBaseInstance.get_or_create(**{'id': 'foo'})) 89 | mock_session_add.assert_not_called() 90 | 91 | @mock.patch.object(models.session, 'add') 92 | def test_get_or_create_add(self, mock_session_add, mock_session_query, 93 | mock_query_filter, mock_query_first): 94 | '''Does the get_or_create() method create a new instance and add it to 95 | the session if the instance does not already exist? 96 | ''' 97 | mock_query_first.return_value = None 98 | self.assertIsInstance( 99 | models.Report.get_or_create( 100 | **{'id': 'foo'}), models.base.GhostBase) 101 | self.assertTrue(mock_session_add.called) 102 | 103 | @mock.patch.object(models.session, 'add') 104 | def test_update_exists(self, mock_session_add, mock_session_query, 105 | mock_query_filter, mock_query_first): 106 | '''Does the update() method update the __dict__ attribute of an 107 | existing instance of the class and add it to the session? 108 | ''' 109 | _ = models.Report 110 | mock_query_first.return_value = _ 111 | self.GhostBaseInstance.update(**{'id': 'bar'}) 112 | self.assertTrue(hasattr(_, 'id')) 113 | self.assertTrue(mock_session_add.called) 114 | 115 | @mock.patch.object(models.session, 'add') 116 | def test_update_does_not_exist(self, mock_session_add, mock_session_query, 117 | mock_query_filter, mock_query_first): 118 | '''Does the update() method create a new instance and add it to the 119 | session if the instance does not already exist? 120 | ''' 121 | mock_query_first.return_value = None 122 | models.Report.update(**{'id': 'bar'}) 123 | self.assertTrue(mock_session_add.called) 124 | 125 | @mock.patch.object(models.base, '_action_and_commit') 126 | @mock.patch.object(models.session, 'delete') 127 | def test_delete_exists( 128 | self, mock_session_delete, mock_action_commit, 129 | mock_session_query, mock_query_filter, mock_query_first): 130 | '''Does the delete() method validate an existing instance of the class 131 | before deleting from the session? 
132 | '''
133 | mock_query_first.return_value = True
134 | self.GhostBaseInstance.delete()
135 | self.assertTrue(mock_action_commit.called)
136 |
137 | @mock.patch.object(models.base, '_action_and_commit')
138 | @mock.patch.object(models.session, 'delete')
139 | def test_delete_does_not_exist(
140 | self, mock_session_delete, mock_action_commit,
141 | mock_session_query, mock_query_filter, mock_query_first):
142 | '''Does the delete() method do nothing if the instance does not already
143 | exist?
144 | '''
145 | mock_query_first.return_value = None
146 | self.GhostBaseInstance.delete()
147 | mock_action_commit.assert_not_called()
148 |
149 |
150 | @mock.patch.object(models.base, '_action_and_commit')
151 | class TestResponseClassLegacyAccessor(unittest.TestCase):
152 |
153 | _response_classes = models.Response.__subclasses__()
154 | _response_classes.remove(models.LocationResponse) # tested separately
155 |
156 | def setUp(self, mock_response=random.choice(_response_classes)):
157 | self.LegacyInstance = models.base.ResponseClassLegacyAccessor(
158 | response_class=mock_response, column='foo_response',
159 | accessor=(lambda x: x.get('foo')))
160 | self.test_response = {'foo': 'bar'}
161 | self.mock_response = mock_response
162 |
163 | def tearDown(self):
164 | del self.LegacyInstance
165 |
166 | @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
167 | @mock.patch.object(models.base.ResponseClassLegacyAccessor,
168 | 'get_or_create_from_legacy_response')
169 | def test_update_exists(
170 | self, mock_get_create, mock_get_instance, mock_action_commit):
171 | '''Does the update() method call _action_and_commit() with
172 | models.session.add if there is an existing instance in the database,
173 | without calling get_or_create_from_legacy_response()?
174 | '''
175 | _ = models.Report()
176 | mock_get_instance.return_value = _
177 | self.LegacyInstance.update(self.test_response)
178 | self.assertTrue(mock_get_instance.called)
179 | mock_action_commit.assert_called_once_with(_, models.session.add)
180 | mock_get_create.assert_not_called()
181 |
182 | @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
183 | @mock.patch.object(models.base.ResponseClassLegacyAccessor,
184 | 'get_or_create_from_legacy_response')
185 | def test_update_does_not_exist(
186 | self, mock_get_create, mock_get_instance, mock_action_commit):
187 | '''Does the update() method call get_or_create_from_legacy_response()
188 | if there isn't an existing instance in the database, without calling
189 | _action_and_commit()?
190 | '''
191 | mock_get_instance.return_value = None
192 | self.LegacyInstance.update(self.test_response)
193 | mock_action_commit.assert_not_called()
194 | mock_get_create.assert_called_once_with(self.test_response)
195 |
196 | @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
197 | def test_delete_exists(self, mock_get_instance, mock_action_commit):
198 | '''Does the delete() method call _action_and_commit() with
199 | models.session.delete if an instance exists?
200 | ''' 201 | _ = models.Report() 202 | mock_get_instance.return_value = _ 203 | self.LegacyInstance.delete(self.test_response) 204 | self.assertTrue(mock_get_instance.called) 205 | mock_action_commit.assert_called_once_with(_, models.session.delete) 206 | 207 | @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance') 208 | def test_delete_does_not_exist( 209 | self, mock_get_instance, mock_action_commit): 210 | '''Does the delete() method do nothing if no instance exists? 211 | ''' 212 | mock_get_instance.return_value = None 213 | self.LegacyInstance.delete(self.test_response) 214 | self.assertTrue(mock_get_instance.called) 215 | mock_action_commit.assert_not_called() 216 | -------------------------------------------------------------------------------- /tests/test_pipeline.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import datetime 4 | import mock 5 | import random 6 | import unittest 7 | import uuid 8 | import warnings 9 | from dateutil.parser import parse 10 | from dateutil.tz import tzoffset 11 | from datums import models 12 | from datums import pipeline 13 | from datums.pipeline import mappers, codec 14 | from sqlalchemy.orm import query 15 | 16 | 17 | class TestQuestionPipeline(unittest.TestCase): 18 | 19 | def setUp(self): 20 | self.question = {'questionType': 'numeric', 21 | 'prompt': 'How anxious are you?'} 22 | self.question_dict = {'type': self.question['questionType'], 23 | 'prompt': self.question['prompt']} 24 | 25 | def tearDown(self): 26 | delattr(self, 'question') 27 | delattr(self, 'question_dict') 28 | 29 | def test_question_pipeline_init(self): 30 | '''Are QuestionPipeline objects fully initialized? 31 | ''' 32 | q = pipeline.QuestionPipeline(self.question) 33 | self.assertTrue(hasattr(q, 'question_dict')) 34 | self.assertDictEqual(q.question_dict, self.question_dict) 35 | 36 | @mock.patch.object(models.Question, 'get_or_create') 37 | def test_question_pipeline_add(self, mock_get_create): 38 | '''Does the add() method on QuestionPipeline objects call 39 | models.Question.get_or_create with the question_dict attribute? 40 | ''' 41 | pipeline.QuestionPipeline(self.question).add() 42 | mock_get_create.assert_called_once_with(**self.question_dict) 43 | 44 | @mock.patch.object(models.Question, 'update') 45 | def test_question_pipeline_update(self, mock_update): 46 | '''Does the update() method on QuestionPipeline objects call 47 | models.Question.update with the question_dict attribute? 48 | ''' 49 | pipeline.QuestionPipeline(self.question).update() 50 | mock_update.assert_called_once_with(**self.question_dict) 51 | 52 | @mock.patch.object(models.Question, 'delete') 53 | def test_question_pipeline_delete(self, mock_delete): 54 | '''Does the delete() method on QuestionPipeline objects call 55 | models.Question.delete with the question_dict attribute? 
56 | '''
57 | pipeline.QuestionPipeline(self.question).delete()
58 | mock_delete.assert_called_once_with(**self.question_dict)
59 |
60 |
61 | @mock.patch.object(codec, 'get_response_accessor')
62 | class TestResponsePipeline(unittest.TestCase):
63 |
64 | def setUp(self):
65 | self.report = {'uniqueIdentifier': uuid.uuid4(), 'responses': [{
66 | 'questionPrompt': 'How anxious are you?',
67 | 'uniqueIdentifier': uuid.uuid4(),
68 | 'numericResponse': '1'}]}
69 | self.response = self.report.pop('responses')[0]
70 | self.accessor = codec.numeric_accessor
71 | self.ids = {'report_id': self.report['uniqueIdentifier'],
72 | 'question_id': 1}
73 |
74 | def tearDown(self):
75 | delattr(self, 'report')
76 | delattr(self, 'response')
77 | delattr(self, 'accessor')
78 | delattr(self, 'ids')
79 |
80 | def test_response_pipeline_init(self, mock_get_accessor):
81 | '''Are ResponsePipeline objects fully initialized?
82 | '''
83 | mock_get_accessor.return_value = return_value=(
84 | codec.numeric_accessor, self.ids)
85 | r = pipeline.ResponsePipeline(self.response, self.report)
86 | self.assertTrue(hasattr(r, 'accessor'))
87 | self.assertTrue(hasattr(r, 'ids'))
88 | self.assertEquals(r.accessor, self.accessor)
89 | self.assertDictEqual(r.ids, self.ids)
90 |
91 | @mock.patch.object(
92 | codec.numeric_accessor, 'get_or_create_from_legacy_response')
93 | def test_response_pipeline_add(
94 | self, mock_get_create_legacy, mock_get_accessor):
95 | '''Does the add() method on ResponsePipeline objects call
96 | codec.numeric_accessor.get_or_create_from_legacy_response with the
97 | response and the ids attribute?
98 | '''
99 | mock_get_accessor.return_value = return_value=(
100 | codec.numeric_accessor, self.ids)
101 | pipeline.ResponsePipeline(self.response, self.report).add()
102 | mock_get_accessor.assert_called_once_with(self.response, self.report)
103 | mock_get_create_legacy.assert_called_once_with(
104 | self.response, **self.ids)
105 |
106 | @mock.patch.object(codec.numeric_accessor, 'update')
107 | def test_response_pipeline_update(self, mock_update, mock_get_accessor):
108 | '''Does the update() method on ResponsePipeline objects call
109 | codec.numeric_accessor.update with the response and the ids attribute?
110 | '''
111 | mock_get_accessor.return_value = return_value=(
112 | codec.numeric_accessor, self.ids)
113 | pipeline.ResponsePipeline(self.response, self.report).update()
114 | mock_update.assert_called_once_with(self.response, **self.ids)
115 |
116 | @mock.patch.object(codec.numeric_accessor, 'delete')
117 | def test_response_pipeline_delete(self, mock_delete, mock_get_accessor):
118 | '''Does the delete() method on ResponsePipeline objects call
119 | codec.numeric_accessor.delete with the response and the ids attribute?
120 | ''' 121 | mock_get_accessor.return_value = return_value=( 122 | codec.numeric_accessor, self.ids) 123 | pipeline.ResponsePipeline(self.response, self.report).delete() 124 | mock_delete.assert_called_once_with(self.response, **self.ids) 125 | 126 | 127 | class TestReportPipeline(unittest.TestCase): 128 | 129 | def setUp(self): 130 | self.report = {'uniqueIdentifier': uuid.uuid4(), 'audio': { 131 | 'uniqueIdentifier': uuid.uuid4(), 'avg': -59.8, 'peak': -57, }, 132 | 'connection': 0, 'battery': 0.89, 'location': { 133 | 'uniqueIdentifier': uuid.uuid4(), 'speed': -1, 'longitude': -73.9, 134 | 'latitude': 40.8, 'altitude': 11.2, 'placemark': { 135 | 'uniqueIdentifier': uuid.uuid4(), 'country': 'United States', 136 | 'locality': 'New York'}}} 137 | self.maxDiff = None 138 | 139 | def tearDown(self): 140 | delattr(self, 'report') 141 | 142 | def test_report_pipeline_init(self): 143 | '''Are ReportPipeline objects fully initialized? 144 | ''' 145 | r = pipeline.ReportPipeline(self.report) 146 | self.assertTrue(hasattr(r, 'report')) 147 | self.assertDictEqual(r.report, self.report) 148 | 149 | def test_report_pipeline_report_add(self): 150 | '''Does the _report() method on ReportPipeline objects return a 151 | dictionary of top-level report attributes mapped to the correct datums 152 | attribute names, and an unmapped dictionary of nested report attributes 153 | with the parent level report's uniqueIdentifier added, when the action 154 | specified is models.Report.get_or_create? 155 | ''' 156 | top_level, nested_level = pipeline.ReportPipeline(self.report)._report( 157 | models.Report.get_or_create) 158 | self.assertDictEqual(top_level, { 159 | 'id': self.report['uniqueIdentifier'], 'connection': 0, 160 | 'battery': 0.89}) 161 | self.assertDictEqual(nested_level, {'audio': { 162 | 'reportUniqueIdentifier': self.report['uniqueIdentifier'], 163 | 'uniqueIdentifier': self.report['audio']['uniqueIdentifier'], 164 | 'avg': -59.8, 'peak': -57}, 'location': { 165 | 'reportUniqueIdentifier': self.report['uniqueIdentifier'], 166 | 'uniqueIdentifier': self.report['location']['uniqueIdentifier'], 167 | 'latitude': 40.8, 'longitude': -73.9, 'altitude': 11.2, 168 | 'speed': -1, 'placemark': { 169 | 'uniqueIdentifier': self.report['location'][ 170 | 'placemark']['uniqueIdentifier'], 171 | 'country': 'United States', 'locality': 'New York'}}}) 172 | 173 | def test_report_pipeline_report_add_altitude_no_uuid(self): 174 | '''Does the _report() method on ReportPipeline objects add a 175 | uniqueIdentifier to a nested AltitudeReport if there isn't one and the 176 | action specified is models.Report.get_or_create? 177 | ''' 178 | self.report['altitude'] = { 179 | 'floorsDescended': 0, 'pressure': 101.5, 'floorsAscended': 0} 180 | top_level, nested_level = pipeline.ReportPipeline(self.report)._report( 181 | models.Report.get_or_create) 182 | self.assertSetEqual(set(nested_level['altitude'].keys()), 183 | set(['uniqueIdentifier', 'floorsAscended', 184 | 'floorsDescended', 'pressure', 185 | 'reportUniqueIdentifier'])) 186 | 187 | def test_report_pipeline_report_attr_not_supported(self): 188 | '''Does the _report() method on ReportPipeline objects generate a 189 | warning if there is an attribute in the report that is not yet 190 | supported (i.e., not yet in mappers._report_key_mapper)? 
191 | ''' 192 | r = {'foo': 'bar'} 193 | with warnings.catch_warnings(record=True) as w: 194 | warnings.simplefilter('always') 195 | pipeline.ReportPipeline(r)._report(models.Report.get_or_create) 196 | self.assertEquals(len(w), 1) 197 | self.assertEquals(w[-1].category, UserWarning) 198 | 199 | def test_report_pipeline_report_update_altitude_no_uuid(self): 200 | '''Does the _report() method on ReportPipeline objects NOT add a 201 | uniqueIdentifier to a nested AltitudeReport and generate a warning if 202 | there isn't one and the action specified is models.Report.update? 203 | ''' 204 | self.report['altitude'] = { 205 | 'floorsDescended': 0, 'pressure': 101.5, 'floorsAscended': 0} 206 | with warnings.catch_warnings(record=True) as w: 207 | warnings.simplefilter('always') 208 | top_level, nested_level = pipeline.ReportPipeline( 209 | self.report)._report(models.Report.update) 210 | self.assertEquals(len(w), 1) 211 | self.assertEquals(w[-1].category, UserWarning) 212 | self.assertSetEqual( 213 | set(nested_level.keys()), set(['audio', 'location'])) 214 | 215 | def test_report_pipeline_report_update_altitude_uuid(self): 216 | '''Does the _report() method on ReportPipeline objects NOT generate a 217 | warning if there is a nested AltitudeReport has a uniqueIdentifier and 218 | the action specified is models.Report.update? 219 | ''' 220 | self.report['altitude'] = { 221 | 'floorsDescended': 0, 'pressure': 101.5, 222 | 'floorsAscended': 0, 'uniqueIdentifier': uuid.uuid4()} 223 | with warnings.catch_warnings(record=True) as w: 224 | warnings.simplefilter('always') 225 | top_level, nested_level = pipeline.ReportPipeline( 226 | self.report)._report(models.Report.update) 227 | self.assertEquals(len(w), 0) 228 | self.assertSetEqual(set(nested_level.keys()), 229 | set(['audio', 'location', 'altitude'])) 230 | 231 | # TODO (jsa): test recursion 232 | @mock.patch.object(models.Report, 'get_or_create') 233 | @mock.patch.object(pipeline.ReportPipeline, '_report') 234 | def test_report_pipeline_add(self, mock_report, mock_get_create): 235 | '''Does the add() method on ReportPipeline objects call the _report() 236 | method on the object and then call models.Report.get_or_create() with 237 | the top level dictionary returned, then recurse the keys in the nested 238 | dictionary returned? 239 | ''' 240 | top_level = {'id': uuid.uuid4()} 241 | mock_report.return_value = (top_level, {}) 242 | pipeline.ReportPipeline(self.report).add(models.Report.get_or_create) 243 | mock_report.assert_called_once_with( 244 | mock_get_create, mappers._report_key_mapper) 245 | mock_get_create.assert_called_once_with(**top_level) 246 | 247 | # TODO (jsa): test recursion 248 | @mock.patch.object(models.Report, 'update') 249 | @mock.patch.object(pipeline.ReportPipeline, '_report') 250 | def test_report_pipeline_update(self, mock_report, mock_update): 251 | '''Does the update() method on ReportPipeline objects call the _report() 252 | method on the object and then call models.Report.update() with the top 253 | level dictionary returned, then recurse the keys in the nested 254 | dictionary returned? 
255 | ''' 256 | top_level = {'id': uuid.uuid4()} 257 | mock_report.return_value = (top_level, {}) 258 | pipeline.ReportPipeline(self.report).update(models.Report.update) 259 | mock_report.assert_called_once_with( 260 | mock_update, mappers._report_key_mapper) 261 | mock_update.assert_called_once_with(**top_level) 262 | 263 | @mock.patch.object(models.Report, 'delete') 264 | def test_report_pipeline_delete(self, mock_report_delete): 265 | '''Does the delete() method on ReportPipeline objects call the 266 | models.Report.delete() method with the uniqueIdentifier of the report? 267 | ''' 268 | pipeline.ReportPipeline(self.report).delete() 269 | mock_report_delete.assert_called_once_with( 270 | **{'id': self.report['uniqueIdentifier']}) 271 | 272 | 273 | class TestSnapshotPipeline(unittest.TestCase): 274 | 275 | def setUp(self): 276 | self.snapshot = {'uniqueIdentifier': uuid.uuid4(), 'responses': [ 277 | {'questionPrompt': 'How tired are you?', 278 | 'uniqueIdentifier': uuid.uuid4(), 'numericResponse': '2'}, 279 | {'questionPrompt': 'Where are you?', 280 | 'uniqueIdentifier': uuid.uuid4(), 281 | 'locationResponse': { 282 | 'longitude': -73.9, 283 | 'latitude': 40.8, 284 | 'uniqueIdentifier': uuid.uuid4()}, 'text': 'Home'}]} 285 | self.report = self.snapshot.copy() 286 | self.responses = self.report.pop('responses') 287 | self.ids = {'report_id': self.report['uniqueIdentifier'], 288 | 'question_id': 1} 289 | self._ = None 290 | 291 | def tearDown(self): 292 | delattr(self, 'snapshot') 293 | delattr(self, 'report') 294 | delattr(self, 'responses') 295 | delattr(self, '_') 296 | 297 | def test_snapshot_pipeline_init_no_photoset(self): 298 | '''Are SnapshotPipeline objects fully initialized when no photoSet is 299 | present? 300 | ''' 301 | s = pipeline.SnapshotPipeline(self.snapshot) 302 | self.assertTrue(hasattr(s, 'report')) 303 | self.assertTrue(hasattr(s, 'responses')) 304 | self.assertFalse(hasattr(s, '_')) 305 | self.assertDictEqual(s.report, self.report) 306 | self.assertListEqual(s.responses, self.responses) 307 | 308 | def test_snapshot_pipeline_init_photoset(self): 309 | '''Are SnapshotPipeline objects fully initialized when a photoSet is 310 | present? 311 | ''' 312 | self.snapshot['photoSet'] = {'photos': [ 313 | {'uniqueIdentifier': uuid.uuid4()}]} 314 | s = pipeline.SnapshotPipeline(self.snapshot) 315 | self.assertTrue(hasattr(s, 'report')) 316 | self.assertTrue(hasattr(s, 'responses')) 317 | self.assertFalse(hasattr(s, '_')) 318 | self.assertDictEqual(s.report, self.report) 319 | self.assertListEqual(s.responses, self.responses) 320 | 321 | @mock.patch.object(codec, 'get_response_accessor') 322 | @mock.patch.object(pipeline.ResponsePipeline, 'add') 323 | @mock.patch.object(pipeline.ReportPipeline, 'add') 324 | def test_snapshot_pipeline_add( 325 | self, mock_report_add, mock_response_add, mock_get_accessor): 326 | '''Does the add() method on SnapshotPipeline objects initialize a 327 | ReportPipeline object and call its add() method, and initialize n 328 | ResponsePipeline objects and call their add() methods, where n is the 329 | number of responses included in the snapshot? 
330 | '''
331 | mock_get_accessor.return_value = return_value=(
332 | codec.numeric_accessor, self.ids)
333 | pipeline.SnapshotPipeline(self.snapshot).add()
334 | self.assertEquals(mock_report_add.call_count, 1)
335 | self.assertEquals(mock_response_add.call_count, 2)
336 |
337 | @mock.patch.object(codec, 'get_response_accessor')
338 | @mock.patch.object(pipeline.ResponsePipeline, 'update')
339 | @mock.patch.object(pipeline.ReportPipeline, 'update')
340 | def test_snapshot_pipeline_update(
341 | self, mock_report_update, mock_response_update, mock_get_accessor):
342 | '''Does the update() method on SnapshotPipeline objects initialize a
343 | ReportPipeline object and call its update() method, and initialize n
344 | ResponsePipeline objects and call their update() methods, where n is
345 | the number of responses included in the snapshot?
346 | '''
347 | mock_get_accessor.return_value = return_value=(
348 | codec.numeric_accessor, self.ids)
349 | pipeline.SnapshotPipeline(self.snapshot).update()
350 | self.assertEquals(mock_report_update.call_count, 1)
351 | self.assertEquals(mock_response_update.call_count, 2)
352 |
353 | @mock.patch.object(pipeline.ResponsePipeline, 'delete')
354 | @mock.patch.object(pipeline.ReportPipeline, 'delete')
355 | def test_snapshot_pipeline_delete(
356 | self, mock_report_delete, mock_response_delete):
357 | '''Does the delete() method on SnapshotPipeline objects initialize a
358 | ReportPipeline object and call its delete() method, without initializing
359 | any ResponsePipeline objects and calling their delete() methods?
360 | '''
361 | pipeline.SnapshotPipeline(self.snapshot).delete()
362 | self.assertEquals(mock_report_delete.call_count, 1)
363 | mock_response_delete.assert_not_called()
364 |
--------------------------------------------------------------------------------
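For orientation, the sketch below walks through the same flow from Python that the `datums` executable drives: set up the schema, add the questions, then add each snapshot as a report plus its responses. It is a minimal, illustrative sketch; the `'questions'` and `'snapshots'` keys and the file name are assumptions about the Reporter export's layout rather than anything this codebase defines.

```python
# Illustrative sketch only, not part of the datums package. It assumes the
# Reporter export JSON has top-level 'questions' and 'snapshots' lists and
# that DATABASE_URI points at an existing database.
import json

from datums import models, pipeline

# Create the schema if it does not exist yet (the same call the --setup
# flag makes).
models.base.database_setup(models.engine)

with open('reporter-export.json') as f:  # hypothetical file name
    export = json.load(f)

# Add questions first; get_response_accessor() matches each response to its
# question by prompt, so the questions need to be in the database already.
for question in export.get('questions', []):
    pipeline.QuestionPipeline(question).add()

# Each snapshot becomes a report row plus one response row per answer.
for snapshot in export.get('snapshots', []):
    pipeline.SnapshotPipeline(snapshot).add()
```

The `examples/add_reports_from_yesterday.sh` script above does the equivalent through the command line for a single day's export file.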