├── .github
│   └── pull_request_template.md
├── .gitignore
├── AUTHORS
├── CHANGELOG.md
├── LICENSE
├── README.rst
├── requirements.txt
├── setup.cfg
├── setup.py
├── sqlalchemy_teradata
│   ├── __init__.py
│   ├── base.py
│   ├── compiler.py
│   ├── dialect.py
│   ├── requirements.py
│   └── types.py
└── test
    ├── __init__.py
    ├── conftest.py
    ├── test_dialect.py
    ├── test_generic_types.py
    ├── test_limit_offset.py
    ├── test_suite.py
    ├── test_td_ddl.py
    ├── test_td_types.py
    └── usage_test.py

/.github/pull_request_template.md:
--------------------------------------------------------------------------------
## High level description of this Pull-request
Include motivations, reasons, and background to add context to your contribution.
Include a description of the changes associated with your commit(s).

## Related Issues
- List all related issues or NA

## Reviewers
- Use @Mentions to specify the reviewers for your PR.

# CHECKLIST:
Make sure all items are marked when you submit the pull-request.

- [ ] Relevant documentation for functions, tests, classes, the wiki, etc. has been added
- [ ] Necessary unit tests in tests/ pass with no errors
- [ ] Necessary integration tests in tests/ pass with no errors
- [ ] Update the CHANGELOG.md with a summary of your changes if requested

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | 47 | # Translations 48 | *.mo 49 | *.pot 50 | 51 | # Django stuff: 52 | *.log 53 | 54 | # Sphinx documentation 55 | docs/_build/ 56 | 57 | # PyBuilder 58 | target/ 59 | -------------------------------------------------------------------------------- /AUTHORS: -------------------------------------------------------------------------------- 1 | Mark Sandan 2 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # Change Log 2 | 3 | ## [Unreleased](https://github.com/Teradata/sqlalchemy-teradata/tree/HEAD) 4 | 5 | **Closed issues:** 6 | 7 | - limit\_clause should include \*\*kwargs in it's function arguments [\#6](https://github.com/Teradata/sqlalchemy-teradata/issues/6) 8 | - Add "timestamp" to reserved keywords [\#4](https://github.com/Teradata/sqlalchemy-teradata/issues/4) 9 | - Schema meaning in dialect implementation [\#3](https://github.com/Teradata/sqlalchemy-teradata/issues/3) 10 | - AUTHMECH=LDAP [\#1](https://github.com/Teradata/sqlalchemy-teradata/issues/1) 11 | 12 | **Merged pull requests:** 13 | 14 | - Dialect methods implementation [\#9](https://github.com/Teradata/sqlalchemy-teradata/pull/9) ([mrbungie](https://github.com/mrbungie)) 15 | - Fix proposal for \#3 [\#8](https://github.com/Teradata/sqlalchemy-teradata/pull/8) ([mrbungie](https://github.com/mrbungie)) 16 | - Fixes limit\_clause signature problems [\#7](https://github.com/Teradata/sqlalchemy-teradata/pull/7) ([mrbungie](https://github.com/mrbungie)) 17 | - Added timestamp to ReservedKeywords [\#5](https://github.com/Teradata/sqlalchemy-teradata/pull/5) ([mrbungie](https://github.com/mrbungie)) 18 | 19 | 20 | 21 | ### Changes in Version 0.0.6 (Released: July 6, 2016) 22 | The [original repository](https://github.com/sandan/sqlalchemy-teradata) has moved under the Teradata organization :tada::confetti_ball::chart_with_upwards_trend: 23 | 24 | The initial implementation includes the following changes: 25 | * `An implementation of TeradataDialect and the various compilers` 26 | * `Implement various generic types in the TeradataTypeCompiler and dialect specific types` 27 | * `Various tests for the types and usage of the dialect` 28 | 29 | The majority of the development in the beginning is squashed into the following commit: 30 | * [add tests, compiler impls, type impls](https://github.com/Teradata/sqlalchemy-teradata/commit/def0489f6f75bbfaf6012027394e78747a3941fc) 31 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License 2 | 3 | Copyright (c) 2017 by Teradata. All rights reserved. 
http://teradata.com

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--------------------------------------------------------------------------------
/README.rst:
--------------------------------------------------------------------------------
Dialect for SQLAlchemy
======================

SQLAlchemy is a database toolkit that provides an abstraction over
databases. It allows you to interact with relational databases using an
object relational mapper or through a Pythonic SQL rendering engine
known as the Core.

Read the documentation and more: http://www.sqlalchemy.org/

The Teradata Dialect is an implementation of SQLAlchemy’s Dialect
System. It implements various classes that are specific to interacting
with the Teradata DBAPI, construction of SQL specific to Teradata, and
more. The project is still in an incubation phase. See test/usage\_test
for how the dialect is used for the Core expression language API.

Design Principles
=================

::

    * Have a simple setup process and a minimal learning curve
    * Provide a simple core that is modular and extensible
    * Be an easy way to interact with the database out of the box

Quick Start
===========

Install the sqlalchemy-teradata library:

::

    [sudo] pip install sqlalchemy-teradata

Set up the connect URL to point to the database. See the `example`_ in
the wiki.

Get Involved
============

::

    * We welcome your contributions in: Documentation, Bug Reporting, Tests, and Code (Features & Bug Fixes)
    * You can contribute to our documentation by going to our github wiki.
    * All code submissions are done through pull requests.

We have a room in `gitter`_. It is still based off of the old repo but it will do.

Tests
=====

The dialect is tested using the pytest plugin. You can run pytest in the sqlalchemy-teradata
directory with the ``py.test`` command. By default the tests are run against the database
URI specified in ``setup.cfg`` under the ``[db]`` heading.

You can override the dburi you would like the tests to run against:

.. code::

   py.test --dburi teradata://user:pw@host

To view the databases aliased in setup.cfg:

.. code::

   py.test --dbs all

To run the tests against an aliased database URI in setup.cfg:

.. code::

   py.test --db default
   py.test --db teradata

If neither the --db flag nor the --dburi flag is specified when running py.test,
the database URI specified as ``default`` in setup.cfg is used.

Typical usage:

.. code:: python

    # test all the things (against default)!
    py.test -s test/*

    # run tests in this file
    py.test -s test/test_suite.py

    # run TestClass in the file
    py.test -s test/test_suite.py::TestClass

    # just run a specific method in TestClass
    py.test -s test/test_suite.py::TestClass::test_func

See the `pytest docs`_ for more info.

See Also
========

- `PyTd`_: the DB API 2.0 implementation found in the teradata module
- `sqlalchemy\_aster`_: A SQLAlchemy dialect for Aster

.. _gitter: https://gitter.im/sandan/sqlalchemy-teradata
.. _example: https://github.com/Teradata/sqlalchemy-teradata/wiki/Examples#creating-an-engine
.. _pytest docs: http://pytest.org/latest/contents.html#toc
.. _PyTd: https://github.com/Teradata/PyTd
.. _sqlalchemy\_aster: https://github.com/KarolTx/sqlalchemy_aster

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
sqlalchemy
teradata

--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
[egg_info]
tag_build = dev

[pytest]
addopts= --tb native -v -r fxX
python_files=test/*test_*.py


[sqla_testing]
requirement_cls=sqlalchemy_teradata.requirements:Requirements
profile_file=.profiles.txt

[db]
default=teradata://:@localhost
teradata=teradata://:@localhost

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
from setuptools import setup

setup(
    name='sqlalchemy_teradata',
    version='0.1.0',
    description="Teradata dialect for SQLAlchemy",
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Environment :: Console',
        'Intended Audience :: Developers',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: Implementation :: CPython',
        'Topic :: Database :: Front-Ends',
    ],
    keywords='Teradata SQLAlchemy',
    author='Mark Sandan',
    author_email='mark.sandan@teradata.com',
    license='MIT',
    packages=['sqlalchemy_teradata'],
    include_package_data=True,
    tests_require=['pytest >= 2.5.2'],
    install_requires=['sqlalchemy', 'teradata'],
    entry_points={
        'sqlalchemy.dialects': [
            'teradata = sqlalchemy_teradata.dialect:TeradataDialect',
        ]
    }
)

--------------------------------------------------------------------------------
/sqlalchemy_teradata/__init__.py:
--------------------------------------------------------------------------------
# sqlalchemy_teradata/__init__.py
# Copyright (C) 2015-2016 by Teradata
#
#
# This module is part of sqlalchemy-teradata and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
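
# The imports below re-export the Teradata-specific types alongside the
# generic SQLAlchemy types, so both can be imported directly from
# sqlalchemy_teradata.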

from .types import TIME, TIMESTAMP, DECIMAL, CHAR, VARCHAR, CLOB, BYTEINT
from sqlalchemy.sql.sqltypes import (Integer, Interval, SmallInteger,
                                     BigInteger, Float, Boolean,
                                     Text, Unicode, UnicodeText,
                                     DATE)

__version__ = '0.1.0'

# __all__ must hold the exported *names* as strings; a tuple of the classes
# themselves breaks `from sqlalchemy_teradata import *` on Python 3
__all__ = ('Integer', 'SmallInteger', 'BigInteger', 'Float', 'Text',
           'Unicode', 'UnicodeText', 'Interval', 'Boolean',
           'DATE', 'TIME', 'TIMESTAMP', 'DECIMAL',
           'CHAR', 'VARCHAR', 'CLOB', 'BYTEINT')

from teradata import tdodbc

--------------------------------------------------------------------------------
/sqlalchemy_teradata/base.py:
--------------------------------------------------------------------------------
# sqlalchemy_teradata/base.py
# Copyright (C) 2015-2016 by Teradata
#
#
# This module is part of sqlalchemy-teradata and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php

import re
from sqlalchemy import *
from sqlalchemy.sql import compiler
from sqlalchemy.engine import default
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement, Executable
from sqlalchemy.schema import DDLElement
from sqlalchemy.sql import table
from sqlalchemy import types as sqltypes
from sqlalchemy.types import CHAR, DATE, DATETIME, \
    BLOB, CLOB, TIMESTAMP, FLOAT, BIGINT, DECIMAL, NUMERIC, \
    NCHAR, NVARCHAR, INTEGER, \
    SMALLINT, TIME, TEXT, VARCHAR, REAL

AUTOCOMMIT_REGEXP = re.compile(
    r'\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER|MERGE)',
    re.I | re.UNICODE)

# TODO: Read this from the dbc.restrictedwordsv view
ReservedWords = set(["abort", "abortsession", "abs", "access_lock", "account",
                     "acos", "acosh", "add", "add_months", "admin", "after",
                     "aggregate", "all", "alter", "amp", "and", "ansidate",
                     "any", "arglparen", "as", "asc", "asin", "asinh", "at",
                     "atan", "atan2", "atanh", "atomic", "authorization", "ave",
                     "average", "avg", "before", "begin", "between", "bigint",
                     "binary", "blob", "both", "bt", "but", "by", "byte",
                     "byteint", "bytes", "call", "case", "case_n", "casespecific",
                     "cast", "cd", "char", "char_length", "char2hexint", "count",
                     "day", "desc", "hour", "in", "le", "minute", "meets",
                     "month", "order", "ordering", "title", "value", "user",
                     "password", "preceded", "second", "succeeds", "year",
                     "match", "time", "timestamp"])

class TeradataExecutionContext(default.DefaultExecutionContext):

    def __init__(self, dialect, connection, dbapi_connection, compiled_ddl):
        super(TeradataExecutionContext, self).__init__(dialect, connection,
                                                       dbapi_connection, compiled_ddl)

    def should_autocommit_text(self, statement):
        return AUTOCOMMIT_REGEXP.match(statement)

class TeradataIdentifierPreparer(compiler.IdentifierPreparer):

    reserved_words = ReservedWords

    def __init__(self, dialect, initial_quote='"', final_quote=None,
                 escape_quote='"', omit_schema=False):

        super(TeradataIdentifierPreparer, self).__init__(dialect, initial_quote,
                                                         final_quote, escape_quote,
                                                         omit_schema)

# Views Recipe from: https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/Views
class CreateView(DDLElement):

    def __init__(self, name, selectable):
        self.name = name
        self.selectable = selectable

class DropView(DDLElement):

    def __init__(self, name):
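        # only the view's name is needed here; visit_drop_view below renders
        # it into the DROP VIEW statement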
        self.name = name

@compiles(CreateView)
def visit_create_view(element, compiler, **kw):
    return "CREATE VIEW {} AS {}".format(
        element.name, compiler.sql_compiler.process(element.selectable))

@compiles(DropView)
def visit_drop_view(element, compiler, **kw):
    return "DROP VIEW {}".format(element.name)

class CreateTableAs(DDLElement):
    pass

# stub; @compiles passes (element, compiler, **kw), so the second
# parameter is the DDL compiler, not a table
@compiles(CreateTableAs)
def visit_create_table_as(element, compiler, **kw):
    pass

class CreateTableQueue(DDLElement):
    pass

class CreateTableGlobalTempTrace(DDLElement):
    pass

class CreateErrorTable(DDLElement):
    pass

class IdentityColumn(DDLElement):
    pass

class CreateJoinIndex(DDLElement):
    pass

class CreateHashIndex(DDLElement):
    pass

--------------------------------------------------------------------------------
/sqlalchemy_teradata/compiler.py:
--------------------------------------------------------------------------------
# sqlalchemy_teradata/compiler.py
# Copyright (C) 2015-2016 by Teradata
#
#
# This module is part of sqlalchemy-teradata and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php

from sqlalchemy.sql import compiler
from sqlalchemy import exc, sql, util
from sqlalchemy import schema as sa_schema
from sqlalchemy.types import Unicode
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Select
from sqlalchemy import create_engine


class TeradataCompiler(compiler.SQLCompiler):

    def __init__(self, dialect, statement, column_keys=None, inline=False, **kwargs):
        super(TeradataCompiler, self).__init__(dialect, statement, column_keys,
                                               inline, **kwargs)

    def get_select_precolumns(self, select, **kwargs):
        """
        Handles the part of the select statement before the columns are specified.
        Note: Teradata does not allow a 'distinct' to be specified when 'top' is
        used in the same select statement.

        Instead, if a user specifies both in the same select clause,
        the DISTINCT will be used with a ROW_NUMBER OVER(ORDER BY) subquery.
        """
        pre = select._distinct and "DISTINCT " or ""

        # TODO: decide whether we can replace this with the recipe...
        if (select._limit is not None and select._offset is None):
            pre += "TOP %d " % (select._limit)

        return pre

    def limit_clause(self, select, **kwargs):
        """Limit after SELECT is implemented in get_select_precolumns"""
        return ""

class TeradataDDLCompiler(compiler.DDLCompiler):

    def postfix(self, table):
        """
        This hook processes the optional keyword teradata_postfixes
        ex.
        from sqlalchemy_teradata.compiler import TDCreateTablePostfix as Opts
        t = Table('name', meta,
                  ...,
                  teradata_postfixes=Opts().
                                     fallback().
                                     log().
                                     with_journal_table(t2.name))

        CREATE TABLE name, fallback,
        log,
        with journal table = [database/user.]table_name(
        ...
        )

        teradata_postfixes can also be a list of strings to be appended
        in the order given.
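
        ex. (equivalent list form; the option strings are illustrative):
        t = Table('name', meta,
                  ...,
                  teradata_postfixes=['fallback', 'log'])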
        """
        post = table.dialect_kwargs['teradata_postfixes']

        if isinstance(post, TDCreateTablePostfix):
            if post.opts:
                return ',\n' + post.compile()
            else:
                return ''
        elif post:
            assert type(post) is list
            # the joined string was previously assigned but never returned
            return ',\n ' + ',\n'.join(post)
        else:
            return ''

    def visit_create_table(self, create):
        """
        Current workaround for https://gerrit.sqlalchemy.org/#/c/85/1
        Once the merge gets released, delete this method entirely
        """
        table = create.element
        preparer = self.dialect.identifier_preparer

        text = '\nCREATE '
        if table._prefixes:
            text += ' '.join(table._prefixes) + ' '
        text += 'TABLE ' + preparer.format_table(table) + ' ' +\
                self.postfix(table) + ' ('

        separator = '\n'
        # if only one primary key, specify it along with the column
        first_pk = False
        for create_column in create.columns:
            column = create_column.element
            try:
                processed = self.process(create_column,
                                         first_pk=column.primary_key
                                         and not first_pk)
                if processed is not None:
                    text += separator
                    separator = ', \n'
                    text += '\t' + processed
                if column.primary_key:
                    first_pk = True
            except exc.CompileError as ce:
                util.raise_from_cause(
                    exc.CompileError(
                        util.u("(in table '%s', column '%s'): %s") %
                        (table.description, column.name, ce.args[0])
                    ))

        const = self.create_table_constraints(
            table, _include_foreign_key_constraints=  # noqa
            create.include_foreign_key_constraints)
        if const:
            text += ', \n\t' + const

        text += "\n)%s\n\n" % self.post_create_table(table)
        return text


    def post_create_table(self, table):
        """
        This hook processes the TDCreateTablePost options given by the
        teradata_post_create dialect kwarg for Table.

        Note that there are other dialect kwargs defined that could possibly
        be processed here.

        See the kwargs defined in dialect.TeradataDialect

        Ex.
        from sqlalchemy_teradata.compiler import TDCreateTablePost as post
        Table('t1', meta,
              ...,
              teradata_post_create=post().
                                   fallback().
                                   checksum('on').
                                   mergeblockratio(85))

        creates ddl for a table like so:

        CREATE TABLE "t1" ,
             checksum=on,
             fallback,
             mergeblockratio=85 (
                 ...
158 | ) 159 | 160 | """ 161 | kw = table.dialect_kwargs['teradata_post_create'] 162 | if isinstance(kw, TDCreateTablePost): 163 | if kw: 164 | return '\n' + kw.compile() 165 | return '' 166 | 167 | def get_column_specification(self, column, **kwargs): 168 | 169 | if column.table is None: 170 | raise exc.CompileError( 171 | "Teradata requires Table-bound columns " 172 | "in order to generate DDL") 173 | 174 | colspec = (self.preparer.format_column(column) + " " +\ 175 | self.dialect.type_compiler.process( 176 | column.type, type_expression=column)) 177 | 178 | # Null/NotNull 179 | if column.nullable is not None: 180 | if not column.nullable or column.primary_key: 181 | colspec += " NOT NULL" 182 | 183 | return colspec 184 | 185 | class TeradataOptions(object): 186 | """ 187 | An abstract base class for various schema object options 188 | """ 189 | def _append(self, opts, val): 190 | _opts=opts.copy() 191 | _opts.update(val) 192 | return _opts 193 | 194 | def compile(self): 195 | """ 196 | processes the argument options and returns a string representation 197 | """ 198 | pass 199 | 200 | def format_cols(self, key, val): 201 | 202 | """ 203 | key is a string 204 | val is a list of strings with an optional dict as the last element 205 | the dict values are appended at the end of the col list 206 | """ 207 | res = '' 208 | col_expr = ', '.join([x for x in val if type(x) is str]) 209 | 210 | res += key + '( ' + col_expr + ' )' 211 | if type(val[-1]) is dict: 212 | # process syntax elements (dict) after cols 213 | res += ' '.join( val[-1]['post'] ) 214 | return res 215 | 216 | class TDCreateTablePostfix(TeradataOptions): 217 | """ 218 | A generative class for Teradata create table options 219 | specified in teradata_postfixes 220 | """ 221 | def __init__(self, opts={}): 222 | """ 223 | opts is a dictionary that can be pre-populated with key-value pairs 224 | that may be overidden if the keys conflict with those entered 225 | in the methods below. See the compile method to see how the dict 226 | gets processed. 227 | """ 228 | self.opts = opts 229 | 230 | def compile(self): 231 | def process_opts(opts): 232 | return [key if opts[key] is None else '{}={}'.\ 233 | format(key, opts[key]) for key in opts] 234 | 235 | res = ',\n'.join(process_opts(self.opts)) 236 | return res 237 | 238 | def fallback(self, enabled=True): 239 | res = 'fallback' if enabled else 'no fallback' 240 | return self.__class__(self._append(self.opts, {res:None})) 241 | 242 | def log(self, enabled=True): 243 | res = 'log' if enabled else 'no log' 244 | return self.__class__(self._append(self.opts, {res:None})) 245 | 246 | def with_journal_table(self, tablename=None): 247 | """ 248 | tablename is the schema.tablename of a table. 249 | For example, if t1 is a SQLAlchemy: 250 | with_journal_table(t1.name) 251 | """ 252 | return self.__class__(self._append(self.opts,\ 253 | {'with journal table':tablename})) 254 | 255 | def before_journal(self, prefix='dual'): 256 | res = prefix+' '+'before journal' 257 | return self.__class__(self._append(self.opts, {res:None})) 258 | 259 | def after_journal(self, prefix='not local'): 260 | res = prefix+' '+'after journal' 261 | return self.__class__(self._append(self.opts, {res:None})) 262 | 263 | def checksum(self, integrity_checking='default'): 264 | """ 265 | integrity_checking is a string taking vaues of 'on', 'off', 266 | or 'default'. 
267 | """ 268 | assert integrity_checking in ('on', 'off', 'default') 269 | return self.__class__(self._append(self.opts,\ 270 | {'checksum':integrity_checking})) 271 | 272 | def freespace(self, percentage=0): 273 | """ 274 | percentage is an integer taking values from 0 to 75. 275 | """ 276 | return self.__class__(self._append(self.opts,\ 277 | {'freespace':percentage})) 278 | 279 | def no_mergeblockratio(self): 280 | return self.__class__(self._append(self.opts,\ 281 | {'no mergeblockratio':None})) 282 | 283 | def mergeblockratio(self, integer=None): 284 | """ 285 | integer takes values from 0 to 100 inclusive. 286 | """ 287 | res = 'default mergeblockratio' if integer is None\ 288 | else 'mergeblockratio' 289 | return self.__class__(self._append(self.opts, {res:integer})) 290 | 291 | def min_datablocksize(self): 292 | return self.__class__(self._append(self.opts,\ 293 | {'minimum datablocksize':None})) 294 | 295 | def max_datablocksize(self): 296 | return self.__class__(self._append(self.opts,\ 297 | {'maximum datablocksize':None})) 298 | 299 | def datablocksize(self, data_block_size=None): 300 | """ 301 | data_block_size is an integer specifying the number of bytes 302 | """ 303 | res = 'datablocksize' if data_block_size is not None\ 304 | else 'default datablocksize' 305 | return self.__class__(self._append(self.opts,\ 306 | {res:data_block_size})) 307 | 308 | def blockcompression(self, opt='default'): 309 | """ 310 | opt is a string that takes values 'autotemp', 311 | 'default', 'manual', or 'never' 312 | """ 313 | return self.__class__(self._append(self.opts,\ 314 | {'blockcompression':opt})) 315 | 316 | def with_no_isolated_loading(self): 317 | return self.__class__(self._append(self.opts,\ 318 | {'with no isolated loading':None})) 319 | 320 | def with_concurrent_isolated_loading(self, opt=None): 321 | """ 322 | opt is a string that takes values 'all', 'insert', or 'none' 323 | """ 324 | assert opt in ('all', 'insert', 'none') 325 | for_stmt = ' for '+opt if opt is not None else '' 326 | res = 'with concurrent isolated loading'+for_stmt 327 | return self.__class__(self._append(self.opts, {res:opt})) 328 | 329 | class TDCreateTablePost(TeradataOptions): 330 | """ 331 | A generative class for building post create table options 332 | given in the teradata_post_create keyword for Table 333 | """ 334 | def __init__(self, opts={}): 335 | self.opts = opts 336 | 337 | def compile(self): 338 | def process(opts): 339 | return [key.upper() if opts[key] is None\ 340 | else self.format_cols(key, opts[key])\ 341 | for key in opts] 342 | 343 | return ',\n'.join(process(self.opts)) 344 | 345 | def no_primary_index(self): 346 | return self.__class__(self._append(self.opts, {'no primary index':None})) 347 | 348 | def primary_index(self, name=None, unique=False, cols=[]): 349 | """ 350 | name is a string for the primary index 351 | if unique is true then unique primary index is specified 352 | cols is a list of column names 353 | """ 354 | res = 'unique primary index' if unique else 'primary index' 355 | res += ' ' + name if name is not None else '' 356 | return self.__class__(self._append(self.opts, {res:cols})) 357 | 358 | 359 | def primary_amp(self, name=None, cols=[]): 360 | 361 | """ 362 | name is an optional string for the name of the amp index 363 | cols is a list of column names (strings) 364 | """ 365 | res = 'primary amp index' 366 | res += ' ' + name if name is not None else '' 367 | return self.__class__(self._append(self.opts, {res:cols})) 368 | 369 | 370 | def partition_by_col(self, 
                         all_but=False, cols={}, rows={}, const=None):
        """
        ex:

        Opts.partition_by_col(cols={'c1': True, 'c2': False, 'c3': None},
                              rows={'d1': True, 'd2': False, 'd3': None},
                              const=1)
        will emit:

        partition by(
            column(
                column(c1) auto compress,
                column(c2) no auto compress,
                column(c3),
                row(d1) auto compress,
                row(d2) no auto compress,
                row(d3))
            add 1
        )

        cols is a dictionary whose key is the column name and value True or False
        specifying AUTO COMPRESS or NO AUTO COMPRESS respectively. The columns
        are stored with COLUMN format.

        rows is a dictionary similar to cols except the ROW format is used

        const is an unsigned BIGINT
        """
        res = 'partition by( column all but' if all_but else\
              'partition by( column'
        c = self._visit_partition_by(cols, rows)
        if const is not None:
            c += [{'post': ['add %s' % str(const), ')']}]

        return self.__class__(self._append(self.opts, {res: c}))

    def _visit_partition_by(self, cols, rows):
        # start from an empty list so the rows-only case cannot hit an
        # unbound local; previously 'c' was only created inside 'if cols:'
        c = []
        if cols:
            c += ['column(' + k + ') auto compress'
                  for k, v in cols.items() if v is True]

            c += ['column(' + k + ') no auto compress'
                  for k, v in cols.items() if v is False]

            # interpolate the column name; the original emitted the
            # literal string 'column(k)'
            c += ['column(' + k + ')' for k, v in cols.items() if v is None]

        if rows:
            c += ['row(' + k + ') auto compress'
                  for k, v in rows.items() if v is True]

            c += ['row(' + k + ') no auto compress'
                  for k, v in rows.items() if v is False]

            # wrap bare names in row(...) to match the docstring example
            c += ['row(' + k + ')' for k, v in rows.items() if v is None]

        return c


    def partition_by_col_auto_compress(self, all_but=False, cols={},
                                       rows={}, const=None):

        res = 'partition by( column auto compress all but' if all_but else\
              'partition by( column auto compress'
        c = self._visit_partition_by(cols, rows)
        if const is not None:
            c += [{'post': ['add %s' % str(const), ')']}]

        return self.__class__(self._append(self.opts, {res: c}))


    def partition_by_col_no_auto_compress(self, all_but=False, cols={},
                                          rows={}, const=None):

        res = 'partition by( column no auto compress all but' if all_but else\
              'partition by( column no auto compress'
        c = self._visit_partition_by(cols, rows)
        if const is not None:
            c += [{'post': ['add %s' % str(const), ')']}]

        return self.__class__(self._append(self.opts, {res: c}))


    def index(self, index):
        """
        Index is created with dialect specific keywords to
        include loading and ordering syntax elements

        index is a sqlalchemy.sql.schema.Index object.
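
        ex. (hypothetical usage; assumes t is a Table with columns c1 and c2):

        from sqlalchemy import Index
        Opts.index(Index('idx_c1_c2', t.c.c1, t.c.c2))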
        """
        # minimal fix: derive the option key and column list from the Index;
        # 'res' and 'c' were previously referenced here without being defined
        res = 'unique index' if index.unique else 'index'
        res += ' ' + index.name if index.name is not None else ''
        c = [col.name for col in index.columns]
        return self.__class__(self._append(self.opts, {res: c}))


class TeradataTypeCompiler(compiler.GenericTypeCompiler):

    def _get(self, key, type_, kw):
        return kw.get(key, getattr(type_, key, None))


    def visit_datetime(self, type_, **kw):
        return self.visit_TIMESTAMP(type_, precision=6, **kw)

    def visit_date(self, type_, **kw):
        return self.visit_DATE(type_, **kw)

    def visit_text(self, type_, **kw):
        return self.visit_CLOB(type_, **kw)

    def visit_time(self, type_, **kw):
        return self.visit_TIME(type_, precision=6, **kw)

    def visit_unicode(self, type_, **kw):
        return self.visit_VARCHAR(type_, charset='UNICODE', **kw)

    def visit_unicode_text(self, type_, **kw):
        return self.visit_CLOB(type_, charset='UNICODE', **kw)

    def visit_interval_year(self, type_, **kw):
        return 'INTERVAL YEAR{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_year_to_month(self, type_, **kw):
        return 'INTERVAL YEAR{} TO MONTH'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_month(self, type_, **kw):
        return 'INTERVAL MONTH{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_day(self, type_, **kw):
        return 'INTERVAL DAY{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_day_to_hour(self, type_, **kw):
        return 'INTERVAL DAY{} TO HOUR'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_day_to_minute(self, type_, **kw):
        return 'INTERVAL DAY{} TO MINUTE'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_day_to_second(self, type_, **kw):
        return 'INTERVAL DAY{} TO SECOND{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '',
            '('+str(type_.frac_precision)+')' if type_.frac_precision is not None else '')

    def visit_interval_hour(self, type_, **kw):
        return 'INTERVAL HOUR{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_hour_to_minute(self, type_, **kw):
        return 'INTERVAL HOUR{} TO MINUTE'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_hour_to_second(self, type_, **kw):
        return 'INTERVAL HOUR{} TO SECOND{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '',
            '('+str(type_.frac_precision)+')' if type_.frac_precision is not None else '')

    def visit_interval_minute(self, type_, **kw):
        return 'INTERVAL MINUTE{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '')

    def visit_interval_minute_to_second(self, type_, **kw):
        return 'INTERVAL MINUTE{} TO SECOND{}'.format(
            '('+str(type_.precision)+')' if type_.precision else '',
            '('+str(type_.frac_precision)+')' if type_.frac_precision is not None else '')

    def visit_interval_second(self, type_, **kw):
        if type_.frac_precision is not None and type_.precision:
            return 'INTERVAL SECOND{}'.format(
                '('+str(type_.precision)+', '+str(type_.frac_precision)+')')
        else:
            return 'INTERVAL SECOND{}'.format(
                '('+str(type_.precision)+')' if type_.precision else '')

    def
visit_TIME(self, type_, **kw): 552 | tz = ' WITH TIME ZONE' if type_.timezone else '' 553 | prec = self._get('precision', type_, kw) 554 | prec = '%s' % '('+str(prec)+')' if prec is not None else '' 555 | return 'TIME{}{}'.format(prec, tz) 556 | 557 | def visit_DATETIME(self, type_, **kw): 558 | return self.visit_TIMESTAMP(type_, precision=6, **kw) 559 | 560 | def visit_TIMESTAMP(self, type_, **kw): 561 | tz = ' WITH TIME ZONE' if type_.timezone else '' 562 | prec = self._get('precision', type_, kw) 563 | prec = '%s' % '('+str(prec)+')' if prec is not None else '' 564 | return 'TIMESTAMP{}{}'.format(prec, tz) 565 | 566 | def _string_process(self, type_, datatype, **kw): 567 | length = self._get('length', type_, kw) 568 | length = '(%s)' % length if length is not None else '' 569 | 570 | charset = self._get('charset', type_, kw) 571 | charset = ' CHAR SET %s' % charset if charset is not None else '' 572 | 573 | res = '{}{}{}'.format(datatype, length, charset) 574 | return res 575 | 576 | def visit_NCHAR(self, type_, **kw): 577 | return self.visit_CHAR(type_, charset='UNICODE', **kw) 578 | 579 | def visit_NVARCHAR(self, type_, **kw): 580 | return self.visit_VARCHAR(type_, charset='UNICODE', **kw) 581 | 582 | def visit_CHAR(self, type_, **kw): 583 | return self._string_process(type_, 'CHAR', length=type_.length, **kw) 584 | 585 | def visit_VARCHAR(self, type_, **kw): 586 | if type_.length is None: 587 | return self._string_process(type_, 'LONG VARCHAR', **kw) 588 | else: 589 | return self._string_process(type_, 'VARCHAR', length=type_.length, **kw) 590 | 591 | def visit_TEXT(self, type_, **kw): 592 | return self.visit_CLOB(type_, **kw) 593 | 594 | def visit_CLOB(self, type_, **kw): 595 | multi = self._get('multiplier', type_, kw) 596 | if multi is not None and type_.length is not None: 597 | length = str(type_.length) + multi 598 | return self._string_process(type_, 'CLOB', length=length, **kw) 599 | 600 | return self._string_process(type_, 'CLOB', **kw) 601 | 602 | def visit_BLOB(self, type, **kw): 603 | pass 604 | 605 | def visit_BOOLEAN(self, type_, **kw): 606 | return self.visit_BYTEINT(type_, **kw) 607 | 608 | def visit_BYTEINT(self, type_, **kw): 609 | return 'BYTEINT' 610 | 611 | 612 | 613 | #@compiles(Select, 'teradata') 614 | #def compile_select(element, compiler, **kw): 615 | # """ 616 | # """ 617 | # 618 | # if not getattr(element, '_window_visit', None): 619 | # if element._limit is not None or element._offset is not None: 620 | # limit, offset = element._limit, element._offset 621 | # 622 | # orderby=compiler.process(element._order_by_clause) 623 | # if orderby: 624 | # element = element._generate() 625 | # element._window_visit=True 626 | # #element._limit = None 627 | # #element._offset = None cant set to none... 
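#
# (sketch of the SQL shape the commented-out recipe below is driving at;
#  't' and 'id' are illustrative names, not part of the dialect)
#
#   SELECT * FROM (
#       SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) AS rownum FROM t
#   ) AS sub
#   WHERE sub.rownum > :offset AND sub.rownum <= :offset + :limit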
628 | # 629 | # # add a ROW NUMBER() OVER(ORDER BY) column 630 | # element = element.column(sql.literal_column('ROW NUMBER() OVER (ORDER BY %s)' % orderby).label('rownum')).order_by(None) 631 | # 632 | # # wrap into a subquery 633 | # limitselect = sql.select([c for c in element.alias().c if c.key != 'rownum']) 634 | # 635 | # limitselect._window_visit=True 636 | # limitselect._is_wrapper=True 637 | # 638 | # if offset is not None: 639 | # limitselect.append_whereclause(sql.column('rownum') > offset) 640 | # if limit is not None: 641 | # limitselect.append_whereclause(sql.column('rownum') <= (limit + offset)) 642 | # else: 643 | # limitselect.append_whereclause(sql.column("rownum") <= limit) 644 | # 645 | # element = limitselect 646 | # 647 | # kw['iswrapper'] = getattr(element, '_is_wrapper', False) 648 | # return compiler.visit_select(element, **kw) 649 | -------------------------------------------------------------------------------- /sqlalchemy_teradata/dialect.py: -------------------------------------------------------------------------------- 1 | # sqlalchemy_teradata/dialect.py 2 | # Copyright (C) 2015-2016 by Teradata 3 | # 4 | # 5 | # This module is part of sqlalchemy-teradata and is released under 6 | # the MIT License: http://www.opensource.org/licenses/mit-license.php 7 | 8 | from sqlalchemy.engine import default 9 | from sqlalchemy import pool, String, Numeric 10 | from sqlalchemy.sql import select, and_, or_ 11 | from sqlalchemy_teradata.compiler import TeradataCompiler, TeradataDDLCompiler, TeradataTypeCompiler 12 | from sqlalchemy_teradata.base import TeradataIdentifierPreparer, TeradataExecutionContext 13 | from sqlalchemy.sql.expression import text, table, column, asc 14 | from sqlalchemy import Table, Column, Index 15 | import sqlalchemy.types as sqltypes 16 | import sqlalchemy_teradata.types as tdtypes 17 | from itertools import groupby 18 | 19 | # ischema names is used for reflecting columns (see get_columns in the dialect) 20 | ischema_names = { 21 | None: sqltypes.NullType, 22 | 23 | 'cf': tdtypes.CHAR, 24 | 'cv': tdtypes.VARCHAR, 25 | 'uf': sqltypes.NCHAR, 26 | 'uv': sqltypes.NVARCHAR, 27 | 'co': tdtypes.CLOB, 28 | 'n' : tdtypes.NUMERIC, 29 | 'd' : tdtypes.DECIMAL, 30 | 'i' : sqltypes.INTEGER, 31 | 'i1': tdtypes.BYTEINT, 32 | 'i2': sqltypes.SMALLINT, 33 | 'i8': sqltypes.BIGINT, 34 | 'f' : sqltypes.FLOAT, 35 | 'da': sqltypes.DATE, 36 | 'ts': tdtypes.TIMESTAMP, 37 | 'sz': tdtypes.TIMESTAMP, #Added timestamp with timezone 38 | 'at': tdtypes.TIME, 39 | 'tz': tdtypes.TIMESTAMP, #Added time with timezone 40 | 41 | #Expreimental - Binary 42 | 'bf': sqltypes.BINARY, 43 | 'bv': sqltypes.VARBINARY, 44 | 'bo': sqltypes.BLOB 45 | } #TODO: add the interval types and blob 46 | 47 | stringtypes=[ t for t in ischema_names if issubclass(ischema_names[t],sqltypes.String)] 48 | 49 | class TeradataDialect(default.DefaultDialect): 50 | 51 | name = 'teradata' 52 | driver = 'teradata' 53 | default_paramstyle = 'qmark' 54 | poolclass = pool.SingletonThreadPool 55 | 56 | statement_compiler = TeradataCompiler 57 | ddl_compiler = TeradataDDLCompiler 58 | type_compiler = TeradataTypeCompiler 59 | preparer = TeradataIdentifierPreparer 60 | execution_ctx_cls = TeradataExecutionContext 61 | 62 | supports_native_boolean = False 63 | supports_native_decimal = True 64 | supports_unicode_statements = True 65 | supports_unicode_binds = True 66 | postfetch_lastrowid = False 67 | implicit_returning = False 68 | preexecute_autoincrement_sequences = False 69 | 70 | construct_arguments = [ 71 | (Table, { 
72 | "post_create": None, 73 | "postfixes": None 74 | }), 75 | 76 | (Index, { 77 | "order_by": None, 78 | "loading": None 79 | }), 80 | 81 | (Column, { 82 | "compress": None, 83 | "identity": None 84 | }) 85 | ] 86 | 87 | def __init__(self, **kwargs): 88 | super(TeradataDialect, self).__init__(**kwargs) 89 | 90 | def create_connect_args(self, url): 91 | if url is not None: 92 | params = super(TeradataDialect, self).create_connect_args(url)[1] 93 | cargs = ("Teradata", params['host'], params['username'], params['password']) 94 | cparams = {p:params[p] for p in params if p not in\ 95 | ['host', 'username', 'password']} 96 | return (cargs, cparams) 97 | 98 | @classmethod 99 | def dbapi(cls): 100 | 101 | """ Hook to the dbapi2.0 implementation's module""" 102 | from teradata import tdodbc 103 | return tdodbc 104 | 105 | def normalize_name(self, name, **kw): 106 | if name is not None: 107 | return name.strip().lower() 108 | return name 109 | 110 | def has_table(self, connection, table_name, schema=None): 111 | 112 | if schema is None: 113 | schema=self.default_schema_name 114 | 115 | stmt = select([column('tablename')], 116 | from_obj=[text('dbc.tablesvx')]).where( 117 | and_(text('DatabaseName=:schema'), 118 | text('TableName=:table_name'))) 119 | 120 | res = connection.execute(stmt, schema=schema, table_name=table_name).fetchone() 121 | return res is not None 122 | 123 | def _resolve_type(self, t, **kw): 124 | """ 125 | Resolve types for String, Numeric, Date/Time, etc. columns 126 | """ 127 | t = self.normalize_name(t) 128 | if t in ischema_names: 129 | #print(t,ischema_names[t]) 130 | t = ischema_names[t] 131 | 132 | if issubclass(t, sqltypes.String): 133 | return t(length=kw['length']/2 if kw['chartype']=='UNICODE' else kw['length'],\ 134 | charset=kw['chartype']) 135 | 136 | elif issubclass(t, sqltypes.Numeric): 137 | return t(precision=kw['prec'], scale=kw['scale']) 138 | 139 | elif issubclass(t, sqltypes.Time) or issubclass(t, sqltypes.DateTime): 140 | #Timezone 141 | tz=kw['fmt'][-1]=='Z' 142 | 143 | #Precision 144 | prec = kw['fmt'] 145 | #For some timestamps and dates, there is no precision, or indicatd in scale 146 | prec = prec[prec.index('(') + 1: prec.index(')')] if '(' in prec else 0 147 | prec = kw['scale'] if prec=='F' else int(prec) 148 | 149 | #prec = int(prec[prec.index('(') + 1: prec.index(')')]) if '(' in prec else 0 150 | return t(precision=prec,timezone=tz) 151 | 152 | elif issubclass(t, sqltypes.Interval): 153 | return t(day_precision=kw['prec'],second_precision=kw['scale']) 154 | 155 | else: 156 | return t() # For types like Integer, ByteInt 157 | 158 | return ischema_names[None] 159 | 160 | def _get_column_info(self, row): 161 | """ 162 | Resolves the column information for get_columns given a row. 163 | """ 164 | chartype = { 165 | 0: None, 166 | 1: 'LATIN', 167 | 2: 'UNICODE', 168 | 3: 'KANJISJIS', 169 | 4: 'GRAPHIC'} 170 | 171 | #Handle unspecified characterset and disregard chartypes specified for non-character types (e.g. 
binary, json)
        typ = self._resolve_type(row['columntype'],
                                 length=int(row['columnlength'] or 0),
                                 # look the charset name up from the numeric code, but
                                 # only for character column types; the original compared
                                 # the numeric chartype code against the string type
                                 # codes, which never matched
                                 chartype=chartype[row['chartype']]
                                     if self.normalize_name(row['columntype']) in stringtypes
                                     else None,
                                 prec=int(row['decimaltotaldigits'] or 0),
                                 scale=int(row['decimalfractionaldigits'] or 0),
                                 fmt=row['columnformat'])

        autoinc = row['idcoltype'] in ('GA', 'GD')

        return {
            'name': self.normalize_name(row['columnname']),
            'type': typ,
            'nullable': row['nullable'] == u'Y',
            'default': row['defaultvalue'],
            'attrs': {
                'columnformat': row['columnformat']},
            'autoincrement': autoinc
        }


    def get_columns(self, connection, table_name, schema=None, **kw):

        helpView = False

        if schema is None:
            schema = self.default_schema_name

        if int(self.server_version_info.split('.')[0]) < 16:
            dbc_columninfo = 'dbc.ColumnsV'

            # Check if the object is a view
            stmt = select([column('tablekind')],
                          from_obj=[text('dbc.tablesV')]).where(
                          and_(text('DatabaseName=:schema'),
                               text('TableName=:table_name'),
                               text("tablekind='V'")))
            res = connection.execute(stmt, schema=schema, table_name=table_name).rowcount
            helpView = (res == 1)

        else:
            dbc_columninfo = 'dbc.ColumnsQV'

        stmt = select([column('columnname'), column('columntype'),
                       column('columnlength'), column('chartype'),
                       column('decimaltotaldigits'), column('decimalfractionaldigits'),
                       column('columnformat'),
                       column('nullable'), column('defaultvalue'), column('idcoltype')],
                      from_obj=[text(dbc_columninfo)]).where(
                      and_(text('DatabaseName=:schema'),
                           text('TableName=:table_name')))

        res = connection.execute(stmt, schema=schema, table_name=table_name).fetchall()

        # If this is a view in a pre-16 version, get types for individual columns
        if helpView:
            res = [self._get_column_help(connection, schema, table_name, r['columnname'])
                   for r in res]

        return [self._get_column_info(row) for row in res]

    def _get_default_schema_name(self, connection):
        return self.normalize_name(
            connection.execute('select database').scalar())

    def _get_column_help(self, connection, schema, table_name, column_name):
        stmt = 'help column ' + schema + '.' + table_name + '.' + column_name
        res = connection.execute(stmt).fetchall()[0]

        return {'columnname': res['Column Name'],
                'columntype': res['Type'],
                'columnlength': res['Max Length'],
                'chartype': res['Char Type'],
                'decimaltotaldigits': res['Decimal Total Digits'],
                'decimalfractionaldigits': res['Decimal Fractional Digits'],
                'columnformat': res['Format'],
                'nullable': res['Nullable'],
                'defaultvalue': None,
                'idcoltype': res['IdCol Type']
                }

    def get_table_names(self, connection, schema=None, **kw):

        if schema is None:
            schema = self.default_schema_name

        stmt = select([column('tablename')],
                      from_obj=[text('dbc.TablesVX')]).where(
                      and_(text('DatabaseName = :schema'),
                           or_(text('tablekind=\'T\''),
                               text('tablekind=\'O\''))))
        res = connection.execute(stmt, schema=schema).fetchall()
        return [self.normalize_name(name['tablename']) for name in res]

    def get_schema_names(self, connection, **kw):
        stmt = select([column('username')],
                      from_obj=[text('dbc.UsersV')],
                      order_by=[text('username')])
        res
= connection.execute(stmt).fetchall() 269 | return [self.normalize_name(name['username']) for name in res] 270 | 271 | def get_view_definition(self, connection, view_name, schema=None, **kw): 272 | 273 | if schema is None: 274 | schema = self.default_schema_name 275 | 276 | res = connection.execute('show table {}.{}'.format(schema, view_name)).scalar() 277 | return self.normalize_name(res) 278 | 279 | def get_view_names(self, connection, schema=None, **kw): 280 | 281 | if schema is None: 282 | schema = self.default_schema_name 283 | 284 | stmt = select([column('tablename')], 285 | from_obj=[text('dbc.TablesVX')]).where( 286 | and_(text('DatabaseName = :schema'), 287 | text('tablekind=\'V\''))) 288 | 289 | res = connection.execute(stmt, schema=schema).fetchall() 290 | return [self.normalize_name(name['tablename']) for name in res] 291 | 292 | def get_pk_constraint(self, connection, table_name, schema=None, **kw): 293 | """ 294 | Override 295 | TODO: Check if we need PRIMARY Indices or PRIMARY KEY Indices 296 | TODO: Check for border cases (No PK Indices) 297 | """ 298 | 299 | if schema is None: 300 | schema = self.default_schema_name 301 | 302 | stmt = select([column('ColumnName'), column('IndexName')], 303 | from_obj=[text('dbc.Indices')]).where( 304 | and_(text('DatabaseName = :schema'), 305 | text('TableName=:table'), 306 | text('IndexType=:indextype')) 307 | ).order_by(asc(column('IndexNumber'))) 308 | 309 | # K for Primary Key 310 | res = connection.execute(stmt, schema=schema, table=table_name, indextype='K').fetchall() 311 | 312 | index_columns = list() 313 | index_name = None 314 | 315 | for index_column in res: 316 | index_columns.append(self.normalize_name(index_column['ColumnName'])) 317 | index_name = self.normalize_name(index_column['IndexName']) # There should be just one IndexName 318 | 319 | return { 320 | "constrained_columns": index_columns, 321 | "name": index_name 322 | } 323 | 324 | def get_unique_constraints(self, connection, table_name, schema=None, **kw): 325 | """ 326 | Overrides base class method 327 | """ 328 | if schema is None: 329 | schema = self.default_schema_name 330 | 331 | stmt = select([column('ColumnName'), column('IndexName')], from_obj=[text('dbc.Indices')]) \ 332 | .where(and_(text('DatabaseName = :schema'), 333 | text('TableName=:table'), 334 | text('IndexType=:indextype'))) \ 335 | .order_by(asc(column('IndexName'))) 336 | 337 | # U for Unique 338 | res = connection.execute(stmt, schema=schema, table=table_name, indextype='U').fetchall() 339 | 340 | def grouper(fk_row): 341 | return { 342 | 'name': self.normalize_name(fk_row['IndexName']), 343 | } 344 | 345 | unique_constraints = list() 346 | for constraint_info, constraint_cols in groupby(res, grouper): 347 | unique_constraint = { 348 | 'name': self.normalize_name(constraint_info['name']), 349 | 'column_names': list() 350 | } 351 | 352 | for constraint_col in constraint_cols: 353 | unique_constraint['column_names'].append(self.normalize_name(constraint_col['ColumnName'])) 354 | 355 | unique_constraints.append(unique_constraint) 356 | 357 | return unique_constraints 358 | 359 | def get_foreign_keys(self, connection, table_name, schema=None, **kw): 360 | """ 361 | Overrides base class method 362 | """ 363 | 364 | if schema is None: 365 | schema = self.default_schema_name 366 | 367 | stmt = select([column('IndexID'), column('IndexName'), column('ChildKeyColumn'), column('ParentDB'), 368 | column('ParentTable'), column('ParentKeyColumn')], 369 | from_obj=[text('DBC.All_RI_ChildrenV')]) \ 370 | 
.where(and_(text('ChildTable = :table'), 371 | text('ChildDB = :schema'))) \ 372 | .order_by(asc(column('IndexID'))) 373 | 374 | res = connection.execute(stmt, schema=schema, table=table_name).fetchall() 375 | 376 | def grouper(fk_row): 377 | return { 378 | 'name': fk_row.IndexName or fk_row.IndexID, #ID if IndexName is None 379 | 'schema': fk_row.ParentDB, 380 | 'table': fk_row.ParentTable 381 | } 382 | 383 | # TODO: Check if there's a better way 384 | fk_dicts = list() 385 | for constraint_info, constraint_cols in groupby(res, grouper): 386 | fk_dict = { 387 | 'name': constraint_info['name'], 388 | 'constrained_columns': list(), 389 | 'referred_table': constraint_info['table'], 390 | 'referred_schema': constraint_info['schema'], 391 | 'referred_columns': list() 392 | } 393 | 394 | for constraint_col in constraint_cols: 395 | fk_dict['constrained_columns'].append(self.normalize_name(constraint_col['ChildKeyColumn'])) 396 | fk_dict['referred_columns'].append(self.normalize_name(constraint_col['ParentKeyColumn'])) 397 | 398 | fk_dicts.append(fk_dict) 399 | 400 | return fk_dicts 401 | 402 | def get_indexes(self, connection, table_name, schema=None, **kw): 403 | """ 404 | Overrides base class method 405 | """ 406 | 407 | if schema is None: 408 | schema = self.default_schema_name 409 | 410 | stmt = select(["*"], from_obj=[text('dbc.Indices')]) \ 411 | .where(and_(text('DatabaseName = :schema'), 412 | text('TableName=:table'))) \ 413 | .order_by(asc(column('IndexName'))) 414 | 415 | res = connection.execute(stmt, schema=schema, table=table_name).fetchall() 416 | 417 | def grouper(fk_row): 418 | return { 419 | 'name': fk_row.IndexName or fk_row.IndexNumber, # If IndexName is None TODO: Check what to do 420 | 'unique': True if fk_row.UniqueFlag == 'Y' else False 421 | } 422 | 423 | # TODO: Check if there's a better way 424 | indices = list() 425 | for index_info, index_cols in groupby(res, grouper): 426 | index_dict = { 427 | 'name': index_info['name'], 428 | 'column_names': list(), 429 | 'unique': index_info['unique'] 430 | } 431 | 432 | for index_col in index_cols: 433 | index_dict['column_names'].append(self.normalize_name(index_col['ColumnName'])) 434 | 435 | indices.append(index_dict) 436 | 437 | return indices 438 | 439 | def get_transaction_mode(self, connection, **kw): 440 | """ 441 | Returns the transaction mode set for the current session. 442 | T = TDBS 443 | A = ANSI 444 | """ 445 | stmt = select([text('transaction_mode')],\ 446 | from_obj=[text('dbc.sessioninfov')]).\ 447 | where(text('sessionno=SESSION')) 448 | 449 | res = connection.execute(stmt).scalar() 450 | return res 451 | 452 | def _get_server_version_info(self, connection, **kw): 453 | """ 454 | Returns the Teradata Database software version. 
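
        The value is a dotted version string (e.g. '15.10.x.x'; illustrative);
        get_columns() compares its leading component against 16 to choose
        which DBC view to query.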
455 | """ 456 | stmt = select([text('InfoData')],\ 457 | from_obj=[text('dbc.dbcinfov')]).\ 458 | where(text('InfoKey=\'VERSION\'')) 459 | 460 | res = connection.execute(stmt).scalar() 461 | return res 462 | 463 | def conn_supports_autocommit(self, connection, **kw): 464 | """ 465 | Returns True if autocommit is used for this connection (underlying Teradata session) 466 | else False 467 | """ 468 | return self.get_transaction_mode(connection) == 'T' 469 | 470 | dialect = TeradataDialect 471 | -------------------------------------------------------------------------------- /sqlalchemy_teradata/requirements.py: -------------------------------------------------------------------------------- 1 | # The MIT License (MIT) 2 | # 3 | # Copyright (c) 2015 by Teradata 4 | # 5 | # Permission is hereby granted, free of charge, to any person obtaining a copy 6 | # of this software and associated documentation files (the "Software"), to deal 7 | # in the Software without restriction, including without limitation the rights 8 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | # copies of the Software, and to permit persons to whom the Software is 10 | # furnished to do so, subject to the following conditions: 11 | # 12 | # The above copyright notice and this permission notice shall be included in all 13 | # copies or substantial portions of the Software. 14 | # 15 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | # SOFTWARE. 22 | 23 | from sqlalchemy.testing.requirements import SuiteRequirements 24 | from sqlalchemy.testing import exclusions 25 | 26 | # Requirements specifies the features this dialect does/does not support for testing purposes 27 | # see: https://github.com/zzzeek/sqlalchemy/blob/master/README.dialects.rst 28 | 29 | 30 | class Requirements(SuiteRequirements): 31 | @property 32 | def datetime_microseconds(self): 33 | """target dialect supports representation of Python 34 | datetime.datetime() with microsecond objects.""" 35 | return exclusions.open() 36 | 37 | @property 38 | def offset(self): 39 | """target database can render OFFSET, or an equivalent, in a 40 | SELECT. 41 | """ 42 | return exclusions.closed() 43 | 44 | @property 45 | def bound_limit_offset(self): 46 | """target database can render LIMIT and/or OFFSET using a bound 47 | parameter 48 | """ 49 | return exclusions.closed() 50 | 51 | -------------------------------------------------------------------------------- /sqlalchemy_teradata/types.py: -------------------------------------------------------------------------------- 1 | # sqlalchemy_teradata/types.py 2 | # Copyright (C) 2015-2016 by Teradata 3 | # 4 | # 5 | # This module is part of sqlalchemy-teradata and is released under 6 | # the MIT License: http://www.opensource.org/licenses/mit-license.php 7 | 8 | from sqlalchemy.sql import sqltypes 9 | 10 | class BYTEINT(sqltypes.Integer): 11 | """ 12 | Teradata BYTEINT type. 13 | This type represents a one byte signed integer. 
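    Storage is one signed byte, so values range from -128 to 127. The generic
    Boolean type is also rendered as BYTEINT by the TeradataTypeCompiler.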
14 | """ 15 | __visit_name__ = 'BYTEINT' 16 | 17 | def __str__(self): 18 | return 'BYTEINT' 19 | 20 | class DECIMAL(sqltypes.DECIMAL): 21 | 22 | """ Teradata Decimal/Numeric type """ 23 | 24 | def __init__(self, precision=5, scale=0, **kw): 25 | """ Construct a Decimal type 26 | :param precision: max number of digits that can be stored (range from 1 thru 38) 27 | :param scale: number of fractional digits of :param precision: to the 28 | right of the decimal point (range from 0 to :param precision:) 29 | """ 30 | super(DECIMAL, self).__init__(precision=precision, scale=scale, **kw) 31 | 32 | 33 | class NUMERIC(sqltypes.NUMERIC): 34 | 35 | def __init__(self, precision=5, scale=0, **kw): 36 | super(NUMERIC, self).__init__(precision=precision, scale=scale, **kw) 37 | 38 | 39 | class TIME(sqltypes.TIME): 40 | 41 | def __init__(self, precision=6, timezone=False, **kwargs): 42 | 43 | """ Construct a TIME stored as UTC in Teradata 44 | 45 | :param precision: optional fractional seconds precision. A single digit 46 | representing the number of significant digits in the fractional 47 | portion of the SECOND field. Valid values range from 0 to 6 inclusive. 48 | The default precision is 6 49 | 50 | :param timezone: If set to True creates a Time WITH TIME ZONE type 51 | 52 | """ 53 | super(TIME, self).__init__(timezone=timezone, **kwargs) 54 | self.precision = precision 55 | 56 | 57 | class TIMESTAMP(sqltypes.TIMESTAMP): 58 | 59 | def __init__(self, precision=6, timezone=False, **kwargs): 60 | """ Construct a TIMESTAMP stored as UTC in Teradata 61 | 62 | :param precision: optional fractional seconds precision. A single digit 63 | representing the number of significant digits in the fractional 64 | portion of the SECOND field. Valid values range from 0 to 6 inclusive. 
65 | The default precision is 6 66 | 67 | :param timezone: If set to True creates a TIMESTAMP WITH TIME ZONE type 68 | 69 | """ 70 | super(TIMESTAMP, self).__init__(timezone=timezone, **kwargs) 71 | self.precision = precision 72 | 73 | 74 | class _TDInterval(sqltypes.Interval): 75 | 76 | """ Base class for teradata interval sqltypes.""" 77 | 78 | def __init__(self, precision=None, frac_precision=None, **kwargs): 79 | 80 | self.precision = precision 81 | self.frac_precision = frac_precision 82 | 83 | super(_TDInterval, self).__init__(**kwargs) 84 | 85 | 86 | class IntervalYear(_TDInterval, sqltypes.Interval): 87 | 88 | """ Teradata Interval Year data type 89 | Identifies a field defining a period of time in years 90 | 91 | """ 92 | __visit_name__ = 'interval_year' 93 | 94 | def __init__(self, precision=None, **kwargs): 95 | 96 | """ 97 | precision: permitted range of digits for year ranging from 1 to 4 98 | 99 | """ 100 | return super(IntervalYear, self).__init__(precision=precision, **kwargs) 101 | 102 | class IntervalYearToMonth(_TDInterval, sqltypes.Interval): 103 | 104 | """ Teradata Interval Year To Month data type 105 | Identifies a field defining a period of time in years and months 106 | """ 107 | 108 | __visit_name__ = 'interval_year_to_month' 109 | 110 | def __init__(self, precision=None, **kwargs): 111 | 112 | """ 113 | precision: permitted range of digits for year ranging from 1 to 4 114 | 115 | """ 116 | return super(IntervalYearToMonth, self).__init__(precision=precision, **kwargs) 117 | 118 | class IntervalMonth(_TDInterval, sqltypes.Interval): 119 | 120 | """ Teradata Interval Month data type 121 | Identifies a field defining a period of time in months 122 | """ 123 | 124 | __visit_name__ = 'interval_month' 125 | 126 | def __init__(self, precision=None, **kwargs): 127 | 128 | """ 129 | precision: permitted range of digits for month ranging from 1 to 4 130 | 131 | """ 132 | return super(IntervalMonth, self).__init__(precision=precision, **kwargs) 133 | 134 | class IntervalDay(_TDInterval, sqltypes.Interval): 135 | 136 | """ Teradata Interval Day data type 137 | Identifies a field defining a period of time in days 138 | """ 139 | 140 | __visit_name__ = 'interval_day' 141 | 142 | def __init__(self, precision=None, **kwargs): 143 | 144 | """ 145 | precision: permitted range of digits for day ranging from 1 to 4 146 | 147 | """ 148 | return super(IntervalDay, self).__init__(precision=precision, **kwargs) 149 | 150 | 151 | class IntervalDayToHour(_TDInterval, sqltypes.Interval): 152 | 153 | """ Teradata Interval Day To Hour data type 154 | Identifies a field defining a period of time in days and hours 155 | """ 156 | __visit_name__ = 'interval_day_to_hour' 157 | 158 | def __init__(self, precision=None, **kwargs): 159 | 160 | """ 161 | precision: permitted range of digits for day ranging from 1 to 4 162 | 163 | """ 164 | 165 | return super(IntervalDayToHour, self).__init__(precision=precision, **kwargs) 166 | 167 | class IntervalDayToMinute(_TDInterval, sqltypes.Interval): 168 | 169 | """ Teradata Interval Day To Minute data type 170 | Identifies a field defining a period of time in days, hours, and minutes 171 | """ 172 | 173 | __visit_name__= 'interval_day_to_minute' 174 | 175 | def __init__(self, precision=None, **kwargs): 176 | 177 | """ 178 | precision: permitted range of digits for day ranging from 1 to 4 179 | 180 | """ 181 | 182 | return super(IntervalDayToMinute, self).__init__(precision=precision, **kwargs) 183 | 184 | class IntervalDayToSecond(_TDInterval, 
sqltypes.Interval):
185 | 
186 | """ Teradata Interval Day To Second data type
187 | Identifies a field defining a period of time in days, hours, minutes, and seconds
188 | """
189 | 
190 | __visit_name__='interval_day_to_second'
191 | 
192 | def __init__(self, precision=None, frac_precision=None, **kwargs):
193 | 
194 | """
195 | precision: permitted range of digits for day ranging from 1 to 4
196 | frac_precision: fractional_seconds_precision ranging from 0 to 6
197 | 
198 | """
199 | return super(IntervalDayToSecond, self).__init__(precision=precision,
200 | frac_precision=frac_precision,
201 | **kwargs)
202 | 
203 | class IntervalHour(_TDInterval, sqltypes.Interval):
204 | 
205 | """ Teradata Interval Hour data type
206 | Identifies a field defining a period of time in hours
207 | """
208 | 
209 | __visit_name__='interval_hour'
210 | 
211 | def __init__(self, precision=None, **kwargs):
212 | 
213 | """
214 | precision: permitted range of digits for hour ranging from 1 to 4
215 | 
216 | """
217 | return super(IntervalHour, self).__init__(precision=precision, **kwargs)
218 | 
219 | class IntervalHourToMinute(_TDInterval, sqltypes.Interval):
220 | 
221 | """ Teradata Interval Hour To Minute data type
222 | Identifies a field defining a period of time in hours and minutes
223 | """
224 | 
225 | __visit_name__='interval_hour_to_minute'
226 | 
227 | def __init__(self, precision=None, **kwargs):
228 | 
229 | """
230 | precision: permitted range of digits for hour ranging from 1 to 4
231 | 
232 | """
233 | return super(IntervalHourToMinute, self).__init__(precision=precision,
234 | **kwargs)
235 | 
236 | class IntervalHourToSecond(_TDInterval, sqltypes.Interval):
237 | 
238 | """ Teradata Interval Hour To Second data type
239 | Identifies a field defining a period of time in hours, minutes, and seconds
240 | """
241 | 
242 | __visit_name__='interval_hour_to_second'
243 | 
244 | def __init__(self, precision=None, frac_precision=None, **kwargs):
245 | 
246 | """
247 | precision: permitted range of digits for hour ranging from 1 to 4
248 | frac_precision: fractional_seconds_precision ranging from 0 to 6
249 | 
250 | """
251 | return super(IntervalHourToSecond, self).__init__(precision=precision,
252 | frac_precision=frac_precision,
253 | **kwargs)
254 | 
255 | class IntervalMinute(_TDInterval, sqltypes.Interval):
256 | 
257 | """ Teradata Interval Minute type
258 | Identifies a field defining a period of time in minutes
259 | """
260 | 
261 | __visit_name__='interval_minute'
262 | 
263 | def __init__(self, precision=None, **kwargs):
264 | 
265 | """
266 | precision: permitted range of digits for minute ranging from 1 to 4
267 | 
268 | """
269 | return super(IntervalMinute, self).__init__(precision=precision, **kwargs)
270 | 
271 | class IntervalMinuteToSecond(_TDInterval, sqltypes.Interval):
272 | 
273 | """ Teradata Interval Minute To Second data type
274 | Identifies a field defining a period of time in minutes and seconds
275 | """
276 | 
277 | __visit_name__='interval_minute_to_second'
278 | 
279 | def __init__(self, precision=None, frac_precision=None, **kwargs):
280 | 
281 | """
282 | precision: permitted range of digits for minute ranging from 1 to 4
283 | frac_precision: fractional_seconds_precision ranging from 0 to 6
284 | 
285 | """
286 | return super(IntervalMinuteToSecond, self).__init__(precision=precision,
287 | frac_precision=frac_precision,
288 | **kwargs)
289 | 
290 | class IntervalSecond(_TDInterval, sqltypes.Interval):
291 | 
292 | """ Teradata Interval Second data type
293 | Identifies a field defining a period of time in seconds
294 | """
295 | 
296 | __visit_name__ = 'interval_second'
297 | 
298 | def __init__(self, precision=None, frac_precision=None, **kwargs):
299 | 
300 | """
301 | precision: permitted range of digits for second ranging from 1 to 4
302 | frac_precision: fractional_seconds_precision ranging from 0 to 6
303 | 
304 | """
305 | return super(IntervalSecond, self).__init__(precision=precision,
306 | frac_precision=frac_precision, **kwargs)
307 | 
308 | class CHAR(sqltypes.CHAR):
309 | 
310 | def __init__(self, length=1, charset=None, **kwargs):
311 | 
312 | """ Construct a Char
313 | 
314 | :param length: number of characters or bytes allocated. The maximum value
315 | of n depends on the character set: LATIN allows up to 64000 characters,
316 | UNICODE up to 32000 characters, and KANJISJIS up to 32000 bytes. If no value
317 | for n is specified, the default is 1.
318 | 
319 | :param charset: Server character set for the character column.
320 | Supported values:
321 | 'LATIN': fixed 8-bit characters from the ASCII ISO 8859 Latin1
322 | or ISO 8859 Latin9.
323 | 'UNICODE': fixed 16-bit characters from the UNICODE 6.0 standard.
324 | 'GRAPHIC': fixed 16-bit UNICODE characters defined by IBM for DB2.
325 | 'KANJISJIS': mixed single byte/multibyte characters intended for
326 | Japanese applications that rely on KanjiShiftJIS characteristics.
327 | Note: GRAPHIC(n) is equivalent to CHAR(n) CHARACTER SET GRAPHIC
328 | 
329 | """
330 | super(CHAR, self).__init__(length=length, **kwargs)
331 | self.charset = charset
332 | 
333 | 
334 | class VARCHAR(sqltypes.String):
335 | 
336 | def __init__(self, length=None, charset=None, **kwargs):
337 | 
338 | """Construct a Varchar
339 | 
340 | :param length: optional length from 0 to n. If None, LONG is used
341 | (the longest permissible variable length character string)
342 | 
343 | :param charset: optional character set for varchar.
344 | 
345 | Note: VARGRAPHIC(n) is equivalent to VARCHAR(n) CHARACTER SET GRAPHIC
346 | 
347 | """
348 | super(VARCHAR, self).__init__(length=length, **kwargs)
349 | self.charset = charset
350 | 
351 | 
352 | class CLOB(sqltypes.CLOB):
353 | 
354 | def __init__(self, length=None, charset=None, multiplier=None, **kwargs):
355 | 
356 | """Construct a Clob
357 | 
358 | :param length: Optional length for clob. For Latin server character set,
359 | length cannot exceed 2097088000. For Unicode server character set,
360 | length cannot exceed 1048544000.
361 | If no length is specified then the maximum is used.
362 | 
363 | :param multiplier: Either 'K', 'M', or 'G'.
364 | K specifies number of characters to allocate as nK, where K=1024 365 | (For Latin char sets, n < 2047937 and For Unicode char sets, n < 1023968) 366 | M specifies nM, where M=1024K 367 | (For Latin char sets, n < 1999 and For Unicode char sets, n < 999) 368 | G specifies nG, where G=1024M 369 | (For Latin char sets, n must be 1 and char set must be LATIN) 370 | 371 | :param charset: LATIN (fixed 8-bit characters ASCII ISO 8859 Latin1 or ISO 8859 Latin9) 372 | or UNICODE (fixed 16-bit characters from the UNICODE 6.0 standard) 373 | """ 374 | super(CLOB, self).__init__(length=length, **kwargs) 375 | self.charset = charset 376 | self.multiplier=multiplier 377 | 378 | class BLOB(sqltypes.LargeBinary): 379 | pass 380 | -------------------------------------------------------------------------------- /test/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Teradata/sqlalchemy-teradata/24ab0ec39f6a9d63185aad90267fc815c5a63a97/test/__init__.py -------------------------------------------------------------------------------- /test/conftest.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import * 2 | from sqlalchemy.dialects import registry 3 | 4 | registry.register("teradata", "sqlalchemy_teradata.dialect", "TeradataDialect") 5 | from sqlalchemy.testing.plugin.pytestplugin import * 6 | -------------------------------------------------------------------------------- /test/test_dialect.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy_teradata.dialect import TeradataDialect 2 | from sqlalchemy.testing import fixtures 3 | from sqlalchemy import testing 4 | from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey 5 | from sqlalchemy_teradata.base import CreateView,DropView 6 | from sqlalchemy.sql import table, column, select 7 | from sqlalchemy import PrimaryKeyConstraint 8 | 9 | class TeradataDialectTest(fixtures.TestBase): 10 | 11 | def setup(self): 12 | self.conn = testing.db.connect() 13 | self.engine = self.conn.engine 14 | self.dialect = self.conn.dialect 15 | self.metadata = MetaData() 16 | 17 | self.user_name = self.engine.execute('sel user').scalar() 18 | self.db_schema = self.engine.execute('sel database').scalar() 19 | self.tbl_name = self.user_name + '_test' 20 | self.view_name = self.user_name + '_test_view' 21 | 22 | # Setup test table (user should have necessary rights to create table) 23 | self.test_table = Table(self.tbl_name, self.metadata, 24 | Column('id', Integer, primary_key=True), 25 | PrimaryKeyConstraint('id', name='my_pk')) 26 | # Setup a test view 27 | #self.test_view = CreateView(self.view_name, select([self.test_table.c.id.label('view_id')])) 28 | 29 | # Create tables 30 | self.metadata.create_all(self.engine) 31 | 32 | #Create views 33 | #self.conn.execute(self.test_view) 34 | 35 | def tearDown(self): 36 | # drop view(s) 37 | #self.conn.execute(DropView(self.view_name)) 38 | 39 | # drop table(s) 40 | self.metadata.drop_all(self.engine) 41 | self.conn.close() 42 | 43 | def test_has_table(self): 44 | assert self.dialect.has_table(self.conn, self.test_table.name) 45 | 46 | def test_get_table_names(self): 47 | tbls = self.dialect.get_table_names(self.conn) 48 | assert self.dialect.normalize_name(self.test_table.name) in tbls 49 | 50 | def test_get_schema_names(self): 51 | schemas = self.dialect.get_schema_names(self.conn) 52 | assert 
self.dialect.normalize_name(self.user_name) in schemas
53 | 
54 | def test_get_view_names(self):
55 | pass
56 | #views = self.dialect.get_view_names(self.conn)
57 | #assert self.dialect.normalize_name(self.test_view.name) in views
58 | 
59 | def test_get_pk_constraint(self):
60 | cons = self.dialect.get_pk_constraint(self.conn, self.test_table, self.db_schema)
61 | assert type(cons) is dict
62 | assert self.dialect.normalize_name(cons['name']) == 'my_pk'
63 | for x in cons['constrained_columns']:
64 | assert x in self.test_table.c
65 | 
66 | def test_get_unique_constraint(self):
67 | assert False
68 | 
69 | def test_get_foreign_keys(self):
70 | assert False
71 | 
72 | def test_get_indexes(self):
73 | assert False
74 | 
75 | def test_get_columns(self):
76 | cols = self.dialect.get_columns(self.conn, self.test_table.name, self.db_schema)
77 | for c in self.test_table.c:
78 | assert c.name in [d['name'] for d in cols]
79 | 
80 | def test_get_transaction_mode(self):
81 | assert self.dialect.get_transaction_mode(self.conn) == 'T'
82 | -------------------------------------------------------------------------------- /test/test_generic_types.py: --------------------------------------------------------------------------------
1 | from sqlalchemy_teradata.compiler import TeradataTypeCompiler as tdtc
2 | from sqlalchemy_teradata.dialect import TeradataDialect as tdd
3 | from sqlalchemy.types import (Integer, SmallInteger, BigInteger, Numeric,
4 | Float, DateTime, Date, String, Text, Unicode, UnicodeText,
5 | Time, LargeBinary, Boolean, Interval,
6 | DATE, BOOLEAN, DATETIME, BIGINT, SMALLINT, INTEGER, FLOAT, REAL,
7 | TEXT, NVARCHAR, NCHAR)
8 | from sqlalchemy_teradata.types import (CHAR, VARCHAR, CLOB, DECIMAL, NUMERIC,
9 | TIMESTAMP, TIME)
10 | from sqlalchemy.testing import fixtures
11 | 
12 | from itertools import product
13 | import datetime as dt
14 | 
15 | class TestCompileGeneric(fixtures.TestBase):
16 | 
17 | def _comp(self, inst):
18 | return self.comp.process(inst)
19 | 
20 | def setup(self):
21 | # Teradata Type Compiler using Teradata Dialect to compile types
22 | self.comp = tdtc(tdd)
23 | self.charset = ['latin', 'unicode', 'graphic', 'kanjisjis']
24 | self.len_limits = [-1, 32000, 64000]
25 | self.multips = ['K', 'M', 'G']
26 | 
27 | def test_defaults(self):
28 | assert self._comp(Integer()) == 'INTEGER'
29 | assert self._comp(SmallInteger()) == 'SMALLINT'
30 | assert self._comp(BigInteger()) == 'BIGINT'
31 | assert self._comp(Numeric()) == 'NUMERIC'
32 | assert self._comp(Float()) == 'FLOAT'
33 | 
34 | assert self._comp(DateTime()) == 'TIMESTAMP(6)'
35 | assert self._comp(Date()) == 'DATE'
36 | assert self._comp(Time()) == 'TIME(6)'
37 | 
38 | assert self._comp(String()) == 'LONG VARCHAR'
39 | assert self._comp(Text()) == 'CLOB'
40 | assert self._comp(Unicode()) == 'LONG VARCHAR CHAR SET UNICODE'
41 | assert self._comp(UnicodeText()) == 'CLOB CHAR SET UNICODE'
42 | 
43 | assert self._comp(Boolean()) == 'BYTEINT'
44 | #assert self._comp(LargeBinary()) == 'BLOB'
45 | 
46 | class TestCompileSQLStandard(fixtures.TestBase):
47 | 
48 | def _comp(self, inst):
49 | return self.comp.process(inst)
50 | 
51 | def setup(self):
52 | self.comp = tdtc(tdd)
53 | 
54 | def test_defaults(self):
55 | assert self._comp(DATE()) == 'DATE'
56 | assert self._comp(DATETIME()) == 'TIMESTAMP(6)'
57 | assert self._comp(TIMESTAMP()) == 'TIMESTAMP(6)'
58 | assert self._comp(TIME()) == 'TIME(6)'
59 | 
60 | assert self._comp(CHAR()) == 'CHAR(1)'
61 | assert self._comp(VARCHAR()) == 'LONG VARCHAR'
62 | assert self._comp(NCHAR()) == 'CHAR CHAR SET UNICODE'
63 | assert self._comp(NVARCHAR()) == 'LONG VARCHAR CHAR SET UNICODE'
64 | assert self._comp(CLOB()) == 'CLOB'
65 | assert self._comp(TEXT()) == 'CLOB'
66 | 
67 | assert self._comp(DECIMAL()) == 'DECIMAL(5, 0)'
68 | assert self._comp(NUMERIC()) == 'NUMERIC(5, 0)'
69 | assert self._comp(INTEGER()) == 'INTEGER'
70 | assert self._comp(FLOAT()) == 'FLOAT'
71 | assert self._comp(REAL()) == 'REAL'
72 | assert self._comp(SMALLINT()) == 'SMALLINT'
73 | assert self._comp(BIGINT()) == 'BIGINT'
74 | 
75 | assert self._comp(BOOLEAN()) == 'BYTEINT'
76 | 
77 | 
78 | class TestCompileTypes(fixtures.TestBase):
79 | 
80 | """
81 | The tests are based on the info in SQL Data Types and Literals (Release 15.10, Dec '15)
82 | """
83 | def setup(self):
84 | self.comp = tdtc(tdd)
85 | self.charset = ['latin', 'unicode', 'graphic', 'kanjisjis']
86 | self.len_limits = [-1, 32000, 64000]
87 | self.multips = ['K', 'M', 'G']
88 | 
89 | def test_strings(self):
90 | 
91 | for m in self.multips:
92 | c = CLOB(length = 1, multiplier = m)
93 | assert self.comp.process(c) == 'CLOB(1{})'.format(m)
94 | assert c.length == 1
95 | 
96 | for len_ in self.len_limits:
97 | assert 'VARCHAR({})'.format(len_) == self.comp.process(VARCHAR(len_))
98 | assert 'CHAR({})'.format(len_) == self.comp.process(CHAR(len_))
99 | assert 'CLOB({})'.format(len_) == self.comp.process(CLOB(len_))
100 | 
101 | for c in self.charset:
102 | assert 'VARCHAR({}) CHAR SET {}'.format(len_, c) == \
103 | self.comp.process(VARCHAR(len_, c))
104 | 
105 | assert 'CHAR({}) CHAR SET {}'.format(len_, c) == \
106 | self.comp.process(CHAR(len_, c))
107 | 
108 | assert 'CLOB({}) CHAR SET {}'.format(len_, c) == \
109 | self.comp.process(CLOB(len_, c))
110 | 
111 | def test_timezones(self):
112 | assert self.comp.process(TIME(1, True)) == 'TIME(1) WITH TIME ZONE'
113 | assert self.comp.process(TIMESTAMP(0, True)) == 'TIMESTAMP(0) WITH TIME ZONE'
114 | -------------------------------------------------------------------------------- /test/test_limit_offset.py: --------------------------------------------------------------------------------
1 | from sqlalchemy_teradata.compiler import TeradataTypeCompiler as tdtc
2 | from sqlalchemy_teradata.dialect import TeradataDialect as tdd
3 | from sqlalchemy import create_engine, testing
4 | from sqlalchemy.testing import fixtures
5 | from sqlalchemy.sql import table, column, select
6 | 
7 | class TestCompileTDLimitOffset(fixtures.TestBase):
8 | """
9 | Test compilation of limit/offset in Teradata
10 | """
11 | def setup(self):
12 | # Running this locally for now
13 | def dump(sql, *multiparams, **params):
14 | sql.compile(dialect=self.engine.dialect)
15 | 
16 | self.engine = create_engine('teradata://', strategy='mock', executor=dump)
17 | 
18 | def test_limit_offset(self):
19 | t1 = table('t1', column('c1'), column('c2'), column('c3'))
20 | s = select([t1]).limit(3).offset(5)
21 | #assert s ==
22 | s = select([t1]).limit(3)
23 | #assert s ==
24 | s = select([t1]).limit(3).distinct()
25 | #assert s ==
26 | s = select([t1]).order_by(t1.c.c2).limit(3).offset(5).distinct()
27 | #assert s ==
28 | s = select([t1]).order_by(t1.c.c2).limit(3).offset(5)
29 | #assert s ==
30 | s = select([t1]).order_by(t1.c.c2).offset(5)
31 | #assert s ==
32 | s = select([t1]).order_by(t1.c.c2).limit(3)
33 | #assert s ==
34 | stmt = s.compile(self.engine)
35 | 
36 | -------------------------------------------------------------------------------- /test/test_suite.py: --------------------------------------------------------------------------------
1 | from sqlalchemy.testing.suite import *
2 | 
3 | # this is the test suite run by py.test test/test_suite.py
4 | -------------------------------------------------------------------------------- /test/test_td_ddl.py: --------------------------------------------------------------------------------
1 | from sqlalchemy import Table, Column, Index
2 | from sqlalchemy.schema import CreateColumn, CreateTable, CreateIndex, CreateSchema
3 | from sqlalchemy import MetaData, create_engine
4 | from sqlalchemy_teradata.types import ( VARCHAR, CHAR, CLOB)
5 | from sqlalchemy_teradata.types import ( NUMERIC, DECIMAL, )
6 | #from sqlalchemy_teradata.types import ( DATE, TIME, TIMESTAMP )
7 | from sqlalchemy.testing import fixtures
8 | 
9 | from itertools import product
10 | import datetime as dt
11 | 
12 | """
13 | Test DDL Expressions and Dialect Extensions
14 | The tests are based on SQL Data Definition Language (Release 15.10, Dec '15)
15 | """
16 | 
17 | class TestCompileCreateColDDL(fixtures.TestBase):
18 | 
19 | def setup(self):
20 | # Test locally for now
21 | def dump(sql, *multiparams, **params):
22 | print(sql.compile(dialect=self.td_engine.dialect))
23 | 
24 | self.td_engine = create_engine('teradata://', strategy='mock', executor=dump)
25 | self.sqlalch_col_attrs = ['primary_key', 'unique', 'nullable', 'default', 'index']
26 | 
27 | def test_create_column(self):
28 | c = Column('column_name', VARCHAR(20, charset='GRAPHIC'))
29 | 
30 | def test_col_attrs(self):
31 | assert False
32 | 
33 | def test_col_add_attribute(self):
34 | assert False
35 | 
36 | 
37 | class TestCompileCreateTableDDL(fixtures.TestBase):
38 | 
39 | def setup(self):
40 | def dump(sql, *multiparams, **params):
41 | print(sql.compile(dialect=self.td_engine.dialect))
42 | self.td_engine = create_engine('teradata://', strategy='mock', executor=dump)
43 | 
44 | def test_create_table(self):
45 | meta = MetaData(bind = self.td_engine)
46 | my_table = Table('tablename', meta,
47 | Column('column1', NUMERIC, primary_key=True),
48 | schema='database_name_or_user_name',
49 | prefixes=['multiset', 'global temporary'])
50 | 
51 | def test_reflect_table(self):
52 | assert False
53 | -------------------------------------------------------------------------------- /test/test_td_types.py: --------------------------------------------------------------------------------
1 | from sqlalchemy_teradata.compiler import TeradataTypeCompiler as tdtc
2 | from sqlalchemy_teradata.dialect import TeradataDialect as tdd
3 | from sqlalchemy_teradata.types import ( IntervalYear, IntervalYearToMonth, IntervalMonth,
4 | IntervalDay, IntervalDayToHour, IntervalDayToMinute,
5 | IntervalDayToSecond, IntervalHour, IntervalHourToMinute,
6 | IntervalHourToSecond, IntervalMinute, IntervalMinuteToSecond,
7 | IntervalSecond,
8 | BYTEINT)
9 | 
10 | from sqlalchemy.testing import fixtures
11 | 
12 | class TestCompileTDInterval(fixtures.TestBase):
13 | """
14 | Test the compilation of the Teradata INTERVAL and INTERVAL ... TO ... types
15 | """
16 | 
17 | def setup(self):
18 | 
19 | # Teradata Type Compiler using Teradata Dialect to compile types
20 | self.comp = tdtc(tdd)
21 | 
22 | def test_defaults(self):
23 | 
24 | assert self.comp.process(IntervalYear()) == 'INTERVAL YEAR'
25 | assert self.comp.process(IntervalYearToMonth()) == 'INTERVAL YEAR TO MONTH'
26 | assert self.comp.process(IntervalMonth()) == 'INTERVAL MONTH'
27 | assert self.comp.process(IntervalDay()) == 'INTERVAL DAY'
28 | assert self.comp.process(IntervalDayToHour()) == 'INTERVAL DAY TO HOUR'
29 | assert
self.comp.process(IntervalDayToMinute()) == 'INTERVAL DAY TO MINUTE' 30 | assert self.comp.process(IntervalDayToSecond()) == 'INTERVAL DAY TO SECOND' 31 | assert self.comp.process(IntervalHour()) == 'INTERVAL HOUR' 32 | assert self.comp.process(IntervalHourToMinute()) == 'INTERVAL HOUR TO MINUTE' 33 | assert self.comp.process(IntervalHourToSecond()) == 'INTERVAL HOUR TO SECOND' 34 | assert self.comp.process(IntervalMinute()) == 'INTERVAL MINUTE' 35 | assert self.comp.process(IntervalMinuteToSecond()) == 'INTERVAL MINUTE TO SECOND' 36 | assert self.comp.process(IntervalSecond()) == 'INTERVAL SECOND' 37 | 38 | assert self.comp.process(BYTEINT()) == 'BYTEINT' 39 | 40 | def test_interval(self): 41 | 42 | for prec in range(1,5): 43 | assert self.comp.process(IntervalYear(prec)) == 'INTERVAL YEAR({})'.format(prec) 44 | assert self.comp.process(IntervalYearToMonth(prec)) == \ 45 | 'INTERVAL YEAR({}) TO MONTH'.format(prec) 46 | assert self.comp.process(IntervalMonth(prec)) == 'INTERVAL MONTH({})'.format(prec) 47 | assert self.comp.process(IntervalDay(prec)) == 'INTERVAL DAY({})'.format(prec) 48 | assert self.comp.process(IntervalDayToHour(prec)) == \ 49 | 'INTERVAL DAY({}) TO HOUR'.format(prec) 50 | assert self.comp.process(IntervalDayToMinute(prec)) == \ 51 | 'INTERVAL DAY({}) TO MINUTE'.format(prec) 52 | assert self.comp.process(IntervalHour(prec)) == 'INTERVAL HOUR({})'.format(prec) 53 | assert self.comp.process(IntervalHourToMinute(prec)) == \ 54 | 'INTERVAL HOUR({}) TO MINUTE'.format(prec) 55 | assert self.comp.process(IntervalMinute(prec)) == 'INTERVAL MINUTE({})'.format(prec) 56 | assert self.comp.process(IntervalSecond(prec)) == 'INTERVAL SECOND({})'.format(prec) 57 | 58 | def test_interval_frac(self): 59 | """ 60 | Test valid ranges of precision (prec) and fractional second precision (fsec) 61 | """ 62 | for prec in range(1,5): 63 | for fsec in range(0,7): 64 | assert self.comp.process(IntervalDayToSecond(prec, fsec)) == \ 65 | 'INTERVAL DAY({}) TO SECOND({})'.format(prec, fsec) 66 | 67 | assert self.comp.process(IntervalHourToSecond(prec, fsec)) == \ 68 | 'INTERVAL HOUR({}) TO SECOND({})'.format(prec, fsec) 69 | 70 | assert self.comp.process(IntervalMinuteToSecond(prec, fsec)) == \ 71 | 'INTERVAL MINUTE({}) TO SECOND({})'.format(prec, fsec) 72 | 73 | assert self.comp.process(IntervalSecond(prec, fsec)) == \ 74 | 'INTERVAL SECOND({}, {})'.format(prec, fsec) 75 | 76 | -------------------------------------------------------------------------------- /test/usage_test.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import * 2 | from sqlalchemy.dialects import registry 3 | from sqlalchemy_teradata.dialect import TeradataDialect 4 | from sqlalchemy.testing import fixtures 5 | from sqlalchemy.testing.plugin.pytestplugin import * 6 | from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey 7 | from sqlalchemy.ext.declarative import declarative_base 8 | from sqlalchemy.orm import relation, sessionmaker 9 | from sqlalchemy import testing 10 | 11 | class DialectSQLAlchUsageTest(fixtures.TestBase): 12 | """ This usage test is meant to serve as documentation and follows the 13 | tutorial here: http://docs.sqlalchemy.org/en/latest/core/tutorial.html 14 | but with the dialect being developed 15 | """ 16 | 17 | # Note: this test uses pytest which captures stdout by default, pass -s to allow output to stdout 18 | 19 | def setUp(self): 20 | 21 | self.dialect = TeradataDialect() 22 | self.conn = testing.db.connect() 23 | self.engine 
= self.conn.engine
24 | 
25 | # build a table with columns
26 | self.metadata = MetaData()
27 | self.users = Table('my_users', self.metadata,
28 | Column('uid', Integer, primary_key=True),
29 | Column('name', String(256)),
30 | Column('fullname', String(256)),
31 | )
32 | 
33 | self.addresses = Table('addresses', self.metadata,
34 | Column('id', Integer, primary_key=True),
35 | Column('user_id', None, ForeignKey('my_users.uid'), nullable=False),
36 | Column('email_address', String(256), nullable=False),
37 | )
38 | 
39 | self.metadata.create_all(self.engine)
40 | 
41 | def tearDown(self):
42 | self.metadata.drop_all(self.engine)
43 | self.conn.close()
44 | 
45 | def test_show_state(self):
46 | assert self.users in self.metadata.sorted_tables
47 | assert self.addresses in self.metadata.sorted_tables
48 | 
49 | def test_inserts(self):
50 | self.ins = self.users.insert()
51 | 
52 | # inserts by default require all columns to be provided
53 | assert(str(self.ins) == 'INSERT INTO my_users (uid, name, fullname) VALUES (:uid, :name, :fullname)')
54 | 
55 | # use the VALUES clause to limit the values inserted
56 | self.ins = self.users.insert().values(name='mark', fullname='mark sandan')
57 | 
58 | # actual values don't get stored in the string
59 | assert(str(self.ins) == 'INSERT INTO my_users (name, fullname) VALUES (:name, :fullname)')
60 | 
61 | # data values are stored in the INSERT but are only used when executed; we can peek
62 | assert(str(self.ins.compile().params) == str({'fullname': 'mark sandan', 'name': 'mark'}))
63 | 
64 | # None of the inserts above in this test get added to the database
65 | 
66 | def test_executing(self):
67 | # re-create a new INSERT object
68 | self.ins = self.users.insert()
69 | 
70 | # execute the insert statement
71 | res = self.conn.execute(self.ins, uid=1, name='jack', fullname='Jack Jones')
72 | assert(res.inserted_primary_key == [1])
73 | res = self.conn.execute(self.ins, uid=2, name='wendy', fullname='Wendy Williams')
74 | assert(res.inserted_primary_key == [2])
75 | 
76 | # the res variable is a ResultProxy object, analogous to a DBAPI cursor
77 | 
78 | # issue many inserts, the same is possible for update and delete
79 | self.conn.execute(self.addresses.insert(), [
80 | {'id': 1, 'user_id': 1, 'email_address': 'jack@yahoo.com'},
81 | {'id': 2, 'user_id': 1, 'email_address': 'jack@msn.com'},
82 | {'id': 3, 'user_id': 2, 'email_address': 'www@www.com'},
83 | {'id': 4, 'user_id': 2, 'email_address': 'wendy@aol.com'}
84 | ])
85 | 
86 | # test selects on the inserted values
87 | from sqlalchemy.sql import select
88 | 
89 | s = select([self.users])
90 | res = self.conn.execute(s)
91 | u1 = res.fetchone()
92 | u2 = res.fetchone()
93 | 
94 | # accessing rows
95 | assert(u1['name'] == u'jack')
96 | assert(u1['fullname'] == u'Jack Jones')
97 | 
98 | assert(u2['name'] == u'wendy')
99 | assert(u2['fullname'] == u'Wendy Williams')
100 | 
101 | assert(u1[1] == u1['name'])
102 | assert(u1[2] == u1['fullname'])
103 | 
104 | assert(u2[1] == u2['name'])
105 | assert(u2[2] == u2['fullname'])
106 | 
107 | # be sure to close the result set
108 | res.close()
109 | 
110 | # use cols to access rows
111 | res = self.conn.execute(s)
112 | u3 = res.fetchone()
113 | u4 = res.fetchone()
114 | 
115 | assert(u3[self.users.c.name] == u1['name'])
116 | assert(u3[self.users.c.fullname] == u1['fullname'])
117 | 
118 | assert(u4[self.users.c.name] == u2['name'])
119 | assert(u4[self.users.c.fullname] == u2['fullname'])
120 | 
121 | # reference individual columns in select clause
122 | s = select([self.users.c.name, self.users.c.fullname])
123 | res = self.conn.execute(s)
124 | u3 = res.fetchone()
125 | u4 = res.fetchone()
126 | 
127 | assert(u3[self.users.c.name] == u1['name'])
128 | assert(u3[self.users.c.fullname] == u1['fullname'])
129 | 
130 | assert(u4[self.users.c.name] == u2['name'])
131 | assert(u4[self.users.c.fullname] == u2['fullname'])
132 | 
133 | # test joins
134 | # cartesian product
135 | usrs = [row for row in self.conn.execute(select([self.users]))]
136 | addrs = [row for row in self.conn.execute(select([self.addresses]))]
137 | prod = [row for row in self.conn.execute(select([self.users, self.addresses]))]
138 | assert(len(prod) == len(usrs) * len(addrs))
139 | 
140 | # inner join on id
141 | s = select([self.users, self.addresses]).where(self.users.c.uid == self.addresses.c.user_id)
142 | inner = [row for row in self.conn.execute(s)]
143 | assert(len(inner) == 4)
144 | 
145 | # operators between columns objects & other col objects/literals
146 | expr = self.users.c.uid == self.addresses.c.user_id
147 | assert('my_users.uid = addresses.user_id' == str(expr))
148 | # see how Teradata concatenates two strings
149 | assert(str((self.users.c.name + self.users.c.fullname).compile(bind=self.engine)) ==
150 | 'my_users.name || my_users.fullname')
151 | 
152 | # built-in conjunctions
153 | from sqlalchemy.sql import and_, or_
154 | 
155 | s = select([(self.users.c.fullname +
156 | ", " +
157 | self.addresses.c.email_address).label('titles')]).where(
158 | and_(
159 | self.users.c.uid == self.addresses.c.user_id,
160 | self.users.c.name.between('m', 'z'),
161 | or_(
162 | self.addresses.c.email_address.like('%@aol.com'),
163 | self.addresses.c.email_address.like('%@msn.com')
164 | )
165 | )
166 | )
167 | # print(s)
168 | res = self.conn.execute(s)
169 | for row in res:
170 | assert(str(row[0]) == u'Wendy Williams, wendy@aol.com')
171 | 
172 | # more joins
173 | # ON condition auto generated based on ForeignKey
174 | assert(str(self.users.join(self.addresses)) ==
175 | 'my_users JOIN addresses ON my_users.uid = addresses.user_id')
176 | 
177 | # specify the join ON condition
178 | self.users.join(self.addresses,
179 | self.addresses.c.email_address.like(self.users.c.name + '%'))
180 | 
181 | # select from clause to specify tables and the ON condition
182 | s = select([self.users.c.fullname]).select_from(
183 | self.users.join(self.addresses, self.addresses.c.email_address.like(self.users.c.name + '%')))
184 | res = self.conn.execute(s)
185 | assert(len(res.fetchall()) == 3)
186 | 
187 | # left outer joins
188 | s = select([self.users.c.fullname]).select_from(self.users.outerjoin(self.addresses))
189 | # outer join works with teradata dialect (unlike oracle dialect < version9)
190 | 
191 | assert(str(s) == str(s.compile(dialect=self.dialect)))
192 | 
193 | # test bind params (positional)
194 | 
195 | from sqlalchemy import text
196 | s = self.users.select(self.users.c.name.like(
197 | bindparam('username', type_=String)+text("'%'")))
198 | res = self.conn.execute(s, username='wendy').fetchall()
199 | assert(len(res) == 1)
200 | 
201 | # functions
202 | from sqlalchemy.sql import func, column
203 | 
204 | # certain function names are known by sqlalchemy
205 | assert(str(func.current_timestamp()) == 'CURRENT_TIMESTAMP')
206 | 
207 | # functions can be used in the select
208 | res = self.conn.execute(select(
209 | [func.max(self.addresses.c.email_address, type_=String).label(
210 | 'max_email')])).scalar()
211 | assert(res == 'www@www.com')
212 | 
213 | # func result sets: define a function taking params x, y that returns q, z, r
214 | # useful for nested queries, subqueries - w/ dynamic params
215 | calculate = select([column('q'), column('z'), column('r')]).\
216 | select_from(
217 | func.calculate(
218 | bindparam('x'),
219 | bindparam('y')
220 | )
221 | )
222 | calc = calculate.alias()
223 | s = select([self.users]).where(self.users.c.uid > calc.c.z)
224 | assert(' '.join(str(s).split()) ==
225 | 'SELECT my_users.uid, my_users.name, my_users.fullname '
226 | 'FROM my_users, (SELECT q, z, r '
227 | 'FROM calculate(:x, :y)) AS anon_1 WHERE my_users.uid > anon_1.z')
228 | # instantiate the func
229 | calc1 = calculate.alias('c1').unique_params(x=17, y=45)
230 | calc2 = calculate.alias('c2').unique_params(x=5, y=12)
231 | 
232 | s = select([self.users]).where(self.users.c.uid.between(calc1.c.z, calc2.c.z))
233 | parms = s.compile().params
234 | 
235 | assert('x_2' in parms and 'x_1' in parms)
236 | assert('y_2' in parms and 'y_1' in parms)
237 | assert(parms['x_1'] == 17 and parms['y_1'] == 45)
238 | assert(parms['x_2'] == 5 and parms['y_2'] == 12)
239 | 
240 | # order by asc
241 | stmt = select([self.users.c.name]).order_by(self.users.c.name)
242 | res = self.conn.execute(stmt).fetchall()
243 | 
244 | assert('jack' == res[0][0])
245 | assert('wendy' == res[1][0])
246 | 
247 | # order by desc
248 | stmt = select([self.users.c.name]).order_by(self.users.c.name.desc())
249 | res = self.conn.execute(stmt).fetchall()
250 | 
251 | assert('wendy' == res[0][0])
252 | assert('jack' == res[1][0])
253 | 
254 | # group by
255 | stmt = select([self.users.c.name, func.count(self.addresses.c.id)]).\
256 | select_from(self.users.join(self.addresses)).\
257 | group_by(self.users.c.name)
258 | 
259 | res = self.conn.execute(stmt).fetchall()
260 | 
261 | assert(res[1][0] == 'jack')
262 | assert(res[0][0] == 'wendy')
263 | assert(res[0][1] == res[1][1])
264 | 
265 | # group by having
266 | stmt = select([self.users.c.name, func.count(self.addresses.c.id)]).\
267 | select_from(self.users.join(self.addresses)).\
268 | group_by(self.users.c.name).\
269 | having(func.length(self.users.c.name) > 4)
270 | 
271 | res = self.conn.execute(stmt).fetchall()
272 | 
273 | assert(res[0] == ('wendy', 2))
274 | 
275 | # distinct
276 | stmt = select([self.users.c.name]).\
277 | where(self.addresses.c.email_address.contains(self.users.c.name)).distinct()
278 | 
279 | res = self.conn.execute(stmt).fetchall()
280 | 
281 | assert(len(res) == 2)
282 | assert(res[0][0] != res[1][0])
283 | 
284 | # limit
285 | stmt = select([self.users.c.name, self.addresses.c.email_address]).\
286 | select_from(self.users.join(self.addresses)).\
287 | limit(1)
288 | 
289 | res = self.conn.execute(stmt).fetchall()
290 | 
291 | assert(len(res) == 1)
292 | 
293 | # offset
294 | 
295 | # test union and except
296 | from sqlalchemy.sql import except_, union
297 | 
298 | u = union(
299 | self.addresses.select().where(self.addresses.c.email_address == 'foo@bar.com'),
300 | self.addresses.select().where(self.addresses.c.email_address.like('%@yahoo.com')))  # .order_by(self.addresses.c.email_address)
301 | # print(u)
302 | # #res = self.conn.execute(u)  # this fails with a syntax error; ORDER BY in a set operation appears to expect a positional integer
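# A hedged workaround sketch (an illustration, not part of the original test):
# if Teradata indeed wants positional column numbers in a set operation's
# ORDER BY, ordering the union by column position should compile, e.g.:
#
#     from sqlalchemy.sql import literal_column
#     u = union(
#         self.addresses.select().where(self.addresses.c.email_address == 'foo@bar.com'),
#         self.addresses.select().where(self.addresses.c.email_address.like('%@yahoo.com'))
#     ).order_by(literal_column('3'))  # 3 = position of email_address in addresses
#
# literal_column renders its text verbatim, so the ORDER BY emits a bare
# positional integer rather than a bound parameter.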
303 | 
304 | u = except_(
305 | self.addresses.select().where(self.addresses.c.email_address.like('%@%.com')),
306 | self.addresses.select().where(self.addresses.c.email_address.like('%@msn.com')))
307 | res = self.conn.execute(u).fetchall()
308 | assert(len(res) == 3)  # all four addresses match '%@%.com'; removing the msn.com row leaves 3
309 | 
310 | u = except_(
311 | union(
312 | self.addresses.select().where(self.addresses.c.email_address.like('%@yahoo.com')),
313 | self.addresses.select().where(self.addresses.c.email_address.like('%@msn.com'))
314 | ).alias().select(), self.addresses.select(self.addresses.c.email_address.like('%@msn.com'))
315 | )
316 | 
317 | res = self.conn.execute(u).fetchall()
318 | assert(len(res) == 1)
319 | 
320 | # scalar subqueries
321 | stmt = select([func.count(self.addresses.c.id)]).where(self.users.c.uid == self.addresses.c.user_id).as_scalar()
322 | 
323 | # we can place stmt as any other column within another select
324 | res = self.conn.execute(select([self.users.c.name, stmt])).fetchall()
325 | 
326 | # res is a list of tuples, one tuple per user's name
327 | assert(len(res) == 2)
328 | 
329 | u1 = res[0]
330 | u2 = res[1]
331 | 
332 | assert(len(u1) == len(u2))
333 | assert(u1[0] == u'jack')
334 | assert(u1[1] == u2[1])
335 | assert(u2[0] == u'wendy')
336 | 
337 | # we can label the inner query
338 | stmt = select([func.count(self.addresses.c.id)]).\
339 | where(self.users.c.uid == self.addresses.c.user_id).\
340 | label("address_count")
341 | 
342 | res = self.conn.execute(select([self.users.c.name, stmt])).fetchall()
343 | assert(len(res) == 2)
344 | 
345 | u1 = res[0]
346 | u2 = res[1]
347 | 
348 | assert(len(u1) == 2)
349 | assert(len(u2) == 2)
350 | 
351 | # inserts, updates, deletes
352 | stmt = self.users.update().values(fullname="Fullname: " + self.users.c.name)
353 | res = self.conn.execute(stmt)
354 | 
355 | assert('name_1' in res.last_updated_params())
356 | assert(res.last_updated_params()['name_1'] == 'Fullname: ')
357 | 
358 | stmt = self.users.insert().values(name=bindparam('_name') + " .. name")
name") 359 | res = self.conn.execute(stmt, [{'uid': 4, '_name': 'name1'}, {'uid': 5, '_name': 'name2'}, {'uid': 6, '_name': 'name3'}, ]) 360 | 361 | # updates 362 | stmt = self.users.update().where(self.users.c.name == 'jack').values(name='ed') 363 | res = self.conn.execute(stmt) 364 | 365 | assert(res.rowcount == 1) 366 | assert(res.returns_rows is False) 367 | 368 | # update many with bound params 369 | stmt = self.users.update().where(self.users.c.name == bindparam('oldname')).\ 370 | values(name=bindparam('newname')) 371 | res = self.conn.execute(stmt, [ 372 | {'oldname': 'jack', 'newname': 'ed'}, 373 | {'oldname': 'wendy', 'newname': 'mary'}, 374 | ]) 375 | 376 | assert(res.returns_rows is False) 377 | assert(res.rowcount == 1) 378 | 379 | res = self.conn.execute(select([self.users]).where(self.users.c.name == 'ed')) 380 | r = res.fetchone() 381 | assert(r['name'] == 'ed') 382 | 383 | # correlated updates 384 | stmt = select([self.addresses.c.email_address]).\ 385 | where(self.addresses.c.user_id == self.users.c.uid).\ 386 | limit(1) 387 | # this fails, syntax error bc of LIMIT - need TOP/SAMPLE instead 388 | # Note: TOP can't be in a subquery 389 | # res = self.conn.execute(self.users.update().values(fullname=stmt)) 390 | 391 | # multiple table updates 392 | stmt = self.users.update().\ 393 | values(name='ed wood').\ 394 | where(self.users.c.uid == self.addresses.c.id).\ 395 | where(self.addresses.c.email_address.startswith('ed%')) 396 | 397 | # this fails, teradata does update from set where not update set from where 398 | # #res = self.conn.execute(stmt) 399 | 400 | stmt = self.users.update().\ 401 | values({ 402 | self.users.c.name: 'ed wood', 403 | self.addresses.c.email_address: 'ed.wood@foo.com' 404 | }).\ 405 | where(self.users.c.uid == self.addresses.c.id).\ 406 | where(self.addresses.c.email_address.startswith('ed%')) 407 | 408 | # fails but works on MySQL, should this work for us? 409 | # #res = self.conn.execute(stmt) 410 | 411 | # deletes 412 | self.conn.execute(self.addresses.delete()) 413 | self.conn.execute(self.users.delete().where(self.users.c.name > 'm')) 414 | 415 | # matched row counts 416 | # updates + deletes have a number indicating # rows matched by WHERE clause 417 | res = self.conn.execute(self.users.delete()) 418 | assert(res.rowcount == 1) 419 | --------------------------------------------------------------------------------