├── .bumpversion.cfg
├── .gitignore
├── .mention-bot
├── .no-sublime-package
├── Default (Linux).sublime-keymap
├── Default (OSX).sublime-keymap
├── Default (Windows).sublime-keymap
├── Default.sublime-commands
├── ISSUE_TEMPLATE.md
├── LICENSE.md
├── Main.sublime-menu
├── README.md
├── SQLTools.py
├── SQLTools.sublime-settings
├── SQLToolsAPI
│   ├── Command.py
│   ├── Completion.py
│   ├── Connection.py
│   ├── History.py
│   ├── ParseUtils.py
│   ├── README.md
│   ├── Storage.py
│   ├── Utils.py
│   ├── __init__.py
│   └── lib
│       └── sqlparse
│           ├── __init__.py
│           ├── __main__.py
│           ├── cli.py
│           ├── compat.py
│           ├── engine
│           │   ├── __init__.py
│           │   ├── filter_stack.py
│           │   ├── grouping.py
│           │   └── statement_splitter.py
│           ├── exceptions.py
│           ├── filters
│           │   ├── __init__.py
│           │   ├── aligned_indent.py
│           │   ├── others.py
│           │   ├── output.py
│           │   ├── reindent.py
│           │   ├── right_margin.py
│           │   └── tokens.py
│           ├── formatter.py
│           ├── keywords.py
│           ├── lexer.py
│           ├── sql.py
│           ├── tokens.py
│           └── utils.py
├── SQLToolsConnections.sublime-settings
├── SQLToolsSavedQueries.sublime-settings
├── messages.json
└── messages
    ├── install.md
    ├── v0.1.6.md
    ├── v0.2.0.md
    ├── v0.3.0.md
    ├── v0.8.2.md
    ├── v0.9.0.md
    ├── v0.9.1.md
    ├── v0.9.10.md
    ├── v0.9.11.md
    ├── v0.9.12.md
    ├── v0.9.2.md
    ├── v0.9.3.md
    ├── v0.9.4.md
    ├── v0.9.5.md
    ├── v0.9.6.md
    ├── v0.9.7.md
    ├── v0.9.8.md
    └── v0.9.9.md
/.bumpversion.cfg:
--------------------------------------------------------------------------------
1 | [bumpversion]
2 | current_version = 0.9.12
3 | files = SQLTools.py
4 | tag = True
5 | commit = True
6 |
7 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | *.pyc
3 | *.py~
4 | *.sublime-workspace
5 | .idea/
6 |
--------------------------------------------------------------------------------
/.mention-bot:
--------------------------------------------------------------------------------
1 | {
2 | "maxReviewers": 2,
3 | "skipCollaboratorPR": true
4 | }
5 |
--------------------------------------------------------------------------------
/.no-sublime-package:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mtxr/SublimeText-SQLTools/1af1303dff8f34da24bba63e1a63d73aa4fff496/.no-sublime-package
--------------------------------------------------------------------------------
/Default (Linux).sublime-keymap:
--------------------------------------------------------------------------------
1 | [
2 | { "keys": ["ctrl+alt+e"], "command": "st_select_connection" },
3 | { "keys": ["ctrl+e", "ctrl+e"], "command": "st_execute" },
4 | { "keys": ["ctrl+e", "ctrl+a"], "command": "st_execute_all" },
5 | { "keys": ["ctrl+e", "ctrl+x"], "command": "st_explain_plan" },
6 | { "keys": ["ctrl+e", "ctrl+h"], "command": "st_history" },
7 | { "keys": ["ctrl+e", "ctrl+s"], "command": "st_show_records" },
8 | { "keys": ["ctrl+e", "ctrl+d"], "command": "st_desc_table" },
9 | { "keys": ["ctrl+e", "ctrl+f"], "command": "st_desc_function" },
10 | { "keys": ["ctrl+e", "ctrl+b"], "command": "st_format" },
11 | { "keys": ["ctrl+e", "ctrl+q"], "command": "st_save_query" },
12 | { "keys": ["ctrl+e", "ctrl+r"], "command": "st_remove_saved_query" },
13 | { "keys": ["ctrl+e", "ctrl+l"], "command": "st_list_queries", "args": {"mode" : "run"}},
14 | { "keys": ["ctrl+e", "ctrl+o"], "command": "st_list_queries", "args": {"mode" : "open"}},
15 | { "keys": ["ctrl+e", "ctrl+i"], "command": "st_list_queries", "args": {"mode" : "insert"}}
16 | ]
17 |
--------------------------------------------------------------------------------
/Default (OSX).sublime-keymap:
--------------------------------------------------------------------------------
1 | [
2 | { "keys": ["ctrl+super+e"], "command": "st_select_connection" },
3 | { "keys": ["ctrl+e", "ctrl+e"], "command": "st_execute" },
4 | { "keys": ["ctrl+e", "ctrl+a"], "command": "st_execute_all" },
5 | { "keys": ["ctrl+e", "ctrl+x"], "command": "st_explain_plan" },
6 | { "keys": ["ctrl+e", "ctrl+h"], "command": "st_history" },
7 | { "keys": ["ctrl+e", "ctrl+s"], "command": "st_show_records" },
8 | { "keys": ["ctrl+e", "ctrl+d"], "command": "st_desc_table" },
9 | { "keys": ["ctrl+e", "ctrl+f"], "command": "st_desc_function" },
10 | { "keys": ["ctrl+e", "ctrl+b"], "command": "st_format" },
11 | { "keys": ["ctrl+e", "ctrl+q"], "command": "st_save_query" },
12 | { "keys": ["ctrl+e", "ctrl+r"], "command": "st_remove_saved_query" },
13 | { "keys": ["ctrl+e", "ctrl+l"], "command": "st_list_queries", "args": {"mode" : "run"}},
14 | { "keys": ["ctrl+e", "ctrl+o"], "command": "st_list_queries", "args": {"mode" : "open"}},
15 | { "keys": ["ctrl+e", "ctrl+i"], "command": "st_list_queries", "args": {"mode" : "insert"}}
16 | ]
17 |
--------------------------------------------------------------------------------
/Default (Windows).sublime-keymap:
--------------------------------------------------------------------------------
1 | [
2 | { "keys": ["ctrl+alt+e"], "command": "st_select_connection" },
3 | { "keys": ["ctrl+e", "ctrl+e"], "command": "st_execute" },
4 | { "keys": ["ctrl+e", "ctrl+a"], "command": "st_execute_all" },
5 | { "keys": ["ctrl+e", "ctrl+x"], "command": "st_explain_plan" },
6 | { "keys": ["ctrl+e", "ctrl+h"], "command": "st_history" },
7 | { "keys": ["ctrl+e", "ctrl+s"], "command": "st_show_records" },
8 | { "keys": ["ctrl+e", "ctrl+d"], "command": "st_desc_table" },
9 | { "keys": ["ctrl+e", "ctrl+f"], "command": "st_desc_function" },
10 | { "keys": ["ctrl+e", "ctrl+b"], "command": "st_format" },
11 | { "keys": ["ctrl+e", "ctrl+q"], "command": "st_save_query" },
12 | { "keys": ["ctrl+e", "ctrl+r"], "command": "st_remove_saved_query" },
13 | { "keys": ["ctrl+e", "ctrl+l"], "command": "st_list_queries", "args": {"mode" : "run"}},
14 | { "keys": ["ctrl+e", "ctrl+o"], "command": "st_list_queries", "args": {"mode" : "open"}},
15 | { "keys": ["ctrl+e", "ctrl+i"], "command": "st_list_queries", "args": {"mode" : "insert"}}
16 | ]
17 |
--------------------------------------------------------------------------------
/Default.sublime-commands:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "caption": "ST: Select Connection",
4 | "command": "st_select_connection"
5 | },
6 | {
7 | "caption": "ST: Execute",
8 | "command": "st_execute"
9 | },
10 | {
11 | "caption": "ST: Execute All File",
12 | "command": "st_execute_all"
13 | },
14 | {
15 | "caption": "ST: Explain Plan",
16 | "command": "st_explain_plan"
17 | },
18 | {
19 | "caption": "ST: History",
20 | "command": "st_history"
21 | },
22 | {
23 | "caption": "ST: Show Table Records",
24 | "command": "st_show_records"
25 | },
26 | {
27 | "caption": "ST: Table Description",
28 | "command": "st_desc_table"
29 | },
30 | {
31 | "caption": "ST: Function Description",
32 | "command": "st_desc_function"
33 | },
34 | {
35 | "caption": "ST: Refresh Connection Data",
36 | "command": "st_refresh_connection_data"
37 | },
38 | {
39 | "caption": "ST: Save Query",
40 | "command": "st_save_query"
41 | },
42 | {
43 | "caption": "ST: Remove Saved Query",
44 | "command": "st_remove_saved_query"
45 | },
46 | {
47 | "caption": "ST: List and Run Saved Queries",
48 | "command": "st_list_queries",
49 | "args" : {"mode": "run"}
50 | },
51 | {
52 | "caption": "ST: List and Open Saved Queries",
53 | "command": "st_list_queries",
54 | "args" : {"mode": "open"}
55 | },
56 | {
57 | "caption": "ST: List and Insert Saved Queries",
58 | "command": "st_list_queries",
59 | "args" : {"mode": "insert"}
60 | },
61 | {
62 | "caption": "ST: Format SQL",
63 | "command": "st_format"
64 | },
65 | {
66 | "caption": "ST: Format SQL All File",
67 | "command": "st_format_all"
68 | },
69 | {
70 | "caption": "ST: About",
71 | "command": "st_version"
72 | },
73 | {
74 | "caption": "ST: Setup Connections",
75 | "command": "edit_settings", "args":
76 | {
77 | "base_file": "${packages}/SQLTools/SQLToolsConnections.sublime-settings",
78 | "default": "// List all your connections to DBs here\n{\n\t\"connections\": {\n\t\t$0\n\t},\n\t\"default\": null\n}"
79 | }
80 | },
81 | {
82 | "caption": "ST: Settings",
83 | "command": "edit_settings", "args":
84 | {
85 | "base_file": "${packages}/SQLTools/SQLTools.sublime-settings",
86 | "default": "// Settings in here override those in \"SQLTools/SQLTools.sublime-settings\"\n{\n\t$0\n}\n"
87 | }
88 | }
89 | ]
90 |
--------------------------------------------------------------------------------
/ISSUE_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | > This issue template helps us understand your SQLTools issues better.
2 | >
3 | > You don't need to stick to this template, but please try to guide us to reproduce the errors or understand your feature requests.
4 | >
5 | > Before submitting an issue, please consider these things first:
6 | > * Are you running the latest version? If not, try to upgrade.
7 | > * Did you check the [Setup Guide](https://code.mteixeira.dev/SublimeText-SQLTools/)?
8 | > * Did you check the logs in console (``Ctrl+` `` or select *View → Show Console*)?
9 |
10 | ### Issue Type
11 |
12 | Feature Request |
13 | Bug/Error |
14 | Question |
15 | Other
16 |
17 | ### Description
18 |
19 | [Description of the bug / feature / question]
20 |
21 | ### Version
22 |
23 | - *SQLTools Version*: vX.Y.Z
24 | - *OS*: (Windows, Mac, Linux)
25 | - *RDBMS*: (MySQL, PostgreSQL, Oracle, MSSQL, SQLite, Vertica, ...)
26 |
27 | > You can get this information by executing `ST: About` from the Sublime `Command Palette`.
28 |
29 | ### Steps to Reproduce (for bug reports)
30 |
31 | 1. [First Step]
32 | 2. [Second Step]
33 | 3. [and so on...]
34 |
35 | **Expected behavior:** [What you expected to happen]
36 |
37 | **Actual behavior:** [What actually happened]
38 |
--------------------------------------------------------------------------------
/Main.sublime-menu:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "caption": "Preferences",
4 | "mnemonic": "n",
5 | "id": "preferences",
6 | "children":
7 | [
8 | {
9 | "caption": "Package Settings",
10 | "mnemonic": "P",
11 | "id": "package-settings",
12 | "children":
13 | [
14 | {
15 | "caption": "SQLTools",
16 | "children":
17 | [
18 | {
19 | "caption": "Connections",
20 | "command": "edit_settings", "args":
21 | {
22 | "base_file": "${packages}/SQLTools/SQLToolsConnections.sublime-settings",
23 | "default": "// List all your connections to DBs here\n{\n\t\"connections\": {\n\t\t$0\n\t},\n\t\"default\": null\n}"
24 | }
25 | },
26 | {
27 | "caption": "Settings",
28 | "command": "edit_settings", "args":
29 | {
30 | "base_file": "${packages}/SQLTools/SQLTools.sublime-settings",
31 | "default": "// Settings in here override those in \"SQLTools/SQLTools.sublime-settings\"\n{\n\t$0\n}\n"
32 | }
33 | },
34 | {
35 | "caption": "Key Bindings",
36 | "command": "edit_settings", "args":
37 | {
38 | "base_file": "${packages}/SQLTools/Default ($platform).sublime-keymap",
39 | "default": "[\n\t$0\n]\n"
40 | }
41 | }
42 | ]
43 | }
44 | ]
45 | }
46 | ]
47 | }
48 | ]
49 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |  SQLTools
2 | ===============
3 |
4 | > Looking for maintainers! I'm currently using VSCode as my editor, so I'm not actively maintaining this project anymore.
5 | >
6 | > If you are interested in maintaining this project, contact me.
7 | >
8 | > If you are interested in checking out the VSCode version, see [https://github.com/mtxr/vscode-sqltools](https://github.com/mtxr/vscode-sqltools).
9 |
10 |
11 | [](https://gitter.im/SQLTools/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
12 |
13 | Your Swiss Army knife for SQL in Sublime Text.
14 |
15 | Write your SQL with smart completions and handy table and function definitions, execute SQL and explain queries, format your queries and save them in history.
16 |
17 | Project website: [https://code.mteixeira.dev/SublimeText-SQLTools/](https://code.mteixeira.dev/SublimeText-SQLTools/)
18 |
19 | > If you are looking for the VSCode version, go to [https://github.com/mtxr/vscode-sqltools](https://github.com/mtxr/vscode-sqltools).
20 |
21 | ## Donate
22 |
23 | SQLTools was developed with ♥ to save us time during our programming journey. But it also takes time and effort to develop SQLTools.
24 |
25 | SQLTools will surely save you a lot of time and help you increase your productivity, so I hope you can donate and help SQLTools become more awesome than ever.
26 |
27 |
28 |
29 | ## Features
30 |
31 | * Works with PostgreSQL, MySQL, Oracle, MSSQL, SQLite, Vertica, Firebird and Snowflake
32 | * Smart completions (except SQLite)
33 | * Run SQL Queries CTRL+e, CTRL+e
34 | 
35 | * View table description CTRL+e, CTRL+d
36 | 
37 | * Show table records CTRL+e, CTRL+s
38 | 
39 | * Show explain plan for queries CTRL+e, CTRL+x
40 | * Formatting SQL Queries CTRL+e, CTRL+b
41 | 
42 | * View Queries history CTRL+e, CTRL+h
43 | * Save queries CTRL+e, CTRL+q
44 | * List and Run saved queries CTRL+e, CTRL+l
45 | * Remove saved queries CTRL+e, CTRL+r
46 | * Threading support to prevent lockups
47 | * Query timeout (kill thread if query takes too long)
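
The query-timeout feature above boils down to running the DB CLI in the background and killing it when the clock runs out, instead of locking up the editor. Here is a minimal sketch of that idea in Python; the helper name and the sample commands are illustrative, not SQLTools' actual code:

```python
import subprocess


def run_with_timeout(cmd, timeout_seconds):
    """Run a CLI command, killing it if it exceeds the timeout.

    Returns (stdout, timed_out). This mirrors the idea of handing a
    query to the DB CLI as a subprocess and terminating it when the
    timeout elapses.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        out, _err = proc.communicate(timeout=timeout_seconds)
        return out.decode(), False
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.communicate()  # reap the killed process
        return "", True


# A fast command completes normally; a slow one is killed at the deadline.
output, timed_out = run_with_timeout(["echo", "ok"], timeout_seconds=5)
```

In SQLTools this deadline is the `thread_timeout` setting (in seconds) in `SQLTools.sublime-settings`.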
48 |
49 | ## Installing
50 |
51 | ### Using Sublime Package Control
52 |
53 | If you are using [Sublime Package Control](https://packagecontrol.io/packages/SQLTools), you can easily install SQLTools via the `Package Control: Install Package` menu item.
54 |
55 | 1. Press CTRL+SHIFT+p
56 | 2. Type *`Install Package`*
57 | 3. Find *`SQLTools`*
58 | 4. Wait & Done!
59 |
60 | ### Download Manually
61 |
62 | I strongly recommend using Package Control; it keeps the package updated with the latest version.
63 |
64 | 1. Download the latest released zip file [here](https://github.com/mtxr/SublimeText-SQLTools/releases/latest)
65 | 2. Unzip the files and rename the folder to `SQLTools`
66 | 3. Find your `Packages` directory using the menu item `Preferences -> Browse Packages...`
67 | 4. Copy the folder into your Sublime Text `Packages` directory
68 |
69 | ### Using SQLTools with Mac OS X
70 |
71 | Sublime Text has its `PATH` environment variable set by launchctl, not by your shell. Binaries installed by package managers such as Homebrew (for instance `psql`, the DB CLI for `PostgreSQL`) cannot be found by Sublime Text, which results in errors in the Sublime Text console from `SQLTools`. Installing the package `Fix Mac Path`, or setting the full path to your DB CLI binary in `SQLTools.sublime-settings`, resolves this issue. The package can be downloaded via [Package Control](https://packagecontrol.io/packages/Fix%20Mac%20Path) or [GitHub](https://github.com/int3h/SublimeFixMacPath).
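
For the second workaround, a minimal user override might look like this (the `psql` path below is just an example for a typical Homebrew install; point it at wherever your binary actually lives):

```json
// Users/SQLTools.sublime-settings
{
    "cli": {
        "pgsql": "/usr/local/bin/psql"
    }
}
```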
72 |
73 | ## Contributors
74 |
75 | This project exists thanks to all the people who [contribute](https://github.com/mtxr/SublimeText-SQLTools/graphs/contributors).
76 |
77 |
78 | ## Configuration
79 |
80 | Documentation: [https://code.mteixeira.dev/SublimeText-SQLTools/](https://code.mteixeira.dev/SublimeText-SQLTools/)
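
As a quick illustration, a connections file might look like the following sketch. All values here are made up, and the exact keys accepted by each RDBMS are described in the documentation linked above:

```json
// Users/SQLToolsConnections.sublime-settings
{
    "connections": {
        "Local PostgreSQL": {
            "type"     : "pgsql",
            "host"     : "127.0.0.1",
            "port"     : 5432,
            "username" : "postgres",
            "database" : "mydb"
        }
    },
    "default": "Local PostgreSQL"
}
```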
81 |
82 |
83 |
84 |
85 |
--------------------------------------------------------------------------------
/SQLTools.sublime-settings:
--------------------------------------------------------------------------------
1 | // NOTE: it is strongly advised to override ONLY those settings
2 | // that you wish to change in your Users/SQLTools.sublime-settings.
3 | // Don't copy-paste the entire config.
4 | {
5 | /**
6 | * If the DB CLI binary is not in your PATH, set the full path in the "cli" section.
7 | * Note: forward slashes ("/") should be used in path. Example:
8 | * "mysql" : "c:/Program Files/MySQL/MySQL Server 5.7/bin/mysql.exe"
9 | */
10 | "cli": {
11 | "mysql" : "mysql",
12 | "pgsql" : "psql",
13 | "mssql" : "sqlcmd",
14 | "oracle" : "sqlplus",
15 | "sqlite" : "sqlite3",
16 | "vertica" : "vsql",
17 | "firebird": "isql",
18 | "sqsh" : "sqsh",
19 | "snowsql" : "snowsql"
20 | },
21 |
22 | // If there is no SQL selected, use "expanded" region.
23 | // Possible options:
24 | // "file" - entire file contents
25 | // "paragraph" - text between newlines relative to cursor(s)
26 | // "line" - current line of cursor(s)
27 | "expand_to": "file",
28 |
29 | // puts results either in output panel or new window
30 | "show_result_on_window": false,
31 |
32 | // focus on result panel
33 | "focus_on_result": false,
34 |
35 | // clears the output of previous query
36 | "clear_output": true,
37 |
38 | // query timeout in seconds
39 | "thread_timeout": 15,
40 |
41 | // stream the output line by line
42 | "use_streams": false,
43 |
44 | // number of queries to save in the history
45 | "history_size": 100,
46 |
47 | "show_records": {
48 | "limit": 50
49 | },
50 |
51 | // unless false, appends a LIMIT clause to SELECT statements (not compatible with all DBs)
52 | "safe_limit": false,
53 |
54 | "debug": false,
55 |
56 | /**
57 | * Print the queries that were executed to the output.
58 | * Possible values for show_query: "top", "bottom", true ("top"), or false (disable)
59 | * When using regular output, this determines where the query details are displayed.
60 | * In stream output mode, any option that isn't false will print the query details at
61 | * the bottom of the result.
62 | */
63 | "show_query": false,
64 |
65 | /**
66 | * Possible values for autocompletion: "basic", "smart", true ("smart"),
67 | * or false (disable)
68 | * Completion keywords case is controlled by format.keyword_case (see below)
69 | */
70 | "autocompletion": "smart",
71 |
72 | // Settings used for formatting the queries and autocompletions
73 | //
74 | // "keyword_case" , "upper", "lower", "capitalize" and null (leaves case intact)
75 | // "identifier_case", "upper", "lower", "capitalize" and null (leaves case intact)
76 | // "strip_comments" , formatting removes comments
77 | // "indent_tabs" , use tabs instead of spaces
78 | // "indent_width" , indentation width
79 | // "reindent" , reindent code
80 | "format": {
81 | "keyword_case" : "upper",
82 | "identifier_case" : null,
83 | "strip_comments" : false,
84 | "indent_tabs" : false,
85 | "indent_width" : 4,
86 | "reindent" : true
87 | },
88 |
89 | /**
90 | * The list of syntax selectors for which the autocompletion will be active.
91 | * An empty list means autocompletion is always active.
92 | */
93 | "autocomplete_selectors_active": [
94 | "source.sql",
95 | "source.pgsql",
96 | "source.plpgsql.postgres",
97 | "source.plsql.oracle",
98 | "source.tsql"
99 | ],
100 |
101 | /**
102 | * The list of syntax selectors for which the autocompletion will be disabled.
103 | */
104 | "autocomplete_selectors_ignore": [
105 | "string.quoted.single.sql",
106 | "string.quoted.single.pgsql",
107 | "string.quoted.single.postgres",
108 | "string.quoted.single.oracle",
109 | "string.group.quote.oracle"
110 | ],
111 |
112 | /**
113 | * Command line interface options for each RDBMS.
114 | * In this file, the section `cli` above has the names you can use here.
115 | * E.g.: "mysql", "pgsql", "oracle"
116 | *
117 | * Names in the curly brackets (e.g. `{host}`) in sections `args`, `args_optional`,
118 | * `env`, `env_optional` are replaced by the values specified in the connection.
119 | */
120 | "cli_options": {
121 | "pgsql": {
122 | "options": ["--no-password"],
123 | "before": [],
124 | "after": [],
125 | "args": "-d {database}",
126 | "args_optional": [
127 | "-h {host}",
128 | "-p {port}",
129 | "-U {username}"
130 | ],
131 | "env_optional": {
132 | "PGPASSWORD": "{password}"
133 | },
134 | "queries": {
135 | "execute": {
136 | "options": []
137 | },
138 | "show records": {
139 | "query": "select * from {0} limit {1};",
140 | "options": []
141 | },
142 | "desc table": {
143 | "query": "\\d+ {0}",
144 | "options": []
145 | },
146 | "desc function": {
147 | "query": "\\sf {0}",
148 | "options": []
149 | },
150 | "explain plan": {
151 | "query": "explain analyze {0};",
152 | "options": []
153 | },
154 | "desc": {
155 | "query": "select quote_ident(table_schema) || '.' || quote_ident(table_name) as tblname from information_schema.tables where table_schema not in ('pg_catalog', 'information_schema') order by table_schema = current_schema() desc, table_schema, table_name;",
156 | "options": ["--tuples-only", "--no-psqlrc"]
157 | },
158 | "columns": {
159 | "query": "select distinct quote_ident(table_name) || '.' || quote_ident(column_name) from information_schema.columns where table_schema not in ('pg_catalog', 'information_schema');",
160 | "options": ["--tuples-only", "--no-psqlrc"]
161 | },
162 | "functions": {
163 | "query": "select quote_ident(n.nspname) || '.' || quote_ident(f.proname) || '(' || pg_get_function_identity_arguments(f.oid) || ')' as funname from pg_catalog.pg_proc as f inner join pg_catalog.pg_namespace as n on n.oid = f.pronamespace where n.nspname not in ('pg_catalog', 'information_schema');",
164 | "options": ["--tuples-only", "--no-psqlrc"]
165 | }
166 | }
167 | },
168 |
169 | "mysql": {
170 | "options": ["--no-auto-rehash", "--compress"],
171 | "before": [],
172 | "after": [],
173 | "args": "-D\"{database}\"",
174 | "args_optional": [
175 | "--login-path=\"{login-path}\"",
176 | "--defaults-extra-file=\"{defaults-extra-file}\"",
177 | "--default-character-set=\"{default-character-set}\"",
178 | "-h\"{host}\"",
179 | "-P{port}",
180 | "-u\"{username}\""
181 | ],
182 | "env_optional": {
183 | "MYSQL_PWD": "{password}"
184 | },
185 | "queries": {
186 | "execute": {
187 | "options": ["--table"]
188 | },
189 | "show records": {
190 | "query": "select * from {0} limit {1};",
191 | "options": ["--table"]
192 | },
193 | "desc table": {
194 | "query": "desc {0};",
195 | "options": ["--table"]
196 | },
197 | "desc function": {
198 | "query": "select routine_definition from information_schema.routines where concat(case when routine_schema REGEXP '[^0-9a-zA-Z$_]' then concat('`',routine_schema,'`') else routine_schema end, '.', case when routine_name REGEXP '[^0-9a-zA-Z$_]' then concat('`',routine_name,'`') else routine_name end) = '{0}';",
199 | "options": ["--silent", "--raw"]
200 | },
201 | "explain plan": {
202 | "query": "explain {0};",
203 | "options": ["--table"]
204 | },
205 | "desc" : {
206 | "query": "select concat(case when table_schema REGEXP '[^0-9a-zA-Z$_]' then concat('`',table_schema,'`') else table_schema end, '.', case when table_name REGEXP '[^0-9a-zA-Z$_]' then concat('`',table_name,'`') else table_name end) as obj from information_schema.tables where table_schema = database() order by table_name;",
207 | "options": ["--silent", "--raw", "--skip-column-names"]
208 | },
209 | "columns": {
210 | "query": "select concat(case when table_name REGEXP '[^0-9a-zA-Z$_]' then concat('`',table_name,'`') else table_name end, '.', case when column_name REGEXP '[^0-9a-zA-Z$_]' then concat('`',column_name,'`') else column_name end) as obj from information_schema.columns where table_schema = database() order by table_name, ordinal_position;",
211 | "options": ["--silent", "--raw", "--skip-column-names"]
212 | },
213 | "functions": {
214 | "query": "select concat(case when routine_schema REGEXP '[^0-9a-zA-Z$_]' then concat('`',routine_schema,'`') else routine_schema end, '.', case when routine_name REGEXP '[^0-9a-zA-Z$_]' then concat('`',routine_name,'`') else routine_name end, '()') as obj from information_schema.routines where routine_schema = database();",
215 | "options": ["--silent", "--raw", "--skip-column-names"]
216 | }
217 | }
218 | },
219 |
220 | "mssql": {
221 | "options": [],
222 | "before": [],
223 | "after": ["go", "quit"],
224 | "args": "-d \"{database}\"",
225 | "args_optional": ["-S \"{host},{port}\"", "-S \"{host}\\{instance}\"", "-U \"{username}\"", "-P \"{password}\""],
226 | "queries": {
227 | "execute": {
228 | "options": ["-k"]
229 | },
230 | "show records": {
231 | "query": "select top {1} * from {0};",
232 | "options": []
233 | },
234 | "desc table": {
235 | "query": "exec sp_help N'{0}';",
236 | "options": ["-y30", "-Y30"]
237 | },
238 | "desc function": {
239 | "query": "exec sp_helptext N'{0}';",
240 | "options": ["-h-1"]
241 | },
242 | "explain plan": {
243 | "query": "{0};",
244 | "options": ["-k"],
245 | "before": [
246 | "SET STATISTICS PROFILE ON",
247 | "SET STATISTICS IO ON",
248 | "SET STATISTICS TIME ON"
249 | ]
250 | },
251 | "desc": {
252 | "query": "set nocount on; select concat(table_schema, '.', table_name) as obj from information_schema.tables order by table_schema, table_name;",
253 | "options": ["-h-1", "-r1"]
254 | },
255 | "columns": {
256 | "query": "set nocount on; select distinct concat(table_name, '.', column_name) as obj from information_schema.columns;",
257 | "options": ["-h-1", "-r1"]
258 | },
259 | "functions": {
260 | "query": "set nocount on; select concat(routine_schema, '.', routine_name) as obj from information_schema.routines order by routine_schema, routine_name;",
261 | "options": ["-h-1", "-r1"]
262 | }
263 | }
264 | },
265 |
266 | "oracle": {
267 | "options": ["-S"],
268 | "before": [
269 | "SET LINESIZE 32767",
270 | "SET WRAP OFF",
271 | "SET PAGESIZE 0",
272 | "SET EMBEDDED ON",
273 | "SET TRIMOUT ON",
274 | "SET TRIMSPOOL ON",
275 | "SET TAB OFF",
276 | "SET SERVEROUT ON",
277 | "SET NULL '@'",
278 | "SET COLSEP '|'",
279 | "SET SQLBLANKLINES ON"
280 | ],
281 | "after": [],
282 | "env_optional": {
283 | "NLS_LANG": "{nls_lang}"
284 | },
285 | "args": "{username}/{password}@\"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port})))(CONNECT_DATA=(SERVICE_NAME={service})))\"",
286 | "queries": {
287 | "execute": {
288 | "options": [],
289 | "before": [
290 | // "SET TIMING ON",
291 | "SET FEEDBACK ON"
292 | ]
293 | },
294 | "show records": {
295 | "query": "select * from {0} where rownum <= {1};",
296 | "options": [],
297 | "before": [
298 | "SET FEEDBACK ON"
299 | ]
300 | },
301 | "desc table": {
302 | "query": "desc {0};",
303 | "options": [],
304 | "before": [
305 | "SET LINESIZE 80", // override for readability
306 | "SET WRAP ON", // override for readability
307 | "SET FEEDBACK ON"
308 | ]
309 | },
310 | "desc function": {
311 | "query": "select text from all_source where type in ('FUNCTION', 'PROCEDURE', 'PACKAGE', 'PACKAGE BODY') and name = nvl(substr(ltrim('{0}', sys_context('USERENV', 'CURRENT_SCHEMA') || '.' ), 0, instr(ltrim('{0}', sys_context('USERENV', 'CURRENT_SCHEMA') || '.' ), '.')-1), ltrim('{0}', sys_context('USERENV', 'CURRENT_SCHEMA') || '.' )) and owner = sys_context('USERENV', 'CURRENT_SCHEMA') order by type, line;",
312 | "options": [],
313 | "before": [
314 | "SET SERVEROUT OFF", // override
315 | "SET NULL ''", // override
316 | "SET HEADING OFF",
317 | "SET FEEDBACK OFF"
318 | ]
319 | },
320 | "explain plan": {
321 | "query": "explain plan for {0};\nselect plan_table_output from table(dbms_xplan.display());",
322 | "options": [],
323 | "before": [
324 | "SET FEEDBACK ON"
325 | ]
326 | },
327 | "desc" : {
328 | "query": "select owner || '.' || case when upper(name) = name then name else chr(34) || name || chr(34) end as obj from (select owner, table_name as name from all_tables union all select owner, view_name as name from all_views) o where owner not in ('ANONYMOUS','APPQOSSYS','CTXSYS','DBSNMP','EXFSYS', 'LBACSYS', 'MDSYS','MGMT_VIEW','OLAPSYS','OWBSYS','ORDPLUGINS', 'ORDSYS','OUTLN', 'SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSTEM', 'TSMSYS','WK_TEST','WKSYS', 'WKPROXY','WMSYS','XDB','APEX_040000', 'APEX_PUBLIC_USER','DIP', 'FLOWS_30000','FLOWS_FILES','MDDATA', 'ORACLE_OCM','SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR', 'XS$NULL','PUBLIC');",
329 | "options": [],
330 | "before": [
331 | "SET SERVEROUT OFF", // override
332 | "SET NULL ''", // override
333 | "SET HEADING OFF",
334 | "SET FEEDBACK OFF"
335 | ]
336 | },
337 | "columns": {
338 | "query": "select case when upper(table_name) = table_name then table_name else chr(34) || table_name || chr(34) end || '.' || case when upper(column_name) = column_name then column_name else chr(34) || column_name || chr(34) end as obj from (select c.table_name, c.column_name, t.owner from all_tab_columns c inner join all_tables t on c.owner = t.owner and c.table_name = t.table_name union all select c.table_name, c.column_name, t.owner from all_tab_columns c inner join all_views t on c.owner = t.owner and c.table_name = t.view_name) o where owner not in ('ANONYMOUS','APPQOSSYS','CTXSYS','DBSNMP','EXFSYS', 'LBACSYS', 'MDSYS','MGMT_VIEW','OLAPSYS','OWBSYS','ORDPLUGINS', 'ORDSYS','OUTLN', 'SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSTEM', 'TSMSYS','WK_TEST','WKSYS', 'WKPROXY','WMSYS','XDB','APEX_040000', 'APEX_PUBLIC_USER','DIP', 'FLOWS_30000','FLOWS_FILES','MDDATA', 'ORACLE_OCM','SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR', 'XS$NULL','PUBLIC');",
339 | "options": [],
340 | "before": [
341 | "SET SERVEROUT OFF", // override
342 | "SET NULL ''", // override
343 | "SET HEADING OFF",
344 | "SET FEEDBACK OFF"
345 | ]
346 | },
347 | "functions": {
348 | "query": "select case when object_type = 'PACKAGE' then object_name||'.'||procedure_name else owner || '.' || object_name end || '()' as obj from all_procedures where object_type in ('FUNCTION','PROCEDURE','PACKAGE') and not (object_type = 'PACKAGE' and procedure_name is null) and owner = sys_context('USERENV', 'CURRENT_SCHEMA');",
349 | "options": [],
350 | "before": [
351 | "SET SERVEROUT OFF", // override
352 | "SET NULL ''", // override
353 | "SET HEADING OFF",
354 | "SET FEEDBACK OFF"
355 | ]
356 | }
357 | }
358 | },
359 |
360 | "sqlite": {
361 | "options": [],
362 | "before": [],
363 | "after": [],
364 | "args": "\"{database}\"",
365 | "queries": {
366 | "execute": {
367 | "options": ["-column", "-header", "-bail"]
368 | },
369 | "show records": {
370 | "query": "select * from \"{0}\" limit {1};",
371 | "options": ["-column", "-header"]
372 | },
373 | "explain plan": {
374 | "query": "EXPLAIN QUERY PLAN {0};",
375 | "options": ["-column", "-header", "-bail"]
376 | },
377 | "desc table": {
378 | "query": ".schema \"{0}\"",
379 | "options": ["-column", "-header"]
380 | },
381 | "desc" : {
382 | "query": "SELECT name FROM sqlite_master WHERE type='table';",
383 | "options": ["-noheader"],
384 | "before": [
385 | ".headers off"
386 | ]
387 | }
388 | }
389 | },
390 |
391 | "vertica": {
392 | "options": [],
393 | "before" : [],
394 | "after": [],
395 | "args": "-h \"{host}\" -p {port} -U \"{username}\" -w \"{password}\" -d \"{database}\"",
396 | "queries": {
397 | "execute": {
398 | "options": []
399 | },
400 | "show records": {
401 | "query": "select * from {0} limit {1};",
402 | "options": []
403 | },
404 | "desc table": {
405 | "query": "\\d {0}",
406 | "options": []
407 | },
408 | "explain plan": {
409 | "query": "explain {0};",
410 | "options": []
411 | },
412 | "desc" : {
413 | "query": "select quote_ident(table_schema) || '.' || quote_ident(table_name) as obj from v_catalog.tables where is_system_table = false;",
414 | "options": ["--tuples-only", "--no-vsqlrc"]
415 | },
416 | "columns": {
417 | "query": "select quote_ident(table_name) || '.' || quote_ident(column_name) as obj from v_catalog.columns where is_system_table = false order by table_name, ordinal_position;",
418 | "options": ["--tuples-only", "--no-vsqlrc"]
419 | }
420 | }
421 | },
422 |
423 | "firebird": {
424 | "options": [],
425 | "before": [],
426 | "after": [],
427 | "args": "-pagelength 10000 -u \"{username}\" -p \"{password}\" \"{host}/{port}:{database}\"",
428 | "queries": {
429 | "execute": {
430 | "options": [],
431 | "before": [
432 | "set bail on;",
433 | "set count on;"
434 | ]
435 | },
436 | "show records": {
437 | "query": "select first {1} * from {0};",
438 | "options": [],
439 | "before": [
440 | "set bail off;",
441 | "set count on;"
442 | ]
443 | },
444 | "desc table": {
445 | "query": "show table {0}; \n show view {0};",
446 | "before": [
447 | "set bail off;"
448 | ]
449 | },
450 | "desc function": {
451 | "query": "show procedure {0};",
452 | "before": [
453 | "set bail off;"
454 | ]
455 | },
456 | "explain plan": {
457 | "query": "{0};",
458 | "before": [
459 | "set bail on;",
460 | "set count on;",
461 | "set plan on;",
462 | "set stats on;"
463 | ]
464 | },
465 | "desc" : {
466 | "query": "select case when upper(rdb$relation_name) = rdb$relation_name then trim(rdb$relation_name) else '\"' || trim(rdb$relation_name) || '\"' end as obj from rdb$relations where (rdb$system_flag is null or rdb$system_flag = 0);",
467 | "before": [
468 | "set bail off;",
469 | "set heading off;",
470 | "set count off;",
471 | "set stats off;"
472 | ]
473 | },
474 | "columns": {
475 | "query": "select case when upper(f.rdb$relation_name) = f.rdb$relation_name then trim(f.rdb$relation_name) else '\"' || trim(f.rdb$relation_name) || '\"' end || '.' || case when upper(f.rdb$field_name) = f.rdb$field_name then trim(f.rdb$field_name) else '\"' || trim(f.rdb$field_name) || '\"' end as obj from rdb$relation_fields f join rdb$relations r on f.rdb$relation_name = r.rdb$relation_name where (r.rdb$system_flag is null or r.rdb$system_flag = 0) order by f.rdb$relation_name, f.rdb$field_position;",
476 | "before": [
477 | "set bail off;",
478 | "set heading off;",
479 | "set count off;",
480 | "set stats off;"
481 | ]
482 | },
483 | "functions": {
484 | "query": "select case when upper(rdb$procedure_name) = rdb$procedure_name then trim(rdb$procedure_name) else '\"' || trim(rdb$procedure_name) || '\"' end || '()' as obj from rdb$procedures where (rdb$system_flag is null or rdb$system_flag = 0);",
485 | "before": [
486 | "set bail off;",
487 | "set heading off;",
488 | "set count off;",
489 | "set stats off;"
490 | ]
491 | }
492 | }
493 | },
494 |
495 | "sqsh": {
496 | "options": [],
497 | "before": [],
498 | "after": [],
499 | "args": "-S {host}:{port} -U\"{username}\" -P\"{password}\" -D{database}",
500 | "queries": {
501 | "execute": {
502 | "options": [],
503 | "before": ["\\set semicolon_cmd=\"\\go -mpretty -l\""]
504 | },
505 | "desc": {
506 | "query": "select concat(table_schema, '.', table_name) from information_schema.tables order by table_name;",
507 | "options": [],
508 | "before" :["\\set semicolon_cmd=\"\\go -mpretty -l -h -f\""]
509 | },
510 | "columns": {
511 | "query": "select concat(table_name, '.', column_name) from information_schema.columns order by table_name, ordinal_position;",
512 | "options": [],
513 | "before" :["\\set semicolon_cmd=\"\\go -mpretty -l -h -f\""]
514 | },
515 | "desc table": {
516 | "query": "exec sp_columns \"{0}\";",
517 | "options": [],
518 | "before": ["\\set semicolon_cmd=\"\\go -mpretty -l -h -f\""]
519 | },
520 | "show records": {
521 | "query": "select top {1} * from \"{0}\";",
522 | "options": [],
523 | "before": ["\\set semicolon_cmd=\"\\go -mpretty -l\""]
524 | }
525 | }
526 | },
527 |
528 | "snowsql": {
529 | "options": [],
530 | "env_optional": {
531 | "SNOWSQL_PWD": "{password}"
532 | },
533 | "args": "-u {user} -a {account} -d {database} --authenticator {auth} -K",
534 | "queries": {
535 | "execute": {
536 |                 "options": []
537 | },
538 | "show records": {
539 | "query": "SELECT * FROM {0} LIMIT {1};"
540 | },
541 | "desc table": {
542 | "query": "DESC TABLE {0};",
543 | "options": ["-o", "output_format=plain",
544 | "-o", "timing=False"]
545 | },
546 | "desc": {
547 | "query": "SELECT TABLE_SCHEMA || '.' || TABLE_NAME AS tbl FROM INFORMATION_SCHEMA.TABLES ORDER BY tbl;",
548 | "options": ["-o", "output_format=plain",
549 | "-o", "header=False",
550 | "-o", "timing=False"]
551 | },
552 | "columns": {
553 | "query": "SELECT TABLE_NAME || '.' || COLUMN_NAME AS col FROM INFORMATION_SCHEMA.COLUMNS ORDER BY col;",
554 | "options": ["-o", "output_format=plain",
555 | "-o", "header=False",
556 | "-o", "timing=False"]
557 | }
558 | }
559 | }
560 | }
561 | }
562 |
--------------------------------------------------------------------------------
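The `"queries"` entries above are plain `str.format` templates: `{0}` is the object (table) name and `{1}` the row limit. A hedged sketch of how such a template expands (the exact expansion call inside the plugin is an assumption here, but the placeholder semantics follow from the settings themselves):

```python
# Templates copied from the settings above; {0} = table name, {1} = row limit.
firebird_show_records = "select first {1} * from {0};"
vertica_show_records = "select * from {0} limit {1};"

# "employees" and the limit 50 are hypothetical example values.
print(firebird_show_records.format("employees", 50))
# select first 50 * from employees;
print(vertica_show_records.format("employees", 50))
# select * from employees limit 50;
```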
/SQLToolsAPI/Command.py:
--------------------------------------------------------------------------------
1 | import os
2 | import signal
3 | import subprocess
4 | import time
5 | import logging
6 |
7 | from threading import Thread, Timer
8 |
9 | logger = logging.getLogger(__name__)
10 |
11 | class Command(object):
12 | timeout = 15
13 |
14 | def __init__(self, args, env, callback, query=None, encoding='utf-8',
15 | options=None, timeout=15, silenceErrors=False, stream=False):
16 | if options is None:
17 | options = {}
18 |
19 | self.args = args
20 | self.env = env
21 | self.callback = callback
22 | self.query = query
23 | self.encoding = encoding
24 | self.options = options
25 | self.timeout = timeout
26 | self.silenceErrors = silenceErrors
27 | self.stream = stream
28 | self.process = None
29 |
30 | if 'show_query' not in self.options:
31 | self.options['show_query'] = False
32 | elif self.options['show_query'] not in ['top', 'bottom']:
33 | self.options['show_query'] = 'top' if (isinstance(self.options['show_query'], bool) and
34 | self.options['show_query']) else False
35 |
36 | def run(self):
37 | if not self.query:
38 | return
39 |
40 |         self.args = list(map(str, self.args))
41 | si = None
42 | if os.name == 'nt':
43 | si = subprocess.STARTUPINFO()
44 | si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
45 |
46 |         # select appropriate file handle for stderr
47 |         # usually we want to redirect stderr to stdout, so errors are shown
48 |         # in the output in the right place (where they actually occurred);
49 |         # only if silenceErrors=True do we separate stderr from stdout and discard it
50 | stderrHandle = subprocess.STDOUT
51 | if self.silenceErrors:
52 | stderrHandle = subprocess.PIPE
53 |
54 | # set the environment
55 | modifiedEnvironment = os.environ.copy()
56 | if (self.env):
57 | modifiedEnvironment.update(self.env)
58 |
59 | queryTimerStart = time.time()
60 |
61 | self.process = subprocess.Popen(self.args,
62 | stdout=subprocess.PIPE,
63 | stderr=stderrHandle,
64 | stdin=subprocess.PIPE,
65 | env=modifiedEnvironment,
66 | startupinfo=si)
67 |
68 | if self.stream:
69 | self.process.stdin.write(self.query.encode(self.encoding))
70 | self.process.stdin.close()
71 | hasWritten = False
72 |
73 | for line in self.process.stdout:
74 | self.callback(line.decode(self.encoding, 'replace').replace('\r', ''))
75 | hasWritten = True
76 |
77 | queryTimerEnd = time.time()
78 | # we are done with the output, terminate the process
79 | if self.process:
80 | self.process.terminate()
81 | else:
82 | if hasWritten:
83 | self.callback('\n')
84 |
85 | if self.options['show_query']:
86 | formattedQueryInfo = self._formatShowQuery(self.query, queryTimerStart, queryTimerEnd)
87 | self.callback(formattedQueryInfo + '\n')
88 |
89 | return
90 |
91 | # regular mode is handled with more reliable Popen.communicate
92 | # which also terminates the process afterwards
93 | results, errors = self.process.communicate(input=self.query.encode(self.encoding))
94 |
95 | queryTimerEnd = time.time()
96 |
97 | resultString = ''
98 |
99 | if results:
100 | resultString += results.decode(self.encoding,
101 | 'replace').replace('\r', '')
102 |
103 | if errors and not self.silenceErrors:
104 | resultString += errors.decode(self.encoding,
105 | 'replace').replace('\r', '')
106 |
107 | if self.process is None and resultString != '':
108 | resultString += '\n'
109 |
110 | if self.options['show_query']:
111 | formattedQueryInfo = self._formatShowQuery(self.query, queryTimerStart, queryTimerEnd)
112 | queryPlacement = self.options['show_query']
113 | if queryPlacement == 'top':
114 | resultString = "{0}\n{1}".format(formattedQueryInfo, resultString)
115 | elif queryPlacement == 'bottom':
116 | resultString = "{0}{1}\n".format(resultString, formattedQueryInfo)
117 |
118 | self.callback(resultString)
119 |
120 | @staticmethod
121 | def _formatShowQuery(query, queryTimeStart, queryTimeEnd):
122 |         resultInfo = "/*\n-- Executed query(ies) at {0} took {1:.3f} s --".format(
123 | str(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(queryTimeStart))),
124 | (queryTimeEnd - queryTimeStart))
125 | resultLine = "-" * (len(resultInfo) - 3)
126 | resultString = "{0}\n{1}\n{2}\n{3}\n*/".format(
127 | resultInfo, resultLine, query, resultLine)
128 | return resultString
129 |
130 | @staticmethod
131 | def createAndRun(args, env, callback, query=None, encoding='utf-8',
132 | options=None, timeout=15, silenceErrors=False, stream=False):
133 | if options is None:
134 | options = {}
135 | command = Command(args=args,
136 | env=env,
137 | callback=callback,
138 | query=query,
139 | encoding=encoding,
140 | options=options,
141 | timeout=timeout,
142 | silenceErrors=silenceErrors,
143 | stream=stream)
144 | command.run()
145 |
146 |
147 | class ThreadCommand(Command, Thread):
148 | def __init__(self, args, env, callback, query=None, encoding='utf-8',
149 | options=None, timeout=Command.timeout, silenceErrors=False, stream=False):
150 | if options is None:
151 | options = {}
152 |
153 | Command.__init__(self,
154 | args=args,
155 | env=env,
156 | callback=callback,
157 | query=query,
158 | encoding=encoding,
159 | options=options,
160 | timeout=timeout,
161 | silenceErrors=silenceErrors,
162 | stream=stream)
163 | Thread.__init__(self)
164 |
165 | def stop(self):
166 | if not self.process:
167 | return
168 |
169 | # if poll returns None - proc still running, otherwise returns process return code
170 | if self.process.poll() is not None:
171 | return
172 |
173 | try:
174 | # Windows does not provide SIGKILL, go with SIGTERM
175 | sig = getattr(signal, 'SIGKILL', signal.SIGTERM)
176 | os.kill(self.process.pid, sig)
177 | self.process = None
178 |
179 | logger.info("command execution exceeded timeout (%s s), process killed", self.timeout)
180 | self.callback(("Command execution time exceeded 'thread_timeout' ({0} s).\n"
181 | "Process killed!\n\n"
182 | ).format(self.timeout))
183 | except Exception:
184 | logger.info("command execution exceeded timeout (%s s), process could not be killed", self.timeout)
185 | self.callback(("Command execution time exceeded 'thread_timeout' ({0} s).\n"
186 | "Process could not be killed!\n\n"
187 | ).format(self.timeout))
188 | pass
189 |
190 | @staticmethod
191 | def createAndRun(args, env, callback, query=None, encoding='utf-8',
192 | options=None, timeout=Command.timeout, silenceErrors=False, stream=False):
193 | # Don't allow empty dicts or lists as defaults in method signature,
194 | # cfr http://nedbatchelder.com/blog/200806/pylint.html
195 | if options is None:
196 | options = {}
197 | command = ThreadCommand(args=args,
198 | env=env,
199 | callback=callback,
200 | query=query,
201 | encoding=encoding,
202 | options=options,
203 | timeout=timeout,
204 | silenceErrors=silenceErrors,
205 | stream=stream)
206 | command.start()
207 | killTimeout = Timer(command.timeout, command.stop)
208 | killTimeout.start()
209 |
--------------------------------------------------------------------------------
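`Command.run` above follows a common pattern: merge stderr into stdout so errors appear inline, feed the query via stdin, and hand the decoded output to a callback. A minimal standalone sketch of that pattern (the `run_query` helper and the `cat`-like subprocess are hypothetical stand-ins, not part of SQLTools):

```python
import subprocess
import sys

def run_query(args, query, callback, encoding='utf-8', silence_errors=False):
    # merge stderr into stdout so errors show up where they occurred;
    # if silenced, capture stderr separately and discard it
    stderr_handle = subprocess.PIPE if silence_errors else subprocess.STDOUT
    proc = subprocess.Popen(args, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=stderr_handle)
    # communicate() writes the query, reads all output and reaps the process
    out, _ = proc.communicate(input=query.encode(encoding))
    callback(out.decode(encoding, 'replace').replace('\r', ''))

# hypothetical "CLI": a python one-liner that upper-cases its stdin
results = []
run_query([sys.executable, '-c',
           'import sys; sys.stdout.write(sys.stdin.read().upper())'],
          'select 1;', results.append)
print(results[0])  # SELECT 1;
```

This mirrors the non-stream branch of `Command.run`; the stream branch instead iterates `proc.stdout` line by line and calls the callback per line.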
/SQLToolsAPI/Completion.py:
--------------------------------------------------------------------------------
1 | import re
2 | import logging
3 | from collections import namedtuple
4 |
5 | from .ParseUtils import extractTables
6 |
7 | JOIN_COND_PATTERN = r"\s+?JOIN\s+?[\w\.`\"]+\s+?(?:AS\s+)?(\w+)\s+?ON\s+?(?:[\w\.]+)?$"
8 | JOIN_COND_REGEX = re.compile(JOIN_COND_PATTERN, re.IGNORECASE)
9 |
10 | keywords_list = [
11 | 'SELECT', 'UPDATE', 'DELETE', 'INSERT', 'INTO', 'FROM',
12 | 'WHERE', 'GROUP BY', 'ORDER BY', 'HAVING', 'JOIN',
13 | 'INNER JOIN', 'LEFT JOIN', 'RIGHT JOIN', 'USING',
14 | 'LIMIT', 'DISTINCT', 'SET'
15 | ]
16 |
17 | logger = logging.getLogger(__name__)
18 |
19 |
20 | # this function is used extensively in completions code to get rid
21 | # of all sorts of leading and trailing quotes in RDBMS identifiers
22 | def _stripQuotes(ident):
23 | return ident.strip('"\'`')
24 |
25 |
26 | # used for formatting output
27 | def _stripQuotesOnDemand(ident, doStrip=True):
28 | if doStrip:
29 | return _stripQuotes(ident)
30 | return ident
31 |
32 |
33 | def _startsWithQuote(ident):
34 | # ident is matched against any of the possible ident quotes
35 | quotes = ('`', '"')
36 | return ident.startswith(quotes)
37 |
38 |
39 | def _stripPrefix(text, prefix):
40 | if text.startswith(prefix):
41 | return text[len(prefix):]
42 | return text
43 |
44 |
45 | # escape $ sign when formatting output
46 | def _escapeDollarSign(ident):
47 |     return ident.replace("$", "\\$")
48 |
49 |
50 | class CompletionItem(namedtuple('CompletionItem', ['type', 'ident'])):
51 | """Represents a potential or actual completion item.
52 | * type - type of item (Table, Function, Column)
53 | * ident - identifier (table.column, schema.table, alias)
54 | """
55 | __slots__ = ()
56 |
57 | @property
58 | def parent(self):
59 | """Parent of identifier, e.g. "table" from "table.column" """
60 | if self.ident.count('.') == 0:
61 | return None
62 | else:
63 | return self.ident.partition('.')[0]
64 |
65 | @property
66 | def name(self):
67 | """Name of identifier, e.g. "column" from "table.column" """
68 | return self.ident.split('.').pop()
69 |
70 | # for functions - strip open bracket "(" and everything after that
71 | # e.g: mydb.myAdd(int, int) --> mydb.myadd
72 | def _matchIdent(self):
73 | if self.type == 'Function':
74 | return self.ident.partition('(')[0].lower()
75 | return self.ident.lower()
76 |
77 | # Helper method for string matching
78 | # When exactly is true:
79 | # matches search string to target exactly, but empty search string matches anything
80 | # When exactly is false:
81 |     #   if only one char is given in the search string, match it against the start
82 |     #   of the target string; otherwise match the search string anywhere in it
83 | @staticmethod
84 | def _stringMatched(target, search, exactly):
85 | if exactly:
86 | return target == search or search == ''
87 | else:
88 | if (len(search) == 1):
89 | return target.startswith(search)
90 | return search in target
91 |
92 | # Method to match completion item against search string (prefix).
93 | # Lower score means a better match.
94 | # If completion item matches prefix with parent identifier, e.g.:
95 | # table_name.column ~ table_name.co, then score = 1
96 | # If completion item matches prefix without parent identifier, e.g.:
97 | # table_name.column ~ co, then score = 2
98 | # If completion item matches, but prefix has no parent, e.g.:
99 | # table ~ tab, then score = 3
100 | def prefixMatchScore(self, search, exactly=False):
101 | target = self._matchIdent()
102 | search = search.lower()
103 |
104 | # match parent exactly and partially match name
105 | if '.' in target and '.' in search:
106 | searchList = search.split('.')
107 | searchObject = _stripQuotes(searchList.pop())
108 | searchParent = _stripQuotes(searchList.pop())
109 | targetList = target.split('.')
110 | targetObject = _stripQuotes(targetList.pop())
111 | targetParent = _stripQuotes(targetList.pop())
112 | if (searchParent == targetParent):
113 | if self._stringMatched(targetObject, searchObject, exactly):
114 | return 1 # highest score
115 | return 0
116 |
117 | # second part matches ?
118 | if '.' in target:
119 | targetObjectNoQuote = _stripQuotes(target.split('.').pop())
120 | searchNoQuote = _stripQuotes(search)
121 | if self._stringMatched(targetObjectNoQuote, searchNoQuote, exactly):
122 | return 2
123 | else:
124 | targetNoQuote = _stripQuotes(target)
125 | searchNoQuote = _stripQuotes(search)
126 | if self._stringMatched(targetNoQuote, searchNoQuote, exactly):
127 | return 3
128 | else:
129 | return 0
130 | return 0
131 |
132 | def prefixMatchListScore(self, searchList, exactly=False):
133 | for item in searchList:
134 | score = self.prefixMatchScore(item, exactly)
135 | if score:
136 | return score
137 | return 0
138 |
139 | # format completion item according to sublime text completions format
140 | def format(self, stripQuotes=False):
141 | typeDisplay = ''
142 | if self.type == 'Table':
143 | typeDisplay = self.type
144 | elif self.type == 'Keyword':
145 | typeDisplay = self.type
146 | elif self.type == 'Alias':
147 | typeDisplay = self.type
148 | elif self.type == 'Function':
149 | typeDisplay = 'Func'
150 | elif self.type == 'Column':
151 | typeDisplay = 'Col'
152 |
153 | if not typeDisplay:
154 | return (self.ident, _stripQuotesOnDemand(self.ident, stripQuotes))
155 |
156 | part = self.ident.split('.')
157 | if len(part) > 1:
158 | return ("{0}\t({1} {2})".format(part[1], part[0], typeDisplay),
159 | _stripQuotesOnDemand(_escapeDollarSign(part[1]), stripQuotes))
160 |
161 | return ("{0}\t({1})".format(self.ident, typeDisplay),
162 | _stripQuotesOnDemand(_escapeDollarSign(self.ident), stripQuotes))
163 |
164 |
165 | class Completion:
166 | def __init__(self, allTables, allColumns, allFunctions, settings=None):
167 | self.allTables = [CompletionItem('Table', table) for table in allTables]
168 | self.allColumns = [CompletionItem('Column', column) for column in allColumns]
169 | self.allFunctions = [CompletionItem('Function', func) for func in allFunctions]
170 |
171 | # we don't save the settings (we don't need them after init)
172 | if settings is None:
173 | settings = {}
174 |
175 | # check old setting name ('selectors') first for compatibility
176 | activeSelectors = settings.get('selectors', None)
177 | if not activeSelectors:
178 | activeSelectors = settings.get(
179 | 'autocomplete_selectors_active',
180 | ['source.sql'])
181 | self.activeSelectors = activeSelectors
182 |
183 | self.ignoreSelectors = settings.get(
184 | 'autocomplete_selectors_ignore',
185 | ['string.quoted.single.sql'])
186 |
187 | # determine type of completions
188 | self.completionType = settings.get('autocompletion', 'smart')
189 | if not self.completionType:
190 | self.completionType = None # autocompletion disabled
191 | else:
192 | self.completionType = str(self.completionType).strip()
193 | if self.completionType not in ['basic', 'smart']:
194 | self.completionType = 'smart'
195 |
196 | # determine desired keywords case from settings
197 | formatSettings = settings.get('format', {})
198 | keywordCase = formatSettings.get('keyword_case', 'upper')
199 | uppercaseKeywords = keywordCase.lower().startswith('upper')
200 |
201 | self.allKeywords = []
202 | for keyword in keywords_list:
203 | if uppercaseKeywords:
204 | keyword = keyword.upper()
205 | else:
206 | keyword = keyword.lower()
207 |
208 | self.allKeywords.append(CompletionItem('Keyword', keyword))
209 |
210 | def getActiveSelectors(self):
211 | return self.activeSelectors
212 |
213 | def getIgnoreSelectors(self):
214 | return self.ignoreSelectors
215 |
216 | def isDisabled(self):
217 | return self.completionType is None
218 |
219 | def getAutoCompleteList(self, prefix, sql, sqlToCursor):
220 | if self.isDisabled():
221 | return None
222 |
223 | autocompleteList = []
224 | inhibit = False
225 | if self.completionType == 'smart':
226 | autocompleteList, inhibit = self._getAutoCompleteListSmart(prefix, sql, sqlToCursor)
227 | else:
228 | autocompleteList = self._getAutoCompleteListBasic(prefix)
229 |
230 | if not autocompleteList:
231 | return None, False
232 |
233 | # return completions with or without quotes?
234 | # determined based on ident after last dot
235 | startsWithQuote = _startsWithQuote(prefix.split(".").pop())
236 | autocompleteList = [item.format(startsWithQuote) for item in autocompleteList]
237 |
238 | return autocompleteList, inhibit
239 |
240 | def _getAutoCompleteListBasic(self, prefix):
241 | prefix = prefix.lower()
242 | autocompleteList = []
243 |
244 | # columns, tables and functions that match the prefix
245 | for item in self.allColumns:
246 | score = item.prefixMatchScore(prefix)
247 | if score:
248 | autocompleteList.append(item)
249 |
250 | for item in self.allTables:
251 | score = item.prefixMatchScore(prefix)
252 | if score:
253 | autocompleteList.append(item)
254 |
255 | for item in self.allFunctions:
256 | score = item.prefixMatchScore(prefix)
257 | if score:
258 | autocompleteList.append(item)
259 |
260 | if len(autocompleteList) == 0:
261 | return None
262 |
263 | return autocompleteList
264 |
265 | def _getAutoCompleteListSmart(self, prefix, sql, sqlToCursor):
266 | """
267 | Generally, we recognize 3 different variations in prefix:
268 | * ident| // no dots (.) in prefix
269 | In this case we show completions for all available identifiers (tables, columns,
270 | functions) that have "ident" text in them. Identifiers relevant to current
271 | statement shown first.
272 | * parent.ident| // single dot in prefix
273 |       In this case, if "parent" matches one of the parsed table aliases, we show column
274 |       completions for it, and also run a prefix search over all other identifiers.
275 | If something is matched, we return results as well as set a flag to suppress
276 | Sublime completions.
277 | If we don't find any objects using prefix search or we know that "parent" is
278 |       a query alias, we don't return anything and allow Sublime to do its job by
279 |       showing the most relevant completions.
280 | * database.table.col| // multiple dots in prefix
281 |       In this case we only show columns of "table", as there is nothing else
282 | that could be referenced that way.
283 |       Since it's too complicated to handle the specifics of identifier case sensitivity,
284 |       as well as all the nuances of identifier quoting for each RDBMS, we always
285 |       match the lower-cased, quote-stripped forms of both the prefix and our internally
286 |       saved identifiers (tables, columns, functions). E.g. "MyTable"."myCol" --> mytable.mycol
287 | """
288 |
289 | # TODO: add completions of function out fields
290 | prefix = prefix.lower()
291 | prefixDots = prefix.count('.')
292 |
293 | # continue with empty identifiers list, even if we failed to parse identifiers
294 | identifiers = []
295 | try:
296 | identifiers = extractTables(sql)
297 | except Exception as e:
298 |             logger.debug('Failed to extract the list of identifiers from SQL:\n {}'.format(sql),
299 | exc_info=True)
300 |
301 | # joinAlias is set only if user is editing join condition with alias. E.g.
302 | # SELECT a.* from tbl_a a inner join tbl_b b ON |
303 | joinAlias = None
304 | if prefixDots <= 1:
305 | try:
306 |                 # pattern is already compiled with re.IGNORECASE; passing flags
307 |                 # here would be treated as the "pos" argument
306 |                 joinCondMatch = JOIN_COND_REGEX.search(sqlToCursor)
307 | if joinCondMatch:
308 | joinAlias = joinCondMatch.group(1)
309 | except Exception as e:
310 |                 logger.debug('Failed to search for join condition, SQL:\n {}'.format(sqlToCursor),
311 | exc_info=True)
312 |
313 | autocompleteList = []
314 | inhibit = False
315 | if prefixDots == 0:
316 | autocompleteList, inhibit = self._noDotsCompletions(prefix, identifiers, joinAlias)
317 | elif prefixDots == 1:
318 | autocompleteList, inhibit = self._singleDotCompletions(prefix, identifiers, joinAlias)
319 | else:
320 | autocompleteList, inhibit = self._multiDotCompletions(prefix, identifiers)
321 |
322 | if not autocompleteList:
323 | return None, False
324 |
325 | return autocompleteList, inhibit
326 |
327 | def _noDotsCompletions(self, prefix, identifiers, joinAlias=None):
328 | """
329 | Method handles most generic completions when prefix does not contain any dots.
330 | In this case completions can be anything: cols, tables, functions that have this name.
331 |     Still, we try to predict the user's needs and output aliases, tables, columns and functions
332 |     that are used in the currently parsed statement first, then show everything else that
333 | could be related.
334 | Order: statement aliases -> statement cols -> statement tables -> statement functions,
335 | then: other cols -> other tables -> other functions that match the prefix in their names
336 | """
337 |
338 | # use set, as we are interested only in unique identifiers
339 | sqlAliases = set()
340 | sqlTables = []
341 | sqlColumns = []
342 | sqlFunctions = []
343 | otherTables = []
344 | otherColumns = []
345 | otherFunctions = []
346 | otherKeywords = []
347 | otherJoinConditions = []
348 |
349 |         # temporary utility sets
350 | identTables = set()
351 | identColumns = set()
352 | identFunctions = set()
353 |
354 | for ident in identifiers:
355 | if ident.has_alias():
356 | aliasItem = CompletionItem('Alias', ident.alias)
357 | score = aliasItem.prefixMatchScore(prefix)
358 | if score and aliasItem.ident != prefix:
359 | sqlAliases.add(aliasItem)
360 |
361 | if ident.is_function:
362 | identFunctions.add(ident.full_name)
363 | elif ident.is_table_alias:
364 | identTables.add(ident.full_name)
365 | identColumns.add(ident.name + '.' + prefix)
366 |
367 | for table in self.allTables:
368 | score = table.prefixMatchScore(prefix, exactly=False)
369 | if score:
370 | if table.prefixMatchListScore(identTables, exactly=True) > 0:
371 | sqlTables.append(table)
372 | else:
373 | otherTables.append(table)
374 |
375 | for col in self.allColumns:
376 | score = col.prefixMatchScore(prefix, exactly=False)
377 | if score:
378 | if col.prefixMatchListScore(identColumns, exactly=False) > 0:
379 | sqlColumns.append(col)
380 | else:
381 | otherColumns.append(col)
382 |
383 |         for fun in self.allFunctions:
384 |             score = fun.prefixMatchScore(prefix, exactly=False)
385 |             if score:
386 |                 if fun.prefixMatchListScore(identFunctions, exactly=True) > 0:
387 |                     sqlFunctions.append(fun)
388 |                 else:
389 |                     otherFunctions.append(fun)
390 |
391 | # keywords
392 | for item in self.allKeywords:
393 | score = item.prefixMatchScore(prefix)
394 | if score:
395 | otherKeywords.append(item)
396 |
397 | # join conditions
398 | if joinAlias:
399 | joinConditions = self._joinConditionCompletions(identifiers, joinAlias)
400 |
401 | for condition in joinConditions:
402 | if condition.ident.lower().startswith(prefix):
403 | otherJoinConditions.append(condition)
404 |
405 |         # collect the results in preferred order
406 | autocompleteList = []
407 |
408 | # first of all list join conditions (if applicable)
409 | autocompleteList.extend(otherJoinConditions)
410 |
411 | # then aliases and identifiers related to currently parsed statement
412 | autocompleteList.extend(sqlAliases)
413 |
414 | # then cols, tables, functions related to current statement
415 | autocompleteList.extend(sqlColumns)
416 | autocompleteList.extend(sqlTables)
417 | autocompleteList.extend(sqlFunctions)
418 |
419 | # then other matching cols, tables, functions
420 | autocompleteList.extend(otherKeywords)
421 | autocompleteList.extend(otherColumns)
422 | autocompleteList.extend(otherTables)
423 | autocompleteList.extend(otherFunctions)
424 |
425 | return autocompleteList, False
426 |
427 | def _singleDotCompletions(self, prefix, identifiers, joinAlias=None):
428 | """
429 |         More intelligent completions can be shown in certain cases when the prefix contains a single dot.
430 | """
431 | prefixList = prefix.split(".")
432 | prefixObject = prefixList.pop()
433 | prefixParent = prefixList.pop()
434 |
435 | # get join conditions
436 | joinConditions = []
437 | if joinAlias:
438 | joinConditions = self._joinConditionCompletions(identifiers, joinAlias)
439 |
440 | sqlTableAliases = set() # set of CompletionItem
441 | sqlQueryAliases = set() # set of strings
442 |
443 | # we use set, as we are interested only in unique identifiers
444 | for ident in identifiers:
445 | if ident.has_alias() and ident.alias.lower() == prefixParent:
446 | if ident.is_query_alias:
447 | sqlQueryAliases.add(ident.alias)
448 |
449 | if ident.is_table_alias:
450 | tables = [
451 | table
452 | for table in self.allTables
453 | if table.prefixMatchScore(ident.full_name, exactly=True) > 0
454 | ]
455 | sqlTableAliases.update(tables)
456 |
457 | autocompleteList = []
458 |
459 | for condition in joinConditions:
460 | aliasPrefix = prefixParent + '.'
461 | if condition.ident.lower().startswith(aliasPrefix):
462 | autocompleteList.append(CompletionItem(condition.type,
463 | _stripPrefix(condition.ident, aliasPrefix)))
464 |
465 | # first of all expand table aliases to real table names and try
466 | # to match their columns with prefix of these expanded identifiers
467 |         # e.g. select x.co| from tab x // "x.co" will expand to "tab.co"
468 | for table_item in sqlTableAliases:
469 | prefix_to_match = table_item.name + '.' + prefixObject
470 | for item in self.allColumns:
471 | score = item.prefixMatchScore(prefix_to_match)
472 | if score:
473 | autocompleteList.append(item)
474 |
475 | # try to match all our other objects (tables, columns, functions) with prefix
476 | for item in self.allColumns:
477 | score = item.prefixMatchScore(prefix)
478 | if score:
479 | autocompleteList.append(item)
480 |
481 | for item in self.allTables:
482 | score = item.prefixMatchScore(prefix)
483 | if score:
484 | autocompleteList.append(item)
485 |
486 | for item in self.allFunctions:
487 | score = item.prefixMatchScore(prefix)
488 | if score:
489 | autocompleteList.append(item)
490 |
491 | inhibit = len(autocompleteList) > 0
492 | # in case prefix parent is a query alias we simply don't know what those
493 |         # columns might be, so set inhibit = False to allow sublime default completions
494 | if prefixParent in sqlQueryAliases:
495 | inhibit = False
496 |
497 | return autocompleteList, inhibit
498 |
499 | # match only columns if prefix contains multiple dots (db.table.col)
500 | def _multiDotCompletions(self, prefix, identifiers):
501 | autocompleteList = []
502 | for item in self.allColumns:
503 | score = item.prefixMatchScore(prefix)
504 | if score:
505 | autocompleteList.append(item)
506 |
507 | if len(autocompleteList) > 0:
508 | return autocompleteList, True
509 |
510 | return None, False
511 |
512 | def _joinConditionCompletions(self, identifiers, joinAlias=None):
513 | if not joinAlias:
514 | return None
515 |
516 | # use set, as we are interested only in unique identifiers
517 | sqlTableAliases = set()
518 | joinAliasColumns = set()
519 | sqlOtherColumns = set()
520 |
521 | for ident in identifiers:
522 | if ident.has_alias() and not ident.is_function:
523 | sqlTableAliases.add(CompletionItem('Alias', ident.alias))
524 |
525 | prefixForColumnMatch = ident.name + '.'
526 | columns = [
527 | (ident.alias, col)
528 | for col in self.allColumns
529 | if (col.prefixMatchScore(prefixForColumnMatch, exactly=True) > 0 and
530 | _stripQuotes(col.name).lower().endswith('id'))
531 | ]
532 |
533 | if ident.alias == joinAlias:
534 | joinAliasColumns.update(columns)
535 | else:
536 | sqlOtherColumns.update(columns)
537 |
538 | joinCandidatesCompletions = []
539 | for joinAlias, joinColumn in joinAliasColumns:
540 | # str.endswith can be matched against a tuple
541 | columnsToMatch = None
542 | if _stripQuotes(joinColumn.name).lower() == 'id':
543 | columnsToMatch = (
544 | _stripQuotes(joinColumn.parent).lower() + _stripQuotes(joinColumn.name).lower(),
545 | _stripQuotes(joinColumn.parent).lower() + '_' + _stripQuotes(joinColumn.name).lower()
546 | )
547 | else:
548 | columnsToMatch = (
549 | _stripQuotes(joinColumn.name).lower(),
550 | _stripQuotes(joinColumn.parent).lower() + _stripQuotes(joinColumn.name).lower(),
551 | _stripQuotes(joinColumn.parent).lower() + '_' + _stripQuotes(joinColumn.name).lower()
552 | )
553 |
554 | for otherAlias, otherColumn in sqlOtherColumns:
555 | if _stripQuotes(otherColumn.name).lower().endswith(columnsToMatch):
556 | sideA = joinAlias + '.' + joinColumn.name
557 | sideB = otherAlias + '.' + otherColumn.name
558 |
559 | joinCandidatesCompletions.append(CompletionItem('Condition', sideA + ' = ' + sideB))
560 | joinCandidatesCompletions.append(CompletionItem('Condition', sideB + ' = ' + sideA))
561 |
562 | return joinCandidatesCompletions
563 |
--------------------------------------------------------------------------------
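The scoring rules documented above `prefixMatchScore` (score 1 for a parent-qualified match, 2 for a name-only match on a qualified identifier, 3 for an unqualified match, 0 for no match) can be restated as a simplified standalone sketch. This is a hypothetical re-statement for illustration, not the plugin's method: it omits quote stripping and the special handling of single-character searches.

```python
def prefix_match_score(target, search):
    # lower score means a better match; 0 means no match
    target, search = target.lower(), search.lower()
    if '.' in target and '.' in search:
        t_parent, _, t_obj = target.partition('.')
        s_parent, _, s_obj = search.partition('.')
        # parent must match exactly, name partially
        return 1 if t_parent == s_parent and s_obj in t_obj else 0
    if '.' in target:
        # unqualified search against the name part of a qualified identifier
        return 2 if search in target.split('.')[-1] else 0
    return 3 if search in target else 0

print(prefix_match_score('table_name.column', 'table_name.co'))  # 1
print(prefix_match_score('table_name.column', 'co'))             # 2
print(prefix_match_score('table', 'tab'))                        # 3
```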
/SQLToolsAPI/Connection.py:
--------------------------------------------------------------------------------
1 | import shutil
2 | import shlex
3 | import codecs
4 | import logging
5 | import sqlparse
6 |
7 | from . import Utils as U
8 | from . import Command as C
9 |
10 | logger = logging.getLogger(__name__)
11 |
12 |
13 | def _encoding_exists(enc):
14 | try:
15 | codecs.lookup(enc)
16 | except LookupError:
17 | return False
18 | return True
19 |
20 |
21 | class Connection(object):
22 | DB_CLI_NOT_FOUND_MESSAGE = """DB CLI '{0}' could not be found.
23 | Please set the path to DB CLI '{0}' binary in your SQLTools settings before continuing.
24 | Example of "cli" section in SQLTools.sublime-settings:
25 | /* ... (note the use of forward slashes) */
26 | "cli" : {{
27 | "mysql" : "c:/Program Files/MySQL/MySQL Server 5.7/bin/mysql.exe",
28 | "pgsql" : "c:/Program Files/PostgreSQL/9.6/bin/psql.exe"
29 | }}
30 | You might need to restart the editor for settings to be refreshed."""
31 |
32 | name = None
33 | options = None
34 | settings = None
35 | type = None
36 | host = None
37 | port = None
38 | database = None
39 | username = None
40 | password = None
41 | encoding = None
42 | safe_limit = None
43 | show_query = None
44 | rowsLimit = None
45 | history = None
46 | timeout = None
47 |
48 | def __init__(self, name, options, settings=None, commandClass='ThreadCommand'):
49 | self.Command = getattr(C, commandClass)
50 |
51 | self.name = name
52 | self.options = {k: v for k, v in options.items() if v is not None}
53 |
54 | if settings is None:
55 | settings = {}
56 | self.settings = settings
57 |
58 | self.type = self.options.get('type', None)
59 | self.host = self.options.get('host', None)
60 | self.port = self.options.get('port', None)
61 | self.database = self.options.get('database', None)
62 | self.username = self.options.get('username', None)
63 | self.password = self.options.get('password', None)
64 | self.encoding = self.options.get('encoding', 'utf-8')
65 | self.encoding = self.encoding or 'utf-8' # defaults to utf-8
66 | if not _encoding_exists(self.encoding):
67 | self.encoding = 'utf-8'
68 |
69 | self.safe_limit = settings.get('safe_limit', None)
70 | self.show_query = settings.get('show_query', False)
71 | self.rowsLimit = settings.get('show_records', {}).get('limit', 50)
72 | self.useStreams = settings.get('use_streams', False)
73 | self.cli = settings.get('cli')[self.options['type']]
74 |
75 | cli_path = shutil.which(self.cli)
76 | if cli_path is None:
77 | logger.info(self.DB_CLI_NOT_FOUND_MESSAGE.format(self.cli))
78 | raise FileNotFoundError(self.DB_CLI_NOT_FOUND_MESSAGE.format(self.cli))
79 |
80 | def __str__(self):
81 | return self.name
82 |
83 | def info(self):
84 | return 'DB: {0}, Connection: {1}@{2}:{3}'.format(
85 | self.database, self.username, self.host, self.port)
86 |
87 | def runInternalNamedQueryCommand(self, queryName, callback):
88 | query = self.getNamedQuery(queryName)
89 | if not query:
90 | emptyList = []
91 | callback(emptyList)
92 | return
93 |
94 | queryToRun = self.buildNamedQuery(queryName, query)
95 | args = self.buildArgs(queryName)
96 | env = self.buildEnv()
97 |
98 | def cb(result):
99 | callback(U.getResultAsList(result))
100 |
101 | self.Command.createAndRun(args=args,
102 | env=env,
103 | callback=cb,
104 | query=queryToRun,
105 | encoding=self.encoding,
106 | timeout=60,
107 | silenceErrors=True,
108 | stream=False)
109 |
110 | def getTables(self, callback):
111 | self.runInternalNamedQueryCommand('desc', callback)
112 |
113 | def getColumns(self, callback):
114 | self.runInternalNamedQueryCommand('columns', callback)
115 |
116 | def getFunctions(self, callback):
117 | self.runInternalNamedQueryCommand('functions', callback)
118 |
119 | def runFormattedNamedQueryCommand(self, queryName, formatValues, callback):
120 | query = self.getNamedQuery(queryName)
121 | if not query:
122 | return
123 |
124 | # added for compatibility with older format string
125 | query = query.replace("%s", "{0}", 1)
126 | query = query.replace("%s", "{1}", 1)
127 |
128 | if isinstance(formatValues, tuple):
129 | query = query.format(*formatValues) # unpack the tuple
130 | else:
131 | query = query.format(formatValues)
132 |
133 | queryToRun = self.buildNamedQuery(queryName, query)
134 | args = self.buildArgs(queryName)
135 | env = self.buildEnv()
136 | self.Command.createAndRun(args=args,
137 | env=env,
138 | callback=callback,
139 | query=queryToRun,
140 | encoding=self.encoding,
141 | timeout=self.timeout,
142 | silenceErrors=False,
143 | stream=False)
144 |
145 | def getTableRecords(self, tableName, callback):
146 | # in case we expect multiple values pack them into tuple
147 | formatValues = (tableName, self.rowsLimit)
148 | self.runFormattedNamedQueryCommand('show records', formatValues, callback)
149 |
150 | def getTableDescription(self, tableName, callback):
151 | self.runFormattedNamedQueryCommand('desc table', tableName, callback)
152 |
153 | def getFunctionDescription(self, functionName, callback):
154 | self.runFormattedNamedQueryCommand('desc function', functionName, callback)
155 |
156 | def explainPlan(self, queries, callback):
157 | queryName = 'explain plan'
158 | explainQuery = self.getNamedQuery(queryName)
159 | if not explainQuery:
160 | return
161 |
162 | strippedQueries = [
163 | explainQuery.format(query.strip().strip(";"))
164 | for rawQuery in queries
165 | for query in filter(None, sqlparse.split(rawQuery))
166 | ]
167 | queryToRun = self.buildNamedQuery(queryName, strippedQueries)
168 | args = self.buildArgs(queryName)
169 | env = self.buildEnv()
170 | self.Command.createAndRun(args=args,
171 | env=env,
172 | callback=callback,
173 | query=queryToRun,
174 | encoding=self.encoding,
175 | timeout=self.timeout,
176 | silenceErrors=False,
177 | stream=self.useStreams)
178 |
179 | def execute(self, queries, callback, stream=None):
180 | queryName = 'execute'
181 |
182 |         # if not explicitly overridden, use the value from settings
183 | if stream is None:
184 | stream = self.useStreams
185 |
186 | if isinstance(queries, str):
187 | queries = [queries]
188 |
189 |         # add original (unmodified) queries to the history
190 | if self.history:
191 | self.history.add('\n'.join(queries))
192 |
193 | processedQueriesList = []
194 | for rawQuery in queries:
195 | for query in sqlparse.split(rawQuery):
196 | if self.safe_limit:
197 | parsedTokens = sqlparse.parse(query.strip().replace("'", "\""))
198 |                     if ((parsedTokens[0][0].ttype in sqlparse.tokens.Keyword and
199 |                          parsedTokens[0][0].value.lower() == 'select')):
200 | applySafeLimit = True
201 | for parse in parsedTokens:
202 | for token in parse.tokens:
203 |                             if token.ttype in sqlparse.tokens.Keyword and token.value.lower() == 'limit':
204 | applySafeLimit = False
205 | if applySafeLimit:
206 | if (query.strip()[-1:] == ';'):
207 | query = query.strip()[:-1]
208 | query += " LIMIT {0};".format(self.safe_limit)
209 | processedQueriesList.append(query)
210 |
211 | queryToRun = self.buildNamedQuery(queryName, processedQueriesList)
212 | args = self.buildArgs(queryName)
213 | env = self.buildEnv()
214 |
215 | logger.debug("Query: %s", str(queryToRun))
216 |
217 | self.Command.createAndRun(args=args,
218 | env=env,
219 | callback=callback,
220 | query=queryToRun,
221 | encoding=self.encoding,
222 | options={'show_query': self.show_query},
223 | timeout=self.timeout,
224 | silenceErrors=False,
225 | stream=stream)
226 |
227 | def getNamedQuery(self, queryName):
228 | if not queryName:
229 | return None
230 |
231 | cliOptions = self.getOptionsForSgdbCli()
232 | return cliOptions.get('queries', {}).get(queryName, {}).get('query')
233 |
234 | def buildNamedQuery(self, queryName, queries):
235 | if not queryName:
236 | return None
237 |
238 | if not queries:
239 | return None
240 |
241 | cliOptions = self.getOptionsForSgdbCli()
242 | beforeCli = cliOptions.get('before')
243 | afterCli = cliOptions.get('after')
244 | beforeQuery = cliOptions.get('queries', {}).get(queryName, {}).get('before')
245 | afterQuery = cliOptions.get('queries', {}).get(queryName, {}).get('after')
246 |
247 |         # sometimes the user's raw queries are preprocessed upstream, in which case we already have a list
248 | if type(queries) is not list:
249 | queries = [queries]
250 |
251 | builtQueries = []
252 | if beforeCli is not None:
253 | builtQueries.extend(beforeCli)
254 | if beforeQuery is not None:
255 | builtQueries.extend(beforeQuery)
256 | if queries is not None:
257 | builtQueries.extend(queries)
258 | if afterQuery is not None:
259 | builtQueries.extend(afterQuery)
260 | if afterCli is not None:
261 | builtQueries.extend(afterCli)
262 |
263 | # remove empty list items
264 | builtQueries = list(filter(None, builtQueries))
265 |
266 | return '\n'.join(builtQueries)
267 |
268 | def buildArgs(self, queryName=None):
269 | cliOptions = self.getOptionsForSgdbCli()
270 | args = [self.cli]
271 |
272 |         # append optional args (if any) - could be a single value or a list
273 | optionalArgs = cliOptions.get('args_optional')
274 | if optionalArgs: # only if we have optional args
275 | if isinstance(optionalArgs, list):
276 | for item in optionalArgs:
277 | formattedItem = self.formatOptionalArgument(item, self.options)
278 | if formattedItem:
279 | args = args + shlex.split(formattedItem)
280 | else:
281 | formattedItem = self.formatOptionalArgument(optionalArgs, self.options)
282 | if formattedItem:
283 | args = args + shlex.split(formattedItem)
284 |
285 | # append generic options
286 | options = cliOptions.get('options', None)
287 | if options:
288 | args = args + options
289 |
290 | # append query specific options (if present)
291 | if queryName:
292 | queryOptions = cliOptions.get('queries', {}).get(queryName, {}).get('options')
293 | if queryOptions:
294 | if len(queryOptions) > 0:
295 | args = args + queryOptions
296 |
297 | # append main args - could be a single value or a list
298 | mainArgs = cliOptions['args']
299 | if isinstance(mainArgs, list):
300 | mainArgs = ' '.join(mainArgs)
301 |
302 | mainArgs = mainArgs.format(**self.options)
303 | args = args + shlex.split(mainArgs)
304 |
305 | logger.debug('CLI args (%s): %s', str(queryName), ' '.join(args))
306 | return args
307 |
308 | def buildEnv(self):
309 | cliOptions = self.getOptionsForSgdbCli()
310 | env = dict()
311 |
312 | # append **optional** environment variables dict (if any)
313 | optionalEnv = cliOptions.get('env_optional')
314 |         if optionalEnv:  # only if we have optional env vars
315 | if isinstance(optionalEnv, dict):
316 | for var, value in optionalEnv.items():
317 | formattedValue = self.formatOptionalArgument(value, self.options)
318 | if formattedValue:
319 | env.update({var: formattedValue})
320 |
321 | # append environment variables dict (if any)
322 | staticEnv = cliOptions.get('env')
323 |         if staticEnv:  # only if we have static env vars
324 | if isinstance(staticEnv, dict):
325 | for var, value in staticEnv.items():
326 | formattedValue = value.format(**self.options)
327 | if formattedValue:
328 | env.update({var: formattedValue})
329 |
330 | logger.debug('CLI environment: %s', str(env))
331 | return env
332 |
333 | def getOptionsForSgdbCli(self):
334 | return self.settings.get('cli_options', {}).get(self.type)
335 |
336 | @staticmethod
337 | def formatOptionalArgument(argument, formatOptions):
338 | try:
339 | formattedArg = argument.format(**formatOptions)
340 | except (KeyError, IndexError):
341 | return None
342 |
343 | if argument == formattedArg: # string not changed after format
344 | return None
345 | return formattedArg
346 |
347 | @staticmethod
348 | def setTimeout(timeout):
349 | Connection.timeout = timeout
350 | logger.info('Connection timeout set to {0} seconds'.format(timeout))
351 |
352 | @staticmethod
353 | def setHistoryManager(manager):
354 | Connection.history = manager
355 | size = manager.getMaxSize()
356 | logger.info('Connection history size is {0}'.format(size))
357 |
--------------------------------------------------------------------------------
/SQLToolsAPI/History.py:
--------------------------------------------------------------------------------
1 | __version__ = "v0.1.0"
2 |
3 |
4 | class SizeException(Exception):
5 | pass
6 |
7 |
8 | class NotFoundException(Exception):
9 | pass
10 |
11 |
12 | class History:
13 |
14 | def __init__(self, maxSize=100):
15 | self.items = []
16 | self.maxSize = maxSize
17 |
18 | def add(self, query):
19 | if self.getSize() >= self.getMaxSize():
20 | self.items.pop(0)
21 | self.items.insert(0, query)
22 |
23 | def get(self, index):
24 | if index < 0 or index > (len(self.items) - 1):
25 | raise NotFoundException("No query selected")
26 |
27 | return self.items[index]
28 |
29 | def setMaxSize(self, size=100):
30 | if size < 1:
31 | raise SizeException("Size can't be lower than 1")
32 |
33 | self.maxSize = size
34 | return self.maxSize
35 |
36 | def getMaxSize(self):
37 | return self.maxSize
38 |
39 | def getSize(self):
40 | return len(self.items)
41 |
42 | def all(self):
43 | return self.items
44 |
45 | def clear(self):
46 | self.items = []
47 | return self.items
48 |
--------------------------------------------------------------------------------
/SQLToolsAPI/ParseUtils.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import itertools
4 | from collections import namedtuple
5 |
6 | dirpath = os.path.join(os.path.dirname(__file__), 'lib')
7 | if dirpath not in sys.path:
8 | sys.path.append(dirpath)
9 |
10 | import sqlparse
11 |
12 | from sqlparse.sql import IdentifierList, Identifier, Function
13 | from sqlparse.tokens import Keyword, DML
14 |
15 |
16 | class Reference(namedtuple('Reference', ['schema', 'name', 'alias', 'is_function'])):
17 | __slots__ = ()
18 |
19 | def has_alias(self):
20 | return self.alias is not None
21 |
22 | @property
23 | def is_query_alias(self):
24 | return self.name is None and self.alias is not None
25 |
26 | @property
27 | def is_table_alias(self):
28 | return self.name is not None and self.alias is not None and not self.is_function
29 |
30 | @property
31 | def full_name(self):
32 | if self.schema is None:
33 | return self.name
34 | else:
35 | return self.schema + '.' + self.name
36 |
37 |
38 | def _is_subselect(parsed):
39 | if not parsed.is_group:
40 | return False
41 | for item in parsed.tokens:
42 | if item.ttype is DML and item.value.upper() in ('SELECT', 'INSERT',
43 | 'UPDATE', 'CREATE', 'DELETE'):
44 | return True
45 | return False
46 |
47 |
48 | def _identifier_is_function(identifier):
49 | return any(isinstance(t, Function) for t in identifier.tokens)
50 |
51 |
52 | def _extract_from_part(parsed):
53 | tbl_prefix_seen = False
54 | for item in parsed.tokens:
55 | if item.is_group:
56 | for x in _extract_from_part(item):
57 | yield x
58 | if tbl_prefix_seen:
59 | if _is_subselect(item):
60 | for x in _extract_from_part(item):
61 | yield x
62 | # An incomplete nested select won't be recognized correctly as a
63 | # sub-select. eg: 'SELECT * FROM (SELECT id FROM user'. This causes
64 | # the second FROM to trigger this elif condition resulting in a
65 |             # StopIteration. So we need to ignore the keyword if it is
66 |             # FROM.
67 | # Also 'SELECT * FROM abc JOIN def' will trigger this elif
68 | # condition. So we need to ignore the keyword JOIN and its variants
69 | # INNER JOIN, FULL OUTER JOIN, etc.
70 | elif item.ttype is Keyword and (
71 | not item.value.upper() == 'FROM') and (
72 | not item.value.upper().endswith('JOIN')):
73 | tbl_prefix_seen = False
74 | else:
75 | yield item
76 | elif item.ttype is Keyword or item.ttype is Keyword.DML:
77 | item_val = item.value.upper()
78 | if (item_val in ('COPY', 'FROM', 'INTO', 'UPDATE', 'TABLE') or
79 | item_val.endswith('JOIN')):
80 | tbl_prefix_seen = True
81 | # 'SELECT a, FROM abc' will detect FROM as part of the column list.
82 | # So this check here is necessary.
83 | elif isinstance(item, IdentifierList):
84 | for identifier in item.get_identifiers():
85 | if (identifier.ttype is Keyword and
86 | identifier.value.upper() == 'FROM'):
87 | tbl_prefix_seen = True
88 | break
89 |
90 |
91 | def _extract_table_identifiers(token_stream):
92 | for item in token_stream:
93 | if isinstance(item, IdentifierList):
94 | for ident in item.get_identifiers():
95 | try:
96 | alias = ident.get_alias()
97 | schema_name = ident.get_parent_name()
98 | real_name = ident.get_real_name()
99 | except AttributeError:
100 | continue
101 | if real_name:
102 | yield Reference(schema_name, real_name,
103 | alias, _identifier_is_function(ident))
104 | elif isinstance(item, Identifier):
105 | yield Reference(item.get_parent_name(), item.get_real_name(),
106 | item.get_alias(), _identifier_is_function(item))
107 | elif isinstance(item, Function):
108 | yield Reference(item.get_parent_name(), item.get_real_name(),
109 | item.get_alias(), _identifier_is_function(item))
110 |
111 |
112 | def extractTables(sql):
113 | # let's handle multiple statements in one sql string
114 | extracted_tables = []
115 | statements = list(sqlparse.parse(sql))
116 | for statement in statements:
117 | stream = _extract_from_part(statement)
118 | extracted_tables.append(list(_extract_table_identifiers(stream)))
119 | return list(itertools.chain(*extracted_tables))
120 |
--------------------------------------------------------------------------------
/SQLToolsAPI/README.md:
--------------------------------------------------------------------------------
1 | ## SQLTools API for plugins - v0.2.5
2 |
3 | Documentation will be available soon.
4 |
--------------------------------------------------------------------------------
/SQLToolsAPI/Storage.py:
--------------------------------------------------------------------------------
1 | import os
2 | import shutil
3 | from . import Utils as U
4 |
5 | __version__ = "v0.1.0"
6 |
7 |
8 | class Storage:
9 | def __init__(self, filename, default=None):
10 | self.storageFile = filename
11 | self.defaultFile = default
12 | self.items = {}
13 |
14 | # copy entire file, to keep comments
15 | # if not os.path.isfile(filename) and default and os.path.isfile(default):
16 | # shutil.copyfile(default, filename)
17 |
18 | self.all()
19 |
20 | def all(self):
21 | userFile = self.getFilename()
22 |
23 | if os.path.exists(userFile):
24 | self.items = U.parseJson(self.getFilename())
25 | else:
26 | self.items = {}
27 |
28 | return U.merge(self.items, self.defaults())
29 |
30 | def write(self):
31 | return U.saveJson(self.items if isinstance(self.items, dict) else {}, self.getFilename())
32 |
33 | def add(self, key, value):
34 | if len(key) <= 0:
35 | return
36 |
37 | self.all()
38 |
39 | if isinstance(value, str):
40 | value = [value]
41 |
42 | self.items[key] = '\n'.join(value)
43 | self.write()
44 |
45 | def delete(self, key):
46 | if len(key) <= 0:
47 | return
48 |
49 | self.all()
50 | self.items.pop(key)
51 | self.write()
52 |
53 | def get(self, key, default=None):
54 | if len(key) <= 0:
55 | return
56 |
57 | items = self.all()
58 | return items[key] if key in items else default
59 |
60 | def getFilename(self):
61 | return self.storageFile
62 |
63 | def defaults(self):
64 | if self.defaultFile and os.path.isfile(self.defaultFile):
65 | return U.parseJson(self.defaultFile)
66 | return {}
67 |
68 |
69 | class Settings(Storage):
70 | pass
71 |
--------------------------------------------------------------------------------
/SQLToolsAPI/Utils.py:
--------------------------------------------------------------------------------
1 | __version__ = "v0.2.0"
2 |
3 | import json
4 | import os
5 | import re
6 | import sys
7 |
8 | dirpath = os.path.join(os.path.dirname(__file__), 'lib')
9 | if dirpath not in sys.path:
10 | sys.path.append(dirpath)
11 |
12 | import sqlparse
13 |
14 | # Regular expression for comments
15 | comment_re = re.compile(
16 | '(^)?[^\S\n]*/(?:\*(.*?)\*/[^\S\n]*|/[^\n]*)($)?',
17 | re.DOTALL | re.MULTILINE
18 | )
19 |
20 |
21 | def parseJson(filename):
22 | """ Parse a JSON file
23 | First remove comments and then use the json module package
24 | Comments look like :
25 | // ...
26 | or
27 | /*
28 | ...
29 | */
30 | """
31 |
32 | with open(filename, mode='r', encoding='utf-8') as f:
33 |         content = f.read()
34 |
35 | # Looking for comments
36 | match = comment_re.search(content)
37 | while match:
38 | # single line comment
39 | content = content[:match.start()] + content[match.end():]
40 | match = comment_re.search(content)
41 |
42 | # remove trailing commas
43 | content = re.sub(r',([ \t\r\n]+)}', r'\1}', content)
44 | content = re.sub(r',([ \t\r\n]+)\]', r'\1]', content)
45 |
46 |     # Return parsed JSON content
47 |     return json.loads(content)
48 |
49 |
50 | def saveJson(content, filename):
51 | with open(filename, mode='w', encoding='utf-8') as outfile:
52 | json.dump(content, outfile,
53 | sort_keys=True, indent=2, separators=(',', ': '))
54 |
55 |
56 | def getResultAsList(results):
57 | resultList = []
58 | for result in results.splitlines():
59 | lineResult = ''
60 | for element in result.strip('|').split('|'):
61 | lineResult += element.strip()
62 | if lineResult:
63 | resultList.append(lineResult)
64 | return resultList
65 |
66 |
67 | def formatSql(raw, settings):
68 | try:
69 | result = sqlparse.format(raw, **settings)
70 |
71 | return result
72 | except Exception:
73 | return None
74 |
75 |
76 | def merge(source, destination):
77 | """
78 | run me with nosetests --with-doctest file.py
79 |
80 | >>> a = { 'first' : { 'all_rows' : { 'pass' : 'dog', 'number' : '1' } } }
81 | >>> b = { 'first' : { 'all_rows' : { 'fail' : 'cat', 'number' : '5' } } }
82 | >>> merge(b, a) == { 'first' : { 'all_rows' : { 'pass' : 'dog', 'fail' : 'cat', 'number' : '5' } } }
83 | True
84 | """
85 | for key, value in source.items():
86 | if isinstance(value, dict):
87 | # get node or create one
88 | node = destination.setdefault(key, {})
89 | merge(value, node)
90 | else:
91 | destination[key] = value
92 |
93 | return destination
94 |
--------------------------------------------------------------------------------
/SQLToolsAPI/__init__.py:
--------------------------------------------------------------------------------
1 | __version__ = "v0.3.0"
2 |
3 |
4 | __all__ = [
5 | 'Utils',
6 | 'Completion',
7 | 'Command',
8 | 'Connection',
9 | 'History',
10 | 'Storage',
11 | 'Settings'
12 | ]
13 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """Parse SQL statements."""
9 |
10 | # Setup namespace
11 | from sqlparse import sql
12 | from sqlparse import cli
13 | from sqlparse import engine
14 | from sqlparse import tokens
15 | from sqlparse import filters
16 | from sqlparse import formatter
17 |
18 | from sqlparse.compat import text_type
19 |
20 | __version__ = '0.2.3'
21 | __all__ = ['engine', 'filters', 'formatter', 'sql', 'tokens', 'cli']
22 |
23 |
24 | def parse(sql, encoding=None):
25 | """Parse sql and return a list of statements.
26 |
27 | :param sql: A string containing one or more SQL statements.
28 | :param encoding: The encoding of the statement (optional).
29 | :returns: A tuple of :class:`~sqlparse.sql.Statement` instances.
30 | """
31 | return tuple(parsestream(sql, encoding))
32 |
33 |
34 | def parsestream(stream, encoding=None):
35 | """Parses sql statements from file-like object.
36 |
37 | :param stream: A file-like object.
38 | :param encoding: The encoding of the stream contents (optional).
39 | :returns: A generator of :class:`~sqlparse.sql.Statement` instances.
40 | """
41 | stack = engine.FilterStack()
42 | stack.enable_grouping()
43 | return stack.run(stream, encoding)
44 |
45 |
46 | def format(sql, encoding=None, **options):
47 | """Format *sql* according to *options*.
48 |
49 | Available options are documented in :ref:`formatting`.
50 |
51 | In addition to the formatting options this function accepts the
52 | keyword "encoding" which determines the encoding of the statement.
53 |
54 | :returns: The formatted SQL statement as string.
55 | """
56 | stack = engine.FilterStack()
57 | options = formatter.validate_options(options)
58 | stack = formatter.build_filter_stack(stack, options)
59 | stack.postprocess.append(filters.SerializerUnicode())
60 | return u''.join(stack.run(sql, encoding))
61 |
62 |
63 | def split(sql, encoding=None):
64 | """Split *sql* into single statements.
65 |
66 | :param sql: A string containing one or more SQL statements.
67 | :param encoding: The encoding of the statement (optional).
68 | :returns: A list of strings.
69 | """
70 | stack = engine.FilterStack()
71 | return [text_type(stmt).strip() for stmt in stack.run(sql, encoding)]
72 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/__main__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | #
4 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
5 | #
6 | # This module is part of python-sqlparse and is released under
7 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
8 |
9 | """Entrypoint module for `python -m sqlparse`.
10 |
11 | Why does this file exist, and why __main__? For more info, read:
12 | - https://www.python.org/dev/peps/pep-0338/
13 | - https://docs.python.org/2/using/cmdline.html#cmdoption-m
14 | - https://docs.python.org/3/using/cmdline.html#cmdoption-m
15 | """
16 |
17 | import sys
18 |
19 | from sqlparse.cli import main
20 |
21 | if __name__ == '__main__':
22 | sys.exit(main())
23 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/cli.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 | #
4 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
5 | #
6 | # This module is part of python-sqlparse and is released under
7 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
8 |
9 | """Module that contains the command line app.
10 |
11 | Why does this file exist, and why not put this in __main__?
12 | You might be tempted to import things from __main__ later, but that will
13 | cause problems: the code will get executed twice:
14 | - When you run `python -m sqlparse` python will execute
15 | ``__main__.py`` as a script. That means there won't be any
16 | ``sqlparse.__main__`` in ``sys.modules``.
17 | - When you import __main__ it will get executed again (as a module) because
18 | there's no ``sqlparse.__main__`` in ``sys.modules``.
19 | Also see (1) from http://click.pocoo.org/5/setuptools/#setuptools-integration
20 | """
21 |
22 | import argparse
23 | import sys
24 | from io import TextIOWrapper
25 | from codecs import open, getreader
26 |
27 | import sqlparse
28 | from sqlparse.compat import PY2
29 | from sqlparse.exceptions import SQLParseError
30 |
31 |
32 | # TODO: Add CLI Tests
33 | # TODO: Simplify formatter by using argparse `type` arguments
34 | def create_parser():
35 | _CASE_CHOICES = ['upper', 'lower', 'capitalize']
36 |
37 | parser = argparse.ArgumentParser(
38 | prog='sqlformat',
39 | description='Format FILE according to OPTIONS. Use "-" as FILE '
40 | 'to read from stdin.',
41 | usage='%(prog)s [OPTIONS] FILE, ...',
42 | )
43 |
44 | parser.add_argument('filename')
45 |
46 | parser.add_argument(
47 | '-o', '--outfile',
48 | dest='outfile',
49 | metavar='FILE',
50 | help='write output to FILE (defaults to stdout)')
51 |
52 | parser.add_argument(
53 | '--version',
54 | action='version',
55 | version=sqlparse.__version__)
56 |
57 | group = parser.add_argument_group('Formatting Options')
58 |
59 | group.add_argument(
60 | '-k', '--keywords',
61 | metavar='CHOICE',
62 | dest='keyword_case',
63 | choices=_CASE_CHOICES,
64 | help='change case of keywords, CHOICE is one of {0}'.format(
65 | ', '.join('"{0}"'.format(x) for x in _CASE_CHOICES)))
66 |
67 | group.add_argument(
68 | '-i', '--identifiers',
69 | metavar='CHOICE',
70 | dest='identifier_case',
71 | choices=_CASE_CHOICES,
72 | help='change case of identifiers, CHOICE is one of {0}'.format(
73 | ', '.join('"{0}"'.format(x) for x in _CASE_CHOICES)))
74 |
75 | group.add_argument(
76 | '-l', '--language',
77 | metavar='LANG',
78 | dest='output_format',
79 | choices=['python', 'php'],
80 | help='output a snippet in programming language LANG, '
81 | 'choices are "python", "php"')
82 |
83 | group.add_argument(
84 | '--strip-comments',
85 | dest='strip_comments',
86 | action='store_true',
87 | default=False,
88 | help='remove comments')
89 |
90 | group.add_argument(
91 | '-r', '--reindent',
92 | dest='reindent',
93 | action='store_true',
94 | default=False,
95 | help='reindent statements')
96 |
97 | group.add_argument(
98 | '--indent_width',
99 | dest='indent_width',
100 | default=2,
101 | type=int,
102 | help='indentation width (defaults to 2 spaces)')
103 |
104 | group.add_argument(
105 | '-a', '--reindent_aligned',
106 | action='store_true',
107 | default=False,
108 | help='reindent statements to aligned format')
109 |
110 | group.add_argument(
111 | '-s', '--use_space_around_operators',
112 | action='store_true',
113 | default=False,
114 | help='place spaces around mathematical operators')
115 |
116 | group.add_argument(
117 | '--wrap_after',
118 | dest='wrap_after',
119 | default=0,
120 | type=int,
121 | help='Column after which lists should be wrapped')
122 |
123 | group.add_argument(
124 | '--comma_first',
125 | dest='comma_first',
126 | default=False,
127 | type=bool,
128 | help='Insert linebreak before comma (default False)')
129 |
130 | group.add_argument(
131 | '--encoding',
132 | dest='encoding',
133 | default='utf-8',
134 | help='Specify the input encoding (default utf-8)')
135 |
136 | return parser
137 |
138 |
139 | def _error(msg):
140 |     """Print msg to stderr and return error code 1."""
141 | sys.stderr.write(u'[ERROR] {0}\n'.format(msg))
142 | return 1
143 |
144 |
145 | def main(args=None):
146 | parser = create_parser()
147 | args = parser.parse_args(args)
148 |
149 | if args.filename == '-': # read from stdin
150 | if PY2:
151 | data = getreader(args.encoding)(sys.stdin).read()
152 | else:
153 | data = TextIOWrapper(
154 | sys.stdin.buffer, encoding=args.encoding).read()
155 | else:
156 | try:
157 | data = ''.join(open(args.filename, 'r', args.encoding).readlines())
158 | except IOError as e:
159 | return _error(
160 | u'Failed to read {0}: {1}'.format(args.filename, e))
161 |
162 | if args.outfile:
163 | try:
164 | stream = open(args.outfile, 'w', args.encoding)
165 | except IOError as e:
166 | return _error(u'Failed to open {0}: {1}'.format(args.outfile, e))
167 | else:
168 | stream = sys.stdout
169 |
170 | formatter_opts = vars(args)
171 | try:
172 | formatter_opts = sqlparse.formatter.validate_options(formatter_opts)
173 | except SQLParseError as e:
174 | return _error(u'Invalid options: {0}'.format(e))
175 |
176 | s = sqlparse.format(data, **formatter_opts)
177 | stream.write(s)
178 | stream.flush()
179 | return 0
180 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/compat.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """Python 2/3 compatibility.
9 |
10 | This module only exists to avoid a dependency on six
11 | for very trivial stuff. We only need to take care of
12 | string types, buffers and metaclasses.
13 |
14 | Parts of the code is copied directly from six:
15 | https://bitbucket.org/gutworth/six
16 | """
17 |
18 | import sys
19 | from io import TextIOBase
20 |
21 | PY2 = sys.version_info[0] == 2
22 | PY3 = sys.version_info[0] == 3
23 |
24 |
25 | if PY3:
26 | def unicode_compatible(cls):
27 | return cls
28 |
29 | bytes_type = bytes
30 | text_type = str
31 | string_types = (str,)
32 | from io import StringIO
33 | file_types = (StringIO, TextIOBase)
34 |
35 |
36 | elif PY2:
37 | def unicode_compatible(cls):
38 | cls.__unicode__ = cls.__str__
39 | cls.__str__ = lambda x: x.__unicode__().encode('utf-8')
40 | return cls
41 |
42 | bytes_type = str
43 | text_type = unicode
44 | string_types = (str, unicode,)
45 | from StringIO import StringIO
46 | file_types = (file, StringIO, TextIOBase)
47 |
48 |
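The two branches above can be folded into a single decorator for illustration. This standalone sketch (the `Tok` class is hypothetical, not part of sqlparse) shows what `unicode_compatible` does on either interpreter:

```python
import sys

PY2 = sys.version_info[0] == 2


def unicode_compatible(cls):
    # On Python 3, str is already unicode; nothing needs rewiring.
    if not PY2:
        return cls
    # On Python 2, move __str__ to __unicode__ and make __str__ return bytes.
    cls.__unicode__ = cls.__str__
    cls.__str__ = lambda self: self.__unicode__().encode('utf-8')
    return cls


@unicode_compatible
class Tok(object):  # hypothetical stand-in for sqlparse's token classes
    def __str__(self):
        return u'tok'


print(str(Tok()))  # tok
```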
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/engine/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse.engine import grouping
9 | from sqlparse.engine.filter_stack import FilterStack
10 | from sqlparse.engine.statement_splitter import StatementSplitter
11 |
12 | __all__ = [
13 | 'grouping',
14 | 'FilterStack',
15 | 'StatementSplitter',
16 | ]
17 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/engine/filter_stack.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """filter"""
9 |
10 | from sqlparse import lexer
11 | from sqlparse.engine import grouping
12 | from sqlparse.engine.statement_splitter import StatementSplitter
13 |
14 |
15 | class FilterStack(object):
16 | def __init__(self):
17 | self.preprocess = []
18 | self.stmtprocess = []
19 | self.postprocess = []
20 | self._grouping = False
21 |
22 | def enable_grouping(self):
23 | self._grouping = True
24 |
25 | def run(self, sql, encoding=None):
26 | stream = lexer.tokenize(sql, encoding)
27 | # Process token stream
28 | for filter_ in self.preprocess:
29 | stream = filter_.process(stream)
30 |
31 | stream = StatementSplitter().process(stream)
32 |
33 | # Output: Stream processed Statements
34 | for stmt in stream:
35 | if self._grouping:
36 | stmt = grouping.group(stmt)
37 |
38 | for filter_ in self.stmtprocess:
39 | filter_.process(stmt)
40 |
41 | for filter_ in self.postprocess:
42 | stmt = filter_.process(stmt)
43 |
44 | yield stmt
45 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/engine/grouping.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql
9 | from sqlparse import tokens as T
10 | from sqlparse.utils import recurse, imt
11 |
12 | T_NUMERICAL = (T.Number, T.Number.Integer, T.Number.Float)
13 | T_STRING = (T.String, T.String.Single, T.String.Symbol)
14 | T_NAME = (T.Name, T.Name.Placeholder)
15 |
16 |
17 | def _group_matching(tlist, cls):
18 |     """Groups tokens that have a matching beginning and end."""
19 | opens = []
20 | tidx_offset = 0
21 | for idx, token in enumerate(list(tlist)):
22 | tidx = idx - tidx_offset
23 |
24 | if token.is_whitespace:
25 |             # ~50% of tokens will be whitespace. Checking for them early
26 |             # avoids 3 comparisons for those, but adds 1 extra comparison
27 |             # for the other ~50% of tokens.
28 | continue
29 |
30 | if token.is_group and not isinstance(token, cls):
31 |             # Check inside previously grouped tokens (e.g. a parenthesis)
32 |             # for a group of a different type (e.g. a case). Ideally this
33 |             # should check for all open/close tokens at once to avoid recursion
34 | _group_matching(token, cls)
35 | continue
36 |
37 | if token.match(*cls.M_OPEN):
38 | opens.append(tidx)
39 |
40 | elif token.match(*cls.M_CLOSE):
41 | try:
42 | open_idx = opens.pop()
43 | except IndexError:
44 |                 # This indicates invalid SQL with unbalanced tokens.
45 |                 # Instead of breaking, continue in case other valid groups exist
46 | continue
47 | close_idx = tidx
48 | tlist.group_tokens(cls, open_idx, close_idx)
49 | tidx_offset += close_idx - open_idx
50 |
51 |
52 | def group_brackets(tlist):
53 | _group_matching(tlist, sql.SquareBrackets)
54 |
55 |
56 | def group_parenthesis(tlist):
57 | _group_matching(tlist, sql.Parenthesis)
58 |
59 |
60 | def group_case(tlist):
61 | _group_matching(tlist, sql.Case)
62 |
63 |
64 | def group_if(tlist):
65 | _group_matching(tlist, sql.If)
66 |
67 |
68 | def group_for(tlist):
69 | _group_matching(tlist, sql.For)
70 |
71 |
72 | def group_begin(tlist):
73 | _group_matching(tlist, sql.Begin)
74 |
75 |
76 | def group_typecasts(tlist):
77 | def match(token):
78 | return token.match(T.Punctuation, '::')
79 |
80 | def valid(token):
81 | return token is not None
82 |
83 | def post(tlist, pidx, tidx, nidx):
84 | return pidx, nidx
85 |
86 | valid_prev = valid_next = valid
87 | _group(tlist, sql.Identifier, match, valid_prev, valid_next, post)
88 |
89 |
90 | def group_period(tlist):
91 | def match(token):
92 | return token.match(T.Punctuation, '.')
93 |
94 | def valid_prev(token):
95 | sqlcls = sql.SquareBrackets, sql.Identifier
96 | ttypes = T.Name, T.String.Symbol
97 | return imt(token, i=sqlcls, t=ttypes)
98 |
99 | def valid_next(token):
100 | # issue261, allow invalid next token
101 | return True
102 |
103 | def post(tlist, pidx, tidx, nidx):
104 | # next_ validation is being performed here. issue261
105 | sqlcls = sql.SquareBrackets, sql.Function
106 | ttypes = T.Name, T.String.Symbol, T.Wildcard
107 | next_ = tlist[nidx] if nidx is not None else None
108 | valid_next = imt(next_, i=sqlcls, t=ttypes)
109 |
110 | return (pidx, nidx) if valid_next else (pidx, tidx)
111 |
112 | _group(tlist, sql.Identifier, match, valid_prev, valid_next, post)
113 |
114 |
115 | def group_as(tlist):
116 | def match(token):
117 | return token.is_keyword and token.normalized == 'AS'
118 |
119 | def valid_prev(token):
120 | return token.normalized == 'NULL' or not token.is_keyword
121 |
122 | def valid_next(token):
123 | ttypes = T.DML, T.DDL
124 | return not imt(token, t=ttypes) and token is not None
125 |
126 | def post(tlist, pidx, tidx, nidx):
127 | return pidx, nidx
128 |
129 | _group(tlist, sql.Identifier, match, valid_prev, valid_next, post)
130 |
131 |
132 | def group_assignment(tlist):
133 | def match(token):
134 | return token.match(T.Assignment, ':=')
135 |
136 | def valid(token):
137 | return token is not None
138 |
139 | def post(tlist, pidx, tidx, nidx):
140 | m_semicolon = T.Punctuation, ';'
141 | snidx, _ = tlist.token_next_by(m=m_semicolon, idx=nidx)
142 | nidx = snidx or nidx
143 | return pidx, nidx
144 |
145 | valid_prev = valid_next = valid
146 | _group(tlist, sql.Assignment, match, valid_prev, valid_next, post)
147 |
148 |
149 | def group_comparison(tlist):
150 | sqlcls = (sql.Parenthesis, sql.Function, sql.Identifier,
151 | sql.Operation)
152 | ttypes = T_NUMERICAL + T_STRING + T_NAME
153 |
154 | def match(token):
155 | return token.ttype == T.Operator.Comparison
156 |
157 | def valid(token):
158 | if imt(token, t=ttypes, i=sqlcls):
159 | return True
160 | elif token and token.is_keyword and token.normalized == 'NULL':
161 | return True
162 | else:
163 | return False
164 |
165 | def post(tlist, pidx, tidx, nidx):
166 | return pidx, nidx
167 |
168 | valid_prev = valid_next = valid
169 | _group(tlist, sql.Comparison, match,
170 | valid_prev, valid_next, post, extend=False)
171 |
172 |
173 | @recurse(sql.Identifier)
174 | def group_identifier(tlist):
175 | ttypes = (T.String.Symbol, T.Name)
176 |
177 | tidx, token = tlist.token_next_by(t=ttypes)
178 | while token:
179 | tlist.group_tokens(sql.Identifier, tidx, tidx)
180 | tidx, token = tlist.token_next_by(t=ttypes, idx=tidx)
181 |
182 |
183 | def group_arrays(tlist):
184 | sqlcls = sql.SquareBrackets, sql.Identifier, sql.Function
185 | ttypes = T.Name, T.String.Symbol
186 |
187 | def match(token):
188 | return isinstance(token, sql.SquareBrackets)
189 |
190 | def valid_prev(token):
191 | return imt(token, i=sqlcls, t=ttypes)
192 |
193 | def valid_next(token):
194 | return True
195 |
196 | def post(tlist, pidx, tidx, nidx):
197 | return pidx, tidx
198 |
199 | _group(tlist, sql.Identifier, match,
200 | valid_prev, valid_next, post, extend=True, recurse=False)
201 |
202 |
203 | def group_operator(tlist):
204 | ttypes = T_NUMERICAL + T_STRING + T_NAME
205 | sqlcls = (sql.SquareBrackets, sql.Parenthesis, sql.Function,
206 | sql.Identifier, sql.Operation)
207 |
208 | def match(token):
209 | return imt(token, t=(T.Operator, T.Wildcard))
210 |
211 | def valid(token):
212 | return imt(token, i=sqlcls, t=ttypes)
213 |
214 | def post(tlist, pidx, tidx, nidx):
215 | tlist[tidx].ttype = T.Operator
216 | return pidx, nidx
217 |
218 | valid_prev = valid_next = valid
219 | _group(tlist, sql.Operation, match,
220 | valid_prev, valid_next, post, extend=False)
221 |
222 |
223 | def group_identifier_list(tlist):
224 | m_role = T.Keyword, ('null', 'role')
225 | sqlcls = (sql.Function, sql.Case, sql.Identifier, sql.Comparison,
226 | sql.IdentifierList, sql.Operation)
227 | ttypes = (T_NUMERICAL + T_STRING + T_NAME +
228 | (T.Keyword, T.Comment, T.Wildcard))
229 |
230 | def match(token):
231 | return token.match(T.Punctuation, ',')
232 |
233 | def valid(token):
234 | return imt(token, i=sqlcls, m=m_role, t=ttypes)
235 |
236 | def post(tlist, pidx, tidx, nidx):
237 | return pidx, nidx
238 |
239 | valid_prev = valid_next = valid
240 | _group(tlist, sql.IdentifierList, match,
241 | valid_prev, valid_next, post, extend=True)
242 |
243 |
244 | @recurse(sql.Comment)
245 | def group_comments(tlist):
246 | tidx, token = tlist.token_next_by(t=T.Comment)
247 | while token:
248 | eidx, end = tlist.token_not_matching(
249 | lambda tk: imt(tk, t=T.Comment) or tk.is_whitespace, idx=tidx)
250 | if end is not None:
251 | eidx, end = tlist.token_prev(eidx, skip_ws=False)
252 | tlist.group_tokens(sql.Comment, tidx, eidx)
253 |
254 | tidx, token = tlist.token_next_by(t=T.Comment, idx=tidx)
255 |
256 |
257 | @recurse(sql.Where)
258 | def group_where(tlist):
259 | tidx, token = tlist.token_next_by(m=sql.Where.M_OPEN)
260 | while token:
261 | eidx, end = tlist.token_next_by(m=sql.Where.M_CLOSE, idx=tidx)
262 |
263 | if end is None:
264 | end = tlist._groupable_tokens[-1]
265 | else:
266 | end = tlist.tokens[eidx - 1]
267 | # TODO: convert this to eidx instead of end token.
268 | # i think above values are len(tlist) and eidx-1
269 | eidx = tlist.token_index(end)
270 | tlist.group_tokens(sql.Where, tidx, eidx)
271 | tidx, token = tlist.token_next_by(m=sql.Where.M_OPEN, idx=tidx)
272 |
273 |
274 | @recurse()
275 | def group_aliased(tlist):
276 | I_ALIAS = (sql.Parenthesis, sql.Function, sql.Case, sql.Identifier,
277 | sql.Operation)
278 |
279 | tidx, token = tlist.token_next_by(i=I_ALIAS, t=T.Number)
280 | while token:
281 | nidx, next_ = tlist.token_next(tidx)
282 | if isinstance(next_, sql.Identifier):
283 | tlist.group_tokens(sql.Identifier, tidx, nidx, extend=True)
284 | tidx, token = tlist.token_next_by(i=I_ALIAS, t=T.Number, idx=tidx)
285 |
286 |
287 | @recurse(sql.Function)
288 | def group_functions(tlist):
289 | has_create = False
290 | has_table = False
291 | for tmp_token in tlist.tokens:
292 | if tmp_token.value == 'CREATE':
293 | has_create = True
294 | if tmp_token.value == 'TABLE':
295 | has_table = True
296 | if has_create and has_table:
297 | return
298 |
299 | tidx, token = tlist.token_next_by(t=T.Name)
300 | while token:
301 | nidx, next_ = tlist.token_next(tidx)
302 | if isinstance(next_, sql.Parenthesis):
303 | tlist.group_tokens(sql.Function, tidx, nidx)
304 | tidx, token = tlist.token_next_by(t=T.Name, idx=tidx)
305 |
306 |
307 | def group_order(tlist):
308 | """Group together Identifier and Asc/Desc token"""
309 | tidx, token = tlist.token_next_by(t=T.Keyword.Order)
310 | while token:
311 | pidx, prev_ = tlist.token_prev(tidx)
312 | if imt(prev_, i=sql.Identifier, t=T.Number):
313 | tlist.group_tokens(sql.Identifier, pidx, tidx)
314 | tidx = pidx
315 | tidx, token = tlist.token_next_by(t=T.Keyword.Order, idx=tidx)
316 |
317 |
318 | @recurse()
319 | def align_comments(tlist):
320 | tidx, token = tlist.token_next_by(i=sql.Comment)
321 | while token:
322 | pidx, prev_ = tlist.token_prev(tidx)
323 | if isinstance(prev_, sql.TokenList):
324 | tlist.group_tokens(sql.TokenList, pidx, tidx, extend=True)
325 | tidx = pidx
326 | tidx, token = tlist.token_next_by(i=sql.Comment, idx=tidx)
327 |
328 |
329 | def group(stmt):
330 | for func in [
331 | group_comments,
332 |
333 | # _group_matching
334 | group_brackets,
335 | group_parenthesis,
336 | group_case,
337 | group_if,
338 | group_for,
339 | group_begin,
340 |
341 | group_functions,
342 | group_where,
343 | group_period,
344 | group_arrays,
345 | group_identifier,
346 | group_order,
347 | group_typecasts,
348 | group_operator,
349 | group_as,
350 | group_aliased,
351 | group_assignment,
352 | group_comparison,
353 |
354 | align_comments,
355 | group_identifier_list,
356 | ]:
357 | func(stmt)
358 | return stmt
359 |
360 |
361 | def _group(tlist, cls, match,
362 | valid_prev=lambda t: True,
363 | valid_next=lambda t: True,
364 | post=None,
365 | extend=True,
366 | recurse=True
367 | ):
368 |     """Groups together tokens that are joined by a middle token, e.g. x < y"""
369 |
370 | tidx_offset = 0
371 | pidx, prev_ = None, None
372 | for idx, token in enumerate(list(tlist)):
373 | tidx = idx - tidx_offset
374 |
375 | if token.is_whitespace:
376 | continue
377 |
378 | if recurse and token.is_group and not isinstance(token, cls):
379 | _group(token, cls, match, valid_prev, valid_next, post, extend)
380 |
381 | if match(token):
382 | nidx, next_ = tlist.token_next(tidx)
383 | if prev_ and valid_prev(prev_) and valid_next(next_):
384 | from_idx, to_idx = post(tlist, pidx, tidx, nidx)
385 | grp = tlist.group_tokens(cls, from_idx, to_idx, extend=extend)
386 |
387 | tidx_offset += to_idx - from_idx
388 | pidx, prev_ = from_idx, grp
389 | continue
390 |
391 | pidx, prev_ = tidx, token
392 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/engine/statement_splitter.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql, tokens as T
9 |
10 |
11 | class StatementSplitter(object):
12 |     """Filter that splits a token stream into individual statements"""
13 |
14 | def __init__(self):
15 | self._reset()
16 |
17 | def _reset(self):
18 |         """Set the filter attributes to their default values"""
19 | self._in_declare = False
20 | self._is_create = False
21 | self._begin_depth = 0
22 |
23 | self.consume_ws = False
24 | self.tokens = []
25 | self.level = 0
26 |
27 | def _change_splitlevel(self, ttype, value):
28 | """Get the new split level (increase, decrease or remain equal)"""
29 | # ANSI
30 |         # if it's a normal (non-keyword) token, return
31 |         # wouldn't a parenthesis increase/decrease the level?
32 |         # no, a new statement can't start inside a parenthesis
33 | if ttype not in T.Keyword:
34 | return 0
35 |
36 |         # Everything after here has ttype = T.Keyword
37 |         # Also note: once one of the branches below matches, it returns
38 |         # immediately
39 | unified = value.upper()
40 |
41 |         # three keywords begin with CREATE, but only one of them is DDL
42 |         # the DDL CREATE, though, can contain more words such as "OR REPLACE"
43 | if ttype is T.Keyword.DDL and unified.startswith('CREATE'):
44 | self._is_create = True
45 | return 0
46 |
47 |         # can have nested DECLARE inside of BEGIN...
48 | if unified == 'DECLARE' and self._is_create and self._begin_depth == 0:
49 | self._in_declare = True
50 | return 1
51 |
52 | if unified == 'BEGIN':
53 | self._begin_depth += 1
54 | if self._is_create:
55 | # FIXME(andi): This makes no sense.
56 | return 1
57 | return 0
58 |
59 |         # Should this respect a preceding BEGIN?
60 |         # In CASE ... WHEN ... END this results in a split level -1.
61 |         # Would having multiple CASE WHEN END and an Assignment Operator
62 |         # cause the statement to cut off prematurely?
63 | if unified == 'END':
64 | self._begin_depth = max(0, self._begin_depth - 1)
65 | return -1
66 |
67 | if (unified in ('IF', 'FOR', 'WHILE') and
68 | self._is_create and self._begin_depth > 0):
69 | return 1
70 |
71 | if unified in ('END IF', 'END FOR', 'END WHILE'):
72 | return -1
73 |
74 | # Default
75 | return 0
76 |
77 | def process(self, stream):
78 | """Process the stream"""
79 | EOS_TTYPE = T.Whitespace, T.Comment.Single
80 |
81 | # Run over all stream tokens
82 | for ttype, value in stream:
83 |             # Yield the pending statement if one was finished and the current
84 |             # token is not trailing whitespace. Note that a newline token
85 |             # counts as non-whitespace here: in this context "whitespace"
86 |             # excludes newlines. Why don't multi-line comments also count?
87 | if self.consume_ws and ttype not in EOS_TTYPE:
88 | yield sql.Statement(self.tokens)
89 |
90 | # Reset filter and prepare to process next statement
91 | self._reset()
92 |
93 | # Change current split level (increase, decrease or remain equal)
94 | self.level += self._change_splitlevel(ttype, value)
95 |
96 | # Append the token to the current statement
97 | self.tokens.append(sql.Token(ttype, value))
98 |
99 | # Check if we get the end of a statement
100 | if self.level <= 0 and ttype is T.Punctuation and value == ';':
101 | self.consume_ws = True
102 |
103 | # Yield pending statement (if any)
104 | if self.tokens:
105 | yield sql.Statement(self.tokens)
106 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/exceptions.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """Exceptions used in this package."""
9 |
10 |
11 | class SQLParseError(Exception):
12 | """Base class for exceptions in this module."""
13 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/filters/__init__.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse.filters.others import SerializerUnicode
9 | from sqlparse.filters.others import StripCommentsFilter
10 | from sqlparse.filters.others import StripWhitespaceFilter
11 | from sqlparse.filters.others import SpacesAroundOperatorsFilter
12 |
13 | from sqlparse.filters.output import OutputPHPFilter
14 | from sqlparse.filters.output import OutputPythonFilter
15 |
16 | from sqlparse.filters.tokens import KeywordCaseFilter
17 | from sqlparse.filters.tokens import IdentifierCaseFilter
18 | from sqlparse.filters.tokens import TruncateStringFilter
19 |
20 | from sqlparse.filters.reindent import ReindentFilter
21 | from sqlparse.filters.right_margin import RightMarginFilter
22 | from sqlparse.filters.aligned_indent import AlignedIndentFilter
23 |
24 | __all__ = [
25 | 'SerializerUnicode',
26 | 'StripCommentsFilter',
27 | 'StripWhitespaceFilter',
28 | 'SpacesAroundOperatorsFilter',
29 |
30 | 'OutputPHPFilter',
31 | 'OutputPythonFilter',
32 |
33 | 'KeywordCaseFilter',
34 | 'IdentifierCaseFilter',
35 | 'TruncateStringFilter',
36 |
37 | 'ReindentFilter',
38 | 'RightMarginFilter',
39 | 'AlignedIndentFilter',
40 | ]
41 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/filters/aligned_indent.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql, tokens as T
9 | from sqlparse.compat import text_type
10 | from sqlparse.utils import offset, indent
11 |
12 |
13 | class AlignedIndentFilter(object):
14 | join_words = (r'((LEFT\s+|RIGHT\s+|FULL\s+)?'
15 | r'(INNER\s+|OUTER\s+|STRAIGHT\s+)?|'
16 | r'(CROSS\s+|NATURAL\s+)?)?JOIN\b')
17 | split_words = ('FROM',
18 | join_words, 'ON',
19 | 'WHERE', 'AND', 'OR',
20 | 'GROUP', 'HAVING', 'LIMIT',
21 | 'ORDER', 'UNION', 'VALUES',
22 | 'SET', 'BETWEEN', 'EXCEPT')
23 |
24 | def __init__(self, char=' ', n='\n'):
25 | self.n = n
26 | self.offset = 0
27 | self.indent = 0
28 | self.char = char
29 | self._max_kwd_len = len('select')
30 |
31 | def nl(self, offset=1):
32 |         # offset = 1 represents a single space after SELECT
33 | offset = -len(offset) if not isinstance(offset, int) else offset
34 | # add two for the space and parens
35 | indent = self.indent * (2 + self._max_kwd_len)
36 |
37 | return sql.Token(T.Whitespace, self.n + self.char * (
38 | self._max_kwd_len + offset + indent + self.offset))
39 |
40 | def _process_statement(self, tlist):
41 | if tlist.tokens[0].is_whitespace and self.indent == 0:
42 | tlist.tokens.pop(0)
43 |
44 | # process the main query body
45 | self._process(sql.TokenList(tlist.tokens))
46 |
47 | def _process_parenthesis(self, tlist):
48 | # if this isn't a subquery, don't re-indent
49 | _, token = tlist.token_next_by(m=(T.DML, 'SELECT'))
50 | if token is not None:
51 | with indent(self):
52 | tlist.insert_after(tlist[0], self.nl('SELECT'))
53 |                 # process the inside of the parentheses
54 | self._process_default(tlist)
55 |
56 | # de-indent last parenthesis
57 | tlist.insert_before(tlist[-1], self.nl())
58 |
59 | def _process_identifierlist(self, tlist):
60 | # columns being selected
61 | identifiers = list(tlist.get_identifiers())
62 | identifiers.pop(0)
63 | [tlist.insert_before(token, self.nl()) for token in identifiers]
64 | self._process_default(tlist)
65 |
66 | def _process_case(self, tlist):
67 | offset_ = len('case ') + len('when ')
68 | cases = tlist.get_cases(skip_ws=True)
69 | # align the end as well
70 | end_token = tlist.token_next_by(m=(T.Keyword, 'END'))[1]
71 | cases.append((None, [end_token]))
72 |
73 | condition_width = [len(' '.join(map(text_type, cond))) if cond else 0
74 | for cond, _ in cases]
75 | max_cond_width = max(condition_width)
76 |
77 | for i, (cond, value) in enumerate(cases):
78 |             # cond is None for ELSE and END
79 | stmt = cond[0] if cond else value[0]
80 |
81 | if i > 0:
82 | tlist.insert_before(stmt, self.nl(
83 | offset_ - len(text_type(stmt))))
84 | if cond:
85 | ws = sql.Token(T.Whitespace, self.char * (
86 | max_cond_width - condition_width[i]))
87 | tlist.insert_after(cond[-1], ws)
88 |
89 | def _next_token(self, tlist, idx=-1):
90 | split_words = T.Keyword, self.split_words, True
91 | tidx, token = tlist.token_next_by(m=split_words, idx=idx)
92 | # treat "BETWEEN x and y" as a single statement
93 | if token and token.normalized == 'BETWEEN':
94 | tidx, token = self._next_token(tlist, tidx)
95 | if token and token.normalized == 'AND':
96 | tidx, token = self._next_token(tlist, tidx)
97 | return tidx, token
98 |
99 | def _split_kwds(self, tlist):
100 | tidx, token = self._next_token(tlist)
101 | while token:
102 |             # Joins are a special case: only the first word is used as aligner
103 | if token.match(T.Keyword, self.join_words, regex=True):
104 | token_indent = token.value.split()[0]
105 | else:
106 | token_indent = text_type(token)
107 | tlist.insert_before(token, self.nl(token_indent))
108 | tidx += 1
109 | tidx, token = self._next_token(tlist, tidx)
110 |
111 | def _process_default(self, tlist):
112 | self._split_kwds(tlist)
113 | # process any sub-sub statements
114 | for sgroup in tlist.get_sublists():
115 | idx = tlist.token_index(sgroup)
116 | pidx, prev_ = tlist.token_prev(idx)
117 | # HACK: make "group/order by" work. Longer than max_len.
118 | offset_ = 3 if (prev_ and prev_.match(T.Keyword, 'BY')) else 0
119 | with offset(self, offset_):
120 | self._process(sgroup)
121 |
122 | def _process(self, tlist):
123 | func_name = '_process_{cls}'.format(cls=type(tlist).__name__)
124 | func = getattr(self, func_name.lower(), self._process_default)
125 | func(tlist)
126 |
127 | def process(self, stmt):
128 | self._process(stmt)
129 | return stmt
130 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/filters/others.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql, tokens as T
9 | from sqlparse.utils import split_unquoted_newlines
10 |
11 |
12 | class StripCommentsFilter(object):
13 | @staticmethod
14 | def _process(tlist):
15 | def get_next_comment():
16 | # TODO(andi) Comment types should be unified, see related issue38
17 | return tlist.token_next_by(i=sql.Comment, t=T.Comment)
18 |
19 | tidx, token = get_next_comment()
20 | while token:
21 | pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
22 | nidx, next_ = tlist.token_next(tidx, skip_ws=False)
23 |             # Replace the comment with whitespace if prev and next exist and
24 |             # neither is whitespace. This doesn't apply if prev or next is a parenthesis.
25 | if (prev_ is None or next_ is None or
26 | prev_.is_whitespace or prev_.match(T.Punctuation, '(') or
27 | next_.is_whitespace or next_.match(T.Punctuation, ')')):
28 | tlist.tokens.remove(token)
29 | else:
30 | tlist.tokens[tidx] = sql.Token(T.Whitespace, ' ')
31 |
32 | tidx, token = get_next_comment()
33 |
34 | def process(self, stmt):
35 | [self.process(sgroup) for sgroup in stmt.get_sublists()]
36 | StripCommentsFilter._process(stmt)
37 | return stmt
38 |
39 |
40 | class StripWhitespaceFilter(object):
41 | def _stripws(self, tlist):
42 | func_name = '_stripws_{cls}'.format(cls=type(tlist).__name__)
43 | func = getattr(self, func_name.lower(), self._stripws_default)
44 | func(tlist)
45 |
46 | @staticmethod
47 | def _stripws_default(tlist):
48 | last_was_ws = False
49 | is_first_char = True
50 | for token in tlist.tokens:
51 | if token.is_whitespace:
52 | token.value = '' if last_was_ws or is_first_char else ' '
53 | last_was_ws = token.is_whitespace
54 | is_first_char = False
55 |
56 | def _stripws_identifierlist(self, tlist):
57 | # Removes newlines before commas, see issue140
58 | last_nl = None
59 | for token in list(tlist.tokens):
60 | if last_nl and token.ttype is T.Punctuation and token.value == ',':
61 | tlist.tokens.remove(last_nl)
62 | last_nl = token if token.is_whitespace else None
63 |
64 | # next_ = tlist.token_next(token, skip_ws=False)
65 | # if (next_ and not next_.is_whitespace and
66 | # token.ttype is T.Punctuation and token.value == ','):
67 | # tlist.insert_after(token, sql.Token(T.Whitespace, ' '))
68 | return self._stripws_default(tlist)
69 |
70 | def _stripws_parenthesis(self, tlist):
71 | if tlist.tokens[1].is_whitespace:
72 | tlist.tokens.pop(1)
73 | if tlist.tokens[-2].is_whitespace:
74 | tlist.tokens.pop(-2)
75 | self._stripws_default(tlist)
76 |
77 | def process(self, stmt, depth=0):
78 | [self.process(sgroup, depth + 1) for sgroup in stmt.get_sublists()]
79 | self._stripws(stmt)
80 | if depth == 0 and stmt.tokens and stmt.tokens[-1].is_whitespace:
81 | stmt.tokens.pop(-1)
82 | return stmt
83 |
84 |
85 | class SpacesAroundOperatorsFilter(object):
86 | @staticmethod
87 | def _process(tlist):
88 |
89 | ttypes = (T.Operator, T.Comparison)
90 | tidx, token = tlist.token_next_by(t=ttypes)
91 | while token:
92 | nidx, next_ = tlist.token_next(tidx, skip_ws=False)
93 | if next_ and next_.ttype != T.Whitespace:
94 | tlist.insert_after(tidx, sql.Token(T.Whitespace, ' '))
95 |
96 | pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
97 | if prev_ and prev_.ttype != T.Whitespace:
98 | tlist.insert_before(tidx, sql.Token(T.Whitespace, ' '))
99 | tidx += 1 # has to shift since token inserted before it
100 |
101 | # assert tlist.token_index(token) == tidx
102 | tidx, token = tlist.token_next_by(t=ttypes, idx=tidx)
103 |
104 | def process(self, stmt):
105 | [self.process(sgroup) for sgroup in stmt.get_sublists()]
106 | SpacesAroundOperatorsFilter._process(stmt)
107 | return stmt
108 |
109 |
110 | # ---------------------------
111 | # postprocess
112 |
113 | class SerializerUnicode(object):
114 | @staticmethod
115 | def process(stmt):
116 | lines = split_unquoted_newlines(stmt)
117 | return '\n'.join(line.rstrip() for line in lines)
118 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/filters/output.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql, tokens as T
9 | from sqlparse.compat import text_type
10 |
11 |
12 | class OutputFilter(object):
13 | varname_prefix = ''
14 |
15 | def __init__(self, varname='sql'):
16 | self.varname = self.varname_prefix + varname
17 | self.count = 0
18 |
19 | def _process(self, stream, varname, has_nl):
20 | raise NotImplementedError
21 |
22 | def process(self, stmt):
23 | self.count += 1
24 | if self.count > 1:
25 | varname = u'{f.varname}{f.count}'.format(f=self)
26 | else:
27 | varname = self.varname
28 |
29 | has_nl = len(text_type(stmt).strip().splitlines()) > 1
30 | stmt.tokens = self._process(stmt.tokens, varname, has_nl)
31 | return stmt
32 |
33 |
34 | class OutputPythonFilter(OutputFilter):
35 | def _process(self, stream, varname, has_nl):
36 |         # SQL query assignment to varname
37 | if self.count > 1:
38 | yield sql.Token(T.Whitespace, '\n')
39 | yield sql.Token(T.Name, varname)
40 | yield sql.Token(T.Whitespace, ' ')
41 | yield sql.Token(T.Operator, '=')
42 | yield sql.Token(T.Whitespace, ' ')
43 | if has_nl:
44 | yield sql.Token(T.Operator, '(')
45 | yield sql.Token(T.Text, "'")
46 |
47 | # Print the tokens on the quote
48 | for token in stream:
49 | # Token is a new line separator
50 | if token.is_whitespace and '\n' in token.value:
51 | # Close quote and add a new line
52 | yield sql.Token(T.Text, " '")
53 | yield sql.Token(T.Whitespace, '\n')
54 |
55 | # Quote header on secondary lines
56 | yield sql.Token(T.Whitespace, ' ' * (len(varname) + 4))
57 | yield sql.Token(T.Text, "'")
58 |
59 | # Indentation
60 | after_lb = token.value.split('\n', 1)[1]
61 | if after_lb:
62 | yield sql.Token(T.Whitespace, after_lb)
63 | continue
64 |
65 | # Token has escape chars
66 | elif "'" in token.value:
67 | token.value = token.value.replace("'", "\\'")
68 |
69 | # Put the token
70 | yield sql.Token(T.Text, token.value)
71 |
72 | # Close quote
73 | yield sql.Token(T.Text, "'")
74 | if has_nl:
75 | yield sql.Token(T.Operator, ')')
76 |
77 |
78 | class OutputPHPFilter(OutputFilter):
79 | varname_prefix = '$'
80 |
81 | def _process(self, stream, varname, has_nl):
82 |         # SQL query assignment to varname (quote header)
83 | if self.count > 1:
84 | yield sql.Token(T.Whitespace, '\n')
85 | yield sql.Token(T.Name, varname)
86 | yield sql.Token(T.Whitespace, ' ')
87 | if has_nl:
88 | yield sql.Token(T.Whitespace, ' ')
89 | yield sql.Token(T.Operator, '=')
90 | yield sql.Token(T.Whitespace, ' ')
91 | yield sql.Token(T.Text, '"')
92 |
93 | # Print the tokens on the quote
94 | for token in stream:
95 | # Token is a new line separator
96 | if token.is_whitespace and '\n' in token.value:
97 | # Close quote and add a new line
98 | yield sql.Token(T.Text, ' ";')
99 | yield sql.Token(T.Whitespace, '\n')
100 |
101 | # Quote header on secondary lines
102 | yield sql.Token(T.Name, varname)
103 | yield sql.Token(T.Whitespace, ' ')
104 | yield sql.Token(T.Operator, '.=')
105 | yield sql.Token(T.Whitespace, ' ')
106 | yield sql.Token(T.Text, '"')
107 |
108 | # Indentation
109 | after_lb = token.value.split('\n', 1)[1]
110 | if after_lb:
111 | yield sql.Token(T.Whitespace, after_lb)
112 | continue
113 |
114 |             # Token contains quotes that need escaping
115 | elif '"' in token.value:
116 | token.value = token.value.replace('"', '\\"')
117 |
118 | # Put the token
119 | yield sql.Token(T.Text, token.value)
120 |
121 | # Close quote
122 | yield sql.Token(T.Text, '"')
123 | yield sql.Token(T.Punctuation, ';')
124 |
--------------------------------------------------------------------------------
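For orientation, a hypothetical standalone sketch (not part of the library) of the single-line case handled by `OutputPythonFilter` above: escape embedded single quotes and assign the statement to a variable. The real filter streams `sql.Token` objects and also handles multi-line statements by wrapping them in parentheses.

```python
def to_python_assignment(statement, varname='sql'):
    # Single-line case only: escape embedded single quotes and emit
    # a "varname = '...'" assignment, like OutputPythonFilter does.
    escaped = statement.replace("'", "\\'")
    return "{0} = '{1}'".format(varname, escaped)
```

For example, `to_python_assignment("select * from foo")` returns `sql = 'select * from foo'`.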
/SQLToolsAPI/lib/sqlparse/filters/reindent.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import sql, tokens as T
9 | from sqlparse.compat import text_type
10 | from sqlparse.utils import offset, indent
11 |
12 |
13 | class ReindentFilter(object):
14 | def __init__(self, width=2, char=' ', wrap_after=0, n='\n',
15 | comma_first=False):
16 | self.n = n
17 | self.width = width
18 | self.char = char
19 | self.indent = 0
20 | self.offset = 0
21 | self.wrap_after = wrap_after
22 | self.comma_first = comma_first
23 | self._curr_stmt = None
24 | self._last_stmt = None
25 |
26 | def _flatten_up_to_token(self, token):
27 |         """Yields all tokens preceding *token*, excluding *token* itself."""
28 | if token.is_group:
29 | token = next(token.flatten())
30 |
31 | for t in self._curr_stmt.flatten():
32 | if t == token:
33 | break
34 | yield t
35 |
36 | @property
37 | def leading_ws(self):
38 | return self.offset + self.indent * self.width
39 |
40 | def _get_offset(self, token):
41 | raw = u''.join(map(text_type, self._flatten_up_to_token(token)))
42 | line = (raw or '\n').splitlines()[-1]
43 | # Now take current offset into account and return relative offset.
44 | return len(line) - len(self.char * self.leading_ws)
45 |
46 | def nl(self, offset=0):
47 | return sql.Token(
48 | T.Whitespace,
49 | self.n + self.char * max(0, self.leading_ws + offset))
50 |
51 | def _next_token(self, tlist, idx=-1):
52 | split_words = ('FROM', 'STRAIGHT_JOIN$', 'JOIN$', 'AND', 'OR',
53 | 'GROUP', 'ORDER', 'UNION', 'VALUES',
54 | 'SET', 'BETWEEN', 'EXCEPT', 'HAVING', 'LIMIT')
55 | m_split = T.Keyword, split_words, True
56 | tidx, token = tlist.token_next_by(m=m_split, idx=idx)
57 |
58 | if token and token.normalized == 'BETWEEN':
59 | tidx, token = self._next_token(tlist, tidx)
60 |
61 | if token and token.normalized == 'AND':
62 | tidx, token = self._next_token(tlist, tidx)
63 |
64 | return tidx, token
65 |
66 | def _split_kwds(self, tlist):
67 | tidx, token = self._next_token(tlist)
68 | while token:
69 | pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
70 | uprev = text_type(prev_)
71 |
72 | if prev_ and prev_.is_whitespace:
73 | del tlist.tokens[pidx]
74 | tidx -= 1
75 |
76 | if not (uprev.endswith('\n') or uprev.endswith('\r')):
77 | tlist.insert_before(tidx, self.nl())
78 | tidx += 1
79 |
80 | tidx, token = self._next_token(tlist, tidx)
81 |
82 | def _split_statements(self, tlist):
83 | ttypes = T.Keyword.DML, T.Keyword.DDL
84 | tidx, token = tlist.token_next_by(t=ttypes)
85 | while token:
86 | pidx, prev_ = tlist.token_prev(tidx, skip_ws=False)
87 | if prev_ and prev_.is_whitespace:
88 | del tlist.tokens[pidx]
89 | tidx -= 1
90 | # only break if it's not the first token
91 | if prev_:
92 | tlist.insert_before(tidx, self.nl())
93 | tidx += 1
94 | tidx, token = tlist.token_next_by(t=ttypes, idx=tidx)
95 |
96 | def _process(self, tlist):
97 | func_name = '_process_{cls}'.format(cls=type(tlist).__name__)
98 | func = getattr(self, func_name.lower(), self._process_default)
99 | func(tlist)
100 |
101 | def _process_where(self, tlist):
102 | tidx, token = tlist.token_next_by(m=(T.Keyword, 'WHERE'))
103 | # issue121, errors in statement fixed??
104 | tlist.insert_before(tidx, self.nl())
105 |
106 | with indent(self):
107 | self._process_default(tlist)
108 |
109 | def _process_parenthesis(self, tlist):
110 | ttypes = T.Keyword.DML, T.Keyword.DDL
111 | _, is_dml_dll = tlist.token_next_by(t=ttypes)
112 | fidx, first = tlist.token_next_by(m=sql.Parenthesis.M_OPEN)
113 |
114 | with indent(self, 1 if is_dml_dll else 0):
115 |             if is_dml_dll: tlist.tokens.insert(0, self.nl())
116 | with offset(self, self._get_offset(first) + 1):
117 | self._process_default(tlist, not is_dml_dll)
118 |
119 | def _process_identifierlist(self, tlist):
120 | identifiers = list(tlist.get_identifiers())
121 | first = next(identifiers.pop(0).flatten())
122 | num_offset = 1 if self.char == '\t' else self._get_offset(first)
123 | if not tlist.within(sql.Function):
124 | with offset(self, num_offset):
125 | position = 0
126 | for token in identifiers:
127 | # Add 1 for the "," separator
128 | position += len(token.value) + 1
129 | if position > (self.wrap_after - self.offset):
130 | adjust = 0
131 | if self.comma_first:
132 | adjust = -2
133 | _, comma = tlist.token_prev(
134 | tlist.token_index(token))
135 | if comma is None:
136 | continue
137 | token = comma
138 | tlist.insert_before(token, self.nl(offset=adjust))
139 | if self.comma_first:
140 | _, ws = tlist.token_next(
141 | tlist.token_index(token), skip_ws=False)
142 | if (ws is not None
143 | and ws.ttype is not T.Text.Whitespace):
144 | tlist.insert_after(
145 | token, sql.Token(T.Whitespace, ' '))
146 | position = 0
147 | self._process_default(tlist)
148 |
149 | def _process_case(self, tlist):
150 | iterable = iter(tlist.get_cases())
151 | cond, _ = next(iterable)
152 | first = next(cond[0].flatten())
153 |
154 | with offset(self, self._get_offset(tlist[0])):
155 | with offset(self, self._get_offset(first)):
156 | for cond, value in iterable:
157 | token = value[0] if cond is None else cond[0]
158 | tlist.insert_before(token, self.nl())
159 |
160 |             # Line breaks on group level are done. Let's add an offset of
161 | # len "when ", "then ", "else "
162 | with offset(self, len("WHEN ")):
163 | self._process_default(tlist)
164 | end_idx, end = tlist.token_next_by(m=sql.Case.M_CLOSE)
165 | if end_idx is not None:
166 | tlist.insert_before(end_idx, self.nl())
167 |
168 | def _process_default(self, tlist, stmts=True):
169 |         if stmts: self._split_statements(tlist)
170 | self._split_kwds(tlist)
171 | for sgroup in tlist.get_sublists():
172 | self._process(sgroup)
173 |
174 | def process(self, stmt):
175 | self._curr_stmt = stmt
176 | self._process(stmt)
177 |
178 | if self._last_stmt is not None:
179 | nl = '\n' if text_type(self._last_stmt).endswith('\n') else '\n\n'
180 | stmt.tokens.insert(0, sql.Token(T.Whitespace, nl))
181 |
182 | self._last_stmt = stmt
183 | return stmt
184 |
--------------------------------------------------------------------------------
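As a rough illustration (not the library's implementation) of the keyword list that `_next_token` splits on, here is a naive regex-based splitter. WHERE is deliberately absent, since `_process_where` inserts its own line break, and the real filter also special-cases BETWEEN ... AND so the AND of a BETWEEN clause is not split.

```python
import re

# Subset of ReindentFilter's split_words (hypothetical standalone sketch).
SPLIT_WORDS = ('FROM', 'AND', 'OR', 'GROUP', 'ORDER', 'UNION',
               'VALUES', 'SET', 'HAVING', 'LIMIT')

def naive_split(sql):
    # Break the line before each splitting keyword (whole words only).
    # Unlike ReindentFilter this ignores BETWEEN ... AND and indentation.
    pattern = r'\s+(?=(?:%s)\b)' % '|'.join(SPLIT_WORDS)
    return re.sub(pattern, '\n', sql, flags=re.IGNORECASE)
```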
/SQLToolsAPI/lib/sqlparse/filters/right_margin.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | import re
9 |
10 | from sqlparse import sql, tokens as T
11 | from sqlparse.compat import text_type
12 |
13 |
14 | # FIXME: Doesn't work
15 | class RightMarginFilter(object):
16 | keep_together = (
17 | # sql.TypeCast, sql.Identifier, sql.Alias,
18 | )
19 |
20 | def __init__(self, width=79):
21 | self.width = width
22 | self.line = ''
23 |
24 | def _process(self, group, stream):
25 | for token in stream:
26 | if token.is_whitespace and '\n' in token.value:
27 | if token.value.endswith('\n'):
28 | self.line = ''
29 | else:
30 | self.line = token.value.splitlines()[-1]
31 | elif token.is_group and type(token) not in self.keep_together:
32 | token.tokens = self._process(token, token.tokens)
33 | else:
34 | val = text_type(token)
35 | if len(self.line) + len(val) > self.width:
36 | match = re.search(r'^ +', self.line)
37 | if match is not None:
38 | indent = match.group()
39 | else:
40 | indent = ''
41 | yield sql.Token(T.Whitespace, '\n{0}'.format(indent))
42 | self.line = indent
43 | self.line += val
44 | yield token
45 |
46 | def process(self, group):
47 | # return
48 | # group.tokens = self._process(group, group.tokens)
49 | raise NotImplementedError
50 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/filters/tokens.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | from sqlparse import tokens as T
9 | from sqlparse.compat import text_type
10 |
11 |
12 | class _CaseFilter(object):
13 | ttype = None
14 |
15 | def __init__(self, case=None):
16 | case = case or 'upper'
17 | self.convert = getattr(text_type, case)
18 |
19 | def process(self, stream):
20 | for ttype, value in stream:
21 | if ttype in self.ttype:
22 | value = self.convert(value)
23 | yield ttype, value
24 |
25 |
26 | class KeywordCaseFilter(_CaseFilter):
27 | ttype = T.Keyword
28 |
29 |
30 | class IdentifierCaseFilter(_CaseFilter):
31 | ttype = T.Name, T.String.Symbol
32 |
33 | def process(self, stream):
34 | for ttype, value in stream:
35 | if ttype in self.ttype and value.strip()[0] != '"':
36 | value = self.convert(value)
37 | yield ttype, value
38 |
39 |
40 | class TruncateStringFilter(object):
41 | def __init__(self, width, char):
42 | self.width = width
43 | self.char = char
44 |
45 | def process(self, stream):
46 | for ttype, value in stream:
47 | if ttype != T.Literal.String.Single:
48 | yield ttype, value
49 | continue
50 |
51 | if value[:2] == "''":
52 | inner = value[2:-2]
53 | quote = "''"
54 | else:
55 | inner = value[1:-1]
56 | quote = "'"
57 |
58 | if len(inner) > self.width:
59 | value = ''.join((quote, inner[:self.width], self.char, quote))
60 | yield ttype, value
61 |
--------------------------------------------------------------------------------
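The truncation rule above can be exercised in isolation; this hypothetical standalone helper mirrors the body of `TruncateStringFilter.process` for a single string literal.

```python
def truncate_literal(value, width, char='[...]'):
    # Mirror of TruncateStringFilter's per-token logic: strip the
    # surrounding quotes (doubled quotes stay doubled), truncate the
    # inner text, and append the truncation marker.
    if value[:2] == "''":
        inner, quote = value[2:-2], "''"
    else:
        inner, quote = value[1:-1], "'"
    if len(inner) > width:
        value = ''.join((quote, inner[:width], char, quote))
    return value
```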
/SQLToolsAPI/lib/sqlparse/formatter.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """SQL formatter"""
9 |
10 | from sqlparse import filters
11 | from sqlparse.exceptions import SQLParseError
12 |
13 |
14 | def validate_options(options):
15 | """Validates options."""
16 | kwcase = options.get('keyword_case')
17 | if kwcase not in [None, 'upper', 'lower', 'capitalize']:
18 | raise SQLParseError('Invalid value for keyword_case: '
19 | '{0!r}'.format(kwcase))
20 |
21 | idcase = options.get('identifier_case')
22 | if idcase not in [None, 'upper', 'lower', 'capitalize']:
23 | raise SQLParseError('Invalid value for identifier_case: '
24 | '{0!r}'.format(idcase))
25 |
26 | ofrmt = options.get('output_format')
27 | if ofrmt not in [None, 'sql', 'python', 'php']:
28 | raise SQLParseError('Unknown output format: '
29 | '{0!r}'.format(ofrmt))
30 |
31 | strip_comments = options.get('strip_comments', False)
32 | if strip_comments not in [True, False]:
33 | raise SQLParseError('Invalid value for strip_comments: '
34 | '{0!r}'.format(strip_comments))
35 |
36 | space_around_operators = options.get('use_space_around_operators', False)
37 | if space_around_operators not in [True, False]:
38 | raise SQLParseError('Invalid value for use_space_around_operators: '
39 | '{0!r}'.format(space_around_operators))
40 |
41 | strip_ws = options.get('strip_whitespace', False)
42 | if strip_ws not in [True, False]:
43 | raise SQLParseError('Invalid value for strip_whitespace: '
44 | '{0!r}'.format(strip_ws))
45 |
46 | truncate_strings = options.get('truncate_strings')
47 | if truncate_strings is not None:
48 | try:
49 | truncate_strings = int(truncate_strings)
50 | except (ValueError, TypeError):
51 | raise SQLParseError('Invalid value for truncate_strings: '
52 | '{0!r}'.format(truncate_strings))
53 | if truncate_strings <= 1:
54 | raise SQLParseError('Invalid value for truncate_strings: '
55 | '{0!r}'.format(truncate_strings))
56 | options['truncate_strings'] = truncate_strings
57 | options['truncate_char'] = options.get('truncate_char', '[...]')
58 |
59 | reindent = options.get('reindent', False)
60 | if reindent not in [True, False]:
61 | raise SQLParseError('Invalid value for reindent: '
62 | '{0!r}'.format(reindent))
63 | elif reindent:
64 | options['strip_whitespace'] = True
65 |
66 | reindent_aligned = options.get('reindent_aligned', False)
67 | if reindent_aligned not in [True, False]:
68 | raise SQLParseError('Invalid value for reindent_aligned: '
69 |                             '{0!r}'.format(reindent_aligned))
70 | elif reindent_aligned:
71 | options['strip_whitespace'] = True
72 |
73 | indent_tabs = options.get('indent_tabs', False)
74 | if indent_tabs not in [True, False]:
75 | raise SQLParseError('Invalid value for indent_tabs: '
76 | '{0!r}'.format(indent_tabs))
77 | elif indent_tabs:
78 | options['indent_char'] = '\t'
79 | else:
80 | options['indent_char'] = ' '
81 |
82 | indent_width = options.get('indent_width', 2)
83 | try:
84 | indent_width = int(indent_width)
85 | except (TypeError, ValueError):
86 | raise SQLParseError('indent_width requires an integer')
87 | if indent_width < 1:
88 | raise SQLParseError('indent_width requires a positive integer')
89 | options['indent_width'] = indent_width
90 |
91 | wrap_after = options.get('wrap_after', 0)
92 | try:
93 | wrap_after = int(wrap_after)
94 | except (TypeError, ValueError):
95 | raise SQLParseError('wrap_after requires an integer')
96 | if wrap_after < 0:
97 |         raise SQLParseError('wrap_after requires a non-negative integer')
98 | options['wrap_after'] = wrap_after
99 |
100 | comma_first = options.get('comma_first', False)
101 | if comma_first not in [True, False]:
102 | raise SQLParseError('comma_first requires a boolean value')
103 | options['comma_first'] = comma_first
104 |
105 | right_margin = options.get('right_margin')
106 | if right_margin is not None:
107 | try:
108 | right_margin = int(right_margin)
109 | except (TypeError, ValueError):
110 | raise SQLParseError('right_margin requires an integer')
111 | if right_margin < 10:
112 |             raise SQLParseError('right_margin requires an integer >= 10')
113 | options['right_margin'] = right_margin
114 |
115 | return options
116 |
117 |
118 | def build_filter_stack(stack, options):
119 | """Setup and return a filter stack.
120 |
121 | Args:
122 | stack: :class:`~sqlparse.filters.FilterStack` instance
123 | options: Dictionary with options validated by validate_options.
124 | """
125 | # Token filter
126 | if options.get('keyword_case'):
127 | stack.preprocess.append(
128 | filters.KeywordCaseFilter(options['keyword_case']))
129 |
130 | if options.get('identifier_case'):
131 | stack.preprocess.append(
132 | filters.IdentifierCaseFilter(options['identifier_case']))
133 |
134 | if options.get('truncate_strings'):
135 | stack.preprocess.append(filters.TruncateStringFilter(
136 | width=options['truncate_strings'], char=options['truncate_char']))
137 |
138 | if options.get('use_space_around_operators', False):
139 | stack.enable_grouping()
140 | stack.stmtprocess.append(filters.SpacesAroundOperatorsFilter())
141 |
142 | # After grouping
143 | if options.get('strip_comments'):
144 | stack.enable_grouping()
145 | stack.stmtprocess.append(filters.StripCommentsFilter())
146 |
147 | if options.get('strip_whitespace') or options.get('reindent'):
148 | stack.enable_grouping()
149 | stack.stmtprocess.append(filters.StripWhitespaceFilter())
150 |
151 | if options.get('reindent'):
152 | stack.enable_grouping()
153 | stack.stmtprocess.append(
154 | filters.ReindentFilter(char=options['indent_char'],
155 | width=options['indent_width'],
156 | wrap_after=options['wrap_after'],
157 | comma_first=options['comma_first']))
158 |
159 | if options.get('reindent_aligned', False):
160 | stack.enable_grouping()
161 | stack.stmtprocess.append(
162 | filters.AlignedIndentFilter(char=options['indent_char']))
163 |
164 | if options.get('right_margin'):
165 | stack.enable_grouping()
166 | stack.stmtprocess.append(
167 | filters.RightMarginFilter(width=options['right_margin']))
168 |
169 | # Serializer
170 | if options.get('output_format'):
171 | frmt = options['output_format']
172 | if frmt.lower() == 'php':
173 | fltr = filters.OutputPHPFilter()
174 | elif frmt.lower() == 'python':
175 | fltr = filters.OutputPythonFilter()
176 | else:
177 | fltr = None
178 | if fltr is not None:
179 | stack.postprocess.append(fltr)
180 |
181 | return stack
182 |
--------------------------------------------------------------------------------
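A couple of the option couplings enforced by `validate_options` above, restated as a hypothetical standalone sketch (error checking and the remaining options omitted): `reindent` and `reindent_aligned` force `strip_whitespace`, and `indent_tabs` selects the indent character.

```python
def normalize(options):
    # reindent / reindent_aligned imply strip_whitespace, and
    # indent_tabs selects the indent character, as in validate_options.
    opts = dict(options)
    if opts.get('reindent') or opts.get('reindent_aligned'):
        opts['strip_whitespace'] = True
    opts['indent_char'] = '\t' if opts.get('indent_tabs') else ' '
    return opts
```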
/SQLToolsAPI/lib/sqlparse/lexer.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """SQL Lexer"""
9 |
10 | # This code is based on the SqlLexer in pygments.
11 | # http://pygments.org/
12 | # It's separated from the rest of pygments to increase performance
13 | # and to allow some customizations.
14 |
15 | from sqlparse import tokens
16 | from sqlparse.keywords import SQL_REGEX
17 | from sqlparse.compat import bytes_type, text_type, file_types
18 | from sqlparse.utils import consume
19 |
20 |
21 | class Lexer(object):
22 | """Lexer
23 |     Empty class, kept for backwards compatibility.
24 | """
25 |
26 | @staticmethod
27 | def get_tokens(text, encoding=None):
28 |         """
29 |         Return an iterable of (tokentype, value) pairs generated from
30 |         `text`.
31 |
32 |         `text` may be a unicode string, a byte string or a file-like
33 |         object. Byte strings are decoded using `encoding` if given;
34 |         otherwise UTF-8 is tried first, with `unicode-escape` as a
35 |         fallback for undecodable input.
36 |
37 |         This is a simplified, stateless version of the pygments
38 |         SqlLexer; filters and custom state stacks are not supported.
39 |         """
40 | if isinstance(text, file_types):
41 | text = text.read()
42 |
43 | if isinstance(text, text_type):
44 | pass
45 | elif isinstance(text, bytes_type):
46 | if encoding:
47 | text = text.decode(encoding)
48 | else:
49 | try:
50 | text = text.decode('utf-8')
51 | except UnicodeDecodeError:
52 | text = text.decode('unicode-escape')
53 | else:
54 | raise TypeError(u"Expected text or file-like object, got {!r}".
55 | format(type(text)))
56 |
57 | iterable = enumerate(text)
58 | for pos, char in iterable:
59 | for rexmatch, action in SQL_REGEX:
60 | m = rexmatch(text, pos)
61 |
62 | if not m:
63 | continue
64 | elif isinstance(action, tokens._TokenType):
65 | yield action, m.group()
66 | elif callable(action):
67 | yield action(m.group())
68 |
69 | consume(iterable, m.end() - pos - 1)
70 | break
71 | else:
72 | yield tokens.Error, char
73 |
74 |
75 | def tokenize(sql, encoding=None):
76 | """Tokenize sql.
77 |
78 | Tokenize *sql* using the :class:`Lexer` and return a 2-tuple stream
79 | of ``(token type, value)`` items.
80 | """
81 | return Lexer().get_tokens(sql, encoding)
82 |
--------------------------------------------------------------------------------
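Not part of the library: a minimal, self-contained sketch (with a hypothetical three-rule table and token names) of the match-at-position loop that `Lexer.get_tokens` runs over `SQL_REGEX`.

```python
import re

# Hypothetical rule table standing in for sqlparse's SQL_REGEX; each
# entry is a (bound match method, token type name) pair.
RULES = [
    (re.compile(r'\s+').match, 'Whitespace'),
    (re.compile(r'SELECT|FROM', re.IGNORECASE).match, 'Keyword'),
    (re.compile(r'\w+').match, 'Name'),
]

def mini_tokenize(text):
    # Try each rule at the current position; the first match wins and
    # the cursor jumps past it, mirroring get_tokens' consume() call.
    pos = 0
    while pos < len(text):
        for matcher, ttype in RULES:
            m = matcher(text, pos)
            if m:
                yield ttype, m.group()
                pos = m.end()
                break
        else:
            # No rule matched: emit a one-character error token.
            yield 'Error', text[pos]
            pos += 1
```

For example, `list(mini_tokenize('select x'))` yields `[('Keyword', 'select'), ('Whitespace', ' '), ('Name', 'x')]`.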
/SQLToolsAPI/lib/sqlparse/sql.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | """This module contains classes representing syntactical elements of SQL."""
9 | from __future__ import print_function
10 |
11 | import re
12 |
13 | from sqlparse import tokens as T
14 | from sqlparse.compat import string_types, text_type, unicode_compatible
15 | from sqlparse.utils import imt, remove_quotes
16 |
17 |
18 | @unicode_compatible
19 | class Token(object):
20 | """Base class for all other classes in this module.
21 |
22 | It represents a single token and has two instance attributes:
23 |     ``value`` is the unchanged value of the token and ``ttype`` is
24 | the type of the token.
25 | """
26 |
27 | __slots__ = ('value', 'ttype', 'parent', 'normalized', 'is_keyword',
28 | 'is_group', 'is_whitespace')
29 |
30 | def __init__(self, ttype, value):
31 | value = text_type(value)
32 | self.value = value
33 | self.ttype = ttype
34 | self.parent = None
35 | self.is_group = False
36 | self.is_keyword = ttype in T.Keyword
37 | self.is_whitespace = self.ttype in T.Whitespace
38 | self.normalized = value.upper() if self.is_keyword else value
39 |
40 | def __str__(self):
41 | return self.value
42 |
43 | # Pending tokenlist __len__ bug fix
44 | # def __len__(self):
45 | # return len(self.value)
46 |
47 | def __repr__(self):
48 | cls = self._get_repr_name()
49 | value = self._get_repr_value()
50 |
51 | q = u'"' if value.startswith("'") and value.endswith("'") else u"'"
52 | return u"<{cls} {q}{value}{q} at 0x{id:2X}>".format(
53 | id=id(self), **locals())
54 |
55 | def _get_repr_name(self):
56 | return str(self.ttype).split('.')[-1]
57 |
58 | def _get_repr_value(self):
59 | raw = text_type(self)
60 | if len(raw) > 7:
61 | raw = raw[:6] + '...'
62 | return re.sub(r'\s+', ' ', raw)
63 |
64 | def flatten(self):
65 | """Resolve subgroups."""
66 | yield self
67 |
68 | def match(self, ttype, values, regex=False):
69 | """Checks whether the token matches the given arguments.
70 |
71 |         *ttype* is a token type. If this token's type doesn't match
72 |         *ttype*, ``False`` is returned.
73 |         *values* is a list of possible values for this token. The values
74 |         are OR'ed together, so if any one of them matches, ``True`` is
75 |         returned. Except for keyword tokens the comparison is
76 |         case-sensitive. For convenience it's ok to pass in a single string.
77 | If *regex* is ``True`` (default is ``False``) the given values are
78 | treated as regular expressions.
79 | """
80 | type_matched = self.ttype is ttype
81 | if not type_matched or values is None:
82 | return type_matched
83 |
84 | if isinstance(values, string_types):
85 | values = (values,)
86 |
87 | if regex:
88 |             # TODO: Add test for regex with is_keyword = False
89 | flag = re.IGNORECASE if self.is_keyword else 0
90 | values = (re.compile(v, flag) for v in values)
91 |
92 | for pattern in values:
93 | if pattern.search(self.normalized):
94 | return True
95 | return False
96 |
97 | if self.is_keyword:
98 | values = (v.upper() for v in values)
99 |
100 | return self.normalized in values
101 |
102 | def within(self, group_cls):
103 | """Returns ``True`` if this token is within *group_cls*.
104 |
105 | Use this method for example to check if an identifier is within
106 | a function: ``t.within(sql.Function)``.
107 | """
108 | parent = self.parent
109 | while parent:
110 | if isinstance(parent, group_cls):
111 | return True
112 | parent = parent.parent
113 | return False
114 |
115 | def is_child_of(self, other):
116 | """Returns ``True`` if this token is a direct child of *other*."""
117 | return self.parent == other
118 |
119 | def has_ancestor(self, other):
120 | """Returns ``True`` if *other* is in this tokens ancestry."""
121 | parent = self.parent
122 | while parent:
123 | if parent == other:
124 | return True
125 | parent = parent.parent
126 | return False
127 |
128 |
129 | @unicode_compatible
130 | class TokenList(Token):
131 | """A group of tokens.
132 |
133 | It has an additional instance attribute ``tokens`` which holds a
134 | list of child-tokens.
135 | """
136 |
137 | __slots__ = 'tokens'
138 |
139 | def __init__(self, tokens=None):
140 | self.tokens = tokens or []
141 | [setattr(token, 'parent', self) for token in tokens]
142 | super(TokenList, self).__init__(None, text_type(self))
143 | self.is_group = True
144 |
145 | def __str__(self):
146 | return u''.join(token.value for token in self.flatten())
147 |
148 | # weird bug
149 | # def __len__(self):
150 | # return len(self.tokens)
151 |
152 | def __iter__(self):
153 | return iter(self.tokens)
154 |
155 | def __getitem__(self, item):
156 | return self.tokens[item]
157 |
158 | def _get_repr_name(self):
159 | return type(self).__name__
160 |
161 | def _pprint_tree(self, max_depth=None, depth=0, f=None):
162 | """Pretty-print the object tree."""
163 | indent = u' | ' * depth
164 | for idx, token in enumerate(self.tokens):
165 | cls = token._get_repr_name()
166 | value = token._get_repr_value()
167 |
168 | q = u'"' if value.startswith("'") and value.endswith("'") else u"'"
169 | print(u"{indent}{idx:2d} {cls} {q}{value}{q}"
170 | .format(**locals()), file=f)
171 |
172 | if token.is_group and (max_depth is None or depth < max_depth):
173 | token._pprint_tree(max_depth, depth + 1, f)
174 |
175 | def get_token_at_offset(self, offset):
176 |         """Returns the token at position *offset*."""
177 | idx = 0
178 | for token in self.flatten():
179 | end = idx + len(token.value)
180 | if idx <= offset < end:
181 | return token
182 | idx = end
183 |
184 | def flatten(self):
185 | """Generator yielding ungrouped tokens.
186 |
187 | This method is recursively called for all child tokens.
188 | """
189 | for token in self.tokens:
190 | if token.is_group:
191 | for item in token.flatten():
192 | yield item
193 | else:
194 | yield token
195 |
196 | def get_sublists(self):
197 | for token in self.tokens:
198 | if token.is_group:
199 | yield token
200 |
201 | @property
202 | def _groupable_tokens(self):
203 | return self.tokens
204 |
205 | def _token_matching(self, funcs, start=0, end=None, reverse=False):
206 |         """Return the index and token of the next token matching *funcs*."""
207 | if start is None:
208 | return None
209 |
210 | if not isinstance(funcs, (list, tuple)):
211 | funcs = (funcs,)
212 |
213 | if reverse:
214 | assert end is None
215 | for idx in range(start - 2, -1, -1):
216 | token = self.tokens[idx]
217 | for func in funcs:
218 | if func(token):
219 | return idx, token
220 | else:
221 | for idx, token in enumerate(self.tokens[start:end], start=start):
222 | for func in funcs:
223 | if func(token):
224 | return idx, token
225 | return None, None
226 |
227 | def token_first(self, skip_ws=True, skip_cm=False):
228 | """Returns the first child token.
229 |
230 | If *skip_ws* is ``True`` (the default), whitespace
231 | tokens are ignored.
232 |
233 | if *skip_cm* is ``True`` (default: ``False``), comments are
234 | ignored too.
235 | """
236 |         # this one is inconsistent: it uses Comment instead of T.Comment...
237 | funcs = lambda tk: not ((skip_ws and tk.is_whitespace) or
238 | (skip_cm and imt(tk, t=T.Comment, i=Comment)))
239 | return self._token_matching(funcs)[1]
240 |
241 | def token_next_by(self, i=None, m=None, t=None, idx=-1, end=None):
242 | funcs = lambda tk: imt(tk, i, m, t)
243 | idx += 1
244 | return self._token_matching(funcs, idx, end)
245 |
246 | def token_not_matching(self, funcs, idx):
247 | funcs = (funcs,) if not isinstance(funcs, (list, tuple)) else funcs
248 | funcs = [lambda tk: not func(tk) for func in funcs]
249 | return self._token_matching(funcs, idx)
250 |
251 | def token_matching(self, funcs, idx):
252 | return self._token_matching(funcs, idx)[1]
253 |
254 | def token_prev(self, idx, skip_ws=True, skip_cm=False):
255 | """Returns the previous token relative to *idx*.
256 |
257 | If *skip_ws* is ``True`` (the default) whitespace tokens are ignored.
258 | If *skip_cm* is ``True`` comments are ignored.
259 | ``None`` is returned if there's no previous token.
260 | """
261 | return self.token_next(idx, skip_ws, skip_cm, _reverse=True)
262 |
263 | # TODO: May need to re-add default value to idx
264 | def token_next(self, idx, skip_ws=True, skip_cm=False, _reverse=False):
265 | """Returns the next token relative to *idx*.
266 |
267 | If *skip_ws* is ``True`` (the default) whitespace tokens are ignored.
268 | If *skip_cm* is ``True`` comments are ignored.
269 | ``None`` is returned if there's no next token.
270 | """
271 | if idx is None:
272 | return None, None
273 |         idx += 1  # a lot of calling code currently pre-compensates for this
274 | funcs = lambda tk: not ((skip_ws and tk.is_whitespace) or
275 | (skip_cm and imt(tk, t=T.Comment, i=Comment)))
276 | return self._token_matching(funcs, idx, reverse=_reverse)
277 |
278 | def token_index(self, token, start=0):
279 | """Return list index of token."""
280 | start = start if isinstance(start, int) else self.token_index(start)
281 | return start + self.tokens[start:].index(token)
282 |
283 | def group_tokens(self, grp_cls, start, end, include_end=True,
284 | extend=False):
285 | """Replace tokens by an instance of *grp_cls*."""
286 | start_idx = start
287 | start = self.tokens[start_idx]
288 |
289 | end_idx = end + include_end
290 |
291 | # will be needed later for new group_clauses
292 | # while skip_ws and tokens and tokens[-1].is_whitespace:
293 | # tokens = tokens[:-1]
294 |
295 | if extend and isinstance(start, grp_cls):
296 | subtokens = self.tokens[start_idx + 1:end_idx]
297 |
298 | grp = start
299 | grp.tokens.extend(subtokens)
300 | del self.tokens[start_idx + 1:end_idx]
301 | grp.value = text_type(start)
302 | else:
303 | subtokens = self.tokens[start_idx:end_idx]
304 | grp = grp_cls(subtokens)
305 | self.tokens[start_idx:end_idx] = [grp]
306 | grp.parent = self
307 |
308 | for token in subtokens:
309 | token.parent = grp
310 |
311 | return grp
312 |
313 | def insert_before(self, where, token):
314 | """Inserts *token* before *where*."""
315 | if not isinstance(where, int):
316 | where = self.token_index(where)
317 | token.parent = self
318 | self.tokens.insert(where, token)
319 |
320 | def insert_after(self, where, token, skip_ws=True):
321 | """Inserts *token* after *where*."""
322 | if not isinstance(where, int):
323 | where = self.token_index(where)
324 | nidx, next_ = self.token_next(where, skip_ws=skip_ws)
325 | token.parent = self
326 | if next_ is None:
327 | self.tokens.append(token)
328 | else:
329 | self.tokens.insert(nidx, token)
330 |
331 | def has_alias(self):
332 | """Returns ``True`` if an alias is present."""
333 | return self.get_alias() is not None
334 |
335 | def get_alias(self):
336 | """Returns the alias for this identifier or ``None``."""
337 |
338 | # "name AS alias"
339 | kw_idx, kw = self.token_next_by(m=(T.Keyword, 'AS'))
340 | if kw is not None:
341 | return self._get_first_name(kw_idx + 1, keywords=True)
342 |
343 | # "name alias" or "complicated column expression alias"
344 | _, ws = self.token_next_by(t=T.Whitespace)
345 | if len(self.tokens) > 2 and ws is not None:
346 | return self._get_first_name(reverse=True)
347 |
348 | def get_name(self):
349 | """Returns the name of this identifier.
350 |
351 |         This is either its alias or its real name. The returned value can
352 | be considered as the name under which the object corresponding to
353 | this identifier is known within the current statement.
354 | """
355 | return self.get_alias() or self.get_real_name()
356 |
357 | def get_real_name(self):
358 | """Returns the real name (object name) of this identifier."""
359 | # a.b
360 | dot_idx, _ = self.token_next_by(m=(T.Punctuation, '.'))
361 | return self._get_first_name(dot_idx)
362 |
363 | def get_parent_name(self):
364 | """Return name of the parent object if any.
365 |
366 |         A parent object is identified by the first occurring dot.
367 | """
368 | dot_idx, _ = self.token_next_by(m=(T.Punctuation, '.'))
369 | _, prev_ = self.token_prev(dot_idx)
370 | return remove_quotes(prev_.value) if prev_ is not None else None
371 |
372 | def _get_first_name(self, idx=None, reverse=False, keywords=False):
373 | """Returns the name of the first token with a name"""
374 |
375 | tokens = self.tokens[idx:] if idx else self.tokens
376 | tokens = reversed(tokens) if reverse else tokens
377 | types = [T.Name, T.Wildcard, T.String.Symbol]
378 |
379 | if keywords:
380 | types.append(T.Keyword)
381 |
382 | for token in tokens:
383 | if token.ttype in types:
384 | return remove_quotes(token.value)
385 | elif isinstance(token, (Identifier, Function)):
386 | return token.get_name()
387 |
388 |
389 | class Statement(TokenList):
390 | """Represents a SQL statement."""
391 |
392 | def get_type(self):
393 | """Returns the type of a statement.
394 |
395 |         The returned value is a string holding an upper-cased version of
396 |         the first DML or DDL keyword. If the first token in this group
397 |         isn't a DML or DDL keyword, "UNKNOWN" is returned.
398 |
399 | Whitespaces and comments at the beginning of the statement
400 | are ignored.
401 | """
402 | first_token = self.token_first(skip_cm=True)
403 | if first_token is None:
404 |             # An "empty" statement that has no tokens at all
405 | # or only whitespace tokens.
406 | return 'UNKNOWN'
407 |
408 | elif first_token.ttype in (T.Keyword.DML, T.Keyword.DDL):
409 | return first_token.normalized
410 |
411 | elif first_token.ttype == T.Keyword.CTE:
412 | # The WITH keyword should be followed by either an Identifier or
413 | # an IdentifierList containing the CTE definitions; the actual
414 | # DML keyword (e.g. SELECT, INSERT) will follow next.
415 | fidx = self.token_index(first_token)
416 | tidx, token = self.token_next(fidx, skip_ws=True)
417 | if isinstance(token, (Identifier, IdentifierList)):
418 | _, dml_keyword = self.token_next(tidx, skip_ws=True)
419 |
420 |                 if dml_keyword is not None and dml_keyword.ttype == T.Keyword.DML:
421 | return dml_keyword.normalized
422 |
423 | # Hmm, probably invalid syntax, so return unknown.
424 | return 'UNKNOWN'
425 |
426 |
427 | class Identifier(TokenList):
428 | """Represents an identifier.
429 |
430 | Identifiers may have aliases or typecasts.
431 | """
432 |
433 | def is_wildcard(self):
434 | """Return ``True`` if this identifier contains a wildcard."""
435 | _, token = self.token_next_by(t=T.Wildcard)
436 | return token is not None
437 |
438 | def get_typecast(self):
439 | """Returns the typecast or ``None`` of this object as a string."""
440 | midx, marker = self.token_next_by(m=(T.Punctuation, '::'))
441 | nidx, next_ = self.token_next(midx, skip_ws=False)
442 | return next_.value if next_ else None
443 |
444 | def get_ordering(self):
445 | """Returns the ordering or ``None`` as uppercase string."""
446 | _, ordering = self.token_next_by(t=T.Keyword.Order)
447 | return ordering.normalized if ordering else None
448 |
449 | def get_array_indices(self):
450 | """Returns an iterator of index token lists"""
451 |
452 | for token in self.tokens:
453 | if isinstance(token, SquareBrackets):
454 | # Use [1:-1] index to discard the square brackets
455 | yield token.tokens[1:-1]
456 |
457 |
458 | class IdentifierList(TokenList):
459 | """A list of :class:`~sqlparse.sql.Identifier`\'s."""
460 |
461 | def get_identifiers(self):
462 | """Returns the identifiers.
463 |
464 |         Whitespace and punctuation are not included in this generator.
465 | """
466 | for token in self.tokens:
467 | if not (token.is_whitespace or token.match(T.Punctuation, ',')):
468 | yield token
469 |
470 |
471 | class Parenthesis(TokenList):
472 | """Tokens between parenthesis."""
473 | M_OPEN = T.Punctuation, '('
474 | M_CLOSE = T.Punctuation, ')'
475 |
476 | @property
477 | def _groupable_tokens(self):
478 | return self.tokens[1:-1]
479 |
480 |
481 | class SquareBrackets(TokenList):
482 | """Tokens between square brackets"""
483 | M_OPEN = T.Punctuation, '['
484 | M_CLOSE = T.Punctuation, ']'
485 |
486 | @property
487 | def _groupable_tokens(self):
488 | return self.tokens[1:-1]
489 |
490 |
491 | class Assignment(TokenList):
492 | """An assignment like 'var := val;'"""
493 |
494 |
495 | class If(TokenList):
496 | """An 'if' clause with possible 'else if' or 'else' parts."""
497 | M_OPEN = T.Keyword, 'IF'
498 | M_CLOSE = T.Keyword, 'END IF'
499 |
500 |
501 | class For(TokenList):
502 | """A 'FOR' loop."""
503 | M_OPEN = T.Keyword, ('FOR', 'FOREACH')
504 | M_CLOSE = T.Keyword, 'END LOOP'
505 |
506 |
507 | class Comparison(TokenList):
508 | """A comparison used for example in WHERE clauses."""
509 |
510 | @property
511 | def left(self):
512 | return self.tokens[0]
513 |
514 | @property
515 | def right(self):
516 | return self.tokens[-1]
517 |
518 |
519 | class Comment(TokenList):
520 | """A comment."""
521 |
522 | def is_multiline(self):
523 | return self.tokens and self.tokens[0].ttype == T.Comment.Multiline
524 |
525 |
526 | class Where(TokenList):
527 | """A WHERE clause."""
528 | M_OPEN = T.Keyword, 'WHERE'
529 | M_CLOSE = T.Keyword, ('ORDER', 'GROUP', 'LIMIT', 'UNION', 'EXCEPT',
530 | 'HAVING', 'RETURNING', 'INTO')
531 |
532 |
533 | class Case(TokenList):
534 | """A CASE statement with one or more WHEN and possibly an ELSE part."""
535 | M_OPEN = T.Keyword, 'CASE'
536 | M_CLOSE = T.Keyword, 'END'
537 |
538 | def get_cases(self, skip_ws=False):
539 | """Returns a list of 2-tuples (condition, value).
540 |
541 |         If an ELSE exists, the condition is None.
542 | """
543 | CONDITION = 1
544 | VALUE = 2
545 |
546 | ret = []
547 | mode = CONDITION
548 |
549 | for token in self.tokens:
550 | # Set mode from the current statement
551 | if token.match(T.Keyword, 'CASE'):
552 | continue
553 |
554 | elif skip_ws and token.ttype in T.Whitespace:
555 | continue
556 |
557 | elif token.match(T.Keyword, 'WHEN'):
558 | ret.append(([], []))
559 | mode = CONDITION
560 |
561 | elif token.match(T.Keyword, 'THEN'):
562 | mode = VALUE
563 |
564 | elif token.match(T.Keyword, 'ELSE'):
565 | ret.append((None, []))
566 | mode = VALUE
567 |
568 | elif token.match(T.Keyword, 'END'):
569 | mode = None
570 |
571 | # First condition without preceding WHEN
572 | if mode and not ret:
573 | ret.append(([], []))
574 |
575 |             # Append token depending on the current mode
576 | if mode == CONDITION:
577 | ret[-1][0].append(token)
578 |
579 | elif mode == VALUE:
580 | ret[-1][1].append(token)
581 |
582 | # Return cases list
583 | return ret
584 |
585 |
586 | class Function(TokenList):
587 | """A function or procedure call."""
588 |
589 | def get_parameters(self):
590 | """Return a list of parameters."""
591 | parenthesis = self.tokens[-1]
592 | for token in parenthesis.tokens:
593 | if isinstance(token, IdentifierList):
594 | return token.get_identifiers()
595 | elif imt(token, i=(Function, Identifier), t=T.Literal):
596 | return [token, ]
597 | return []
598 |
599 |
600 | class Begin(TokenList):
601 | """A BEGIN/END block."""
602 | M_OPEN = T.Keyword, 'BEGIN'
603 | M_CLOSE = T.Keyword, 'END'
604 |
605 |
606 | class Operation(TokenList):
607 | """Grouping of operations"""
608 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/tokens.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 | #
8 | # The Token implementation is based on pygment's token system written
9 | # by Georg Brandl.
10 | # http://pygments.org/
11 |
12 | """Tokens"""
13 |
14 |
15 | class _TokenType(tuple):
16 | parent = None
17 |
18 | def __contains__(self, item):
19 | return item is not None and (self is item or item[:len(self)] == self)
20 |
21 | def __getattr__(self, name):
22 | new = _TokenType(self + (name,))
23 | setattr(self, name, new)
24 | new.parent = self
25 | return new
26 |
27 | def __repr__(self):
28 |         # self can be False only if it's the `root`, i.e. Token itself
29 | return 'Token' + ('.' if self else '') + '.'.join(self)
30 |
31 |
32 | Token = _TokenType()
33 |
34 | # Special token types
35 | Text = Token.Text
36 | Whitespace = Text.Whitespace
37 | Newline = Whitespace.Newline
38 | Error = Token.Error
39 | # Text that doesn't belong to this lexer (e.g. HTML in PHP)
40 | Other = Token.Other
41 |
42 | # Common token types for source code
43 | Keyword = Token.Keyword
44 | Name = Token.Name
45 | Literal = Token.Literal
46 | String = Literal.String
47 | Number = Literal.Number
48 | Punctuation = Token.Punctuation
49 | Operator = Token.Operator
50 | Comparison = Operator.Comparison
51 | Wildcard = Token.Wildcard
52 | Comment = Token.Comment
53 | Assignment = Token.Assignment
54 |
55 | # Generic types for non-source code
56 | Generic = Token.Generic
57 |
58 | # String and some others are not direct children of Token.
59 | # alias them:
60 | Token.Token = Token
61 | Token.String = String
62 | Token.Number = Number
63 |
64 | # SQL specific tokens
65 | DML = Keyword.DML
66 | DDL = Keyword.DDL
67 | CTE = Keyword.CTE
68 | Command = Keyword.Command
69 |
--------------------------------------------------------------------------------
/SQLToolsAPI/lib/sqlparse/utils.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # Copyright (C) 2016 Andi Albrecht, albrecht.andi@gmail.com
4 | #
5 | # This module is part of python-sqlparse and is released under
6 | # the BSD License: https://opensource.org/licenses/BSD-3-Clause
7 |
8 | import itertools
9 | import re
10 | from collections import deque
11 | from contextlib import contextmanager
12 | from sqlparse.compat import text_type
13 |
14 | # This regular expression replaces the home-cooked parser that was here before.
15 | # It is much faster, but requires an extra post-processing step to get the
16 | # desired results (that are compatible with what you would expect from the
17 | # str.splitlines() method).
18 | #
19 | # It matches groups of characters: newlines, quoted strings, or unquoted text,
20 | # and splits on that basis. The post-processing step puts those back together
21 | # into the actual lines of SQL.
22 | SPLIT_REGEX = re.compile(r"""
23 | (
24 | (?: # Start of non-capturing group
25 | (?:\r\n|\r|\n) | # Match any single newline, or
26 | [^\r\n'"]+ | # Match any character series without quotes or
27 | # newlines, or
28 | "(?:[^"\\]|\\.)*" | # Match double-quoted strings, or
29 | '(?:[^'\\]|\\.)*' # Match single quoted strings
30 | )
31 | )
32 | """, re.VERBOSE)
33 |
34 | LINE_MATCH = re.compile(r'(\r\n|\r|\n)')
35 |
36 |
37 | def split_unquoted_newlines(stmt):
38 | """Split a string on all unquoted newlines.
39 |
40 |     Unlike str.splitlines(), this will ignore a CR/LF/CR+LF if the
41 |     newline character is inside a quoted string."""
42 | text = text_type(stmt)
43 | lines = SPLIT_REGEX.split(text)
44 | outputlines = ['']
45 | for line in lines:
46 | if not line:
47 | continue
48 | elif LINE_MATCH.match(line):
49 | outputlines.append('')
50 | else:
51 | outputlines[-1] += line
52 | return outputlines
53 |
54 |
55 | def remove_quotes(val):
56 | """Helper that removes surrounding quotes from strings."""
57 | if val is None:
58 | return
59 | if val[0] in ('"', "'") and val[0] == val[-1]:
60 | val = val[1:-1]
61 | return val
62 |
63 |
64 | def recurse(*cls):
65 | """Function decorator to help with recursion
66 |
67 | :param cls: Classes to not recurse over
68 | :return: function
69 | """
70 | def wrap(f):
71 | def wrapped_f(tlist):
72 | for sgroup in tlist.get_sublists():
73 | if not isinstance(sgroup, cls):
74 | wrapped_f(sgroup)
75 | f(tlist)
76 |
77 | return wrapped_f
78 |
79 | return wrap
80 |
81 |
82 | def imt(token, i=None, m=None, t=None):
83 |     """Helper function to simplify comparisons of Instance, Match and TokenType
84 | :param token:
85 | :param i: Class or Tuple/List of Classes
86 | :param m: Tuple of TokenType & Value. Can be list of Tuple for multiple
87 | :param t: TokenType or Tuple/List of TokenTypes
88 | :return: bool
89 | """
90 | clss = i
91 | types = [t, ] if t and not isinstance(t, list) else t
92 | mpatterns = [m, ] if m and not isinstance(m, list) else m
93 |
94 | if token is None:
95 | return False
96 | elif clss and isinstance(token, clss):
97 | return True
98 | elif mpatterns and any((token.match(*pattern) for pattern in mpatterns)):
99 | return True
100 | elif types and any([token.ttype in ttype for ttype in types]):
101 | return True
102 | else:
103 | return False
104 |
105 |
106 | def consume(iterator, n):
107 |     """Advance the iterator n steps ahead. If n is None, consume entirely."""
108 | deque(itertools.islice(iterator, n), maxlen=0)
109 |
110 |
111 | @contextmanager
112 | def offset(filter_, n=0):
113 | filter_.offset += n
114 | yield
115 | filter_.offset -= n
116 |
117 |
118 | @contextmanager
119 | def indent(filter_, n=1):
120 | filter_.indent += n
121 | yield
122 | filter_.indent -= n
123 |
--------------------------------------------------------------------------------
/SQLToolsConnections.sublime-settings:
--------------------------------------------------------------------------------
1 | {
2 | "connections": {
3 | /*
4 | "Generic Template": { // Connection name, used in menu (Display name)
5 | // connection properties set to "null" will prompt for value when connecting
6 | "type" : "pgsql", // DB type: (mysql, pgsql, oracle, vertica, sqlite, firebird, sqsh)
7 | "host" : "HOSTNAME", // DB host to connect to
8 | "port" : PORT, // DB port
9 | "database" : "DATABASE", // DB name (for SQLite this is the path to DB file)
10 | "username" : "USERNAME", // DB username
11 |             "password" : "PASSWORD", // DB password (see RDBMS specific comments below)
12 | "encoding" : "utf-8" // DB encoding. Default: utf-8
13 | },
14 | "Connection MySQL": {
15 | "type" : "mysql",
16 | "host" : "127.0.0.1",
17 | "port" : 3306,
18 | "database": "dbname",
19 | "username": "user",
20 | // use of password for MySQL is not recommended (use "defaults-extra-file" or "login-path")
21 | "password": "password", // you will get a security warning in the output
22 | // "defaults-extra-file": "/path/to/defaults_file_with_password", // use [client] or [mysql] section
23 | // "login-path": "your_login_path", // login path in your ".mylogin.cnf"
24 | "default-character-set": "utf8",
25 | "encoding": "utf-8"
26 | },
27 | "Connection PostgreSQL": {
28 | "type" : "pgsql",
29 | "host" : "127.0.0.1",
30 | "port" : 5432,
31 | "database": "dbname",
32 | "username": "anotheruser",
33 | // password is optional (setup "pgpass.conf" file instead)
34 | "password": "password",
35 | "encoding": "utf-8"
36 | },
37 | "Connection Oracle": {
38 | "type" : "oracle",
39 | "host" : "127.0.0.1",
40 | "port" : 1521,
41 | "database": "dbname",
42 | "username": "anotheruser",
43 | "password": "password",
44 | "service" : "servicename",
45 | // nls_lang is optional
46 | "nls_lang": "american_america.al32utf8",
47 | "encoding": "utf-8"
48 | },
49 | "Connection MSSQL": {
50 | "type" : "mssql",
51 | // use either ("host", "port") and remove "instance"
52 | // or ("host", "instance") and remove "port"
53 | "host" : "localhost",
54 | "instance": "SQLEXPRESS",
55 | // "port" : 1433,
56 | // "username" and "password" are optional (remove if not needed)
57 | "username": "sa",
58 | "password": "password",
59 | "database": "sample",
60 | "encoding": "utf-8"
61 | },
62 | "Connection SQLite": {
63 | "type" : "sqlite",
64 | // note the forward slashes in path
65 | "database": "c:/sqlite/sample_db/chinook.db",
66 | "encoding": "utf-8"
67 | },
68 | "Connection Vertica": {
69 | "type" : "vertica",
70 | "host" : "localhost",
71 | "port" : 5433,
72 | "username": "anotheruser",
73 | "password": "password",
74 | "database": "dbname",
75 | "encoding": "utf-8"
76 | },
77 | "Connection Firebird": {
78 | "type" : "firebird",
79 | "host" : "localhost",
80 | "port" : 3050,
81 | "username": "sysdba",
82 | "password": "password",
83 | // note the forward slashes (if path is used)
84 | "database": "c:/firebird/examples/empbuild/employee.fdb",
85 | "encoding": "utf-8"
86 | },
87 | "Connection Snowflake": {
88 | "type" : "snowsql",
89 | "database": "database",
90 | // for possible authentication configurations see
91 | // https://docs.snowflake.net/manuals/user-guide/snowsql-start.html#authenticator
92 | "user" : "user@example.com",
93 | "account" : "account_name",
94 |         "auth"    : "snowflake | externalbrowser | ",
95 | // if using "auth": "snowflake", provide a password
96 |         // you can alternatively set SNOWSQL_PWD in your environment instead
97 | // if using "auth": "externalbrowser" or "", no password needed
98 | "password": "pwd"
99 | }
100 | */
101 | },
102 | "default": null
103 | }
104 |
--------------------------------------------------------------------------------
/SQLToolsSavedQueries.sublime-settings:
--------------------------------------------------------------------------------
1 | {
2 | }
3 |
--------------------------------------------------------------------------------
/messages.json:
--------------------------------------------------------------------------------
1 | {
2 | "install": "messages/install.md",
3 | "0.1.6": "messages/v0.1.6.md",
4 | "0.2.0": "messages/v0.2.0.md",
5 | "0.3.0": "messages/v0.3.0.md",
6 | "0.3.1": "messages/v0.3.0.md",
7 | "0.8.2": "messages/v0.8.2.md",
8 | "0.9.0": "messages/v0.9.0.md",
9 | "0.9.1": "messages/v0.9.1.md",
10 | "0.9.2": "messages/v0.9.2.md",
11 | "0.9.3": "messages/v0.9.3.md",
12 | "0.9.4": "messages/v0.9.4.md",
13 | "0.9.5": "messages/v0.9.5.md",
14 | "0.9.6": "messages/v0.9.6.md",
15 | "0.9.7": "messages/v0.9.7.md",
16 | "0.9.8": "messages/v0.9.8.md",
17 | "0.9.9": "messages/v0.9.9.md",
18 | "0.9.10": "messages/v0.9.10.md",
19 | "0.9.11": "messages/v0.9.11.md",
20 | "0.9.12": "messages/v0.9.12.md"
21 | }
22 |
--------------------------------------------------------------------------------
/messages/install.md:
--------------------------------------------------------------------------------
1 | # SQLTools
2 | ===============
3 |
4 | Your swiss knife SQL for Sublime Text.
5 |
6 | Write your SQL with smart completions and handy table and function definitions, execute SQL and explain queries, format your queries and save them in history.
7 |
8 | Project website: https://code.mteixeira.dev/SublimeText-SQLTools/
9 |
10 | ## Features
11 |
12 | * Works with PostgreSQL, MySQL, Oracle, MSSQL, SQLite, Vertica, Firebird and Snowflake
13 | * Smart completions (except SQLite)
14 | * Run SQL Queries `CTRL+e, CTRL+e`
15 | * View table description `CTRL+e, CTRL+d`
16 | * Show table records `CTRL+e, CTRL+s`
17 | * Show explain plan for queries `CTRL+e, CTRL+x`
18 | * Formatting SQL Queries `CTRL+e, CTRL+b`
19 | * View Queries history `CTRL+e, CTRL+h`
20 | * Save queries `CTRL+e, CTRL+q`
21 | * List and Run saved queries `CTRL+e, CTRL+l`
22 | * Remove saved queries `CTRL+e, CTRL+r`
23 | * Threading support to prevent lockups
24 | * Query timeout (kill thread if query takes too long)
25 |
26 | ## Configuration
27 |
28 | Documentation: https://code.mteixeira.dev/SublimeText-SQLTools/
29 |
--------------------------------------------------------------------------------
/messages/v0.1.6.md:
--------------------------------------------------------------------------------
1 | # SQLTools
2 | ===============
3 |
4 | Your swiss knife SQL for Sublime Text.
5 |
6 |
7 | ## v0.1.6 Changelog
8 |
9 | ### Improvements
10 | * History sorted reversed (newer queries first)
11 | * Some package improvements (auto reload)
12 |
13 | ### Fixes
14 |
15 | * Issue #5 - Command window is now hidden
16 | * Issue #9 - Fixed buildArgs
17 |
--------------------------------------------------------------------------------
/messages/v0.2.0.md:
--------------------------------------------------------------------------------
1 | ## v0.2.0 Changelog
2 |
3 | Now you can open your saved queries using `CTRL+E CTRL+O`!
4 |
5 | ### Improvements
6 | * Added opening support for saved queries (Issue #12)
7 | ```javascript
8 | // new hotkeys setting
9 | {
10 | "keys": ["ctrl+e", "ctrl+o"],
11 | "command": "st_list_queries",
12 | "args": {
13 | "mode" : "open"
14 | }
15 | }
16 | ```
17 |
18 | ### Fixes
19 |
20 | No fixes for this version
21 |
--------------------------------------------------------------------------------
/messages/v0.3.0.md:
--------------------------------------------------------------------------------
1 | ## v0.3.0 Changelog
2 |
3 | ### Features
4 |
5 | * Describe user functions `CTRL+E CTRL+F` for PostgreSQL!
6 | * PostgreSQL user functions added to auto complete.
7 |
8 |
9 | ### Improvements
10 |
11 | * Window result shown always in the same tab
12 |
--------------------------------------------------------------------------------
/messages/v0.8.2.md:
--------------------------------------------------------------------------------
1 | ## v0.8.2 Notes
2 |
3 | ### Features
4 |
5 | * New smarter completions that will suggest tables, columns,
6 | aliases, columns for table aliases and join conditions.
7 | Demo of new functionality:
8 | https://github.com/mtxr/SQLTools/issues/67#issuecomment-297849135
9 |
10 | **NOTE**: It is highly recommended that you review your SQLTools
11 | settings file (Users/SQLTools.sublime-settings) and leave only
12 | those settings that you altered specifically to your needs and
13 | remove all other settings. This way the updated queries in the
14 | default settings file will be used, which the new smarter
15 | completions need to work correctly.
16 |
17 |
18 | ### Improvements
19 |
20 | * Plain Text syntax is used in the output panel when executing
21 | queries (for performance reasons and to prevent weird highlighting)
22 |
--------------------------------------------------------------------------------
/messages/v0.9.0.md:
--------------------------------------------------------------------------------
1 | ## v0.9.0 Notes
2 |
3 | ### Features
4 |
5 | * Added support for query result streaming [#19](https://github.com/mtxr/SQLTools/issues/19)
6 |
--------------------------------------------------------------------------------
/messages/v0.9.1.md:
--------------------------------------------------------------------------------
1 | ## v0.9.1 Notes
2 |
3 | ### Improvements
4 |
5 | * Display errors inline instead of appending them at the bottom [#92](https://github.com/mtxr/SQLTools/issues/92)
6 |
7 |
8 | ### Fixes
9 |
10 | * Thread timeout is always triggered when streaming results output (MacOS) [#90](https://github.com/mtxr/SQLTools/issues/90)
11 | * stderr output is ignored when streaming results output [#91](https://github.com/mtxr/SQLTools/issues/91)
12 |
--------------------------------------------------------------------------------
/messages/v0.9.10.md:
--------------------------------------------------------------------------------
1 | ## v0.9.10 Notes
2 |
3 | ### Fixes
4 |
5 | * Added `focus_result` setting, closing issue [#183]
6 |
--------------------------------------------------------------------------------
/messages/v0.9.11.md:
--------------------------------------------------------------------------------
1 | ## v0.9.11 Notes
2 |
3 | ### Fixes
4 |
5 | * New command `ST: Format SQL All File` (internal name `st_format_all`) to format the entire file [#182]
6 |
--------------------------------------------------------------------------------
/messages/v0.9.12.md:
--------------------------------------------------------------------------------
1 | ## v0.9.12 Notes
2 |
3 | ### Improvements
4 |
5 | * Snowflake (`snowsql`) support
6 | * Added support for prompting of the connection parameters
7 | * Table/view/function selection quick panel is pre-populated with the text of the current selection
8 | * New command `ST: Refresh Connection Data` (internal name `st_refresh_connection_data`) to reload the list of database objects
9 | * Improve performance (completions and connection listing)
10 | * Add a configurable setting to enable/disable completion for certain selectors (`autocomplete_selectors_ignore`, `autocomplete_selectors_active`)
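A minimal user-settings override enabling these options might look like the
sketch below (the selector values are illustrative assumptions, not defaults
shipped with the plugin):

```javascript
// Users/SQLTools.sublime-settings (hypothetical example)
{
    // only offer completions inside SQL source
    "autocomplete_selectors_active": ["source.sql"],
    // never complete inside strings or comments
    "autocomplete_selectors_ignore": ["string.quoted", "comment"]
}
```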
11 |
12 |
13 | ### Fixes
14 |
15 | * Fix PostgreSQL v11 compatibility issue (functions were not listed)
16 | * Do not create an empty settings file [#144]
17 | * Bump timeout to 60 seconds for internal commands
18 | * Use python core logging for logs
19 |
--------------------------------------------------------------------------------
/messages/v0.9.2.md:
--------------------------------------------------------------------------------
1 | ## v0.9.2 Notes
2 |
3 | ### Improvements
4 |
5 | * Support PostgreSQL "password" setting in Connections file [#106](https://github.com/mtxr/SQLTools/issues/106)
6 | * Improved performance of smart completions
7 | * If configured, use stream output for saved and history queries
8 | * [MySQL] Add support for `Describe Function`
9 |
10 |
11 | ### Fixes
12 |
13 | * Fix Query History [#96](https://github.com/mtxr/SQLTools/issues/96)
14 | * Fix functionality of "clear_output" for panel output [#102](https://github.com/mtxr/SQLTools/issues/102)
15 |
--------------------------------------------------------------------------------
/messages/v0.9.3.md:
--------------------------------------------------------------------------------
1 | ## v0.9.3 Notes
2 |
3 | ### Improvements
4 |
5 | * [Oracle] Get identifiers (tables, columns) in all schemas [#112](https://github.com/mtxr/SQLTools/issues/112)
6 |
7 |
8 | Please restart Sublime Text after installing this update.
9 |
--------------------------------------------------------------------------------
/messages/v0.9.4.md:
--------------------------------------------------------------------------------
1 | ## v0.9.4 Notes
2 |
3 | ### Improvements
4 |
5 | * Execute All File [#114]
6 | https://github.com/mtxr/SQLTools/issues/114
7 | * Configurable top/bottom `show_query` placement
8 | https://github.com/mtxr/SQLTools/pull/116
9 | * [Oracle] In addition to tables views are listed as well (Describe Table works with views) [#115]
10 | https://github.com/mtxr/SQLTools/pull/115
11 |
--------------------------------------------------------------------------------
/messages/v0.9.5.md:
--------------------------------------------------------------------------------
1 | ## v0.9.5 Notes
2 |
3 | ### Improvements
4 |
5 | * New Feature: List and Insert Saved Queries (ctrl+e ctrl+i) [#126]
6 | * Better error messages if setting json file could not be parsed
7 |
8 | ### Fixes
9 |
10 | * Display completions for upper case aliases [#142]
11 | * Fix the display of status bar message when query is executed [#130]
12 | * Open Saved Queries executed the query if not previously connected to a database [#125]
13 |
--------------------------------------------------------------------------------
/messages/v0.9.6.md:
--------------------------------------------------------------------------------
1 | ## v0.9.6 Notes
2 |
3 | ### Fixes
4 |
5 | * [MySQL] Added basic backtick escaping of identifiers [#147]
6 |
--------------------------------------------------------------------------------
/messages/v0.9.7.md:
--------------------------------------------------------------------------------
1 | ## v0.9.7 Notes
2 |
3 | ### Fixes
4 |
5 | * Completions not working with identifiers containing $ symbol [#152]
6 |
--------------------------------------------------------------------------------
/messages/v0.9.8.md:
--------------------------------------------------------------------------------
1 | ## v0.9.8 Notes
2 |
3 | ### Improvements
4 |
5 | * Add MSSQL support via native `sqlcmd` CLI
6 | * Add more options for expanding the empty selection. Instead of config option `expand_to_paragraph`, introduce new option called `expand_to`, which can be configured to expand empty selection to: `line`, `paragraph` or `file`
7 | * General review/improvement of each DB config
8 | * Use the `encoding` option supplied in connection settings when writing to standard input and reading from standard output of CLI command
9 | * Changes how top level and per-query `options` apply to CLI invocations
10 | * Changes how `before` and `after` are applied (top level and per-query)
11 | * Introduction of new `execute` named query section which is used when executing statements with `ST: Execute` and friends
12 | * Now all named query formatting is done via `str.format()`, therefore all instances of `%s` in template strings got replaced with `{0}`, with back-patch support (i.e. `%s` still works for users who have it in their own user config)
13 | * Improve the way the output is shown - the output panel is not shown until the first output from DB CLI arrives
14 | * Add sample connections for `Vertica` and `Firebird`
15 | * [PostgreSQL] Connection options `host`, `port`, `username` are now optional (can be set via environment vars and other means)
16 | * [MySQL] Add configurable connection option `default-character-set`
17 | * [MySQL] Add `--no-auto-rehash` and `--compress` to improve MySQL startup time and improve latency on slow networks
18 | * [MySQL] Connection options `host`, `port`, `username` are now optional (can be set via `--defaults-extra-file` and `--login-path`)
19 | * [MySQL] Supply password via environment variable `MYSQL_PWD` to avoid security warning
20 | * [Oracle] Add ability to configure `NLS_LANG` to match the server encoding
21 | * [Oracle] Add support for quoted table and column names
22 | * [Oracle] Add support for functions & procedures completions, as well as getting functions & procedures definitions (both top level and those in packages)
23 | * [Vertica] Add support for quoted identifiers
24 | * Other minor improvements
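The placeholder migration above can be illustrated with a short sketch (the
query template is a made-up example, not one of the plugin's actual named
queries):

```python
# Hypothetical named-query template showing the v0.9.8 change:
# legacy '%s' placeholders were replaced by str.format()-style '{0}'.
legacy_tpl = "SELECT * FROM %s LIMIT 10"
new_tpl = "SELECT * FROM {0} LIMIT 10"

# old-style interpolation
assert legacy_tpl % "users" == "SELECT * FROM users LIMIT 10"
# new-style formatting produces the same result
assert new_tpl.format("users") == "SELECT * FROM users LIMIT 10"
```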
25 |
26 | ### Fixes
27 |
28 | * Remove unused settings option `unescape_quotes` from config
29 |
--------------------------------------------------------------------------------
/messages/v0.9.9.md:
--------------------------------------------------------------------------------
1 | ## v0.9.9 Notes
2 |
3 | ### Improvements
4 |
5 | * Use `utf-8` encoding if Connection `encoding` is not a valid python encoding
6 |
7 |
8 | ### Fixes
9 |
10 | * Exception when Connection `encoding` is set to null [#177]
11 |
--------------------------------------------------------------------------------