├── .gitignore ├── CHANGES ├── LICENSE ├── MANIFEST.in ├── README ├── ez_setup.py ├── schemasync ├── __init__.py ├── schemasync.py ├── syncdb.py └── utils.py ├── setup.py └── tests ├── __init__.py ├── test_all.py ├── test_regex.py ├── test_sync_columns.py ├── test_sync_constraints.py ├── test_sync_database.py ├── test_sync_tables.py └── test_utils.py /.gitignore: -------------------------------------------------------------------------------- 1 | build/ 2 | dist/ 3 | docs/ 4 | *egg-info 5 | *.pyc 6 | # Ignore all dotfiles, except for .gitignore 7 | .* 8 | !.gitignore 9 | .idea 10 | -------------------------------------------------------------------------------- /CHANGES: -------------------------------------------------------------------------------- 1 | == 0.9.4 / 2016-09-18 2 | * fixed bugs 3 | 4 | == 0.9.3 / 2016-09-17 5 | * merged all changes 6 | 7 | == 0.9.2 / 2010-12-05 8 | * Added MANIFEST.in so ez_setup.py is included for users without setuptools 9 | 10 | == 0.9.1 / 2010-02-16 11 | * Bug Fix: Table/Column COMMENT and AUTO_INCREMENT Regular Expression is Greedy 12 | * Bug Fix: Column sync misses column changes once sequence is in order 13 | * Improved file version Regular Expression 14 | * Added tests for all Regular Expressions 15 | 16 | == 0.9.0 / 2009-12-02 17 | * Updated SchemaObject dependency to version >= 0.5.2 18 | * Fixes: 19 | - Column DEFAULT now quoted if string 20 | - CHARACTER SET, COLLATE omitted if same as parent Table 21 | - Column definition syntax bug fixed 22 | 23 | == 0.9.0 / 2009-11-10 24 | * Initial Public Release 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2009-2016 Mitch Matuson 2 | Copyright 2016 Mustafa Ozgur 3 | 4 | Licensed under the Apache License, Version 2.0 (the "License"); 5 | you may not use this file except in compliance with the License.
6 | You may obtain a copy of the License at 7 | 8 | http://www.apache.org/licenses/LICENSE-2.0 9 | 10 | Unless required by applicable law or agreed to in writing, software 11 | distributed under the License is distributed on an "AS IS" BASIS, 12 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | See the License for the specific language governing permissions and 14 | limitations under the License. -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include README CHANGES *.py 2 | recursive-include schemasync *.py 3 | -------------------------------------------------------------------------------- /README: -------------------------------------------------------------------------------- 1 | This project is being maintained again. If you want to help us, we would really appreciate it. 2 | 3 | 4 | Schema Sync v0.9.4 5 | +++++++++++++++++++ 6 | a MySQL schema synchronization utility 7 | http://mmatuson.github.io/SchemaSync/ 8 | 9 | 10 | SYNOPSIS 11 | ======== 12 | schemasync [options] 13 | 14 | # source/target format: mysql://user:pass@host:port/database 15 | # output format: [_].YYYYMMDD.(patch|revert)[_].sql 16 | 17 | 18 | DESCRIPTION 19 | =========== 20 | Schema Sync will generate the SQL necessary to migrate the schema of a source database to a target database (patch script), as well as the SQL necessary to undo the changes after you apply them (revert script). 21 | 22 | * Schema Sync does not alter your database. It only generates the .sql files containing the differences. You must apply the changes. 23 | * Schema Sync does not yet recognize Tables or Columns that have been renamed. A rename will result in the old table or column being dropped and the new one added. 24 | * All ADD|MODIFY COLUMN statements have the AFTER (or FIRST) SQL syntax even if no move is required.
25 | * COMMENTS and AUTO_INCREMENT values are not synced by default. See help (-h) for details. 26 | * Partitions (MySQL 5.1+) are not yet supported. 27 | 28 | OPTIONS 29 | ================= 30 | -h, --help show this help message and exit 31 | -V, --version show version and exit. 32 | -r, --revision increment the migration script version number 33 | if a file with the same name already exists. 34 | -a, --sync-auto-inc sync the AUTO_INCREMENT value for each table. 35 | -c, --sync-comments sync the COMMENT field for all tables AND columns 36 | -D, --no-date removes the date from the file format 37 | --charset=CHARSET set the connection charset, default: utf8 38 | --tag=TAG tag the migration scripts as _. 39 | Valid characters include [A-Za-z0-9-_] 40 | --output-directory=OUTPUT_DIRECTORY 41 | directory to write the migration scripts. 42 | The default is current working directory. 43 | Must use absolute path if provided. 44 | --log-directory=LOG_DIRECTORY 45 | set the directory to write the log to. 46 | Must use absolute path if provided. 47 | Default is output directory. 48 | Log filename is schemasync.log 49 | 50 | 51 | Download and Install 52 | ==================== 53 | 54 | Prerequisites 55 | ------------- 56 | * To run Schema Sync, you need to have: 57 | - Python 2.4, 2.5, or 2.6 58 | - MySQL, version 5.0 or higher 59 | - MySQLdb, version 1.2.1p2 or higher 60 | - SchemaObject 0.5.7 or higher 61 | * To run the test suite, you need to install a copy of the Sakila Database, version 0.8 62 | 63 | Standard Installation 64 | --------------------- 65 | For installation instructions, see http://mmatuson.github.io/SchemaSync/install.htm 66 | 67 | 68 | Status & License 69 | ================ 70 | It is released under the Apache License, Version 2.0. 71 | 72 | You can obtain a copy of the latest source code from the Git repository, or fork it on GitHub.
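The migration-script naming scheme documented in the SYNOPSIS above can be sketched in a few lines. This is only an illustration, not the project's actual implementation (filenames are really built by create_pnames in schemasync/utils.py); the helper name patch_names and the assumed pattern database[_tag].YYYYMMDD.(patch|revert).sql are mine, since the README's placeholder markers appear to have been stripped.

```python
import datetime

def patch_names(db, tag=None, date_format="%Y%m%d", no_date=False):
    """Illustrative sketch (not the project's code) of the documented
    output format: database[_tag].YYYYMMDD.(patch|revert).sql
    -D/--no-date drops the date component; -r/--revision would append
    a version suffix when a file with the same name already exists."""
    base = db
    if tag:
        base += "_%s" % tag          # e.g. sakila_release42
    if not no_date:
        base += "." + datetime.date.today().strftime(date_format)
    return "%s.patch.sql" % base, "%s.revert.sql" % base

# patch_names("sakila", tag="release42", no_date=True)
# -> ("sakila_release42.patch.sql", "sakila_release42.revert.sql")
```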
73 | 74 | You can report bugs via the Schema Sync Issues page 75 | -------------------------------------------------------------------------------- /ez_setup.py: -------------------------------------------------------------------------------- 1 | #--coding:utf8--!-- 2 | #!python 3 | """Bootstrap setuptools installation 4 | 5 | If you want to use setuptools in your package's setup.py, just include this 6 | file in the same directory with it, and add this to the top of your setup.py:: 7 | 8 | from ez_setup import use_setuptools 9 | use_setuptools() 10 | 11 | If you want to require a specific version of setuptools, set a download 12 | mirror, or use an alternate download directory, you can do so by supplying 13 | the appropriate options to ``use_setuptools()``. 14 | 15 | This file can also be run as a script to install or upgrade setuptools. 16 | """ 17 | import sys 18 | DEFAULT_VERSION = "0.6c9" 19 | DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3] 20 | 21 | md5_data = { 22 | 'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca', 23 | 'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb', 24 | 'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b', 25 | 'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a', 26 | 'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618', 27 | 'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac', 28 | 'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5', 29 | 'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4', 30 | 'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c', 31 | 'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b', 32 | 'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27', 33 | 'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277', 34 | 'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa', 35 | 'setuptools-0.6c3-py2.4.egg': 
'e0ed74682c998bfb73bf803a50e7b71e', 36 | 'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e', 37 | 'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f', 38 | 'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2', 39 | 'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc', 40 | 'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167', 41 | 'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64', 42 | 'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d', 43 | 'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20', 44 | 'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab', 45 | 'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53', 46 | 'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2', 47 | 'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e', 48 | 'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372', 49 | 'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902', 50 | 'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de', 51 | 'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b', 52 | 'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03', 53 | 'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a', 54 | 'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6', 55 | 'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a', 56 | } 57 | 58 | import sys, os 59 | try: from hashlib import md5 60 | except ImportError: from md5 import md5 61 | 62 | def _validate_md5(egg_name, data): 63 | if egg_name in md5_data: 64 | digest = md5(data).hexdigest() 65 | if digest != md5_data[egg_name]: 66 | print >>sys.stderr, ( 67 | "md5 validation of %s failed! 
(Possible download problem?)" 68 | % egg_name 69 | ) 70 | sys.exit(2) 71 | return data 72 | 73 | def use_setuptools( 74 | version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, 75 | download_delay=15 76 | ): 77 | """Automatically find/download setuptools and make it available on sys.path 78 | 79 | `version` should be a valid setuptools version number that is available 80 | as an egg for download under the `download_base` URL (which should end with 81 | a '/'). `to_dir` is the directory where setuptools will be downloaded, if 82 | it is not already available. If `download_delay` is specified, it should 83 | be the number of seconds that will be paused before initiating a download, 84 | should one be required. If an older version of setuptools is installed, 85 | this routine will print a message to ``sys.stderr`` and raise SystemExit in 86 | an attempt to abort the calling script. 87 | """ 88 | was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules 89 | def do_download(): 90 | egg = download_setuptools(version, download_base, to_dir, download_delay) 91 | sys.path.insert(0, egg) 92 | import setuptools; setuptools.bootstrap_install_from = egg 93 | try: 94 | import pkg_resources 95 | except ImportError: 96 | return do_download() 97 | try: 98 | pkg_resources.require("setuptools>="+version); return 99 | except pkg_resources.VersionConflict, e: 100 | if was_imported: 101 | print >>sys.stderr, ( 102 | "The required version of setuptools (>=%s) is not available, and\n" 103 | "can't be installed while this script is running. Please install\n" 104 | " a more recent version first, using 'easy_install -U setuptools'." 
105 | "\n\n(Currently using %r)" 106 | ) % (version, e.args[0]) 107 | sys.exit(2) 108 | else: 109 | del pkg_resources, sys.modules['pkg_resources'] # reload ok 110 | return do_download() 111 | except pkg_resources.DistributionNotFound: 112 | return do_download() 113 | 114 | def download_setuptools( 115 | version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, 116 | delay = 15 117 | ): 118 | """Download setuptools from a specified location and return its filename 119 | 120 | `version` should be a valid setuptools version number that is available 121 | as an egg for download under the `download_base` URL (which should end 122 | with a '/'). `to_dir` is the directory where the egg will be downloaded. 123 | `delay` is the number of seconds to pause before an actual download attempt. 124 | """ 125 | import urllib2, shutil 126 | egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3]) 127 | url = download_base + egg_name 128 | saveto = os.path.join(to_dir, egg_name) 129 | src = dst = None 130 | if not os.path.exists(saveto): # Avoid repeated downloads 131 | try: 132 | from distutils import log 133 | if delay: 134 | log.warn(""" 135 | --------------------------------------------------------------------------- 136 | This script requires setuptools version %s to run (even to display 137 | help). I will attempt to download it for you (from 138 | %s), but 139 | you may need to enable firewall access for this script first. 140 | I will start the download in %d seconds. 141 | 142 | (Note: if this machine does not have network access, please obtain the file 143 | 144 | %s 145 | 146 | and place it in this directory before rerunning this script.) 
147 | ---------------------------------------------------------------------------""", 148 | version, download_base, delay, url 149 | ); from time import sleep; sleep(delay) 150 | log.warn("Downloading %s", url) 151 | src = urllib2.urlopen(url) 152 | # Read/write all in one block, so we don't create a corrupt file 153 | # if the download is interrupted. 154 | data = _validate_md5(egg_name, src.read()) 155 | dst = open(saveto,"wb"); dst.write(data) 156 | finally: 157 | if src: src.close() 158 | if dst: dst.close() 159 | return os.path.realpath(saveto) 160 | 161 | 162 | 163 | 164 | 165 | 166 | 167 | 168 | 169 | 170 | 171 | 172 | 173 | 174 | 175 | 176 | 177 | 178 | 179 | 180 | 181 | 182 | 183 | 184 | 185 | 186 | 187 | 188 | 189 | 190 | 191 | 192 | 193 | 194 | 195 | 196 | def main(argv, version=DEFAULT_VERSION): 197 | """Install or upgrade setuptools and EasyInstall""" 198 | try: 199 | import setuptools 200 | except ImportError: 201 | egg = None 202 | try: 203 | egg = download_setuptools(version, delay=0) 204 | sys.path.insert(0,egg) 205 | from setuptools.command.easy_install import main 206 | return main(list(argv)+[egg]) # we're done here 207 | finally: 208 | if egg and os.path.exists(egg): 209 | os.unlink(egg) 210 | else: 211 | if setuptools.__version__ == '0.0.1': 212 | print >>sys.stderr, ( 213 | "You have an obsolete version of setuptools installed. Please\n" 214 | "remove it from your system entirely before rerunning this script." 
215 | ) 216 | sys.exit(2) 217 | 218 | req = "setuptools>="+version 219 | import pkg_resources 220 | try: 221 | pkg_resources.require(req) 222 | except pkg_resources.VersionConflict: 223 | try: 224 | from setuptools.command.easy_install import main 225 | except ImportError: 226 | from easy_install import main 227 | main(list(argv)+[download_setuptools(delay=0)]) 228 | sys.exit(0) # try to force an exit 229 | else: 230 | if argv: 231 | from setuptools.command.easy_install import main 232 | main(argv) 233 | else: 234 | print "Setuptools version",version,"or greater has been installed." 235 | print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)' 236 | 237 | def update_md5(filenames): 238 | """Update our built-in md5 registry""" 239 | 240 | import re 241 | 242 | for name in filenames: 243 | base = os.path.basename(name) 244 | f = open(name,'rb') 245 | md5_data[base] = md5(f.read()).hexdigest() 246 | f.close() 247 | 248 | data = [" %r: %r,\n" % it for it in md5_data.items()] 249 | data.sort() 250 | repl = "".join(data) 251 | 252 | import inspect 253 | srcfile = inspect.getsourcefile(sys.modules[__name__]) 254 | f = open(srcfile, 'rb'); src = f.read(); f.close() 255 | 256 | match = re.search("\nmd5_data = {\n([^}]+)}", src) 257 | if not match: 258 | print >>sys.stderr, "Internal error!" 
259 | sys.exit(2) 260 | 261 | src = src[:match.start(1)] + repl + src[match.end(1):] 262 | f = open(srcfile,'w') 263 | f.write(src) 264 | f.close() 265 | 266 | 267 | if __name__=='__main__': 268 | if len(sys.argv)>2 and sys.argv[1]=='--md5update': 269 | update_md5(sys.argv[2:]) 270 | else: 271 | main(sys.argv[1:]) 272 | 273 | 274 | 275 | 276 | 277 | 278 | -------------------------------------------------------------------------------- /schemasync/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hhyo/SchemaSync/854220fbd11770c31f382ae6a144bbcf103a09fb/schemasync/__init__.py -------------------------------------------------------------------------------- /schemasync/schemasync.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import re 5 | import sys 6 | import os 7 | import logging 8 | import datetime 9 | import optparse 10 | import syncdb 11 | import utils 12 | import warnings 13 | 14 | __author__ = """ 15 | Mitch Matuson 16 | Mustafa Ozgur 17 | """ 18 | __copyright__ = """ 19 | Copyright 2009-2016 Mitch Matuson 20 | Copyright 2016 Mustafa Ozgur 21 | """ 22 | __version__ = "0.9.4" 23 | __license__ = "Apache 2.0" 24 | 25 | # suppress MySQLdb DeprecationWarning in Python 2.6 26 | warnings.simplefilter("ignore", DeprecationWarning) 27 | 28 | try: 29 | import MySQLdb 30 | except ImportError: 31 | print "Error: Missing Required Dependency MySQLdb."
32 | sys.exit(1) 33 | 34 | try: 35 | import schemaobject 36 | except ImportError: 37 | print "Error: Missing Required Dependency SchemaObject" 38 | sys.exit(1) 39 | 40 | APPLICATION_VERSION = __version__ 41 | APPLICATION_NAME = "Schema Sync" 42 | LOG_FILENAME = "schemasync.log" 43 | DATE_FORMAT = "%Y%m%d" 44 | TPL_DATE_FORMAT = "%a, %b %d, %Y" 45 | PATCH_TPL = """-- 46 | -- Schema Sync %(app_version)s %(type)s 47 | -- Created: %(created)s 48 | -- Server Version: %(server_version)s 49 | -- Apply To: %(target_host)s/%(target_database)s 50 | -- 51 | %(data)s""" 52 | 53 | 54 | def parse_cmd_line(fn): 55 | """Parse the command line options and pass them to the application""" 56 | 57 | def processor(): 58 | usage = """ 59 | %prog [options] 60 | source/target format: mysql://user:pass@host:port/database""" 61 | description = """ 62 | A MySQL Schema Synchronization Utility 63 | """ 64 | parser = optparse.OptionParser(usage=usage, 65 | description=description) 66 | 67 | parser.add_option("-V", "--version", 68 | action="store_true", 69 | dest="show_version", 70 | default=False, 71 | help="show version and exit.") 72 | 73 | parser.add_option("-r", "--revision", 74 | action="store_true", 75 | dest="version_filename", 76 | default=False, 77 | help=("increment the migration script version number " 78 | "if a file with the same name already exists.")) 79 | 80 | parser.add_option("-a", "--sync-auto-inc", 81 | dest="sync_auto_inc", 82 | action="store_true", 83 | default=False, 84 | help="sync the AUTO_INCREMENT value for each table.") 85 | 86 | parser.add_option("-c", "--sync-comments", 87 | dest="sync_comments", 88 | action="store_true", 89 | default=False, 90 | help=("sync the COMMENT field for all " 91 | "tables AND columns")) 92 | 93 | parser.add_option("-D", "--no-date", 94 | dest="no_date", 95 | action="store_true", 96 | default=False, 97 | help="removes the date from the file format ") 98 | 99 | parser.add_option("--charset", 100 | dest="charset", 101 | default='utf8', 102 | 
help="set the connection charset, default: utf8") 103 | 104 | parser.add_option("--tag", 105 | dest="tag", 106 | help=("tag the migration scripts as _." 107 | " Valid characters include [A-Za-z0-9-_]")) 108 | 109 | parser.add_option("--output-directory", 110 | dest="output_directory", 111 | default=os.getcwd(), 112 | help=("directory to write the migration scripts. " 113 | "The default is current working directory. " 114 | "Must use absolute path if provided.")) 115 | 116 | parser.add_option("--log-directory", 117 | dest="log_directory", 118 | help=("set the directory to write the log to. " 119 | "Must use absolute path if provided. " 120 | "Default is output directory. " 121 | "Log filename is schemasync.log")) 122 | 123 | options, args = parser.parse_args(sys.argv[1:]) 124 | 125 | if options.show_version: 126 | print APPLICATION_NAME, __version__ 127 | return 0 128 | 129 | if (not args) or (len(args) != 2): 130 | parser.print_help() 131 | return 0 132 | 133 | return fn(*args, **dict(version_filename=options.version_filename, 134 | output_directory=options.output_directory, 135 | log_directory=options.log_directory, 136 | no_date=options.no_date, 137 | tag=options.tag, 138 | charset=options.charset, 139 | sync_auto_inc=options.sync_auto_inc, 140 | sync_comments=options.sync_comments)) 141 | 142 | return processor 143 | 144 | 145 | def app(sourcedb='', targetdb='', version_filename=False, 146 | output_directory=None, log_directory=None, no_date=False, 147 | tag=None, charset=None, sync_auto_inc=False, sync_comments=False): 148 | """Main Application""" 149 | 150 | options = locals() 151 | 152 | if not os.path.isabs(output_directory): 153 | print "Error: Output directory must be an absolute path. Quitting." 154 | return 1 155 | 156 | if not os.path.isdir(output_directory): 157 | print "Error: Output directory does not exist. Quitting."
158 | return 1 159 | 160 | if not log_directory or not os.path.isdir(log_directory): 161 | if log_directory: 162 | print "Log directory does not exist, writing log to %s" % output_directory 163 | log_directory = output_directory 164 | 165 | logging.basicConfig(filename=os.path.join(log_directory, LOG_FILENAME), 166 | level=logging.INFO, 167 | format='[%(levelname)s %(asctime)s] %(message)s') 168 | 169 | console = logging.StreamHandler() 170 | console.setLevel(logging.DEBUG) 171 | if len(logging.getLogger('').handlers) <= 1: 172 | logging.getLogger('').addHandler(console) 173 | 174 | if not sourcedb: 175 | logging.error("Source database URL not provided. Exiting.") 176 | return 1 177 | 178 | source_info = schemaobject.connection.parse_database_url(sourcedb) 179 | if not source_info: 180 | logging.error("Invalid source database URL format. Exiting.") 181 | return 1 182 | 183 | if not source_info['protocol'] == 'mysql': 184 | logging.error("Source database must be MySQL. Exiting.") 185 | return 1 186 | 187 | if 'db' not in source_info: 188 | logging.error("Source database name not provided. Exiting.") 189 | return 1 190 | 191 | if not targetdb: 192 | logging.error("Target database URL not provided. Exiting.") 193 | return 1 194 | 195 | target_info = schemaobject.connection.parse_database_url(targetdb) 196 | if not target_info: 197 | logging.error("Invalid target database URL format. Exiting.") 198 | return 1 199 | 200 | if not target_info['protocol'] == 'mysql': 201 | logging.error("Target database must be MySQL. Exiting.") 202 | return 1 203 | 204 | if 'db' not in target_info: 205 | logging.error("Target database name not provided. 
Exiting.") 206 | return 1 207 | 208 | if source_info['db'] == '*' and target_info['db'] == '*': 209 | from schemaobject.connection import DatabaseConnection 210 | 211 | sourcedb_none = sourcedb[:-1] 212 | targetdb_none = targetdb[:-1] 213 | connection = DatabaseConnection() 214 | connection.connect(sourcedb_none, charset='utf8') 215 | sql_schema = """ 216 | SELECT SCHEMA_NAME FROM information_schema.SCHEMATA 217 | WHERE SCHEMA_NAME NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys') 218 | """ 219 | schemas = connection.execute(sql_schema) 220 | for schema_info in schemas: 221 | db = schema_info['SCHEMA_NAME'] 222 | sourcedb = sourcedb_none + db 223 | targetdb = targetdb_none + db 224 | try: 225 | app(sourcedb=sourcedb, targetdb=targetdb, version_filename=version_filename, 226 | output_directory=output_directory, log_directory=log_directory, no_date=no_date, 227 | tag=tag, charset=charset, sync_auto_inc=sync_auto_inc, sync_comments=sync_comments) 228 | except schemaobject.connection.DatabaseError, e: 229 | logging.error("MySQL Error %d: %s (Ignore)" % (e.args[0], e.args[1])) 230 | return 1 231 | 232 | source_obj = schemaobject.SchemaObject(sourcedb, charset) 233 | target_obj = schemaobject.SchemaObject(targetdb, charset) 234 | 235 | if utils.compare_version(source_obj.version, '5.0.0') < 0: 236 | logging.error("%s requires MySQL version 5.0+ (source is v%s)" 237 | % (APPLICATION_NAME, source_obj.version)) 238 | return 1 239 | 240 | if utils.compare_version(target_obj.version, '5.0.0') < 0: 241 | logging.error("%s requires MySQL version 5.0+ (target is v%s)" 242 | % (APPLICATION_NAME, target_obj.version)) 243 | return 1 244 | 245 | # data transformation filters 246 | filters = (lambda d: utils.REGEX_MULTI_SPACE.sub(' ', d), 247 | lambda d: utils.REGEX_DISTANT_SEMICOLIN.sub(';', d), 248 | lambda d: utils.REGEX_SEMICOLON_EXPLODE_TO_NEWLINE.sub(";\n", d)) 249 | 250 | # Information about this run, used in the patch/revert templates 251 | ctx = 
dict(app_version=APPLICATION_VERSION, 252 | server_version=target_obj.version, 253 | target_host=target_obj.host, 254 | target_database=target_obj.selected.name, 255 | created=datetime.datetime.now().strftime(TPL_DATE_FORMAT)) 256 | 257 | p_fname, r_fname = utils.create_pnames(target_obj.selected.name, 258 | tag=tag, 259 | date_format=DATE_FORMAT, 260 | no_date=no_date) 261 | 262 | ctx['type'] = "Patch Script" 263 | p_buffer = utils.PatchBuffer(name=os.path.join(output_directory, p_fname), 264 | filters=filters, tpl=PATCH_TPL, ctx=ctx.copy(), 265 | version_filename=version_filename) 266 | 267 | ctx['type'] = "Revert Script" 268 | r_buffer = utils.PatchBuffer(name=os.path.join(output_directory, r_fname), 269 | filters=filters, tpl=PATCH_TPL, ctx=ctx.copy(), 270 | version_filename=version_filename) 271 | 272 | db_selected = False 273 | for patch, revert in syncdb.sync_schema(source_obj.selected, 274 | target_obj.selected, options): 275 | if patch and revert: 276 | if not db_selected: 277 | p_buffer.write(target_obj.selected.select() + '\n') 278 | r_buffer.write(target_obj.selected.select() + '\n') 279 | p_buffer.write(target_obj.selected.fk_checks(0) + '\n') 280 | r_buffer.write(target_obj.selected.fk_checks(0) + '\n') 281 | db_selected = True 282 | 283 | p_buffer.write(patch + '\n') 284 | r_buffer.write(revert + '\n') 285 | 286 | if db_selected: 287 | p_buffer.write(target_obj.selected.fk_checks(1) + '\n') 288 | r_buffer.write(target_obj.selected.fk_checks(1) + '\n') 289 | 290 | for patch, revert in syncdb.sync_views(source_obj.selected, target_obj.selected): 291 | if patch and revert: 292 | if not db_selected: 293 | p_buffer.write(target_obj.selected.select() + '\n') 294 | r_buffer.write(target_obj.selected.select() + '\n') 295 | db_selected = True 296 | 297 | p_buffer.write(patch + '\n') 298 | r_buffer.write(revert + '\n') 299 | 300 | for patch, revert in syncdb.sync_triggers(source_obj.selected, target_obj.selected): 301 | if patch and revert: 302 | if not 
db_selected: 303 | p_buffer.write(target_obj.selected.select() + '\n') 304 | r_buffer.write(target_obj.selected.select() + '\n') 305 | db_selected = True 306 | 307 | p_buffer.write(patch + '\n') 308 | r_buffer.write(revert + '\n') 309 | 310 | for patch, revert in syncdb.sync_procedures(source_obj.selected, target_obj.selected): 311 | if patch and revert: 312 | 313 | if not db_selected: 314 | p_buffer.write(target_obj.selected.select() + '\n') 315 | r_buffer.write(target_obj.selected.select() + '\n') 316 | p_buffer.write(target_obj.selected.fk_checks(0) + '\n') 317 | r_buffer.write(target_obj.selected.fk_checks(0) + '\n') 318 | db_selected = True 319 | 320 | p_buffer.write(patch + '\n') 321 | r_buffer.write(revert + '\n') 322 | 323 | if db_selected: 324 | p_buffer.write(target_obj.selected.fk_checks(1) + '\n') 325 | r_buffer.write(target_obj.selected.fk_checks(1) + '\n') 326 | 327 |
if not p_buffer.modified: 358 | logging.info(("No migration scripts written." 359 | " mysql://%s/%s and mysql://%s/%s were in sync.") % 360 | (source_obj.host, source_obj.selected.name, 361 | target_obj.host, target_obj.selected.name)) 362 | else: 363 | try: 364 | p_buffer.save() 365 | r_buffer.save() 366 | logging.info("Migration scripts created for mysql://%s/%s\n" 367 | "Patch Script: %s\nRevert Script: %s" 368 | % (target_obj.host, target_obj.selected.name, 369 | p_buffer.name, r_buffer.name)) 370 | except OSError, e: 371 | p_buffer.delete() 372 | r_buffer.delete() 373 | logging.error("Failed writing migration scripts. %s" % e) 374 | return 1 375 | 376 | return 0 377 | 378 | 379 | def main(): 380 | try: 381 | sys.exit(parse_cmd_line(app)()) 382 | except schemaobject.connection.DatabaseError, e: 383 | logging.error("MySQL Error %d: %s" % (e.args[0], e.args[1])) 384 | sys.exit(1) 385 | except KeyboardInterrupt: 386 | print "Sync Interrupted, Exiting." 387 | sys.exit(1) 388 | 389 | 390 | if __name__ == "__main__": 391 | main() 392 | -------------------------------------------------------------------------------- /schemasync/syncdb.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from utils import REGEX_TABLE_AUTO_INC, REGEX_TABLE_COMMENT 3 | 4 | 5 | def sync_schema(fromdb, todb, options): 6 | """Generate the SQL statements needed to sync two Databases and all of 7 | their children (Tables, Columns, Indexes, Foreign Keys) 8 | 9 | Args: 10 | fromdb: A SchemaObject Schema Instance. 11 | todb: A SchemaObject Schema Instance. 12 | options: dictionary of options to use when syncing schemas 13 | sync_auto_inc: Bool, sync auto inc value throughout the schema? 14 | sync_comments: Bool, sync comment fields throughout the schema? 15 | 16 | Yields: 17 | A tuple (patch, revert) containing the next SQL statement needed 18 | to migrate fromdb to todb.
The tuple will always contain 2 strings, 19 | even if they are empty. 20 | """ 21 | p, r = sync_database_options(fromdb, todb) 22 | 23 | if p and r: 24 | yield ( 25 | "%s %s;" % (todb.alter(), p), 26 | "%s %s;" % (todb.alter(), r) 27 | ) 28 | 29 | for p, r in sync_created_tables(fromdb.tables, todb.tables, 30 | sync_auto_inc=options['sync_auto_inc'], 31 | sync_comments=options['sync_comments']): 32 | yield p, r 33 | 34 | for p, r in sync_dropped_tables(fromdb.tables, todb.tables, 35 | sync_auto_inc=options['sync_auto_inc'], 36 | sync_comments=options['sync_comments']): 37 | yield p, r 38 | 39 | for t in fromdb.tables: 40 | if t not in todb.tables: 41 | continue 42 | 43 | from_table = fromdb.tables[t] 44 | to_table = todb.tables[t] 45 | 46 | plist = [] 47 | rlist = [] 48 | for p, r in sync_table(from_table, to_table, options): 49 | plist.append(p) 50 | rlist.append(r) 51 | 52 | if plist and rlist: 53 | p = "%s %s;" % (to_table.alter(), ', '.join(plist)) 54 | r = "%s %s;" % (to_table.alter(), ', '.join(rlist)) 55 | yield p, r 56 | 57 | 58 | def sync_table(from_table, to_table, options): 59 | """Generate the SQL statements needed to sync two Tables and all of their 60 | children (Columns, Indexes, Foreign Keys) 61 | 62 | Args: 63 | from_table: A SchemaObject TableSchema Instance. 64 | to_table: A SchemaObject TableSchema Instance. 65 | options: dictionary of options to use when syncing schemas 66 | sync_auto_inc: Bool, sync auto inc value throughout the table? 67 | sync_comments: Bool, sync comment fields throughout the table?
68 | 69 | Yields: 70 | A tuple (patch, revert) containing the next SQL statements 71 | """ 72 | for p, r in sync_created_columns(from_table.columns, 73 | to_table.columns, 74 | sync_comments=options['sync_comments']): 75 | yield (p, r) 76 | 77 | for p, r in sync_dropped_columns(from_table.columns, 78 | to_table.columns, 79 | sync_comments=options['sync_comments']): 80 | yield (p, r) 81 | 82 | if from_table and to_table: 83 | for p, r in sync_modified_columns(from_table.columns, 84 | to_table.columns, 85 | sync_comments=options['sync_comments']): 86 | yield (p, r) 87 | 88 | # add new indexes, then compare existing indexes for changes 89 | for p, r in sync_created_constraints(from_table.indexes, to_table.indexes): 90 | yield (p, r) 91 | 92 | for p, r in sync_modified_constraints(from_table.indexes, to_table.indexes): 93 | yield (p, r) 94 | 95 | # we'll drop indexes after we process foreign keys... 96 | 97 | # add new foreign keys and compare existing fks for changes 98 | for p, r in sync_created_constraints(from_table.foreign_keys, to_table.foreign_keys): 99 | yield (p, r) 100 | 101 | for p, r in sync_modified_constraints(from_table.foreign_keys, to_table.foreign_keys): 102 | yield (p, r) 103 | 104 | for p, r in sync_dropped_constraints(from_table.foreign_keys, to_table.foreign_keys): 105 | yield (p, r) 106 | 107 | # drop remaining indexes 108 | for p, r in sync_dropped_constraints(from_table.indexes, to_table.indexes): 109 | yield (p, r) 110 | 111 | # end the alter table syntax with the changed table options 112 | p, r = sync_table_options(from_table, to_table, 113 | sync_auto_inc=options['sync_auto_inc'], 114 | sync_comments=options['sync_comments']) 115 | if p: 116 | yield (p, r) 117 | 118 | 119 | def sync_database_options(from_db, to_db): 120 | """Generate the SQL statements needed to modify the Database options 121 | of the target schema (patch), and restore them to their previous 122 | definition (revert) 123 | 124 | Args: 125 | from_db: A SchemaObject 
DatabaseSchema Instance. 126 | to_db: A SchemaObject DatabaseSchema Instance. 127 | 128 | Returns: 129 | A tuple (patch, revert) containing the SQL statements 130 | A tuple of empty strings will be returned if no changes were found 131 | """ 132 | p = [] 133 | r = [] 134 | 135 | for opt in from_db.options: 136 | if from_db.options[opt] != to_db.options[opt]: 137 | p.append(from_db.options[opt].create()) 138 | r.append(to_db.options[opt].create()) 139 | 140 | if p: 141 | return ' '.join(p), ' '.join(r) 142 | else: 143 | return '', '' 144 | 145 | 146 | def sync_created_tables(from_tables, to_tables, 147 | sync_auto_inc=False, sync_comments=False): 148 | """Generate the SQL statements needed to CREATE Tables in the target 149 | schema (patch), and remove them (revert) 150 | 151 | Args: 152 | from_tables: An OrderedDict of SchemaObject.TableSchema Instances. 153 | to_tables: An OrderedDict of SchemaObject.TableSchema Instances. 154 | sync_auto_inc: Bool (default=False), sync auto increment for each table? 155 | sync_comments: Bool (default=False), sync the comment field for the table? 156 | 157 | Yields: 158 | A tuple (patch, revert) containing the next SQL statements 159 | """ 160 | for t in from_tables: 161 | if t not in to_tables: 162 | p, r = from_tables[t].create(), from_tables[t].drop() 163 | if not sync_auto_inc: 164 | p = REGEX_TABLE_AUTO_INC.sub('', p) 165 | r = REGEX_TABLE_AUTO_INC.sub('', r) 166 | if not sync_comments: 167 | p = REGEX_TABLE_COMMENT.sub('', p) 168 | r = REGEX_TABLE_COMMENT.sub('', r) 169 | 170 | yield p, r 171 | 172 | 173 | def sync_dropped_tables(from_tables, to_tables, 174 | sync_auto_inc=False, sync_comments=False): 175 | """Generate the SQL statements needed to DROP Tables in the target 176 | schema (patch), and restore them to their previous definition (revert) 177 | 178 | Args: 179 | from_tables: An OrderedDict of SchemaObject.TableSchema Instances. 180 | to_tables: An OrderedDict of SchemaObject.TableSchema Instances.
181 | sync_auto_inc: Bool (default=False), sync auto increment for each table? 182 | sync_comments: Bool (default=False), sync the comment field for the table? 183 | 184 | Yields: 185 | A tuple (patch, revert) containing the next SQL statements 186 | """ 187 | for t in to_tables: 188 | if t not in from_tables: 189 | p, r = to_tables[t].drop(), to_tables[t].create() 190 | if not sync_auto_inc: 191 | p = REGEX_TABLE_AUTO_INC.sub('', p) 192 | r = REGEX_TABLE_AUTO_INC.sub('', r) 193 | if not sync_comments: 194 | p = REGEX_TABLE_COMMENT.sub('', p) 195 | r = REGEX_TABLE_COMMENT.sub('', r) 196 | 197 | yield p, r 198 | 199 | 200 | def sync_table_options(from_table, to_table, 201 | sync_auto_inc=False, sync_comments=False): 202 | """Generate the SQL statements needed to modify the Table options 203 | of the target table (patch), and restore them to their previous 204 | definition (revert) 205 | 206 | Args: 207 | from_table: A SchemaObject TableSchema Instance. 208 | to_table: A SchemaObject TableSchema Instance. 209 | sync_auto_inc: Bool, sync the table's auto increment value? 210 | sync_comments: Bool, sync the table's comment field? 211 | 212 | Returns: 213 | A tuple (patch, revert) containing the SQL statements.
214 | A tuple of empty strings will be returned if no changes were found 215 | """ 216 | p = [] 217 | r = [] 218 | 219 | for opt in from_table.options: 220 | if (opt == 'auto_increment' and not sync_auto_inc) or (opt == 'comment' and not sync_comments): 221 | continue 222 | 223 | if from_table.options[opt] != to_table.options[opt]: 224 | p.append(from_table.options[opt].create()) 225 | r.append(to_table.options[opt].create()) 226 | 227 | if p: 228 | return ' '.join(p), ' '.join(r) 229 | else: 230 | return '', '' 231 | 232 | 233 | def get_previous_item(lst, item): 234 | """ Given an item, find its previous item in the list. 235 | If the item appears more than once in the list, the first occurrence is used. 236 | 237 | Args: 238 | lst: the list to search 239 | item: the item we want to find the previous item for 240 | 241 | Returns: The previous item or None if not found. 242 | """ 243 | try: 244 | i = lst.index(item) 245 | if i > 0: 246 | return lst[i - 1] 247 | except (IndexError, ValueError): 248 | pass 249 | 250 | return None 251 | 252 | 253 | def sync_created_columns(from_cols, to_cols, sync_comments=False): 254 | """Generate the SQL statements needed to ADD Columns to the target 255 | table (patch) and remove them (revert) 256 | 257 | Args: 258 | from_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 259 | to_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 260 | sync_comments: Bool (default=False), sync the comment field for each column?
261 | 262 | Yields: 263 | A tuple (patch, revert) containing the next SQL statements 264 | """ 265 | for c in from_cols: 266 | if c not in to_cols: 267 | fprev = get_previous_item(from_cols.keys(), c) 268 | yield (from_cols[c].create(after=fprev, with_comment=sync_comments), 269 | from_cols[c].drop()) 270 | 271 | 272 | def sync_dropped_columns(from_cols, to_cols, sync_comments=False): 273 | """Generate the SQL statements needed to DROP Columns in the target 274 | table (patch) and restore them to their previous definition (revert) 275 | 276 | Args: 277 | from_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 278 | to_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 279 | sync_comments: Bool (default=False), sync the comment field for each column? 280 | 281 | Yields: 282 | A tuple (patch, revert) containing the next SQL statements 283 | """ 284 | for c in to_cols: 285 | if c not in from_cols: 286 | tprev = get_previous_item(to_cols.keys(), c) 287 | yield (to_cols[c].drop(), 288 | to_cols[c].create(after=tprev, with_comment=sync_comments)) 289 | 290 | 291 | def sync_modified_columns(from_cols, to_cols, sync_comments=False): 292 | """Generate the SQL statements needed to MODIFY Columns in the target 293 | table (patch) and restore them to their previous definition (revert) 294 | 295 | Args: 296 | from_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 297 | to_cols: An OrderedDict of SchemaObject.ColumnSchema Instances. 298 | sync_comments: Bool (default=False), sync the comment field for each column?
299 | 300 | Yields: 301 | A tuple (patch, revert) containing the next SQL statements 302 | """ 303 | # find the column names common to each table 304 | # and retain the order in which they appear 305 | from_names = [c for c in from_cols.keys() if c in to_cols] 306 | to_names = [c for c in to_cols.keys() if c in from_cols] 307 | 308 | for from_idx, name in enumerate(from_names): 309 | 310 | to_idx = to_names.index(name) 311 | 312 | if ((from_idx != to_idx) or 313 | (to_cols[name] != from_cols[name]) or 314 | (sync_comments and (from_cols[name].comment != to_cols[name].comment))): 315 | 316 | # move the element to its correct spot as we do comparisons 317 | # this will prevent a domino effect of off-by-one false positives. 318 | if from_names.index(to_names[from_idx]) > to_idx: 319 | name = to_names[from_idx] 320 | from_names.remove(name) 321 | from_names.insert(from_idx, name) 322 | else: 323 | to_names.remove(name) 324 | to_names.insert(from_idx, name) 325 | 326 | fprev = get_previous_item(from_cols.keys(), name) 327 | tprev = get_previous_item(to_cols.keys(), name) 328 | yield (from_cols[name].modify(after=fprev, with_comment=sync_comments), 329 | to_cols[name].modify(after=tprev, with_comment=sync_comments)) 330 | 331 | 332 | def sync_created_constraints(src, dest): 333 | """Generate the SQL statements needed to ADD constraints 334 | (indexes, foreign keys) to the target table (patch) 335 | and remove them (revert) 336 | 337 | Args: 338 | src: An OrderedDict of SchemaObject IndexSchema 339 | or ForeignKeySchema Instances 340 | dest: An OrderedDict of SchemaObject IndexSchema 341 | or ForeignKeySchema Instances 342 | 343 | Yields: 344 | A tuple (patch, revert) containing the next SQL statements 345 | """ 346 | for c in src: 347 | if c not in dest: 348 | yield src[c].create(), src[c].drop() 349 | 350 | 351 | def sync_dropped_constraints(src, dest): 352 | """Generate the SQL statements needed to DROP constraints 353 | (indexes, foreign keys) from the
target table (patch) 354 | and re-add them (revert) 355 | 356 | Args: 357 | src: An OrderedDict of SchemaObject IndexSchema 358 | or ForeignKeySchema Instances 359 | dest: An OrderedDict of SchemaObject IndexSchema 360 | or ForeignKeySchema Instances 361 | 362 | Yields: 363 | A tuple (patch, revert) containing the next SQL statements 364 | """ 365 | for c in dest: 366 | if c not in src: 367 | yield dest[c].drop(), dest[c].create() 368 | 369 | 370 | def sync_modified_constraints(src, dest): 371 | """Generate the SQL statements needed to modify 372 | constraints (indexes, foreign keys) in the target table (patch) 373 | and restore them to their previous definition (revert) 374 | 375 | Two tuples will be generated for every change needed. 376 | Constraints must be dropped and re-added, since you cannot modify them. 377 | 378 | Args: 379 | src: An OrderedDict of SchemaObject IndexSchema 380 | or ForeignKeySchema Instances 381 | dest: An OrderedDict of SchemaObject IndexSchema 382 | or ForeignKeySchema Instances 383 | 384 | Yields: 385 | A tuple (patch, revert) containing the next SQL statements 386 | """ 387 | for c in src: 388 | if c in dest and src[c] != dest[c]: 389 | yield dest[c].drop(), dest[c].drop() 390 | yield src[c].create(), dest[c].create() 391 | 392 | 393 | def sync_views(fromdb, todb): 394 | src = fromdb.views 395 | dest = todb.views 396 | 397 | for p, r in sync_created_views(src, dest): 398 | yield p, r 399 | 400 | for p, r in sync_dropped_views(src, dest): 401 | yield p, r 402 | 403 | for p, r in sync_modified_views(src, dest): 404 | yield p, r 405 | 406 | 407 | def sync_created_views(src, dest): 408 | for v in src: 409 | if v not in dest: 410 | yield src[v].create(), src[v].drop() 411 | 412 | 413 | def sync_dropped_views(src, dest): 414 | for v in dest: 415 | if v not in src: 416 | yield dest[v].drop(), dest[v].create() 417 | 418 | 419 | def sync_modified_views(src, dest): 420 | for v in src: 421 | if v in dest and src[v] != dest[v]: 422 | yield
src[v].modify(), dest[v].modify() 423 | 424 | 425 | def sync_procedures(fromdb, todb): 426 | src = fromdb.procedures 427 | dest = todb.procedures 428 | 429 | for p, r in sync_created_procedures(src, dest): 430 | yield p, r 431 | 432 | for p, r in sync_dropped_procedures(src, dest): 433 | yield p, r 434 | 435 | for p, r in sync_modified_procedures(src, dest): 436 | yield p, r 437 | 438 | 439 | def sync_created_procedures(src, dest): 440 | for p in src: 441 | if p not in dest: 442 | yield src[p].create(), src[p].drop() 443 | 444 | 445 | def sync_dropped_procedures(src, dest): 446 | for p in dest: 447 | if p not in src: 448 | yield dest[p].drop(), dest[p].create() 449 | 450 | 451 | def sync_modified_procedures(src, dest): 452 | for p in src: 453 | if p in dest and src[p] != dest[p]: 454 | yield dest[p].drop(), dest[p].create() # Drop 455 | yield src[p].create(), src[p].drop() # Re-add 456 | 457 | 458 | def sync_triggers(fromdb, todb): 459 | src = fromdb.triggers 460 | dest = todb.triggers 461 | 462 | for p, r in sync_created_triggers(src, dest): 463 | yield p, r 464 | 465 | for p, r in sync_dropped_triggers(src, dest): 466 | yield p, r 467 | 468 | for p, r in sync_modified_triggers(src, dest): 469 | yield p, r 470 | 471 | 472 | def sync_created_triggers(src, dest): 473 | for t in src: 474 | if t not in dest: 475 | yield src[t].create(), src[t].drop() 476 | 477 | 478 | def sync_dropped_triggers(src, dest): 479 | for t in dest: 480 | if t not in src: 481 | yield dest[t].drop(), dest[t].create() 482 | 483 | 484 | def sync_modified_triggers(src, dest): 485 | for t in src: 486 | if t in dest and src[t] != dest[t]: 487 | yield dest[t].drop(), dest[t].create() 488 | yield src[t].create(), src[t].drop() 489 | -------------------------------------------------------------------------------- /schemasync/utils.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import sys 4 | if
sys.getdefaultencoding() != 'utf8': 5 | reload(sys) 6 | sys.setdefaultencoding('utf8') 7 | 8 | """Utility functions for Schema Sync""" 9 | 10 | import re 11 | import os 12 | import datetime 13 | import glob 14 | import cStringIO 15 | 16 | # REGEX_NO_TICKS = re.compile('`') 17 | # REGEX_INT_SIZE = re.compile('int\(\d+\)') 18 | REGEX_MULTI_SPACE = re.compile(r'\s\s+') 19 | REGEX_DISTANT_SEMICOLIN = re.compile(r'(\s+;)$') 20 | REGEX_FILE_COUNTER = re.compile(r"_(?P<i>[0-9]+)\.(?:[^.]+)$") 21 | REGEX_TABLE_COMMENT = re.compile(r"COMMENT(?:(?:\s*=\s*)|\s*)'(.*?)'", re.I) 22 | REGEX_TABLE_AUTO_INC = re.compile(r"AUTO_INCREMENT(?:(?:\s*=\s*)|\s*)(\d+)", re.I) 23 | REGEX_SEMICOLON_EXPLODE_TO_NEWLINE = re.compile(r';\s+') 24 | 25 | 26 | def versioned(filename): 27 | """Return the versioned name for a file. 28 | If filename exists, the next available sequence # will be added to it. 29 | file.txt => file_1.txt => file_2.txt => ... 30 | If filename does not exist the original filename is returned. 31 | 32 | Args: 33 | filename: the filename to version (including path to file) 34 | 35 | Returns: 36 | String, New filename. 37 | """ 38 | name, ext = os.path.splitext(filename) 39 | files = glob.glob(name + '*' + ext) 40 | if not files: 41 | return filename 42 | 43 | files = map(lambda x: REGEX_FILE_COUNTER.search(x), files) 44 | file_counters = [i.group('i') for i in files if i] 45 | 46 | if file_counters: 47 | i = max(int(c) for c in file_counters) + 1 48 | else: 49 | i = 1 50 | 51 | return name + ('_%d' % i) + ext 52 | 53 | 54 | def create_pnames(db, tag=None, date_format="%Y%m%d", no_date=False): 55 | """Returns a tuple of the filenames to use to create the migration scripts.
56 | Filename format: <db>[_<tag>].<date>.(patch|revert).sql 57 | 58 | Args: 59 | db: string, database name 60 | tag: string, optional, tag for the filenames 61 | date_format: string, the current date format 62 | Default format: %Y%m%d, e.g. 20090921 63 | no_date: bool 64 | 65 | Returns: 66 | tuple of strings (patch_filename, revert_filename) 67 | """ 68 | d = datetime.datetime.now().strftime(date_format) 69 | if tag: 70 | tag = re.sub('[^A-Za-z0-9_-]', '', tag) 71 | basename = "%s_%s.%s" % (db, tag, d) 72 | elif no_date: 73 | basename = "%s" % (db) 74 | else: 75 | basename = "%s.%s" % (db, d) 76 | 77 | return ("%s.%s" % (basename, "patch.sql"), 78 | "%s.%s" % (basename, "revert.sql")) 79 | 80 | 81 | def compare_version(x, y, separator=r'[.-]'): 82 | """Return negative if version x < y, zero if x == y, positive if x > y. 83 | 84 | Args: 85 | x: string, version x to compare 86 | y: string, version y to compare 87 | separator: regex 88 | 89 | Returns: 90 | integer representing the compare result of version x and y. 91 | """ 92 | x_array = re.split(separator, x) 93 | y_array = re.split(separator, y) 94 | for index in range(min(len(x_array), len(y_array))): 95 | if x_array[index] != y_array[index]: 96 | try: 97 | return cmp(int(x_array[index]), int(y_array[index])) 98 | except ValueError: 99 | return 0 100 | return 0 101 | 102 | 103 | class PatchBuffer(object): 104 | """Class for creating patch files 105 | 106 | Attributes: 107 | name: String, filename to use when saving the patch 108 | filters: List of functions to map to the patch data 109 | tpl: The patch template where the data will be written 110 | All data written to the PatchBuffer is placed in the 111 | template variable %(data)s. 112 | ctx: Dictionary of values to be replaced in the template. 113 | version_filename: Bool, version the filename if it already exists? 114 | modified: Bool (default=False), flag to check if the 115 | PatchBuffer has been written to.
116 | """ 117 | 118 | def __init__(self, name, filters, tpl, ctx, version_filename=False): 119 | """Inits the PatchBuffer class""" 120 | self._buffer = cStringIO.StringIO() 121 | self.name = name 122 | self.filters = filters 123 | self.tpl = tpl 124 | self.ctx = ctx 125 | self.version_filename = version_filename 126 | self.modified = False 127 | 128 | def write(self, data): 129 | """Write data to the buffer.""" 130 | self.modified = True 131 | self._buffer.write(data) 132 | 133 | def save(self): 134 | """Apply filters, template transformations and write buffer to disk""" 135 | data = self._buffer.getvalue() 136 | if not data: 137 | return False 138 | 139 | if self.version_filename: 140 | self.name = versioned(self.name) 141 | fh = open(self.name, 'w') 142 | 143 | for f in self.filters: 144 | data = f(data) 145 | 146 | self.ctx['data'] = data 147 | 148 | fh.write(self.tpl % self.ctx) 149 | fh.close() 150 | 151 | return True 152 | 153 | def delete(self): 154 | """Delete the patch once it has been written to disk""" 155 | if os.path.isfile(self.name): 156 | os.unlink(self.name) 157 | 158 | def __del__(self): 159 | self._buffer.close() 160 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | import ez_setup 4 | 5 | ez_setup.use_setuptools() 6 | 7 | from setuptools import setup 8 | 9 | setup( 10 | name='schema-sync', 11 | version='0.9.5', 12 | description='A MySQL Schema Synchronization Utility', 13 | author='Mitch Matuson, Mustafa Ozgur', 14 | url='https://github.com/hhyo/SchemaSync', 15 | packages=['schemasync'], 16 | install_requires=['schema-object >= 0.5.8'], 17 | entry_points={ 18 | 'console_scripts': [ 19 | 'schemasync = schemasync.schemasync:main', 20 | ] 21 | }, 22 | 23 | keywords=["MySQL", "database", "schema", "migration", "SQL"], 24 | 25 | classifiers=[ 26 | "Environment :: Console", 27
| "Intended Audience :: Information Technology", 28 | "Intended Audience :: System Administrators", 29 | "Intended Audience :: Developers", 30 | "License :: OSI Approved :: Apache Software License", 31 | "Programming Language :: Python", 32 | "Topic :: Database", 33 | "Topic :: Database :: Front-Ends", 34 | "Topic :: Software Development :: Build Tools", 35 | "Topic :: Software Development :: Code Generators", 36 | "Topic :: Utilities", 37 | ], 38 | 39 | long_description="""\ 40 | Schema Sync will generate the SQL necessary to migrate the schema of a source database 41 | to a target database (patch script), as well as the SQL necessary to undo the changes 42 | after you apply them (revert script). 43 | """ 44 | ) 45 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/hhyo/SchemaSync/854220fbd11770c31f382ae6a144bbcf103a09fb/tests/__init__.py -------------------------------------------------------------------------------- /tests/test_all.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | from test_sync_database import TestSyncDatabase 4 | from test_sync_tables import TestSyncTables 5 | from test_sync_columns import TestSyncColumns 6 | from test_sync_constraints import TestSyncConstraints 7 | from test_utils import TestVersioned, TestPNames, TestPatchBuffer 8 | from test_regex import TestTableCommentRegex, TestTableAutoIncrementRegex, TestMultiSpaceRegex, TestFileCounterRegex, TestDistantSemiColonRegex 9 | 10 | def get_database_url(): 11 | database_url = raw_input("\nTests need to be run against the Sakila Database v0.8\n" 12 | "Enter the MySQL Database Connection URL without the database name\n" 13 | "Example: mysql://user:pass@host:port/\n" 14 | "URL: ") 15 | if not database_url.endswith('/'): 16 | database_url += '/' 17 |
return database_url 18 | 19 | def regressionTest(): 20 | test_cases = [ 21 | TestTableCommentRegex, 22 | TestTableAutoIncrementRegex, 23 | TestMultiSpaceRegex, 24 | TestDistantSemiColonRegex, 25 | TestFileCounterRegex, 26 | TestSyncDatabase, 27 | TestSyncTables, 28 | TestSyncColumns, 29 | TestSyncConstraints, 30 | TestVersioned, 31 | TestPNames, 32 | TestPatchBuffer, 33 | ] 34 | database_url = get_database_url() 35 | 36 | suite = unittest.TestSuite() 37 | for tc in test_cases: 38 | tc.database_url = database_url 39 | suite.addTest(unittest.makeSuite(tc)) 40 | return suite 41 | 42 | if __name__ == "__main__": 43 | unittest.main(defaultTest="regressionTest") -------------------------------------------------------------------------------- /tests/test_regex.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import re 4 | from schemasync.utils import REGEX_TABLE_AUTO_INC, REGEX_TABLE_COMMENT 5 | from schemasync.utils import REGEX_MULTI_SPACE, REGEX_DISTANT_SEMICOLIN, REGEX_FILE_COUNTER 6 | 7 | class TestTableCommentRegex(unittest.TestCase): 8 | 9 | def test_single_column_comment_case_insensitive(self): 10 | """Test REGEX_TABLE_COMMENT lowercase (comment '*')""" 11 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL comment 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 12 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 13 | self.assertEqual(count, 1) 14 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 15 | 16 | def 
test_single_column_comment_space_seperator(self): 17 | """Test REGEX_TABLE_COMMENT space seperator (COMMENT '*')""" 18 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 19 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 20 | self.assertEqual(count, 1) 21 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 22 | 23 | def test_single_column_comment_space_seperator_multiple_spaces(self): 24 | """Test REGEX_TABLE_COMMENT multiple spaces as the seperator (COMMENT '*')""" 25 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 26 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 27 | self.assertEqual(count, 1) 28 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 29 | 30 | def test_single_column_comment_equals_seperator(self): 31 | """Test REGEX_TABLE_COMMENT = seperator (COMMENT='*')""" 32 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT='this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT 
'0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 33 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 34 | self.assertEqual(count, 1) 35 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 36 | 37 | def test_single_column_comment_equals_seperator_with_spaces(self): 38 | """Test REGEX_TABLE_COMMENT = seperator surrounded by spaces (COMMENT = '*')""" 39 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT = 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 40 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 41 | self.assertEqual(count, 1) 42 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 43 | 44 | def test_multiple_mixed_seperated_column_comment(self): 45 | """Test REGEX_TABLE_COMMENT multiple column comments (COMMENT '*', COMMENT='*')""" 46 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL COMMENT='this is your last name',`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 47 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 48 | self.assertEqual(count, 2) 49 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` 
varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL ,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 50 | 51 | def test_table_comment(self): 52 | """Test REGEX_TABLE_COMMENT Table comment""" 53 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB COMMENT 'table comment' DEFAULT CHARSET=utf8""" 54 | (sql, count) = re.subn(REGEX_TABLE_COMMENT, '', table) 55 | self.assertEqual(count, 1) 56 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 57 | 58 | def test_multiple_column_comment_with_table_comment(self): 59 | """Test REGEX_TABLE_COMMENT multiple column comments and the Table comment""" 60 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL COMMENT='this is your last name',`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB COMMENT 'table comment' DEFAULT CHARSET=utf8""" 61 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 62 | self.assertEqual(count, 3) 63 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL ,`last_name` varchar(100) NOT NULL ,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 64 | 65 | def test_no_comments(self): 66 | """Test REGEX_TABLE_COMMENT no comments""" 67 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL 
AUTO_INCREMENT, `first_name` varchar(100) NOT NULL,`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""" 68 | (sql, count) = REGEX_TABLE_COMMENT.subn('', table) 69 | self.assertEqual(count, 0) 70 | self.assertEqual(sql, table) 71 | 72 | 73 | class TestTableAutoIncrementRegex(unittest.TestCase): 74 | 75 | def test_auto_inc_regex_space_seperator(self): 76 | """Test REGEX_TABLE_AUTO_INC table option AUTO_INCREMENT 1""" 77 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) AUTO_INCREMENT 1 ENGINE=InnoDB DEFAULT CHARSET=utf8""" 78 | (sql, count) = REGEX_TABLE_AUTO_INC.subn('', table) 79 | self.assertEqual(count, 1) 80 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 81 | 82 | def test_auto_inc_regex_space_seperator_with_multiple_spaces(self): 83 | """Test REGEX_TABLE_AUTO_INC table option AUTO_INCREMENT 1""" 84 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) AUTO_INCREMENT 1 ENGINE=InnoDB DEFAULT CHARSET=utf8""" 85 | (sql, count) = REGEX_TABLE_AUTO_INC.subn('', table) 86 | self.assertEqual(count, 1) 87 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` 
varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 88 | 89 | def test_auto_inc_regex_equals_seperator(self): 90 | """Test REGEX_TABLE_AUTO_INC table option AUTO_INCREMENT=1""" 91 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) AUTO_INCREMENT=1 ENGINE=InnoDB DEFAULT CHARSET=utf8""" 92 | (sql, count) = REGEX_TABLE_AUTO_INC.subn('', table) 93 | self.assertEqual(count, 1) 94 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 95 | 96 | def test_auto_inc_regex_equals_seperator_with_spaces(self): 97 | """Test REGEX_TABLE_AUTO_INC table option AUTO_INCREMENT = 1""" 98 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) AUTO_INCREMENT = 1 ENGINE=InnoDB DEFAULT CHARSET=utf8""" 99 | (sql, count) = REGEX_TABLE_AUTO_INC.subn('', table) 100 | self.assertEqual(count, 1) 101 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 102 | 103 | def test_auto_inc_regex_case_insensitive(self): 104 | """Test REGEX_TABLE_AUTO_INC table option 
auto_increment=1""" 105 | table = """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) auto_increment=1 ENGINE=InnoDB DEFAULT CHARSET=utf8""" 106 | (sql, count) = REGEX_TABLE_AUTO_INC.subn('', table) 107 | self.assertEqual(count, 1) 108 | self.assertEqual(sql, """CREATE TABLE `person` (`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `first_name` varchar(100) NOT NULL COMMENT 'this is your first name',`last_name` varchar(100) NOT NULL,`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8""") 109 | 110 | 111 | class TestMultiSpaceRegex(unittest.TestCase): 112 | 113 | def test_multiple_spaces_in_string(self): 114 | """Test REGEX_MULTI_SPACE in string""" 115 | s = "hello, world." 116 | matches = REGEX_MULTI_SPACE.findall(s) 117 | self.assertTrue(matches) 118 | self.assertEqual(matches, [' ' * 2]) 119 | 120 | def test_multiple_spaces_leading_string(self): 121 | """Test REGEX_MULTI_SPACE leading string""" 122 | s = " hello, world." 123 | matches = REGEX_MULTI_SPACE.findall(s) 124 | self.assertTrue(matches) 125 | self.assertEqual(matches, [' ' * 5]) 126 | 127 | def test_multiple_spaces_trailing_string(self): 128 | """Test REGEX_MULTI_SPACE trailing string""" 129 | s = "hello, world. " 130 | matches = REGEX_MULTI_SPACE.findall(s) 131 | self.assertTrue(matches) 132 | self.assertEqual(matches, [' ' * 3]) 133 | 134 | def test_no_match(self): 135 | """Test REGEX_MULTI_SPACE no match""" 136 | s = "hello, world." 
137 | matches = REGEX_MULTI_SPACE.findall(s) 138 | self.assertFalse(matches) 139 | 140 | 141 | class TestDistantSemiColonRegex(unittest.TestCase): 142 | 143 | def test_single_space(self): 144 | """Test REGEX_DISTANT_SEMICOLIN with single space""" 145 | s = "CREATE DATABASE foobar ;" 146 | matches = REGEX_DISTANT_SEMICOLIN.search(s) 147 | self.assertTrue(matches) 148 | 149 | def test_multiple_spaces(self): 150 | """Test REGEX_DISTANT_SEMICOLIN with multiple spaces""" 151 | s = "CREATE DATABASE foobar ;" 152 | matches = REGEX_DISTANT_SEMICOLIN.search(s) 153 | self.assertTrue(matches) 154 | 155 | def test_tabs(self): 156 | """Test REGEX_DISTANT_SEMICOLIN with tabs""" 157 | s = "CREATE DATABASE foobar ;" 158 | matches = REGEX_DISTANT_SEMICOLIN.search(s) 159 | self.assertTrue(matches) 160 | 161 | def test_newline(self): 162 | """Test REGEX_DISTANT_SEMICOLIN with newline""" 163 | s = """CREATE DATABASE foobar 164 | ;""" 165 | matches = REGEX_DISTANT_SEMICOLIN.search(s) 166 | self.assertTrue(matches) 167 | 168 | def test_ignore_in_string(self): 169 | """Test REGEX_DISTANT_SEMICOLIN ignore when in string""" 170 | s = """ALTER TABLE `foo` COMMENT 'hello ;' ;""" 171 | matches = REGEX_DISTANT_SEMICOLIN.findall(s) 172 | self.assertEqual(len(matches), 1) 173 | self.assertEqual(matches, [' ;']) 174 | 175 | s = """ALTER TABLE `foo` COMMENT 'hello ;';""" 176 | matches = REGEX_DISTANT_SEMICOLIN.findall(s) 177 | self.assertFalse(matches) 178 | 179 | def test_no_match(self): 180 | """Test REGEX_DISTANT_SEMICOLIN with no spaces""" 181 | s = "CREATE DATABASE foobar;" 182 | matches = REGEX_DISTANT_SEMICOLIN.search(s) 183 | self.assertFalse(matches) 184 | 185 | class TestFileCounterRegex(unittest.TestCase): 186 | 187 | def test_valid_numeric_matches_zero(self): 188 | """ Test REGEX_FILE_COUNTER valid numeric match 0""" 189 | test_str = "file_0.txt" 190 | matches = REGEX_FILE_COUNTER.search(test_str) 191 | self.assertTrue(matches) 192 | self.assertEqual(matches.group('i'), '0') 193 | 194 | def
test_valid_numeric_matches_single_digit(self): 195 | """ Test REGEX_FILE_COUNTER valid numeric match 1 digit""" 196 | test_str = "file_8.txt" 197 | matches = REGEX_FILE_COUNTER.search(test_str) 198 | self.assertTrue(matches) 199 | self.assertEqual(matches.group('i'), '8') 200 | 201 | def test_valid_numeric_matches_two_digits(self): 202 | """ Test REGEX_FILE_COUNTER valid numeric match 2 digits""" 203 | test_str = "file_16.txt" 204 | matches = REGEX_FILE_COUNTER.search(test_str) 205 | self.assertTrue(matches) 206 | self.assertEqual(matches.group('i'), '16') 207 | 208 | def test_valid_numeric_matches_three_digit(self): 209 | """ Test REGEX_FILE_COUNTER valid numeric match 3 digits""" 210 | test_str = "file_256.txt" 211 | matches = REGEX_FILE_COUNTER.search(test_str) 212 | self.assertTrue(matches) 213 | self.assertEqual(matches.group('i'), '256') 214 | 215 | def test_valid_numeric_matches_four_digit(self): 216 | """ Test REGEX_FILE_COUNTER valid numeric match 4 digits""" 217 | test_str = "file_1024.txt" 218 | matches = REGEX_FILE_COUNTER.search(test_str) 219 | self.assertTrue(matches) 220 | self.assertEqual(matches.group('i'), '1024') 221 | 222 | def test_sequence_simple(self): 223 | """ Test REGEX_FILE_COUNTER simplest valid sequence""" 224 | test_str = "_1.txt" 225 | matches = REGEX_FILE_COUNTER.search(test_str) 226 | self.assertTrue(matches) 227 | self.assertEqual(matches.group('i'), '1') 228 | 229 | def test_sequence_repeated(self): 230 | """ Test REGEX_FILE_COUNTER repeated in sequence""" 231 | test_str = "hello_1._3.txt" 232 | matches = REGEX_FILE_COUNTER.findall(test_str) 233 | self.assertEqual(len(matches), 1) 234 | self.assertEqual(matches, ['3',]) 235 | 236 | def test_sequence_underscore_ext(self): 237 | """ Test REGEX_FILE_COUNTER extension with underscore""" 238 | test_str = "hello_3._txt" 239 | matches = REGEX_FILE_COUNTER.findall(test_str) 240 | self.assertEqual(len(matches), 1) 241 | self.assertEqual(matches, ['3',]) 242 | 243 | def
test_sequence_numeric_ext_with_underscore(self): 244 | """ Test REGEX_FILE_COUNTER numeric extension with underscore""" 245 | test_str = "hello_3._123" 246 | matches = REGEX_FILE_COUNTER.findall(test_str) 247 | self.assertEqual(len(matches), 1) 248 | self.assertEqual(matches, ['3',]) 249 | 250 | def test_no_match_invlaid_extention(self): 251 | """Test REGEX_FILE_COUNTER no match: invalid extension""" 252 | test_str = "_1." 253 | matches = REGEX_FILE_COUNTER.search(test_str) 254 | self.assertFalse(matches) 255 | 256 | def test_no_match_missing_sequence(self): 257 | """Test REGEX_FILE_COUNTER no match: missing sequence""" 258 | test_str = "file.txt" 259 | matches = REGEX_FILE_COUNTER.search(test_str) 260 | self.assertFalse(matches) 261 | 262 | def test_no_match_invalid_sequence(self): 263 | """Test REGEX_FILE_COUNTER no match: invalid sequence""" 264 | test_str = "file1.txt" 265 | matches = REGEX_FILE_COUNTER.search(test_str) 266 | self.assertFalse(matches) 267 | 268 | def test_no_match_sequence_not_at_end(self): 269 | """Test REGEX_FILE_COUNTER no match: sequence must be before extension""" 270 | test_str = "hello_3.x_x.txt" 271 | matches = REGEX_FILE_COUNTER.findall(test_str) 272 | self.assertFalse(matches) 273 | 274 | if __name__ == '__main__': 275 | unittest.main() -------------------------------------------------------------------------------- /tests/test_sync_columns.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import schemaobject 4 | from schemasync import syncdb 5 | 6 | class TestSyncColumns(unittest.TestCase): 7 | 8 | def setUp(self): 9 | self.schema = schemaobject.SchemaObject(self.database_url + 'sakila') 10 | self.src = self.schema.selected.tables['rental'] 11 | 12 | self.schema2 = schemaobject.SchemaObject(self.database_url + 'sakila') 13 | self.dest = self.schema2.selected.tables['rental'] 14 | 15 | def test_get_previous_item(self): 16 | """Test get previous item from
Column list""" 17 | lst = ['bobby tables', 'jack', 'jill'] 18 | self.assertEqual('jack', syncdb.get_previous_item(lst, 'jill')) 19 | self.assertEqual('bobby tables', syncdb.get_previous_item(lst, 'jack')) 20 | self.assertEqual(None, syncdb.get_previous_item(lst, 'bobby tables')) 21 | self.assertEqual(None, syncdb.get_previous_item(lst, 'jeff')) 22 | 23 | def test_sync_created_column(self): 24 | """Test: src table has columns not in dest table (ignore Column COMMENT)""" 25 | saved = self.dest.columns['staff_id'] 26 | pos = self.dest.columns.index('staff_id') 27 | del self.dest.columns['staff_id'] 28 | 29 | for i, (p,r) in enumerate(syncdb.sync_created_columns(self.src.columns, self.dest.columns, sync_comments=False)): 30 | self.assertEqual(p, "ADD COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `return_date`") 31 | self.assertEqual(r, "DROP COLUMN `staff_id`") 32 | 33 | self.assertEqual(i, 0) 34 | 35 | def test_sync_created_column_with_comments(self): 36 | """Test: src table has columns not in dest table (include Column COMMENT)""" 37 | saved = self.dest.columns['staff_id'] 38 | pos = self.dest.columns.index('staff_id') 39 | del self.dest.columns['staff_id'] 40 | 41 | self.src.columns['staff_id'].comment = "hello world" 42 | 43 | for i, (p,r) in enumerate(syncdb.sync_created_columns(self.src.columns, self.dest.columns, sync_comments=True)): 44 | self.assertEqual(p, "ADD COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL COMMENT 'hello world' AFTER `return_date`") 45 | self.assertEqual(r, "DROP COLUMN `staff_id`") 46 | 47 | self.assertEqual(i, 0) 48 | 49 | def test_sync_dropped_column(self): 50 | """Test: dest table has columns not in src table (ignore Column COMMENT)""" 51 | saved = self.src.columns['staff_id'] 52 | pos = self.src.columns.index('staff_id') 53 | del self.src.columns['staff_id'] 54 | 55 | for i, (p,r) in enumerate(syncdb.sync_dropped_columns(self.src.columns, self.dest.columns, sync_comments=False)): 56 | self.assertEqual(p, "DROP COLUMN `staff_id`") 
57 | self.assertEqual(r, "ADD COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `return_date`") 58 | 59 | self.assertEqual(i, 0) 60 | 61 | def test_sync_dropped_column_with_comment(self): 62 | """Test: dest table has columns not in src table (include Column COMMENT)""" 63 | saved = self.src.columns['staff_id'] 64 | pos = self.src.columns.index('staff_id') 65 | del self.src.columns['staff_id'] 66 | 67 | self.dest.columns['staff_id'].comment = "hello world" 68 | 69 | for i, (p,r) in enumerate(syncdb.sync_dropped_columns(self.src.columns, self.dest.columns, sync_comments=True)): 70 | self.assertEqual(p, "DROP COLUMN `staff_id`") 71 | self.assertEqual(r, "ADD COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL COMMENT 'hello world' AFTER `return_date`") 72 | 73 | self.assertEqual(i, 0) 74 | 75 | def test_sync_modified_column(self): 76 | """Test: column in src table have been modified in dest table (ignore Column COMMENT)""" 77 | self.dest.columns['rental_date'].type = "TEXT" 78 | self.dest.columns['rental_date'].null = True 79 | self.dest.columns['rental_date'].comment = "hello world" 80 | 81 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=False)): 82 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 83 | self.assertEqual(r, "MODIFY COLUMN `rental_date` TEXT NULL AFTER `rental_id`") 84 | 85 | self.assertEqual(i, 0) 86 | 87 | def test_sync_multiple_modified_columns(self): 88 | """Test: multiple columns in src table have been modified in dest table (ignore Column COMMENT)""" 89 | self.dest.columns['rental_date'].type = "TEXT" 90 | self.dest.columns['rental_date'].null = True 91 | self.dest.columns['rental_date'].comment = "hello world" 92 | self.dest.columns['return_date'].type = "TIMESTAMP" 93 | 94 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=False)): 95 | if i == 0: 96 | self.assertEqual(p, "MODIFY COLUMN 
`rental_date` DATETIME NOT NULL AFTER `rental_id`") 97 | self.assertEqual(r, "MODIFY COLUMN `rental_date` TEXT NULL AFTER `rental_id`") 98 | if i == 1: 99 | self.assertEqual(p, "MODIFY COLUMN `return_date` DATETIME NULL AFTER `customer_id`") 100 | self.assertEqual(r, "MODIFY COLUMN `return_date` TIMESTAMP NULL AFTER `customer_id`") 101 | 102 | self.assertEqual(i, 1) 103 | 104 | def test_sync_modified_column_with_comments(self): 105 | """Test: columns in src table have been modified in dest table (include Column COMMENT)""" 106 | self.dest.columns['rental_date'].type = "TEXT" 107 | self.dest.columns['rental_date'].null = True 108 | self.dest.columns['rental_date'].comment = "hello world" 109 | 110 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 111 | self.assertEqual(r, "MODIFY COLUMN `rental_date` TEXT NULL COMMENT 'hello world' AFTER `rental_id`") 112 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 113 | 114 | self.assertEqual(i, 0) 115 | 116 | def test_move_col_to_end_in_dest(self): 117 | """Move a column in the dest table towards the end of the column list""" 118 | 119 | tmp = self.dest.columns._sequence[1] 120 | self.dest.columns._sequence.remove(tmp) 121 | self.dest.columns._sequence.insert(5, tmp) 122 | 123 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 124 | if i == 0: 125 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 126 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `staff_id`") 127 | 128 | self.assertEqual(i, 0) 129 | 130 | def test_move_col_to_beg_in_dest(self): 131 | """Move a column in the dest table towards the beginning of the column list""" 132 | 133 | tmp = self.dest.columns._sequence[4] 134 | self.dest.columns._sequence.remove(tmp) 135 | self.dest.columns._sequence.insert(1, tmp) 136 | 137 | for i, (p,r)
in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 138 | if i == 0: 139 | self.assertEqual(p, "MODIFY COLUMN `return_date` DATETIME NULL AFTER `customer_id`") 140 | self.assertEqual(r, "MODIFY COLUMN `return_date` DATETIME NULL AFTER `rental_id`") 141 | 142 | self.assertEqual(i, 0) 143 | 144 | def test_swap_two_cols_in_dest(self): 145 | """Swap the position of 2 columns in the dest table""" 146 | 147 | self.dest.columns._sequence[1], self.dest.columns._sequence[5] = self.dest.columns._sequence[5], self.dest.columns._sequence[1] 148 | 149 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 150 | if i == 0: 151 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 152 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `return_date`") 153 | if i == 1: 154 | self.assertEqual(p, "MODIFY COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `return_date`") 155 | self.assertEqual(r, "MODIFY COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `rental_id`") 156 | 157 | self.assertEqual(i, 1) 158 | 159 | def test_swap_pairs_of_cols_in_dest(self): 160 | """Swap the position of 2 pairs of columns in the dest table""" 161 | 162 | a,b = self.dest.columns._sequence[1], self.dest.columns._sequence[2] 163 | self.dest.columns._sequence[1], self.dest.columns._sequence[2] = self.dest.columns._sequence[4], self.dest.columns._sequence[5] 164 | self.dest.columns._sequence[4], self.dest.columns._sequence[5] = a,b 165 | 166 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 167 | if i == 0: 168 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 169 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `customer_id`") 170 | if i == 1: 171 | self.assertEqual(p, "MODIFY COLUMN `inventory_id` 
MEDIUMINT(8) UNSIGNED NOT NULL AFTER `rental_date`") 172 | self.assertEqual(r, "MODIFY COLUMN `inventory_id` MEDIUMINT(8) UNSIGNED NOT NULL AFTER `rental_date`") 173 | if i == 2: 174 | self.assertEqual(p, "MODIFY COLUMN `customer_id` SMALLINT(5) UNSIGNED NOT NULL AFTER `inventory_id`") 175 | self.assertEqual(r, "MODIFY COLUMN `customer_id` SMALLINT(5) UNSIGNED NOT NULL AFTER `staff_id`") 176 | 177 | self.assertEqual(i, 2) 178 | 179 | def test_move_3_cols_in_dest(self): 180 | """Move around 3 columns in the dest table""" 181 | 182 | self.dest.columns._sequence[0], self.dest.columns._sequence[3] = self.dest.columns._sequence[3], self.dest.columns._sequence[0] 183 | tmp = self.dest.columns._sequence[1] 184 | self.dest.columns._sequence.remove(tmp) 185 | self.dest.columns._sequence.insert(2, tmp) 186 | 187 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 188 | if i == 0: 189 | self.assertEqual(p, "MODIFY COLUMN `rental_id` INT(11) NOT NULL auto_increment FIRST") 190 | self.assertEqual(r, "MODIFY COLUMN `rental_id` INT(11) NOT NULL auto_increment AFTER `rental_date`") 191 | if i == 1: 192 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 193 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `inventory_id`") 194 | if i == 2: 195 | self.assertEqual(p, "MODIFY COLUMN `inventory_id` MEDIUMINT(8) UNSIGNED NOT NULL AFTER `rental_date`") 196 | self.assertEqual(r, "MODIFY COLUMN `inventory_id` MEDIUMINT(8) UNSIGNED NOT NULL AFTER `customer_id`") 197 | 198 | self.assertEqual(i, 2) 199 | 200 | 201 | def test_move_col_to_end_in_src(self): 202 | """Move a column in the src table towards the end of the column list""" 203 | 204 | tmp = self.src.columns._sequence[1] 205 | self.src.columns._sequence.remove(tmp) 206 | self.src.columns._sequence.insert(5, tmp) 207 | 208 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns,
self.dest.columns, sync_comments=True)): 209 | if i == 0: 210 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `staff_id`") 211 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 212 | 213 | self.assertEqual(i, 0) 214 | 215 | def test_move_col_to_beg_in_src(self): 216 | """Move a column in the src table towards the beginning of the column list""" 217 | 218 | tmp = self.src.columns._sequence[4] 219 | self.src.columns._sequence.remove(tmp) 220 | self.src.columns._sequence.insert(1, tmp) 221 | 222 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 223 | if i == 0: 224 | self.assertEqual(p, "MODIFY COLUMN `return_date` DATETIME NULL AFTER `rental_id`") 225 | self.assertEqual(r, "MODIFY COLUMN `return_date` DATETIME NULL AFTER `customer_id`") 226 | 227 | self.assertEqual(i, 0) 228 | 229 | def test_swap_two_cols_in_src(self): 230 | """Swap the position of 2 columns in the src table""" 231 | 232 | self.src.columns._sequence[1], self.src.columns._sequence[5] = self.src.columns._sequence[5], self.src.columns._sequence[1] 233 | 234 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 235 | if i == 0: 236 | self.assertEqual(p, "MODIFY COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `rental_id`") 237 | self.assertEqual(r, "MODIFY COLUMN `staff_id` TINYINT(3) UNSIGNED NOT NULL AFTER `return_date`") 238 | 239 | if i == 1: 240 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `return_date`") 241 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 242 | 243 | 244 | self.assertEqual(i, 1) 245 | 246 | def test_move_3_cols_in_src(self): 247 | """Move around 3 columns in the src table""" 248 | 249 | self.src.columns._sequence[0], self.src.columns._sequence[3] = self.src.columns._sequence[3], self.src.columns._sequence[0] 250 |
tmp = self.src.columns._sequence[1] 251 | self.src.columns._sequence.remove(tmp) 252 | self.src.columns._sequence.insert(2, tmp) 253 | 254 | for i, (p,r) in enumerate(syncdb.sync_modified_columns(self.src.columns, self.dest.columns, sync_comments=True)): 255 | if i == 0: 256 | self.assertEqual(p, "MODIFY COLUMN `customer_id` SMALLINT(5) UNSIGNED NOT NULL FIRST") 257 | self.assertEqual(r, "MODIFY COLUMN `customer_id` SMALLINT(5) UNSIGNED NOT NULL AFTER `inventory_id`") 258 | 259 | if i == 1: 260 | self.assertEqual(p, "MODIFY COLUMN `inventory_id` MEDIUMINT(8) UNSIGNED NOT NULL AFTER `customer_id`") 261 | self.assertEqual(r, "MODIFY COLUMN `inventory_id` MEDIUMINT(8) UNSIGNED NOT NULL AFTER `rental_date`") 262 | 263 | if i == 2: 264 | self.assertEqual(p, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `inventory_id`") 265 | self.assertEqual(r, "MODIFY COLUMN `rental_date` DATETIME NOT NULL AFTER `rental_id`") 266 | 267 | self.assertEqual(i, 2) 268 | if __name__ == "__main__": 269 | from test_all import get_database_url 270 | TestSyncColumns.database_url = get_database_url() 271 | unittest.main() -------------------------------------------------------------------------------- /tests/test_sync_constraints.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import schemaobject 4 | from schemasync import syncdb 5 | 6 | class TestSyncConstraints(unittest.TestCase): 7 | 8 | def setUp(self): 9 | 10 | self.schema = schemaobject.SchemaObject(self.database_url + 'sakila') 11 | self.src = self.schema.selected.tables['rental'] 12 | 13 | self.schema2 = schemaobject.SchemaObject(self.database_url + 'sakila') 14 | self.dest = self.schema2.selected.tables['rental'] 15 | 16 | def test_sync_created_index(self): 17 | """Test: src table has indexes not in dest table""" 18 | saved = self.dest.indexes['idx_fk_customer_id'] 19 | pos = self.dest.indexes.index('idx_fk_customer_id') 20 | del 
self.dest.indexes['idx_fk_customer_id'] 21 | 22 | for i, (p,r) in enumerate(syncdb.sync_created_constraints(self.src.indexes, self.dest.indexes)): 23 | self.assertEqual(p, "ADD INDEX `idx_fk_customer_id` (`customer_id`) USING BTREE") 24 | self.assertEqual(r, "DROP INDEX `idx_fk_customer_id`") 25 | 26 | self.assertEqual(i, 0) 27 | 28 | def test_sync_dropped_index(self): 29 | """Test: dest table has indexes not in src table""" 30 | saved = self.src.indexes['idx_fk_customer_id'] 31 | pos = self.dest.indexes.index('idx_fk_customer_id') 32 | del self.src.indexes['idx_fk_customer_id'] 33 | 34 | for i, (p,r) in enumerate(syncdb.sync_dropped_constraints(self.src.indexes, self.dest.indexes)): 35 | self.assertEqual(p, "DROP INDEX `idx_fk_customer_id`") 36 | self.assertEqual(r, "ADD INDEX `idx_fk_customer_id` (`customer_id`) USING BTREE") 37 | 38 | self.assertEqual(i, 0) 39 | 40 | def test_sync_modified_index(self): 41 | """Test: src table has indexes modified in dest table""" 42 | self.dest.indexes['idx_fk_customer_id'].kind = "UNIQUE" 43 | self.dest.indexes['idx_fk_customer_id'].fields = [('inventory_id', 0)] 44 | 45 | for i, (p,r) in enumerate(syncdb.sync_modified_constraints(self.src.indexes, self.dest.indexes)): 46 | if i==0: 47 | self.assertEqual(p, "DROP INDEX `idx_fk_customer_id`") 48 | self.assertEqual(r, "DROP INDEX `idx_fk_customer_id`") 49 | if i==1: 50 | self.assertEqual(p, "ADD INDEX `idx_fk_customer_id` (`customer_id`) USING BTREE") 51 | self.assertEqual(r, "ADD UNIQUE INDEX `idx_fk_customer_id` (`inventory_id`) USING BTREE") 52 | 53 | self.assertEqual(i, 1) 54 | 55 | def test_sync_created_fk(self): 56 | """Test: src table has foreign keys not in dest table""" 57 | saved = self.dest.foreign_keys['fk_rental_customer'] 58 | pos = self.dest.foreign_keys.index('fk_rental_customer') 59 | del self.dest.foreign_keys['fk_rental_customer'] 60 | 61 | for i, (p,r) in enumerate(syncdb.sync_created_constraints(self.src.foreign_keys, self.dest.foreign_keys)): 62 | 
self.assertEqual(p, "ADD CONSTRAINT `fk_rental_customer` FOREIGN KEY `fk_rental_customer` (`customer_id`) REFERENCES `customer` (`customer_id`) ON DELETE RESTRICT ON UPDATE CASCADE") 63 | self.assertEqual(r, "DROP FOREIGN KEY `fk_rental_customer`") 64 | 65 | self.assertEqual(i, 0) 66 | 67 | def test_sync_dropped_fk(self): 68 | """Test: dest table has foreign keys not in src table""" 69 | saved = self.src.foreign_keys['fk_rental_customer'] 70 | pos = self.dest.foreign_keys.index('fk_rental_customer') 71 | del self.src.foreign_keys['fk_rental_customer'] 72 | 73 | for i, (p,r) in enumerate(syncdb.sync_dropped_constraints(self.src.foreign_keys, self.dest.foreign_keys)): 74 | self.assertEqual(p, "DROP FOREIGN KEY `fk_rental_customer`") 75 | self.assertEqual(r, "ADD CONSTRAINT `fk_rental_customer` FOREIGN KEY `fk_rental_customer` (`customer_id`) REFERENCES `customer` (`customer_id`) ON DELETE RESTRICT ON UPDATE CASCADE") 76 | 77 | self.assertEqual(i, 0) 78 | 79 | def test_sync_modified_fk(self): 80 | """Test: src table has foreign keys modified in dest table""" 81 | self.dest.foreign_keys['fk_rental_customer'].delete_rule = "SET NULL" 82 | 83 | for i, (p,r) in enumerate(syncdb.sync_modified_constraints(self.src.foreign_keys, self.dest.foreign_keys)): 84 | if i==0: 85 | self.assertEqual(p, "DROP FOREIGN KEY `fk_rental_customer`") 86 | self.assertEqual(r, "DROP FOREIGN KEY `fk_rental_customer`") 87 | if i==1: 88 | self.assertEqual(p, "ADD CONSTRAINT `fk_rental_customer` FOREIGN KEY `fk_rental_customer` (`customer_id`) REFERENCES `customer` (`customer_id`) ON DELETE RESTRICT ON UPDATE CASCADE") 89 | self.assertEqual(r, "ADD CONSTRAINT `fk_rental_customer` FOREIGN KEY `fk_rental_customer` (`customer_id`) REFERENCES `customer` (`customer_id`) ON DELETE SET NULL ON UPDATE CASCADE") 90 | 91 | self.assertEqual(i, 1) 92 | 93 | if __name__ == "__main__": 94 | from test_all import get_database_url 95 | TestSyncConstraints.database_url = get_database_url() 96 | unittest.main() 
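Every `sync_*` helper exercised in these tests follows the same contract: it is a generator that yields `(patch, revert)` pairs of SQL fragments, so the patch script and its undo script are built in lockstep. A minimal self-contained sketch of that contract, using plain dicts in place of real SchemaObject collections (`sync_created_keys` is a hypothetical illustration, not part of schemasync):

```python
# Illustrative sketch only: sync_created_keys and the dict-based "schema"
# below are stand-ins, not real schemasync or schemaobject APIs.

def sync_created_keys(src, dest):
    """Yield (patch, revert) SQL pairs for indexes in src but missing from dest."""
    for name, columns in src.items():
        if name not in dest:
            patch = "ADD INDEX `%s` (%s)" % (name, columns)
            revert = "DROP INDEX `%s`" % name
            yield (patch, revert)

src = {'idx_fk_customer_id': '`customer_id`'}
dest = {}
pairs = list(sync_created_keys(src, dest))
print(pairs)
```

Because each patch statement is emitted together with its inverse, the patch and revert scripts described in the README can be written in a single pass over the schema differences.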
-------------------------------------------------------------------------------- /tests/test_sync_database.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import schemaobject 4 | from schemasync import syncdb 5 | 6 | class TestSyncDatabase(unittest.TestCase): 7 | 8 | def setUp(self): 9 | self.schema = schemaobject.SchemaObject(self.database_url + 'sakila') 10 | self.src = self.schema.selected 11 | 12 | self.schema2 = schemaobject.SchemaObject(self.database_url + 'sakila') 13 | self.dest = self.schema2.selected 14 | 15 | def test_database_options(self): 16 | """Test: src and dest database options are different""" 17 | self.src.options['charset'].value = "utf8" 18 | self.src.options['collation'].value = "utf8_general_ci" 19 | 20 | p,r = syncdb.sync_database_options(self.src, self.dest) 21 | self.assertEqual(p, "CHARACTER SET=utf8 COLLATE=utf8_general_ci") 22 | self.assertEqual(r, "CHARACTER SET=latin1 COLLATE=latin1_swedish_ci") 23 | 24 | 25 | if __name__ == "__main__": 26 | from test_all import get_database_url 27 | TestSyncDatabase.database_url = get_database_url() 28 | unittest.main() -------------------------------------------------------------------------------- /tests/test_sync_tables.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import schemaobject 4 | from schemasync import syncdb 5 | 6 | class TestSyncTables(unittest.TestCase): 7 | 8 | def setUp(self): 9 | self.schema = schemaobject.SchemaObject(self.database_url + 'sakila') 10 | self.src = self.schema.selected 11 | 12 | self.schema2 = schemaobject.SchemaObject(self.database_url + 'sakila') 13 | self.dest = self.schema2.selected 14 | 15 | 16 | def test_created_tables(self): 17 | """Test: src db has tables not in dest db""" 18 | saved = self.dest.tables['rental'] 19 | pos = self.dest.tables.index('rental') 20 | del self.dest.tables['rental'] 21 
| 22 | for i, (p, r) in enumerate(syncdb.sync_created_tables(self.src.tables, self.dest.tables)): 23 | self.assertEqual(p, "CREATE TABLE `rental` ( `rental_id` int(11) NOT NULL AUTO_INCREMENT, `rental_date` datetime NOT NULL, `inventory_id` mediumint(8) unsigned NOT NULL, `customer_id` smallint(5) unsigned NOT NULL, `return_date` datetime DEFAULT NULL, `staff_id` tinyint(3) unsigned NOT NULL, `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`rental_id`), UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`), KEY `idx_fk_inventory_id` (`inventory_id`), KEY `idx_fk_staff_id` (`staff_id`), KEY `idx_fk_customer_id` (`customer_id`) USING BTREE, CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE) ENGINE=InnoDB DEFAULT CHARSET=utf8;") 24 | self.assertEqual(r, "DROP TABLE `rental`;") 25 | 26 | self.assertEqual(i, 0) 27 | 28 | def test_created_tables_strip_auto_increment(self): 29 | """Test: src db has tables not in dest db (strip table option auto_increment)""" 30 | saved = self.dest.tables['rental'] 31 | pos = self.dest.tables.index('rental') 32 | del self.dest.tables['rental'] 33 | 34 | for i, (p, r) in enumerate(syncdb.sync_created_tables(self.src.tables, self.dest.tables, sync_auto_inc=True)): 35 | self.assertEqual(p, "CREATE TABLE `rental` ( `rental_id` int(11) NOT NULL AUTO_INCREMENT, `rental_date` datetime NOT NULL, `inventory_id` mediumint(8) unsigned NOT NULL, `customer_id` smallint(5) unsigned NOT NULL, `return_date` datetime DEFAULT NULL, `staff_id` tinyint(3) unsigned NOT NULL, `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`rental_id`), UNIQUE KEY 
`rental_date` (`rental_date`,`inventory_id`,`customer_id`), KEY `idx_fk_inventory_id` (`inventory_id`), KEY `idx_fk_staff_id` (`staff_id`), KEY `idx_fk_customer_id` (`customer_id`) USING BTREE, CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE) ENGINE=InnoDB DEFAULT CHARSET=utf8;") 36 | self.assertEqual(r, "DROP TABLE `rental`;") 37 | 38 | self.assertEqual(i, 0) 39 | 40 | def test_created_tables_strip_comments(self): 41 | """Test: src db has tables not in dest db (strip comments)""" 42 | saved = self.dest.tables['rental'] 43 | pos = self.dest.tables.index('rental') 44 | del self.dest.tables['rental'] 45 | 46 | for i, (p, r) in enumerate(syncdb.sync_created_tables(self.src.tables, self.dest.tables, sync_comments=True)): 47 | self.assertEqual(p, "CREATE TABLE `rental` ( `rental_id` int(11) NOT NULL AUTO_INCREMENT, `rental_date` datetime NOT NULL, `inventory_id` mediumint(8) unsigned NOT NULL, `customer_id` smallint(5) unsigned NOT NULL, `return_date` datetime DEFAULT NULL, `staff_id` tinyint(3) unsigned NOT NULL, `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`rental_id`), UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`), KEY `idx_fk_inventory_id` (`inventory_id`), KEY `idx_fk_staff_id` (`staff_id`), KEY `idx_fk_customer_id` (`customer_id`) USING BTREE, CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE) ENGINE=InnoDB 
DEFAULT CHARSET=utf8;") 48 | self.assertEqual(r, "DROP TABLE `rental`;") 49 | 50 | self.assertEqual(i, 0) 51 | 52 | def test_dropped_tables(self): 53 | """Test: dest db has tables not in src db""" 54 | saved = self.src.tables['rental'] 55 | pos = self.src.tables.index('rental') 56 | del self.src.tables['rental'] 57 | 58 | for i, (p, r) in enumerate(syncdb.sync_dropped_tables(self.src.tables, self.dest.tables)): 59 | self.assertEqual(p, "DROP TABLE `rental`;") 60 | self.assertEqual(r, "CREATE TABLE `rental` ( `rental_id` int(11) NOT NULL AUTO_INCREMENT, `rental_date` datetime NOT NULL, `inventory_id` mediumint(8) unsigned NOT NULL, `customer_id` smallint(5) unsigned NOT NULL, `return_date` datetime DEFAULT NULL, `staff_id` tinyint(3) unsigned NOT NULL, `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`rental_id`), UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`), KEY `idx_fk_inventory_id` (`inventory_id`), KEY `idx_fk_staff_id` (`staff_id`), KEY `idx_fk_customer_id` (`customer_id`) USING BTREE, CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE, CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE) ENGINE=InnoDB DEFAULT CHARSET=utf8;") 61 | 62 | self.assertEqual(i, 0) 63 | 64 | def test_table_options(self): 65 | """Test: src and dest table have different options""" 66 | self.src.tables['address'].options['engine'].value = "MyISAM" 67 | 68 | p,r = syncdb.sync_table_options(self.src.tables['address'], self.dest.tables['address'], sync_auto_inc=False, sync_comments=False) 69 | self.assertEqual(p, "ENGINE=MyISAM") 70 | self.assertEqual(r, "ENGINE=InnoDB") 71 | 72 | 73 | def test_table_options_with_auto_inc(self): 74 | """Test: src and dest table have different 
options (include AUTO_INCREMENT value)""" 75 | self.src.tables['address'].options['engine'].value = "MyISAM" 76 | self.src.tables['address'].options['auto_increment'].value = 11 77 | 78 | p,r = syncdb.sync_table_options(self.src.tables['address'], self.dest.tables['address'], sync_auto_inc=True, sync_comments=False) 79 | self.assertEqual(p, "ENGINE=MyISAM AUTO_INCREMENT=11") 80 | self.assertEqual(r, "ENGINE=InnoDB AUTO_INCREMENT=1") 81 | 82 | 83 | def test_table_options_with_comment(self): 84 | """Test: src and dest table have different options (include table COMMENT)""" 85 | self.src.tables['address'].options['engine'].value = "MyISAM" 86 | self.src.tables['address'].options['comment'].value = "hello world" 87 | 88 | p,r = syncdb.sync_table_options(self.src.tables['address'], self.dest.tables['address'], sync_auto_inc=False, sync_comments=True) 89 | self.assertEqual(p, "ENGINE=MyISAM COMMENT='hello world'") 90 | self.assertEqual(r, "ENGINE=InnoDB COMMENT=''") 91 | 92 | 93 | if __name__ == "__main__": 94 | from test_all import get_database_url 95 | TestSyncTables.database_url = get_database_url() 96 | unittest.main() -------------------------------------------------------------------------------- /tests/test_utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import unittest 3 | import os 4 | import glob 5 | import datetime 6 | from schemasync.utils import versioned, create_pnames, compare_version, PatchBuffer 7 | 8 | 9 | class TestVersioned(unittest.TestCase): 10 | def setUp(self): 11 | filename = "/tmp/schemasync_util_testfile.txt" 12 | self.base_name, self.ext = os.path.splitext(filename) 13 | files = glob.glob(self.base_name + '*' + self.ext) 14 | 15 | for f in files: 16 | os.unlink(f) 17 | 18 | def tearDown(self): 19 | files = glob.glob(self.base_name + '*' + self.ext) 20 | 21 | for f in files: 22 | os.unlink(f) 23 | 24 | def test_initial_file(self): 25 | self.assertEqual(self.base_name + 
self.ext, 26 | versioned(self.base_name + self.ext)) 27 | 28 | def test_inc_sequence(self): 29 | open(self.base_name + self.ext, 'w').close() 30 | 31 | self.assertEqual(self.base_name + '_1' + self.ext, 32 | versioned(self.base_name + self.ext)) 33 | 34 | def test_inc_sequence_incomplete(self): 35 | open(self.base_name + self.ext, 'w').close() 36 | open(self.base_name + '_2' + self.ext, 'w').close() 37 | 38 | self.assertEqual(self.base_name + '_3' + self.ext, 39 | versioned(self.base_name + self.ext)) 40 | 41 | def test_inc_sequence_missing(self): 42 | open(self.base_name + '_4' + self.ext, 'w').close() 43 | 44 | self.assertEqual(self.base_name + '_5' + self.ext, 45 | versioned(self.base_name + self.ext)) 46 | 47 | 48 | class TestPNames(unittest.TestCase): 49 | def test_no_tag(self): 50 | d = datetime.datetime.now().strftime("%Y%m%d") 51 | p = "mydb.%s.patch.sql" % d 52 | r = "mydb.%s.revert.sql" % d 53 | self.assertEqual((p,r), create_pnames("mydb", date_format="%Y%m%d")) 54 | 55 | def test_simple_tag(self): 56 | d = datetime.datetime.now().strftime("%Y%m%d") 57 | p = "mydb_tag.%s.patch.sql" % d 58 | r = "mydb_tag.%s.revert.sql" % d 59 | self.assertEqual((p,r), create_pnames("mydb",tag="tag", date_format="%Y%m%d")) 60 | 61 | def test_alphanumeric_tag(self): 62 | d = datetime.datetime.now().strftime("%Y%m%d") 63 | p = "mydb_tag123.%s.patch.sql" % d 64 | r = "mydb_tag123.%s.revert.sql" % d 65 | self.assertEqual((p,r), create_pnames("mydb",tag="tag123", date_format="%Y%m%d")) 66 | 67 | def test_tag_with_spaces(self): 68 | d = datetime.datetime.now().strftime("%Y%m%d") 69 | p = "mydb_mytag.%s.patch.sql" % d 70 | r = "mydb_mytag.%s.revert.sql" % d 71 | self.assertEqual((p,r), create_pnames("mydb",tag="my tag", date_format="%Y%m%d")) 72 | 73 | def test_tag_with_invalid_chars(self): 74 | d = datetime.datetime.now().strftime("%Y%m%d") 75 | p = "mydb_tag.%s.patch.sql" % d 76 | r = "mydb_tag.%s.revert.sql" % d 77 | self.assertEqual((p,r), 
create_pnames("mydb",tag="tag!@#$%^&*()+?<>:{},./|\[];", date_format="%Y%m%d")) 78 | 79 | def test_tag_with_valid_chars(self): 80 | d = datetime.datetime.now().strftime("%Y%m%d") 81 | p = "mydb_my-tag_123.%s.patch.sql" % d 82 | r = "mydb_my-tag_123.%s.revert.sql" % d 83 | self.assertEqual((p,r), create_pnames("mydb",tag="my-tag_123", date_format="%Y%m%d")) 84 | 85 | class TestCompareVersion(unittest.TestCase): 86 | def test_basic_compare(self): 87 | self.assertTrue(compare_version('10.0.0-mysql', '5.0.0-mysql') > 0) 88 | self.assertTrue(compare_version('10.0.0-mysql', '5.0.0-log') > 0) 89 | self.assertTrue(compare_version('5.1.0-mysql', '5.0.1-log') > 0) 90 | self.assertTrue(compare_version('5.0.0-mysql', '5.0.0-log') == 0) 91 | self.assertTrue(compare_version('5.0.0-mysql', '5.0.1-log') < 0) 92 | 93 | class TestPatchBuffer(unittest.TestCase): 94 | 95 | def setUp(self): 96 | self.p = PatchBuffer(name="patch.txt", 97 | filters=[], 98 | tpl="data in this file: %(data)s", 99 | ctx={'x':'y'}, 100 | version_filename=True) 101 | 102 | def tearDown(self): 103 | if (os.path.isfile(self.p.name)): 104 | os.unlink(self.p.name) 105 | 106 | def test_loaded(self): 107 | self.assertEqual("patch.txt", self.p.name) 108 | self.assertEqual([], self.p.filters) 109 | self.assertEqual("data in this file: %(data)s", self.p.tpl) 110 | self.assertEqual({'x':'y'}, self.p.ctx) 111 | self.assertEqual(True, self.p.version_filename) 112 | self.assertEqual(False, self.p.modified) 113 | 114 | def test_write(self): 115 | self.assertEqual(False, self.p.modified) 116 | self.p.write("hello world") 117 | self.assertEqual(True, self.p.modified) 118 | 119 | def test_save(self): 120 | self.assertEqual(False, os.path.isfile(self.p.name)) 121 | self.p.write("hello, world") 122 | self.p.save() 123 | self.assertEqual(True, os.path.isfile(self.p.name)) 124 | f= open(self.p.name, 'r') 125 | self.assertEqual("data in this file: hello, world", f.readline()) 126 | 127 | def test_save_versioned(self): 128 | 
self.p.version_filename = True 129 | self.assertEqual(False, os.path.isfile(self.p.name)) 130 | self.p.write("hello, world") 131 | 132 | self.p.save() 133 | self.assertEqual(self.p.name, "patch.txt") 134 | self.assertEqual(True, os.path.isfile(self.p.name)) 135 | f= open(self.p.name, 'r') 136 | self.assertEqual("data in this file: hello, world", f.readline()) 137 | 138 | self.p.save() 139 | self.assertEqual(self.p.name, "patch_1.txt") 140 | self.assertEqual(True, os.path.isfile(self.p.name)) 141 | f= open(self.p.name, 'r') 142 | self.assertEqual("data in this file: hello, world", f.readline()) 143 | 144 | os.unlink("patch.txt") 145 | self.assertEqual(False, os.path.isfile("patch.txt")) 146 | 147 | os.unlink("patch_1.txt") 148 | self.assertEqual(False, os.path.isfile("patch_1.txt")) 149 | 150 | def test_delete(self): 151 | self.assertEqual(False, os.path.isfile(self.p.name)) 152 | self.p.write("hello, world") 153 | self.p.save() 154 | self.assertEqual(self.p.name, "patch.txt") 155 | self.assertEqual(True, os.path.isfile(self.p.name)) 156 | self.p.delete() 157 | self.assertEqual(False, os.path.isfile(self.p.name)) 158 | 159 | if __name__ == "__main__": 160 | unittest.main() --------------------------------------------------------------------------------