├── .gitignore ├── Changes.txt ├── README.md ├── edda ├── __init__.py ├── filters │ ├── __init__.py │ ├── balancer.py │ ├── chunk_migration.py │ ├── conn_msg.py │ ├── fsync_lock.py │ ├── init_and_listen.py │ ├── restart.py │ ├── rs_end_sync.py │ ├── rs_exit.py │ ├── rs_reconfig.py │ ├── rs_status.py │ ├── rs_sync.py │ ├── stale_secondary.py │ └── template.py ├── post │ ├── __init__.py │ ├── clock_skew.py │ ├── event_matchup.py │ ├── replace_clock_skew.py │ └── server_matchup.py ├── run_edda.py ├── sample_logs │ ├── hp │ │ ├── 1.log │ │ ├── 2.log │ │ ├── 3.log │ │ ├── 4.log │ │ ├── 5.log │ │ ├── 6.log │ │ ├── 7.log │ │ └── json_example │ │ │ └── hp_test.json │ ├── pr │ │ ├── 1.log │ │ ├── 2.log │ │ ├── 3.log │ │ ├── 4.log │ │ ├── 5.log │ │ ├── 6.log │ │ └── json_example │ │ │ └── pr_test.json │ └── rs-add-remove │ │ ├── db27017.log │ │ ├── db27018.log │ │ ├── db27019.log │ │ └── db27020.log ├── supporting_methods.py └── ui │ ├── __init__.py │ ├── connection.py │ ├── display │ ├── edda.html │ ├── favicon.ico │ ├── img │ │ └── texture.png │ ├── js │ │ ├── arrows.js │ │ ├── connection.js │ │ ├── draw_servers.js │ │ ├── links.js │ │ ├── mongostrap.min.js │ │ ├── mouse_over.js │ │ ├── pop.js │ │ ├── render.js │ │ └── setup.js │ ├── server_icons.jpg │ └── style │ │ ├── bootstrap3_override.css │ │ ├── edda.css │ │ └── mongostrap.min.css │ └── frames.py ├── scripts ├── edd3 ├── edda └── edda2 ├── setup.py └── test ├── __init__.py ├── repl_config.js ├── test_addr_matchup.py ├── test_clock_skew.py ├── test_connection.py ├── test_event_matchup.py ├── test_frames.py ├── test_fsync_lock.py ├── test_init_and_listen.py ├── test_organizing_servers.py ├── test_replacing_clock_skew.py ├── test_rs_exit.py ├── test_rs_reconfig.py ├── test_rs_status.py ├── test_rs_sync.py ├── test_stale_secondary.py └── test_supporting_methods.py /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | \#* 3 | *.pyc 4 | .DS_Store -------------------------------------------------------------------------------- /Changes.txt: -------------------------------------------------------------------------------- 1 | [Next release, insert number here] 2 | [NEW FEATURES] 3 | 4 | * After each run, a '.json' file containing your current configuration is created in the directory from which you ran the program. Instead of sending entire log files for parsing, you can send this single '.json' file, which contains all of the information that edda needs to recreate the last run. 5 | 6 | 0.6.1 Fri, Aug 3, 2012 7 | [BUG FIXES] 8 | 9 | * Added missing sample log files in the hp directory. 10 | 11 | 0.6.0 Wed, Aug 1, 2012 12 | 13 | [ENHANCEMENTS] 14 | 15 | * Speed of parsing log files has increased fivefold. 16 | 17 | 18 | [NEW FEATURES] 19 | 20 | * Added a progress bar in the terminal to show the progress of parsing a log file. 21 | 22 | * Added the ability to read compressed ('.gz') log files. 23 | 24 | * Added a field to the pop-up displaying the server version. 25 | 26 | * Use '--http_port #####' to specify the port used for communication between the front end and back end. 27 | 28 | [BUG FIXES] 29 | 30 | * Fixed a bug that prevented setting a custom port with the --port option. 31 | 32 | 33 | 0.5.0 Tue, Jul 24, 2012 34 | 35 | [RELEASE] 36 | 37 | - Initial release of Edda!
38 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | DISCLAIMER 2 | ---------- 3 | Please note: all tools/ scripts in this repo are released for use "AS IS" without any warranties of any kind, including, but not limited to their installation, use, or performance. We disclaim any and all warranties, either express or implied, including but not limited to any warranty of noninfringement, merchantability, and/ or fitness for a particular purpose. We do not warrant that the technology will meet your requirements, that the operation thereof will be uninterrupted or error-free, or that any errors will be corrected. 4 | Any use of these scripts and tools is at your own risk. There is no guarantee that they have been through thorough testing in a comparable environment and we are not responsible for any damage or data loss incurred with their use. 5 | You are responsible for reviewing and testing any scripts you run thoroughly before use in any non-testing environment. 6 | 7 | Edda, a log visualizer for MongoDB 8 | ================================== 9 | 10 | Edda © 2014 MongoDB, Inc. 11 | 12 | Authors: Samantha Ritter, Kaushal Parikh 13 | 14 | INSTALL 15 | ------- 16 | 17 | You must have the following installed as prerequisites for running Edda. 18 | 19 | + Pip: 20 | 21 | http://www.pip-installer.org/en/latest/installing.html# 22 | 23 | + MongoDB 24 | 25 | see http://www.mongodb.org/downloads 26 | 27 | + Install Edda: 28 | 29 | $ pip install edda 30 | 31 | NOTE: version 0.7.0 is the latest version of Edda on PyPi. This version only supports logs from Mongod up to v2.4. To use logs from MongoDB v2.6 and newer, please clone from the repository instead. 32 | 33 | RUN 34 | --- 35 | 36 | In order to run edda you must first have a mongod running: 37 | 38 | $ mongod 39 | 40 | Give the log files from your servers as command-line 41 | arguments to edda. Please provide only log files from the same server cluster! 42 | 43 | $ python edda/run_edda.py --options filename1 filename2 ... 44 | (see python run_edda.py --help for options) 45 | 46 | After each run, edda generates a '.json' file that contains all of the information required to recreate the current run. Run the '.json' file just as you would a '.log'. 47 | If you've run edda before, you can pass in a .json file to skip the processing step and go straight to visualization: 48 | 49 | $ python edda/run_edda.py previous_edda_data.json 50 | 51 | There are some sample log files in edda/sample_logs you can run 52 | if you don't have any log files of your own yet. 53 | 54 | ADDITIONAL 55 | ---------- 56 | 57 | If you'd like to report a bug or request a new feature, 58 | please file an issue on our github repository: 59 | https://github.com/10gen-labs/edda/issues/new 60 | -------------------------------------------------------------------------------- /edda/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | -------------------------------------------------------------------------------- /edda/filters/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import balancer 16 | import chunk_migration 17 | import fsync_lock 18 | import init_and_listen 19 | import restart 20 | import rs_end_sync 21 | import rs_exit 22 | import rs_reconfig 23 | import rs_status 24 | import rs_sync 25 | import stale_secondary 26 | -------------------------------------------------------------------------------- /edda/filters/balancer.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | from edda.supporting_methods import capture_address 18 | 19 | def criteria(msg): 20 | """Does the given log line fit the criteria for this filter? 21 | If yes, return an integer code. Otherwise, return -1. 
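    Code 1 corresponds to the older 'starting new replica set monitor for replica set ...' wording; code 2 corresponds to the 3.4+ 'Starting new replica set monitor for ...' wording, which process() below parses as '<set name>/<host,host,...>' (placeholders illustrative).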
22 | """ 23 | # recognize a shard 24 | if 'starting new replica set monitor for replica set' in msg: 25 | return 1 26 | if 'Starting new replica set monitor for' in msg: 27 | return 2 28 | return 0 29 | 30 | def process(msg, date): 31 | """If the given log line fits the critera for this filter, 32 | process it and create a document of the following format: 33 | doc = { 34 | "date" : date, 35 | "type" : "balancer", 36 | "msg" : msg, 37 | "origin_server" : name, 38 | "info" : { 39 | "subtype" : "new_shard", 40 | "replSet" : name, 41 | "members" : [ strings of server names ], 42 | "mongos" : True/False 43 | } 44 | } 45 | """ 46 | result = criteria(msg) 47 | if result == 0: 48 | return None 49 | 50 | doc = {} 51 | doc["date"] = date 52 | doc["type"] = "balancer" 53 | doc["msg"] = msg 54 | doc["info"] = {} 55 | 56 | doc["info"]["mongos"] = False 57 | if '[mongosMain]' in msg: 58 | doc["info"]["mongos"] = True 59 | 60 | if result == 1: 61 | # get replica set name and seeds 62 | a = msg.split("starting new replica set monitor for replica set") 63 | b = a[1].split() 64 | doc["info"]["subtype"] = "new_shard" 65 | doc["info"]["replSet"] = b[0] 66 | doc["info"]["members"] = b[3].split(',') 67 | doc["info"]["server"] = "self" 68 | return doc 69 | 70 | if result == 2: 71 | # updated parsing for 3.4 72 | a = msg.split("Starting new replica set monitor for ") 73 | b = a[1].split('/') 74 | doc["info"]["subtype"] = "new_shard" 75 | doc["info"]["replSet"] = b[0] 76 | doc["info"]["members"] = b[1].split(',') 77 | doc["info"]["server"] = "self" 78 | print "returning shard doc " + str(doc) 79 | return doc 80 | 81 | return None 82 | -------------------------------------------------------------------------------- /edda/filters/chunk_migration.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | import logging 19 | 20 | 21 | def criteria(msg): 22 | """Does the given log line fit the criteria for this filter? 23 | If so, return an integer code. Otherwise, return 0. 
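    Codes: 1 = chunk migration starting, 2 = moveChunk.commit, 3 = moveChunk.abort, 4 = data transfer progress (see the checks below).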
24 | """ 25 | if 'Starting chunk migration' in msg: 26 | return 1 27 | if 'moveChunk.commit' in msg: 28 | return 2 29 | if 'moveChunk.abort' in msg: 30 | return 3 31 | if 'moveChunk data transfer progress' in msg: 32 | return 4 33 | return 0 34 | 35 | def process(msg, date): 36 | """If the given log line fits the criteria for this filter, 37 | process the line and create a document of the following format: 38 | document = { 39 | "date" : date, 40 | "type" : "start_migration", 41 | "msg" : msg, 42 | "info" : { 43 | "server" : "self", 44 | } 45 | } 46 | 47 | For actual data transfer information: 48 | document = { 49 | "date" : date, 50 | "type" : "migration" or "commit_migration" or "abort_migration", 51 | "msg" : msg, 52 | "info" : { 53 | "server" : "self", 54 | "from_shard" : "string", 55 | "to_shard" : "string" 56 | } 57 | """ 58 | messageType = criteria(msg) 59 | if not messageType: 60 | return None 61 | 62 | doc = {} 63 | doc["date"] = date 64 | doc["info"] = {} 65 | doc["msg"] = msg 66 | 67 | # populate info 68 | doc["info"]["server"] = "self" 69 | 70 | # starts 71 | if messageType == 1: 72 | doc["type"] = "start_migration" 73 | # successful migrations 74 | elif messageType == 2: 75 | doc["type"] = "commit_migration" 76 | # aborted migrations 77 | elif messageType == 3: 78 | doc["type"] = "abort_migration" 79 | # progress messages 80 | elif messageType == 4: 81 | doc["type"] = "migration" 82 | label = "sessionId: \"" 83 | shards = msg[msg.find(label) + len(label):].split('_') 84 | doc["info"]["from_shard"] = shards[0] 85 | doc["info"]["to_shard"] = shards[1] 86 | 87 | # clean this up 88 | if messageType == 2 or messageType == 3: 89 | from_label = "from: \"" 90 | doc["info"]["from_shard"] = msg[msg.find(from_label) + len(from_label):].split("\"")[0] 91 | to_label = "to: \"" 92 | doc["info"]["to_shard"] = msg[msg.find(to_label) + len(to_label):].split("\"")[0] 93 | 94 | logger = logging.getLogger(__name__) 95 | logger.debug(doc) 96 | return doc 97 | -------------------------------------------------------------------------------- /edda/filters/conn_msg.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
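# Illustrative example (not part of the original module): a line of roughly the
# form '... connection accepted from 10.4.3.1:52180 #12 ...' makes criteria()
# return 1, and process() then builds a doc with info.subtype 'new_conn',
# info.conn_addr '10.4.3.1:52180' and info.conn_number '12' (captured by the
# regexes below); 'end connection ...' lines produce the matching 'end_conn' doc.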
14 | 15 | import logging 16 | import re 17 | from edda.supporting_methods import capture_address 18 | 19 | # module-level regex 20 | START_CONN_NUMBER = re.compile("#[0-9]+") 21 | END_CONN_NUMBER = re.compile("\[conn[0-9]+\]") 22 | ANY_NUMBER = re.compile("[0-9]+") 23 | 24 | 25 | def criteria(msg): 26 | """Determing if the given message is an instance 27 | of a connection type message 28 | """ 29 | if 'connection accepted' in msg: 30 | return 1 31 | if 'end connection' in msg: 32 | return 2 33 | return 0 34 | 35 | 36 | def process(msg, date): 37 | """Turn this message into a properly formatted 38 | connection type document: 39 | doc = { 40 | "type" : "conn" 41 | "date" : datetime 42 | "msg" : msg 43 | "info" : { 44 | "subtype" : "new_conn" or "end_conn" 45 | "conn_addr" : "addr:port" 46 | "conn_num" : int 47 | "server" : "self" 48 | } 49 | } 50 | """ 51 | 52 | result = criteria(msg) 53 | if not result: 54 | return None 55 | doc = {} 56 | doc["date"] = date 57 | doc["info"] = {} 58 | doc["msg"] = msg 59 | doc["type"] = "conn" 60 | 61 | if result == 1: 62 | new_conn(msg, doc) 63 | if result == 2: 64 | ended(msg, doc) 65 | return doc 66 | 67 | 68 | def new_conn(msg, doc): 69 | logger = logging.getLogger(__name__) 70 | """Generate a document for a new connection event.""" 71 | doc["info"]["subtype"] = "new_conn" 72 | 73 | addr = capture_address(msg) 74 | if not addr: 75 | logger.warning("No hostname or IP found for this server") 76 | return None 77 | doc["info"]["server"] = "self" 78 | doc["info"]["conn_addr"] = addr 79 | 80 | # isolate connection number 81 | m = START_CONN_NUMBER.search(msg) 82 | if not m: 83 | logger.debug("malformed new_conn message: no connection number found") 84 | return None 85 | doc["info"]["conn_number"] = m.group(0)[1:] 86 | 87 | debug = "Returning new doc for a message of type: initandlisten: new_conn" 88 | logger.debug(debug) 89 | return doc 90 | 91 | 92 | def ended(msg, doc): 93 | logger = logging.getLogger(__name__) 94 | """Generate a document for an end-of-connection event.""" 95 | doc["info"]["subtype"] = "end_conn" 96 | 97 | addr = capture_address(msg) 98 | if not addr: 99 | logger.warning("No hostname or IP found for this server") 100 | return None 101 | doc["info"]["server"] = "self" 102 | doc["info"]["conn_addr"] = addr 103 | 104 | # isolate connection number 105 | m = END_CONN_NUMBER.search(msg) 106 | if not m: 107 | logger.warning("malformed new_conn message: no connection number found") 108 | return None 109 | # do a second search for the actual number 110 | n = ANY_NUMBER.search(m.group(0)) 111 | doc["info"]["conn_number"] = n.group(0) 112 | 113 | debug = "Returning new doc for a message of type: initandlisten: end_conn" 114 | logger.debug(debug) 115 | 116 | return doc 117 | -------------------------------------------------------------------------------- /edda/filters/fsync_lock.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | 16 | def criteria(msg): 17 | """Does the given log line fit the criteria for this filter? 18 | If yes, return an integer code. Otherwise, return -1. 19 | """ 20 | if 'command: unlock requested' in msg: 21 | return 1 22 | elif 'CMD fsync: sync:1 lock:1' in msg: 23 | return 2 24 | elif 'db is now locked' in msg: 25 | return 3 26 | return -1 27 | 28 | 29 | def process(msg, date): 30 | """If the given log line fits the criteria 31 | for this filter, processes the line and creates 32 | a document of the following format: 33 | doc = { 34 | "date" : date, 35 | "type" : "fsync", 36 | "info" : { 37 | "state" : state 38 | "server" : "self" 39 | } 40 | "original_message" : msg 41 | } 42 | """ 43 | message_type = criteria(msg) 44 | if message_type <= 0: 45 | return None 46 | 47 | doc = {} 48 | doc["date"] = date 49 | doc["type"] = "fsync" 50 | doc["info"] = {} 51 | doc["original_message"] = msg 52 | 53 | if message_type == 1: 54 | doc["info"]["state"] = "UNLOCKED" 55 | elif message_type == 2: 56 | doc["info"]["state"] = "FSYNC" 57 | else: 58 | doc["info"]["state"] = "LOCKED" 59 | 60 | doc["info"]["server"] = "self" 61 | return doc 62 | -------------------------------------------------------------------------------- /edda/filters/init_and_listen.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | import logging 18 | import re 19 | 20 | PORT_NUMBER = re.compile("port=[0-9]{1,5}") 21 | LOGGER = logging.getLogger(__name__) 22 | 23 | def criteria(msg): 24 | """ Does the given log line fit the criteria for this filter? 25 | If yes, return an integer code. If not, return 0.
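    Codes: 1 = server startup banner (mongod or mongos), 2 = db version line, 3 = startup options line, 4 = build info line.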
26 | """ 27 | if ('[initandlisten] MongoDB starting' in msg or 28 | '[mongosMain] MongoS' in msg or 29 | '[mongosMain] mongos' in msg): 30 | return 1 31 | if 'db version' in msg: 32 | return 2 33 | if 'options:' in msg: 34 | return 3 35 | if 'build info:' in msg: 36 | return 4 37 | return 0 38 | 39 | def process(msg, date): 40 | """If the given log line fits the criteria for 41 | this filter, processes the line and creates 42 | a document of the following format: 43 | 44 | "init" type documents: 45 | doc = { 46 | "date" : date, 47 | "type" : "init", 48 | "msg" : msg, 49 | "origin_server" : name --> this field is added in the main file 50 | "info" field structure varies with subtype: 51 | (startup) "info" : { 52 | "subtype" : "startup", 53 | "addr" : "hostaddr:port", 54 | "type" : mongos, mongod, config 55 | } 56 | (new_conn) "info" : { 57 | "subtype" : "new_conn", 58 | "server" : "hostaddr:port", 59 | "conn_number" : int, 60 | } 61 | } 62 | 63 | "version" type documents: 64 | doc = { 65 | "date" : date, 66 | "type" : "version", 67 | "msg" : msg, 68 | "version" : version number, 69 | "info" : { 70 | "server" : "self" 71 | } 72 | } 73 | 74 | "startup_options" documents: 75 | doc = { 76 | "date" : date, 77 | "type" : "startup_options", 78 | "msg" : msg, 79 | "info" : { 80 | "replSet" : replica set name (if there is one), 81 | "options" : all options, as a string 82 | } 83 | } 84 | 85 | "build_info" documents: 86 | doc = { 87 | "date" : date, 88 | "type" : "build_info", 89 | "msg" : msg, 90 | "info" : { 91 | "build_info" : string 92 | } 93 | } 94 | """ 95 | 96 | result = criteria(msg) 97 | if not result: 98 | return None 99 | doc = {} 100 | doc["date"] = date 101 | doc["info"] = {} 102 | doc["info"]["server"] = "self" 103 | 104 | # initial startup message 105 | if result == 1: 106 | doc["type"] = "init" 107 | doc["msg"] = msg 108 | return starting_up(msg, doc) 109 | 110 | # db version 111 | if result == 2: 112 | doc["type"] = "version" 113 | m = msg.find("db version v") 114 | # ick, but supports older-style log messages 115 | doc["version"] = msg[m + 12:].split()[0].split(',')[0] 116 | return doc 117 | 118 | # startup options 119 | if result == 3: 120 | doc["type"] = "startup_options" 121 | m = msg.find("replSet:") 122 | if m > -1: 123 | doc["info"]["replSet"] = msg[m:].split("\"")[1] 124 | doc["info"]["options"] = msg[msg.find("options:") + 9:] 125 | return doc 126 | 127 | # build info 128 | if result == 4: 129 | doc["type"] = "build_info" 130 | m = msg.find("build info:") 131 | doc["info"]["build_info"] = msg[m + 12:] 132 | return doc 133 | 134 | def starting_up(msg, doc): 135 | """Generate a document for a server startup event.""" 136 | doc["info"]["subtype"] = "startup" 137 | 138 | # what type of server is this? 
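# (Assumption, based on typical 2.x-era startup lines: mongos banners mention
# 'MongoS'/'mongos', while mongod banners carry 'port=' and 'host=' fields,
# which the parsing below relies on.)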
139 | if ('MongoS' in msg) or ('mongos' in msg): 140 | doc["info"]["type"] = "mongos" 141 | # mongos startup does not provide an address 142 | doc["info"]["addr"] = "unknown" 143 | LOGGER.debug("Returning mongos startup doc") 144 | return doc 145 | elif msg.find("MongoDB") > -1: 146 | doc["info"]["type"] = "mongod" 147 | 148 | # isolate port number 149 | m = PORT_NUMBER.search(msg) 150 | if m is None: 151 | LOGGER.debug("malformed starting_up message: no port number found") 152 | return None 153 | 154 | port = m.group(0)[5:] 155 | host = msg[msg.find("host=") + 5:].split()[0] 156 | 157 | addr = host + ":" + port 158 | addr = addr.replace('\n', "") 159 | addr = addr.replace(" ", "") 160 | doc["info"]["addr"] = addr 161 | deb = "Returning new doc for a message of type: initandlisten: starting_up" 162 | LOGGER.debug(deb) 163 | return doc 164 | -------------------------------------------------------------------------------- /edda/filters/restart.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | import logging 19 | 20 | 21 | def criteria(msg): 22 | """Does the given log line fit the criteria for this filter? 23 | If so, return an integer code. Otherwise, return 0. 24 | """ 25 | if '***** SERVER RESTARTED *****' in msg: 26 | return 1 27 | return 0 28 | 29 | 30 | def process(msg, date): 31 | """If the given log line fits the criteria for this filter, 32 | process the line and create a document of the following format: 33 | document = { 34 | "date" : date, 35 | "type" : "restart", 36 | "msg" : msg, 37 | "info" : { 38 | "server" : "self" 39 | } 40 | } 41 | """ 42 | if criteria(msg) == 0: 43 | return None 44 | 45 | doc = {} 46 | doc["date"] = date 47 | doc["type"] = "restart" 48 | doc["info"] = { "server" : "self" } 49 | 50 | logger = logging.getLogger(__name__) 51 | logger.debug(doc) 52 | return doc 53 | -------------------------------------------------------------------------------- /edda/filters/rs_end_sync.py: -------------------------------------------------------------------------------- 1 | # Copyright 2016 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | import logging 19 | 20 | 21 | logs = [ 22 | "could not find member to sync from", 23 | "failed to find sync source", 24 | "Fetcher stopped querying remote oplog", 25 | "Cannot select sync source which is blacklisted" 26 | ] 27 | 28 | def criteria(msg): 29 | """Does the given log line fit the criteria for this filter? 30 | If so, return an integer code. Otherwise, return 0. 31 | """ 32 | for log in logs: 33 | if msg.find(log) >= 0: 34 | return 1 35 | return 0 36 | 37 | def process(msg, date): 38 | """If the given log line fits the criteria for this filter, 39 | process the line and create a document of the following format: 40 | document = { 41 | "date" : date, 42 | "type" : "end_sync", 43 | "msg" : msg, 44 | "info" : { 45 | "server" : "self" 46 | } 47 | } 48 | """ 49 | messageType = criteria(msg) 50 | if not messageType: 51 | return None 52 | doc = {} 53 | doc["date"] = date 54 | doc["type"] = "end_sync" 55 | doc["info"] = {} 56 | doc["msg"] = msg 57 | 58 | # populate info 59 | doc["info"]["server"] = "self" 60 | 61 | logger = logging.getLogger(__name__) 62 | logger.debug(doc) 63 | return doc 64 | -------------------------------------------------------------------------------- /edda/filters/rs_exit.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | def criteria(msg): 19 | """Does the given log line fit the criteria for this filter? 20 | If yes, return an integer code. If not, return 0. 21 | """ 22 | if 'dbexit: really exiting now' in msg: 23 | return 1 24 | return 0 25 | 26 | 27 | def process(msg, date): 28 | """if the given log line fits the criteria for this filter, 29 | processes the line and creates a document for it. 30 | document = { 31 | "date" : date, 32 | "type" : "exit", 33 | "info" : { 34 | "server": "self" 35 | } 36 | "msg" : msg 37 | } 38 | """ 39 | 40 | messagetype = criteria(msg) 41 | if not messagetype: 42 | return None 43 | 44 | doc = {} 45 | doc["date"] = date 46 | doc["type"] = "exit" 47 | doc["info"] = {} 48 | doc["msg"] = msg 49 | doc["info"]["server"] = "self" 50 | return doc 51 | -------------------------------------------------------------------------------- /edda/filters/rs_reconfig.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | 16 | def criteria(msg): 17 | """Does the given log line fit the criteria for this filter? 18 | If yes, return an integer code. If not, return -1. 19 | """ 20 | if 'replSetReconfig' in msg: 21 | return 1 22 | return 0 23 | 24 | 25 | def process(msg, date): 26 | 27 | """If the given log line fits the criteria for 28 | this filter, processes the line and creates 29 | a document of the following format: 30 | doc = { 31 | "date" : date, 32 | "type" : "reconfig", 33 | "info" : { 34 | "server" : self 35 | } 36 | "msg" : msg 37 | } 38 | """ 39 | message_type = criteria(msg) 40 | if not message_type: 41 | return None 42 | doc = {} 43 | doc["date"] = date 44 | doc["type"] = "reconfig" 45 | doc["msg"] = msg 46 | doc["info"] = {} 47 | doc["info"]["server"] = "self" 48 | 49 | return doc 50 | -------------------------------------------------------------------------------- /edda/filters/rs_status.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | from edda.supporting_methods import capture_address 18 | 19 | 20 | def criteria(msg): 21 | """Does the given log line fit the criteria for this filter? 22 | If yes, return an integer code. Otherwise, return -1. 
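    The code returned also serves as an index into the list of state labels used by process() below (0 = STARTUP1 through 10 = REMOVED).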
23 | """ 24 | # state STARTUP1 25 | if 'replSet I am' in msg: 26 | return 0 27 | # state PRIMARY 28 | if 'PRIMARY' in msg: 29 | return 1 30 | # state SECONDARY 31 | if 'SECONDARY' in msg: 32 | return 2 33 | # state RECOVERING 34 | if 'RECOVERING' in msg: 35 | return 3 36 | # state FATAL ERROR 37 | if 'FATAL' in msg: 38 | return 4 39 | # state STARTUP2 40 | if 'STARTUP2' in msg: 41 | return 5 42 | # state UNKNOWN 43 | if 'UNKNOWN' in msg: 44 | return 6 45 | # state ARBITER 46 | if 'ARBITER' in msg: 47 | return 7 48 | # state DOWN 49 | if 'DOWN' in msg: 50 | return 8 51 | # state ROLLBACK 52 | if 'ROLLBACK' in msg: 53 | return 9 54 | # state REMOVED 55 | if 'REMOVED' in msg: 56 | return 10 57 | 58 | 59 | def process(msg, date): 60 | 61 | """If the given log line fits the critera for this filter, 62 | process it and create a document of the following format: 63 | doc = { 64 | "date" : date, 65 | "type" : "status", 66 | "msg" : msg, 67 | "origin_server" : name, 68 | "info" : { 69 | "state" : state, 70 | "state_code" : int, 71 | "server" : "host:port", 72 | } 73 | } 74 | """ 75 | result = criteria(msg) 76 | if result < 0: 77 | return None 78 | labels = ["STARTUP1", "PRIMARY", "SECONDARY", 79 | "RECOVERING", "FATAL", "STARTUP2", 80 | "UNKNOWN", "ARBITER", "DOWN", "ROLLBACK", 81 | "REMOVED"] 82 | doc = {} 83 | doc["date"] = date 84 | doc["type"] = "status" 85 | doc["info"] = {} 86 | doc["msg"] = msg 87 | doc["info"]["state_code"] = result 88 | doc["info"]["state"] = labels[result] 89 | 90 | # if this is a startup message, and includes server address, do something special!!! 91 | # add an extra field to capture the IP 92 | n = capture_address(msg[20:]) 93 | if n: 94 | if result == 0: 95 | doc["info"]["server"] = "self" 96 | doc["info"]["addr"] = n 97 | else: 98 | doc["info"]["server"] = n 99 | else: 100 | # if no server found, assume self is target 101 | doc["info"]["server"] = "self" 102 | return doc 103 | -------------------------------------------------------------------------------- /edda/filters/rs_sync.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | import logging 19 | 20 | 21 | def criteria(msg): 22 | """Does the given log line fit the criteria for this filter? 23 | If so, return an integer code. Otherwise, return 0. 
24 | """ 25 | if ('[rsSync]' in msg and 26 | 'syncing' in msg): 27 | return 1 28 | if 'sync source candidate' in msg: 29 | return 1 30 | return 0 31 | 32 | 33 | def process(msg, date): 34 | """If the given log line fits the criteria for this filter, 35 | process the line and create a document of the following format: 36 | document = { 37 | "date" : date, 38 | "type" : "sync", 39 | "msg" : msg, 40 | "info" : { 41 | "sync_server" : "host:port" 42 | "server" : "self 43 | } 44 | } 45 | """ 46 | messageType = criteria(msg) 47 | if not messageType: 48 | return None 49 | doc = {} 50 | doc["date"] = date 51 | doc["type"] = "sync" 52 | doc["info"] = {} 53 | doc["msg"] = msg 54 | 55 | #Has the member begun syncing to a different place 56 | if(messageType == 1): 57 | return syncing_diff(msg, doc) 58 | 59 | 60 | def syncing_diff(msg, doc): 61 | """Generate and return a document for replica sets 62 | that are syncing to a new server. 63 | """ 64 | logs = [ 65 | "syncing to: ", 66 | "sync source candidate: " 67 | ] 68 | 69 | for log in logs: 70 | i = msg.find(log) 71 | if i < 0: 72 | continue 73 | doc["info"]["sync_server"] = msg[i + len(log): len(msg)] 74 | break 75 | 76 | if not doc["info"]["sync_server"]: 77 | return None 78 | 79 | doc["info"]["server"] = "self" 80 | logger = logging.getLogger(__name__) 81 | logger.debug(doc) 82 | return doc 83 | -------------------------------------------------------------------------------- /edda/filters/stale_secondary.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | 16 | def criteria(msg): 17 | """Does the given log line fit the criteria for this filter? 18 | Return an integer code if yes. Otherwise return 0. 19 | """ 20 | if 'too stale to catch up' in msg: 21 | return 1 22 | return 0 23 | 24 | 25 | def process(msg, date): 26 | 27 | """If the given log line fits the criteria for this filter, 28 | processes the line and creates a document for it. 29 | document = { 30 | "date" : date, 31 | "type" : "stale", 32 | "info" : { 33 | "server" : host:port 34 | } 35 | "msg" : msg 36 | } 37 | """ 38 | message_type = criteria(msg) 39 | if not message_type: 40 | return None 41 | 42 | doc = {} 43 | doc["date"] = date 44 | doc["type"] = "stale" 45 | doc["info"] = {} 46 | doc["msg"] = msg 47 | 48 | doc["info"]["server"] = "self" 49 | 50 | return doc 51 | -------------------------------------------------------------------------------- /edda/filters/template.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | def criteria(msg): 19 | """Does the given log line fit the criteria for this filter? 20 | If yes, return an integer code. Otherwise return 0. 21 | """ 22 | # perform a check or series of checks here on msg 23 | raise NotImplementedError 24 | 25 | 26 | def process(msg, date): 27 | """If the given log line fits the critera for this filter, 28 | processes the line and creates a document for it of the 29 | following format: 30 | doc = { 31 | "date" : date, 32 | "msg" : msg, 33 | "type" : (name of filter), 34 | "info" : { 35 | "server" : "host:port" or "self", 36 | (any additional fields) 37 | } 38 | } 39 | """ 40 | 41 | # doc = {} 42 | # doc["date"] = date 43 | # doc["msg"] = msg 44 | # doc["type"] = "your_filter_name" 45 | # doc["info"] = {} 46 | # etc. 47 | raise NotImplementedError 48 | -------------------------------------------------------------------------------- /edda/post/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import os 16 | import re 17 | 18 | __all__ = [] 19 | dirList = os.listdir(os.path.dirname(os.path.abspath(__file__))) 20 | pattern = re.compile(".py$") 21 | 22 | for d in dirList: 23 | # ignore anything that isn't strictly .py 24 | m = pattern.search(d) 25 | if (m != None): 26 | d = d[0:len(d) - 3] 27 | __all__.append(d) 28 | -------------------------------------------------------------------------------- /edda/post/clock_skew.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | # anatomy of a clock skew document: 18 | # document = { 19 | # "type" = "clock_skew" 20 | # "server_num" = int 21 | # "partners" = { 22 | # server_num : { 23 | # "skew_1" : weight, 24 | # "skew_2" : weight... 
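#             (each skew key is a signed offset in seconds, stored as a string;
#              its weight counts the matched status entries that support it)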
25 | # } 26 | # } 27 | 28 | from datetime import timedelta 29 | import logging 30 | 31 | 32 | def server_clock_skew(db, coll_name): 33 | """ Given the mongodb entries generated by edda, 34 | attempts to detect and resolve clock skew 35 | across different servers. 36 | """ 37 | logger = logging.getLogger(__name__) 38 | 39 | clock_skew = db[coll_name + ".clock_skew"] 40 | servers = db[coll_name + ".servers"] 41 | 42 | for doc_a in servers.find(): 43 | a_name = doc_a["network_name"] 44 | a_num = str(doc_a["server_num"]) 45 | if a_name == "unknown": 46 | logger.debug("Skipping unknown server") 47 | continue 48 | skew_a = clock_skew.find_one({"server_num": a_num}) 49 | if not skew_a: 50 | skew_a = clock_skew_doc(a_num) 51 | clock_skew.save(skew_a) 52 | for doc_b in servers.find(): 53 | b_name = doc_b["network_name"] 54 | b_num = str(doc_b["server_num"]) 55 | if b_name == "unknown": 56 | logger.debug("Skipping unknown server") 57 | continue 58 | if a_name == b_name: 59 | logger.debug("Skipping identical server") 60 | continue 61 | if b_num in skew_a["partners"]: 62 | logger.debug("Clock skew already found for this server") 63 | continue 64 | logger.info("Finding clock skew " 65 | "for {0} - {1}...".format(a_name, b_name)) 66 | skew_a["partners"][b_num] = detect(a_name, b_name, db, coll_name) 67 | if not skew_a["partners"][b_num]: 68 | continue 69 | skew_b = clock_skew.find_one({"server_num": b_num}) 70 | if not skew_b: 71 | skew_b = clock_skew_doc(b_num) 72 | # flip according to sign convention for other server: 73 | # if server is ahead, +t 74 | # if server is behind, -t 75 | skew_b["partners"][a_num] = {} 76 | for t in skew_a["partners"][b_num]: 77 | wt = skew_a["partners"][b_num][t] 78 | t = str(-int(t)) 79 | logger.debug("flipped one") 80 | skew_b["partners"][a_num][t] = wt 81 | clock_skew.save(skew_a) 82 | clock_skew.save(skew_b) 83 | 84 | 85 | def detect(a, b, db, coll_name): 86 | """ Compares each entry from cursor_a against every entry from 87 | cursor_b. In the case of matching messages, advances both cursors. 88 | Calculates time skew. While entries continue to match, adds 89 | weight to that time skew value. Stores all found time skew values, 90 | with respective weights, in a dictionary and returns. 91 | KNOWN BUGS: this algorithm may count some matches twice. 
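    For example, if server b's own status entries are stamped about 5 seconds later than a's corresponding entries across three consecutive matches, the returned dictionary would be {'5': 3}; differences of 2 seconds or less are ignored as ordinary network delay.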
92 | """ 93 | 94 | entries = db[coll_name + ".entries"] 95 | 96 | # set up cursors 97 | cursor_a = entries.find({ 98 | "type": "status", 99 | "origin_server": a, 100 | "info.server": b 101 | }) 102 | 103 | cursor_b = entries.find({ 104 | "type": "status", 105 | "origin_server": b, 106 | "info.server": "self" 107 | }) 108 | cursor_a.sort("date") 109 | cursor_b.sort("date") 110 | logger = logging.getLogger(__name__) 111 | skews = {} 112 | list_a = [] 113 | list_b = [] 114 | 115 | # store the entries from the cursor in a list 116 | for a in cursor_a: 117 | list_a.append(a) 118 | for b in cursor_b: 119 | list_b.append(b) 120 | 121 | # for each a, compare against every b 122 | for i in range(0, len(list_a)): 123 | for j in range(0, len(list_b)): 124 | # if they match, crawl through and count matches 125 | if match(list_a[i], list_b[j]): 126 | wt = 0 127 | while match(list_a[i + wt], list_b[j + wt]): 128 | wt += 1 129 | if (wt + i >= len(list_a)) or (wt + j >= len(list_b)): 130 | break 131 | # calculate time skew, save with weight 132 | td = list_b[j + wt - 1]["date"] - list_a[i + wt - 1]["date"] 133 | td = timedelta_to_int(td) 134 | if abs(td) > 2: 135 | key = in_skews(td, skews) 136 | if not key: 137 | logger.debug(("inserting new weight " 138 | "for td {0} into skews {1}").format(td, skews)) 139 | skews[str(td)] = wt 140 | else: 141 | logger.debug( 142 | " adding additional weight for " 143 | "td {0} into skews {1}".format(td, skews)) 144 | skews[key] += wt 145 | # could maybe fix redundant counting by taking 146 | # each a analyzed here and comparing against all earlier b's. 147 | # another option would be to keep a table of 148 | # size[len(a)*len(b)] of booleans. 149 | # or, just accept this bug as something that weights multiple 150 | # matches in a row even higher. 151 | 152 | return skews 153 | 154 | 155 | def match(a, b): 156 | """ Given two entries, determines whether 157 | they match. For now, only handles pure status messages. 158 | """ 159 | if a["info"]["state_code"] == b["info"]["state_code"]: 160 | return True 161 | return False 162 | 163 | 164 | def in_skews(t, skews): 165 | """ If this entry is not close in value 166 | to an existing entry in skews, return None. 167 | If it is close in value to an existing entry, 168 | return the key for that entry. 169 | """ 170 | for skew in skews: 171 | if abs(int(skew) - t) < 2: 172 | return skew 173 | return None 174 | 175 | 176 | def timedelta_to_int(td): 177 | """ Takes a timedelta and converts it 178 | to a single string that represents its value 179 | in seconds. Returns a string. 180 | """ 181 | # because MongoDB cannot store timedeltas 182 | sec = 0 183 | t = abs(td) 184 | sec += t.seconds 185 | sec += (86400 * t.days) 186 | if td < timedelta(0): 187 | sec = -sec 188 | return sec 189 | 190 | 191 | def clock_skew_doc(num): 192 | """ Create and return an empty clock skew doc 193 | for this server. 194 | """ 195 | doc = {} 196 | doc["server_num"] = num 197 | doc["type"] = "clock_skew" 198 | doc["partners"] = {} 199 | return doc 200 | -------------------------------------------------------------------------------- /edda/post/event_matchup.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 
5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | 18 | import logging 19 | 20 | from datetime import timedelta 21 | from edda.supporting_methods import * 22 | from operator import itemgetter 23 | 24 | LOGGER = logging.getLogger(__name__) 25 | 26 | 27 | def event_matchup(db, coll_name): 28 | """This method sorts through the db's entries to 29 | find discrete events that happen across servers. It will 30 | organize these entries into a list of "events", which are 31 | each a dictionary built as follows: 32 | event = { 33 | "type" = type of event, see list below 34 | "date" = datetime, as string 35 | "target" = affected server 36 | "witnesses" = servers who agree on this event 37 | "dissenters" = servers who did not see this event 38 | (not for connection or sync messages) 39 | "log_line" = original log message (one from the witnesses) 40 | "summary" = a mnemonic summary of the event 41 | (event-specific additional fields:) 42 | "conn_addr" = for new_conn or end_conn messages 43 | "conn_num" = for new_conn or end_conn messages 44 | "from_shard" = for chunk migrations 45 | "to_shard" = for chunk migrations 46 | "state" = for status type messages (label, not code) 47 | "sync_to" = for sync type messages 48 | } 49 | 50 | possible event types include: 51 | 52 | "exit" : a replica set is exiting 53 | "end_conn" : end a user connection 54 | "lock" : a server requests to lock itself from writes 55 | "new_conn" : new user connections 56 | "reconfig" : new config information was received 57 | "restart" : a server was restarted 58 | "stale" : a secondary is going stale 59 | "status" : a status change for a server 60 | "sync" : a new sync pattern for a server 61 | "unlock" : a server requests to unlock itself from writes 62 | 63 | This module assumes that normal network delay can account 64 | for up to 2 second of lag between server logs. Beyond this 65 | margin, module assumes that servers are no longer in sync. 66 | """ 67 | # put events in ordered lists by date, one per origin_server 68 | # last communication with the db! 69 | entries = organize_servers(db, coll_name) 70 | events = [] 71 | 72 | server_coll = db[coll_name + ".servers"] 73 | server_nums = server_coll.distinct("server_num") 74 | 75 | # make events 76 | while(True): 77 | event = next_event(server_nums, entries, db, coll_name) 78 | if not event: 79 | break 80 | events.append(event) 81 | 82 | # attempt to resolve any undetected skew in events 83 | events = resolve_dissenters(events) 84 | return events 85 | 86 | 87 | def next_event(servers, server_entries, db, coll_name): 88 | """Given lists of entries from servers ordered by date, 89 | and a list of server numbers, finds a new event 90 | and returns it. Returns None if out of entries""" 91 | # NOTE: this method makes no attempt to adjust for clock skew, 92 | # only normal network delay. 
93 | # find the first entry from any server 94 | 95 | # these are messages that do not involve 96 | # corresponding messages across servers 97 | loners = ["conn", "fsync", "sync", "stale", "init"] 98 | 99 | first_server = None 100 | for s in servers: 101 | if not server_entries[s]: 102 | continue 103 | if (first_server and 104 | server_entries[s][0]["date"] > 105 | server_entries[first_server][0]["date"]): 106 | continue 107 | first_server = s 108 | if not first_server: 109 | LOGGER.debug("No more entries in queue, returning") 110 | return None 111 | 112 | first = server_entries[first_server].pop(0) 113 | 114 | servers_coll = db[coll_name + ".servers"] 115 | event = {} 116 | event["witnesses"] = [] 117 | event["dissenters"] = [] 118 | 119 | # get and use server number for the target 120 | if first["info"]["server"] == "self": 121 | event["target"] = str(first["origin_server"]) 122 | else: 123 | event["target"] = get_server_num(first["info"]["server"], 124 | False, servers_coll) 125 | # define other event fields 126 | event["type"] = first["type"] 127 | event["date"] = first["date"] 128 | 129 | LOGGER.debug("Handling event of type {0} with" 130 | "target {1}".format(event["type"], event["target"])) 131 | 132 | # some messages need specific fields set: 133 | # status events 134 | if event["type"] == "status": 135 | event["state"] = first["info"]["state"] 136 | 137 | # server restarts 138 | elif event["type"] == "restart": 139 | print "setting state to DOWN for RESTART" 140 | event["type"] = "status" 141 | event["state"] = "DOWN" 142 | 143 | # init events, for mongos 144 | elif event["type"] == "init" and first["info"]["type"] == "mongos": 145 | # make this a status event, and make the state "MONGOS-UP" 146 | event["type"] = "status" 147 | event["state"] = "MONGOS-UP" 148 | 149 | # chunk migrations 150 | elif (event["type"] == "migration" 151 | or event["type"] == "commit_migration" 152 | or event["type"] == "abort_migration"): 153 | print first 154 | event["from_shard"] = first["info"]["from_shard"] 155 | event["to_shard"] = first["info"]["to_shard"] 156 | 157 | # exit messages 158 | elif event["type"] == "exit": 159 | event["state"] = "DOWN" 160 | 161 | # locking messages 162 | elif event["type"] == "fsync": 163 | event["type"] = first["info"]["state"] 164 | 165 | # sync events 166 | elif event["type"] == "sync": 167 | # must have a server number for this server 168 | num = get_server_num(first["info"]["sync_server"], False, servers_coll) 169 | event["sync_to"] = num 170 | 171 | # conn messages 172 | elif first["type"] == "conn": 173 | event["type"] = first["info"]["subtype"] 174 | event["conn_addr"] = first["info"]["conn_addr"] 175 | event["conn_number"] = first["info"]["conn_number"] 176 | 177 | # get a hostname 178 | label = "" 179 | num, self_name, network_name = name_me(event["target"], servers_coll) 180 | if self_name: 181 | label = self_name 182 | elif network_name: 183 | label = network_name 184 | else: 185 | label = event["target"] 186 | 187 | event["summary"] = generate_summary(event, label) 188 | event["log_line"] = first["log_line"] 189 | 190 | # handle corresponding messages 191 | event["witnesses"].append(first["origin_server"]) 192 | if not first["type"] in loners: 193 | event = get_corresponding_events(servers, server_entries, 194 | event, first, servers_coll) 195 | return event 196 | 197 | 198 | def get_corresponding_events(servers, server_entries, 199 | event, first, servers_coll): 200 | """Given a list of server names and entries 201 | organized by server, find all events 
that correspond to 202 | this one and combine them""" 203 | delay = timedelta(seconds=1) 204 | 205 | # find corresponding messages 206 | for s in servers: 207 | add = False 208 | add_entry = None 209 | if s == first["origin_server"]: 210 | continue 211 | for entry in server_entries[s]: 212 | if abs(entry["date"] - event["date"]) > delay: 213 | break 214 | type = type_check(first, entry) 215 | if not type: 216 | continue 217 | if not target_server_match(entry, first, servers_coll): 218 | continue 219 | # match found! 220 | event["type"] = type 221 | add = True 222 | add_entry = entry 223 | if add: 224 | server_entries[s].remove(add_entry) 225 | event["witnesses"].append(s) 226 | if not add: 227 | LOGGER.debug("No matches found for server {0}," 228 | "adding to dissenters".format(s)) 229 | event["dissenters"].append(s) 230 | return event 231 | 232 | 233 | def type_check(entry_a, entry_b): 234 | """Given two .entries documents, perform checks specific to 235 | their type to see if they refer to corresponding events 236 | """ 237 | 238 | if entry_a["type"] == entry_b["type"]: 239 | if entry_a["type"] == "status": 240 | if entry_a["info"]["state"] != entry_b["info"]["state"]: 241 | return None 242 | return entry_a["type"] 243 | 244 | # handle exit messages carefully 245 | # if exit and down messages, save as "exit" type 246 | if entry_a["type"] == "exit" and entry_b["type"] == "status": 247 | if entry_b["info"]["state"] == "DOWN": 248 | return "exit" 249 | elif entry_b["type"] == "exit" and entry_a["type"] == "status": 250 | if entry_a["info"]["state"] == "DOWN": 251 | return "exit" 252 | return None 253 | 254 | 255 | def target_server_match(entry_a, entry_b, servers): 256 | """Given two .entries documents, are they talking about the 257 | same sever? (these should never be from the same 258 | origin_server) Return True or False. 
259 | 260 | Side effect: may update servers entries in the db.""" 261 | 262 | target_a = entry_a["info"]["server"] 263 | target_b = entry_b["info"]["server"] 264 | 265 | if target_a == "self" and target_b == "self": 266 | # a and b are both talking about themselves 267 | return False 268 | 269 | a_doc = servers.find_one({"server_num": entry_a["origin_server"]}) 270 | b_doc = servers.find_one({"server_num": entry_b["origin_server"]}) 271 | 272 | if target_a == "self" and target_b == a_doc["network_name"]: 273 | # a is talking about itself, and b is talking about a 274 | return True 275 | 276 | if target_b == "self" and target_a == b_doc["network_name"]: 277 | # b is talking about itself, and a is talking about b 278 | return True 279 | 280 | # If one target is talking about itself and it doesn't have a network name, 281 | # assume that these entries match, and name the unnamed server 282 | if target_a == "self": 283 | return check_and_assign(target_b, a_doc, servers) 284 | 285 | if target_b == "self": 286 | return check_and_assign(target_a, b_doc, servers) 287 | 288 | return target_a == target_b 289 | 290 | 291 | def resolve_dissenters(events): 292 | """Goes over the list of events and for each event where 293 | the number of dissenters > the number of witnesses, 294 | attempts to match that event to another corresponding 295 | event outside the margin of allowable network delay""" 296 | # useful for cases with undetected clock skew 297 | LOGGER.info("--------------------------------" 298 | "Attempting to resolve dissenters" 299 | "--------------------------------") 300 | for a in events[:]: 301 | if len(a["dissenters"]) >= len(a["witnesses"]): 302 | events_b = events[:] 303 | for b in events_b: 304 | if a["summary"] == b["summary"]: 305 | for wit_a in a["witnesses"]: 306 | if wit_a in b["witnesses"]: 307 | break 308 | else: 309 | LOGGER.debug("Corresponding, " 310 | "clock-skewed events found, merging events") 311 | LOGGER.debug("skew is {0}".format(a["date"] - b["date"])) 312 | events.remove(a) 313 | # resolve witnesses and dissenters lists 314 | for wit_a in a["witnesses"]: 315 | b["witnesses"].append(wit_a) 316 | if wit_a in b["dissenters"]: 317 | b["dissenters"].remove(wit_a) 318 | # we've already found a match, stop looking 319 | break 320 | LOGGER.debug("Match not found for this event") 321 | continue 322 | return events 323 | 324 | 325 | def generate_summary(event, hostname): 326 | """Given an event, generates and returns a one-line, 327 | mnemonic summary for that event 328 | """ 329 | # for reconfig messages 330 | if event["type"] == "reconfig": 331 | return "All servers received a reconfig message" 332 | 333 | summary = hostname 334 | 335 | # for status messages 336 | if event["type"] == "status": 337 | summary += " is now " + event["state"] 338 | #if event["state"] == "ARBITER": 339 | 340 | # for connection messages 341 | elif (event["type"].find("conn") >= 0): 342 | if event["type"] == "new_conn": 343 | summary += " opened connection #" 344 | elif event["type"] == "end_conn": 345 | summary += " closed connection #" 346 | summary += event["conn_number"] + " to user " + event["conn_addr"] 347 | 348 | # for exit messages 349 | elif event["type"] == "exit": 350 | summary += " is now exiting" 351 | 352 | # for locking messages 353 | elif event["type"] == "UNLOCKED": 354 | summary += " is unlocking itself" 355 | elif event["type"] == "LOCKED": 356 | summary += " is locking itself" 357 | elif event["type"] == "FSYNC": 358 | summary += " is in FSYNC" 359 | 360 | # for stale messages
361 | elif event["type"] == "stale": 362 | summary += " is going stale" 363 | 364 | # for syncing messages 365 | elif event["type"] == "sync": 366 | summary += " is syncing to " + event["sync_to"] 367 | 368 | # for any uncaught messages 369 | else: 370 | summary += " is reporting status " + event["type"] 371 | 372 | return summary 373 | 374 | 375 | def organize_servers(db, collName): 376 | """Organizes entries from .entries collection into lists 377 | sorted by date, one per origin server, as follows: 378 | { "server1" : [doc1, doc2, doc3...]} 379 | { "server2" : [doc1, doc2, doc3...]} and 380 | returns these lists in one dictionary, with the server- 381 | specific lists keyed by server_num""" 382 | servers_list = {} 383 | 384 | entries = db[collName + ".entries"] 385 | servers = db[collName + ".servers"] 386 | 387 | for server in servers.find(): 388 | num = server["server_num"] 389 | servers_list[num] = sorted(list(entries.find({"origin_server": num})), key=itemgetter("date")) 390 | 391 | return servers_list 392 | 393 | 394 | def check_and_assign(network_name, doc, servers): 395 | if doc["network_name"] == "unknown": 396 | LOGGER.info("Assigning network name {0} to server {1}".format(network_name, doc["self_name"])) 397 | doc["network_name"] = network_name 398 | servers.save(doc) 399 | return True 400 | return False 401 | -------------------------------------------------------------------------------- /edda/post/replace_clock_skew.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # anatomy of a clock skew document: 16 | # document = { 17 | # "type" = "clock_skew" 18 | # "server_name" = "name" 19 | # "partners" = { 20 | # server_name : 21 | # [time_delay1 : weight1, time_delay2 : weight2, ...] 22 | # } 23 | # } 24 | 25 | from pymongo import * 26 | import logging 27 | from datetime import timedelta 28 | 29 | 30 | def replace_clock_skew(db, collName): 31 | logger = logging.getLogger(__name__) 32 | fixed_servers = {} 33 | first = True 34 | """Using clock skew values that we have received from the 35 | clock skew method, fixes these values in the 36 | original DB, (.entries).""" 37 | entries = db[collName + ".entries"] 38 | clock_skew = db[collName + ".clock_skew"] 39 | servers = db[collName + ".servers"] 40 | logger.debug("\n------------List of Collections------------" 41 | "\n{0}".format(db.collection_names())) 42 | 43 | for doc in clock_skew.find(): 44 | #if !doc["name"] in fixed_servers: 45 | logger.debug("---------------Start of first Loop----------------") 46 | if first: 47 | fixed_servers[doc["server_num"]] = 0 48 | first = False 49 | logger.debug("Our supreme leader is: {0}".format( 50 | doc["server_num"])) 51 | for server_num in doc["partners"]: 52 | if server_num in fixed_servers: 53 | logger.debug("Server name already in list of fixed servers. 
" 54 | "EXITING: {}".format(server_num)) 55 | logger.debug("---------------------------------------------\n") 56 | continue 57 | 58 | #could potentially use this 59 | largest_weight = 0 60 | largest_time = 0 61 | logger.debug("Server Name is: {0}".format(server_num)) 62 | 63 | for skew in doc["partners"][server_num]: 64 | weight = doc["partners"][server_num][skew] 65 | logger.debug("Skew Weight is: {0}".format(weight)) 66 | 67 | if abs(weight) > largest_weight: 68 | largest_weight = weight 69 | logger.debug("Skew value on list: {}".format(skew)) 70 | largest_time = int(skew) 71 | #int(doc["partners"][server_name][skew]) 72 | 73 | adjustment_value = largest_time 74 | logger.debug("Skew value: {}".format(largest_time)) 75 | adjustment_value += fixed_servers[doc["server_num"]] 76 | logger.debug("Strung server name: {}".format(doc["server_num"])) 77 | 78 | logger.debug("Adjustment Value: {0}".format(adjustment_value)) 79 | #weight = doc["partners"][server_num][skew] 80 | logger.debug("Server is added to list of fixed servers: {}") 81 | fixed_servers[server_num 82 | ] = adjustment_value + fixed_servers[doc["server_num"]] 83 | logger.debug("Officially adding: {0} to fixed " 84 | "servers".format(server_num)) 85 | 86 | cursor = entries.find({"origin_server": server_num}) 87 | if not cursor: 88 | continue 89 | 90 | for entry in cursor: 91 | logger.debug('Entry adjusted from: {0}'.format(entry["date"])) 92 | entry["adjusted_date" 93 | ] = entry["date"] + timedelta(seconds=adjustment_value) 94 | 95 | entries.save(entry) 96 | logger.debug("Entry adjusted to: {0}" 97 | "".format(entry["adjusted_date"])) 98 | logger.debug(entry["origin_server"]) 99 | -------------------------------------------------------------------------------- /edda/post/server_matchup.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | import logging 17 | 18 | from copy import deepcopy 19 | from edda.supporting_methods import * 20 | 21 | LOGGER = logging.getLogger(__name__) 22 | 23 | 24 | def address_matchup(db, coll_name, hint): 25 | """Runs an algorithm to match servers with their 26 | corresponding hostnames/IP addresses. The algorithm works as follows, 27 | using replica set status messages from the logs to find addresses: 28 | 29 | - Make a list, mentioned_names of all the IPs being talked about; 30 | these must be all the servers in the network. 31 | - For each server (S) in the collName.servers collection, if it 32 | already has been matched to an IP address and hostname, remove 33 | these addresses from mentioned_names. Move to next server. 
34 | - Else, make a list of the addresses (S) mentions, neighbors_of_s 35 | - If (S) has a known IP address or hostname: 36 | (stronger algorithm) 37 | - find all neighbors of (S) and the addresses 38 | they mention (their neighbors) 39 | - Make a list of addresses that ALL neighbors of (S) 40 | mention, neighbor_neighbors 41 | - By process of elimination between neighbors_of_s and 42 | neighbors_neighbors, see if there remains one address 43 | in neighbors_neighbors that (S) has not 44 | mentioned in its log entries. This must be (S)'s address. 45 | Remove this address from mentioned_names. 46 | - Else (weaker algorithm): 47 | - By process of elimination between neighbors_of_s and 48 | mentioned_names, see if there remains one address in 49 | mentioned_names that (S) has not mentioned in its log entries. 50 | This must be (S)'s address. Remove this address from 51 | mentioned_names. 52 | - Repeat this process until mentioned_names is empty trying 53 | each server round-robin, or until all servers have been unsuccessfully 54 | tried since the last change was made to mentioned_names. 55 | 56 | This algorithm is only sound when the user provides a 57 | log file from every server in the network, and complete when 58 | the network graph was complete, or was a tree (connected and acyclic) 59 | """ 60 | 61 | # find a list of all unnamed servers being talked about 62 | mentioned_names = [] 63 | 64 | servers = db[coll_name + ".servers"] 65 | entries = db[coll_name + ".entries"] 66 | 67 | all_servers_cursor = entries.distinct("info.server") 68 | 69 | LOGGER.info("Attempting to match up all servers:") 70 | for doc in servers.find({}): 71 | LOGGER.info("{0}".format(doc)) 72 | 73 | # use hints, if we have them 74 | if hint: 75 | # split on commas 76 | hints = hint.split(",") 77 | for server_hint in hints: 78 | names = server_hint.split("/") 79 | if len(names) != 2: 80 | LOGGER.warning("Malformed hint, should be /: {0}" 81 | .format(server_hint)); 82 | continue 83 | 84 | # see if we have a server with this self-name 85 | doc = servers.find_one({"self_name": names[0]}) 86 | if doc: 87 | if doc["network_name"] == "unknown": 88 | LOGGER.info("Applying hint {0} to server {1}" 89 | .format(server_hint, doc["self_name"])) 90 | doc["network_name"] = names[1] 91 | servers.save(doc) 92 | continue 93 | LOGGER.info("Found entry for self-name hint {0}, but it already has a network-name" 94 | .format(server_hint)) 95 | 96 | # if not, see if we have a server with this network-name 97 | # TODO: can this actually happen? 
98 | doc = servers.find_one({"network_name": names[1]}) 99 | if doc: 100 | if doc["self_name"] == "unknown": 101 | LOGGER.info("Applying hint {0} to server {1}" 102 | .format(server_hint, doc["network_name"])) 103 | doc["self_name"] = names[0] 104 | servers.save(doc) 105 | continue 106 | LOGGER.info("Found entry for network-name in hint {0}, but it already has a self-name" 107 | .format(server_hint)) 108 | 109 | # if we didn't find a match for our hint, enter it as a new server 110 | LOGGER.info("Adding a new server entry for hint {0}".format(server_hint)) 111 | index = get_server_num(names[0], True, servers) 112 | servers.update({ "server_num" : index }, { "$set" : { "network_name" : names[1] }}) 113 | 114 | # weed out servers that we already have names for 115 | # TODO: this is wildly inefficient 116 | for addr in all_servers_cursor: 117 | if addr == "self": 118 | continue 119 | # if we have already matched this, continue 120 | if servers.find_one({"network_name": addr}): 121 | continue 122 | # if a server's self and network names are the same, set and continue 123 | doc = servers.find_one({"self_name": addr}) 124 | if doc: 125 | doc["network_name"] = addr 126 | servers.save(doc) 127 | continue 128 | # do we have a hint for this server? 129 | # otherwise, we have an unclaimed network name 130 | if not addr in mentioned_names: 131 | mentioned_names.append(addr) 132 | 133 | LOGGER.info("All unclaimed mentioned network names:\n{0}".format(mentioned_names)) 134 | 135 | round = 0 136 | change_this_round = False 137 | while mentioned_names: 138 | round += 1 139 | 140 | # ignore mongos and configsvr 141 | #unknowns = list(servers.find({"network_name": "unknown", "type" : "mongod"})) 142 | unknowns = list(servers.find({"network_name": "unknown"})) 143 | 144 | if len(unknowns) == 0: 145 | LOGGER.debug("All servers have matched-up names, breaking") 146 | break 147 | 148 | for s in unknowns: 149 | 150 | # extract server information 151 | num = s["server_num"] 152 | 153 | # QUESTION: how could network_name be unknown here? We checked above? 154 | name = None 155 | if s["network_name"] != "unknown": 156 | name = s["network_name"] 157 | 158 | # get neighbors of s into list 159 | # (these are servers s mentions) 160 | c = list(entries.find({"origin_server": num}) 161 | .distinct("info.server")) 162 | LOGGER.debug("Found {0} neighbors of (S)".format(len(c))) 163 | neighbors_of_s = [] 164 | for entry in c: 165 | if entry != "self": 166 | neighbors_of_s.append(entry) 167 | 168 | # if possible, make a list of servers who mention s 169 | # and then, the servers they in turn mention 170 | # (stronger algorithm) 171 | if name: 172 | # TODO: refactor this into a function 173 | LOGGER.debug("Server (S) is named! 
Running stronger algorithm") 174 | LOGGER.debug( 175 | "finding neighbors of (S) referring to name {0}".format(name)) 176 | neighbors_neighbors = [] 177 | neighbors = list(entries.find( 178 | {"info.server": name}).distinct("origin_server")) 179 | 180 | # for each server that mentions s 181 | for n_addr in neighbors: 182 | LOGGER.debug("Find neighbors of (S)'s neighbor, {0}" 183 | .format(n_addr)) 184 | n_num, n_self_name, n_net_name = name_me(n_addr, servers) 185 | if n_num: 186 | n_addrs = list(entries.find( 187 | {"origin_server": n_num}).distinct("info.server")) 188 | if not neighbors_neighbors: 189 | # n_addr: the server name 190 | # n_addrs: the names that n_addr mentions 191 | for addr in n_addrs: 192 | if addr != "self": 193 | neighbors_neighbors.append(addr) 194 | else: 195 | n_n_copy = deepcopy(neighbors_neighbors) 196 | neighbors_neighbors = [] 197 | for addr in n_addrs: 198 | if addr in n_n_copy: 199 | neighbors_neighbors.append(addr) 200 | else: 201 | LOGGER.debug( 202 | "Unable to find server number for server {0}, skipping" 203 | .format(n_addr)) 204 | LOGGER.debug( 205 | "Examining for match:\n{0}\n{1}" 206 | .format(neighbors_of_s, neighbors_neighbors)) 207 | match = eliminate(neighbors_of_s, neighbors_neighbors) 208 | if not match: 209 | # (try weaker algorithm anyway, it catches some cases) 210 | LOGGER.debug( 211 | "No match found using strong algorith, running weak algorithm") 212 | match = eliminate(neighbors_of_s, mentioned_names) 213 | else: 214 | # (weaker algorithm) 215 | # is there one server that is mentioned by all that is NOT mentioned by S? 216 | LOGGER.debug( 217 | "Server {0} is unnamed. Running weaker algorithm" 218 | .format(num)) 219 | LOGGER.debug( 220 | "Examining for match:\n{0}\n{1}" 221 | .format(neighbors_of_s, mentioned_names)) 222 | match = eliminate(neighbors_of_s, mentioned_names) 223 | 224 | LOGGER.debug("match: {0}".format(match)) 225 | if match: 226 | change_this_round = True 227 | mentioned_names.remove(match) 228 | LOGGER.debug("Network name {0} matched to server {1}" 229 | .format(match, num)) 230 | assign_address(num, match, False, servers) 231 | else: 232 | LOGGER.debug("No match found for server {0} this round" 233 | .format(num)) 234 | 235 | # break if we've exhausted algorithm 236 | if not change_this_round: 237 | LOGGER.debug("Algorithm exhausted, breaking") 238 | break; 239 | 240 | break 241 | 242 | 243 | LOGGER.info("Servers after address matchup:") 244 | for doc in servers.find({}): 245 | LOGGER.info("{0}".format(doc)) 246 | 247 | if not mentioned_names: 248 | # for edda to succeed, it needs to match logs to servers 249 | # so, all servers must have a network name. 250 | s = list(servers.find({"network_name": "unknown"})) 251 | if len(s) == 0: 252 | LOGGER.debug("Successfully named all unnamed servers!") 253 | return 1 254 | LOGGER.critical( 255 | "Exhausted mentioned_names, but {0} servers remain unnamed" 256 | .format(len(s))) 257 | return -1 258 | 259 | LOGGER.critical( 260 | "Could not match {0} addresses: {1}" 261 | .format(len(mentioned_names), mentioned_names)) 262 | return -1 263 | 264 | 265 | def eliminate(small, big): 266 | """See if, by process of elimination, 267 | there is exactly one entry in big that 268 | is not in small. Return that entry, or None. 
269 | """ 270 | 271 | # big list must have exactly one entry more than small list 272 | if (len(small) + 1) != len(big): 273 | return None 274 | 275 | if len(big) == 1: 276 | return big[0] 277 | 278 | # make copies of lists, because they are mutable 279 | # and changes made here will alter the lists outside 280 | s = deepcopy(small) 281 | b = deepcopy(big) 282 | 283 | for addr in s: 284 | if addr in b: 285 | b.remove(addr) 286 | 287 | if len(b) == 1: 288 | return b.pop() 289 | 290 | return None 291 | -------------------------------------------------------------------------------- /edda/supporting_methods.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | #!/usr/bin/env python 16 | 17 | import logging 18 | import re 19 | 20 | from datetime import datetime 21 | 22 | # global variables 23 | ADDRESS = re.compile("\S+:[0-9]{1,5}") 24 | IP_PATTERN = re.compile("(0|(1?[0-9]{1,2})|(2[0-4][0-9])" 25 | "|(25[0-5]))(\.(0|(1?[0-9]{1,2})" 26 | "|(2[0-4][0-9])|(25[0-5]))){3}") 27 | MONTHS = { 28 | 'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 29 | 'May': 5, 'Jun': 6, 'Jul': 7, 'Aug': 8, 30 | 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12 31 | } 32 | WEEKDAYS = { 33 | 'Mon': 0, 'Tue': 1, 'Wed': 2, 'Thu': 3, 'Fri': 4, 'Sat': 5, 'Sun': 6 34 | } 35 | 36 | 37 | def capture_address(msg): 38 | """Given a message, extracts and returns the address, 39 | be it a hostname or an IP address, in the form 40 | 'address:port#' 41 | """ 42 | # capture the address, be it hostname or IP 43 | m = ADDRESS.search(msg[20:]) # skip date field 44 | if not m: 45 | return None 46 | return m.group(0) 47 | 48 | 49 | def is_IP(s): 50 | """Returns true if s contains an IP address, false otherwise. 51 | """ 52 | # note: this message will return True for strings that 53 | # have more than 4 dotted groups of numbers (like 1.2.3.4.5) 54 | return not (IP_PATTERN.search(s) == None) 55 | 56 | 57 | def add_shard(doc, config): 58 | """Create a document for this shard in the config collection. 59 | If one already exists, update it. 60 | """ 61 | existing_doc = config.find_one({ "replSet" : doc["replSet"] }) 62 | if not existing_doc: 63 | config.save({ "replSet" : doc["replSet"], 64 | "members" : doc["members"], 65 | "member_nums" : doc["member_nums"]}) 66 | return 67 | # else, make sure that all the members we have are in this doc. 68 | # Do not remove members from the doc, just add them. 69 | for m in doc["members"]: 70 | if m not in existing_doc["members"]: 71 | existing_doc["members"].append(m) 72 | for n in doc["member_nums"]: 73 | if n not in existing_doc["member_nums"]: 74 | existing_doc["member_nums"].append(n) 75 | config.save(existing_doc) 76 | 77 | 78 | def assign_server_type(num, server_type, servers): 79 | """Set the server type of this server to specified type. 
80 | """ 81 | doc = servers.update({ "server_num" : num }, 82 | { "$set" : { "type" : server_type }}) 83 | 84 | def server_type(num, servers): 85 | doc = servers.find({ "server_num" : num }) 86 | if not "type" in doc: 87 | return "unknown" 88 | return doc["type"] 89 | 90 | 91 | def get_server_num(addr, self_name, servers): 92 | """Gets and returns a server_num for an 93 | existing .servers entry with 'addr', or creates a new .servers 94 | entry and returns the new server_num, as a string. If 95 | 'addr' is 'unknown', assume this is a new server and return 96 | a new number. 97 | """ 98 | logger = logging.getLogger(__name__) 99 | num = None 100 | addr = addr.replace('\n', "") 101 | addr = addr.replace(" ", "") 102 | 103 | if addr != "unknown": 104 | if self_name: 105 | num = servers.find_one({"self_name": addr}) 106 | if not num: 107 | num = servers.find_one({"network_name": addr}) 108 | if num: 109 | logger.debug("Found server number {0} for address {1}" 110 | .format(num["server_num"], addr)) 111 | return str(num["server_num"]) 112 | 113 | # no .servers entry found for this target, make a new one 114 | # make sure that we do not overwrite an existing server's index 115 | for i in range(1, 500): 116 | if not servers.find_one({"server_num": str(i)}): 117 | logger.info("Adding {0} to the .servers collection with server_num {1}" 118 | .format(addr, i)) 119 | assign_address(str(i), addr, self_name, servers) 120 | return str(i) 121 | logger.critical("Ran out of server numbers!") 122 | 123 | 124 | def update_mongo_version(version, server_num, servers): 125 | doc = servers.find_one({"server_num": server_num}) 126 | if not doc: 127 | return 128 | if doc["version"] != version or doc["version"] == "unknown": 129 | doc["version"] = version 130 | servers.save(doc) 131 | 132 | 133 | def name_me(s, servers): 134 | """Given a string s (which can be a server_num, 135 | server_name, or server_IP), method returns all info known 136 | about the server in a tuple [server_num, self_name, network_name] 137 | """ 138 | s = str(s) 139 | s = s.replace('\n', "") 140 | s = s.replace(" ", "") 141 | self_name = None 142 | network_name = None 143 | num = None 144 | docs = [] 145 | docs.append(servers.find_one({"server_num": s})) 146 | docs.append(servers.find_one({"self_name": s})) 147 | docs.append(servers.find_one({"network_name": s})) 148 | for doc in docs: 149 | if not doc: 150 | continue 151 | if doc["self_name"] != "unknown": 152 | self_name = doc["self_name"] 153 | if doc["network_name"] != "unknown": 154 | network_name = doc["network_name"] 155 | num = doc["server_num"] 156 | return [num, self_name, network_name] 157 | 158 | 159 | def assign_address(num, addr, self_name, servers): 160 | """Given this num and addr, sees if there exists a document 161 | in the .servers collection for that server. If so, adds addr, if 162 | not already present, to the document. If not, creates a new doc 163 | for this server and saves to the db. 'self_name' is either True 164 | or False, and indicates whether addr is a self_name or a 165 | network_name. 166 | """ 167 | # in the case that multiple addresses are found for the 168 | # same server, we choose to log a warning and ignore 169 | # all but the first address found. 
We will 170 | # store all fields as strings, including server_num 171 | # server doc = { 172 | # "server_num" : int, as string 173 | # "self_name" : what I call myself 174 | # "network_name" : the name other servers use for me 175 | # } 176 | logger = logging.getLogger(__name__) 177 | 178 | # if "self" is the address, ignore 179 | if addr == "self": 180 | logger.debug("Returning, will not save 'self'") 181 | return 182 | 183 | num = str(num) 184 | addr = str(addr) 185 | addr = addr.replace('\n', "") 186 | doc = servers.find_one({"server_num": num}) 187 | if not doc: 188 | if addr != "unknown": 189 | if self_name: 190 | doc = servers.find_one({"self_name": addr}) 191 | if not doc: 192 | doc = servers.find_one({"network_name": addr}) 193 | if doc: 194 | logger.debug("Entry already exists for server {0}".format(addr)) 195 | return 196 | logger.debug("No doc found for this server, making one") 197 | doc = {} 198 | doc["server_num"] = num 199 | doc["self_name"] = "unknown" 200 | doc["network_name"] = "unknown" 201 | doc["version"] = "unknown" 202 | else: 203 | logger.debug("Fetching existing doc for server {0}".format(num)) 204 | # NOTE: case insensitive! 205 | if self_name: 206 | if (doc["self_name"] != "unknown" and 207 | doc["self_name"].lower() != addr.lower()): 208 | logger.warning("conflicting self_names found for server {0}:".format(num)) 209 | logger.warning("\n{0}\n{1}".format(repr(addr), repr(doc["self_name"]))) 210 | else: 211 | doc["self_name"] = addr 212 | else: 213 | if (doc["network_name"] != "unknown" and 214 | doc["network_name"].lower() != addr.lower()): 215 | logger.warning("conflicting network names found for server {0}:".format(num)) 216 | logger.warning("\n{0}\n{1}".format(repr(addr), repr(doc["network_name"]))) 217 | else: 218 | doc["network_name"] = addr 219 | logger.debug("I am saving {0} to the .servers collection".format(doc)) 220 | servers.save(doc) 221 | 222 | 223 | def date_parser(msg): 224 | """extracts the date information from the given line. If 225 | line contains incomplete or no date information, skip 226 | and return None.""" 227 | try: 228 | # 2.6 logs begin with the year 229 | if msg[0:2] == "20": 230 | return datetime.strptime(msg[0:19], "%Y-%m-%dT%H:%M:%S") 231 | # for older logs 232 | return old_style_log_date(msg) 233 | except (KeyError, ValueError): 234 | return None 235 | 236 | 237 | def has_same_weekday(log_day, log_month, log_weekday, test_year): 238 | """If this log message's date occurred on the same weekday 239 | as this year, return true. 240 | """ 241 | test_date = datetime(test_year, log_month, log_day) 242 | return test_date.weekday() == log_weekday 243 | 244 | 245 | def guess_log_year(log_day, log_month, log_weekday): 246 | """Guess the year in which this log line was created.""" 247 | # Pre-2.6 servers do not record the year in their logs. 248 | # 249 | # Beginning in the current year, compare the date in this year 250 | # to the date in this log message. If the weekday is the same for 251 | # both, assume that this is the correct year. 252 | # 253 | # Note: because of MongoDB's relatively short lifespan, this 254 | # algorithm should be correct with the exception that dates in 255 | # 2014 have the same weekdays as in 2008. Going forward, we will have 256 | # such conflicts between any two years that are six years apart. 257 | # In these cases, we choose the more recent year. 258 | 259 | # for years between now and 2008: 260 | # if has_same_weekday, return year. 
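    # For example: a pre-2.6 line stamped "Wed Aug  1 12:34:56" and parsed
    # during 2012 is assigned the year 2012, since August 1, 2012 fell on a
    # Wednesday; parsed in a later year, the most recent year whose August 1
    # also falls on a Wednesday (2018, for instance) is chosen instead.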
261 | current_year = datetime.now().year 262 | for y in range(current_year, 2008, -1): 263 | if has_same_weekday(log_day, log_month, log_weekday, y): 264 | return y 265 | return current_year 266 | 267 | 268 | def parse_old_style_log_date(msg): 269 | """Return the date in this log line as a dictionary.""" 270 | # Note: we cannot use strptime here to process the date, because 271 | # we do not have a year to work with. Strptime, given a day, month, 272 | # and weekday, will ignore the weekday, set itself to the year 1900, 273 | # and use whatever weekday the "day"th of "month" was in 1900. 274 | date = { "day" : int(msg[8:10]), 275 | "month" : MONTHS[msg[4:7]], 276 | "weekday" : WEEKDAYS[msg[0:3]] } 277 | return date 278 | 279 | 280 | def old_style_log_date(msg): 281 | """Return a datetime object for this log line.""" 282 | date = parse_old_style_log_date(msg) 283 | proper_year = guess_log_year(date["day"], date["month"], date["weekday"]) 284 | return datetime(proper_year, date["month"], date["day"], 285 | int(msg[11:13]), int(msg[14:16]), int(msg[17:19])) 286 | -------------------------------------------------------------------------------- /edda/ui/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2009-2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import os 16 | import re 17 | 18 | __all__ = [] 19 | dirList = os.listdir(os.path.dirname(os.path.abspath(__file__))) 20 | pattern = re.compile(".py$") 21 | 22 | for d in dirList: 23 | # ignore anything that isn't strictly .py 24 | m = pattern.search(d) 25 | if (m != None): 26 | d = d[0:len(d) - 3] 27 | __all__.append(d) 28 | -------------------------------------------------------------------------------- /edda/ui/connection.py: -------------------------------------------------------------------------------- 1 | # Copyright 2009-2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | #!/usr/bin/env python 16 | 17 | import os 18 | from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer 19 | 20 | import socket 21 | import webbrowser 22 | 23 | try: 24 | import json 25 | except ImportError: 26 | import simplejson as json 27 | import threading 28 | 29 | data = None 30 | server_config = None 31 | admin = None 32 | 33 | 34 | def run(http_port): 35 | """Open page and send GET request to server""" 36 | # open the JS page 37 | url = "http://localhost:" + http_port 38 | try: 39 | webbrowser.open(url, 1, True) 40 | except webbrowser.Error as e: 41 | print "Webbrowser failure: Unable to launch webpage:" 42 | print e 43 | print "Enter the following url into a browser to bring up edda:" 44 | print url 45 | # end of thread 46 | 47 | 48 | def send_to_js(frames, info, http_port): 49 | """Sends information to the JavaScript 50 | client""" 51 | 52 | global data 53 | global admin 54 | 55 | admin = info 56 | data = frames 57 | 58 | # fork here! 59 | t = threading.Thread(target=run(http_port)) 60 | t.start() 61 | # parent starts a server listening on localhost:27080 62 | # child opens page to send GET request to server 63 | # open socket, bind and listen 64 | print " \n" 65 | print "=================================================================" 66 | print "Opening server, kill with Ctrl+C once you are finished with edda." 67 | print "=================================================================" 68 | try: 69 | server = HTTPServer(('', int(http_port)), eddaHTTPRequest) 70 | except socket.error, (value, message): 71 | if value == 98: 72 | print "Error: could not bind to localhost:28018" 73 | else: 74 | print message 75 | return 76 | try: 77 | server.serve_forever() 78 | except KeyboardInterrupt: 79 | server.socket.close() 80 | # return upon completion 81 | print "Done serving, exiting" 82 | return 83 | 84 | 85 | class eddaHTTPRequest(BaseHTTPRequestHandler): 86 | 87 | mimetypes = mimetypes = {"html": "text/html", 88 | "htm": "text/html", 89 | "gif": "image/gif", 90 | "jpg": "image/jpeg", 91 | "png": "image/png", 92 | "json": "application/json", 93 | "css": "text/css", 94 | "js": "text/javascript", 95 | "ico": "image/vnd.microsoft.icon"} 96 | 97 | docroot = str(os.path.dirname(os.path.abspath(__file__))) 98 | docroot += "/display/" 99 | 100 | def process_uri(self, method): 101 | """Process the uri""" 102 | if method == "GET": 103 | (uri, q, args) = self.path.partition('?') 104 | else: 105 | return 106 | 107 | uri = uri.strip('/') 108 | 109 | # default "/" to "edda.html" 110 | if len(uri) == 0: 111 | uri = "edda.html" 112 | 113 | # find type of file 114 | (temp, dot, file_type) = uri.rpartition('.') 115 | if len(dot) == 0: 116 | file_type = "" 117 | 118 | return (uri, args, file_type) 119 | 120 | def do_GET(self): 121 | # do nothing with message 122 | # return data 123 | (uri, args, file_type) = self.process_uri("GET") 124 | 125 | if len(file_type) == 0: 126 | return 127 | 128 | if file_type == "admin": 129 | #admin = {} 130 | admin["total_frame_count"] = len(data) 131 | self.send_response(200) 132 | self.send_header("Content-type", 'application/json') 133 | self.end_headers(); 134 | self.wfile.write(json.dumps(admin)) 135 | 136 | elif file_type == "all_frames": 137 | self.wfile.write(json.dumps(data)) 138 | 139 | # format of a batch request is 140 | # 'start-end.batch' 141 | elif file_type == "batch": 142 | uri = uri[:len(uri) - 6] 143 | parts = uri.partition("-") 144 | try: 145 | start = int(parts[0]) 146 | except ValueError: 147 | start = 0 148 | try: 149 | end = 
int(parts[2]) 150 | except ValueError: 151 | end = 0 152 | batch = {} 153 | 154 | # check for entries out of range 155 | if end < 0: 156 | return 157 | if start < 0: 158 | start = 0; 159 | if start >= len(data): 160 | print "start is past data" 161 | return 162 | if end >= len(data): 163 | end = len(data) 164 | 165 | for i in range(start, end): 166 | if not str(i) in data: 167 | break 168 | batch[str(i)] = data[str(i)]; 169 | 170 | self.send_response(200) 171 | self.send_header("Content-type", 'application/json') 172 | self.end_headers() 173 | self.wfile.write(json.dumps(batch)) 174 | 175 | elif file_type in self.mimetypes and os.path.exists(self.docroot + uri): 176 | f = open(self.docroot + uri, 'r') 177 | 178 | self.send_response(200, 'OK') 179 | self.send_header('Content-type', self.mimetypes[file_type]) 180 | self.end_headers() 181 | self.wfile.write(f.read()) 182 | f.close() 183 | return 184 | 185 | else: 186 | self.send_error(404, 'File Not Found: ' + uri) 187 | return 188 | -------------------------------------------------------------------------------- /edda/ui/display/edda.html: -------------------------------------------------------------------------------- 1 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | Edda 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 |
[The body markup of edda.html is not preserved in this listing; the visible page text in this span consists of the UI labels "Replica Set", "Summary", "Witnesses to Event", "Blind to Event", "Files", and "Topology".]
138 | 139 | 140 | 141 | -------------------------------------------------------------------------------- /edda/ui/display/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mongodb-labs/edda/5acaf9918d9aae6a6acbbfe48741d8a20aae1f21/edda/ui/display/favicon.ico -------------------------------------------------------------------------------- /edda/ui/display/img/texture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mongodb-labs/edda/5acaf9918d9aae6a6acbbfe48741d8a20aae1f21/edda/ui/display/img/texture.png -------------------------------------------------------------------------------- /edda/ui/display/js/arrows.js: -------------------------------------------------------------------------------- 1 | // Copyright 2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | /* 16 | * Draw an arrow going from (x1, y1) to (x2, y2). 17 | */ 18 | var drawOneArrow = function(x1, y1, x2, y2, ctx) { 19 | var dx = Math.abs(x2 - x1); 20 | var dy = Math.abs(y2 - y1); 21 | var line_length = Math.sqrt(dx*dx + dy*dy); 22 | var ratio = .75; 23 | 24 | // move endpoints in along axis 25 | var x_difference = dx * ratio; 26 | var y_difference = dy * ratio; 27 | 28 | // TODO: a better way of doing this. 29 | if (line_length < 100) { 30 | x_difference = dx * .8; 31 | y_difference = dy * .8; 32 | } 33 | 34 | // There are 4 cases for the adjustments that need to be made. 35 | // Arrows going in each diagonal dirrection. 36 | if (x2 > x1) { 37 | if (y2 > y1) { 38 | // Case where arrow originates on the 39 | // bottom left and goes toward the bottom right. 40 | x1 += x_difference; 41 | y1 += y_difference; 42 | 43 | x2 -= x_difference; 44 | y2 -= y_difference; 45 | } 46 | else { 47 | // Case where arrow originates on the 48 | // top left and goes toward the bottom right. 49 | x1 += x_difference; 50 | y1 -= y_difference; 51 | 52 | x2 -= x_difference; 53 | y2 += y_difference; 54 | } 55 | } 56 | else { 57 | if (y2 > y1) { 58 | // Case where arrow originates on the 59 | // bottom right and goes towards the top left 60 | x1 -= x_difference; 61 | y1 += y_difference; 62 | 63 | x2 += x_difference; 64 | y2 -= y_difference; 65 | } 66 | else { 67 | x1 -= x_difference; 68 | y1 -= y_difference; 69 | 70 | x2 += x_difference; 71 | y2 += y_difference; 72 | } 73 | } 74 | 75 | ctx.beginPath(); 76 | ctx.moveTo(x1, y1); 77 | ctx.strokeStyle = "#F0E92F"; 78 | ctx.lineWidth = 3; 79 | ctx.lineTo(x2, y2); 80 | ctx.lineCap = "butt"; 81 | ctx.stroke(); 82 | drawArrowHead(x1, x2, y1, y2, ctx); 83 | }; 84 | 85 | /* 86 | * Render an arrowhhead with its forward tip at (tox, toy) along an 87 | * axis that goes from (fromx, fromy) to (tox, toy). 
88 | */ 89 | var drawArrowHead = function (fromx, tox, fromy, toy, ctx) { 90 | // adapted from http://stackoverflow.com/questions/808826/draw-arrow-on-canvas-tag 91 | ctx.beginPath(); 92 | 93 | var headlen = ICON_RADIUS < 18 ? 8 : 20; // length of arrow head in pixels 94 | var angle = Math.atan2(toy - fromy, tox - fromx); 95 | 96 | ctx.strokeStyle = "#F0E92F"; 97 | ctx.fillStyle = "#F0E92F"; 98 | ctx.lineWidth = 4; 99 | 100 | ctx.moveTo(tox, toy); 101 | ctx.lineTo(tox - headlen*Math.cos(angle-Math.PI/8), 102 | toy - headlen*Math.sin(angle-Math.PI/8)); 103 | ctx.lineTo(tox - headlen*Math.cos(angle+Math.PI/8), 104 | toy - headlen*Math.sin(angle+Math.PI/8)); 105 | ctx.lineTo(tox, toy); 106 | ctx.stroke(); 107 | ctx.fill(); 108 | }; 109 | -------------------------------------------------------------------------------- /edda/ui/display/js/connection.js: -------------------------------------------------------------------------------- 1 | // Copyright 2009-2012 10gen, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | // without the async messing things up? 16 | function connect() { 17 | 18 | // get first batch and first buffer 19 | get_batch(0, batch_size); 20 | frame_bottom = 0; 21 | frame_top = batch_size; 22 | if (frame_top > total_frame_count) { frame_top = total_frame_count; } 23 | 24 | // poll for additional setup data 25 | url_string = document.URL + "data.admin"; 26 | $.ajax({ 27 | async: false, 28 | url: url_string, 29 | dataType: "json", 30 | success: function(data) { 31 | admin = data; 32 | total_frame_count = data["total_frame_count"]; 33 | } 34 | }); 35 | } 36 | 37 | // this function is not finished 38 | function get_batch(a, b) { 39 | // poll for a batch of frames 40 | var s = document.URL + a + "-" + b + ".batch"; 41 | $.ajax({ 42 | async: false, 43 | url: s, 44 | dataType: "json", 45 | success: function(data) { 46 | frames = data; 47 | } 48 | }); 49 | } 50 | -------------------------------------------------------------------------------- /edda/ui/display/js/draw_servers.js: -------------------------------------------------------------------------------- 1 | // Copyright 2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | ICON_RADIUS = 20; 16 | ICON_STROKE = ICON_RADIUS > 18 ? 
12 : 6; 17 | ICONS = { 18 | // state : [ fill color, stroke color, dotted(bool) ] 19 | "PRIMARY" : ["#24B314", "#F0E92F", false], 20 | "SECONDARY" : ["#24B314", "#196E0F", false], 21 | "MONGOS-UP" : ["#DB9EBD", "#C969A8", false], 22 | "ROLLBACK" : ["#3F308F", "#1D0C37", false], 23 | "REMOVED" : ["#1C0E0B", "#9D7E51", true], 24 | "STALE" : ["#6E6134", "#4C4725", false], 25 | "FATAL" : ["#722714", "#722714", false], 26 | "RECOVERING" : ["#3FA5A9", "#2B4E66", false], 27 | "UNKNOWN" : ["#4E3629", "#9D7E51", true], 28 | "SHUNNED" : ["#706300", "403902", false], 29 | "UNDISCOVERED" : ["#4E3629", "#3B1E0B", false], 30 | "STARTUP1" : ["#4E3629", "#E1E2E5", false], 31 | "STARTUP2" : ["#85807D", "#E1E2E5", false], 32 | "ARBITER" : ["#BFB1A4", "#1A1007", false], 33 | "DOWN" : ["#4E3629", "#9D7E51", true], 34 | "MONGOS-DOWN" : ["#4E3629", "#9D7E51", true], 35 | "CONFIG-UP" : ["#706300", "403902", false], 36 | "CONFIG-DOWN" : ["#706300", "403902", false] 37 | }; 38 | 39 | /* Render all server icons for a single point in time */ 40 | var drawServers = function(serverCoordinates, frame, ctx) { 41 | addTopology(serverCoordinates.clusterType, serverCoordinates); 42 | 43 | for (var s in frame.servers) { 44 | if (serverCoordinates.hasOwnProperty(s)) { 45 | var x = serverCoordinates[s]["x"]; 46 | var y = serverCoordinates[s]["y"]; 47 | drawSingleServer( 48 | x, 49 | y, 50 | frame.servers[s], 51 | ctx); 52 | } 53 | } 54 | }; 55 | 56 | /* Render a single server icon for a point in time */ 57 | var drawSingleServer = function(x, y, type, ctx) { 58 | // parse out .LOCKED. 59 | var n = type.split("."); 60 | var state = n[0]; 61 | 62 | // is it dotted? 63 | if (ICONS[state][2]) { 64 | drawCircle(x, y, ICON_RADIUS, ICONS[state][0], ICONS[state][0], ctx); 65 | drawDottedCircle(x, y, ICON_RADIUS, ICONS[state][1], ICON_STROKE, ctx); 66 | } 67 | else { 68 | drawCircle(x, y, ICON_RADIUS, ICONS[state][0], ICONS[state][1], ctx); 69 | } 70 | 71 | // if a special type, add details 72 | if (state == "PRIMARY") drawIconCrown(x, (y - 0.22*ICON_RADIUS), 73 | 0.6*ICON_RADIUS, 0.5*ICON_RADIUS, ctx); 74 | else if (state == "ARBITER") drawIconStripes(x, y, ICON_RADIUS, ctx); 75 | else if (state == "UNKNOWN") drawIconQuestionMark(x, y, ICON_RADIUS, ctx); 76 | 77 | // if we had a lock, add a lock 78 | if (n.length > 1) { 79 | drawIconLock(x, y, ICON_RADIUS, ctx); 80 | } 81 | }; 82 | 83 | function calculateServerCoordinates(frame) { 84 | var server_groups = frame["server_groups"]; 85 | var serverCoordinates = {clusterType: null}; 86 | 87 | // Handle replica sets 88 | if (server_groups.length == 1) { 89 | var group = server_groups[0]; 90 | serverCoordinates.clusterType = group["type"]; 91 | $("#clustername").html(group["name"]); 92 | ICON_RADIUS = 30; 93 | group_info = generateIconCoords( 94 | frame, 95 | group["members"], 96 | CANVAS_W/2, 97 | CANVAS_H/2, 98 | CANVAS_H/2 - 60); 99 | 100 | for (var s in group_info) { 101 | serverCoordinates[s] = group_info[s]; 102 | } 103 | } 104 | // Handle sharded clusters 105 | else { 106 | serverCoordinates.clusterType = "sharded"; 107 | $("#clustername").html("Sharded Cluster"); 108 | ICON_RADIUS = 15; 109 | // how many shards do we have? 110 | var shards = []; 111 | for (var k in server_groups) { 112 | var group = server_groups[k] 113 | if (group["type"] == "replSet") { 114 | rs = { "n" : group["name"] }; 115 | shards.push(rs); 116 | } 117 | } 118 | // get center points for each shard 119 | // these will be in a dictionary with key "name". 
120 | var shard_coords = generateIconCoords( 121 | frame, 122 | shards, 123 | CANVAS_W/2, 124 | CANVAS_H/2, 125 | CANVAS_H/2 - 85); 126 | 127 | // get coords for each server 128 | for (var g in server_groups) { 129 | var group = server_groups[g]; 130 | var group_info = {}; 131 | 132 | if (group["type"] == "replSet") { 133 | group_info = generateIconCoords( 134 | frame, 135 | group["members"], 136 | shard_coords[group["name"]]["x"], 137 | shard_coords[group["name"]]["y"], 138 | 50); 139 | 140 | // format replsets entry 141 | var members = []; 142 | for (var s in group["members"]) { 143 | members.push(group["members"][s]["server_num"]); 144 | } 145 | 146 | var rs = { "members" : members, 147 | "x" : shard_coords[group["name"]]["x"], 148 | "y" : shard_coords[group["name"]]["y"], 149 | "r" : 50, 150 | "on" : false }; 151 | 152 | replsets[group["name"]] = rs; 153 | } 154 | 155 | // Handle mongos 156 | else if (group["type"] == "mongos") { 157 | group_info = generateIconCoords( 158 | frame, 159 | group["members"], 160 | CANVAS_W/2, 161 | CANVAS_H/2, 162 | 30); 163 | 164 | // add the mongos to the topology 165 | members = [] 166 | for (var s in group_info) { 167 | members.push(group_info[s]); 168 | } 169 | 170 | mongos = { "members" : members, 171 | "x" : CANVAS_W/2, 172 | "y" : CANVAS_H/2, 173 | "r" : 30, 174 | "on" : false }; 175 | } 176 | 177 | // handle configs? 178 | else if (group["type"] == "config") { 179 | group_info = generateIconCoords( 180 | frame, 181 | group["members"], 182 | CANVAS_W/2, 183 | CANVAS_H/2, 184 | CANVAS_W); 185 | } 186 | 187 | // merge into parent servers dictionary 188 | for (var s in group_info) { 189 | serverCoordinates[s] = group_info[s]; 190 | } 191 | } 192 | } 193 | 194 | ICON_STROKE = ICON_RADIUS > 18 ? 12 : 6; 195 | return serverCoordinates; 196 | } 197 | 198 | /* 199 | * Fill the #topology div with the structure of this cluster. 200 | */ 201 | function addTopology(clusterType, servers) { 202 | var topology = ""; 203 | 204 | // standalone 205 | if (servers.length == 1) { 206 | topology = server_label(servers, servers.keys[0]); 207 | } 208 | // repl set 209 | else if (clusterType == "replSet") { 210 | for (var server in servers) { 211 | topology += "- " + server_label(servers, server) + "
"; 212 | } 213 | } 214 | // sharded cluster 215 | else { 216 | for (var repl in replsets) { 217 | topology += repl + "
"; 218 | var replset = replsets[repl]; 219 | for (var m in replset["members"]) { 220 | topology += "    - "; 221 | topology += server_label(servers, replset["members"][m]) + "
"; 222 | } 223 | } 224 | topology += "Mongos
"; 225 | for (var server in mongos["members"]) { 226 | topology += "    - "; 227 | topology += server_label(servers, mongos["members"][server]["server_num"]) + "
"; 228 | } 229 | } 230 | $("#topology").html(topology); 231 | } 232 | 233 | /* 234 | * Generate a label for this server, in most cases, the network name. 235 | */ 236 | function server_label(servers, num) { 237 | if (servers[num]) { 238 | if (servers[num]["network_name"] != "unknown") 239 | return servers[num]["network_name"]; 240 | if (servers[num]["self_name"] != "unknown") 241 | return servers[num]["self_name"]; 242 | return "no name found"; 243 | } 244 | 245 | /* This server was not up at the time, no entry in frame. */ 246 | return "unknown"; 247 | } 248 | 249 | /* 250 | * Generate coordinates for this group of servers. 251 | * "frame" is the current frame, 252 | * "group" is an array of server config documents by server_num, 253 | * "cx" and "cy" are the coordinates of the center of the 254 | * group, and "r" is the radius of the group. 255 | */ 256 | var generateIconCoords = function(frame, group, cx, cy, r) { 257 | var count = Object.keys(group).length; 258 | var all = {}; 259 | 260 | // For 0, 1, or 2 servers, special case coordinates 261 | switch(count) { 262 | case 0: 263 | return; 264 | case 1: 265 | var server = group[0]; 266 | server["x"] = cx; 267 | server["y"] = cy; 268 | server["on"] = false; 269 | server["type"] = "UNDISCOVERED"; 270 | server["r"] = ICON_RADIUS; 271 | all[server["server_num"]] = server; 272 | return all; 273 | case 2: 274 | var s1 = group[0]; 275 | s1["x"] = cx + r; 276 | s1["y"] = cy; 277 | s1["on"] = false; 278 | s1["r"] = ICON_RADIUS; 279 | s1["type"] = "UNDISCOVERED"; 280 | 281 | var s2 = group[1]; 282 | s2["x"] = cx - r; 283 | s2["y"] = cy; 284 | s2["on"] = false; 285 | s2["r"] = ICON_RADIUS; 286 | s2["type"] = "UNDISCOVERED"; 287 | 288 | all[s1["server_num"]] = s1; 289 | all[s2["server_num"]] = s2; 290 | return all; 291 | } 292 | 293 | // all other numbers of servers are drawn on a circular pattern 294 | var xVal = 0; 295 | var yVal = 0; 296 | var start_angle = (count % 2 == 0) ? 
-35 : -70; 297 | 298 | for (var i = 0; i < count; i++) { 299 | xVal = (r * Math.cos(start_angle * (Math.PI)/180)) + cx; 300 | yVal = (r * Math.sin(start_angle * (Math.PI)/180)) + cy; 301 | var s = group[i]; 302 | s["x"] = xVal; 303 | s["y"] = yVal; 304 | s["on"] = false; 305 | s["r"] = ICON_RADIUS; 306 | s["type"] = "UNDISCOVERED"; 307 | if (s.hasOwnProperty("server_num")) { 308 | all[s["server_num"]] = s; 309 | } else { 310 | all[s["n"]] = s; 311 | } 312 | start_angle += 360/count; 313 | } 314 | return all; 315 | }; 316 | 317 | /* Generate a dotted circle outline, with no fill */ 318 | var drawDottedCircle = function(x, y, r, color, wt, ctx) { 319 | ctx.strokeStyle = color; 320 | ctx.lineWidth = wt; 321 | var a = 0; 322 | var b = 0.05; 323 | while (b <= 2) { 324 | ctx.beginPath(); 325 | ctx.arc(x, y, r, a*Math.PI, b*Math.PI, false); 326 | ctx.stroke(); 327 | a += 0.08; 328 | b += 0.08; 329 | } 330 | }; 331 | 332 | /* Draw the stripes for an Arbiter icon */ 333 | var drawIconStripes = function(x, y, r, ctx) { 334 | // right stripe 335 | ctx.fillStyle = "#33312F"; 336 | ctx.beginPath(); 337 | ctx.arc(x, y, r, -0.4 * Math.PI, -0.25 * Math.PI, false); 338 | ctx.lineTo(x + r * Math.sin(0.25 * Math.PI), y + r * Math.sin(45)); 339 | ctx.arc(x, y, r, 0.25 * Math.PI, 0.4 * Math.PI, false); 340 | ctx.lineTo(x + r * Math.cos(0.4 * Math.PI), y - r * Math.sin(0.4 * Math.PI)); 341 | ctx.fill(); 342 | 343 | // left stripe 344 | ctx.beginPath(); 345 | ctx.arc(x, y, r, -0.6 * Math.PI, -0.75 * Math.PI, true); 346 | ctx.lineTo(x - r * Math.sin(0.25 * Math.PI), y + r * Math.sin(45)); 347 | ctx.arc(x, y, r, -1.25*Math.PI, -1.4*Math.PI, true); 348 | ctx.lineTo(x - r * Math.cos(0.4 * Math.PI), y - r * Math.sin(0.4*Math.PI)); 349 | ctx.fill(); 350 | 351 | // top circle, just for outline 352 | ctx.beginPath(); 353 | ctx.arc(x, y, r, 0, 360, false); 354 | ctx.lineWidth = (r > 30) ? 4 : 2; 355 | ctx.strokeStyle = "#1A1007"; 356 | ctx.stroke(); 357 | }; 358 | 359 | /* Draw a fairly generic circle of radius r, centered at (x,y) */ 360 | var drawCircle = function(x, y, r, fill, stroke, ctx) { 361 | ctx.fillStyle = fill; 362 | ctx.beginPath(); 363 | ctx.arc(x,y, r, 0, 360, false); 364 | ctx.strokeStyle = stroke; 365 | ctx.lineWidth = ICON_STROKE; 366 | ctx.stroke(); 367 | ctx.fill(); 368 | }; 369 | 370 | /* Render ? 
for a server in an Unknown state */ 371 | var drawIconQuestionMark = function(x, y, r, ctx) { 372 | ctx.font = "45pt Georgia"; 373 | ctx.fillStyle = "#9D7E51"; 374 | ctx.fillText("?", x - r / 4, y + (0.4*r)); 375 | }; 376 | 377 | /* Render a lock, for a locked server */ 378 | var drawIconLock = function(x, y, r, ctx) { 379 | var w = r/4; 380 | var h = 0.7 * r; 381 | var radius = h/4; 382 | var theta = Math.PI/8; // radians 383 | 384 | // draw the top circle 385 | ctx.beginPath(); 386 | ctx.lineWidth = 10; 387 | ctx.arc(x, y - radius, radius, 0.5 * Math.PI - theta, 0.5 * Math.PI + theta, true); 388 | y = y - h/2; 389 | ctx.moveTo(x + (radius*Math.sin(theta)), y + h/2 - (radius - radius*Math.cos(theta))); 390 | ctx.lineTo(x + w, y + h); 391 | ctx.lineTo(x - w, y + h); 392 | ctx.lineTo(x - (radius*Math.sin(theta)), y + h/2 - (radius - radius*Math.cos(theta))); 393 | ctx.stroke(); 394 | ctx.fillStyle = "#2D1F1C"; 395 | ctx.fill(); 396 | }; 397 | 398 | /* Draw the crown for a Primary server */ 399 | var drawIconCrown = function(x, y, w, h, ctx) { 400 | ctx.beginPath(); 401 | ctx.moveTo(x, y); 402 | // right side 403 | ctx.lineTo(x + w/5, y + h/2); 404 | ctx.lineTo(x + (2*w/5), y + h/5); 405 | ctx.lineTo(x + (3*w/5), y + (0.55)*h); 406 | ctx.lineTo(x + w, y + h/5); 407 | ctx.lineTo(x + (2*w/3), y + h); 408 | // left side 409 | ctx.lineTo(x - (2*w/3), y + h); 410 | ctx.lineTo(x - w, y + h/5); 411 | ctx.lineTo(x - (3*w/5), y + 0.55*h); 412 | ctx.lineTo(x - (2*w/5), y + h/5); 413 | ctx.lineTo(x - w/5, y + h/2); 414 | ctx.lineTo(x, y); 415 | // set fills and line weights 416 | ctx.lineWidth = 2; // consider setting dynamically for scaling 417 | ctx.lineJoin = "miter"; 418 | ctx.stroke(); 419 | // fill yellow 420 | ctx.fillStyle = "#F0E92F"; 421 | ctx.fill(); 422 | }; 423 | -------------------------------------------------------------------------------- /edda/ui/display/js/links.js: -------------------------------------------------------------------------------- 1 | // Copyright 2009-2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
14 | 15 | /* Draw a line from (x, y) to (x2, y2) */ 16 | var drawOneLine = function(x, y, x2, y2, ctx) { 17 | ctx.beginPath(); 18 | ctx.moveTo(x, y); 19 | ctx.lineWidth = 1; 20 | ctx.strokeStyle = "#1C0E0B"; 21 | ctx.lineTo(x2, y2); 22 | ctx.stroke(); 23 | }; 24 | 25 | /* Draw a dotted line from (x, y) to (x2, y2) */ 26 | var drawBrokenLink = function(x, y, x2, y2, ctx) { 27 | // adapted from http://stackoverflow.com/questions/4576724/dotted-stroke-in-canvas 28 | var dy = y2 - y; 29 | var dx = x2 - x; 30 | var slope = dy/dx; 31 | var l = 10; // length of line segments, "dots" 32 | var distRemaining = Math.sqrt(dx*dx + dy*dy); 33 | var xStep = Math.sqrt(l*l / (1 + slope*slope)); 34 | 35 | // set the sign of xStep: 36 | if (x > x2) { xStep = -xStep; } 37 | var yStep = slope * xStep; 38 | 39 | ctx.lineWidth = 1; 40 | ctx.strokeStyle = "#9D7E51"; 41 | ctx.beginPath(); 42 | 43 | while (distRemaining >= 0.1) { 44 | ctx.moveTo(x, y); 45 | x += xStep; 46 | y += yStep; 47 | ctx.lineTo(x, y); 48 | x += xStep; 49 | y += yStep; 50 | distRemaining -= (2 * l); 51 | } 52 | ctx.stroke(); 53 | }; 54 | -------------------------------------------------------------------------------- /edda/ui/display/js/mouse_over.js: -------------------------------------------------------------------------------- 1 | // Copyright 2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | /* 16 | * On mouseover, draws a drop shadow under any server or 17 | * replica set that we've selected. 18 | */ 19 | var onCanvasMouseover = function(e) { 20 | if (mouseoverNodes(globalServers, e)) return; 21 | mouseoverNodes(replsets, e); 22 | }; 23 | 24 | /* 25 | * Given a group of nodes, check if we are mousing over any of them, 26 | * and respond by drawing a drop shadow. Return true if we were over 27 | * any of the nodes, false if over none. 28 | */ 29 | var mouseoverNodes = function(nodes, e) { 30 | for (var n in nodes) { 31 | var node = nodes[n]; 32 | if (isOverNode(node, e)) { 33 | if (node["on"]) return true; 34 | 35 | drawDropShadow(node); 36 | node["on"] = true; 37 | return true; 38 | } 39 | 40 | if (node["on"]) undrawDropShadows(); 41 | } 42 | return false; 43 | }; 44 | 45 | /* 46 | * Draw a drop shadow under a server or replica set. 47 | */ 48 | var drawDropShadow = function(node) { 49 | undrawDropShadows(); 50 | var shadow = contexts["shadow"]; 51 | shadow.beginPath(); 52 | shadow.arc(node["x"], node["y"], node["r"], 0, 360, false); 53 | shadow.strokeStyle = "red"; 54 | shadow.fillStyle = "rgba(10, 10, 10, 1)"; 55 | shadow.lineWidth = 10; 56 | shadow.shadowColor = "black"; 57 | shadow.shadowBlur = node["r"] + 30; 58 | shadow.fill(); 59 | node["on"] = true; 60 | }; 61 | 62 | /* 63 | * Erase all drop shadows, and turn off all nodes. 
64 | */ 65 | var undrawDropShadows = function() { 66 | for (var n in replsets) replsets[n]["on"] = false; 67 | for (var n in globalServers) globalServers[n]["on"] = false; 68 | canvases["shadow"].width = canvases["shadow"].width; 69 | }; 70 | 71 | /* 72 | * When we click on a server, pop up a message box with 73 | * information about that server. 74 | */ 75 | var onCanvasClick = function(e) { 76 | for (var s in globalServers) { 77 | if (isOverNode(globalServers[s], e)) { 78 | var info = server_label(globalServers, s); 79 | var info = "self_name: "; 80 | info += globalServers[s]["self_name"]; 81 | info += "
<br/>network_name: " + globalServers[s]["network_name"]; 82 | info += "
status: " + frames[current_frame]["servers"][s]; 83 | info += "
version: " + globalServers[s]["version"]; 84 | formatInfoBox(info, e); 85 | return; 86 | } 87 | } 88 | for (var rs in replsets) { 89 | var repl = replsets[rs]; 90 | if (isOverNode(repl, e)) { 91 | var info = "Replica set " + rs + "
"; 92 | info += "Members:
"; 93 | for (var i = 0; i < repl["members"].length; i++) { 94 | info += "   "; 95 | info += server_label(globalServers, repl["members"][i]) + "
"; 96 | } 97 | formatInfoBox(info, e); 98 | return; 99 | } 100 | } 101 | hideInfoBox(); 102 | return; 103 | }; 104 | 105 | /* 106 | * Fill the info box with this message and display at the 107 | * correct coordinates. 108 | */ 109 | var formatInfoBox = function(msg, e) { 110 | var offset = $("#shadow_layer").offset(); 111 | var x = e.clientX - offset.left; 112 | var y = e.clientY - offset.top; 113 | $("#message_box").html(msg); 114 | $("#message_box").css({ left : x + 'px' }); 115 | $("#message_box").css({ top : y + 'px' }); 116 | showInfoBox(); 117 | }; 118 | 119 | /* 120 | * Show the info box. 121 | */ 122 | var showInfoBox = function() { 123 | $("#message_box").show(); 124 | }; 125 | 126 | /* 127 | * Hide the info box. 128 | */ 129 | var hideInfoBox = function() { 130 | $("#message_box").hide(); 131 | }; 132 | 133 | /* 134 | * Is the mouse over this node? 135 | */ 136 | var isOverNode = function(node, e) { 137 | var offset = $("#shadow_layer").offset(); 138 | var x = e.clientX - offset.left; 139 | var y = e.clientY - offset.top; 140 | diffX = x - node["x"]; 141 | diffY = y - node["y"]; 142 | distance = Math.sqrt(Math.pow(diffX,2) + Math.pow(diffY,2)); 143 | if (distance <= node["r"]) return true; 144 | return false; 145 | }; 146 | -------------------------------------------------------------------------------- /edda/ui/display/js/pop.js: -------------------------------------------------------------------------------- 1 | // Copyright 2009-2012 10gen, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | // opens a popup div 16 | function popUp(name) { 17 | document.getElementById("mask").style.visibility='visible'; 18 | document.getElementById(name).style.visibility='visible'; 19 | } 20 | 21 | // close the popup div 22 | function popDown(name) { 23 | document.getElementById(name).style.visibility='hidden'; 24 | document.getElementById("mask").style.visibility='hidden'; 25 | } -------------------------------------------------------------------------------- /edda/ui/display/js/render.js: -------------------------------------------------------------------------------- 1 | // Copyright 2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
14 | 15 | /* Render the cluster at a single point in time */ 16 | var renderFrame = function(time) { 17 | if (!frames[time]) return; 18 | clearLayer("arrow"); 19 | clearLayer("server"); 20 | 21 | // set the global servers 22 | var frame = frames[time]; 23 | globalServers = calculateServerCoordinates(frame); 24 | 25 | console.log("Setting global servers:"); 26 | console.log(globalServers); 27 | 28 | renderLinks(globalServers, frame, contexts["arrow"]); 29 | renderBrokenLinks(globalServers, frame, contexts["arrow"]); 30 | renderMigrations(globalServers, frame, contexts["arrow"]); 31 | renderSyncs(globalServers, frame, contexts["arrow"]); 32 | drawServers(globalServers, frame, contexts["server"]); 33 | }; 34 | 35 | /* Render broken links between servers at a single point in time */ 36 | var renderBrokenLinks = function(servers, frame, ctx) { 37 | for (var server in frame["broken_links"]) { 38 | var list = frame["broken_links"][server]; 39 | for (i = 0; i < list.length; i++) { 40 | /* Coords only calculated for servers present in this frame. */ 41 | if (servers[server] && servers[list[i]]) { 42 | drawBrokenLink( 43 | servers[server]["x"], servers[server]["y"], 44 | servers[list[i]]["x"], servers[list[i]]["y"], 45 | ctx); 46 | } 47 | } 48 | } 49 | }; 50 | 51 | /* Render chunk migrations */ 52 | var renderMigrations = function(servers, frame, ctx) { 53 | console.log("replsets:"); 54 | console.log(replsets); 55 | migrations = frame["migrations"]; 56 | for (var from_shard in migrations) { 57 | for (var to_shard in migrations[from_shard]) { 58 | if (replsets.hasOwnProperty(from_shard) && replsets.hasOwnProperty(to_shard)) { 59 | drawOneArrow(replsets[from_shard]["x"], replsets[from_shard]["y"], 60 | replsets[to_shard]["x"], replsets[to_shard]["y"], ctx); 61 | } 62 | } 63 | } 64 | }; 65 | 66 | /* Render links between servers for a single point in time */ 67 | var renderLinks = function(servers, frame, ctx) { 68 | for (var server in frame["links"]) { 69 | var list = frame["links"][server]; 70 | for (i = 0; i < list.length; i++) { 71 | /* Coords only calculated for servers present in this frame. */ 72 | if (servers[server] && servers[list[i]]) { 73 | drawOneLine( 74 | servers[server]["x"], servers[server]["y"], 75 | servers[list[i]]["x"], servers[list[i]]["y"], 76 | ctx); 77 | } 78 | } 79 | } 80 | }; 81 | 82 | /* Render syncs between servers for a single point in time */ 83 | var renderSyncs = function(servers, frame, ctx) { 84 | for (var server in frame["syncs"]) { 85 | var list = frame["syncs"][server]; 86 | for (i = 0; i < list.length; i++) { 87 | /* Coords only calculated for servers present in this frame. */ 88 | if (servers[server] && servers[list[i]]) { 89 | drawOneArrow( 90 | servers[list[i]]["x"], servers[list[i]]["y"], 91 | servers[server]["x"], servers[server]["y"], 92 | ctx); 93 | } 94 | } 95 | } 96 | }; 97 | 98 | /* Clear the specified canvas layer */ 99 | var clearLayer = function(name) { 100 | canvases[name].width = canvases[name].width; 101 | }; 102 | -------------------------------------------------------------------------------- /edda/ui/display/js/setup.js: -------------------------------------------------------------------------------- 1 | // Copyright 2014 MongoDB, Inc. 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 
5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | // define program-wide global variables 16 | layers = new Array("background", "shadow", "link", "arrow", "server"); 17 | canvases = {}; 18 | globalServers = {}; 19 | contexts = {}; 20 | replsets = {}; 21 | mongos = {}; 22 | 23 | CANVAS_W = 0; 24 | CANVAS_H = 0; 25 | 26 | // batch information 27 | current_frame = 0; 28 | batch_size = 100; 29 | half_batch = parseInt(batch_size/2, 10); 30 | trigger = parseInt(batch_size/4, 10); 31 | frame_top = 0; 32 | frame_bottom = 0; 33 | 34 | // stored information 35 | frames = {}; 36 | admin = {}; 37 | total_frame_count = 0; 38 | 39 | // call various setup functions 40 | function edda_setup() { 41 | canvases_and_contexts(); 42 | connect(); // see connection.js 43 | time_setup(size(frames)); 44 | visual_setup(); 45 | version_number(); 46 | file_names(); 47 | mouse_over_setup(); 48 | } 49 | 50 | /* set up canvases and associated contexts */ 51 | function canvases_and_contexts() { 52 | for (var i in layers) { 53 | canvases[layers[i]] = document.getElementById(layers[i] + "_layer"); 54 | contexts[layers[i]] = canvases[layers[i]].getContext("2d"); 55 | } 56 | CANVAS_W = canvases[layers[0]].width; 57 | CANVAS_H = canvases[layers[0]].height; 58 | } 59 | 60 | /* set up mouse-over functionality */ 61 | function mouse_over_setup() { 62 | canvases["server"].addEventListener("mousemove", onCanvasMouseover, false); 63 | canvases["server"].addEventListener("click", onCanvasClick, false); 64 | } 65 | 66 | // set up display-related things from frames 67 | // generate coordinates 68 | // set background to brown 69 | function visual_setup() { 70 | canvases_and_contexts(); 71 | 72 | // clear all layers 73 | for (var name in layers) { 74 | canvases[layers[name]].width = canvases[layers[name]].width; 75 | } 76 | 77 | // color background 78 | var b_cvs = canvases["background"]; 79 | var b_ctx = contexts["background"]; 80 | b_ctx.beginPath(); 81 | var w = b_cvs.width; 82 | var h = b_cvs.height; 83 | var grad = b_ctx.createRadialGradient(w/2, h/2, 180, w/2, h/2, h); 84 | grad.addColorStop(0, "#4E3629"); 85 | grad.addColorStop(1, "#000000"); 86 | b_ctx.rect(0, 0, w, h); 87 | b_ctx.fillStyle = grad; 88 | b_ctx.fill(); 89 | 90 | // render first frame 91 | renderFrame("0"); 92 | } 93 | 94 | // set up slider functionality 95 | function time_setup(max_time) { 96 | 97 | $("#slider").slider({ slide: function(event, ui) { 98 | // handle frame batches 99 | if (ui.value >= current_frame) { direction = 1; } 100 | else { direction = -1; } 101 | 102 | current_frame = ui.value 103 | handle_batches(); 104 | 105 | frame = frames[ui.value]; 106 | 107 | // set info divs 108 | $("#timestamp").html("Time: " + frame["date"].substring(0, 50)); 109 | $("#summary").html(frame["summary"]); 110 | $("#log_message_box").html(frame["log_line"]); 111 | 112 | // erase pop-up box 113 | $("#message_box").attr("visibility", "hidden"); 114 | 115 | renderFrame(ui.value); 116 | 117 | // print witnesses, as hostnames 118 | var w = ""; 119 | var s; 120 | for (s in frame["witnesses"]) { 121 | if (w !== "") w += "
"; 122 | w += server_label(globalServers, [frame["witnesses"][s]]); 123 | } 124 | $("#witnesses").html(w); 125 | 126 | // print dissenters, as hostnames 127 | var d = ""; 128 | for (s in frame["dissenters"]) { 129 | if (d !== "") d += "
"; 130 | d += server_label(globalServers, frame["dissenters"][s]); 131 | } 132 | $("#dissenters").html(d); 133 | }}); 134 | $("#slider").slider( "option", "max", total_frame_count - 2); 135 | } 136 | 137 | // handle frame batches 138 | function handle_batches() { 139 | 140 | // do we even have to batch? 141 | if (total_frame_count <= batch_size) { 142 | return; 143 | } 144 | 145 | // handle case where user clicked entirely outside 146 | // aka load frames and then render 147 | if (current_frame > frame_top || 148 | current_frame < frame_bottom) { 149 | // force some garbage collection? 150 | slide_batch_window(); 151 | } 152 | 153 | // handle case where user is still within frame buffer 154 | // but close enough to edge to reload 155 | else if ((frame_top - current_frame < trigger && frame_top !== total_frame_count) || 156 | (current_frame - frame_bottom < trigger && frame_bottom !== 0)) { 157 | // force some garbage collection? 158 | slide_batch_window(); 159 | } 160 | } 161 | 162 | function slide_batch_window() { 163 | // get new frames 164 | frame_bottom = current_frame - half_batch; 165 | frame_top = current_frame + half_batch; 166 | 167 | // frame_bottom is less than 0 168 | if (frame_bottom < 0) { 169 | frame_bottom = 0; 170 | frame_top = batch_size; 171 | } 172 | 173 | // frame_top is past the last frame 174 | if (frame_top >= total_frame_count) { 175 | frame_top = total_frame_count - 1; 176 | frame_bottom = frame_top - batch_size; 177 | } 178 | get_batch(frame_bottom, frame_top); 179 | } 180 | 181 | // define .size function for object 182 | // http://stackoverflow.com/questions/5223/length-of-javascript-object-ie-associative-array 183 | function size(obj) { 184 | var len = 0, key; 185 | for (key in obj) { 186 | if(obj.hasOwnProperty(key)) 187 | len++; 188 | } 189 | return len; 190 | } 191 | 192 | function version_number() { 193 | $("#version").html("Version " + admin["version"]); 194 | return; 195 | } 196 | 197 | function file_names() { 198 | var s = ""; 199 | for (var i = 0; i < admin["file_names"].length; i++) { 200 | s += admin["file_names"][i]; 201 | s += "
"; 202 | } 203 | $("#log_files").html(s); 204 | } 205 | -------------------------------------------------------------------------------- /edda/ui/display/server_icons.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mongodb-labs/edda/5acaf9918d9aae6a6acbbfe48741d8a20aae1f21/edda/ui/display/server_icons.jpg -------------------------------------------------------------------------------- /edda/ui/display/style/bootstrap3_override.css: -------------------------------------------------------------------------------- 1 | div.container#content { 2 | max-width: none; 3 | margin-top: 80px; 4 | } 5 | ul { 6 | padding-left: 0px !important; 7 | } 8 | ul.inline { 9 | list-style: none !important; 10 | margin-left: 0px !important; 11 | } 12 | ul.inline li { 13 | display: inline-block !important; 14 | line-height: 20px !important; 15 | } 16 | h1, 17 | h2, 18 | h3, 19 | h4, 20 | h5, 21 | h6 { 22 | font-family: 'PT Sans', sans-serif !important; 23 | } 24 | .label { 25 | font-family: Helvetica, sans-serif !important; 26 | } 27 | .progress { 28 | margin-bottom: 0px !important; 29 | } 30 | .progress-bar-default { 31 | background-color: #bbb; 32 | } 33 | .one-liner { 34 | overflow: hidden; 35 | white-space: nowrap; 36 | text-overflow: ellipsis; 37 | display: block; 38 | } 39 | h1.one-liner { 40 | margin-top: 5px; 41 | padding-top: 5px; 42 | margin-bottom: 5px; 43 | padding-bottom: 5px; 44 | } 45 | .transparent { 46 | background-color: transparent !important; 47 | } 48 | .highlight-bg { 49 | background-color: #fcf0c6; 50 | border-bottom: 1px solid #fcf0c6; 51 | border-top: 1px solid #fcf0c6; 52 | } 53 | .one-char-div { 54 | float: left; 55 | width: 1.25em; 56 | } 57 | .zero-margin { 58 | margin: 0px; 59 | } 60 | .commit-panel { 61 | margin: 15px; 62 | font-size: 14px; 63 | font-family: Helvetica; 64 | color: #333; 65 | border-top: 1px solid #e2eaee; 66 | border-bottom: 1px solid #e2eaee; 67 | border-left: 1px solid #c5d5dd; 68 | border-right: 1px solid #c5d5dd; 69 | padding: 8px; 70 | position: relative; 71 | } 72 | .commit-message-large { 73 | font-size: 16px; 74 | } 75 | .commit-message { 76 | font-family: Helvetica, arial, freesans, clean, sans-serif; 77 | color: #333333; 78 | overflow: hidden; 79 | white-space: nowrap; 80 | text-overflow: ellipsis; 81 | display: block; 82 | line-height: 1.4em; 83 | } 84 | .commit-message-author { 85 | font-family: Helvetica, arial, freesans, clean, sans-serif; 86 | color: #333333; 87 | overflow: hidden; 88 | white-space: nowrap; 89 | text-overflow: ellipsis; 90 | display: block; 91 | font-size: 13px; 92 | line-height: 1.4em; 93 | } 94 | .github_gobutton { 95 | height: 22px; 96 | padding: 0 7px; 97 | line-height: 22px; 98 | font-size: 12px; 99 | color: #4e575b; 100 | text-shadow: 0 1px rgba(255, 255, 255, 0.5); 101 | background-color: #ddecf3; 102 | background-image: -moz-linear-gradient(#eff6f9, #ddecf3); 103 | background-image: -webkit-linear-gradient(#eff6f9, #ddecf3); 104 | background-image: linear-gradient(#eff6f9, #ddecf3); 105 | background-repeat: repeat-x; 106 | border: 1px solid #cedee5; 107 | border-radius: 3px; 108 | font-family: Monaco, "Liberation Mono", Courier, monospace; 109 | width: 112px; 110 | text-align: center; 111 | } 112 | .commit-left { 113 | margin-right: 122px; 114 | } 115 | .commit-right { 116 | position: absolute; 117 | right: 8px; 118 | top: 8px; 119 | } 120 | .commit-right a.browse { 121 | margin-top: 1px; 122 | font-size: 11px; 123 | font-weight: bold; 124 | float: right; 
125 | color: #999; 126 | } 127 | .commit-right a.browse:hover { 128 | text-decoration: underline; 129 | } 130 | .label-pt-sans-align { 131 | display: inline; 132 | position: relative; 133 | top: -0.125em; 134 | } 135 | .commit-message a { 136 | color: #333333; 137 | } 138 | a.muted-link { 139 | color: #333; 140 | } 141 | .commit-message a.highlight { 142 | color: #4183C4; 143 | } 144 | img.gravatar-small { 145 | float: left; 146 | margin-right: 8px; 147 | border-radius: 4px; 148 | height: 36px; 149 | width: 36px; 150 | } 151 | img.gravatar-medium { 152 | float: left; 153 | margin-right: 10px; 154 | border-radius: 6px; 155 | height: 50px; 156 | width: 50px; 157 | } 158 | a.jira-ticket-link { 159 | color: #4183C4; 160 | } 161 | .btn-dropdown { 162 | border: 1px solid #ddd; 163 | color: #232323; 164 | text-decoration: none !important; 165 | } 166 | .btn-dropdown:hover { 167 | border: 1px solid #ddd; 168 | color: #232323; 169 | text-decoration: none !important; 170 | } 171 | -------------------------------------------------------------------------------- /edda/ui/display/style/edda.css: -------------------------------------------------------------------------------- 1 | /* div that holds all the canvases */ 2 | body { 3 | background-image: url('../img/texture.png'); 4 | } 5 | 6 | #canvas { 7 | height: 440px; 8 | width: 580px; 9 | box-shadow: 0px 0px 40px #444444; 10 | } 11 | 12 | #background_layer, #shadow_layer, #server_layer, #arrow_layer, #text_layer, #message_layer, #link_layer { 13 | position: absolute; 14 | left: 0; 15 | top: 0; 16 | } 17 | 18 | /* information for each layer */ 19 | #background_layer { 20 | z-index: 4; 21 | } 22 | 23 | #shadow_layer { 24 | z-index: 5; 25 | } 26 | 27 | #server_layer { 28 | z-index: 8; 29 | } 30 | 31 | #link_layer { 32 | z-index: 6; 33 | } 34 | #arrow_layer { 35 | z-index: 7; 36 | } 37 | 38 | /* slider element */ 39 | 40 | .ui-slider-horizontal .ui-slider-handle { 41 | padding: 0px 0 0 0; 42 | } 43 | 44 | /* information divs */ 45 | 46 | #mask { 47 | /* a brown mask to go under the pop-up key */ 48 | opacity:0.8; 49 | filter:alpha(opacity=80); 50 | background-color: #1A0C09; 51 | position: fixed; 52 | z-index: 14; 53 | margin: auto; 54 | left: 0; 55 | top: 0; 56 | width: 100%; 57 | height: 100%; 58 | visibility: hidden; 59 | } 60 | 61 | #icon_key { 62 | background-color: #e2e2e2; 63 | width: 600px; 64 | height: 600px; 65 | position: fixed; 66 | z-index: 15; 67 | margin: auto; 68 | left: 0; right: 0; 69 | top: 0; bottom: 0; 70 | visibility: hidden; 71 | } 72 | 73 | #timestamp, #summary, #log_files, #witnesses, #dissenters { 74 | overflow-x: scroll; 75 | font-size: 10pt; 76 | } 77 | 78 | #summary { 79 | height: 40px; 80 | } 81 | 82 | #witnesses, #dissenters { 83 | height: 100px; 84 | } 85 | 86 | #pop_up_links { 87 | /*position: absolute;*/ 88 | width: 160px; 89 | top: 100%; 90 | padding: 10px; 91 | left: 100%; 92 | } 93 | 94 | /* link formatting */ 95 | a:link {color:#DECBA2; text-decoration: none;} 96 | a:hover {color: green; text-decoration: none;} 97 | a:active {color: #6a5178; text-decoration: none;} 98 | 99 | /* message box!! 
*/ 100 | 101 | #message_box { 102 | font-size: 12px; 103 | position: absolute; 104 | opacity:0.8; 105 | filter:alpha(opacity=80); 106 | background-color: #1C0E0B; 107 | visibility: "hidden"; 108 | color: #FFFFFF; 109 | z-index: 9; 110 | padding: 10px; 111 | max-width: 400px; 112 | } 113 | 114 | 115 | -------------------------------------------------------------------------------- /edda/ui/frames.py: -------------------------------------------------------------------------------- 1 | # Copyright 2009-2012 10gen, Inc. 2 | # 3 | # you may not use this file except in compliance with the License. 4 | # You may obtain a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 | # See the License for the specific language governing permissions and 12 | # limitations under the License. 13 | 14 | #!/usr/bin/env python 15 | import logging 16 | import string 17 | 18 | from copy import deepcopy 19 | from operator import itemgetter 20 | 21 | LOGGER = logging.getLogger(__name__) 22 | 23 | # The documents this module 24 | # generates will include the following information: 25 | 26 | # date : (string) 27 | # summary : (string) 28 | # log_line : (string) 29 | # witnesses : (list of server_nums) 30 | # dissenters : (list of server_nums) 31 | # flag : (something conflicted about this view of the world?) 32 | # server_groups: [ 33 | # { "type" : , "name" : "string", "members" : [ list of servers ] }, 34 | # ... 35 | # ] 36 | # servers : { 37 | # server : (state as string)... 38 | # } 39 | # links : { 40 | # server : [ list of servers ] 41 | # } 42 | # broken_links : { 43 | # server : [ list of servers ] 44 | # } 45 | # migrations : { 46 | # repl_name : [ list of shards it is migrating to ] 47 | # } 48 | # syncs : { 49 | # server : [ list of servers ] 50 | # } 51 | # users : { 52 | # server : [ list of users ] 53 | # } 54 | 55 | 56 | def generate_frames(unsorted_events, db, collName): 57 | """Given a list of events, generates and returns a list of frames 58 | to be passed to JavaScript client to be animated""" 59 | # for now, program will assume that all servers 60 | # view the world in the same way. If it detects something 61 | # amiss between two or more servers, it will set the 'flag' 62 | # to true, but will do nothing further. 
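    # As an illustration only (hypothetical server numbers and values), a single
    # frame produced by this function might look like:
    #
    #   {
    #       "date": "2012-08-03 12:00:01",
    #       "summary": "Server 1 is now PRIMARY",
    #       "log_line": "<the original log line>",
    #       "witnesses": ["1", "2"],
    #       "dissenters": [],
    #       "flag": False,
    #       "server_count": 2,
    #       "servers": {"1": "PRIMARY", "2": "SECONDARY"},
    #       "links": {"1": ["2"], "2": []},
    #       "broken_links": {"1": [], "2": []},
    #       "migrations": {},
    #       "syncs": {"2": ["1"], "1": []},
    #       "users": {"1": [], "2": []}
    #   }
    #
    # ("server_groups" is attached afterwards by update_frames_with_config.)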
63 | 64 | frames = {} 65 | last_frame = None 66 | i = 0 67 | 68 | # sort events by date 69 | events = sorted(unsorted_events, key=itemgetter("date")) 70 | 71 | # get all servers 72 | servers = list(db[collName + ".servers"].distinct("server_num")) 73 | 74 | for e in events: 75 | LOGGER.debug("Generating frame for a type {0} event with target {1}" 76 | .format(e["type"], e["target"])) 77 | f = new_frame(servers) 78 | # fill in various fields 79 | f["date"] = str(e["date"]) 80 | f["summary"] = e["summary"] 81 | f["log_line"] = e["log_line"] 82 | f["witnesses"] = e["witnesses"] 83 | f["dissenters"] = e["dissenters"] 84 | # see what data we can glean from the last frame 85 | if last_frame: 86 | f["servers"] = deepcopy(last_frame["servers"]) 87 | f["links"] = deepcopy(last_frame["links"]) 88 | f["migrations"] = deepcopy(last_frame["migrations"]) 89 | f["broken_links"] = deepcopy(last_frame["broken_links"]) 90 | f["users"] = deepcopy(last_frame["users"]) 91 | f["syncs"] = deepcopy(last_frame["syncs"]) 92 | f = witnesses_dissenters(f, e) 93 | f = info_by_type(f, e) 94 | last_frame = f 95 | frames[str(i)] = f 96 | i += 1 97 | return frames 98 | 99 | 100 | def new_frame(server_nums): 101 | """Given a list of servers, generates an empty frame 102 | with no links, syncs, users, or broken_links, and 103 | all servers set to UNDISCOVERED. Does not 104 | generate 'summary' or 'date' field""" 105 | f = {} 106 | f["server_count"] = len(server_nums) 107 | f["flag"] = False 108 | f["links"] = {} 109 | f["broken_links"] = {} 110 | f["migrations"] = {} 111 | f["syncs"] = {} 112 | f["users"] = {} 113 | f["servers"] = {} 114 | for s in server_nums: 115 | # ensure servers are given as strings 116 | s = str(s) 117 | f["servers"][s] = "UNDISCOVERED" 118 | f["links"][s] = [] 119 | f["broken_links"][s] = [] 120 | f["users"][s] = [] 121 | f["syncs"][s] = [] 122 | return f 123 | 124 | 125 | def witnesses_dissenters(f, e): 126 | """Using the witnesses and dissenters 127 | lists in event e, determine links that should 128 | exist in frame, and if this frame should be flagged""" 129 | LOGGER.debug("Resolving witnesses and dissenters into links") 130 | f["witnesses"] = e["witnesses"] 131 | f["dissenters"] = e["dissenters"] 132 | if e["witnesses"] <= e["dissenters"]: 133 | f["flag"] = True 134 | # a witness means a new link 135 | # links are always added to the TARGET's queue. 
136 | # unless target server just went down 137 | if e["type"] == "status": 138 | if (e["state"] == "REMOVED" or 139 | e["state"] == "DOWN" or 140 | e["state"] == "FATAL"): 141 | return f 142 | 143 | for w in e["witnesses"]: 144 | if w == e["target"]: 145 | continue 146 | # if w is DOWN, do not add link 147 | if f["servers"][w] == "DOWN": 148 | continue 149 | # do not add duplicate links 150 | if (not e["target"] in f["links"][w] and 151 | not w in f["links"][e["target"]]): 152 | f["links"][e["target"]].append(w) 153 | # fix any broken links 154 | if w in f["broken_links"][e["target"]]: 155 | f["broken_links"][e["target"]].remove(w) 156 | if e["target"] in f["broken_links"][w]: 157 | f["broken_links"][w].remove(e["target"]) 158 | # a dissenter means that link should be removed 159 | # add broken link only if link existed 160 | for d in e["dissenters"]: 161 | if e["target"] in f["links"][d]: 162 | f["links"][d].remove(e["target"]) 163 | # do not duplicate broken links 164 | if (not d in f["broken_links"][e["target"]] and 165 | not e["target"] in f["broken_links"][d]): 166 | f["broken_links"][d].append(e["target"]) 167 | if d in f["links"][e["target"]]: 168 | f["links"][e["target"]].remove(d) 169 | # do not duplicate broken links 170 | if (not e["target"] in f["broken_links"][d] and 171 | not d in f["broken_links"][e["target"]]): 172 | f["broken_links"][e["target"]].append(d) 173 | return f 174 | 175 | 176 | def break_links(me, f): 177 | # find my links and make them broken links 178 | LOGGER.debug("Breaking all links to server {0}".format(me)) 179 | for link in f["links"][me]: 180 | # do not duplicate broken links 181 | if (not link in f["broken_links"][me] and 182 | not me in f["broken_links"][link]): 183 | f["broken_links"][me].append(link) 184 | f["links"][me] = [] 185 | for sync in f["syncs"][me]: 186 | # do not duplicate broken links 187 | if (not sync in f["broken_links"][me] and 188 | not me in f["broken_links"][sync]): 189 | f["broken_links"][me].append(sync) 190 | f["syncs"][me].remove(sync) 191 | 192 | # find links and syncs that reference me 193 | for s in f["servers"].keys(): 194 | if s == me: 195 | continue 196 | for link in f["links"][s]: 197 | if link == me: 198 | f["links"][s].remove(link) 199 | # do not duplicate broken links 200 | if (not link in f["broken_links"][s] and 201 | not s in f["broken_links"][link]): 202 | f["broken_links"][s].append(link) 203 | for sync in f["syncs"][s]: 204 | if sync == me: 205 | f["syncs"][s].remove(sync) 206 | # do not duplicate broken links! 207 | if (not sync in f["broken_links"][s] and 208 | not s in f["broken_links"][sync]): 209 | f["broken_links"][s].append(sync) 210 | 211 | # remove all of my user connections 212 | f["users"][me] = [] 213 | return f 214 | 215 | 216 | def info_by_type(f, e): 217 | just_set = False 218 | # add in information from this event 219 | # by type: 220 | # make sure it is a string! 
221 | s = str(e["target"]) 222 | # check to see if previous was down, and if any other messages were sent from it, to bring it back up 223 | 224 | if f["servers"][s] == "DOWN" or f["servers"][s] == "STARTUP1": 225 | if e["witnesses"] or e["type"] == "new_conn": 226 | #f['servers'][s] = "UNDISCOVERED" 227 | pass 228 | 229 | # status changes 230 | if e["type"] == "status": 231 | # if server was previously stale, 232 | # do not change icon if RECOVERING 233 | if not (f["servers"][s] == "STALE" and 234 | e["state"] == "RECOVERING"): 235 | just_set = True 236 | f["servers"][s] = e["state"] 237 | # if server went down, change links and syncs 238 | if (e["state"] == "DOWN" or 239 | e["state"] == "REMOVED" or 240 | e["state"] == "FATAL"): 241 | f = break_links(s, f) 242 | 243 | # stale secondaries 244 | if e["type"] == "stale": 245 | f["servers"][s] = "STALE" 246 | 247 | # reconfigs 248 | elif e["type"] == "reconfig": 249 | # nothing to do for a reconfig? 250 | pass 251 | 252 | # startups 253 | elif e["type"] == "init": 254 | f["servers"][s] = "STARTUP1" 255 | 256 | # connections 257 | elif e["type"] == "new_conn": 258 | if not e["conn_addr"] in f["users"][s]: 259 | f["users"][s].append(e["conn_addr"]) 260 | elif e["type"] == "end_conn": 261 | if e["conn_addr"] in f["users"][s]: 262 | f["users"][s].remove(e["conn_addr"]) 263 | 264 | # chunk migrations 265 | # start: nothing 266 | # commit or abort: remove arrow 267 | elif e["type"] == "migration": 268 | # add the migration, if it's not already there 269 | if not e["from_shard"] in f["migrations"]: 270 | f["migrations"][e["from_shard"]] = [] 271 | f["migrations"][e["from_shard"]].append(e["from_shard"]) 272 | elif not e["to_shard"] in f["migrations"][e["from_shard"]]: 273 | f["migrations"][e["from_shard"]].append(e["to_shard"]) 274 | elif e["type"] == "commit_migration" or e["type"] == "abort_migration": 275 | # remove arrow 276 | f["migrations"][e["from_shard"]].remove(e["to_shard"]) 277 | 278 | # syncs 279 | elif e["type"] == "sync": 280 | s_to = e["sync_to"] 281 | s_from = s 282 | # do not allow more than one sync per server 283 | if not s_to in f["syncs"][s_from]: 284 | f["syncs"][s_from] = [] 285 | f["syncs"][s_from].append(s_to) 286 | # if links do not exist, add 287 | if (not s_to in f["links"][s_from] and 288 | not s_from in f["links"][s_to]): 289 | f["links"][s_from].append(s_to) 290 | # remove broken links 291 | if s_to in f["broken_links"][s_from]: 292 | f["broken_links"][s_from].remove(s_to) 293 | if s_from in f["broken_links"][s_to]: 294 | f["broken_links"][s_to].remove(s_from) 295 | 296 | # end syncs 297 | elif e["type"] == "end_sync": 298 | # remove sync arrow for this server 299 | f["syncs"][s] = [] 300 | 301 | # exits 302 | elif e["type"] == "exit": 303 | just_set = True 304 | f["servers"][s] = "DOWN" 305 | f = break_links(s, f) 306 | 307 | # fsync and locking 308 | elif e["type"] == "LOCKED": 309 | # make sure .LOCKED is not already appended 310 | if string.find(f["servers"][s], ".LOCKED") < 0: 311 | f["servers"][s] += ".LOCKED" 312 | elif e["type"] == "UNLOCKED": 313 | n = string.find(f["servers"][s], ".LOCKED") 314 | f["servers"][s] = f["servers"][s][:n] 315 | elif e["type"] == "FSYNC": 316 | # nothing to do for fsync? 
317 | # render a lock, if not already locked 318 | if string.find(f["servers"][s], ".LOCKED") < 0: 319 | f["servers"][s] += ".LOCKED" 320 | 321 | #if f servers of f was a witness to e[] bring f up 322 | for server in f["servers"]: 323 | if f["servers"][server] == "DOWN" and server in e["witnesses"] and len(e["witnesses"]) < 2: 324 | f['servers'][s] = "UNDISCOVERED" 325 | return f 326 | 327 | 328 | def update_frames_with_config(frames, config): 329 | i = 0 330 | while str(i) in frames: 331 | f = frames[str(i)] 332 | f["server_groups"] = config["groups"] 333 | i += 1 334 | -------------------------------------------------------------------------------- /scripts/edd3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mongodb-labs/edda/5acaf9918d9aae6a6acbbfe48741d8a20aae1f21/scripts/edd3 -------------------------------------------------------------------------------- /scripts/edda: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from edda import run_edda 4 | 5 | if __name__ == '__main__': 6 | run_edda.main() 7 | -------------------------------------------------------------------------------- /scripts/edda2: -------------------------------------------------------------------------------- 1 | from edda import run_edda 2 | 3 | if __name__ == '__main__': 4 | run_edda.main() 5 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # This file will be used with PyPi in order to package and distribute the final 16 | # product. 
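#
# Typical usage (standard distutils / pip commands, nothing project-specific):
#
#   python setup.py sdist    # build a source distribution
#   pip install .            # or: python setup.py install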
17 | 18 | classifiers = """ 19 | Development Status :: 4 - Beta 20 | Intended Audience :: Developers 21 | License :: OSI Approved :: Apache Software License 22 | Programming Language :: Python 23 | Programming Language :: JavaScript 24 | Topic :: Database 25 | Topic :: Software Development :: Libraries :: Python Modules 26 | Operating System :: Unix 27 | """ 28 | 29 | from distutils.core import setup 30 | 31 | __doc__ = "" 32 | doclines = __doc__.split("\n") 33 | 34 | setup( 35 | name="edda", 36 | version="0.7.0+", 37 | maintainer="MongoDB", 38 | maintainer_email="samantha.ritter@mongodb.com", 39 | url="https://github.com/10gen-labs/edda", 40 | license="http://www.apache.org/licenses/LICENSE-2.0.html", 41 | platforms=["any"], 42 | description=doclines[0], 43 | classifiers=filter(None, classifiers.split("\n")), 44 | long_description="\n".join(doclines[2:]), 45 | #include_package_data=True, 46 | packages=['edda', 'edda.filters', 'edda.post', 'edda.ui', 47 | 'edda.sample_logs', 'edda.ui.display.js', 'edda.ui.display.style', 48 | 'edda.ui.display', 'edda.sample_logs.hp', 'edda.sample_logs.pr' 49 | ], 50 | #packages = find_packages('src'), # include all packages under src 51 | #package_dir = {'':'src'}, # tell distutils packages are under src 52 | scripts=['scripts/edda'], 53 | install_requires=['pymongo'], 54 | 55 | package_data={ 56 | # If any package contains *.txt files, include them: 57 | 'edda.ui.display.js': ['*.js'], 58 | 'edda.ui.display.style': ['*.css', '*.ico'], 59 | 'edda.ui.display': ['*.jpg', '*.html'], 60 | 'edda.sample_logs.hp': ['*.log'], 61 | 'edda.sample_logs.pr': ['*.log'], 62 | # And include any *.dat files found in the 'data' subdirectory 63 | # of the 'mypkg' package, also: 64 | } 65 | ) 66 | -------------------------------------------------------------------------------- /test/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. -------------------------------------------------------------------------------- /test/repl_config.js: -------------------------------------------------------------------------------- 1 | // This file is only for scratch code. 
They are the commands in order for 2 | // generating replsets 3 | 4 | sudo rm -R /data/rs1 5 | sudo rm -R /data/rs2 6 | sudo rm -R /data/rs3 7 | 8 | sudo killall mongod 9 | sudo killall mongo 10 | 11 | sudo mkdir -p /data/rs1 12 | sudo mkdir -p /data/rs2 13 | sudo mkdir -p /data/rs3 14 | 15 | sudo chown kaushalparikh /data/rs1 16 | sudo chown kaushalparikh /data/rs2 17 | sudo chown kaushalparikh /data/rs3 18 | 19 | sudo ./mongod --replSet foo --logpath '1.log' --dbpath /data/rs1 --port 27017 --fork 20 | sudo ./mongod --replSet foo --logpath '2.log' --dbpath /data/rs2 --port 27018 --fork 21 | sudo ./mongod --replSet foo --logpath '3.log' --dbpath /data/rs3 --port 27019 --fork 22 | 23 | sudo ./mongo localhost:27017/foo 24 | 25 | config = { _id: "foo", members:[ 26 | { _id:0, host : 'localhost:27017' }, 27 | { _id:1, host : 'localhost:27018' }, 28 | { _id:2, host : 'localhost:27019', arbiterOnly: true }] 29 | } 30 | 31 | rs.initiate(config) 32 | 33 | rs.status() 34 | 35 | 36 | 37 | for (var i = 1; i <= Math.floor(Math.random()*10000); i++) db.foo.save({"name" : Math.floor(Math.random()*10000*Math.random()*4)}); 38 | 39 | 40 | 41 | var random = Math.floor(Math.random()*5) 42 | for(i = 0; i= 0): 43 | e["conn_addr"] = more["conn_addr"] 44 | e["conn_number"] = more["conn_number"] 45 | e["target"] = target 46 | if not w: 47 | e["witnesses"] = target 48 | else: 49 | e["witnesses"] = w 50 | e["dissenters"] = d 51 | return e 52 | 53 | 54 | def new_frame(self, servers): 55 | """Generate a new frame, with no links, broken_links, 56 | syncs, or users, and all servers set to UNDISCOVERED 57 | does not set the 'summary' field""" 58 | f = {} 59 | f["date"] = datetime.now() 60 | f["server_count"] = len(servers) 61 | f["witnesses"] = [] 62 | f["dissenters"] = [] 63 | f["flag"] = False 64 | f["links"] = {} 65 | f["broken_links"] = {} 66 | f["syncs"] = {} 67 | f["users"] = {} 68 | f["servers"] = {} 69 | for s in servers: 70 | f["servers"][s] = "UNDISCOVERED" 71 | f["links"][s] = [] 72 | f["broken_links"][s] = [] 73 | f["users"][s] = [] 74 | f["syncs"][s] = [] 75 | return f 76 | 77 | 78 | #--------------------- 79 | # test info_by_type() 80 | #--------------------- 81 | 82 | 83 | def test_info_by_type_status(self): 84 | """Test method on status type event""" 85 | e = self.generate_event("3", "status", {"state": "PRIMARY"}, ["3"], None) 86 | f = info_by_type(new_frame(["3"]), e) 87 | assert f 88 | assert f["servers"]["3"] == "PRIMARY" 89 | 90 | 91 | def test_info_by_type_reconfig(self): 92 | """Test method on reconfig type event""" 93 | e = self.generate_event("1", "reconfig", None, ["1"], None) 94 | f = info_by_type(new_frame(["1"]), e) 95 | assert f 96 | 97 | 98 | def test_info_by_type_new_conn(self): 99 | """Test method on new_conn type event""" 100 | e = self.generate_event("1", "new_conn", 101 | {"conn_addr": "1.2.3.4", 102 | "conn_number": 14}, ["1"], None) 103 | f = info_by_type(new_frame(["1"]), e) 104 | assert f 105 | assert f["users"]["1"] 106 | assert len(f["users"]["1"]) == 1 107 | assert "1.2.3.4" in f["users"]["1"] 108 | 109 | 110 | def test_info_by_type_end_conn(self): 111 | """Test method on end_conn type event""" 112 | # first, when there was no user stored 113 | e = self.generate_event("1", "end_conn", 114 | {"conn_addr": "1.2.3.4", 115 | "conn_number": 14}, ["1"], None) 116 | f = info_by_type(new_frame(["1"]), e) 117 | assert f 118 | assert not f["users"]["1"] 119 | # next, when there was a user stored 120 | f = new_frame(["1"]) 121 | f["users"]["1"].append("1.2.3.4") 122 | f = info_by_type(f, 
e) 123 | assert f 124 | assert not f["users"]["1"] 125 | 126 | 127 | def test_info_by_type_sync(self): 128 | """Test method on sync type event""" 129 | e = self.generate_event("4", "sync", {"sync_to":"3"}, ["4"], None) 130 | e2 = self.generate_event("2", "sync", {"sync_to":"1"}, ["2"], None) 131 | f = info_by_type(new_frame(["1", "2", "3", "4"]), e) 132 | f2 = info_by_type(new_frame(["1", "2", "3", "4"]), e2) 133 | assert f 134 | assert f2 135 | assert f["syncs"]["4"] 136 | assert f2["syncs"]["2"] 137 | assert len(f2["syncs"]["2"]) == 1 138 | assert len(f["syncs"]["4"]) == 1 139 | assert "1" in f2["syncs"]["2"] 140 | assert "3" in f["syncs"]["4"] 141 | 142 | 143 | def test_info_by_type_exit(self): 144 | """Test method on exit type event""" 145 | # no links established 146 | e = self.generate_event("3", "status", {"state": "DOWN"}, ["3"], None) 147 | f = info_by_type(new_frame(["3"]), e) 148 | assert f 149 | assert not f["links"]["3"] 150 | assert not f["broken_links"]["3"] 151 | # only broken links established 152 | f = new_frame(["3"]) 153 | f["broken_links"]["3"] = ["1", "2"] 154 | f = info_by_type(f, e) 155 | assert f 156 | assert not f["links"]["3"] 157 | assert f["broken_links"]["3"] 158 | assert len(f["broken_links"]["3"]) == 2 159 | assert "1" in f["broken_links"]["3"] 160 | assert "2" in f["broken_links"]["3"] 161 | # links and syncs established 162 | f = new_frame(["1", "2", "3", "4"]) 163 | f["links"]["3"] = ["1", "2"] 164 | f["syncs"]["3"] = ["4"] 165 | f = info_by_type(f, e) 166 | assert f 167 | assert not f["links"]["3"] 168 | assert not f["syncs"]["3"] 169 | assert f["broken_links"]["3"] 170 | assert "1" in f["broken_links"]["3"] 171 | assert "2" in f["broken_links"]["3"] 172 | 173 | 174 | def test_info_by_type_lock(self): 175 | """Test method on lock type event""" 176 | pass 177 | 178 | 179 | def test_info_by_type_unlock(self): 180 | """Test method on unlock type event""" 181 | pass 182 | 183 | 184 | def test_info_by_type_new(self): 185 | """Test method on a new type of event""" 186 | pass 187 | 188 | 189 | def test_info_by_type_down_server(self): 190 | """Test that this method properly handles 191 | servers going down (removes any syncs or links)""" 192 | pass 193 | 194 | 195 | #---------------------------- 196 | # test break_links() 197 | #---------------------------- 198 | 199 | 200 | 201 | #---------------------------- 202 | # test witnesses_dissenters() 203 | #---------------------------- 204 | 205 | 206 | def test_w_d_no_dissenters(self): 207 | """Test method on an entry with no dissenters""" 208 | pass 209 | 210 | 211 | def test_w_d_equal_w_d(self): 212 | """Test method an an entry with an 213 | equal number of witnesses and dissenters""" 214 | e = self.generate_event("1", "status", {"state":"ARBITER"}, ["1", "2"], ["3", "4"]) 215 | f = new_frame(["1", "2", "3", "4"]) 216 | f = witnesses_dissenters(f, e) 217 | assert f 218 | # assert only proper links were added, to target's queue 219 | assert f["links"]["1"] 220 | assert len(f["links"]["1"]) == 1 221 | assert "2" in f["links"]["1"] 222 | assert not "1" in f["links"]["1"] 223 | assert not "3" in f["links"]["1"] 224 | assert not "4" in f["links"]["1"] 225 | assert not f["links"]["2"] 226 | assert not f["links"]["3"] 227 | assert not f["links"]["4"] 228 | # make sure no broken links were wrongly added 229 | assert not f["broken_links"]["1"] 230 | assert not f["broken_links"]["2"] 231 | assert not f["broken_links"]["3"] 232 | assert not f["broken_links"]["4"] 233 | 234 | 235 | def test_w_d_more_witnesses(self): 236 | 
"""Test method on an entry with more 237 | witnesses than dissenters""" 238 | pass 239 | 240 | 241 | def test_w_d_more_dissenters(self): 242 | """Test method on an entry with more 243 | dissenters than witnesses""" 244 | pass 245 | 246 | 247 | def test_w_d_link_goes_down(self): 248 | """Test a case where a link was lost 249 | (ie a server was a dissenter in an event 250 | about a server it was formerly linked to)""" 251 | pass 252 | 253 | 254 | def test_w_d_new_link(self): 255 | """Test a case where a link was created 256 | between two servers (a server was a witness to 257 | and event involving a server it hadn't previously 258 | been linked to""" 259 | pass 260 | 261 | 262 | def test_w_d_same_link(self): 263 | """Test a case where an existing link 264 | is reinforced by the event's witnesses""" 265 | pass 266 | 267 | 268 | #----------------------- 269 | # test generate_frames() 270 | #----------------------- 271 | 272 | 273 | def test_generate_frames_empty(self): 274 | """Test generate_frames() on an empty list 275 | of events""" 276 | pass 277 | 278 | 279 | def test_generate_frames_one(self): 280 | """Test generate_frames() on a list with 281 | one event""" 282 | pass 283 | 284 | 285 | def test_generate_frames_all_servers_discovered(self): 286 | """Test generate_frames() on a list of events 287 | that defines each server with a status by the end 288 | (so when the final frame is generated, no servers 289 | should be UNDISCOVERED)""" 290 | pass 291 | 292 | 293 | def test_generate_frames_all_linked(self): 294 | """Test generate_frames() on a list of events 295 | that defines links between all servers, by the 296 | final frame""" 297 | pass 298 | 299 | 300 | def test_generate_frames_users(self): 301 | """Test generate_frames() on a list of events 302 | that involves a user connection""" 303 | pass 304 | 305 | 306 | def test_generate_frames_syncs(self): 307 | """Test generate_frames() on a list of events 308 | that creates a chain of syncing by the last frame""" 309 | pass 310 | 311 | if __name__ == '__main__': 312 | unittest.main() 313 | -------------------------------------------------------------------------------- /test/test_fsync_lock.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import unittest 16 | from edda.filters.fsync_lock import * 17 | from datetime import datetime 18 | 19 | 20 | # Mon Jul 2 10:00:11 [conn2] CMD fsync: sync:1 lock:1 21 | # Mon Jul 2 10:00:04 [conn2] command: unlock requested 22 | # Mon Jul 2 10:00:10 [conn2] db is now locked for snapshotting, no writes allowed. db.fsyncUnlock() to unlock 23 | 24 | 25 | class test_fsync_lock(unittest.TestCase): 26 | def test_criteria(self): 27 | assert criteria("this should not pass") == -1 28 | assert criteria("Mon Jul 2 10:00:10 [conn2] db is now locked for " 29 | "snapshotting, no writes allowed. 
db.fsyncUnlock() to unlock") == 3 30 | assert criteria("Mon Jul 2 10:00:04 [conn2] command: " 31 | "unlock requested") == 1 32 | assert criteria("Mon Jul 2 10:00:11 [conn2] " 33 | "CMD fsync: sync:1 lock:1") == 2 34 | assert criteria("Thu Jun 14 11:25:18 [conn2] replSet RECOVERING") == -1 35 | 36 | def test_process(self): 37 | date = datetime.now() 38 | self.check_state("Mon Jul 2 10:00:10 [conn2] db is now locked for snapshotting" 39 | ", no writes allowed. db.fsyncUnlock() to unlock", "LOCKED", date, 0, 0) 40 | self.check_state("Mon Jul 2 10:00:04 [conn2] command: unlock requested" 41 | "", "UNLOCKED", date, 0, 0) 42 | self.check_state("Mon Jul 2 10:00:11 [conn2] CMD fsync: sync:1 lock:1" 43 | "", "FSYNC", date, 1, 1) 44 | 45 | # All of the following should return None 46 | assert process("Thu Jun 14 11:25:18 [conn2] replSet RECOVERING", date) == None 47 | assert process("This should fail", date) == None 48 | assert process("Thu Jun 14 11:26:05 [conn7] replSet info voting yea for localhost:27019 (2)\n", date) == None 49 | assert process("Thu Jun 14 11:26:10 [rsHealthPoll] couldn't connect to localhost:27017: couldn't connect to server localhost:27017\n", date) == None 50 | assert process("Thu Jun 14 11:28:57 [websvr] admin web console waiting for connections on port 28020\n", date) == None 51 | 52 | def check_state(self, message, code, date, sync, lock): 53 | doc = process(message, date) 54 | assert doc 55 | assert doc["type"] == "fsync" 56 | assert doc["original_message"] == message 57 | assert doc["info"]["server"] == "self" 58 | #if sync != 0: 59 | # print "Sync Num: {}".format(doc["info"]["sync_num"]) 60 | # assert doc["info"]["sync_num"] == sync 61 | # assert doc["info"]["lock_num"] == lock 62 | 63 | #print 'Server number is: *{0}*, testing against, *{1}*'.format(doc["info"]["server"], server) 64 | if __name__ == '__main__': 65 | unittest.main() -------------------------------------------------------------------------------- /test/test_init_and_listen.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import unittest 16 | from edda.filters.init_and_listen import * 17 | from datetime import datetime 18 | 19 | class test_init_and_listen(unittest.TestCase): 20 | def test_criteria(self): 21 | """Test the criteria() method of this module""" 22 | # these should not pass 23 | assert criteria("this should not pass") < 1 24 | assert criteria("Mon Jun 11 15:56:40 [conn5] end connection " 25 | "127.0.0.1:55224 (2 connections now open)") == 0 26 | assert criteria("Mon Jun 11 15:56:16 [initandlisten] ** WARNING: soft " 27 | "rlimits too low. 
Number of files is 256, should be at least 1000" 28 | "") == 0 29 | assert criteria("init and listen starting") == 0 30 | assert criteria("[initandlisten]") == 0 31 | assert criteria("starting") == 0 32 | assert criteria("connection accepted") == 0 33 | # these should pass 34 | assert criteria("Mon Jun 11 15:56:16 [initandlisten] MongoDB starting " 35 | ": pid=7029 port=27018 dbpath=/data/rs2 64-bit " 36 | "host=Kaushals-MacBook-Air.local") == 1 37 | return 38 | 39 | 40 | def test_process(self): 41 | """test the process() method of this module""" 42 | date = datetime.now() 43 | # non-valid message 44 | assert process("this is an invalid message", date) == None 45 | # these should pass 46 | doc = process("Mon Jun 11 15:56:16 [initandlisten] MongoDB starting : " 47 | "pid=7029 port=27018 dbpath=/data/rs2 64-bit host=Kaushals-MacBook-Air" 48 | ".local", date) 49 | assert doc 50 | assert doc["type"] == "init" 51 | assert doc["info"]["server"] == "self" 52 | assert doc["info"]["subtype"] == "startup" 53 | assert doc["info"]["addr"] == "Kaushals-MacBook-Air.local:27018" 54 | return 55 | 56 | 57 | def test_starting_up(self): 58 | """test the starting_up() method of this module""" 59 | doc = {} 60 | doc["type"] = "init" 61 | doc["info"] = {} 62 | # non-valid message 63 | assert not starting_up("this is a nonvalid message", doc) 64 | assert not starting_up("Mon Jun 11 15:56:16 [initandlisten] MongoDB starting " 65 | ": 64-bit host=Kaushals-MacBook-Air.local", doc) 66 | # valid messages 67 | doc = starting_up("Mon Jun 11 15:56:16 [initandlisten] MongoDB starting : " 68 | "pid=7029 port=27018 dbpath=/data/rs2 64-bit " 69 | "host=Kaushals-MacBook-Air.local", doc) 70 | assert doc 71 | assert doc["type"] == "init" 72 | assert doc["info"]["subtype"] == "startup" 73 | assert doc["info"]["addr"] == "Kaushals-MacBook-Air.local:27018" 74 | return 75 | 76 | if __name__ == '__main__': 77 | unittest.main() 78 | -------------------------------------------------------------------------------- /test/test_organizing_servers.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import unittest #organizing servers uses the supporting methods module and is going to have the same import problem that replacing clock skew has. TO BE FIXED. 
16 | from edda.post.event_matchup import organize_servers 17 | from edda.run_edda import assign_address 18 | import pymongo 19 | import logging 20 | from datetime import * 21 | from pymongo import MongoClient 22 | from time import sleep 23 | from nose.plugins.skip import Skip, SkipTest 24 | 25 | 26 | class test_organizing_servers(unittest.TestCase): 27 | def db_setup(self): 28 | """Set up a database for use by tests""" 29 | c = MongoClient() 30 | db = c["test"] 31 | servers = db["fruit.servers"] 32 | entries = db["fruit.entries"] 33 | clock_skew = db["fruit.clock_skew"] 34 | db.drop_collection(servers) 35 | db.drop_collection(entries) 36 | db.drop_collection(clock_skew) 37 | return [servers, entries, clock_skew, db] 38 | 39 | 40 | def test_organize_two_servers(self): 41 | logger = logging.getLogger(__name__) 42 | servers, entries, clock_skew, db = self.db_setup() 43 | original_date = datetime.now() 44 | 45 | entries.insert(self.generate_doc( 46 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 47 | entries.insert(self.generate_doc("status", "pear", "STARTUP2" 48 | "", 5, "apple", original_date + timedelta(seconds=5))) 49 | 50 | assign_address(self, 1, "apple", servers) 51 | assign_address(self, 2, "pear", servers) 52 | 53 | organized_servers = organize_servers(db, "fruit") 54 | logger.debug("Organized servers Printing: {}".format(organized_servers)) 55 | for server_name in organized_servers: 56 | logger.debug("Server Name: {}".format(server_name)) 57 | for item in organized_servers[server_name]: 58 | logger.debug("Item list: {}".format(item)) 59 | logger.debug("Item: {}".format(item)) 60 | assert item 61 | 62 | 63 | def test_organizing_three_servers(self): 64 | servers, entries, clock_skew, db = self.db_setup() 65 | logger = logging.getLogger(__name__) 66 | original_date = datetime.now() 67 | 68 | entries.insert(self.generate_doc( 69 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 70 | entries.insert(self.generate_doc("status", "apple", "STARTUP2" 71 | "", 5, "pear", original_date + timedelta(seconds=14))) 72 | entries.insert(self.generate_doc("status", "pear", "STARTUP2" 73 | "", 5, "apple", original_date + timedelta(seconds=5))) 74 | entries.insert(self.generate_doc("status", "pear", "STARTUP2" 75 | "", 5, "apple", original_date + timedelta(seconds=15))) 76 | entries.insert(self.generate_doc("status", "plum", "STARTUP2" 77 | "", 5, "apple", original_date + timedelta(seconds=9))) 78 | entries.insert(self.generate_doc("status", "plum", "STARTUP2" 79 | "", 5, "apple", original_date + timedelta(seconds=11))) 80 | 81 | servers.insert(self.generate_server_doc( 82 | "status", "plum", "STARTUP2", 5, "apple", original_date)) 83 | servers.insert(self.generate_server_doc("status", "apple", "STARTUP2" 84 | "", 5, "plum", original_date + timedelta(seconds=9))) 85 | servers.insert(self.generate_server_doc("status", "pear", "STARTUP2" 86 | "", 5, "apple", original_date + timedelta(seconds=6))) 87 | 88 | organized_servers = organize_servers(db, "fruit") 89 | logger.debug("Organized servers Printing: {}".format(organized_servers)) 90 | for server_name in organized_servers: 91 | logger.debug("Server Name: {}".format(server_name)) 92 | first = True 93 | for item in organized_servers[server_name]: 94 | logger.debug("Item list: {}".format(item)) 95 | if first: 96 | past_date = item["date"] 97 | first = False 98 | continue 99 | current_date = item["date"] 100 | assert past_date <= current_date 101 | past_date = current_date 102 | 103 | #ogger.debug("Item: {}".format(item)) 104 | 105 | 
106 | def test_organize_same_times(self): 107 | servers, entries, clock_skew, db = self.db_setup() 108 | logger = logging.getLogger(__name__) 109 | original_date = datetime.now() 110 | 111 | entries.insert(self.generate_doc( 112 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 113 | entries.insert(self.generate_doc( 114 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 115 | entries.insert(self.generate_doc( 116 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 117 | entries.insert(self.generate_doc( 118 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 119 | entries.insert(self.generate_doc( 120 | "status", "plum", "STARTUP2", 5, "apple", original_date)) 121 | entries.insert(self.generate_doc( 122 | "status", "plum", "STARTUP2", 5, "apple", original_date)) 123 | 124 | servers.insert(self.generate_server_doc( 125 | "status", "plum", "STARTUP2", 5, "apple", original_date)) 126 | servers.insert(self.generate_server_doc( 127 | "status", "apple", "STARTUP2", 5, "plum", original_date)) 128 | servers.insert(self.generate_server_doc( 129 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 130 | 131 | organized_servers = organize_servers(db, "fruit") 132 | logger.debug("Organized servers Printing: {}".format(organized_servers)) 133 | for server_name in organized_servers: 134 | logger.debug("Server Name: {}".format(server_name)) 135 | first = True 136 | for item in organized_servers[server_name]: 137 | logger.debug("Item list: {}".format(item)) 138 | if first: 139 | past_date = item["date"] 140 | first = False 141 | continue 142 | current_date = item["date"] 143 | assert past_date <= current_date 144 | past_date = current_date 145 | 146 | 147 | def generate_doc(self, type, server, label, code, target, date): 148 | """Generate an entry""" 149 | doc = {} 150 | doc["type"] = type 151 | doc["origin_server"] = server 152 | doc["info"] = {} 153 | doc["info"]["state"] = label 154 | doc["info"]["state_code"] = code 155 | doc["info"]["server"] = target 156 | doc["date"] = date 157 | return doc 158 | 159 | 160 | def generate_server_doc(self, type, server, label, code, target, date): 161 | """Generate an entry""" 162 | doc = {} 163 | doc["type"] = type 164 | doc["server_num"] = server 165 | doc["origin_server"] = server 166 | doc["info"] = {} 167 | doc["info"]["state"] = label 168 | doc["info"]["state_code"] = code 169 | doc["info"]["server"] = target 170 | doc["date"] = date 171 | return doc 172 | 173 | if __name__ == '__main__': 174 | unittest.main() 175 | 176 | -------------------------------------------------------------------------------- /test/test_replacing_clock_skew.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | import logging 16 | import unittest #replace_clock_skew uses supporting_methods, which is why there is a problem with the import statement 17 | 18 | from edda.post.replace_clock_skew import replace_clock_skew 19 | from edda.supporting_methods import assign_address 20 | from datetime import * 21 | from pymongo import MongoClient #The tests fail, but this module is not currently used. 22 | 23 | class test_replacing_clock_skew(unittest.TestCase): 24 | def db_setup(self): 25 | """Set up a database for use by tests""" 26 | c = MongoClient() 27 | db = c["test"] 28 | servers = db["fruit.servers"] 29 | entries = db["fruit.entries"] 30 | clock_skew = db["fruit.clock_skew"] 31 | db.drop_collection(servers) 32 | db.drop_collection(entries) 33 | db.drop_collection(clock_skew) 34 | return [servers, entries, clock_skew, db] 35 | 36 | 37 | def test_replacing_none(self): 38 | """Replaces servers without skews.""" 39 | logger = logging.getLogger(__name__) 40 | #result = self.db_setup() 41 | servers, entries, clock_skew, db = self.db_setup() 42 | original_date = datetime.now() 43 | 44 | entries.insert(self.generate_doc( 45 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 46 | entries.insert(self.generate_doc( 47 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 48 | assign_address(self, 5, "pear", servers) 49 | assign_address(self, 6, "apple", servers) 50 | doc1 = self.generate_cs_doc("5", "6") 51 | doc1["partners"]["6"]["0"] = 5 52 | clock_skew.insert(doc1) 53 | doc1 = self.generate_cs_doc("6", "5") 54 | doc1["partners"]["5"]["0"] = 5 55 | clock_skew.insert(doc1) 56 | 57 | replace_clock_skew(db, "fruit") 58 | 59 | docs = entries.find({"origin_server": "apple"}) 60 | for doc in docs: 61 | logger.debug("Original Date: {}".format(doc["date"])) 62 | delta = original_date - doc["date"] 63 | logger.debug("Delta: {}".format(repr(delta))) 64 | 65 | if delta < timedelta(milliseconds=1): 66 | assert True 67 | continue 68 | assert False 69 | #assert 4 == 5 70 | #assert original_date == entries.find(). 
71 | 72 | 73 | def test_replacing_one_value(self): 74 | # Disabled: replace_clock_skew is not currently used and this test's assertions fail. 75 | return 76 | logger = logging.getLogger(__name__) 77 | servers, entries, clock_skew, db = self.db_setup() 78 | skew1 = 5 79 | 80 | original_date = datetime.now() 81 | entries.insert(self.generate_doc( 82 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 83 | entries.insert(self.generate_doc( 84 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 85 | assign_address(self, 5, "pear", servers) 86 | assign_address(self, 6, "apple", servers) 87 | doc1 = self.generate_cs_doc("5", "6") 88 | doc1["partners"]["6"]["5"] = skew1 89 | clock_skew.insert(doc1) 90 | doc1 = self.generate_cs_doc("6", "5") 91 | doc1["partners"]["5"]["0"] = -skew1 92 | clock_skew.insert(doc1) 93 | 94 | 95 | replace_clock_skew(db, "fruit") 96 | 97 | docs = entries.find({"origin_server": "apple"}) 98 | for doc in docs: 99 | logger.debug("Original Date: {}".format(doc["date"])) 100 | #logger.debug("Adjusted Date: {}".format(doc["adjusted_date"])) 101 | delta = abs(original_date - doc["adjusted_date"]) 102 | logger.debug("Delta: {}".format(repr(delta))) 103 | if delta - timedelta(seconds=skew1) < timedelta(milliseconds=1): 104 | assert True 105 | continue 106 | assert False 107 | 108 | 109 | def test_replacing_multiple(self): 110 | # Disabled: replace_clock_skew is not currently used and this test's assertions fail. 111 | return 112 | logger = logging.getLogger(__name__) 113 | servers, entries, clock_skew, db = self.db_setup() 114 | skew = "14" 115 | neg_skew = "-14" 116 | weight = 10 117 | 118 | original_date = datetime.now() 119 | entries.insert(self.generate_doc( 120 | "status", "apple", "STARTUP2", 5, "pear", original_date)) 121 | entries.insert(self.generate_doc( 122 | "status", "pear", "STARTUP2", 5, "apple", original_date)) 123 | entries.insert(self.generate_doc( 124 | "status", "plum", "STARTUP2", 5, "apple", original_date)) 125 | entries.insert(self.generate_doc( 126 | "status", "apple", "STARTUP2", 5, "plum", original_date)) 127 | entries.insert(self.generate_doc( 128 | "status", "pear", "STARTUP2", 5, "plum", original_date)) 129 | entries.insert(self.generate_doc( 130 | "status", "plum", "STARTUP2", 5, "pear", original_date)) 131 | 132 | assign_address(self, 4, "apple", servers) 133 | assign_address(self, 5, "pear", servers) 134 | assign_address(self, 6, "plum", servers) 135 | 136 | doc1 = self.generate_cs_doc("5", "4") 137 | doc1["partners"]["4"][skew] = weight 138 | doc1["partners"]["6"] = {} 139 | doc1["partners"]["6"][skew] = weight 140 | clock_skew.insert(doc1) 141 | doc1 = self.generate_cs_doc("4", "5") 142 | doc1["partners"]["6"] = {} 143 | doc1["partners"]["6"][skew] = weight 144 | doc1["partners"]["5"][neg_skew] = weight 145 | clock_skew.insert(doc1) 146 | doc1 = self.generate_cs_doc("6", "5") 147 | doc1["partners"]["4"] = {} 148 | doc1["partners"]["4"][neg_skew] = weight 149 | doc1["partners"]["5"][neg_skew] = weight 150 | clock_skew.insert(doc1) 151 | replace_clock_skew(db, "fruit") 152 | docs = entries.find({"origin_server": "plum"}) 153 | for doc in docs: 154 | logger.debug("Original Date: {}".format(doc["date"])) 155 | logger.debug("Adjusted Date: {}".format(doc["adjusted_date"])) 156 | delta = abs(original_date - doc["adjusted_date"]) 157 | logger.debug("Delta: {}".format(repr(delta))) 158 | if delta - timedelta(seconds=int(skew)) < timedelta(milliseconds=1): 159 | assert True 160 | continue 161 | assert False 162 | 163 | docs = entries.find({"origin_server": "apple"}) 164 | for doc in docs: 165 | logger.debug("Original Date: {}".format(doc["date"])) 166 | 
logger.debug("Adjusted Date: {}".format(doc["adjusted_date"])) 167 | delta = abs(original_date - doc["adjusted_date"]) 168 | logger.debug("Delta: {}".format(repr(delta))) 169 | if delta - timedelta(seconds=int(skew)) < timedelta(milliseconds=1): 170 | assert True 171 | continue 172 | assert False 173 | 174 | docs = entries.find({"origin_server": "pear"}) 175 | 176 | for doc in docs: 177 | if not "adjusted_date" in doc: 178 | assert True 179 | continue 180 | assert False 181 | 182 | 183 | def generate_doc(self, type, server, label, code, target, date): 184 | """Generate an entry""" 185 | doc = {} 186 | doc["type"] = type 187 | doc["origin_server"] = server 188 | doc["info"] = {} 189 | doc["info"]["state"] = label 190 | doc["info"]["state_code"] = code 191 | doc["info"]["server"] = target 192 | doc["date"] = date 193 | return doc 194 | 195 | # anatomy of a clock skew document: 196 | # document = { 197 | # "type" = "clock_skew" 198 | # "server_name" = "name" 199 | # "partners" = { 200 | # server_name : { 201 | # "skew_1" : weight, 202 | # "skew_2" : weight... 203 | # } 204 | # } 205 | 206 | 207 | def generate_cs_doc(self, name, referal): 208 | doc = {} 209 | doc["type"] = "clock_skew" 210 | doc["server_num"] = name 211 | doc["partners"] = {} 212 | doc["partners"][referal] = {} 213 | return doc 214 | 215 | if __name__ == 'main': 216 | unittest.main() 217 | -------------------------------------------------------------------------------- /test/test_rs_exit.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import unittest 16 | 17 | from edda.filters.rs_exit import * 18 | from datetime import datetime 19 | 20 | 21 | class test_rs_exit(unittest.TestCase): 22 | def test_criteria(self): 23 | assert not criteria("this should not pass") 24 | assert criteria("Thu Jun 14 11:43:28 dbexit: really exiting now") == 1 25 | assert not criteria("Foo bar") 26 | 27 | 28 | def test_process(self): 29 | date = datetime.now() 30 | self.check_state("Thu Jun 14 11:43:28 dbexit: really exiting now", 2, date) 31 | self.check_state("Thu Jun 14 11:43:28 dbexit: really exiting now", 2, date) 32 | assert not process("This should fail", date) 33 | 34 | 35 | def check_state(self, message, code, date): 36 | doc = process(message, date) 37 | print doc 38 | assert doc 39 | assert doc["type"] == "exit" 40 | assert doc["msg"] == message 41 | assert doc["info"]["server"] == "self" 42 | #print 'Server number is: *{0}*, testing against, *{1}*'.format(doc["info"]["server"], server) 43 | 44 | if __name__ == '__main__': 45 | unittest.main() 46 | -------------------------------------------------------------------------------- /test/test_rs_reconfig.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 
2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import string 16 | import unittest 17 | 18 | from datetime import datetime 19 | from edda.filters.rs_reconfig import * 20 | 21 | 22 | class test_rs_reconfig(unittest.TestCase): 23 | def test_criteria(self): 24 | assert not criteria("this should not pass") 25 | assert criteria("Tue Jul 3 10:20:15 [rsMgr]" 26 | " replSet replSetReconfig new config saved locally") == 1 27 | assert not criteria("Tue Jul 3 10:20:15 [rsMgr]" 28 | " replSet new config saved locally") 29 | assert not criteria("Tue Jul 3 10:20:15 [rsMgr] replSet info : additive change to configuration") 30 | 31 | def test_process(self): 32 | date = datetime.now() 33 | self.check_state("Tue Jul 3 10:20:15 [rsMgr] replSet" 34 | " replSetReconfig new config saved locally", 0, date, None) 35 | assert process("This should fail", date) == None 36 | 37 | def check_state(self, message, code, date, server): 38 | doc = process(message, date) 39 | assert doc 40 | assert doc["date"] == date 41 | assert doc["type"] == "reconfig" 42 | assert doc["msg"] == message 43 | assert doc["info"]["server"] == "self" 44 | 45 | if __name__ == '__main__': 46 | unittest.main() 47 | -------------------------------------------------------------------------------- /test/test_rs_status.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | import unittest 16 | from edda.filters.rs_status import * 17 | from datetime import datetime 18 | 19 | 20 | class test_rs_status(unittest.TestCase): 21 | def test_criteria(self): 22 | """test the criteria() method of this module""" 23 | # invalid messages 24 | assert criteria("this is an invalid message") < 0 25 | assert criteria("I am the primary") < 0 26 | assert criteria("I am the secondary") < 0 27 | assert criteria("the server is down") < 0 28 | assert criteria("the server is back up") < 0 29 | # check for proper return codes 30 | assert criteria( 31 | "Mon Jun 11 15:56:16 [rsStart] replSet I am localhost:27018") == 0 32 | assert criteria("Mon Jun 11 15:57:04 [rsMgr] replSet PRIMARY") == 1 33 | assert criteria("Mon Jun 11 15:56:16 [rsSync] replSet SECONDARY") == 2 34 | assert criteria("replSet RECOVERING") == 3 35 | assert criteria("replSet encountered a FATAL ERROR") == 4 36 | assert criteria("Mon Jun 11 15:56:16 [rsStart] replSet STARTUP2") == 5 37 | assert criteria("replSet member is now in state UNKNOWN") == 6 38 | assert criteria("Mon Jun 11 15:56:18 [rsHealthPoll] " 39 | "replSet member localhost:27019 is now in state ARBITER") == 7 40 | assert criteria("Mon Jun 11 15:56:58 [rsHealthPoll] " 41 | "replSet member localhost:27017 is now in state DOWN") == 8 42 | assert criteria("replSet member is now in state ROLLBACK") == 9 43 | assert criteria("replSet member is now in state REMOVED") == 10 44 | return 45 | 46 | def test_process(self): 47 | """test the process() method of this module""" 48 | date = datetime.now() 49 | # invalid lines 50 | assert process("Mon Jun 11 15:56:16 " 51 | "[rsStart] replSet localhost:27018", date) == None 52 | assert process("Mon Jun 11 15:56:18 " 53 | "[rsHealthPoll] replSet member localhost:27019 is up", date) == None 54 | # valid lines 55 | self.check_state("Mon Jun 11 15:56:16 " 56 | "[rsStart] replSet I am", "STARTUP1", 0, "self") 57 | self.check_state("[rsMgr] replSet PRIMARY", "PRIMARY", 1, "self") 58 | self.check_state("[rsSync] replSet SECONDARY", "SECONDARY", 2, "self") 59 | self.check_state("[rsSync] replSet is RECOVERING", "RECOVERING", 3, "self") 60 | self.check_state("[rsSync] replSet member " 61 | "encountered FATAL ERROR", "FATAL", 4, "self") 62 | self.check_state("[rsStart] replSet STARTUP2", "STARTUP2", 5, "self") 63 | self.check_state( 64 | "Mon Jul 11 11:56:32 [rsSync] replSet member" 65 | " 10.4.3.56:45456 is now in state UNKNOWN", 66 | "UNKNOWN", 6, "10.4.3.56:45456") 67 | self.check_state("Mon Jul 11 11:56:32" 68 | " [rsHealthPoll] replSet member localhost:27019" 69 | " is now in state ARBITER", "ARBITER", 7, "localhost:27019") 70 | self.check_state("Mon Jul 11 11:56:32" 71 | " [rsHealthPoll] replSet member " 72 | "localhost:27017 is now in state DOWN", "DOWN", 8, "localhost:27017") 73 | self.check_state("Mon Jul 11 11:56:32" 74 | " [rsSync] replSet member example@domain.com:22234" 75 | " is now in state ROLLBACK", "ROLLBACK", 9, "example@domain.com:22234") 76 | self.check_state("Mon Jul 11 11:56:32" 77 | " [rsSync] replSet member my-MacBook-pro:43429 has been REMOVED" 78 | "", "REMOVED", 10, "my-MacBook-pro:43429") 79 | 80 | 81 | def test_startup_with_network_name(self): 82 | """Test programs's ability to capture IP address from 83 | a STARTUP message""" 84 | self.check_state_with_addr("Mon Jun 11 15:56:16 [rsStart]" 85 | " replSet I am 10.4.65.7:27018", "STARTUP1", 0, "10.4.65.7:27018") 86 | 87 | 88 | def test_startup_with_hostname(self): 89 | """Test that program captures hostnames from 90 | STARTUP messages as well as 
IP addresses""" 91 | self.check_state_with_addr("Mon Jun 11 15:56:16 [rsStart]" 92 | " replSet I am sam@10gen.com:27018", "STARTUP1", 0, "sam@10gen.com:27018") 93 | 94 | 95 | def check_state_with_addr(self, msg, state, code, server): 96 | date = datetime.now() 97 | doc = process(msg, date) 98 | assert doc 99 | assert doc["type"] == "status" 100 | assert doc["info"]["state_code"] == code 101 | assert doc["info"]["state"] == state 102 | assert doc["info"]["server"] == "self" 103 | assert doc["info"]["addr"] == server 104 | 105 | 106 | def check_state(self, msg, state, code, server): 107 | """Helper method to test documents generated by rs_status.process()""" 108 | date = datetime.now() 109 | doc = process(msg, date) 110 | assert doc 111 | assert doc["type"] == "status" 112 | assert doc["info"]["state_code"] == code 113 | assert doc["info"]["state"] == state 114 | assert doc["info"]["server"] == server 115 | 116 | if __name__ == '__main__': 117 | unittest.main() 118 | -------------------------------------------------------------------------------- /test/test_rs_sync.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | import unittest 16 | 17 | from datetime import datetime 18 | from edda.filters.rs_sync import * 19 | 20 | 21 | class test_rs_sync(unittest.TestCase): 22 | def test_criteria(self): 23 | assert (criteria("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: localhost:27017") == 1) 24 | #should fail, absence of word "syncing": malformed message 25 | assert not criteria("Tue Jun 12 13:08:47 [rsSync] replSet to: localhost:27017") 26 | #should fail, absence of [rsSync]: malformed message 27 | assert not criteria("Tue Jun 12 13:08:47 replSet to: localhost:27017") 28 | #should pass, it doesn't test to see if there is a valid port number until test_syncingDiff: malformed message to fail at another point 29 | assert criteria("Tue Jun 12 13:08:47 [rsSync] replSet syncing to:") == 1 30 | #should pass in this situation, date is irrevealant 31 | assert criteria("[rsSync] replSet syncing to: localhost:27017") == 1 32 | #foo bar test from git comment 33 | assert not criteria("foo bar") 34 | assert criteria("[rsSync] replSet syncing to:") == 1 35 | assert criteria("[rsSync] syncing [rsSync]") == 1 36 | assert not criteria("This should fail!!! 
[rsSync]") 37 | return 38 | 39 | 40 | def test_process(self): 41 | date = datetime.now() 42 | assert process("Mon Jun 11 15:56:16 [rsStart] replSet localhost:27018", date) == None 43 | assert process("Mon Jun 11 15:56:18 [rsHealthPoll] replSet member localhost:27019 is up", date) == None 44 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: localhost:27017", "localhost:27017") 45 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: 10.4.3.56:45456", "10.4.3.56:45456") 46 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: 10.4.3.56:45456", "10.4.3.56:45456") 47 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: 10.4.3.56:45456", "10.4.3.56:45456") 48 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: 10.4.3.56:45456", "10.4.3.56:45456") 49 | self.check_state("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: localhost:1234", "localhost:1234") 50 | self.check_state("[rsSync] syncing to: 10.4.3.56:45456", "10.4.3.56:45456") 51 | 52 | 53 | def test_syncing_diff(self): 54 | 55 | currTime = datetime.now() 56 | test = syncing_diff("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: localhost:27017", process("Tue Jun 12 13:08:47 [rsSync] replSet syncing to: localhost:27017", currTime)) 57 | assert test 58 | assert test["type"] == 'sync' 59 | 60 | 61 | def check_state(self, message, server): 62 | date = datetime.now() 63 | doc = process(message, date) 64 | assert doc["type"] == "sync" 65 | #print 'Server number is: *{0}*, testing against, *{1}*'.format(doc["info"]["server"], server) 66 | assert doc["info"]["sync_server"] == server 67 | assert doc["info"]["server"] == "self" 68 | 69 | if __name__ == '__main__': 70 | unittest.main() 71 | -------------------------------------------------------------------------------- /test/test_stale_secondary.py: -------------------------------------------------------------------------------- 1 | # Copyright 2012 10gen, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | import unittest 16 | from edda.filters.stale_secondary import * 17 | from datetime import datetime 18 | 19 | 20 | class test_stale_secondary(unittest.TestCase): 21 | def test_criteria(self): 22 | """Test the criteria() method of stale_secondary.py""" 23 | assert criteria("this should not pass") == 0 24 | assert criteria("Thu Sep 9 17:22:46 [rs_sync] replSet error RS102 too stale to catch up") == 1 25 | assert criteria("Thu Sep 9 17:24:46 [rs_sync] replSet error RS102 too stale to catch up, at least from primary: 127.0.0.1:30000") == 1 26 | 27 | 28 | def test_process(self): 29 | """Test the process() method of stale_secondary.py""" 30 | date = datetime.now() 31 | self.check_state("Thu Sep 9 17:22:46 [rs_sync] replSet error RS102 too stale to catch up", 0, date) 32 | self.check_state("Thu Sep 9 17:24:46 [rs_sync] replSet error RS102 too stale to catch up, at least from primary: 127.0.0.1:30000", 0, date) 33 | self.check_state("Thu Sep 9 17:24:46 [rs_sync] replSet error RS102 too stale to catch up, at least from primary: sam@10gen.com:27017", 0, date) 34 | assert process("This should fail", date) == None 35 | 36 | 37 | def check_state(self, message, code, date): 38 | """Helper method for tests""" 39 | doc = process(message, date) 40 | assert doc 41 | assert doc["type"] == "stale" 42 | assert doc["msg"] == message 43 | assert doc["info"]["server"] == "self" 44 | 45 | if __name__ == '__main__': 46 | unittest.main() 47 | -------------------------------------------------------------------------------- /test/test_supporting_methods.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014 MongoDB, Inc. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | import unittest 16 | from datetime import datetime 17 | from edda.supporting_methods import * 18 | 19 | # Some months for testing 20 | JUL = 7 21 | MAY = 5 22 | 23 | # Some weekdays for testing 24 | MON = 0 25 | TUE = 1 26 | WED = 2 27 | THU = 3 28 | FRI = 4 29 | SAT = 5 30 | SUN = 6 31 | 32 | class test_supporting_methods(unittest.TestCase): 33 | 34 | #-------------------------------- 35 | # test date-related functionality 36 | #-------------------------------- 37 | 38 | def test_has_same_weekday(self): 39 | # The equivalent weekday in different years 40 | assert has_same_weekday(15, JUL, WED, 2009) 41 | assert has_same_weekday(14, JUL, WED, 2010) 42 | assert has_same_weekday(13, JUL, WED, 2011) 43 | assert has_same_weekday(18, JUL, WED, 2012) 44 | assert has_same_weekday(17, JUL, WED, 2013) 45 | assert has_same_weekday(16, JUL, WED, 2014) 46 | 47 | # The same date in multiple years 48 | assert has_same_weekday(31, MAY, SUN, 2009) 49 | assert has_same_weekday(31, MAY, MON, 2010) 50 | assert has_same_weekday(31, MAY, TUE, 2011) 51 | assert has_same_weekday(31, MAY, THU, 2012) 52 | assert has_same_weekday(31, MAY, FRI, 2013) 53 | assert has_same_weekday(31, MAY, SAT, 2014) 54 | 55 | # Wrong things 56 | assert not has_same_weekday(31, MAY, SAT, 2009) 57 | assert not has_same_weekday(31, MAY, SUN, 2008) 58 | assert not has_same_weekday(30, MAY, MON, 2010) 59 | assert not has_same_weekday(30, MAY, THU, 2011) 60 | assert not has_same_weekday(30, MAY, FRI, 2013) 61 | assert not has_same_weekday(30, MAY, SAT, 2014) 62 | 63 | def test_guess_log_year(self): 64 | assert guess_log_year(15, JUL, WED) == 2009 65 | assert guess_log_year(14, JUL, WED) == 2010 66 | assert guess_log_year(13, JUL, WED) == 2011 67 | assert guess_log_year(18, JUL, WED) == 2012 68 | assert guess_log_year(17, JUL, WED) == 2013 69 | assert guess_log_year(16, JUL, WED) == 2014 70 | 71 | assert not guess_log_year(16, JUL, WED) == 2009 72 | assert not guess_log_year(16, JUL, WED) == 2010 73 | assert not guess_log_year(16, JUL, WED) == 2011 74 | assert not guess_log_year(16, JUL, WED) == 2012 75 | assert not guess_log_year(16, JUL, WED) == 2013 76 | 77 | # July 16 also fell on a Wednesday in 2008, but the more recent match (2014) takes precedence. 78 | assert not guess_log_year(16, JUL, WED) == 2008 79 | 80 | def test_date_parser_old_logs(self): 81 | parsed_date = date_parser("Wed Jul 18 13:14:15") 82 | assert parsed_date.day == 18 83 | assert parsed_date.year == 2012 84 | assert parsed_date.month == 7 85 | assert parsed_date.hour == 13 86 | assert parsed_date.minute == 14 87 | assert parsed_date.second == 15 88 | 89 | def test_date_parser_new_logs(self): 90 | # log format used by MongoDB 2.6 and beyond 91 | parsed_date = date_parser("2014-04-10T16:20:59.271-0400") 92 | assert parsed_date.year == 2014 93 | assert parsed_date.month == 4 94 | assert parsed_date.day == 10 95 | assert parsed_date.hour == 16 96 | assert parsed_date.minute == 20 97 | 98 | if __name__ == '__main__': 99 | unittest.main() 100 | --------------------------------------------------------------------------------