├── .gitignore ├── .gitmodules ├── LICENSE ├── README.md ├── __init__.py ├── example.ini ├── examples ├── DieHard │ ├── DieHard-simulate-replace-init.ini │ ├── DieHard.ini │ ├── DieHard.tla │ ├── DieHard_json.ini │ ├── DieHard_json.tla │ ├── MC.out │ ├── README │ ├── python_init.py │ └── run.sh ├── Makefile └── TPaxos │ ├── README │ ├── TPaxos-simulate.ini │ ├── TPaxos.ini │ ├── TPaxos.tla │ └── run.sh ├── requirements.txt ├── setup.py ├── tlcwrapper.py ├── trace_action_counter.py ├── trace_counter.py ├── trace_generator.py └── trace_reader.py /.gitignore: -------------------------------------------------------------------------------- 1 | .idea 2 | model_* 3 | states/* 4 | *.jar 5 | MC_* 6 | trace_* 7 | *.toolbox 8 | .vscode 9 | !*.py 10 | __pycache__ 11 | tlccmd.egg-info 12 | dist 13 | build -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "spssh"] 2 | path = spssh 3 | url = https://github.com/tangruize/spssh.git 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 Ruize Tang 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TLC cmd tools 2 | 3 | Welcome! Here you'll find scripts that automate TLC config file generation, batch model processing, simulation trace conversion to JSON, and Graphviz dot file conversion into traces. 4 | 5 | Features: 6 | 7 | - Comprehensive support for nearly all options in the TLA+ toolbox model checking panel 8 | - Efficient batch processing of models 9 | - Convenient saving of batch result tables 10 | - Initialization of states from trace files 11 | - Script for reading trace files into Python objects or JSON dumps 12 | - Conversion of Graphviz dot state graph files into unique traces 13 | - Seamless execution in distributed mode for easy scalability 14 | 15 | ## Dependencies 16 | 17 | python3+, Java 11+ 18 | 19 | Optional modules: 20 | 21 | ```sh 22 | pip3 install requests # to download tla2tools.jar 23 | pip3 install psutil # for "memory ratio" option (see example.ini) 24 | pip3 install networkx # for conversion of Graphviz dot state graph file 25 | git submodule update --init --recursive # for distributed mode 26 | ``` 27 | 28 | ## How to run 29 | 30 | ### tlcwrapper.py 31 | 32 | ```txt 33 | usage: tlcwrapper.py [-h] [-j CLASSPATH] [-g] [-r] [-s] [-d] [-c] [-m] [-n] [config.ini] 34 | 35 | Run TLC in CMD 36 | 37 | positional arguments: 38 | config.ini Configuration file (if not 
provided, stdin is used) 39 | 40 | options: 41 | -h, --help show this help message and exit 42 | -j CLASSPATH Java classpath to use 43 | -g Generate TLC config files and print Java CMD strings 44 | -r Run without processing TLC output 45 | -s Do not save summary file 46 | -d Download tla2tools.jar and CommunityModules-deps.jar and exit 47 | -c Separate constants and model options into two files 48 | -m Require community modules 49 | -n Do not print debug messages 50 | ``` 51 | 52 | An example: [DieHard/run.sh](./examples/DieHard/run.sh) 53 | 54 | ### trace_reader.py 55 | 56 | ```txt 57 | usage: trace_reader.py [-h] [-o JSON_FILE] [-i INDENT] [-p HANDLER] [-a] [-d] [-s] [-g] trace_file 58 | 59 | Read TLA traces into Python objects 60 | 61 | positional arguments: 62 | trace_file TLA trace file 63 | 64 | options: 65 | -h, --help show this help message and exit 66 | -o JSON_FILE output to json file 67 | -i INDENT json file indent 68 | -p HANDLER python user_dict and list/kv handlers 69 | -a save action name in '_action' key if available 70 | -d make data structures hashable 71 | -s sort dict by keys, true if -d is defined 72 | -g get dot file graph 73 | ``` 74 | 75 | The trace file can be either the MC.out file or generated through the "simulation dump traces" option. 76 | 77 | Python `-p` handler example: 78 | 79 | ```py 80 | # 'user_dict' replaces dict and list values (example: replace string 'None' with Python None obj) 81 | user_dict = {'None': None} 82 | 83 | # 'outside_kv_handler' replaces keys and values of the outside dicts (whose keys are TLA+ variable names) 84 | # 'inside_kv_handler' replaces keys and values of the inside dicts/TLA+ records (nested in variables) 85 | # Example: if key (TLA+ variable name) is not "state", return (k, v) unchanged; otherwise change the value.
86 | def outside_kv_handler(k, v): 87 | if k != "state": 88 | return k, v 89 | else: 90 | return k, "value is changed" 91 | 92 | # 'list_handler' gets a TLA+ set/sequence as list `l` with type annotation `k` (set/seq), 93 | # and returns the data after processing 94 | def list_handler(l, k): 95 | if k == "set": 96 | return l 97 | else: 98 | return [ "changed" ] 99 | ``` 100 | 101 | ### trace_counter.py 102 | 103 | ```txt 104 | usage: trace_counter.py [-h] [-p NPROC] [-n NTRACE] [-l LOGFILE] [-f HASHFILE] [-r] trace_dir 105 | 106 | Simulation unique traces and distinct states counter 107 | 108 | positional arguments: 109 | trace_dir Trace dir 110 | 111 | options: 112 | -h, --help show this help message and exit 113 | -p NPROC Number of processes 114 | -n NTRACE Print progress every n traces 115 | -l LOGFILE Log output to file 116 | -f HASHFILE Hash file 117 | -r Reduce only 118 | ``` 119 | 120 | ### trace_generator.py 121 | 122 | ```txt 123 | usage: trace_generator.py [-h] [-p NPROC] [-s SAVE_DIR] dot_file 124 | 125 | Generate all simple paths of a dot file 126 | 127 | positional arguments: 128 | dot_file Dot file 129 | 130 | options: 131 | -h, --help show this help message and exit 132 | -p NPROC Number of processes 133 | -s SAVE_DIR Save all generated traces 134 | ``` 135 | 136 | ## How to write config.ini 137 | 138 | See [example.ini](./example.ini) 139 | 140 | An example: [TPaxos/TPaxos-simulate.ini](./examples/TPaxos/TPaxos-simulate.ini) 141 | 142 | An example of replacing Init: 143 | [DieHard/DieHard-simulate-replace-init.ini](./examples/DieHard/DieHard-simulate-replace-init.ini) 144 | 145 | ## Misc 146 | 147 | If you find bugs, you are welcome to submit issues and pull requests! 148 | 149 | If you speak Chinese, this [video tutorial](https://www.bilibili.com/video/BV1B3411r71a) may help you get started.
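To make the handler hooks above concrete, here is a small self-contained sketch, independent of trace_reader.py, of how a `user_dict`, an outside kv handler, and a list handler could transform one parsed state. The `apply_handlers` helper and the sample state are illustrative only; they are not the tool's actual internals, which decide set-vs-sequence and nesting for you.

```python
# Illustrative sketch: how the -p hooks conceptually apply to one parsed
# state (a dict mapping TLA+ variable names to Python values).
user_dict = {'None': None}  # replace the string 'None' with Python None

def outside_kv_handler(k, v):
    # keys here are TLA+ variable names
    if k == 'state':
        return k, 'value is changed'
    return k, v

def list_handler(l, k):
    # l is a TLA+ set/sequence as a Python list, k is 'set' or 'seq'
    return sorted(l) if k == 'set' else l

def apply_handlers(state):
    # hypothetical driver; here every list is treated as a 'set'
    out = {}
    for k, v in state.items():
        if isinstance(v, str):
            v = user_dict.get(v, v)
        if isinstance(v, list):
            v = list_handler(v, 'set')
        out.update([outside_kv_handler(k, v)])
    return out

parsed = {'state': 'leader', 'voted': 'None', 'members': [3, 1, 2]}
print(apply_handlers(parsed))
# {'state': 'value is changed', 'voted': None, 'members': [1, 2, 3]}
```

In the real tool you only define the hook functions in the `-p` file; trace_reader.py supplies the driving logic.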
150 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tangruize/tlc-cmd/b11b98bf1e7ee69d6ea1f9490b9cb9961ca1fa94/__init__.py -------------------------------------------------------------------------------- /example.ini: -------------------------------------------------------------------------------- 1 | ; example.ini 2 | ; Sample tlcwrapper.py configuration file 3 | 4 | [options] ; TLC cmd arguments 5 | ; "target" specifies the top module TLA+ file 6 | target: path/to/top_module.tla 7 | ; "model name" creates a directory "path/to/model_name" + "_timestamp" + "_seq" and copies all tla files to it 8 | model name: model_name 9 | ; default options are as follows (optional) 10 | 11 | ; "workers" number of workers, default is 1 (use "auto" to automatically select the number of threads) 12 | workers: 1 13 | ; SHOW_IN_TABLE shows the option in the summary table; it must appear immediately after the option it refers to 14 | workers: SHOW_IN_TABLE 15 | ; "dfs depth" sets model checking search mode to dfs (default is bfs), and sets the search depth 16 | dfs depth: 100 17 | ; "simulation depth" enables simulation mode and sets the search depth 18 | simulation depth: 100 19 | ; "simulation traces" generates n*worker traces (default is infinite), also enables simulation mode 20 | simulation traces: 0 21 | ; "simulation dump traces" saves traces, also enables simulation mode 22 | simulation dump traces: false/true 23 | ; "simulation seed" sets the seed for random simulation (can be used to reproduce an experiment), also enables simulation mode 24 | simulation seed: 0 25 | ; "check deadlock" whether or not to check deadlock, default is false 26 | check deadlock: false/true 27 | ; "checkpoint minute" interval between checkpoints, default is 30 28 | checkpoint minute: 30 29 | ; "recover" recovers from the checkpoint with the specified id (i.e.
path, recommended to use an absolute path), default is not set 30 | recover: /path/to/yyyy-MM-dd-HH-mm-ss.SSS 31 | ; "clean up" cleans up the states directory (only when not recovering), default is false 32 | clean up: true/false 33 | ; "gzip" controls if gzip is applied to value input/output (checkpoint) streams, default is false (true saves ~88% disk space) 34 | gzip: false/true 35 | ; "dump states" saves states to "MC_states.dump" or "MC_states.dot". value range: "true", "dot", or "false" (default) 36 | dump states: false/true/dot/dot,colorize,actionlabels,stuttering 37 | ; "coverage minute" sets TLC to compute coverage every n minutes, default is disabled 38 | coverage minute: 1 39 | ; "system memory" physical memory to use (MB) 40 | system memory: 4000 41 | ; "memory ratio" physical memory ratio to use (0..1) (overrides "system memory") 42 | memory ratio: 0.4 43 | ; "community modules" whether or not to use community modules, default is false 44 | community modules: false/true 45 | ; "generate spec TE" generates a trace exploration (TE) spec, default is false 46 | generate spec TE: false/true 47 | ; "dump trace" formats the TLA+ error trace as TLA and JSON, default is false 48 | dump trace: false/true 49 | ; "stop after" TLC stops after n seconds 50 | stop after: 600 51 | ; "liveness check" checks liveness properties at different times of model checking 52 | liveness check: default/final/seqfinal 53 | ; "diff trace" when printing the trace, show only the differences between successive states 54 | diff trace: false/true 55 | ; "distributed mode" runs an ad hoc TLC distributed server, the value should be hostname/ip (not true) 56 | distributed mode: false/true/hostname/ip 57 | ; distributed worker types: "distributed TLC and fingerprint" is recommended 58 | distributed TLC workers: user1@host1 [memory] [thread count] 59 | user2@host2 [memory] [thread count] 60 | distributed fingerprint server: user2@host2 61 | distributed TLC and fingerprint: user3@host3 62 | ; "other
TLC options" other TLC options, each argument on its own line, continuation lines starting with a space 63 | other TLC options: field 64 | split by 65 | line 66 | ; "other Java options" other Java options, each argument on its own line, continuation lines starting with a space 67 | other Java options: field 68 | split by 69 | line 70 | 71 | [init state] ; (optional) replace Init with a specific state in the trace file 72 | ; trace file or MC.out (with counterexample traces) to provide a state 73 | trace file: path/to/trace_file 74 | ; the number of the state to select, 0 to select the last state 75 | state: 0 76 | ; python callback handlers to select a specific state 77 | python init file: path/to/python_file 78 | 79 | [behavior] ; what is the behavior spec 80 | ; one or none: (init & next) OR (temporal formula) 81 | #init: Init 82 | #next: Next 83 | temporal formula: Spec 84 | 85 | [invariants] ; (for safety) formulas true in every reachable state 86 | ; format: "NAME: formula" 87 | TypeOK: TypeOK 88 | TCConsistent: TCConsistent 89 | ; Warning: multi-line values have any leading spaces removed 90 | multi_line_inv: /\ multi 91 | /\ line 92 | /\ inv 93 | 94 | [properties] ; (for liveness) temporal formulas true for every possible behavior 95 | ; format is the same as [invariants] 96 | TCSpec: TCSpec 97 | 98 | [state constraint] ; a state constraint is a formula that restricts the possible states by a state predicate 99 | ; format is the same as [invariants] 100 | StateConstraint: StateConstraint 101 | 102 | [action constraint] ; an action constraint is a formula that restricts the possible transitions 103 | ; format is the same as [invariants] 104 | ActionConstraint: ActionConstraint 105 | 106 | [additional definitions] ; definitions required for model checking 107 | Additional: abc == 1 108 | 109 | [constants] ; specify the values of declared constants 110 | ; consecutive identical options are combined by Cartesian product (to support batch mode) 111 | Char: [model value]{a, b} 112 | Char: [model value]{a, b, c} 113 |
Client: [model value]{c1, c2} 114 | Client: [model value]{c1, c2, c3} 115 | Server: [model value] 116 | InitState: <<>> 117 | InitState: SHOW_IN_TABLE 118 | Msg: Msg 119 | 120 | [override] ; direct TLC to use alternate definitions for operators 121 | ; the same as [constants] (but without "set of model values") 122 | Nop: [model value] 123 | Int: -10..10 124 | Int: -1000..1000 125 | 126 | [const expr] ; evaluate constant expression 127 | ; only one option "expr", you cannot define other names 128 | expr: GCD(1,1) 129 | 130 | [alias] 131 | alias: Alias 132 | -------------------------------------------------------------------------------- /examples/DieHard/DieHard-simulate-replace-init.ini: -------------------------------------------------------------------------------- 1 | ; DieHard-simulate-replace-init.ini 2 | 3 | [options] 4 | target: DieHard.tla 5 | model name: model_simulation 6 | workers: 1 7 | workers: SHOW_IN_TABLE 8 | check deadlock: true 9 | system memory: 2000 10 | simulation depth: 20 11 | simulation depth: SHOW_IN_TABLE 12 | simulation dump traces: true 13 | 14 | [init state] 15 | trace file: MC.out 16 | trace file: SHOW_IN_TABLE 17 | python init file: python_init.py 18 | python init file: SHOW_IN_TABLE 19 | 20 | [behavior] 21 | init: Init 22 | next: Next 23 | 24 | [invariants] 25 | TypeOK: TypeOK 26 | BigNE4: big /= 4 27 | -------------------------------------------------------------------------------- /examples/DieHard/DieHard.ini: -------------------------------------------------------------------------------- 1 | ; DieHard.ini 2 | 3 | [options] 4 | target: ./DieHard.tla 5 | model name: model 6 | workers: 4 7 | workers: SHOW_IN_TABLE 8 | check deadlock: true 9 | system memory: 2000 10 | 11 | [behavior] 12 | init: Init 13 | next: Next 14 | 15 | [invariants] 16 | TypeOK: TypeOK 17 | BigNE4: big /= 4 18 | -------------------------------------------------------------------------------- /examples/DieHard/DieHard.tla: 
-------------------------------------------------------------------------------- 1 | ------------------------------ MODULE DieHard ------------------------------ 2 | EXTENDS Integers 3 | 4 | VARIABLES small, big 5 | 6 | TypeOK == /\ small \in 0..3 7 | /\ big \in 0..5 8 | 9 | Init == /\ big = 0 10 | /\ small = 0 11 | 12 | FillSmall == /\ small' = 3 13 | /\ big' = big 14 | 15 | FillBig == /\ big' = 5 16 | /\ small' = small 17 | 18 | EmptySmall == /\ small' = 0 19 | /\ big' = big 20 | 21 | EmptyBig == /\ big' = 0 22 | /\ small' = small 23 | 24 | Min(m, n) == IF m < n THEN m ELSE n 25 | 26 | SmallToBig == 27 | LET poured == Min(big + small, 5) - big 28 | IN /\ big' = big + poured 29 | /\ small' = small - poured 30 | 31 | BigToSmall == 32 | LET poured == Min(big + small, 3) - small 33 | IN /\ big' = big - poured 34 | /\ small' = small + poured 35 | 36 | Next == \/ FillSmall 37 | \/ FillBig 38 | \/ EmptySmall 39 | \/ EmptyBig 40 | \/ SmallToBig 41 | \/ BigToSmall 42 | 43 | ============================================================================= 44 | -------------------------------------------------------------------------------- /examples/DieHard/DieHard_json.ini: -------------------------------------------------------------------------------- 1 | ; DieHard_json.ini 2 | 3 | [options] 4 | target: ./DieHard_json.tla 5 | model name: model_json 6 | workers: 4 7 | workers: SHOW_IN_TABLE 8 | check deadlock: true 9 | system memory: 2000 10 | community modules: true 11 | 12 | [behavior] 13 | init: Init 14 | next: Next 15 | 16 | [invariants] 17 | TypeOK: TypeOK 18 | JsonInv: JsonInv 19 | -------------------------------------------------------------------------------- /examples/DieHard/DieHard_json.tla: -------------------------------------------------------------------------------- 1 | ------------------------------- MODULE DieHard_json ------------------------------- 2 | EXTENDS DieHard, 3 | TLC, 4 | TLCExt, \* Trace operator & 5 | Json \* JsonSerialize operator (both in 
CommunityModules-deps.jar) 6 | 7 | (* 8 | The trick is that TLC evaluates disjunct 'Export' iff 'RealInv' equals FALSE. 9 | JsonInv is the invariant that we have TLC check, i.e. it appears in the config. 10 | *) 11 | JsonInv == 12 | \/ RealInv:: big /= 4 \* The ordinary invariant to check in the DieHard module. 13 | \/ Export:: /\ JsonSerialize("trace.json", Trace) 14 | /\ FALSE \*TLCSet("exit", FALSE) \* Stop model-checking *with* TLC reporting 15 | \* the usual text-based error trace. Replace 16 | \* with TRUE to not print the error trace 17 | \* and terminate with a zero process exit 18 | \* value. 19 | 20 | (* 21 | Grab recent tla2tools.jar and CommunityModules-deps.jar (or Toolbox): 22 | wget -q https://nightly.tlapl.us/dist/tla2tools.jar \ 23 | https://modules.tlapl.us/releases/latest/download/CommunityModules-deps.jar 24 | *) 25 | 26 | ============================================================================= -------------------------------------------------------------------------------- /examples/DieHard/MC.out: -------------------------------------------------------------------------------- 1 | @!@!@STARTMSG 2262:0 @!@!@ 2 | TLC2 Version 2.16 of Day Month 20?? (rev: f56cf14) 3 | @!@!@ENDMSG 2262 @!@!@ 4 | @!@!@STARTMSG 2187:0 @!@!@ 5 | Running breadth-first search Model-Checking with fp 28 and seed -929351563955747441 with 4 workers on 6 cores with 592MB heap and 1332MB offheap memory [pid: 26281] (Linux 5.4.0-80-generic amd64, Private Build 14.0.2 x86_64, OffHeapDiskFPSet, DiskStateQueue). 6 | @!@!@ENDMSG 2187 @!@!@ 7 | @!@!@STARTMSG 2220:0 @!@!@ 8 | Starting SANY... 9 | @!@!@ENDMSG 2220 @!@!@ 10 | Semantic processing of module Naturals 11 | Semantic processing of module Integers 12 | Semantic processing of module DieHard 13 | Semantic processing of module Sequences 14 | Semantic processing of module FiniteSets 15 | Semantic processing of module TLC 16 | Semantic processing of module MC 17 | @!@!@STARTMSG 2219:0 @!@!@ 18 | SANY finished.
19 | @!@!@ENDMSG 2219 @!@!@ 20 | @!@!@STARTMSG 2185:0 @!@!@ 21 | Starting... (2021-08-30 16:01:07) 22 | @!@!@ENDMSG 2185 @!@!@ 23 | @!@!@STARTMSG 2189:0 @!@!@ 24 | Computing initial states... 25 | @!@!@ENDMSG 2189 @!@!@ 26 | @!@!@STARTMSG 2190:0 @!@!@ 27 | Finished computing initial states: 1 distinct state generated at 2021-08-30 16:01:09. 28 | @!@!@ENDMSG 2190 @!@!@ 29 | @!@!@STARTMSG 2110:1 @!@!@ 30 | Invariant inv_BigNE4 is violated. 31 | @!@!@ENDMSG 2110 @!@!@ 32 | @!@!@STARTMSG 2121:1 @!@!@ 33 | The behavior up to this point is: 34 | @!@!@ENDMSG 2121 @!@!@ 35 | @!@!@STARTMSG 2217:4 @!@!@ 36 | 1: 37 | /\ big = 0 38 | /\ small = 0 39 | 40 | @!@!@ENDMSG 2217 @!@!@ 41 | @!@!@STARTMSG 2217:4 @!@!@ 42 | 2: 43 | /\ big = 5 44 | /\ small = 0 45 | 46 | @!@!@ENDMSG 2217 @!@!@ 47 | @!@!@STARTMSG 2217:4 @!@!@ 48 | 3: 49 | /\ big = 2 50 | /\ small = 3 51 | 52 | @!@!@ENDMSG 2217 @!@!@ 53 | @!@!@STARTMSG 2217:4 @!@!@ 54 | 4: 55 | /\ big = 2 56 | /\ small = 0 57 | 58 | @!@!@ENDMSG 2217 @!@!@ 59 | @!@!@STARTMSG 2217:4 @!@!@ 60 | 5: 61 | /\ big = 0 62 | /\ small = 2 63 | 64 | @!@!@ENDMSG 2217 @!@!@ 65 | @!@!@STARTMSG 2217:4 @!@!@ 66 | 6: 67 | /\ big = 5 68 | /\ small = 2 69 | 70 | @!@!@ENDMSG 2217 @!@!@ 71 | @!@!@STARTMSG 2217:4 @!@!@ 72 | 7: 73 | /\ big = 4 74 | /\ small = 3 75 | 76 | @!@!@ENDMSG 2217 @!@!@ 77 | @!@!@STARTMSG 2200:0 @!@!@ 78 | Progress(9) at 2021-08-30 16:01:09: 84 states generated (2,721 s/min), 16 distinct states found (518 ds/min), 0 states left on queue. 79 | @!@!@ENDMSG 2200 @!@!@ 80 | @!@!@STARTMSG 2199:0 @!@!@ 81 | 84 states generated, 16 distinct states found, 0 states left on queue. 82 | @!@!@ENDMSG 2199 @!@!@ 83 | @!@!@STARTMSG 2194:0 @!@!@ 84 | The depth of the complete state graph search is 9. 
85 | @!@!@ENDMSG 2194 @!@!@ 86 | @!@!@STARTMSG 2186:0 @!@!@ 87 | Finished in 1876ms at (2021-08-30 16:01:09) 88 | @!@!@ENDMSG 2186 @!@!@ 89 | -------------------------------------------------------------------------------- /examples/DieHard/README: -------------------------------------------------------------------------------- 1 | https://github.com/tlaplus/Examples/tree/master/specifications/DieHard 2 | -------------------------------------------------------------------------------- /examples/DieHard/python_init.py: -------------------------------------------------------------------------------- 1 | def init_replace_init(ri): 2 | def choose_state(state_num: int, state_dict: dict, state_str: str, is_last_state: bool) -> str: 3 | if state_dict['BIG'] != 0 or is_last_state: 4 | return state_str.replace('/', ' /') # custom state_str 5 | return '' 6 | 7 | ri.set_chooser_handler(choose_state) 8 | 9 | 10 | def init_trace_reader(tr): 11 | def kv_handler(k, v): 12 | if k != 'big': 13 | return k, v 14 | return 'BIG', v # change variable name 'big' to 'BIG' 15 | 16 | tr.set_kv_handler(kv_handler) 17 | -------------------------------------------------------------------------------- /examples/DieHard/run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | python3 ../../tlcwrapper.py DieHard.ini 4 | MODEL_DIR=$(ls -dt model_* | head -1) 5 | python3 ../../trace_reader.py ${MODEL_DIR}/MC.out -o ${MODEL_DIR}/MC.json -i 2 6 | cat ${MODEL_DIR}/MC.json 7 | python3 ../../tlcwrapper.py DieHard-simulate-replace-init.ini 8 | python3 ../../tlcwrapper.py DieHard_json.ini 9 | MODEL_DIR=$(ls -dt model_* | head -1) 10 | cat ${MODEL_DIR}/trace.json 11 | -------------------------------------------------------------------------------- /examples/Makefile: -------------------------------------------------------------------------------- 1 | run: 2 | @for i in *; do if [ ! 
-d $$i ]; then continue; fi; echo "======== $$i ========"; cd $$i; ./run.sh; cd - > /dev/null; done 3 | 4 | clean: 5 | @find -maxdepth 2 \( -name model_\* -o -name MC_summary_\* -o -name __pycache__ \) -exec rm -rv '{}' + 6 | -------------------------------------------------------------------------------- /examples/TPaxos/README: -------------------------------------------------------------------------------- 1 | https://github.com/tlaplus/Examples/tree/master/specifications/TencentPaxos 2 | -------------------------------------------------------------------------------- /examples/TPaxos/TPaxos-simulate.ini: -------------------------------------------------------------------------------- 1 | ; TPaxos-simulate.ini 2 | 3 | [options] 4 | target: ./TPaxos.tla 5 | model name: model_simulation 6 | workers: auto 7 | coverage min: 1 8 | memory ratio: 0.3 9 | simulation depth: 100 10 | simulation traces: 10 11 | simulation dump traces: true 12 | 13 | [behavior] 14 | temporal formula: Spec 15 | 16 | [invariants] 17 | Consistency: Consistency 18 | 19 | [constants] 20 | Participant: [model value]{p1, p2, p3} 21 | Participant: SHOW_IN_TABLE 22 | Value: [model value]{v1, v2} 23 | Value: [model value]{v1, v2, v3} 24 | 25 | [override] 26 | None: [model value] 27 | Ballot: 0..2 28 | Ballot: 0..3 29 | -------------------------------------------------------------------------------- /examples/TPaxos/TPaxos.ini: -------------------------------------------------------------------------------- 1 | ; TPaxos.ini 2 | 3 | [options] 4 | target: ./TPaxos.tla 5 | model name: model 6 | workers: 6 7 | workers: SHOW_IN_TABLE 8 | coverage min: 1 9 | memory ratio: 0.4 10 | 11 | [behavior] 12 | temporal formula: Spec 13 | 14 | [invariants] 15 | Invariant: Consistency 16 | Invariant: SHOW_IN_TABLE 17 | 18 | [state constraint] 19 | SC: TLCSet("exit", TLCGet("distinct") > 50000) 20 | 21 | [constants] 22 | Participant: [model value]{p1, p2} 23 | Participant: [model value]{p1, p2, p3} 24 | Value: [model value]{v1,
v2} 25 | Value: [model value]{v1, v2, v3} 26 | 27 | [override] 28 | None: [model value] 29 | Ballot: 0..2 30 | -------------------------------------------------------------------------------- /examples/TPaxos/TPaxos.tla: -------------------------------------------------------------------------------- 1 | ------------------------------ MODULE TPaxos -------------------------------- 2 | (* 3 | Specification of the consensus protocol in PaxosStore. 4 | 5 | See [PaxosStore@VLDB2017](https://www.vldb.org/pvldb/vol10/p1730-lin.pdf) 6 | by Tencent. 7 | 8 | TPaxos is a variant of Basic Paxos: every server maintains a two-dimensional 9 | array containing the real state of itself and the view state from itself to 10 | other servers. Both the real state and the view state are triples as in 11 | Basic Paxos, <<maxBal, maxVBal, maxVVal>>, where maxBal is the maximum ballot number that the acceptor promised, 12 | and <<maxVBal, maxVVal>> is the latest proposal that the acceptor accepted. The algorithm reaches 13 | consensus by means of updating the real state and the view state continually. 14 | *) 15 | EXTENDS Integers, FiniteSets 16 | ----------------------------------------------------------------------------- 17 | CONSTANTS 18 | Participant, \* the set of participants 19 | Value \* the set of possible input values for Participant to propose 20 | 21 | None == CHOOSE b : b \notin Value \* an unspecified value that is not in Value 22 | NP == Cardinality(Participant) \* the number of participants 23 | 24 | \* We generate the quorum system instead of inputting it manually 25 | Quorum == {Q \in SUBSET Participant : Cardinality(Q) * 2 >= NP + 1} 26 | ASSUME QuorumAssumption == 27 | /\ \A Q \in Quorum : Q \subseteq Participant 28 | /\ \A Q1, Q2 \in Quorum : Q1 \cap Q2 # {} 29 | 30 | Ballot == Nat 31 | 32 | (* 33 | To make ballots totally ordered, we let different participants use different ballot 34 | numbers.
35 | *) 36 | Max(m, n) == IF m > n THEN m ELSE n 37 | Injective(f) == \A a, b \in DOMAIN f: (a # b) => (f[a] # f[b]) 38 | PIndex == CHOOSE f \in [Participant -> 1 .. NP] : Injective(f) 39 | Bals(p) == {b \in Ballot : b % NP = PIndex[p] - 1} \* allocate ballots for each p \in Participant 40 | ----------------------------------------------------------------------------- 41 | \* state in the two-dimensional array 42 | State == [maxBal: Ballot \cup {-1}, 43 | maxVBal: Ballot \cup {-1}, maxVVal: Value \cup {None}] 44 | 45 | InitState == [maxBal |-> -1, maxVBal |-> -1, maxVVal |-> None] 46 | (* 47 | For simplicity, in this specification, we choose to send the complete state 48 | of a participant each time. When receiving such a message, the participant 49 | processes only the "partial" state it needs. 50 | *) 51 | Message == [from: Participant, to : SUBSET Participant, state: [Participant -> State]] 52 | ----------------------------------------------------------------------------- 53 | VARIABLES 54 | state, \* state[p][q]: the state of q \in Participant from the view of p \in Participant 55 | msgs \* the set of messages that have been sent 56 | 57 | vars == <<state, msgs>> 58 | 59 | TypeOK == 60 | /\ state \in [Participant -> [Participant -> State]] 61 | /\ msgs \subseteq Message 62 | 63 | Send(m) == msgs' = msgs \cup {m} 64 | ----------------------------------------------------------------------------- 65 | Init == 66 | /\ state = [p \in Participant |-> [q \in Participant |-> InitState]] 67 | /\ msgs = {} 68 | (* 69 | p \in Participant starts the prepare phase by issuing a ballot b \in Ballot. 70 | The participant p will update its real state, which means p accepts the prepare(b) 71 | request. We send the complete state for simplicity and the receiver will process 72 | only the "partial" state it needs. 73 | 74 | The participant p cannot send messages to itself, to reduce the number of states during 75 | model checking.
76 | *) 77 | Prepare(p, b) == 78 | /\ b \in Bals(p) 79 | /\ state[p][p].maxBal < b 80 | /\ state' = [state EXCEPT ![p][p].maxBal = b] 81 | /\ Send([from |-> p, to |-> Participant, state |-> state'[p]]) 82 | (* 83 | q \in Participant updates its own state state[q] according to the actual state 84 | pp of p \in Participant extracted from a message m \in Message it receives. 85 | This is called by OnMessage(q). 86 | 87 | Sometimes the method of updating the real state can bring about different executions, 88 | e.g., when receiving <<3, 2, v1>> while its current state is <<1, -1, none>>, 89 | the message can be regarded as a combination of a prepare(3) request and an accept(2, v1) 90 | request, and the participant can update to <<3, -1, none>> or <<3, 2, v1>>. 91 | Here the participant will update to <<3, -1, none>>. 92 | 93 | Note: pp is m.state[p]; it may not be equal to state[p][p] at the time 94 | UpdateState is called. 95 | *) 96 | UpdateState(q, p, pp) == 97 | LET maxB == Max(state[q][q].maxBal, pp.maxBal) 98 | IN state' = [state EXCEPT 99 | ![q][p].maxBal = Max(@, pp.maxBal),\* make promise 100 | ![q][p].maxVBal = Max(@, pp.maxVBal), 101 | ![q][p].maxVVal = IF state[q][p].maxVBal < pp.maxVBal 102 | THEN pp.maxVVal ELSE @, 103 | ![q][q].maxBal = maxB, \* make promise first and then accept 104 | ![q][q].maxVBal = IF maxB <= pp.maxVBal \* accept 105 | THEN pp.maxVBal ELSE @, 106 | ![q][q].maxVVal = IF maxB <= pp.maxVBal \* accept 107 | THEN pp.maxVVal ELSE @] 108 | (* 109 | q \in Participant receives and processes a message in Message.
110 | 111 | *) 112 | OnMessage(q) == 113 | \E m \in msgs : 114 | /\ q \in m.to 115 | /\ LET p == m.from 116 | IN UpdateState(q, p, m.state[p]) 117 | /\ LET qm == [from |-> m.from, to |-> m.to \ {q}, state |-> m.state] \* remove q from to 118 | nm == [from |-> q, to |-> {m.from}, state |-> state'[q]] \* new message to reply 119 | IN IF \/ m.state[q].maxBal < state'[q][q].maxBal 120 | \/ m.state[q].maxVBal < state'[q][q].maxVBal 121 | THEN msgs' = (msgs \ {m}) \cup {qm, nm} 122 | ELSE msgs' = (msgs \ {m}) \cup {qm} 123 | (* 124 | p \in Participant starts the accept phase by issuing the ballot b \in Ballot 125 | with value v \in Value. 126 | 127 | The participant p cannot send messages to itself, to reduce the number of states during 128 | model checking. 129 | *) 130 | Accept(p, b, v) == 131 | /\ b \in Bals(p) 132 | /\ state[p][p].maxBal <= b \* corresponding to the first conjunct in Voting 133 | /\ state[p][p].maxVBal # b \* corresponding to the second conjunct in Voting 134 | /\ \E Q \in Quorum : 135 | /\ \A q \in Q : state[p][q].maxBal = b 136 | \* pick the value from the quorum 137 | (*/\ \/ \A q \in Q : state[p][q].maxVBal = -1 \* free to pick its own value 138 | \/ \E q \in Q : \* v is the value with the highest maxVBal in the quorum 139 | /\ state[p][q].maxVVal = v 140 | /\ \A r \in Q : state[p][q].maxVBal >= state[p][r].maxVBal 141 | *) 142 | \* choose the value from all the local states 143 | /\ \/ \A q \in Participant : state[p][q].maxVBal = -1 \* free to pick its own value 144 | \/ \E q \in Participant : \* v is the value with the highest maxVBal 145 | /\ state[p][q].maxVVal = v 146 | /\ \A r \in Participant: state[p][q].maxVBal >= state[p][r].maxVBal 147 | /\ state' = [state EXCEPT ![p][p].maxVBal = b, 148 | ![p][p].maxVVal = v] 149 | /\ Send([from |-> p, to |-> Participant, state |-> state'[p]]) 150 | --------------------------------------------------------------------------- 151 | Next == \E p \in Participant : \/ OnMessage(p) 152 | \/ \E b \in Ballot : \/
Prepare(p, b) 153 | \/ \E v \in Value : Accept(p, b, v) 154 | Spec == Init /\ [][Next]_vars 155 | --------------------------------------------------------------------------- 156 | ChosenP(p) == \* the set of values chosen by p \in Participant 157 | {v \in Value : \E b \in Ballot : 158 | \E Q \in Quorum: \A q \in Q: /\ state[p][q].maxVBal = b 159 | /\ state[p][q].maxVVal = v} 160 | 161 | chosen == UNION {ChosenP(p) : p \in Participant} 162 | 163 | Consistency == Cardinality(chosen) <= 1 164 | 165 | THEOREM Spec => []Consistency 166 | ============================================================================= 167 | \* Modification History 168 | \* Last modified Mon Sep 09 15:59:38 CST 2019 by stary 169 | \* Created Mon Sep 02 15:47:52 GMT+08:00 2018 by stary -------------------------------------------------------------------------------- /examples/TPaxos/run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | python3 ../../tlcwrapper.py TPaxos.ini 4 | python3 ../../tlcwrapper.py TPaxos-simulate.ini 5 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | psutil 3 | networkx -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: UTF-8 -*- 3 | 4 | from setuptools import setup 5 | 6 | setup(name='tlc-cmd', 7 | version='1.0.3', 8 | author='Ruize Tang', 9 | author_email='tangruize97@gmail.com', 10 | url='https://github.com/tangruize/tlc-cmd', 11 | description='TLC cmd tools', 12 | packages=['.'], 13 | classifiers = [ 14 | 'Development Status :: 3 - Alpha', 15 | 'Intended Audience :: Developers', 16 | 'Topic :: Software Development :: Testing', 17 | 'License :: OSI Approved :: MIT License', 18 | ], 19 | package_data 
= { 20 | '':['*.ini'] 21 | }, 22 | install_requires=['requests', 'psutil', 'networkx'], 23 | python_requires='>=3', 24 | scripts=['tlcwrapper.py', 'trace_reader.py', 'trace_counter.py', 25 | 'trace_generator.py'] 26 | ) 27 | -------------------------------------------------------------------------------- /tlcwrapper.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: UTF-8 -*- 3 | 4 | import sys 5 | import re 6 | import os 7 | import subprocess 8 | import argparse 9 | import signal 10 | 11 | from collections import OrderedDict 12 | from configparser import ConfigParser 13 | from itertools import chain, zip_longest, product 14 | from shutil import copy2 15 | from datetime import datetime 16 | from datetime import timedelta 17 | from io import StringIO 18 | from collections.abc import Mapping 19 | 20 | debug = True 21 | 22 | wrapper_out_file=None 23 | 24 | def xprint(*args, **kwargs): 25 | if wrapper_out_file: 26 | print(*args, **kwargs, file=wrapper_out_file, flush=True) 27 | print(*args, **kwargs, flush=True) 28 | 29 | def eprint(*args, **kwargs): 30 | if wrapper_out_file: 31 | print(*args, **kwargs, file=wrapper_out_file, flush=True) 32 | print(*args, **kwargs, file=sys.stderr, flush=True) 33 | 34 | class PrintTable: 35 | """Print CSV/Markdown table""" 36 | 37 | @classmethod 38 | def _print_table(cls, table, title=None, file=sys.stdout, sep=',', wrap='', default=''): 39 | def _w(string): return '{} {} {}'.format(wrap, string, wrap).strip() 40 | def _p(string): return string if wrap else '"{}"'.format(string.replace('"','""')) 41 | def _f(it): return filter(lambda x: x is not None, it) 42 | def _t(string): return _p(title[string]) if string in title else None 43 | def _v(k, d): 44 | if k not in d: 45 | return _p(str(default)) 46 | return _p(str(d[k])) if k in title else None 47 | 48 | table = list(table.values()) if isinstance(table, Mapping) else list(table) 49 | if title is None: 50 | title 
= dict(zip(table[0].keys(), table[0].keys())) 51 | xprint(_w(sep.join(_f(map(lambda k: _t(k), title.keys())))), file=file) 52 | if wrap: 53 | xprint(_w(sep.join(_f(map(lambda k: '---' if _t(k) else None, title.keys())))), file=file) 54 | for i in table: 55 | xprint(_w(sep.join(_f(map(lambda kd: _v(*kd), ((k, i) for k in title.keys()))))), file=file) 56 | 57 | @classmethod 58 | def print_csv_table(cls, table, title=None, file=sys.stdout, tab=False, default=''): 59 | cls._print_table(table, title, file=file, sep='\t' if tab else ',', default=default) 60 | 61 | @classmethod 62 | def print_md_table(cls, table, title=None, file=sys.stdout, default=''): 63 | cls._print_table(table, title, file=file, sep=' | ', wrap='|', default=default) 64 | 65 | @classmethod 66 | def print_table(cls, table, title=None, filename='', default=''): 67 | if not filename: 68 | cls.print_csv_table(table, title, tab=True, default=default) 69 | else: 70 | if filename.endswith('.md'): 71 | with open(filename, 'w') as out: 72 | cls.print_md_table(table, title, out, default=default) 73 | else: 74 | with open(filename, 'w', encoding='utf-8-sig') as out: 75 | cls.print_csv_table(table, title, out, default=default) 76 | 77 | 78 | class Summary: 79 | """Output summary table""" 80 | 81 | _title_list = ['Diameter', 'States Found', 'Distinct States', 'Queue Size', 'Start Time', 'End Time', 'Duration', 82 | 'Exit Status', 'Warnings', 'Errors'] 83 | _title_list_simulation = ['Traces', 'States Found', 'Start Time', 'End Time', 'Duration', 84 | 'Exit Status', 'Warnings', 'Errors'] 85 | 86 | def __init__(self): 87 | self.batch = [] 88 | self.current = None 89 | self._finished = True 90 | 91 | def init_title(self, is_simulation=False): 92 | if self._finished: 93 | self.new() 94 | titles = self._title_list if not is_simulation else self._title_list_simulation 95 | for t in titles: 96 | if t not in self.current: 97 | self.current[t] = None 98 | 99 | def add_option(self, opt, value): 100 | v = 
value.replace('[model value]', '').replace('<symmetrical>', '').strip() 101 | if not v: 102 | v = value 103 | self.add_info(opt, v, force=True) 104 | if v.startswith('{'): 105 | n = len(v.split(',')) 106 | self.add_info('n {}'.format(opt), n, force=True) 107 | 108 | def add_info(self, name, value, force=False): 109 | if self._finished: 110 | self.new() 111 | name = name.title() 112 | if force or name in self.current: 113 | self.current[name] = value 114 | 115 | def new(self): 116 | self.batch.append(OrderedDict()) 117 | self.current = self.batch[-1] 118 | self.current['No.'] = len(self.batch) 119 | self._finished = False 120 | 121 | def finish_current(self): 122 | self._finished = True 123 | 124 | def _get_longest_title(self): 125 | title_list = [] 126 | for task in self.batch: 127 | tmp_list = list(task.keys()) 128 | if len(title_list) < len(tmp_list): 129 | title_list = tmp_list 130 | return title_list 131 | 132 | def __str__(self): 133 | if self.current is None: 134 | return '' 135 | lines = ['\t'.join(self._get_longest_title())] 136 | for task in self.batch: 137 | lines.append('\t'.join(str(i).replace('\n', ' ') for i in task.values())) 138 | return '\n'.join(lines) 139 | 140 | def print_to_file(self, file): 141 | title = self._get_longest_title() 142 | title = dict(zip(title, title)) 143 | PrintTable.print_table(self.batch, title, file) 144 | 145 | 146 | class BatchConfig: 147 | """Yield TLCConfigFile cfg files""" 148 | 149 | def __init__(self, cfg, summary=None): 150 | self.dup_option_info = OrderedDict() 151 | self.cfg_content = [] 152 | self.summary = summary 153 | if not hasattr(cfg, 'read'): 154 | cfg_file = open(cfg, 'r') 155 | else: 156 | cfg_file = cfg 157 | self._parse_cfg(cfg_file) 158 | cfg_file.close() 159 | 160 | def _parse_cfg(self, cfg_file): 161 | pre_opt_kv = ['', ''] 162 | pre_no = -1 163 | 164 | def rm_non_dup_option(): 165 | if pre_no in self.dup_option_info: 166 | if len(self.dup_option_info[pre_no]) <= 1: 167 | line = self.dup_option_info.pop(pre_no) 
168 | self.cfg_content[pre_no] = line[0] + '\n' 169 | else: 170 | self.dup_option_info[pre_no] = [ 171 | i for i in self.dup_option_info[pre_no] if "SHOW_IN_TABLE" != i.split(':', 1)[1].strip()] 172 | 173 | for no, line in enumerate(cfg_file): 174 | self.cfg_content.append(line) 175 | line = line.rstrip() 176 | if len(line) == 0 or line[0] in '#;[': 177 | continue 178 | if line[0] in ' \t': 179 | if pre_no != -1: 180 | self.cfg_content[-1] = '' 181 | self.dup_option_info[pre_no][-1] = '{}\n{}'.format(self.dup_option_info[pre_no][-1], line) 182 | continue 183 | opt_kv = line.split(':', 1) 184 | if len(opt_kv) != 2: 185 | continue 186 | if opt_kv[0] == pre_opt_kv[0]: 187 | self.dup_option_info[pre_no].append(line) 188 | self.cfg_content[-1] = '' 189 | else: 190 | rm_non_dup_option() 191 | self.dup_option_info[no] = [line] 192 | pre_no = no 193 | pre_opt_kv = opt_kv 194 | rm_non_dup_option() 195 | 196 | def get(self): 197 | """yield cfg StringIO""" 198 | keys = list(self.dup_option_info.keys()) 199 | values = list(self.dup_option_info.values()) 200 | for comb in product(*values): 201 | for i, no in enumerate(keys): 202 | self.cfg_content[no] = comb[i] + '\n' 203 | opt, value = comb[i].split(':', 1) 204 | if self.summary: 205 | self.summary.add_option(opt.strip(), value.strip()) 206 | yield comb, StringIO(''.join(self.cfg_content)) 207 | 208 | 209 | class TLCConfigFile: 210 | """generate TLC config file: MC.cfg and MC.tla""" 211 | model_sym_pat = re.compile(r'\[model value]<symmetrical>{(.*)}') 212 | model_pat = re.compile(r'\[model value]{(.*)}') 213 | tag = '\\* Generated by ' + os.path.basename(__file__) 214 | 215 | def __init__(self, cfg, output_cfg_fn, output_tla_fn, output_tla_constants_fn=None, target_tla_file=None): 216 | self.cfg = cfg 217 | self.output_cfg_fn = output_cfg_fn 218 | self.output_tla_fn = output_tla_fn 219 | 220 | if output_tla_constants_fn and not target_tla_file: 221 | eprint('Warning:', 'both output_tla_constants_fn and target_tla_file must be set') 222 | 
output_tla_constants_fn = None 223 | 224 | self.output_tla_constants_fn = output_tla_constants_fn 225 | self.target_tla_file = target_tla_file 226 | if self.output_tla_constants_fn: 227 | self.extends_string = self._get_extends_str(self.target_tla_file) 228 | else: 229 | self.extends_string = '' 230 | self.top_module = re.sub(r'.tla$', '', os.path.basename(cfg.get('options', 'target'))) 231 | self.output_cfg = [] 232 | self.output_tla_options = [] 233 | self.output_tla_constants = [] 234 | if not output_tla_constants_fn: 235 | self.output_tla_constants = self.output_tla_options 236 | self.parse() 237 | 238 | def _get_extends_str(self, target_tla_fn): 239 | """get 'x, y, z' in tla file 'EXTENDS x, y, z'""" 240 | with open(target_tla_fn) as f: 241 | for line in f: 242 | line = line.strip() 243 | if line.startswith('EXTENDS'): 244 | _, e = line.split(' ', 1) 245 | if 'TLC' in e: 246 | return e 247 | else: 248 | return e + ', TLC' 249 | 250 | def _set_extends_str(self, more): 251 | if not self.target_tla_file: 252 | return 253 | with open(self.target_tla_file) as f: 254 | lines = f.readlines() 255 | module_idx = -1 256 | find_extends = False 257 | for i, line in enumerate(lines): 258 | if module_idx == -1 and 'MODULE' in line: 259 | module_idx = i 260 | if line.strip().startswith('EXTENDS'): 261 | if more not in line: 262 | lines[i] = line.rstrip() + ', ' + more + '\n' 263 | find_extends = True 264 | break 265 | if not find_extends and module_idx != -1: 266 | lines.insert(module_idx + 1, 'EXTENDS ' + more + '\n') 267 | with open(self.target_tla_file, 'w') as f: 268 | f.writelines(lines) 269 | 270 | def _add_behavior(self, specifier, prefix, value): 271 | behavior_name = '{}'.format(prefix) 272 | behavior_value = '{} ==\n{}'.format(behavior_name, value) 273 | self.output_cfg.append('{}\n{}'.format(specifier, behavior_name)) 274 | self.output_tla_options.append(behavior_value) 275 | 276 | def _parse_behavior(self): 277 | """parse behavior section""" 278 | if 'behavior' 
in self.cfg: 279 | behavior = self.cfg['behavior'] 280 | init_predicate = behavior.get('init') 281 | next_state = behavior.get('next') 282 | temporal_formula = behavior.get('temporal formula') 283 | if (init_predicate or next_state) and (not init_predicate or not next_state or temporal_formula): 284 | raise ValueError('[behavior] choose one or none: "init/next" **OR** "temporal formula"') 285 | if temporal_formula: 286 | self._add_behavior('SPECIFICATION', 'spec', temporal_formula) 287 | else: 288 | self._add_behavior('INIT', 'init', init_predicate) 289 | self._add_behavior('NEXT', 'next', next_state) 290 | 291 | def _add_specifications(self, keyword, specifier, prefix): 292 | """invariants and properties share the same parser""" 293 | if keyword in self.cfg: 294 | spec = self.cfg[keyword] 295 | spec_names = '\n'.join('{}_{}'.format(prefix, i) for i in spec) 296 | if spec_names != '': 297 | self.output_cfg.append('{}\n{}'.format(specifier, spec_names)) 298 | spec_values = '\n'.join('{}_{} ==\n{}'.format(prefix, i, spec[i]) for i in spec) 299 | self.output_tla_options.append(spec_values) 300 | 301 | def _parse_invariants(self): 302 | """parse invariants section""" 303 | self._add_specifications('invariants', 'INVARIANT', 'inv') 304 | 305 | def _parse_properties(self): 306 | """parse properties section""" 307 | self._add_specifications('properties', 'PROPERTY', 'prop') 308 | 309 | def _parse_state_constraint(self): 310 | """parse state constraint section""" 311 | self._add_specifications('state constraint', 'CONSTRAINT', 'constr') 312 | 313 | def _parse_action_constraint(self): 314 | """parse action constraint section""" 315 | self._add_specifications('action constraint', 'ACTION_CONSTRAINT', 'action_constr') 316 | 317 | def _parse_constants(self, keyword='constants', prefix='const'): 318 | """parse constants section""" 319 | if keyword in self.cfg: 320 | symmetrical = [] 321 | constants = self.cfg[keyword] 322 | for name in constants: 323 | value = constants[name] 
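# Illustrative comment (added for clarity; the option names below are hypothetical examples, not from the repo): values in a [constants] section may take three forms, which the branches below distinguish: "N: 3" is an ordinary assignment, "Node: [model value]" is a single model value, and "Nodes: [model value]{n1, n2}" is a set of model values (treated as a symmetry set when it matches model_sym_pat).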
324 | is_model_value = False 325 | is_symmetrical = False 326 | if self.model_sym_pat.match(value): 327 | is_model_value = True 328 | value = self.model_sym_pat.match(value).groups()[0].replace(' ', '').split(',') 329 | if len(value) <= 1: 330 | eprint('Warning: "{}: {}": ignored'.format(name, constants[name])) 331 | else: 332 | is_symmetrical = True 333 | elif self.model_pat.match(value): 334 | is_model_value = True 335 | value = self.model_pat.match(value).groups()[0].replace(' ', '').split(',') 336 | elif value == '[model value]': 337 | is_model_value = True 338 | value = name 339 | if is_model_value: 340 | if isinstance(value, list): # set of model values 341 | model_val = '\n'.join('{} = {}'.format(i, i) for i in value) 342 | cfg_str = 'CONSTANTS\n{}\nCONSTANT\n{} <- const_{}'.format(model_val, name, name) 343 | model_val = ', '.join(i for i in value) 344 | tla_str = 'CONSTANTS\n{}\nconst_{} ==\n{{{}}}'.format(model_val, name, model_val) 345 | if is_symmetrical: # symmetry set 346 | # cfg_str = '{}\nSYMMETRY symm_{}'.format(cfg_str, name) 347 | # tla_str = '{}\nsymm_{} ==\nPermutations(const_{})'.format(tla_str, name, name) 348 | symmetrical.append('Permutations(const_{})'.format(name)) 349 | else: # model value 350 | cfg_str = 'CONSTANT {} = {}'.format(name, value) 351 | tla_str = None 352 | else: # ordinary assignment 353 | cfg_str = 'CONSTANT\n{} <- {}_{}'.format(name, prefix, name) 354 | tla_str = '{}_{} == \n{}'.format(prefix, name, value) 355 | self.output_cfg.append(cfg_str) 356 | self.output_tla_constants.append(tla_str) 357 | if symmetrical: 358 | self.output_cfg.append('SYMMETRY symm_{}'.format(len(symmetrical))) 359 | self.output_tla_constants.append( 360 | 'symm_{} ==\n{}'.format(len(symmetrical), ' \\union '.join(symmetrical))) 361 | 362 | def _parse_override(self): 363 | """parse override section""" 364 | self._parse_constants(keyword='override', prefix='over') 365 | 366 | def _parse_const_expr(self): 367 | """parse const expr section""" 368 | if 
'const expr' in self.cfg: 369 | const_expr = self.cfg.get('const expr', 'expr', fallback=None) 370 | if const_expr: 371 | self.output_cfg.append(None) 372 | val = 'const_expr' 373 | self.output_tla_options.append( 374 | '{} ==\n{}\nASSUME PrintT(<<"$!@$!@$!@$!@$!",{}>>)'.format(val, const_expr, val)) 375 | 376 | def _parse_additional_definitions(self): 377 | """parse additional definitions section""" 378 | if 'additional definitions' in self.cfg: 379 | self.output_cfg.append(None) 380 | spec = self.cfg['additional definitions'] 381 | spec_values = '\n'.join(spec[i] for i in spec) 382 | self.output_tla_options.append(spec_values) 383 | 384 | def _parse_alias(self): 385 | """parse alias section""" 386 | self._add_specifications('alias', 'ALIAS', 'alias') 387 | # if 'alias' in self.cfg: 388 | # self.output_cfg.append(None) 389 | # spec = self.cfg['alias'] 390 | # spec_values = '\n'.join(spec[i] for i in spec) 391 | # self.output_cfg.append('{}\n{}'.format('ALIAS', spec_values)) 392 | 393 | def parse(self): 394 | self.output_cfg.clear() 395 | self.output_tla_options.clear() 396 | self.output_tla_constants.clear() 397 | self._parse_behavior() 398 | self._parse_invariants() 399 | self._parse_properties() 400 | self._parse_constants() 401 | self._parse_override() 402 | self._parse_const_expr() 403 | self._parse_state_constraint() 404 | self._parse_action_constraint() 405 | self._parse_additional_definitions() 406 | self._parse_alias() 407 | 408 | def write(self): 409 | """write parsed buf to file""" 410 | output_cfg_fn = self.output_cfg_fn 411 | output_tla_fn = self.output_tla_fn 412 | output_tla_constants_fn = self.output_tla_constants_fn 413 | with open(output_cfg_fn, 'w') as cfg_f: 414 | cfg_f.write('{} on {}\n'.format(self.tag, datetime.now())) 415 | cfg_f.write('\n\n'.join(filter(None, self.output_cfg))) 416 | cfg_f.write('\n') 417 | with open(output_tla_fn, 'w') as tla_f: 418 | module = '---- MODULE {} ----\n'.format(output_tla_fn.replace('.tla', '')) 419 | 
tla_f.write(module) 420 | tla_f.write('EXTENDS {}, TLC\n\n'.format(self.top_module)) 421 | tla_f.write('\n----\n\n'.join(filter(None, self.output_tla_options))) 422 | tla_f.write('\n{}\n'.format('=' * len(module))) 423 | tla_f.write('{} on {}\n'.format(self.tag, datetime.now())) 424 | if output_tla_constants_fn: 425 | with open(output_tla_constants_fn, 'w') as tla_f: 426 | module = '---- MODULE {} ----\n'.format(output_tla_constants_fn.replace('.tla', '')) 427 | tla_f.write(module) 428 | tla_f.write('EXTENDS {}\n\n'.format(self.extends_string)) 429 | tla_f.write('\n----\n\n'.join(filter(None, self.output_tla_constants))) 430 | tla_f.write('\n{}\n'.format('=' * len(module))) 431 | tla_f.write('{} on {}\n'.format(self.tag, datetime.now())) 432 | self._set_extends_str(output_tla_constants_fn.replace('.tla', '')) 433 | 434 | 435 | class ReplaceInit: 436 | """Replace TLA Init state with a specific state from a trace file""" 437 | 438 | def __init__(self, tla_file: str, trace_file: str, replace_state: int=0, python_init: str='') -> None: 439 | def choose_state(state_num: int, state_dict: dict, state_str: str, is_last_state: bool) -> str: 440 | if replace_state == state_num or is_last_state: 441 | return state_str 442 | return '' 443 | 444 | self.tla_file = tla_file 445 | self.trace_file = trace_file 446 | self.chooser_handler = choose_state 447 | self.init_module = None 448 | if python_init: 449 | try: 450 | sys.path.insert(1, os.path.dirname(python_init)) 451 | self.init_module = __import__(os.path.basename(python_init).replace('.py', '')) 452 | sys.path.pop(1) 453 | except ModuleNotFoundError: 454 | pass 455 | if hasattr(self.init_module, 'init_replace_init'): 456 | if debug: 457 | eprint('Debug: calling "init_replace_init"') 458 | self.init_module.init_replace_init(self) 459 | 460 | def set_chooser_handler(self, func) -> None: 461 | self.chooser_handler = func 462 | 463 | def get_replace_state_str(self): 464 | try: 465 | from trace_reader import TraceReader 466 | 
except ModuleNotFoundError: 467 | eprint('Warning:', 'failed to import "trace_reader",', '"init state" is disabled') 468 | return '' 469 | tr = TraceReader() 470 | if hasattr(self.init_module, 'init_trace_reader'): 471 | if debug: 472 | eprint('Debug: calling "init_trace_reader"') 473 | self.init_module.init_trace_reader(tr) 474 | states = list(tr.trace_reader_with_state_str(self.trace_file)) 475 | for i, state in enumerate(states): 476 | chosen = self.chooser_handler(i + 1, state[0], state[1], i + 1 == len(states)) 477 | if chosen: 478 | if debug: 479 | eprint('Debug: choose init state: {}{}'.format( 480 | i + 1, ' (last state)' if i + 1 == len(states) else '')) 481 | return chosen 482 | if debug: 483 | eprint('Debug: no init state is chosen') 484 | return '' 485 | 486 | def get_replaced_tla_file_lines(self, replace_str=None) -> list: 487 | lines = [] 488 | if replace_str is None: 489 | replace_str = self.get_replace_state_str() 490 | with open(self.tla_file) as f: 491 | started = False 492 | for line in f: 493 | if not started: 494 | if line.startswith('Init ==') and replace_str: 495 | started = True 496 | lines.append('Init ==\n') 497 | lines.append(replace_str) 498 | lines.append('\n\n') 499 | else: 500 | lines.append(line) 501 | elif line[0] == '\n': 502 | started = False 503 | return lines 504 | 505 | def write(self) -> None: 506 | replace_str = self.get_replace_state_str() 507 | if not replace_str: 508 | return 509 | tla_lines = self.get_replaced_tla_file_lines(replace_str) 510 | os.rename(self.tla_file, self.tla_file + '.bak') 511 | with open(self.tla_file, 'w') as f: 512 | f.writelines(tla_lines) 513 | if debug: 514 | eprint('Debug: replaced Init TLA+ file:', self.tla_file) 515 | 516 | 517 | class TLCWrapper: 518 | """TLC cmdline options""" 519 | _script_dir = os.path.dirname(os.path.realpath(__file__)) 520 | tla2tools_jar = os.path.join(_script_dir, 'tla2tools.jar') 521 | community_jar = os.path.join(_script_dir, 'CommunityModules-deps.jar') 522 | 
tla2tools_url = 'https://github.com/tlaplus/tlaplus/releases/download/{}/tla2tools.jar' 523 | tla2tool2_jar_latest_version = 'v1.8.0' 524 | tla2tool2_jar_stable_version = 'v1.7.2' # v1.7.3 becomes slower in some conditions 525 | community_url = 'https://github.com/tlaplus/CommunityModules/releases/download/{}/CommunityModules-deps.jar' 526 | community_jar_version = '202103291751' # for v1.7.2 tlc, some classes (e.g. KSubsetValue) are not defined in higher versions 527 | tla2tools_tlc = 'tlc2.TLC' 528 | tla2tools_server = 'tlc2.tool.distributed.TLCServer' 529 | tla2tools_worker = 'tlc2.tool.distributed.TLCWorker' 530 | tla2tools_fpset = 'tlc2.tool.distributed.fp.DistributedFPSet' 531 | tla2tools_worker_and_fpset = 'tlc2.tool.distributed.fp.TLCWorkerAndFPSet' 532 | 533 | default_config_file = 'config.ini' # default input file 534 | 535 | # default output files 536 | default_mc_cfg = 'MC.cfg' 537 | default_mc_tla = 'MC.tla' 538 | default_mc_tla_constants = 'MC_constants.tla' 539 | default_mc_log = 'MC.out' 540 | default_mc_user = 'MC_user.txt' 541 | default_mc_states = 'MC_states' 542 | default_mc_coverage = 'MC_coverage.txt' 543 | default_mc_ini = 'MC.ini' 544 | default_mc_trace = 'MC_trace' 545 | default_tlcwrapper_log = 'tlcwrapper.log' 546 | 547 | task_id_number = 0 548 | 549 | # third-party scripts to run distributed mode 550 | spssh_dir = os.path.join(_script_dir, 'spssh') 551 | spssh_sh = os.path.join(spssh_dir, 'spssh.sh') 552 | spssh_cp_sh = os.path.join(spssh_dir, 'spssh_cp.sh') 553 | 554 | def __init__(self, config_file=None, log_file=True, gen_cfg_fn=None, gen_tla_fn=None, gen_tla_constants_fn=None, 555 | summary=None, is_task_id=True, is_split_user_file=True, classpath='', need_community_modules=False, 556 | log_output=False): 557 | """create model dir, chdir, copy files and generate tlc configfile""" 558 | 559 | # save current dir 560 | self.orig_cwd = os.getcwd() 561 | 562 | # default is not simulation mode 563 | self.simulation_mode = False 564 | 565 | 
self.distributed_mode = False 566 | 567 | # open config file 568 | config_file = config_file if config_file is not None else self.default_config_file 569 | if not hasattr(config_file, 'read'): 570 | config_file = open(config_file, 'r') 571 | config_str = config_file.read() 572 | config_file.close() 573 | self.cfg = ConfigParser() 574 | self.cfg.optionxform = str # case sensitive 575 | self.cfg.read_string(config_str) 576 | 577 | if 'options' not in self.cfg: 578 | xprint('Error: config file has no "options" section, run "python3 {} -h" for help'.format(sys.argv[0])) 579 | raise ValueError('config file has no "options" section') 580 | 581 | # check dependencies and set classpath 582 | if not need_community_modules and 'community modules' in self.cfg['options']: 583 | need_community_modules = self.cfg.getboolean('options', 'community modules', fallback=False) 584 | if not classpath: 585 | classpath = self.tla2tools_jar 586 | else: 587 | classpath = ':'.join([os.path.realpath(i) for i in classpath.split(':')]) 588 | classpath = '{}:{}'.format(classpath, self.tla2tools_jar) 589 | self.need_community_modules = need_community_modules 590 | if need_community_modules: 591 | classpath = '{}:{}'.format(classpath, self.community_jar) 592 | self._tlc_cmd = ['java', '-XX:+UseParallelGC', '-cp', classpath] 593 | self.classpath = classpath 594 | 595 | # open log file 596 | if isinstance(log_file, str): # if log_file specified, open it before change cwd 597 | self.log_file = open(log_file, 'w') 598 | 599 | # take model dir 600 | target = self.cfg.get('options', 'target') 601 | 602 | TLCWrapper.task_id_number += 1 603 | task_id = '' if not is_task_id else '_{}'.format(TLCWrapper.task_id_number) 604 | model_name = self.cfg.get('options', 'model name') + datetime.now().strftime("_%Y-%m-%d_%H-%M-%S") + task_id 605 | os.chdir(os.path.dirname(os.path.realpath(target))) 606 | os.makedirs(model_name, exist_ok=True) 607 | for file in os.listdir('.'): 608 | if file.endswith('.tla'): 609 | 
copy2(file, model_name) 610 | model_dir = os.path.realpath(model_name) 611 | os.chdir(self.orig_cwd) 612 | need_separate_constants = self._parse_init_state(os.path.join(model_dir, os.path.basename(target))) 613 | os.chdir(model_dir) 614 | 615 | # gitignore all generated files 616 | with open('.gitignore', 'w') as gitignore_f: 617 | gitignore_f.write('# Generated by tlcwrapper.py\n') 618 | gitignore_f.write('*\n') 619 | 620 | # set log_output file 621 | if log_output: 622 | global wrapper_out_file 623 | wrapper_out_file = open(self.default_tlcwrapper_log, 'w') 624 | 625 | # check and open log file again 626 | if log_file: 627 | if not isinstance(log_file, str): 628 | self.log_file = open(self.default_mc_log, 'w') 629 | else: 630 | self.log_file = None 631 | 632 | # generate config files 633 | self.gen_cfg_fn = gen_cfg_fn if gen_cfg_fn is not None else self.default_mc_cfg 634 | self.gen_tla_fn = gen_tla_fn if gen_tla_fn is not None else self.default_mc_tla 635 | if need_separate_constants and not gen_tla_constants_fn: 636 | gen_tla_constants_fn = True 637 | if not gen_tla_constants_fn: 638 | self.gen_tla_constants_fn = None 639 | else: 640 | if gen_tla_constants_fn == True: 641 | self.gen_tla_constants_fn = self.default_mc_tla_constants 642 | else: 643 | self.gen_tla_constants_fn = gen_tla_constants_fn 644 | 645 | TLCConfigFile(self.cfg, self.gen_cfg_fn, self.gen_tla_fn, 646 | self.gen_tla_constants_fn, target_tla_file=os.path.basename(target)).write() 647 | 648 | # set Java and TLC options 649 | self.options = [] 650 | # distributed mode slave cmds 651 | self.cmd_workers = [] 652 | self.cmd_fpsets = [] 653 | self.cmd_workers_fpsets = [] 654 | self.is_split_user_file = is_split_user_file 655 | self._parse_options() 656 | 657 | with open(self.default_mc_ini, 'w') as f: 658 | f.write('; {}\n; {}\n\n'.format(*self.get_cmd_str().splitlines())) 659 | f.write(config_str) 660 | 661 | # init result and summary table 662 | self.result = None 663 | self.log_lines = None 664 | 
self.init_result() 665 | self.summary = summary if summary is not None else Summary() 666 | 667 | 668 | def __del__(self): 669 | if hasattr(self, 'log_file') and hasattr(self.log_file, 'close'): 670 | self.log_file.close() 671 | global wrapper_out_file 672 | if hasattr(wrapper_out_file, 'close'): 673 | wrapper_out_file.close() 674 | wrapper_out_file = None 675 | os.chdir(self.orig_cwd) 676 | 677 | def _parse_init_state(self, tla_file): 678 | if 'init state' in self.cfg: 679 | opt = self.cfg['init state'] 680 | else: 681 | return False 682 | trace_file = opt.get('trace file') 683 | if not trace_file: 684 | return False 685 | replace_state = opt.getint('state', fallback=0) 686 | python_init = opt.get('python init file') 687 | ReplaceInit(tla_file, trace_file, replace_state, python_init).write() 688 | return True 689 | 690 | def _parse_options(self): 691 | """parse options section""" 692 | self.options = [self.gen_tla_fn, '-config', self.gen_cfg_fn] 693 | opt = self.cfg['options'] 694 | 695 | def _parse_workers(opt_name, opt_list, class_name): 696 | client_opt = opt.get(opt_name) 697 | if not client_opt: 698 | return 699 | clients = client_opt.splitlines() 700 | for c in clients: 701 | args = self._tlc_cmd.copy() 702 | args[-1] = ':'.join([os.path.basename(i) for i in args[-1].split(':')]) 703 | args[-1] = os.path.basename(self.tla2tools_jar) 704 | if self.need_community_modules: 705 | args[-1] += ':' + os.path.basename(self.community_jar) 706 | args.append(class_name) 707 | args.append(server_name) 708 | c_opts = c.strip().split() 709 | _, c_name = c_opts[0].split('@') 710 | args.insert(1, '-Djava.rmi.server.hostname=' + c_name) 711 | mem = 0 712 | if len(c_opts) >= 2: 713 | mem = float(c_opts[1]) 714 | if mem >= 1: 715 | mem = int(mem) 716 | direct_mem_arg = '-XX:MaxDirectMemorySize=${DIRECT_MEM}m' 717 | xmx_mem_arg = '-Xmx${XMX_MEM}m' 718 | args.insert(1, xmx_mem_arg) 719 | args.insert(1, direct_mem_arg) 720 | args.insert(1, 
'-Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet') 721 | if len(c_opts) >= 3: 722 | args.insert(1, '-Dtlc2.tool.distributed.TLCWorker.threadCount=' + c_opts[2]) 723 | opt_list.append((c_opts[0], mem, args)) 724 | 725 | if opt.get('distributed mode') and opt.get('distributed mode').lower() != 'false': 726 | server_name = opt.get('distributed mode') 727 | self.distributed_mode = True 728 | _parse_workers('distributed TLC workers', self.cmd_workers, self.tla2tools_worker) 729 | _parse_workers('distributed fingerprint server', self.cmd_fpsets, self.tla2tools_fpset) 730 | _parse_workers('distributed TLC and fingerprint', self.cmd_workers_fpsets, self.tla2tools_worker_and_fpset) 731 | self._tlc_cmd.insert(1, '-Dtlc2.tool.distributed.TLCServer.expectedFPSetCount={}'.format( 732 | len(self.cmd_fpsets) + len(self.cmd_workers_fpsets))) 733 | self.is_split_user_file = False 734 | self._tlc_cmd.append(self.tla2tools_server) 735 | if server_name.lower() != 'true': 736 | self._tlc_cmd.insert(1, '-Djava.rmi.server.hostname=' + server_name) 737 | else: 738 | self._tlc_cmd.append(self.tla2tools_tlc) 739 | 740 | if self.is_split_user_file: 741 | self.options += ['-userFile', self.default_mc_user] 742 | 743 | if opt.get('stop after'): 744 | self._tlc_cmd.insert(1, '-Dtlc2.TLC.stopAfter=' + opt.get('stop after')) 745 | 746 | mem = None 747 | mem_ratio = opt.getfloat('memory ratio') 748 | if mem_ratio: 749 | try: 750 | from psutil import virtual_memory 751 | mem = int(virtual_memory().total / 1024 / 1024 * mem_ratio) 752 | except ImportError: 753 | mem = None 754 | eprint('Warning:', 'failed to import "psutil",', '"memory ratio" is disabled') 755 | if mem is None: 756 | mem = opt.getint('system memory') 757 | if mem: 758 | direct_mem = '-XX:MaxDirectMemorySize=' + str(mem // 3 * 2) + 'm' 759 | xmx = '-Xmx' + str(mem // 3) + 'm' 760 | self._tlc_cmd.insert(1, xmx) 761 | self._tlc_cmd.insert(1, direct_mem) 762 | self._tlc_cmd.insert(1, 
'-Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet') 763 | 764 | dump_states = opt.get('dump states') 765 | if dump_states: 766 | if dump_states.startswith('dot'): 767 | self.options += ['-dump', dump_states, self.default_mc_states] 768 | elif dump_states.lower() != 'false': 769 | self.options += ['-dump', self.default_mc_states] 770 | 771 | options_list = [opt.get('workers'), opt.getint('checkpoint minute'), opt.getint('dfs depth'), 772 | not opt.getboolean('check deadlock'), opt.getint('coverage minute'), 773 | opt.getint('simulation depth'), opt.getint('simulation seed'), opt.get('recover'), 774 | opt.getboolean('gzip'), opt.getboolean('generate spec TE'), opt.getboolean('clean up'), 775 | opt.get('liveness check'), opt.getboolean('diff trace')] 776 | options = ['-workers', '-checkpoint', '-dfid', '-deadlock', '-coverage', '-depth', '-seed', '-recover', 777 | '-gzip', '-generateSpecTE', '-cleanup', '-lncheck', '-difftrace'] 778 | for i, j in zip(options, options_list): 779 | if j: 780 | self.options.append(i) 781 | if not isinstance(j, bool): 782 | self.options.append(str(j)) 783 | 784 | simulation_options = [] 785 | simulation_traces_num = opt.getint('simulation traces') 786 | if simulation_traces_num: 787 | simulation_options.append('num=' + str(simulation_traces_num)) 788 | simulation_dump_traces = opt.getboolean('simulation dump traces') 789 | if simulation_dump_traces: 790 | simulation_options.append('file=' + os.path.join(os.path.realpath('.'), 'trace')) 791 | simulation_options_str = ','.join(simulation_options) 792 | 793 | if '-depth' in self.options or '-aril' in self.options or simulation_options_str: 794 | self.options.append('-simulate') 795 | if simulation_options_str: 796 | self.options.append(simulation_options_str) 797 | self.simulation_mode = True 798 | 799 | if opt.getboolean('dump trace'): 800 | self.options += ['-dumpTrace', 'tla', self.default_mc_trace + '.tla', 801 | '-dumpTrace', 'json', self.default_mc_trace + '.json'] 802 | 803 | 
if opt.get('other TLC options') is not None: 804 | for field in opt.get('other TLC options').split('\n'): 805 | if len(field) != 0: 806 | self.options.append(field) 807 | 808 | if opt.get('other Java options') is not None: 809 | for field in reversed(opt.get('other Java options').split('\n')): 810 | if len(field) != 0: 811 | self._tlc_cmd.insert(1, field) 812 | 813 | def get_cmd_str(self): 814 | """get tlc command line""" 815 | result = 'cd {}\n{}'.format(os.getcwd(), ' '.join(i for i in chain(self._tlc_cmd, self.options))) 816 | if self.distributed_mode: 817 | result += '\n' + '\n'.join(self.get_distributed_workers_cmd()[0]) 818 | return result 819 | 820 | def get_cmd_options(self): 821 | """get tlc command line list""" 822 | return os.getcwd(), self._tlc_cmd + self.options 823 | 824 | def get_distributed_workers_cmd(self): 825 | """get the list of commands that workers run""" 826 | cmds = [] 827 | index = 0 828 | mem_options = [] 829 | client_run_cmds = [] 830 | clients = [] 831 | for i in self.cmd_workers, self.cmd_fpsets, self.cmd_workers_fpsets: 832 | for j in i: 833 | index += 1 834 | cmds.append("# cat <<'EOF' | ssh {} 'export SSH_NO={}; exec bash'".format(j[0], index)) 835 | mem_options.append(j[1]) 836 | client_run_cmds.append(' '.join(j[-1])) 837 | clients.append(j[0]) 838 | cmds.append('MEM_OPTIONS=(_ {})'.format(' '.join(str(i) for i in mem_options))) 839 | cmds.append('MY_MEM=${MEM_OPTIONS[$SSH_NO]}') 840 | cmds.append('''function set_memory() { 841 | if [ "$MY_MEM" != 0 ]; then 842 | if [[ "$MY_MEM" =~ 0\\.
]]; then 843 | MEM_TOTAL=$(($(sed -En '/MemTotal/s/.*[ ]+([0-9]+).*/\\1/p' version_numbers(default_version): 891 | version = request_version 892 | if debug: 893 | eprint('Debug: downloading:', url.format(version)) 894 | r = requests.get(url.format(version), allow_redirects=True) 895 | with open(target, 'wb') as f: 896 | f.write(r.content) 897 | except Exception as e: 898 | xprint('Error:', 'failed to download "{}", you should download it manually'.format( 899 | os.path.basename(target))) 900 | raise e 901 | 902 | @classmethod 903 | def download_tla2tools(cls, latest=False, overwrite=False): 904 | latest_version_link = None if not latest else 'https://api.github.com/repos/tlaplus/tlaplus/releases/latest' 905 | tla2tool2_version = cls.tla2tool2_jar_stable_version if not latest else cls.tla2tool2_jar_latest_version 906 | cls.download_jar(cls.tla2tools_jar, cls.tla2tools_url, tla2tool2_version, latest_version_link, overwrite) 907 | 908 | @classmethod 909 | def download_community_modules(cls, latest=False, overwrite=False): 910 | latest_version_link = None if not latest else 'https://api.github.com/repos/tlaplus/CommunityModules/releases/latest' 911 | cls.download_jar(cls.community_jar, cls.community_url, cls.community_jar_version, 912 | latest_version_link, overwrite) 913 | 914 | def download_dependencies(self): 915 | self.download_tla2tools() 916 | if self.need_community_modules: 917 | self.download_community_modules() 918 | 919 | def run_distributed_workers(self): 920 | if not self.distributed_mode: 921 | return 922 | xprint('-' * 16) 923 | if all(os.path.isfile(i) for i in [self.spssh_sh, self.spssh_cp_sh]): 924 | cmds, clients = self.get_distributed_workers_cmd() 925 | client_cmd_str = '\n'.join([i for i in cmds if not i.startswith('#')]) 926 | host_cmd_str = "tmux split-window -d -t 0 'exec tail -f -n +1 {}'; exec tail -f -n +1 {}".format( 927 | self.default_mc_log, self.default_tlcwrapper_log) 928 | jar_dir = os.path.dirname(self.classpath.split(':')[0])
929 | run_spssh_cmd = \ 930 | """cat <<'EOF' | {} -f '-maxdepth 1 -name \\*.jar' {} 2>/dev/null | {} -H -S -b -t -s -e -r "{}" {} 2>&1\n""" \ 931 | .format(self.spssh_cp_sh, jar_dir, self.spssh_sh, host_cmd_str, ' '.join(clients)) 932 | run_spssh_cmd += "cd {}\n".format(os.path.basename(jar_dir)) 933 | run_spssh_cmd += client_cmd_str + '\n' 934 | if debug: 935 | eprint('Debug:', 'popen cmd:\n{}'.format(run_spssh_cmd)) 936 | output = os.popen(run_spssh_cmd).read() 937 | if output and debug: 938 | eprint('Debug:', 'popen cmd output:\n{}'.format(output)) 939 | xprint('Run "tmux attach-session -t {}" to check worker progress'.format( 940 | re.sub(r'.*\n.*SESSION=(.*)\n', r'\1', output))) 941 | else: 942 | xprint('Run the commands below on the clients:') 943 | for i in self.get_distributed_workers_cmd()[0]: 944 | xprint(i) 945 | xprint('-' * 16) 946 | 947 | def run(self): 948 | """call tlc and analyse output""" 949 | self.init_result() # clear result 950 | 951 | title_printed = False 952 | title_list = ['Current Time', 'Duration', 'Diameter', 'States Found', 'Distinct States', 'Queue Size'] 953 | if self.simulation_mode: 954 | title_list = [i if i != 'Diameter' else 'Traces' for i in title_list] 955 | self.summary.init_title(is_simulation=self.simulation_mode) 956 | 957 | def print_state(time): 958 | nonlocal title_printed 959 | if type(time) is timedelta: 960 | m, s = divmod(time.total_seconds(), 60) 961 | h, m = divmod(m, 60) 962 | time = '{:d}:{:02d}:{:02d}'.format(int(h), int(m), int(s)) 963 | value_list = [datetime.now().strftime("%H:%M:%S"), str(time), self.result['diameter'], 964 | self.result['total states'], self.result['distinct states'], self.result['queued states']] 965 | if all(i is not None for i in value_list): 966 | if not title_printed: 967 | title_printed = True 968 | xprint(('{:<15} ' * len(title_list)).format(*title_list)) 969 | xprint(('{:<15} ' * len(value_list)).format(*value_list)) 970 | for k, v in zip(title_list, value_list): 971 | 
self.summary.add_info(k, v) 972 | 973 | progress_pat = re.compile(r'Progress\(%?(-?[\d,]+)%?\) at (.*): ([\d,]+) s.*, (-?[\d,]+) d.*, (-?[\d,]+) s') 974 | # finish_pat = re.compile(r'(\d+) states generated, (\d+) distinct states found, (\d+) states left on queue') 975 | 976 | tmp_lines = [] 977 | message_code = -1 # see https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/tlc2/output/EC.java 978 | message_type = -1 # see https://github.com/tlaplus/tlaplus/blob/master/tlatools/org.lamport.tlatools/src/tlc2/output/MP.java 979 | message_type_key = ('info', 'errors', 'tlc bug', 'warnings', 'error trace', 'other msg') 980 | finish_flag = False 981 | interrupt_flag = False 982 | 983 | def int_handler(sig, frame): 984 | nonlocal interrupt_flag 985 | interrupt_flag = True 986 | 987 | signal.signal(signal.SIGINT, int_handler) 988 | 989 | def process_message(): 990 | if len(tmp_lines) == 0: 991 | return 992 | line = '\n'.join(tmp_lines) 993 | self.result[message_type_key[message_type]].append((datetime.now(), line)) 994 | # if message_type_key[message_type] in {'errors', 'warnings'}: 995 | # xprint('Error:' if message_type_key[message_type] == 'errors' else 'Warning:', line) 996 | if message_code == 2185: # Starting... 997 | self.result['start time'] = datetime.strptime(line, 'Starting... (%Y-%m-%d %H:%M:%S)') 998 | elif message_code == 2186: # Finished in... 999 | self.result['finish time'] = datetime.strptime(line.split('at')[1], ' (%Y-%m-%d %H:%M:%S)') 1000 | self.result['time consuming'] = self.result['finish time'] - self.result['start time'] 1001 | nonlocal finish_flag 1002 | finish_flag = True 1003 | # print_state(self.result['time consuming']) 1004 | elif message_code in {2200, 2206, 2209}: # Progress... 
1005 | progress_match = progress_pat.match(line) 1006 | if not progress_match: 1007 | if debug: 1008 | eprint('Debug:', 'Please report this bug: match failed: "{}".'.format(line)) 1009 | else: 1010 | groups = progress_match.groups() 1011 | self.result['diameter'] = int(groups[0].replace(',', '')) 1012 | self.result['total states'] = int(groups[2].replace(',', '')) 1013 | self.result['distinct states'] = int(groups[3].replace(',', '')) 1014 | self.result['queued states'] = int(groups[4].replace(',', '')) 1015 | current_time = datetime.strptime(groups[1], '%Y-%m-%d %H:%M:%S') 1016 | self.result['time consuming'] = current_time - self.result['start time'] 1017 | print_state(self.result['time consuming']) 1018 | elif message_code == 2190: # Finished computing initial states ... 1019 | states = int(line.split(':')[1].split(' ')[1]) 1020 | self.result['diameter'] = 0 1021 | self.result['total states'] = states 1022 | self.result['distinct states'] = states 1023 | self.result['queued states'] = states 1024 | print_state(str(datetime.now() - self.result['start time']).split('.')[0]) 1025 | # elif message_code == 2199: # ... states generated, ... distinct states found, 0 states left on queue. 1026 | # groups = finish_pat.match(line).groups() 1027 | # self.result['total states'] = int(groups[0]) 1028 | # self.result['distinct states'] = int(groups[1]) 1029 | # self.result['queued states'] = int(groups[2]) 1030 | elif message_code == 2194: # The depth of the complete state graph search is ... 
1031 | diameter = int(line.split(' ')[9].rstrip('.')) 1032 | self.result['diameter'] = diameter 1033 | elif message_code == 2201: # The coverage statistics 1034 | self.result['coverage'] = [line] 1035 | elif message_code == 2221: # coverage msg detail 1036 | self.result['coverage'].append(line) 1037 | elif message_code == 2202: # End of statistics 1038 | self.save_coverage() 1039 | 1040 | options = self._tlc_cmd + self.options + ['-tool'] # tool mode 1041 | if debug: 1042 | eprint('Debug: cwd:', os.getcwd()) 1043 | eprint('Debug: cmd:', options) 1044 | self.download_dependencies() 1045 | 1046 | with open(self.default_mc_ini, 'a') as f: 1047 | cur_time = datetime.now() 1048 | f.write('\n; CMD: {}\n; START TIME: {}\n'.format(options, cur_time)) 1049 | self.summary.add_info('Start Time', cur_time) 1050 | process = subprocess.Popen(options, stdout=subprocess.PIPE, universal_newlines=True) 1051 | if debug: 1052 | eprint('Debug:', 'JAVA PID: {}'.format(process.pid)) 1053 | cur_time = datetime.now() 1054 | self.summary.add_info('End Time', cur_time) 1055 | # f.write('; END TIME: {}\n'.format(cur_time)) 1056 | 1057 | self.run_distributed_workers() 1058 | 1059 | for msg_line in iter(process.stdout.readline, ''): 1060 | if msg_line == '': # sentinel 1061 | process_message() 1062 | break 1063 | self.log_lines.append(msg_line) 1064 | if self.log_file: 1065 | self.log_file.write(msg_line) 1066 | self.log_file.flush() 1067 | msg_line = msg_line.rstrip() 1068 | if message_code == -1 and msg_line.startswith('@!@!@STARTMSG'): 1069 | process_message() 1070 | message_code, message_type = tuple(int(i) for i in msg_line.split(' ')[1].split(':')) 1071 | elif message_code != -1 and msg_line.startswith('@!@!@ENDMSG ' + str(message_code)): 1072 | process_message() 1073 | message_code, message_type = -1, -1 1074 | tmp_lines = [] 1075 | else: 1076 | tmp_lines.append(msg_line) 1077 | if (finish_flag == True and self.distributed_mode == True) or interrupt_flag: 1078 | process.terminate() 1079 
| 1080 | exit_state = process.poll() 1081 | self.result['exit state'] = 0 if exit_state is None else exit_state 1082 | self.summary.add_info('Exit Status', self.result['exit state']) 1083 | self.summary.add_info('Warnings', len(self.result['warnings'])) 1084 | self.summary.add_info('Errors', len(self.result['errors'])) 1085 | if len(self.result['error trace']): 1086 | self.summary.add_info('Error Trace Depth', len(self.result['error trace']), force=True) 1087 | with open(self.default_mc_ini, 'a') as f: 1088 | cur_time = datetime.now() 1089 | self.summary.add_info('End Time', cur_time) 1090 | self.summary.add_info('Duration', self.summary.current['End Time'] - self.summary.current['Start Time']) 1091 | f.write('; END TIME: {}\n'.format(cur_time)) 1092 | return self.result 1093 | 1094 | def get_log(self): 1095 | return self.log_lines 1096 | 1097 | def get_summary(self): 1098 | return self.summary 1099 | 1100 | def save_log(self, filename=None): 1101 | """save tlc output to file""" 1102 | if filename is None: 1103 | filename = self.default_mc_log 1104 | with open(filename, 'w') as f: 1105 | f.writelines(self.log_lines) 1106 | 1107 | def save_coverage(self, filename=None): 1108 | """save coverage msg to file if it has coverage msgs""" 1109 | if filename is None: 1110 | filename = self.default_mc_coverage 1111 | if len(self.result['coverage']) != 0: 1112 | with open(filename, 'w') as f: 1113 | f.write('\n'.join(self.result['coverage'])) 1114 | f.write('\n') 1115 | 1116 | def print_result(self): 1117 | for _, msg in self.result['warnings']: 1118 | xprint('Warning: ' + msg) 1119 | for _, msg in self.result['errors']: 1120 | xprint('Error: ' + msg) 1121 | for _, msg in self.result['error trace']: 1122 | xprint(msg) 1123 | xprint('Status: errors: {}, warnings: {}, exit_state: {}'.format( 1124 | len(self.result['errors']), len(self.result['warnings']), self.result['exit state'])) 1125 | 1126 | 1127 | def main(config_file, summary_file=None, separate_constants=None, 
classpath='', need_community_modules=False, 1128 | log_output=False): 1129 | summary = Summary() 1130 | options = tuple() 1131 | for options, config_stringio in BatchConfig(config_file, summary).get(): 1132 | xprint('\n{}'.format('#' * 16)) 1133 | tlc = TLCWrapper(config_stringio, summary=summary, gen_tla_constants_fn=separate_constants, 1134 | classpath=classpath, need_community_modules=need_community_modules, log_output=log_output) 1135 | if options: 1136 | xprint('Options:') 1137 | for i in options: 1138 | xprint(' ', i.replace('\n', '\n ')) 1139 | xprint('-' * 16) 1140 | tlc.run() 1141 | xprint('-' * 16) 1142 | tlc.print_result() 1143 | summary.finish_current() 1144 | del tlc 1145 | xprint('=' * 16) 1146 | xprint(summary) 1147 | if summary_file or (summary_file is None and options): 1148 | if isinstance(summary_file, str): 1149 | name = summary_file 1150 | else: 1151 | config_file_name = config_file if not hasattr(config_file, 'read') else 'stdin' 1152 | name = "MC_summary_{}_{}.csv".format( 1153 | os.path.basename(config_file_name).replace('.ini', ''), datetime.now().strftime("%Y-%m-%d_%H-%M-%S")) 1154 | summary.print_to_file(name) 1155 | 1156 | 1157 | def raw_run(config_file, is_print_cmd=False, separate_constants=None, classpath='', need_community_modules=False): 1158 | for _, config_stringio in BatchConfig(config_file).get(): 1159 | tlc = TLCWrapper(config_stringio, log_file=None, is_split_user_file=False, 1160 | gen_tla_constants_fn=separate_constants, classpath=classpath, need_community_modules=need_community_modules) 1161 | if is_print_cmd: 1162 | xprint(tlc.get_cmd_str()) 1163 | else: 1164 | tlc.raw_run() 1165 | del tlc 1166 | 1167 | 1168 | if __name__ == '__main__': 1169 | parser = argparse.ArgumentParser(description="Run TLC in CMD") 1170 | 1171 | parser.add_argument('-j', dest='classpath', action='store', required=False, 1172 | help='Java classpath to use') 1173 | parser.add_argument('-g', dest='get_cmd', action='store_true', required=False, 1174 | 
help="Generate TLC config files and print Java CMD strings") 1175 | parser.add_argument('-r', dest='raw_run', action='store_true', required=False, 1176 | help="Run without processing TLC output") 1177 | parser.add_argument('-s', dest='no_summary', action='store_true', required=False, 1178 | help="Do not save summary file", default=False) 1179 | parser.add_argument('-d', dest='download_jar', action='store_true', required=False, 1180 | help="Download tla2tools.jar and CommunityModules-deps.jar and exit", default=False) 1181 | parser.add_argument('-D', dest='stable_version', action='store_true', required=False, default=False, 1182 | help="Delete existing jars and download the stable version instead of the latest version") 1183 | parser.add_argument('-c', dest='separate_constants', action='store_true', required=False, 1184 | help="Separate constants and model options into two files", default=False) 1185 | parser.add_argument(dest='config_ini', metavar='config.ini', action='store', nargs='?', 1186 | help='Configuration file (if not given, stdin is used)') 1187 | parser.add_argument('-m', dest='community_modules', action='store_true', required=False, 1188 | help='Require community modules') 1189 | parser.add_argument('-n', dest='no_debug', action='store_true', required=False, 1190 | help='Do not print debug messages') 1191 | 1192 | args = parser.parse_args() 1193 | 1194 | if args.download_jar or args.stable_version: 1195 | if args.stable_version: 1196 | eprint('Info: please note that the stable TLC version has limited support for CommunityModules') 1197 | is_latest = not args.stable_version 1198 | TLCWrapper.download_tla2tools(latest=is_latest, overwrite=args.stable_version) 1199 | TLCWrapper.download_community_modules(latest=is_latest, overwrite=args.stable_version) 1200 | exit(0) 1201 | if args.no_debug: 1202 | debug = False 1203 | if not args.config_ini: 1204 | args.config_ini = sys.stdin 1205 | if args.get_cmd: 1206 | raw_run(args.config_ini, is_print_cmd=True,
separate_constants=args.separate_constants, 1207 | classpath=args.classpath, need_community_modules=args.community_modules) 1208 | elif args.raw_run: 1209 | raw_run(args.config_ini, is_print_cmd=False, separate_constants=args.separate_constants, 1210 | classpath=args.classpath, need_community_modules=args.community_modules) 1211 | else: 1212 | main(args.config_ini, not args.no_summary, separate_constants=args.separate_constants, 1213 | classpath=args.classpath, need_community_modules=args.community_modules, log_output=True) 1214 | -------------------------------------------------------------------------------- /trace_action_counter.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: UTF-8 -*- 3 | 4 | import argparse 5 | import os 6 | import time 7 | import sys 8 | from collections import defaultdict 9 | from trace_reader import TraceReader 10 | from multiprocessing import Pool, cpu_count 11 | 12 | tr = TraceReader(save_action_name=True, hashable=True) 13 | 14 | # Data 15 | class SimulationSummaryData: 16 | @staticmethod 17 | def default_0(): 18 | return 0 19 | def __init__(self) -> None: 20 | self.processed_files = set() 21 | self.total_states = 0 22 | self.total_actions = defaultdict(self.default_0) 23 | self.diameters = defaultdict(self.default_0) 24 | self.states = dict() 25 | self.distinct_actions = None 26 | 27 | 28 | # Mapper 29 | class SimulationSummaryMapper: 30 | def __init__(self, is_delete=False, finish_file='MC.out'): 31 | self.is_delete = is_delete 32 | self.finish_file = finish_file 33 | 34 | def process_file(self, fn): 35 | diameter = 0 36 | data = SimulationSummaryData() 37 | for state in tr.trace_reader(fn): 38 | diameter += 1 39 | action = state['_action'] 40 | state['_action'] = 0 41 | state_hash = hash(state) 42 | if state_hash not in data.states: 43 | data.states[state_hash] = action 44 | else: 45 | pass # we do not check for equality when hashes collide 46 | 
data.total_states += 1 47 | data.total_actions[action] += 1 48 | if diameter: 49 | data.diameters[diameter] += 1 50 | data.processed_files.add(fn) 51 | if self.is_delete and fn != self.finish_file: 52 | os.remove(fn) 53 | return data 54 | 55 | 56 | # Task submitter, reducer and printer 57 | class ProgressManager: 58 | def __init__(self, nproc, is_delete=False, trace_dir=None, finish_file='MC.out', 59 | period=5, period_ntrace=0, logfile=None): 60 | if logfile is not None: 61 | self.logfile = open(logfile, 'w') 62 | else: 63 | self.logfile = sys.stdout 64 | if trace_dir is not None: 65 | os.chdir(trace_dir) 66 | self.prev_time = 0 67 | self.nproc = nproc 68 | self.pool = Pool(processes=self.nproc) 69 | self.mapper = SimulationSummaryMapper(is_delete=is_delete, finish_file=finish_file) 70 | self.data = SimulationSummaryData() 71 | self.submitted = 0 72 | self.finish_file = finish_file 73 | self.period = period 74 | self.period_ntrace = period_ntrace 75 | self.results = [] 76 | 77 | def reduce(self, data: SimulationSummaryData, reduce_actions=False): 78 | if data is not None: 79 | self.data.processed_files.update(data.processed_files) 80 | self.data.total_states += data.total_states 81 | self.data.states.update(data.states) 82 | for j in data.diameters: 83 | self.data.diameters[j] += data.diameters[j] 84 | for j in data.total_actions: 85 | self.data.total_actions[j] += data.total_actions[j] 86 | if reduce_actions: 87 | self.data.distinct_actions = defaultdict(lambda: 0) 88 | for value in self.data.states.values(): 89 | self.data.distinct_actions[value] += 1 90 | 91 | def reduce_result(self, reduce_actions=False): 92 | for i in range(len(self.results)-1, -1, -1): 93 | r = self.results[i] 94 | if r.ready(): 95 | self.reduce(r.get(), reduce_actions=reduce_actions) 96 | del self.results[i] 97 | else: 98 | self.print_progress() 99 | continue 100 | if self.period_ntrace: 101 | if len(self.data.processed_files) % self.period_ntrace == 0: 102 | self.print_progress(period=0)
103 | else: 104 | self.print_progress() 105 | 106 | def print(self, *args, **kwargs): 107 | print(*args, **kwargs, file=self.logfile, flush=True) 108 | 109 | def print_progress(self, period=None): 110 | current_time = time.time() 111 | if period is None: 112 | period = self.period 113 | if current_time - self.prev_time >= period: 114 | p_ratio = 0 if self.submitted == 0 else len(self.data.processed_files) / self.submitted 115 | s_ratio = 0 if self.data.total_states == 0 else len(self.data.states) / self.data.total_states 116 | self.print('Processed: {}/{} ({:.3g}%), distinct/total states: {}/{} ({:.3g}%)'.format( 117 | len(self.data.processed_files), self.submitted, p_ratio * 100, 118 | len(self.data.states), self.data.total_states, s_ratio * 100)) 119 | if period < 0: 120 | self.print('Diameters:') 121 | for key, value in sorted(self.data.diameters.items(), key=lambda x: x[0]): 122 | self.print(" {} : {}".format(key, value)) 123 | self.print('Actions:') 124 | for k in self.data.total_actions: 125 | self.print(' ', k, ':', self.data.distinct_actions[k], '/', self.data.total_actions[k]) 126 | self.prev_time = current_time 127 | 128 | def map(self, fn): 129 | self.results.append(self.pool.apply_async(self.mapper.process_file, args=(fn,))) 130 | self.submitted += 1 131 | 132 | def is_trace_file(self, fn): 133 | return fn.startswith("trace_") or fn == self.finish_file 134 | 135 | def iterate_dir(self): 136 | for i in os.listdir(): 137 | if self.is_trace_file(i): 138 | self.map(i) 139 | self.print_progress() 140 | self.pool.close() 141 | self.print('Map finished') 142 | while True: 143 | self.reduce_result() 144 | if len(self.results) == 0: 145 | self.reduce(data=None, reduce_actions=True) 146 | break 147 | self.print_progress() 148 | self.print('Reduce finished') 149 | self.pool.join() 150 | self.print_progress(period=-1) 151 | 152 | 153 | if __name__ == '__main__': 154 | parser = argparse.ArgumentParser(description='Get simulation mode summary') 155 | 
parser.add_argument(dest='trace_dir', action='store', help='Trace dir') 156 | parser.add_argument('-r', dest='remove', action='store_true', help='Remove processed files') 157 | parser.add_argument('-p', dest='nproc', action='store', type=int, default=cpu_count(), help='Number of processes') 158 | parser.add_argument('-n', dest='ntrace', action='store', type=int, default=0, help='Print progress every n traces') 159 | parser.add_argument('-l', dest='logfile', action='store', help='Log output to file') 160 | args = parser.parse_args() 161 | process_man = ProgressManager(nproc=args.nproc, is_delete=args.remove, trace_dir=args.trace_dir, 162 | period_ntrace=args.ntrace, logfile=args.logfile) 163 | process_man.iterate_dir() 164 | -------------------------------------------------------------------------------- /trace_counter.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- coding: UTF-8 -*- 3 | 4 | import argparse 5 | import os 6 | import time 7 | import sys 8 | from trace_reader import TraceReader 9 | from multiprocessing import Pool, cpu_count 10 | 11 | tr = TraceReader(hashable=True) 12 | hash_file = None 13 | default_hash_filename = 'hashfile' 14 | 15 | # Mapper 16 | def process_file(fn): 17 | l = [] 18 | for state in tr.trace_reader(fn): 19 | if '_hash' in state: 20 | del state['_hash'] 21 | if '_action' in state: 22 | del state['_action'] 23 | l.append(str(hash(state))) 24 | hash_file.write('{}\n'.format(' '.join(l))) 25 | 26 | 27 | # Task submitter, reducer and printer 28 | class ProgressManager: 29 | def __init__(self, hashfile, nproc, trace_dir=None, period=5, period_ntrace=0, logfile=None): 30 | if logfile is not None: 31 | self.logfile = open(logfile, 'w') 32 | else: 33 | self.logfile = sys.stdout 34 | self.hash_file_ro = open(hashfile, 'r') 35 | if trace_dir is not None: 36 | os.chdir(trace_dir) 37 | self.prev_time = 0 38 | self.nproc = nproc 39 | self.pool = Pool(processes=self.nproc) 40 | 
self.mapper = process_file 41 | self.submitted = 0 42 | self.processed = 0 43 | self.total_states = 0 44 | self.period = period 45 | self.period_ntrace = period_ntrace 46 | self.states = set() 47 | self.traces = set() 48 | 49 | def reduce(self): 50 | for line in self.hash_file_ro: 51 | self.traces.add(line) 52 | states = line.strip().split() 53 | self.states.update(map(int, states)) 54 | self.processed += 1 55 | self.total_states += len(states) 56 | if self.period_ntrace > 0 and self.processed % self.period_ntrace == 0: 57 | self.print_progress(period=0) 58 | else: 59 | self.print_progress() 60 | self.print_progress(period=0) 61 | 62 | def print(self, *args, **kwargs): 63 | print(*args, **kwargs, file=self.logfile, flush=True) 64 | 65 | def print_progress(self, period=None): 66 | current_time = time.time() 67 | if period is None: 68 | period = self.period 69 | if current_time - self.prev_time >= period: 70 | processed = self.processed if self.processed > 0 else self.submitted - len(self.pool._cache) 71 | submitted = self.submitted if self.submitted > 0 else processed 72 | p_ratio = 0 if submitted == 0 else processed / submitted 73 | if self.processed == 0: 74 | self.print('processed/total traces: {}/{} ({:.3g}%)'.format( 75 | processed, self.submitted, p_ratio * 100)) 76 | else: 77 | u_ratio = 0 if processed == 0 else len(self.traces) / processed 78 | s_ratio = 0 if self.total_states == 0 else len(self.states) / self.total_states 79 | self.print('unique/processed/total traces: {}/{}/{} ({:.3g}% {:.3g}%), distinct/total states: {}/{} ({:.3g}%)'.format( 80 | len(self.traces), processed, submitted, u_ratio * 100, p_ratio * 100, 81 | len(self.states), self.total_states, s_ratio * 100)) 82 | self.prev_time = current_time 83 | 84 | def map(self, fn): 85 | self.pool.apply_async(self.mapper, args=(fn,)) 86 | self.submitted += 1 87 | 88 | def is_trace_file(self, fn: str): 89 | return fn.startswith("trace_") or fn in {'MC.out', 'MC_states.dot'} 90 | 91 | def 
iterate_dir(self): 92 | for i in os.listdir(): 93 | if self.is_trace_file(i): 94 | self.map(i) 95 | self.print_progress() 96 | self.pool.close() 97 | self.print('Submit finished') 98 | while len(self.pool._cache) != 0: 99 | time.sleep(self.period) 100 | self.print_progress() 101 | self.pool.join() 102 | self.print('Map finished') 103 | self.reduce() 104 | self.print('Reduce finished') 105 | 106 | 107 | if __name__ == '__main__': 108 | parser = argparse.ArgumentParser(description='Simulation unique traces and distinct states counter') 109 | parser.add_argument(dest='trace_dir', action='store', help='Trace dir') 110 | parser.add_argument('-p', dest='nproc', action='store', type=int, default=cpu_count(), help='Number of processes') 111 | parser.add_argument('-n', dest='ntrace', action='store', type=int, default=0, help='Print progress every n traces') 112 | parser.add_argument('-l', dest='logfile', action='store', help='Log output to file') 113 | parser.add_argument('-f', dest='hashfile', action='store', help='Hash file') 114 | parser.add_argument('-r', dest='reduce', action='store_true', help='Reduce only') 115 | args = parser.parse_args() 116 | if args.hashfile is None: 117 | args.hashfile = default_hash_filename 118 | if not args.reduce: 119 | if os.path.exists(args.hashfile): 120 | os.remove(args.hashfile) 121 | hash_file = open(args.hashfile, 'a', buffering=1) 122 | process_man = ProgressManager(hashfile=args.hashfile, nproc=args.nproc, trace_dir=args.trace_dir, 123 | period_ntrace=args.ntrace, logfile=args.logfile) 124 | process_man.iterate_dir() 125 | else: 126 | process_man = ProgressManager(hashfile=args.hashfile, nproc=args.nproc, trace_dir=args.trace_dir, 127 | period_ntrace=args.ntrace, logfile=args.logfile) 128 | process_man.reduce() 129 | -------------------------------------------------------------------------------- /trace_generator.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | # -*- 
coding: UTF-8 -*- 3 | 4 | import networkx as nx 5 | import argparse 6 | import time 7 | import os 8 | from multiprocessing import Pool, cpu_count 9 | from trace_reader import get_dot_label_string 10 | 11 | 12 | parser = argparse.ArgumentParser(description='Generate all simple paths of a dot file') 13 | parser.add_argument(dest='dot_file', action='store', help='Dot file') 14 | parser.add_argument('-p', dest='nproc', action='store', type=int, default=cpu_count(), help='Number of processes') 15 | parser.add_argument('-s', dest='save_dir', action='store', help='Save all generated traces') 16 | args = parser.parse_args() 17 | 18 | 19 | def read_dot(dot_file, save_states=False): 20 | g = nx.DiGraph() 21 | s = dict() 22 | with open(dot_file) as f: 23 | for line in f: 24 | if ' -> ' in line: 25 | a, x = line.rstrip(';\n').split(' -> ') 26 | a = int(a) 27 | b = int(x.split(' ', maxsplit=1)[0]) 28 | g.add_edge(a, b) 29 | elif save_states and ' [label="' in line: 30 | state_hash, label = get_dot_label_string(line) 31 | s[state_hash] = label 32 | return g, s 33 | 34 | 35 | print('Reading dot file ... ', end='') 36 | G, S = read_dot(args.dot_file, save_states=args.save_dir is not None) 37 | roots = [v for v, d in G.in_degree() if d == 0] 38 | leaves = [v for v, d in G.out_degree() if d == 0] 39 | print('done. 
root: {}, leaves: {}, vertices: {}'.format(len(roots), len(leaves), len(G.nodes()))) 40 | all_paths = 0 41 | leaves_processed = 0 42 | leaves_submitted = 0 43 | if args.save_dir: 44 | save_dir = os.path.join(args.save_dir, 'trace_') 45 | os.makedirs(args.save_dir, exist_ok=True) 46 | else: 47 | save_dir = None 48 | 49 | prev_time = time.time() 50 | def print_progress(period=5): 51 | global prev_time 52 | curr_time = time.time() 53 | if curr_time - prev_time >= period: 54 | ratio = 0 if len(leaves) == 0 else leaves_processed / len(leaves) 55 | print('Processed/submitted/all: {}/{}/{} ({:.3g}%), all paths: {}'.format( 56 | leaves_processed, leaves_submitted, len(leaves) * len(roots), ratio * 100, all_paths)) 57 | prev_time = curr_time 58 | 59 | 60 | def process_path(a, b, save_fn_prefix=None): 61 | if save_fn_prefix is None: 62 | return sum(1 for _ in nx.all_simple_paths(G, a, b)) 63 | else: 64 | path_cnt = 0 65 | for path in nx.all_simple_paths(G, a, b): 66 | fn = save_fn_prefix + str(path_cnt) 67 | lines = [] 68 | lines.append('-' * 16 + ' MODULE {} '.format(os.path.basename(fn)) + '-' * 16 + '\n') 69 | for i, h in enumerate(path): 70 | lines.append('STATE {} ==\n'.format(i + 1)) 71 | lines.append(S[h]) 72 | lines.append('\n\n') 73 | lines.append('=' * 49 + '\n') 74 | path_cnt += 1 75 | with open(fn, 'w') as f: 76 | f.writelines(lines) 77 | return path_cnt 78 | 79 | 80 | pool = Pool(processes=args.nproc) 81 | results = [] 82 | leaf_cnt = 0 83 | for root in roots: 84 | for leaf in leaves: 85 | save_prefix = None if save_dir is None else save_dir + str(leaf_cnt) + '_' 86 | results.append(pool.apply_async(process_path, args=(root, leaf, save_prefix))) 87 | leaves_submitted += 1 88 | leaf_cnt += 1 89 | print_progress() 90 | pool.close() 91 | 92 | print('Submit finished') 93 | 94 | def reduce_results(): 95 | global results, all_paths, leaves_processed 96 | for i in range(len(results)-1, -1, -1): 97 | r = results[i] 98 | if r.ready(): 99 | all_paths += r.get() 100 | 
leaves_processed += 1 101 | del results[i] 102 | print_progress() 103 | 104 | while True: 105 | reduce_results() 106 | if len(results) == 0: 107 | break 108 | 109 | pool.join() 110 | print('Map/reduce finished') 111 | print_progress(period=0) 112 | -------------------------------------------------------------------------------- /trace_reader.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | # usage: python3 trace_reader.py -h 4 | 5 | import os 6 | import sys 7 | from collections import OrderedDict, defaultdict 8 | 9 | class TraceReader: 10 | LIST_IS_SEQ = "seq" 11 | LIST_IS_SET = "set" 12 | 13 | def __init__(self, save_action_name=False, hashable=False, sort_dict=False, 14 | handler_py=None): 15 | self._matching = {'{': ('}', self._braces), '<': ('$', self._chevrons), 16 | '[': (']', self._brackets), '(': (')', self._parentheses)} 17 | 18 | self._string_dict = {"TRUE": True, "FALSE": False} 19 | self._user_dict = dict() 20 | self._kv_outside_handler = lambda k, v: (k, v) 21 | self._kv_inside_handler = lambda k, v: (k, v) 22 | self._list_handler = lambda s, k: s 23 | self.set_handlers(handler_py) 24 | self.save_action_name = save_action_name 25 | self.sort_dict = sort_dict 26 | self.hashable = hashable 27 | if self.hashable: 28 | self.sort_dict = True 29 | 30 | 31 | # set callback handlers from ENV 'HANDLER_PY' 32 | def set_handlers(self, handler_py=None): 33 | if handler_py is None: 34 | handler_py = os.getenv('HANDLER_PY') 35 | if handler_py is None: 36 | return 37 | try: 38 | sys.path.insert(0, os.path.dirname(handler_py)) 39 | handler_module = __import__( 40 | os.path.basename(handler_py).replace('.py', '')) 41 | sys.path.pop(0) 42 | except ModuleNotFoundError: 43 | print("Warning: cannot import module '{}'".format(handler_py), 44 | file=sys.stderr) 45 | handler_module = None 46 | 47 | if hasattr(handler_module, "user_dict"): 48 | self.set_user_dict(handler_module.user_dict) 49 | if 
hasattr(handler_module, "list_handler"): 50 | self.set_list_handler(handler_module.list_handler) 51 | if hasattr(handler_module, "outside_kv_handler"): 52 | self.set_kv_handler(handler_module.outside_kv_handler, inside=False) 53 | if hasattr(handler_module, "inside_kv_handler"): 54 | self.set_kv_handler(handler_module.inside_kv_handler, inside=True) 55 | 56 | 57 | # find '}$])' for '{<[(' 58 | def _find_next_match(self, string): 59 | level = 0 60 | char = string[0] 61 | match_char = self._matching[char][0] 62 | for i, c in enumerate(string): 63 | if c == char: 64 | level += 1 65 | elif c == match_char: 66 | level -= 1 67 | if level == 0: 68 | return i 69 | assert False 70 | 71 | 72 | def _post_process_list(self, l, kind): 73 | l = self._list_handler(l, kind) 74 | if self.hashable: 75 | if kind is self.LIST_IS_SET: 76 | l = frozenset(l) 77 | elif kind is self.LIST_IS_SEQ: 78 | l = tuple(l) 79 | return l 80 | 81 | 82 | # return a list 83 | def _lists(self, string, kind): 84 | l = list() 85 | if not string: 86 | return self._post_process_list(l, kind) 87 | processed, pos, length = 0, 0, len(string) 88 | while pos < length: 89 | if string[pos] in self._matching or string[pos] == ',': 90 | if string[pos] != ',': 91 | pos = pos + self._find_next_match(string[pos:]) + 1 92 | l.append(self._variable_converter(string[processed:pos])) 93 | processed = pos + 2 94 | pos += 1 95 | pos += 1 96 | if pos != processed: 97 | l.append(self._variable_converter(string[processed:pos])) 98 | return self._post_process_list(l, kind) 99 | 100 | 101 | # < string $ 102 | def _chevrons(self, string): 103 | return self._lists(string, kind=self.LIST_IS_SEQ) 104 | 105 | 106 | # { string } (we treat set as list) 107 | def _braces(self, string): 108 | return self._lists(string, kind=self.LIST_IS_SET) 109 | 110 | 111 | # make dicts hashable if hash_data is True 112 | class HashableDict(OrderedDict): 113 | def __hash__(self): 114 | return hash(frozenset(self.items())) 115 | 116 | 117 | # sort dict or 
make dict hashable 118 | def _post_process_dict(self, d): 119 | if self.hashable: 120 | return self.HashableDict(sorted(d.items())) 121 | elif self.sort_dict: 122 | return OrderedDict(sorted(d.items())) 123 | else: 124 | return d 125 | 126 | 127 | # return a dict 128 | def _dict_common(self, string, arrow, sep, value_seq_len): 129 | d = dict() if not self.hashable else self.HashableDict() 130 | processed, pos, length = 0, 0, len(string) 131 | key, value = '', '' 132 | while True: 133 | if string[pos] == arrow[0]: 134 | key = string[processed:pos - 1] 135 | pos += len(arrow) 136 | processed = pos + 1 137 | if string[pos] == sep[0]: 138 | value = string[processed:pos - value_seq_len] 139 | pos += len(sep) 140 | processed = pos + 1 141 | key, value = self._kv_inside_handler( 142 | key, self._variable_converter(value)) 143 | d[key] = value 144 | pos += 1 145 | if pos >= length: 146 | break 147 | if string[pos] in self._matching: 148 | pos = pos + self._find_next_match(string[pos:]) 149 | value = string[processed:] 150 | key, value = self._kv_inside_handler( 151 | key, self._variable_converter(value)) 152 | d[key] = value 153 | return self._post_process_dict(d) 154 | 155 | 156 | # [ string ] 157 | def _brackets(self, string): 158 | return self._dict_common(string, '|->', ',', 0) 159 | 160 | 161 | # ( string ) 162 | def _parentheses(self, string): 163 | return self._dict_common(string, ':>', '@@', 1) 164 | 165 | 166 | # convert string to python variable 167 | def _variable_converter(self, string): 168 | if string[0] in self._matching: 169 | return self._matching[string[0]][1](string[1:-1].strip()) 170 | if string in self._user_dict: 171 | return self._user_dict[string] 172 | if string in self._string_dict: 173 | return self._string_dict[string] 174 | if string[0] == '"': 175 | return string[1:-1] 176 | try: 177 | return int(string) 178 | except ValueError: 179 | return string 180 | 181 | 182 | # callback handlers 183 | def set_user_dict(self, user_dict): 184 | 
self._user_dict = user_dict 185 | 186 | 187 | def set_kv_handler(self, kv_handler, inside=False): 188 | if inside: 189 | self._kv_inside_handler = kv_handler 190 | else: 191 | self._kv_outside_handler = kv_handler 192 | 193 | 194 | def set_list_handler(self, list_handler): 195 | self._list_handler = list_handler 196 | 197 | 198 | # convert MC.out to trace file 199 | @staticmethod 200 | def get_out_converted_string(file): 201 | if not hasattr(file, 'read'): 202 | f = open(file) 203 | else: 204 | f = file 205 | 206 | n_state = 0 207 | start_msg = 'The behavior up to this point is:' 208 | end_msg = ['Progress', 'The number of states generated', 'Worker: rmi'] 209 | for line in f: 210 | if 'TLC Server' in line: 211 | continue 212 | if line[0] != '@': 213 | start_msg = 'Error: ' + start_msg 214 | break 215 | for line in f: 216 | if line.startswith(start_msg): 217 | yield '-' * 16 + ' MODULE MC_trace ' + '-' * 16 + '\n' 218 | break 219 | for line in f: 220 | if line[0] in '/ ': 221 | yield line 222 | elif line.startswith('State') or line[0].isdigit(): 223 | yield r'\*' + line[line.find(':')+1:] 224 | n_state = n_state + 1 225 | yield 'STATE_{} == \n'.format(n_state) 226 | elif line == '\n': 227 | yield '\n' * 2 228 | elif any(line.startswith(x) for x in end_msg): 229 | yield '=' * 49 + '\n' 230 | break 231 | else: 232 | if line[0] != '@': 233 | pass 234 | 235 | f.close() 236 | 237 | 238 | @staticmethod 239 | def get_dot_label_string(line): 240 | space_idx = line.find(' ') 241 | state_hash = line[:space_idx] 242 | label_end_idx = 1 if line[-2] == ';' else 0 243 | label_end_idx += 2 if line[-label_end_idx-3] == '"' else 17 244 | label = eval(line[space_idx+8:-label_end_idx]) + '\n' 245 | return int(state_hash), label 246 | 247 | 248 | @staticmethod 249 | def get_dot_converted_string(file): 250 | if not hasattr(file, 'read'): 251 | f = open(file) 252 | else: 253 | f = file 254 | yield '-' * 16 + ' MODULE MC_dot ' + '-' * 16 + '\n' 255 | for line in f: 256 | if ' [label="' 
in line: 257 | state_hash, label = TraceReader.get_dot_label_string(line) 258 | yield 'STATE_{} == \n'.format(state_hash) 259 | yield from label.splitlines(keepends=True) 260 | yield '\n' * 2 261 | yield '=' * 49 + '\n' 262 | f.close() 263 | 264 | 265 | def get_dot_graph(self, file): 266 | if not hasattr(file, 'read'): 267 | f = open(file) 268 | else: 269 | f = file 270 | g = defaultdict(lambda: []) 271 | for line in f: 272 | if ' -> ' in line: 273 | a, b = map(int, line.rstrip(';\n').split(' -> ')) 274 | g[a].append(b) 275 | return self._post_process_dict(g) 276 | 277 | 278 | @staticmethod 279 | def get_action_name(line): 280 | start = line.find('<') 281 | end = line.find(' ', start) 282 | if end > start > 0: 283 | return line[start+1:end] 284 | return None 285 | 286 | 287 | # read trace file and yield states as python objects 288 | def trace_reader_with_state_str(self, file): 289 | if not hasattr(file, 'read'): 290 | f = open(file) 291 | else: 292 | f = file 293 | 294 | starting_chars = f.read(2) 295 | is_dot_file = False 296 | if starting_chars != '--': 297 | if starting_chars == '@!': 298 | f = self.get_out_converted_string(f) 299 | elif starting_chars == 'st': 300 | f = self.get_dot_converted_string(f) 301 | is_dot_file = True 302 | else: 303 | return 304 | 305 | state = dict() 306 | variable = "" 307 | lines = [] 308 | cur_action = None 309 | cur_action_line = None 310 | for line in f: 311 | if line.startswith(r'\*'): 312 | cur_action_line = line 313 | if self.save_action_name: 314 | cur_action = self.get_action_name(line) 315 | elif line[0] in "-=S": 316 | if state: 317 | state = self._post_process_dict(state) 318 | yield state, ''.join(lines).strip() 319 | state = dict() 320 | if cur_action is not None: 321 | state['_action'] = cur_action 322 | if is_dot_file and line[0] == 'S': 323 | state['_hash'] = int(line[6:-4]) 324 | lines = [] if cur_action_line is None else [cur_action_line] 325 | elif line[0] in "/\n": 326 | if variable: 327 | k, v = 
variable.split('=') 328 | k, v = k.rstrip(), v.lstrip() 329 | # '<<'/'>>' were rewritten to 1-char tokens; '$' is a unique stand-in for '>>' 330 | k, v = self._kv_outside_handler(k, self._variable_converter( 331 | v.replace('<<', '<').replace('>>', '$'))) 332 | state[k] = v 333 | variable = line.strip()[3:] 334 | lines.append(line) 335 | else: 336 | variable += " " + line.strip() 337 | lines.append(line) 338 | 339 | f.close() 340 | 341 | 342 | def trace_reader(self, file): 343 | for state, _ in self.trace_reader_with_state_str(file): 344 | yield state 345 | 346 | 347 | get_dot_label_string = TraceReader.get_dot_label_string 348 | get_out_converted_string = TraceReader.get_out_converted_string 349 | get_dot_converted_string = TraceReader.get_dot_converted_string 350 | 351 | 352 | if __name__ == '__main__': 353 | import json 354 | import argparse 355 | 356 | # arg parser 357 | parser = argparse.ArgumentParser( 358 | description="Read TLA traces into Python objects") 359 | 360 | parser.add_argument(dest='trace_file', action='store', 361 | help='TLA trace file') 362 | parser.add_argument('-o', dest='json_file', action='store', required=False, 363 | help="output to json file") 364 | parser.add_argument('-i', dest='indent', action='store', required=False, 365 | type=int, help="json file indent") 366 | parser.add_argument('-p', dest='handler', action='store', required=False, 367 | help="python user_dict and list/kv handlers") 368 | parser.add_argument('-a', dest='action', action='store_true', 369 | required=False, 370 | help="save action name in '_action' key if available") 371 | parser.add_argument('-d', dest='hash_data', action='store_true', 372 | required=False, 373 | help="make data structures hashable") 374 | parser.add_argument('-s', dest='sort_keys', action='store_true', 375 | required=False, 376 | help="sort dict by keys, true if -d is defined") 377 | parser.add_argument('-g', dest='graph', action='store_true', required=False, 378 | help="get dot file graph") 379 | args = 
parser.parse_args() 380 | 381 | tr = TraceReader(save_action_name=args.action, hashable=args.hash_data, 382 | sort_dict=args.sort_keys, handler_py=args.handler) 383 | 384 | if not args.graph: 385 | states = list(tr.trace_reader(args.trace_file)) 386 | else: 387 | states = tr.get_dot_graph(args.trace_file) 388 | 389 | def serialize_sets(obj): 390 | if isinstance(obj, frozenset): 391 | return tuple(obj) 392 | raise TypeError('Object of type {} is not JSON serializable'.format(type(obj).__name__)) 393 | 394 | 395 | if args.json_file: 396 | with open(args.json_file, 'w') as f: 397 | json.dump(states, f, indent=args.indent, default=serialize_sets) 398 | f.write('\n') 399 | else: 400 | print(json.dumps(states, indent=args.indent, default=serialize_sets)) 401 | --------------------------------------------------------------------------------
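A note for readers of `trace_reader.py`: the parser repeatedly needs the index of the delimiter that closes a leading `{`, `<`, `[`, or `(`, which `TraceReader._find_next_match` finds with a nesting-level counter (`<` pairs with `$` because `trace_reader_with_state_str` first rewrites `<<`/`>>` to the 1-char tokens `<`/`$`). The following standalone function is an illustrative sketch of that scan, not part of the repo:

```python
# Minimal re-implementation of the nesting-level scan in
# TraceReader._find_next_match (illustrative; not part of trace_reader.py).
MATCHING = {'{': '}', '<': '$', '[': ']', '(': ')'}

def find_next_match(string):
    """Return the index of the delimiter matching string[0]."""
    open_char, close_char = string[0], MATCHING[string[0]]
    level = 0
    for i, c in enumerate(string):
        if c == open_char:
            level += 1          # entered one nesting level
        elif c == close_char:
            level -= 1          # left one nesting level
            if level == 0:
                return i        # closed the outermost delimiter
    raise ValueError('unbalanced delimiters in: ' + string)

# TLA+ record [a |-> [b |-> 1], c |-> 2]: the outer ']' sits at index 25
print(find_next_match('[a |-> [b |-> 1], c |-> 2]'))  # -> 25
# sequence <<1, <<2, 3>>>> after the '<<'/'>>' rewrite becomes '<1, <2, 3$$'
print(find_next_match('<1, <2, 3$$'))  # -> 10
```

Scanning with a level counter rather than recursion is what lets `_variable_converter` slice out one balanced sub-expression at a time and recurse only on its contents.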