├── .vscode └── launch.json ├── LICENSE ├── README.md ├── main.py ├── screenshots ├── direct_deps.png ├── indirect_deps.png └── matrix.png └── test ├── a.c ├── b.h ├── blank.h ├── c.h ├── d.h └── test.c /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | // Use IntelliSense to learn about possible attributes. 3 | // Hover to view descriptions of existing attributes. 4 | // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 5 | "version": "0.2.0", 6 | "configurations": [ 7 | { 8 | "name": "Python: Current File", 9 | "type": "python", 10 | "request": "launch", 11 | "program": "${file}", 12 | "console": "integratedTerminal" 13 | } 14 | ] 15 | } -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2020-2024 Jeremy Rifkin 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and 6 | associated documentation files (the "Software"), to deal in the Software without restriction, 7 | including without limitation the rights to use, copy, modify, merge, publish, distribute, 8 | sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is 9 | furnished to do so, subject to the following conditions: 10 | 11 | The above copyright notice and this permission notice shall be included in all copies or substantial 12 | portions of the Software. 13 | 14 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT 15 | NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 16 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES 17 | OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 18 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | This project is now mostly folded into https://github.com/jeremy-rifkin/build-blame 2 | 3 | --- 4 | 5 | This is a small static analysis project to analyze dependency graphs in C/C++ programs. Based off of 6 | a main file in a codebase, this tool will automatically parse, resolve, and traverse includes in 7 | order to build up a dependency graph. That graph is displayed as an adjacency matrix. The transitive 8 | closure of the graph is also displayed reflecting the indirect dependencies between parts of a 9 | codebase. One application of this tool is for analyzing technical debt within a codebase. 10 | 11 | ![](screenshots/matrix.png) 12 | 13 | ![](screenshots/direct_deps.png) 14 | 15 | ![](screenshots/indirect_deps.png) 16 | 17 | Nodes are colored based on how many translation units (.c or .cpp files) transitively include a given header. 18 | 19 | Usage: 20 | ``` 21 | python3 main.py --compile-commands COMPILE_COMMANDS [--exclude EXCLUDE] [--sentinel SENTINEL] 22 | ``` 23 | 24 | By default the script will transitively walk all include headers it can resolve, either based on local resolution rules 25 | or paths specified with `-I` flags in compile_commands.json. If you want to see the includes for an unresolved library 26 | include, e.g. `fmt/format.h`, pass `--sentinel fmt/format.h`. 27 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import colorama 3 | from enum import Enum 4 | import os 5 | import re 6 | import sys 7 | import json 8 | import math 9 | 10 | # 11 | # This is a tool to analyze dependencies within a codebase. 12 | # This code does the absolute bare-minimum C parsing in order to understand include directives. 13 | # No macros are expanded and no conditionals are evaluated. 14 | # There are better and more optimal ways to implement this all, however, it does it's job! And it 15 | # does it well (at least for small codebases). Not a whole lot of value in optimizing an 16 | # inconsequential script. 17 | # 18 | # Includes will form a dependency graph (usually a DAG but not necessarily) and this graph is 19 | # traversed depth-first. No include guards are evaluated but cycles are avoided. 20 | # 21 | # At the moment escape sequences in path-specs are not evaluated. 22 | # 23 | # The code also makes some assumptions about nomenclature. The code assumes YYY.c/cpp and YYY.h are 24 | # part of the same node/unit/module. It also assumes that file/node/unit/module names are unique 25 | # across directories. Furthermore it assumes the file extension of a file has only one segment. 26 | # 27 | # Copyright Jeremy Rifkin 2020-2024 28 | # 29 | 30 | def print_help(): 31 | print("Ussage: python main.py [search directories]") 32 | 33 | Trigraph_translation_table = { 34 | "=": "#", 35 | "/": "\\", 36 | "'": "^", 37 | "(": "[", 38 | ")": "]", 39 | "!": "|", 40 | "<": "{", 41 | ">": "}", 42 | "-": "~" 43 | } 44 | def phase_one(string): 45 | # trigraphs 46 | i = 0 47 | translated_string = "" 48 | while i < len(string): 49 | if string[i] == "?" and i < len(string) - 2 and string[i + 1] == "?" and string[i + 2] in Trigraph_translation_table: 50 | translated_string += Trigraph_translation_table[string[i + 2]] 51 | i += 3 52 | else: 53 | translated_string += string[i] 54 | i += 1 55 | return translated_string 56 | 57 | def phase_two(string): 58 | # backslash followed immediately by newline 59 | i = 0 60 | translated_string = "" 61 | # this is a really dirty way of taking care of line number errors for backslash + \n sequences 62 | line_debt = 0 63 | while i < len(string): 64 | if string[i] == "\\" and i < len(string) - 1 and string[i + 1] == "\n": 65 | i += 2 66 | line_debt += 1 67 | elif string[i] == "\n": 68 | translated_string += "\n" * (1 + line_debt) 69 | line_debt = 0 70 | i += 1 71 | else: 72 | translated_string += string[i] 73 | i += 1 74 | return translated_string 75 | 76 | # lexer rules 77 | lexer_rules = [ 78 | ("COMMENT", r"//.*(?=\n|$)"), 79 | ("MCOMMENT", r"/\*(?:(?!\*/)[\s\S])*\*/"), #r"/\*(?s:.)*\*/"), 80 | ("RAW_STRING", r"(R\"([^ ()\t\r\v\n]*)\((?P(?:(?!\)\5\").)*)\)\5\")"), 81 | ("IDENTIFIER", r"[a-zA-Z_$][a-zA-Z0-9_$]*"), 82 | ("NUMBER", r"[0-9]([eEpP][\-+]?[0-9a-zA-Z.']|[0-9a-zA-Z.'])*"), # basically a ppnumber regex # r"(?:0x|0b)?[0-9a-fA-F]+(?:.[0-9a-fA-F]+)?(?:[eEpP][0-9a-fA-F]+)?(?:u|U|l|L|ul|UL|ll|LL|ull|ULL|f|F)?", 83 | ("STRING", r"\"(?P(?:\\x[0-7]+|\\.|[^\"\\])*)\""), 84 | ("CHAR", r"'(\\x[0-9a-fA-F]+|\\.|[^\'\\])'"), 85 | ("PREPROCESSING_DIRECTIVE", r"(?:#|%:)[a-z]+"), 86 | ("PUNCTUATION", r"[,.<>?/=;:~!#%^&*\-\+|\(\)\{\}\[\]]"), 87 | ("NEWLINE", r"\n"), 88 | ("WHITESPACE", r"[^\S\n]+") #r"\s+" 89 | ] 90 | lexer_ignores = {"COMMENT", "MCOMMENT", "WHITESPACE"} 91 | lexer_regex = "" 92 | class Token: 93 | def __init__(self, token_type, value, line, pos): 94 | self.token_type = token_type 95 | self.value = value 96 | self.line = line 97 | self.pos = pos 98 | # only digraph that needs to be handled 99 | if token_type == "PREPROCESSING_DIRECTIVE": 100 | self.value = re.sub(r"^%:", "#", value) 101 | elif token_type == "NEWLINE": 102 | self.value = "" 103 | def __repr__(self): 104 | if self.value == "": 105 | return "{} {}".format(self.line, self.token_type) 106 | else: 107 | return "{} {} {}".format(self.line, self.token_type, self.value) 108 | def init_lexer(): 109 | global lexer_regex 110 | # for i in range(0, len(lexer_rules), 2): 111 | # name = lexer_rules[i] 112 | # pattern = lexer_rules[i + 1] 113 | for name, pattern in lexer_rules: 114 | lexer_regex += ("" if lexer_regex == "" else "|") + "(?P<{}>{})".format(name, pattern) 115 | # print(lexer_regex) 116 | lexer_regex = re.compile(lexer_regex) 117 | init_lexer() 118 | 119 | def phase_three(string): 120 | # tokenization 121 | tokens = [] 122 | i = 0 123 | line = 1 124 | while True: 125 | if i >= len(string): 126 | break 127 | m = lexer_regex.match(string, i) 128 | if m: 129 | groupname = m.lastgroup 130 | if groupname not in lexer_ignores: 131 | if groupname == "STRING": 132 | tokens.append(Token(groupname, m.group("STRING_CONTENT"), line, i)) 133 | elif groupname == "RAW_STRING": 134 | tokens.append(Token(groupname, m.group("RAW_STRING_CONTENT"), line, i)) 135 | else: 136 | tokens.append(Token(groupname, m.group(groupname), line, i)) 137 | if groupname == "NEWLINE": 138 | line += 1 139 | if groupname == "MCOMMENT": 140 | line += m.group(groupname).count("\n") 141 | i = m.end() 142 | else: 143 | print(string) 144 | print(i) 145 | print(tokens) 146 | print(line) 147 | print("\n\n{}\n\n".format(string[i-5:i+20])) 148 | raise Exception("lexer error") 149 | # TODO ensure there's always a newline token at the end? 150 | return tokens 151 | 152 | def peek_tokens(tokens, seq): 153 | if len(tokens) < len(seq): 154 | return False 155 | for i, token in enumerate(seq): 156 | if type(seq[i]) is tuple: 157 | if not (tokens[i].token_type == seq[i][0] and tokens[i].value == seq[i][1]): 158 | return False 159 | elif tokens[i].token_type != seq[i]: 160 | return False 161 | return True 162 | 163 | def expect(tokens, seq, line, after, expected=None): 164 | good = True 165 | reason = "" 166 | if len(tokens) < len(seq): 167 | good = False 168 | reason = "EOF" 169 | else: 170 | for i, token in enumerate(seq): 171 | if type(seq[i]) is tuple: 172 | if not (tokens[i].token_type == seq[i][0] and tokens[i].value == seq[i][1]): 173 | good = False 174 | reason = "{}".format(tokens[i].token_type) 175 | break 176 | elif tokens[i].token_type != seq[i]: 177 | good = False 178 | reason = "{}".format(tokens[i].token_type) 179 | break 180 | if not good: 181 | if expected is not None: 182 | raise Exception("parse error: expected {} after {} on line {}, found [{}, ...], failed due to {}".format(expected, after, line, tokens[0], reason)) 183 | else: 184 | raise Exception("parse error: unexpected tokens following {} on line {}, failed due to {}".format(after, line, reason)) 185 | 186 | def parse_includes(path: str) -> list: 187 | # get file contents 188 | with open(path, "r") as f: 189 | content = f.read() 190 | # trigraphs 191 | content = phase_one(content) 192 | # backslash newline 193 | content = phase_two(content) 194 | # tokenize 195 | tokens = phase_three(content) 196 | 197 | # print(tokens) 198 | # return 199 | 200 | # process the file 201 | # Preprocessor directives are only valid if they are at the beginning of a line. Code makes 202 | # sure the next token is always at the start of the line going into each loop iteration. 203 | includes = [] # files queued up to process so that logic doesn't get put in the middle of the parse logic 204 | while len(tokens) > 0: 205 | token = tokens.pop(0) 206 | if token.token_type == "PREPROCESSING_DIRECTIVE" and token.value == "#include": 207 | line = token.line 208 | if len(tokens) == 0: 209 | raise Exception("parse error: expected token following #include directive, found nothing") 210 | elif peek_tokens(tokens, ("STRING", )): 211 | path_token = tokens.pop(0) 212 | expect(tokens, ("NEWLINE", ), line, "#include declaration") 213 | tokens.pop(0) # pop eol 214 | print("{} #include \"{}\"".format(line, path_token.value)) 215 | #process_queue.append(path_token.value) 216 | includes.append(path_token.value) 217 | # self.queue_all(process_queue, os.path.join(os.path.dirname(file_path), path_token.value)) 218 | elif peek_tokens(tokens, (("PUNCTUATION", "<"), )): 219 | # because tokens can get weird between the angle brackets, the path is extracted from the raw source 220 | open_bracket = tokens.pop(0) 221 | i = open_bracket.pos + 1 222 | while True: 223 | if i >= len(content): 224 | # error unexpected eof 225 | raise Exception("parse error: unexpected end of file in #include directive on line {}.".format(line)) 226 | if content[i] == ">": 227 | # this is our exit condition 228 | break 229 | elif content[i] == "\n": 230 | # unexpected newline 231 | # don't know if this is technically allowed or not 232 | raise Exception("parse error: unexpected newline in #include directive on line {}.".format(line)) 233 | i += 1 234 | # extract path substring 235 | path = content[open_bracket.pos + 1 : i] 236 | # consume tokens up to the closing ">" 237 | while True: 238 | if len(tokens) == 0: 239 | # shouldn't happen 240 | raise Exception("internal parse error: unexpected eof") 241 | token = tokens.pop(0) 242 | if token.token_type == "PUNCTUATION" and token.value == ">": 243 | # exit condition 244 | break 245 | elif token.token_type == "NEWLINE": 246 | # shouldn't happen 247 | raise Exception("internal parse error: unexpected newline") 248 | expect(tokens, ("NEWLINE", ), line, "#include declaration") 249 | tokens.pop(0) # pop eol 250 | ## # library includes won't be traversed 251 | print("{} #include <{}>".format(line, path)) 252 | includes.append(path) 253 | elif peek_tokens(tokens, ("IDENTIFIER", )): 254 | identifier = tokens.pop(0) 255 | expect(tokens, ("NEWLINE", ), line, "#include declaration") 256 | print("Warning: Ignoring #include {}".format(identifier.value)) 257 | else: 258 | raise Exception("parse error: unexpected token sequence after #include directive on line {}. This may be a valid preprocessing directive and reflect a shortcoming of this parser.".format(line)) 259 | else: 260 | # need to consume the whole line of tokens 261 | while token.token_type != "NEWLINE" and len(tokens) > 0: 262 | token = tokens.pop(0) 263 | return includes 264 | 265 | class Analysis: 266 | def __init__(self, excludes: list, sentinels: list): 267 | self.excludes = excludes 268 | self.sentinels = sentinels 269 | self.not_found = set() 270 | self.visited = set() # set of absolute paths 271 | # absolute path -> { i: number, dependencies: list[absolute path]} 272 | self.nodes = {} 273 | # self.process_file(file_path) 274 | 275 | def resolve_include(self, base: str, file_path: str, search_paths: list): 276 | # search paths: first search relative, then via the paths 277 | relative = os.path.join( 278 | os.path.dirname(base), 279 | file_path 280 | ) 281 | if os.path.exists(relative): 282 | print(" Found:", relative) 283 | return os.path.abspath(relative) 284 | else: 285 | for search_path in search_paths: 286 | path = os.path.join( 287 | search_path, 288 | file_path 289 | ) 290 | if os.path.exists(path): 291 | print(" Found:", path) 292 | return os.path.abspath(path) 293 | 294 | def process_include(self, base: str, file_path: str, search_paths: list): 295 | resolved = self.resolve_include(base, file_path, search_paths) 296 | if resolved: 297 | print("Recursing into {}".format(file_path)) 298 | self.process_file(resolved, search_paths) 299 | return resolved 300 | else: 301 | self.not_found.add(file_path) 302 | return None 303 | 304 | def process_file(self, path: str, search_paths: list): 305 | if path in self.visited: 306 | return 307 | for exclude in self.excludes: 308 | if path.startswith(exclude): 309 | return 310 | self.visited.add(path) 311 | includes = parse_includes(path) 312 | # print(path) 313 | print(" Adding includes:", includes) 314 | dependencies = set() 315 | for include in includes: 316 | resolved = self.process_include(path, include, search_paths) 317 | if resolved is not None: 318 | dependencies.add(resolved) 319 | elif include in self.sentinels: 320 | if include not in self.nodes: 321 | self.nodes[include] = { 322 | "i": len(self.nodes), 323 | "dependencies": set() 324 | } 325 | dependencies.add(include) 326 | 327 | self.nodes[path] = { 328 | "i": len(self.nodes), 329 | "dependencies": dependencies 330 | } 331 | 332 | def build_matrix(self): 333 | N = len(self.nodes) 334 | self.matrix = [[0 for _ in range(N)] for _ in range(N)] 335 | for key in self.nodes: 336 | node = self.nodes[key] 337 | row = node["i"] 338 | for d in node["dependencies"]: 339 | if d in self.nodes: 340 | self.matrix[row][self.nodes[d]["i"]] = 1 341 | # deep copy 342 | self.matrix_closure = [[col for col in row] for row in self.matrix] 343 | G = self.matrix_closure 344 | # floyd-warshall 345 | for k in range(N): 346 | for i in range(N): 347 | for j in range(N): 348 | G[i][j] = G[i][j] or (G[i][k] and G[k][j]) 349 | 350 | def print_header(matrix, labels): 351 | print(" " * 50, end="") 352 | for i in range(len(matrix)): 353 | print(" {}".format(os.path.basename(labels[i])[0]), end="") 354 | print() 355 | 356 | def print_matrix(matrix, labels): 357 | color = os.isatty(1) 358 | for i, row in enumerate(matrix): 359 | print("{:>50} ".format(os.path.basename(labels[i])), end="") 360 | for j, n in enumerate(row): 361 | if i == j: 362 | print("{}{}{} ".format(colorama.Fore.BLUE if color else "", "#" if n else "~", colorama.Style.RESET_ALL if color else ""), end="") 363 | else: 364 | print("{} ".format("#" if n else "~"), end="") 365 | print() 366 | print() 367 | 368 | def count_incident_edges(matrix, labels, tu_only=False): 369 | counts = {} # label -> count 370 | for col in range(len(matrix)): 371 | for row in range(len(matrix)): 372 | # if the row is not a .c/.cpp file, it's a header so ignore it 373 | if tu_only and not (labels[row].endswith(".cpp") or labels[row].endswith(".c")): 374 | continue 375 | if matrix[row][col]: 376 | if labels[col] in counts: 377 | counts[labels[col]] += 1 378 | else: 379 | counts[labels[col]] = 1 380 | return counts 381 | 382 | def print_graphviz(analysis: Analysis, labels: list): 383 | print("digraph G {") 384 | #print("\tnodesep=0.3;") 385 | #print("\tranksep=0.2;") 386 | #print("\tnode [shape=circle, fixedsize=true];") 387 | #print("\tedge [arrowsize=0.8];") 388 | #print("\tlayout=fdp;") 389 | 390 | # counts = count_incident_edges(analysis.matrix, labels, True) 391 | counts = count_incident_edges(analysis.matrix_closure, labels, True) 392 | max_count = max(counts.values()) 393 | def get_count_color(label: str): 394 | if label in counts: 395 | return min(int(math.floor((counts[label] / max_count) * 9)) + 1, 9) 396 | else: 397 | return "white" 398 | print("\tsubgraph cluster_{} {{".format("direct")) 399 | print("\t\tnode [colorscheme=reds9] # Apply colorscheme to all nodes") 400 | print("\t\tlabel=\"{}\";".format("direct dependencies")) 401 | for i in range(len(labels)): 402 | print("\t\tn{} [label=\"{}\", fillcolor={}, style=\"filled,solid\"];".format(i, os.path.basename(labels[i]), get_count_color(labels[i]))) 403 | print("\t\t", end="") 404 | for i, row in enumerate(analysis.matrix): 405 | for j, v in enumerate(row): 406 | if v: 407 | print("n{}->n{};".format(i, j), end="") 408 | print() 409 | print("\t}") 410 | 411 | offset = len(labels) 412 | # counts = count_incident_edges(analysis.matrix_closure, labels, True) 413 | # max_count = max(counts.values()) 414 | # def get_count_color(label: str): 415 | # if label in counts: 416 | # return min(int(math.floor((counts[label] / max_count) * 10)), 9) 417 | # else: 418 | # return "white" 419 | print("\tsubgraph cluster_{} {{".format("indirect")) 420 | print("\t\tnode [colorscheme=reds9] # Apply colorscheme to all nodes") 421 | print("\t\tlabel=\"{}\";".format("dependency transitive closure")) 422 | for i in range(len(labels)): 423 | print("\t\tn{} [label=\"{}\", fillcolor={}, style=\"filled,solid\"];".format(i + offset, os.path.basename(labels[i]), get_count_color(labels[i]))) 424 | print("\t\t", end="") 425 | for i, row in enumerate(analysis.matrix_closure): 426 | for j, v in enumerate(row): 427 | if v: 428 | print("n{}->n{}[color={}];".format(i + offset, j + offset, "black" if analysis.matrix[i][j] else "orange"), end="") 429 | print() 430 | print("\t}") 431 | print("}") 432 | 433 | def parse_search_paths(command: str) -> list: 434 | paths = [x.group(1) for x in re.finditer(r"-I([^ ]+)", command)] 435 | # print("Search paths:", paths) 436 | return paths 437 | 438 | def file_path(string): 439 | if os.path.isfile(string): 440 | return string 441 | else: 442 | raise RuntimeError(f"Invalid file path {string}") 443 | 444 | def dir_path(string): 445 | if os.path.isdir(string): 446 | return string 447 | else: 448 | raise RuntimeError(f"Invalid directory {string}") 449 | 450 | def main(): 451 | parser = argparse.ArgumentParser( 452 | prog="cpp-dependency-analyzer", 453 | description="Analyze C++ transitive dependencies" 454 | ) 455 | parser.add_argument( 456 | "--compile-commands", 457 | type=file_path, 458 | required=True 459 | ) 460 | # parser.add_argument( 461 | # "--pwd", 462 | # type=dir_path, 463 | # ) 464 | parser.add_argument('--exclude', action='append', nargs=1) 465 | parser.add_argument('--sentinel', action='append', nargs=1) 466 | args = parser.parse_args() 467 | 468 | excludes = [] 469 | if args.exclude: 470 | # print(args.exclude) 471 | for exclude in args.exclude: 472 | abspath = os.path.abspath(exclude[0]) 473 | if os.path.isdir(abspath): 474 | excludes.append(abspath + os.path.sep) 475 | else: 476 | excludes.append(abspath) 477 | sentinels = [] 478 | if args.sentinel: 479 | sentinels = [s[0] for s in args.sentinel] 480 | print(excludes, sentinels) 481 | 482 | # if args.pwd: 483 | # os.chdir(args.pwd) 484 | 485 | with open(args.compile_commands, "r") as f: 486 | compile_commands = json.load(f) 487 | 488 | analysis = Analysis(excludes, sentinels) 489 | 490 | for entry in compile_commands: 491 | os.chdir(entry["directory"]) 492 | print("From compile commands:", entry["file"]) 493 | # entry["command"] ... 494 | analysis.process_file(os.path.abspath(entry["file"]), parse_search_paths(entry["command"])) 495 | 496 | # init_lexer() 497 | # p = Processor(root) 498 | 499 | # this is mainly for dev/debugging; make sure no components are missed in traversal 500 | #print("visited: ", p.visited) 501 | #print("all: ", p.all_files) 502 | # print("xor: ", p.all_files ^ p.visited) 503 | print("missed:", analysis.not_found) 504 | for key in analysis.nodes: 505 | print("{:20} {}".format(key, analysis.nodes[key])) 506 | print() 507 | 508 | analysis.build_matrix() 509 | 510 | labels = [k for k in analysis.nodes.keys()] 511 | print_graphviz(analysis, labels) 512 | print_header(analysis.matrix, labels) 513 | print_matrix(analysis.matrix, labels) 514 | print_header(analysis.matrix_closure, labels) 515 | print_matrix(analysis.matrix_closure, labels) 516 | print("translation units: {}".format(len(compile_commands))) 517 | print("direct density: {:.0f}%".format(100 * sum([sum(row) for row in analysis.matrix]) / len(analysis.matrix)**2)) 518 | print("indirect density: {:.0f}%".format(100 * sum([sum(row) for row in analysis.matrix_closure]) / len(analysis.matrix_closure)**2)) 519 | cycles = 0 520 | for i in range(len(analysis.matrix_closure)): 521 | if analysis.matrix_closure[i][i]: 522 | cycles += 1 523 | print("cyclic dependencies: {}".format("yes" if cycles > 0 else "no")) 524 | print() 525 | print() 526 | 527 | matrix_counts = count_incident_edges(analysis.matrix, labels) 528 | print("Dependency counts:") 529 | for name, count in sorted(matrix_counts.items(), key=lambda x: x[1], reverse=True): 530 | print(os.path.basename(name), count) 531 | 532 | print() 533 | print() 534 | matrix_closure_counts = count_incident_edges(analysis.matrix_closure, labels) 535 | print("Transitive dependency counts:") 536 | for name, count in sorted(matrix_closure_counts.items(), key=lambda x: x[1], reverse=True): 537 | print(os.path.basename(name), count) 538 | 539 | print() 540 | print() 541 | matrix_counts = count_incident_edges(analysis.matrix, labels, True) 542 | print("Dependency counts (TU-only):") 543 | for name, count in sorted(matrix_counts.items(), key=lambda x: x[1], reverse=True): 544 | print(os.path.basename(name), count) 545 | 546 | print() 547 | print() 548 | matrix_closure_counts = count_incident_edges(analysis.matrix_closure, labels, True) 549 | print("Transitive dependency counts (TU-only):") 550 | for name, count in sorted(matrix_closure_counts.items(), key=lambda x: x[1], reverse=True): 551 | print(os.path.basename(name), count) 552 | 553 | main() 554 | -------------------------------------------------------------------------------- /screenshots/direct_deps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeremy-rifkin/cpp-dependency-analyzer/9832545528dedc56bf8fcfeca6e706bf149d0fd7/screenshots/direct_deps.png -------------------------------------------------------------------------------- /screenshots/indirect_deps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeremy-rifkin/cpp-dependency-analyzer/9832545528dedc56bf8fcfeca6e706bf149d0fd7/screenshots/indirect_deps.png -------------------------------------------------------------------------------- /screenshots/matrix.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeremy-rifkin/cpp-dependency-analyzer/9832545528dedc56bf8fcfeca6e706bf149d0fd7/screenshots/matrix.png -------------------------------------------------------------------------------- /test/a.c: -------------------------------------------------------------------------------- 1 | #include "b.h" 2 | -------------------------------------------------------------------------------- /test/b.h: -------------------------------------------------------------------------------- 1 | #include "c.h" 2 | #include "d.h" 3 | -------------------------------------------------------------------------------- /test/blank.h: -------------------------------------------------------------------------------- 1 | blank -------------------------------------------------------------------------------- /test/c.h: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/jeremy-rifkin/cpp-dependency-analyzer/9832545528dedc56bf8fcfeca6e706bf149d0fd7/test/c.h -------------------------------------------------------------------------------- /test/d.h: -------------------------------------------------------------------------------- 1 | #include "b.h" 2 | -------------------------------------------------------------------------------- /test/test.c: -------------------------------------------------------------------------------- 1 | //a \ 2 | //asdf 3 | //#define test \ 4 | //test 5 | 6 | //#define A "blank.h" 7 | ////#include A B 8 | //#include 9 | //"blank.h" 10 | 11 | //#include 12 | test #include "something.h" 13 | #include "blank.h" 14 | --------------------------------------------------------------------------------