├── .gitignore ├── .gitmodules ├── README.md ├── aimd_proofs.py ├── cached └── .note ├── cca_aimd.py ├── cca_bbr.py ├── cca_copa.py ├── clean_output.py ├── compiler ├── Cargo.toml └── src │ ├── ast.rs │ ├── context.rs │ └── lib.rs ├── config.py ├── copa_plot.py ├── copa_proofs.py ├── docs └── index.md ├── example_queries.py ├── model.py ├── model_properties ├── .gitignore ├── inf_buffer_compatible.dfy ├── possible_dafny_bug.dfy └── proofs.v ├── old ├── analyze_aimd.py ├── analyze_copa.py ├── analyze_fixed_d.py ├── func_repr.py ├── multi_flow.py └── questions.py ├── plot.py ├── test_cca_aimd.py ├── test_clean_output.py ├── test_model.py ├── utils.py └── variables.py /.gitignore: -------------------------------------------------------------------------------- 1 | *~ 2 | *#* 3 | Cargo.lock 4 | compiler/target 5 | cached/*.cached -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "pyz3_utils"] 2 | path = pyz3_utils 3 | url = git@github.com:venkatarun95/pyz3_utils 4 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Formally Verifying Congestion Control Performance 2 | 3 | This code-base accompanies the SIGCOMM 2021 paper "Formally Verifying Congestion Control Performance" by Venkat Arun, Mina Arashloo, Ahmed Saeed, Mohammad Alizadeh and Hari Balakrishnan. 4 | 5 | ## Dependencies 6 | 7 | We use Z3 as the SMT solver. Install Z3 and its Python bindings from [https://github.com/Z3Prover/z3](https://github.com/Z3Prover/z3) 8 | 9 | We also need matplotlib, numpy and scipy. This project needs Python 3.8.5+. 10 | 11 | ## Running 12 | 13 | First ensure there is a `cached` folder. To make queries, you can modify the `__main__` part of `model.py`. 
Results can be plotted using `plot.py` by giving it the name of the cache file; if the result is unsat, it naturally won't plot anything, because there is no satisfying assignment to plot. The result of the computation can also be `unknown`. Since we have restricted ourselves to decidable logic, `unknown` only occurs if Z3 hasn't had enough time to compute. In our experience, if the number of timesteps is <= 20, it never takes more than 10 minutes to compute. 14 | 15 | The proofs about AIMD and Copa in the paper are in `aimd_proofs.py` and `copa_proofs.py`. To check the proofs, just run the respective Python files. The files contain multiple lemmas which, when stitched together, prove that the CCA will eventually enter the steady state and, once entered, remain there. If Z3 is able to prove a lemma, it will output `unsat`. If all lemmas are proved, the file will terminate successfully, which means the theorem in the paper is proven. Refer to the paper and `variables.py` for what the variables mean. The lemmas should be clear from the code and comments. Each lemma has the form p -> q, where p is a set of assumptions and q is the property we want to prove. Note that to prove p -> q, we ask the solver to show that its negation, i.e., p /\ ~q, is unsatisfiable. 16 | 17 | We have created three examples of behaviors that CCAC uncovers in `example_queries.py`, one for each of the three algorithms: AIMD, BBR and Copa. An example command is `python3 example_queries.py copa_low_util`. It will both plot a graph and print a table of values picked by the solver. 18 | 19 | ## Understanding the output 20 | 21 | When using `cache.py` to run Z3, the model (i.e., the variable assignments computed by CCAC) is saved in the `cached/` folder. It can be plotted by calling `python3 plot.py <cache file name>`. Plots can also be created from code. 22 | 23 | ### The plot 24 | In this section, we explain the plots and output. The x-axis is in timesteps, which is (1 / c.R) RTTs. Typically c.R = 1, so 1 timestep = 1 RTT. 
It will plot two graphs. On the top, it plots curves like A(t), S(t), and the bounds on S(t) (the black lines, C * t - W(t)). These are all cumulative curves, where the y-axis is in amount of data (bytes, megabytes). Since units are arbitrary, we set the link rate C = 1. On the bottom, it plots the congestion window (cwnd), pacing rate and queuing delay. Please be mindful that all three have separate scales on the y-axis. Queuing delay can be a range, as explained in Section 5 of the paper; the solver is free to assume the queuing delay can take any value within that range. Queuing delay is not plotted when the simplification procedure is used, since numerical errors affect range computation (see below). 25 | 26 | ### The printed output 27 | In addition to the plot, CCAC prints interesting values from the solution. This includes additional information, for instance the `alpha` value it picked. `alpha` is the size of the MSS, which the solver is usually allowed to pick. The solver's choice of `alpha` implicitly sets the link rate, since `C` / `alpha` is the link rate in MSS/timestep. Similarly, it prints `dupacks` and internal variables of various CCAs. You can look at the code for the individual CCAs and `plot.py` for details. The plotting function may print a value of -1 if the value was left unassigned by the solver. Note, Z3 internally represents numbers as fractions, which is why `plot.py` prints some variables in fraction form to avoid losing information. 28 | 29 | ### Simplification 30 | If you choose to use `clean_output.py` to simplify the solution, keep in mind that the procedure uses finite-precision floating point numbers. This is in contrast to Z3, which uses arbitrary-precision rational arithmetic. Hence you may see some numerical errors that make the solution inconsistent with the constraints. E.g., you may see small negative values for loss, even though the constraint loss >= 0 is present. 
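This kind of error is easy to demonstrate in isolation: accumulating values in binary floating point can leave a quantity that is exactly zero in rational arithmetic slightly negative instead. A standalone sketch of the effect (plain Python, not CCAC code; `fractions.Fraction` plays the role of Z3's exact rationals here):

```python
from fractions import Fraction

# Sum 0.1 ten times and subtract 1. In exact rational arithmetic this is
# exactly 0, but in binary floating point 0.1 is not representable, so
# the rounding errors accumulate into a tiny negative residue -- the same
# way a simplified solution can report a slightly negative "loss" even
# though the constraint loss >= 0 holds exactly.
loss_float = sum([0.1] * 10) - 1.0
loss_exact = sum([Fraction(1, 10)] * 10) - 1

print(loss_float)  # a tiny negative number, not 0
print(loss_exact)  # exactly 0
```

This is also why the cleanup pass at the end of `simplify_solution` in `clean_output.py` snaps near-zero differences in `tot_lost` and `loss_detected` back to the previous timestep's value.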
31 | 32 | The simplification procedure just tries to make the lines a little straighter. It does not always accomplish much. As a rule of thumb, the more extreme the query, the more understandable the output will be. For instance, under the network model where the path-server doesn't compose, Copa achieves >= 50% utilization with the query in `example_queries.py`. The output for thresholds near 50% is cleaner than for higher thresholds, since Z3 has less 'play' to make arbitrary decisions. 33 | 34 | 35 | ## Files 36 | 37 | The following files contain SMT constraints that express the logic in the paper: 38 | 39 | * `model.py`: contains the main network model 40 | * `cca_aimd.py`: Implementation of AIMD 41 | * `cca_bbr.py`: Implementation of BBR 42 | * `cca_copa.py`: Implementation of Copa 43 | * `aimd_proofs.py`: `prove_loss_bounds` proves AIMD's steady state 44 | * `copa_proofs.py`: `prove_loss_bounds` proves Copa's steady state 45 | * `test_model.py`: Property-based unit tests for `model.py` 46 | * `test_cca_aimd.py`: Property-based unit tests for `cca_aimd.py` 47 | 48 | Utility files: 49 | 50 | * `config.py` 51 | * `variables.py`: Has the `Variables` struct, which holds all global Z3 variables 52 | * `utils.py`: Definition of `ModelDict`, which contains Z3's output assignment to variables 53 | * `plot.py`: Plots the model. Can be used as a library and standalone from a cache file. See "Understanding the output" for details 54 | * `clean_output.py`: takes a Z3 result and uses local gradient descent to simplify it somewhat. Can usually be invoked using the `--simplify` flag or the `simplify` property in `ModelConfig`. Note, since this uses fixed-precision numbers, its output can be inconsistent with the constraints. For instance, you may see a small negative number for loss. 
Z3's non-simplified output (which is often simple enough) by contrast is always consistent since it uses arbitrary precision rational arithmetic 55 | * `cache.py`: runs and caches Z3 queries 56 | * `my_solver.py`: a thin wrapper over the Python z3 wrapper 57 | * `binary_search.py`: a utility. E.g. if we want to know the minimum utilization of Copa, we could use binary search. This also handles the result `unknown` in addition to `sat` and `unsat` that Z3 outputs. 58 | -------------------------------------------------------------------------------- /aimd_proofs.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from z3 import And, If, Implies, Or 3 | 4 | from config import ModelConfig 5 | from model import Variables, make_solver, min_send_quantum 6 | from pyz3_utils import run_query 7 | 8 | 9 | def prove_loss_bounds(timeout: float): 10 | '''Prove loss bounds for a particular buffer length. Need to sweep buffer 11 | sizes to get confidence that the bounds hold. 12 | 13 | ''' 14 | c = ModelConfig.default() 15 | # You can prove the theorem for other values of buffer as well (note, BDP = 16 | # 1). For smaller buf_min (and buf_max where buf_min=buf_max), pick smaller 17 | # bounds on alpha (see explanation below). For larger buf_min, increase T 18 | c.buf_min = 1 19 | c.buf_max = 1 20 | c.cca = "aimd" 21 | 22 | def max_cwnd(v: Variables): 23 | return c.C*(c.R + c.D) + c.buf_min + v.alpha 24 | 25 | def max_undet(v: Variables): 26 | ''' We'll prove that the number of undetected losses will be below this 27 | at equilibrium 28 | 29 | ''' 30 | return c.C*(c.R + c.D) + v.alpha 31 | 32 | # If cwnd > max_cwnd and undetected <= max_undet, cwnd will decrease 33 | c.T = 10 34 | s, v = make_solver(c) 35 | # Lemma's assumption 36 | s.add(v.c_f[0][0] > max_cwnd(v)) 37 | s.add(v.L_f[0][0] - v.Ld_f[0][0] <= max_undet(v)) 38 | # We need to assume alpha is small, since otherwise we get uninteresting 39 | # counter-examples. 
39 | # counter-examples. This assumption is added to the whole theorem. 40 | s.add(v.alpha < (1 / 4) * c.C * c.R) 41 | # Lemma's statement's converse 42 | s.add(v.c_f[0][-1] >= v.c_f[0][0] - v.alpha) 43 | print("Proving that if cwnd is too big and undetected is small enough, " 44 | "cwnd will decrease") 45 | qres = run_query(c, s, v, timeout) 46 | print(qres.satisfiable) 47 | assert(qres.satisfiable == "unsat") 48 | 49 | # If undetected > max_undet, either undetected will fall by at least C 50 | # bytes (and cwnd won't exceed max_cwnd) or it might not if initial cwnd > 51 | # max_cwnd. In the latter case, cwnd would decrease by the end 52 | 53 | # Note: this lemma by itself proves that undetected will eventually fall 54 | # below max_undet. Then, coupled with the above lemma, we have that AIMD 55 | # will always enter steady state 56 | s, v = make_solver(c) 57 | # Lemma's assumption 58 | min_send_quantum(c, s, v) 59 | s.add(v.L_f[0][0] - v.Ld_f[0][0] > max_undet(v)) 60 | s.add(Or( 61 | v.L_f[0][-1] - v.Ld_f[0][-1] > v.L_f[0][0] - v.Ld_f[0][0] - c.C, 62 | v.c_f[0][-1] > max_cwnd(v))) 63 | s.add(v.alpha < 1 / 5) 64 | # Lemma's statement's converse 65 | s.add(Or(v.c_f[0][0] <= max_cwnd(v), 66 | v.c_f[0][-1] >= v.c_f[0][0] - v.alpha)) 67 | print("Proving that undetected will decrease eventually") 68 | qres = run_query(c, s, v, timeout) 69 | print(qres.satisfiable) 70 | assert(qres.satisfiable == "unsat") 71 | 72 | # If we are in steady state, we'll remain there. 
In steady state: cwnd <= 73 | # max_cwnd, undetected <= max_undet 74 | c.T = 10 75 | s, v = make_solver(c) 76 | # Lemma's assumption 77 | s.add(v.L_f[0][0] - v.Ld_f[0][0] <= max_undet(v)) 78 | s.add(v.c_f[0][0] <= max_cwnd(v)) 79 | s.add(v.alpha < 1 / 3) 80 | # Lemma's statement's converse 81 | s.add(Or( 82 | v.L_f[0][-1] - v.Ld_f[0][-1] > max_undet(v), 83 | v.c_f[0][-1] > max_cwnd(v))) 84 | print("Proving that if AIMD enters steady state, it will remain there") 85 | qres = run_query(c, s, v, timeout) 86 | print(qres.satisfiable) 87 | assert(qres.satisfiable == "unsat") 88 | 89 | # Prove a theorem about when loss can happen using this steady state 90 | c.T = 10 91 | print("Proving threshold on when loss can happen") 92 | for beta in [0.5, 1.9, 3]: 93 | c.buf_min = beta 94 | s, v = make_solver(c) 95 | # Lemma's assumption 96 | s.add(v.L_f[0][0] - v.Ld_f[0][0] <= max_undet(v)) 97 | s.add(v.c_f[0][0] <= max_cwnd(v)) 98 | s.add(v.alpha < 1 / 3) 99 | 100 | if beta <= c.C * (c.R + c.D): 101 | cwnd_thresh = c.buf_min - v.alpha 102 | else: 103 | cwnd_thresh = c.C * (c.R - 1) + c.buf_min - v.alpha 104 | 105 | for t in range(1, c.T): 106 | s.add(And(v.L_f[0][t] > v.L_f[0][t-1], 107 | v.c_f[0][t-1] < cwnd_thresh)) 108 | qres = run_query(c, s, v, timeout) 109 | print(qres.satisfiable) 110 | assert(qres.satisfiable == "unsat") 111 | 112 | 113 | if __name__ == "__main__": 114 | prove_loss_bounds(600) 115 | -------------------------------------------------------------------------------- /cached/.note: -------------------------------------------------------------------------------- 1 | SMT results are cached in this folder -------------------------------------------------------------------------------- /cca_aimd.py: -------------------------------------------------------------------------------- 1 | from z3 import And, Implies, Not, Or 2 | 3 | from config import ModelConfig 4 | from pyz3_utils import MySolver 5 | from variables import Variables 6 | 7 | 8 | class AIMDVariables: 9 | 
def __init__(self, c: ModelConfig, s: MySolver): 10 | # Whether or not cwnd can increase at this point 11 | self.incr_f = [[s.Bool(f"aimd_incr_{n},{t}") for t in range(c.T)] 12 | for n in range(c.N)] 13 | 14 | 15 | def can_incr( 16 | c: ModelConfig, 17 | s: MySolver, 18 | v: Variables, 19 | cv: AIMDVariables): 20 | # Always increase 21 | if c.aimd_incr_irrespective: 22 | for n in range(c.N): 23 | for t in range(c.T): 24 | s.add(cv.incr_f[n][t]) 25 | return 26 | 27 | for n in range(c.N): 28 | for t in range(1, c.T): 29 | # Increase cwnd only if we have got enough acks 30 | incr = [] 31 | for dt in range(1, t): 32 | # Note, it is possible that v.c_f[n][t-dt] == v.c_f[n][t] 33 | # even though the cwnd changed in between. Hence this SMT 34 | # encoding is more relaxed than reality, which is per our 35 | # policy of capturing a super-set of behaviors. We could of 36 | # course add appropriate checks, but that is 37 | # unnecessary. This is simpler and possibly more efficient. 38 | incr.append(And( 39 | And([v.c_f[n][t-ddt] == v.c_f[n][t] 40 | for ddt in range(1, dt+1)]), 41 | v.c_f[n][t-dt-1] != v.c_f[n][t-dt], 42 | v.S_f[n][t] - v.S_f[n][t-dt] >= v.c_f[n][t])) 43 | incr.append(And( 44 | And([v.c_f[n][t-ddt] == v.c_f[n][t] 45 | for ddt in range(1, t+1)]), 46 | v.S_f[n][t] - v.S_f[n][0] >= v.c_f[n][t])) 47 | incr.append(And( 48 | v.S_f[n][t] - v.S_f[n][t-1] >= v.c_f[n][t])) 49 | s.add(cv.incr_f[n][t] == Or(*incr)) 50 | 51 | 52 | def cca_aimd(c: ModelConfig, s: MySolver, v: Variables) -> AIMDVariables: 53 | cv = AIMDVariables(c, s) 54 | can_incr(c, s, v, cv) 55 | 56 | # The last send sequence number at which loss was detected 57 | ll = [[s.Real(f"last_loss_{n},{t}") for t in range(c.T)] 58 | for n in range(c.N)] 59 | s.add(v.dupacks == 3 * v.alpha) 60 | for n in range(c.N): 61 | # TODO: make this non-deterministic? 
62 | s.add(ll[n][0] == v.S_f[n][0]) 63 | for t in range(c.T): 64 | if c.pacing: 65 | s.add(v.r_f[n][t] == v.c_f[n][t] / c.R) 66 | else: 67 | s.add(v.r_f[n][t] == c.C * 100) 68 | 69 | if t > 0: 70 | # We compare last_loss to outs[t-1-R] (and not outs[t-R]) 71 | # because otherwise it is possible to react to the same loss 72 | # twice 73 | if t > c.R+1: 74 | decrease = And( 75 | v.Ld_f[n][t] > v.Ld_f[n][t-1], 76 | ll[n][t-1] <= v.S_f[n][t-c.R-1] 77 | ) 78 | else: 79 | decrease = v.Ld_f[n][t] > v.Ld_f[n][t-1] 80 | 81 | s.add(Implies( 82 | And(decrease, Not(v.timeout_f[n][t])), 83 | And(ll[n][t] == v.A_f[n][t] - v.L_f[n][t] + v.dupacks, 84 | v.c_f[n][t] == v.c_f[n][t-1] / 2) 85 | )) 86 | s.add(Implies( 87 | And(Not(decrease), Not(v.timeout_f[n][t])), 88 | ll[n][t] == ll[n][t-1])) 89 | 90 | s.add(Implies( 91 | And(Not(decrease), Not(v.timeout_f[n][t]), 92 | cv.incr_f[n][t-1]), 93 | v.c_f[n][t] == v.c_f[n][t-1] + v.alpha)) 94 | s.add(Implies( 95 | And(Not(decrease), Not(v.timeout_f[n][t]), 96 | Not(cv.incr_f[n][t-1])), 97 | v.c_f[n][t] == v.c_f[n][t-1])) 98 | 99 | # Timeout 100 | s.add(Implies(v.timeout_f[n][t], 101 | And(v.c_f[n][t] == v.alpha, 102 | ll[n][t] == v.A_f[n][t] - v.L_f[n][t] 103 | + v.dupacks))) 104 | return cv 105 | -------------------------------------------------------------------------------- /cca_bbr.py: -------------------------------------------------------------------------------- 1 | ''' A simplified version of BBR ''' 2 | 3 | from z3 import And, If, Implies, Not 4 | 5 | from config import ModelConfig 6 | from pyz3_utils import MySolver 7 | from variables import Variables 8 | 9 | 10 | class BBRSimpleVariables: 11 | def __init__(self, c: ModelConfig, s: MySolver): 12 | # State increments every RTT 13 | self.start_state_f =\ 14 | [s.Int(f"bbr_start_state_{n}") for n in range(c.N)] 15 | 16 | 17 | def cca_bbr(c: ModelConfig, s: MySolver, v: Variables): 18 | # The period over which we compute rates 19 | P = c.R 20 | # Number of RTTs over which we 
compute the max_cwnd (=10 in the spec) 21 | max_R = 4 22 | # The number of RTTs in the BBR cycle (=8 in the spec) 23 | cycle = 4 24 | # The state the flow starts in at t=0 25 | start_state_f = [s.Int(f"bbr_start_state_{n}") for n in range(c.N)] 26 | 27 | for n in range(c.N): 28 | s.add(start_state_f[n] >= 0) 29 | s.add(start_state_f[n] < cycle) 30 | for t in range(c.R + P, c.T): 31 | # Compute the max rate over the last max_R RTTs 32 | max_rate = [s.Real(f"max_rate_{n},{t},{dt}") 33 | for dt in range(min(t-c.R-P+1, max_R))] 34 | s.add(max_rate[0] == (v.S_f[n][t-c.R] - v.S_f[n][t-c.R-P]) / P) 35 | for dt in range(1, len(max_rate)): 36 | rate = (v.S_f[n][t-dt-c.R] - v.S_f[n][t-dt-c.R-P]) / P 37 | s.add(max_rate[dt] == If(rate > max_rate[dt-1], 38 | rate, max_rate[dt-1])) 39 | 40 | # Convenience variable for plotting 41 | s.add(s.Real(f"max_rate_{n},{t}") == max_rate[-1]) 42 | 43 | s.add(v.c_f[n][t] == 2 * max_rate[-1] * P) 44 | s_0 = (start_state_f[n] == (0 - t / c.R) % cycle) 45 | s_1 = (start_state_f[n] == (1 - t / c.R) % cycle) 46 | s.add(Implies(s_0, 47 | v.r_f[n][t] == 1.25 * max_rate[-1])) 48 | s.add(Implies(s_1, 49 | v.r_f[n][t] == 0.8 * max_rate[-1])) 50 | s.add(Implies(And(Not(s_0), Not(s_1)), 51 | v.r_f[n][t] == 1 * max_rate[-1])) 52 | -------------------------------------------------------------------------------- /cca_copa.py: -------------------------------------------------------------------------------- 1 | from z3 import And, If, Implies, Not, Or 2 | 3 | from config import ModelConfig 4 | from pyz3_utils import MySolver 5 | from variables import Variables 6 | 7 | 8 | def cca_copa(c: ModelConfig, s: MySolver, v: Variables): 9 | for n in range(c.N): 10 | for t in range(c.T): 11 | # Basic constraints 12 | s.add(v.c_f[n][t] > 0) 13 | s.add(v.r_f[n][t] == v.c_f[n][t] / c.R) 14 | 15 | if t - c.R - c.D < 0: 16 | continue 17 | 18 | incr_alloweds, decr_alloweds = [], [] 19 | for dt in range(t+1): 20 | # Whether we are allowed to increase/decrease 21 | 
incr_allowed = s.Bool("incr_allowed_%d,%d,%d" % (n, t, dt)) 22 | decr_allowed = s.Bool("decr_allowed_%d,%d,%d" % (n, t, dt)) 23 | # Warning: Adversary here is too powerful if D > 1. Add 24 | # a constraint for every point between t-1 and t-1-D 25 | assert(c.D == 1) 26 | s.add(incr_allowed 27 | == And( 28 | v.qdel[t-c.R][dt], 29 | v.S[t-c.R] > v.S[t-c.R-1], 30 | v.c_f[n][t-1] * max(0, dt-1) 31 | <= v.alpha*(c.R+max(0, dt-1)))) 32 | s.add(decr_allowed 33 | == And( 34 | v.qdel[t-c.R-c.D][dt], 35 | v.S[t-c.R] > v.S[t-c.R-1], 36 | v.c_f[n][t-1] * dt >= v.alpha * (c.R + dt))) 37 | incr_alloweds.append(incr_allowed) 38 | decr_alloweds.append(decr_allowed) 39 | # If inp is high at the beginning, qdel can be arbitrarily 40 | # large 41 | decr_alloweds.append(v.S[t-c.R] < v.A[0] - v.L[0]) 42 | 43 | incr_allowed = Or(*incr_alloweds) 44 | decr_allowed = Or(*decr_alloweds) 45 | 46 | # Either increase or decrease cwnd 47 | incr = s.Bool("incr_%d,%d" % (n, t)) 48 | decr = s.Bool("decr_%d,%d" % (n, t)) 49 | s.add(Or( 50 | And(incr, Not(decr)), 51 | And(Not(incr), decr))) 52 | s.add(Implies(incr, incr_allowed)) 53 | s.add(Implies(decr, decr_allowed)) 54 | s.add(Implies(incr, v.c_f[n][t] == v.c_f[n][t-1]+v.alpha/c.R)) 55 | sub = v.c_f[n][t-1] - v.alpha / c.R 56 | s.add(Implies(decr, v.c_f[n][t] 57 | == If(sub < v.alpha, v.alpha, sub))) 58 | -------------------------------------------------------------------------------- /clean_output.py: -------------------------------------------------------------------------------- 1 | ''' Take SMT output and clean it up, trying to remove the clutter and leave 2 | behind only the essential details for why the counter-example works ''' 3 | 4 | from config import ModelConfig 5 | from copy import copy, deepcopy 6 | from fractions import Fraction 7 | from functools import reduce 8 | from pyz3_utils import ModelDict, extract_vars 9 | import numpy as np 10 | import operator 11 | from scipy.optimize import LinearConstraint, minimize 12 | from typing 
import Any, Dict, List, Set, Tuple, Union 13 | from z3 import And, ArithRef, AstVector, BoolRef, IntNumRef, Not,\ 14 | RatNumRef, substitute 15 | 16 | 17 | Expr = Union[BoolRef, ArithRef] 18 | 19 | 20 | def eval_smt(m: ModelDict, a: Expr) -> Union[Fraction, bool]: 21 | if type(a) is AstVector: 22 | a = And(a) 23 | 24 | decl = str(a.decl()) 25 | children = [eval_smt(m, x) for x in a.children()] 26 | 27 | if len(children) == 0: 28 | if type(a) is ArithRef: 29 | return m[str(a)] 30 | elif type(a) is RatNumRef: 31 | return a.as_fraction() 32 | elif decl == "Int": 33 | return a.as_long() 34 | elif str(a.decl()) == "True": 35 | return True 36 | elif str(a.decl()) == "False": 37 | return False 38 | elif type(a) is BoolRef: 39 | return m[str(a)] 40 | 41 | if decl == "Not": 42 | assert(len(a.children()) == 1) 43 | return not children[0] 44 | if decl == "And": 45 | return all(children) 46 | if decl == "Or": 47 | return any(children) 48 | if decl == "Implies": 49 | assert(len(a.children()) == 2) 50 | if children[0] is True and children[1] is False: 51 | return False 52 | else: 53 | return True 54 | if decl == "If": 55 | assert(len(a.children()) == 3) 56 | if children[0] is True: 57 | return children[1] 58 | else: 59 | return children[2] 60 | if decl == "+": 61 | return sum(children, start=Fraction(0)) 62 | if decl == "-": 63 | if len(a.children()) == 2: 64 | return children[0] - children[1] 65 | elif len(a.children()) == 1: 66 | return -children[0] 67 | else: 68 | assert(False) 69 | if decl == "*": 70 | return reduce(operator.mul, children, 1) 71 | if decl == "/": 72 | assert(len(children) == 2) 73 | return children[0] / children[1] 74 | if decl == "<": 75 | assert(len(a.children()) == 2) 76 | return children[0] < children[1] 77 | if decl == "<=": 78 | assert(len(a.children()) == 2) 79 | return children[0] <= children[1] 80 | if decl == ">": 81 | assert(len(a.children()) == 2) 82 | return children[0] > children[1] 83 | if decl == ">=": 84 | assert(len(a.children()) == 2) 85 | 
return children[0] >= children[1] 86 | if decl == "==": 87 | assert(len(a.children()) == 2) 88 | return children[0] == children[1] 89 | if decl == "Distinct": 90 | assert(len(a.children()) == 2) 91 | return children[0] != children[1] 92 | print(f"Unrecognized decl {decl} in {a}") 93 | exit(1) 94 | 95 | 96 | def substitute_if( 97 | m: ModelDict, 98 | a: BoolRef) -> Tuple[BoolRef, List[BoolRef]]: 99 | ''' Substitute any 'If(c, t, f)' expressions with 't' if 'c' is true under 100 | 'm' and with 'f' otherwise. Also returns a list of 'c's from all the 'If's, 101 | since they need to be asserted true as well ''' 102 | 103 | # The set of 'c's for 'If's 104 | conds = [] 105 | res = deepcopy(a) 106 | # BFS queue 107 | if type(res) == AstVector: 108 | res = And(res) 109 | queue = [res] 110 | while len(queue) > 0: 111 | cur = queue[0] 112 | queue = queue[1:] 113 | if str(cur.decl()) == "If": 114 | assert(len(cur.children()) == 3) 115 | c, t, f = cur.children() 116 | if eval_smt(m, c): 117 | res = substitute(res, (cur, t)) 118 | queue.append(t) 119 | c, new_conds = substitute_if(m, c) 120 | conds.append(c) 121 | conds.extend(new_conds) 122 | else: 123 | res = substitute(res, (cur, f)) 124 | queue.append(f) 125 | c, new_conds = substitute_if(m, Not(c)) 126 | conds.append(c) 127 | conds.extend(new_conds) 128 | else: 129 | queue.extend(cur.children()) 130 | return (res, conds) 131 | 132 | 133 | def anded_constraints(m: ModelDict, a: Expr, truth=True, top_level=True)\ 134 | -> List[Expr]: 135 | ''' We'll find a subset of linear inequalities that are satisfied in the 136 | solution. To simplify computation, we'll only search for "nice" solutions 137 | within this set. 'a' is an assertion. 
'top_level' and 'truth' are internal 138 | variables and indicate what we expect the truth value of the sub-expression 139 | to be and whether we are in the top level of recursion respectively ''' 140 | 141 | # No point searching for solutions if we are not given a satisfying 142 | # assignment to begin with 143 | if eval_smt(m, a) != truth: 144 | print(a, truth) 145 | assert(eval_smt(m, a) == truth) 146 | 147 | if type(a) is AstVector: 148 | a = And(a) 149 | 150 | decl = str(a.decl()) 151 | if decl in ["<", "<=", ">", ">=", "==", "Distinct"]: 152 | assert(len(a.children()) == 2) 153 | x, y = a.children() 154 | 155 | if not truth: 156 | decl = { 157 | "<": ">=", 158 | "<=": ">", 159 | ">": "<=", 160 | ">=": "<", 161 | "==": "Distinct", 162 | "Distinct": "==" 163 | }[decl] 164 | if decl in ["==", "Distinct"]: 165 | if type(x) is BoolRef or type(y) is BoolRef: 166 | assert(type(x) is BoolRef and type(y) is BoolRef) 167 | # It should evaluate to what it evaluated in the original 168 | # assignment 169 | return (anded_constraints(m, x, eval_smt(m, x), False) 170 | + anded_constraints(m, y, eval_smt(m, y), False)) 171 | 172 | if decl == "Distinct": 173 | # Convert != to either < or > 174 | if eval_smt(m, x) < eval_smt(m, y): 175 | return [x < y] 176 | else: 177 | return [y < x] 178 | 179 | if not truth: 180 | if decl == "<": 181 | return [x < y] 182 | if decl == "<=": 183 | return [x <= y] 184 | if decl == ">": 185 | return [x > y] 186 | if decl == ">=": 187 | return [x >= y] 188 | return [a] 189 | # if decl == "If": 190 | # assert(len(a.children()) == 3) 191 | # c, t, f = a.children() 192 | # if eval_smt(m, c): 193 | # return children[0] + children[1] 194 | # else: 195 | # return children[0] + children[2] 196 | 197 | if decl == "Not": 198 | assert(len(a.children()) == 1) 199 | return anded_constraints(m, a.children()[0], (not truth), False) 200 | if decl == "And": 201 | if truth: 202 | return sum([anded_constraints(m, x, True, False) 203 | for x in a.children()], 204 | 
start=[]) 205 | else: 206 | for x in a.children(): 207 | if not eval_smt(m, x): 208 | # Return just the first one (arbitrary choice). Returning 209 | # more causes us to be unnecessarily restrictive 210 | return anded_constraints(m, x, False, False) 211 | if decl == "Or": 212 | if truth: 213 | for x in a.children(): 214 | if eval_smt(m, x): 215 | # Return just the first one (arbitrary choice). Returning 216 | # more causes us to be unnecessarily restrictive 217 | return anded_constraints(m, x, True, False) 218 | else: 219 | return sum([anded_constraints(m, x, False, False) 220 | for x in a.children()], 221 | start=[]) 222 | 223 | if decl == "Implies": 224 | assert(len(a.children()) == 2) 225 | assert(type(eval_smt(m, a.children()[0])) is bool) 226 | if truth: 227 | if eval_smt(m, a.children()[0]): 228 | return anded_constraints(m, a.children()[1], True, False) 229 | else: 230 | return anded_constraints(m, a.children()[0], False, False) 231 | else: 232 | return (anded_constraints(m, a.children()[0], True, False) 233 | + anded_constraints(m, a.children()[1], False, False)) 234 | if type(a) is BoolRef: 235 | # Must be a boolean variable. 
We needn't do anything here 236 | return [] 237 | print(f"Unrecognized decl {decl} in {a}") 238 | assert(False) 239 | 240 | 241 | class LinearVars: 242 | def __init__(self, vars: Dict[str, float] = {}, constant: float = 0): 243 | self.vars = vars 244 | self.constant = constant 245 | 246 | def __add__(self, other): 247 | vars, constant = copy(self.vars), copy(self.constant) 248 | for k in other.vars: 249 | if k in vars: 250 | vars[k] += other.vars[k] 251 | else: 252 | vars[k] = other.vars[k] 253 | constant += other.constant 254 | return LinearVars(vars, constant) 255 | 256 | def __mul__(self, factor: float): 257 | vars, constant = copy(self.vars), copy(self.constant) 258 | for k in vars: 259 | vars[k] *= factor 260 | constant *= factor 261 | return LinearVars(vars, constant) 262 | 263 | def __str__(self): 264 | return ' + '.join([f"{self.vars[k]} * {k}" for k in self.vars])\ 265 | + f" + {self.constant}" 266 | 267 | def __eq__(self, other) -> bool: 268 | return self.vars == other.vars and self.constant == other.constant 269 | 270 | 271 | def get_linear_vars(expr: Union[ArithRef, RatNumRef])\ 272 | -> LinearVars: 273 | ''' Given a linear arithmetic expression, return its equivalent that takes 274 | the form res[1] + sum_i res[0][i][0] * res[0][i][1]''' 275 | 276 | decl = str(expr.decl()) 277 | if decl == "+": 278 | return sum([get_linear_vars(x) for x in expr.children()], 279 | start=LinearVars()) 280 | if decl == "-": 281 | assert(len(expr.children()) in [1, 2]) 282 | if len(expr.children()) == 2: 283 | a, b = map(get_linear_vars, expr.children()) 284 | return a + (b * (-1.0)) 285 | else: 286 | return get_linear_vars(expr.children()[0]) * -1.0 287 | if decl == "*": 288 | assert(len(expr.children()) == 2) 289 | a, b = expr.children() 290 | if type(a) == ArithRef and type(b) == RatNumRef: 291 | return get_linear_vars(a) * float(b.as_decimal(100)) 292 | if type(a) == RatNumRef and type(b) == ArithRef: 293 | return get_linear_vars(b) * float(a.as_decimal(100)) 294 | 
print(f"Only linear terms allowed. Found {str(expr)}") 295 | exit(1) 296 | if decl == "/": 297 | assert(len(expr.children()) == 2) 298 | a, b = expr.children() 299 | assert(type(b) == RatNumRef) 300 | return get_linear_vars(a) * (1. / float(b.as_decimal(100))) 301 | if type(expr) is ArithRef: 302 | # It is a single variable, since we have eliminated other cases 303 | return LinearVars({decl: 1}) 304 | if type(expr) is RatNumRef: 305 | return LinearVars({}, float(expr.as_decimal(100))) 306 | if type(expr) is IntNumRef: 307 | return LinearVars({}, expr.as_long()) 308 | print(f"Unrecognized expression {expr} {type(expr)}") 309 | exit(1) 310 | 311 | 312 | def solver_constraints(constraints: List[Any])\ 313 | -> Tuple[List[LinearConstraint], Dict[str, int]]: 314 | ''' Given a list of SMT constraints (e.g. those output by 315 | `anded_constraints`), return the corresponding LinearConstraint object and 316 | the names of the variables in the order used in LinearConstraint ''' 317 | 318 | tol = 1e-9 319 | 320 | # First get all the variables 321 | varss: Set[str] = set().union(*[set(extract_vars(e)) for e in constraints]) 322 | varsl: List[str] = list(varss) 323 | vars: Dict[str, int] = {k: i for (i, k) in enumerate(varsl)} 324 | 325 | # The number of equality and inequality constraints 326 | n_eq = sum([int(str(cons.decl()) == "==") for cons in constraints]) 327 | n_ineq = len(constraints) - n_eq 328 | 329 | A_eq = np.zeros((n_eq, len(vars))) 330 | lb_eq = np.zeros(n_eq) 331 | ub_eq = np.zeros(n_eq) 332 | 333 | A_ineq = np.zeros((n_ineq, len(vars))) 334 | lb_ineq = np.zeros(n_ineq) 335 | ub_ineq = np.zeros(n_ineq) 336 | 337 | i_eq, i_ineq = 0, 0 338 | for cons in constraints: 339 | assert(len(cons.children()) == 2) 340 | a = get_linear_vars(cons.children()[0]) 341 | b = get_linear_vars(cons.children()[1]) 342 | 343 | # Construct the linear part h 344 | if str(cons.decl()) in [">=", ">", "=="]: 345 | lin = b + (a * -1.0) 346 | elif str(cons.decl()) in ["<=", "<"]: 347 | lin 
= a + (b * -1.0) 348 | else: 349 | print(str(cons.decl())) 350 | assert(False) 351 | 352 | if str(cons.decl()) in ["<", ">"]: 353 | lin.constant += 1e-6 354 | 355 | # Put it into the matrix 356 | for k in lin.vars: 357 | j = vars[k] 358 | if str(cons.decl()) == "==": 359 | A_eq[i_eq, j] = lin.vars[k] 360 | else: 361 | A_ineq[i_ineq, j] = lin.vars[k] 362 | 363 | # Make the bounds 364 | if str(cons.decl()) == "==": 365 | lb_eq[i_eq] = -lin.constant - tol 366 | ub_eq[i_eq] = -lin.constant + tol 367 | i_eq += 1 368 | else: 369 | lb_ineq[i_ineq] = -float("inf") 370 | ub_ineq[i_ineq] = -lin.constant + tol 371 | i_ineq += 1 372 | assert(i_eq == n_eq) 373 | assert(i_ineq == n_ineq) 374 | 375 | return ([LinearConstraint(A_eq, lb_eq, ub_eq, keep_feasible=False), 376 | LinearConstraint(A_ineq, lb_ineq, ub_ineq, keep_feasible=False)], 377 | vars) 378 | 379 | 380 | def simplify_solution(c: ModelConfig, 381 | m: ModelDict, 382 | assertions: BoolRef) -> ModelDict: 383 | new_assertions, conds = substitute_if(m, assertions) 384 | anded = anded_constraints(m, And(new_assertions, And(conds))) 385 | constraints, vars = solver_constraints(anded) 386 | init_values = np.asarray([m[v] for v in vars]) 387 | 388 | def constraint_fit(soln: np.ndarray, cons: List[LinearConstraint]) \ 389 | -> float: 390 | ugap = np.concatenate(( 391 | np.dot(cons[0].A, soln) - cons[0].ub, 392 | np.dot(cons[1].A, soln) - cons[1].ub)) 393 | lgap = np.concatenate(( 394 | cons[0].lb - np.dot(cons[0].A, soln), 395 | cons[1].lb - np.dot(cons[1].A, soln))) 396 | for i in range(ugap.shape[0]): 397 | if ugap[i] > 1e-5 or lgap[i] > 1e-5: 398 | print("Found an unsatisfied constraint") 399 | print(anded[i]) 400 | v = extract_vars(anded[i]) 401 | print([(x, float(m[x])) for x in v]) 402 | constraint_fit(init_values, constraints) 403 | 404 | def score1(values: np.ndarray) -> float: 405 | res = 0 406 | for t in range(1, c.T): 407 | res += (values[vars[f"tot_inp_{t}"]] 408 | - values[vars[f"tot_inp_{t-1}"]]) ** 2 / c.T 409 | 
res += (values[vars[f"tot_out_{t}"]] 410 | - values[vars[f"tot_out_{t-1}"]]) ** 2 / c.T 411 | res += (values[vars[f"wasted_{t}"]] 412 | - values[vars[f"wasted_{t-1}"]]) ** 2 / c.T 413 | for n in range(c.N): 414 | res += (values[vars[f"cwnd_{n},{t}"]] 415 | - values[vars[f"cwnd_{n},{t-1}"]]) ** 2 / (c.T * c.N) 416 | return res 417 | 418 | # Score for the new implementation 419 | def score2(values: np.ndarray) -> float: 420 | res = 0 421 | for t in range(1, c.T): 422 | res += (values[vars[f"tot_arrival_{t}"]] 423 | - values[vars[f"tot_arrival_{t-1}"]]) ** 2 / c.T 424 | res += (values[vars[f"tot_service_{t}"]] 425 | - values[vars[f"tot_service_{t-1}"]]) ** 2 / c.T 426 | res += (values[vars[f"wasted_{t}"]] 427 | - values[vars[f"wasted_{t-1}"]]) ** 2 / c.T 428 | for n in range(c.N): 429 | res += (values[vars[f"cwnd_{n},{t}"]] 430 | - values[vars[f"cwnd_{n},{t-1}"]]) ** 2 / (c.T * c.N) 431 | return res 432 | 433 | # Methods that work are "SLSQP" and "trust-constr" 434 | soln = minimize(score2, init_values, constraints=constraints, 435 | method="SLSQP") 436 | constraint_fit(soln.x, constraints) 437 | 438 | res = copy(m) 439 | for var in vars: 440 | res[var] = soln.x[vars[var]] 441 | 442 | print(f"Successful? {soln.success} Message: {soln.message}") 443 | print(f"The solution found is feasible: {eval_smt(res, assertions)}") 444 | 445 | # Some cleaning up to account for numerical errors. For loss, small errors 446 | # make a big semantic difference. 
So get rid of those 447 | tol = 1e-9 448 | for t in range(1, c.T): 449 | if res[f"tot_lost_{t}"] - res[f"tot_lost_{t-1}"] <= 4 * tol: 450 | res[f"tot_lost_{t}"] = res[f"tot_lost_{t-1}"] 451 | for n in range(c.N): 452 | if res[f"loss_detected_{n},{t}"] - res[f"loss_detected_{n},{t-1}"]\ 453 | <= 4 * tol: 454 | res[f"loss_detected_{n},{t}"] = res[f"loss_detected_{n},{t-1}"] 455 | 456 | return res 457 | -------------------------------------------------------------------------------- /compiler/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "ccac_compiler" 3 | version = "0.1.0" 4 | authors = ["Venkat Arun "] 5 | edition = "2018" 6 | 7 | [dependencies] 8 | num-rational = "0.4" -------------------------------------------------------------------------------- /compiler/src/ast.rs: -------------------------------------------------------------------------------- 1 | use crate::context::{Context, Symbol}; 2 | use num_rational::Ratio; 3 | 4 | #[derive(Clone, Copy, Debug, Eq, PartialEq)] 5 | pub enum DataType { 6 | Bool, 7 | Int, 8 | Real, 9 | Void, 10 | } 11 | 12 | #[derive(Clone, Copy, Debug, Eq, PartialEq)] 13 | pub enum Constant { 14 | Bool(bool), 15 | Int(i64), 16 | /// Assume all real constants are rational numbers representable 17 | /// in i64. This certainly makes sense for the linear real 18 | /// arithmetic theory 19 | Real(Ratio<i64>), 20 | } 21 | 22 | impl Constant { 23 | fn data_type(&self) -> DataType { 24 | match self { 25 | Self::Bool(_) => DataType::Bool, 26 | Self::Int(_) => DataType::Int, 27 | Self::Real { ..
} => DataType::Real, 28 | } 29 | } 30 | } 31 | 32 | #[derive(Clone, Copy, Debug, Eq, PartialEq)] 33 | pub enum Cmp { 34 | Lt, 35 | Le, 36 | Eq, 37 | } 38 | 39 | #[derive(Clone, Debug)] 40 | pub enum BoolOp { 41 | And(Vec<Node>), 42 | Or(Vec<Node>), 43 | Not(Box<Node>), 44 | } 45 | 46 | #[derive(Clone, Debug)] 47 | pub enum Node { 48 | Constant { 49 | val: Constant, 50 | }, 51 | /// Some variables are indexed. We store those indices in the AST 52 | Variable { 53 | name: String, 54 | indices: Vec<Box<Node>>, 55 | }, 56 | Cmp { 57 | cmp: Cmp, 58 | left: Box<Node>, 59 | right: Box<Node>, 60 | }, 61 | BoolOp { 62 | op: BoolOp, 63 | }, 64 | Add { 65 | terms: Vec<Node>, 66 | }, 67 | Mul { 68 | terms: Vec<Node>, 69 | }, 70 | /// Represents a range of values. Useful to linearize by 71 | /// discretizing 72 | Range { 73 | lower: Box<Node>, 74 | upper: Box<Node>, 75 | }, 76 | Stmts { 77 | stmts: Vec<Node>, 78 | }, 79 | /// Loop over lo, lo+step, lo+2*step as long as the indices are < 80 | /// hi 81 | ForLoop { 82 | varname: String, 83 | lo: Box<Node>, 84 | hi: Box<Node>, 85 | step: Box<Node>, 86 | body: Box<Node>, 87 | }, 88 | /// Pushes a new context for its children 89 | Context { 90 | child: Box<Node>, 91 | }, 92 | /// Declare a new variable 93 | VarDecl { 94 | varname: String, 95 | sym: Symbol, 96 | }, 97 | } 98 | 99 | impl Node { 100 | pub fn data_type(&self, ctx: &mut Context) -> DataType { 101 | match self { 102 | Self::Constant { val } => val.data_type(), 103 | Self::Variable { name, .. } => { 104 | if let Some(s) = ctx.get(name) { 105 | s.data_type 106 | } else { 107 | panic!("Undeclared variable '{}'", name); 108 | } 109 | } 110 | Self::Cmp { .. } => DataType::Bool, 111 | Self::BoolOp { .. } => DataType::Bool, 112 | Self::Add { terms, .. } => { 113 | assert!(terms.len() > 0); 114 | let res = terms[0].data_type(ctx); 115 | for x in &terms[1..] { 116 | if x.data_type(ctx) != res { 117 | panic!("Adding incompatible datatypes"); 118 | } 119 | } 120 | res 121 | } 122 | Self::Mul { terms, ..
} => { 123 | assert!(terms.len() > 0); 124 | let res = terms[0].data_type(ctx); 125 | for x in &terms[1..] { 126 | if x.data_type(ctx) != res { 127 | panic!("Multiplying incompatible datatypes"); 128 | } 129 | } 130 | res 131 | } 132 | Self::Range { lower, upper } => { 133 | assert!(lower.data_type(ctx) == upper.data_type(ctx)); 134 | lower.data_type(ctx) 135 | } 136 | Self::Stmts { stmts } => { 137 | // Perform checks 138 | for stmt in stmts { 139 | let dt = stmt.data_type(ctx); 140 | if dt != DataType::Bool && dt != DataType::Void { 141 | panic!( 142 | "Warning: Statement returns type {:?}, which does nothing.", 143 | dt 144 | ); 145 | } 146 | } 147 | DataType::Void 148 | } 149 | Self::ForLoop { lo, hi, step, .. } => { 150 | let dt = lo.data_type(ctx); 151 | // Perform some checks 152 | if dt != hi.data_type(ctx) || dt != step.data_type(ctx) { 153 | panic!("All parts of the for loop must have the same datatype. We have lo={:?}, hi={:?}, step={:?}", dt, hi.data_type(ctx), step.data_type(ctx)); 154 | } 155 | if dt != DataType::Int && dt != DataType::Real { 156 | panic!("For loop indices cannot have type {:?}", lo.data_type(ctx)); 157 | } 158 | DataType::Void 159 | } 160 | Self::Context { child, .. } => { 161 | ctx.push(); 162 | let res = child.data_type(ctx); 163 | ctx.pop(); 164 | res 165 | } 166 | Self::VarDecl { .. } => DataType::Void, 167 | } 168 | } 169 | 170 | pub fn is_constant(&self) -> bool { 171 | if let Self::Constant { .. } = self { 172 | true 173 | } else { 174 | false 175 | } 176 | } 177 | 178 | pub fn get_constant(&self) -> Option<Constant> { 179 | if let Self::Constant { val } = self { 180 | Some(*val) 181 | } else { 182 | None 183 | } 184 | } 185 | 186 | pub fn is_linear(&self) -> bool { 187 | match self { 188 | Self::Constant { .. } => true, 189 | Self::Variable { .. } => true, 190 | Self::Add { terms, .. } => terms.iter().all(|x| x.is_linear()), 191 | Self::Mul { terms, .. } => { 192 | terms 193 | .iter() 194 | .map(|x| if let Self::Constant { ..
} = x { 0 } else { 1 }) 195 | .sum::<i64>() 196 | <= 1 197 | } 198 | Self::Range { lower, upper } => lower.is_linear() && upper.is_linear(), 199 | _ => panic!("Called `is_linear` on {:?}", self), 200 | } 201 | } 202 | 203 | /// Recursively fold all constants and force to zero in 204 | /// multiplications 205 | pub fn fold_const(&mut self, ctx: &mut Context) -> bool { 206 | fn to_rational(v: &Constant) -> Ratio<i64> { 207 | match v { 208 | Constant::Bool(_) => panic!("Unexpected bool constant"), 209 | Constant::Int(v) => Ratio::new(*v, 1), 210 | Constant::Real(v) => *v, 211 | } 212 | } 213 | fn to_bool(x: &Constant) -> bool { 214 | if let Constant::Bool(x) = x { 215 | *x 216 | } else { 217 | panic!("Expected bool in boolean operation, found {:?}", x) 218 | } 219 | } 220 | 221 | // Delete constant values and return them in a separate vector. Also returns whether something changed 222 | fn fold_vec(v: &mut Vec<Node>, ctx: &mut Context) -> (Vec<Constant>, bool) { 223 | let mut consts = Vec::new(); 224 | let mut changed = false; 225 | for t in v.iter_mut() { 226 | changed |= t.fold_const(ctx); 227 | if let Node::Constant { val, .. } = t { 228 | consts.push(*val); 229 | } 230 | } 231 | if consts.len() > 1 { 232 | // Remove all constant terms and add only the one constant 233 | changed = true; 234 | v.retain(|x| { 235 | if let Node::Constant { .. } = *x { 236 | false 237 | } else { 238 | true 239 | } 240 | }); 241 | let data_type = v[0].data_type(ctx); 242 | for x in v[1..].iter() { 243 | if x.data_type(ctx) != data_type { 244 | panic!( 245 | "All datatypes must be the same. Expected {:?} found {:?}", 246 | data_type, 247 | x.data_type(ctx) 248 | ); 249 | } 250 | } 251 | } 252 | (consts, changed) 253 | } 254 | 255 | match self { 256 | Self::Constant { ..
} => false, 257 | Self::Variable { name, indices } => { 258 | let mut change = false; 259 | for i in indices { 260 | change |= i.fold_const(ctx); 261 | } 262 | change |= if let Some(sym) = ctx.get(name) { 263 | if let Some(val) = sym.val { 264 | *self = Self::Constant { val }; 265 | true 266 | } else { 267 | false 268 | } 269 | } else { 270 | panic!("Undeclared variable '{}'", name) 271 | }; 272 | change 273 | } 274 | Self::Cmp { cmp, left, right } => { 275 | let changed = left.fold_const(ctx) | right.fold_const(ctx); 276 | if let Self::Constant { val: left } = **left { 277 | if let Self::Constant { val: right } = **right { 278 | let left = to_rational(&left); 279 | let right = to_rational(&right); 280 | let res = match cmp { 281 | Cmp::Lt => left < right, 282 | Cmp::Le => left <= right, 283 | Cmp::Eq => left == right, 284 | }; 285 | *self = Self::Constant { 286 | val: Constant::Bool(res), 287 | }; 288 | return true; 289 | } 290 | } 291 | changed 292 | } 293 | Self::BoolOp { op } => { 294 | match op { 295 | BoolOp::And(v) => { 296 | let (consts, changed) = fold_vec(v, ctx); 297 | if !consts.iter().all(to_bool) { 298 | // The entire thing is false 299 | *self = Node::Constant { 300 | val: Constant::Bool(false), 301 | }; 302 | } 303 | changed 304 | } 305 | BoolOp::Or(v) => { 306 | let (consts, changed) = fold_vec(v, ctx); 307 | if consts.iter().any(to_bool) { 308 | // The entire thing is true 309 | *self = Node::Constant { 310 | val: Constant::Bool(true), 311 | }; 312 | } 313 | changed 314 | } 315 | BoolOp::Not(x) => { 316 | if let Node::Constant { val } = **x { 317 | if let Constant::Bool(val) = val { 318 | *self = Node::Constant { 319 | val: Constant::Bool(!val), 320 | }; 321 | } else { 322 | panic!( 323 | "Cannot compute NOT of value of non-bool type {:?}", 324 | val.data_type() 325 | ); 326 | } 327 | true 328 | } else { 329 | false 330 | } 331 | } 332 | } 333 | } 334 | Self::Add { terms } => { 335 | let (consts, changed) = fold_vec(terms, ctx); 336 | let c_term:
Ratio<i64> = consts.iter().map(to_rational).sum(); 337 | if consts.len() > 0 { 338 | let val = match consts[0].data_type() { 339 | DataType::Real => Constant::Real(c_term), 340 | DataType::Int => { 341 | assert_eq!(*c_term.denom(), 1); 342 | Constant::Int(*c_term.numer()) 343 | } 344 | _ => unreachable!(), 345 | }; 346 | terms.push(Node::Constant { val }); 347 | } 348 | 349 | changed 350 | } 351 | Self::Mul { terms } => { 352 | let (consts, changed) = fold_vec(terms, ctx); 353 | let c_term: Ratio<i64> = consts.iter().map(to_rational).product(); 354 | if consts.len() > 0 { 355 | let val = match consts[0].data_type() { 356 | DataType::Real => Constant::Real(c_term), 357 | DataType::Int => { 358 | assert_eq!(*c_term.denom(), 1); 359 | Constant::Int(*c_term.numer()) 360 | } 361 | _ => unreachable!(), 362 | }; 363 | // If multiplying by zero, the entire thing will be 0 364 | if *c_term.numer() == 0 { 365 | *self = Node::Constant { val }; 366 | return true; 367 | } 368 | terms.push(Node::Constant { val }); 369 | } 370 | 371 | changed 372 | } 373 | Self::Range { lower, upper, .. } => lower.fold_const(ctx) | upper.fold_const(ctx), 374 | Self::Stmts { stmts, .. } => { 375 | let mut change = false; 376 | for stmt in stmts { 377 | change |= stmt.fold_const(ctx); 378 | } 379 | change 380 | } 381 | Self::ForLoop { 382 | varname, 383 | lo, 384 | hi, 385 | step, 386 | body, 387 | ..
388 | } => { 389 | let change = lo.fold_const(ctx) || hi.fold_const(ctx) || step.fold_const(ctx); 390 | if lo.is_constant() && hi.is_constant() && step.is_constant() { 391 | // Ensure the loop variable is declared and is not an array 392 | let loop_var = if let Some(var) = ctx.get(varname) { 393 | if var.data_type == DataType::Bool { 394 | panic!("Cannot loop over bools"); 395 | } 396 | if var.indices.len() > 0 { 397 | panic!("Loop variables cannot be array elements"); 398 | } 399 | var 400 | } else { 401 | panic!("Trying to use undeclared variable {} in loop", varname); 402 | }; 403 | // Unroll the loop 404 | let lo = lo.get_constant().unwrap(); 405 | let hi = hi.get_constant().unwrap(); 406 | let step = step.get_constant().unwrap(); 407 | 408 | // Generate the list of symbols generated by the for loop 409 | let mut cur = to_rational(&lo); 410 | let mut symbols = Vec::new(); 411 | while cur < to_rational(&hi) { 412 | let sym = match loop_var.data_type { 413 | DataType::Bool => panic!("For loop cannot operate over bools"), 414 | DataType::Void => panic!("For loop cannot operate over voids"), 415 | DataType::Int => { 416 | if *cur.denom() != 1 { 417 | panic!("Tried to use non-integer value '{}' for an integer loop variable '{}'", cur, varname); 418 | } 419 | Symbol { 420 | data_type: DataType::Int, 421 | val: Some(Constant::Int(*cur.numer())), 422 | indices: vec![], 423 | } 424 | } 425 | DataType::Real => Symbol { 426 | data_type: DataType::Real, 427 | val: Some(Constant::Real(cur)), 428 | indices: vec![], 429 | }, 430 | }; 431 | symbols.push(sym); 432 | cur += to_rational(&step); 433 | } 434 | 435 | // Now unroll the loop for each of the generated symbols 436 | // The new node we'll replace `self` by 437 | let mut new_stmts = Vec::new(); 438 | for sym in symbols { 439 | new_stmts.push(Self::Context { 440 | child: Box::new(Self::Stmts { 441 | stmts: vec![ 442 | Node::VarDecl { 443 | varname: varname.clone(), 444 | sym, 445 | }, 446 | *body.clone(), 447 | ], 448 | 
}), 449 | }); 450 | } 451 | *self = Self::Stmts { stmts: new_stmts }; 452 | return true; 453 | } 454 | change 455 | } 456 | Self::Context { child, .. } => { 457 | ctx.push(); 458 | let res = child.fold_const(ctx); 459 | ctx.pop(); 460 | res 461 | } 462 | Self::VarDecl { varname, sym } => { 463 | ctx.insert(varname.clone(), sym.clone()); 464 | false 465 | } 466 | } 467 | } 468 | } 469 | 470 | #[cfg(test)] 471 | mod tests { 472 | use super::*; 473 | 474 | #[test] 475 | fn test_const_fold() { 476 | let mut ast = Node::Add { 477 | terms: vec![ 478 | Node::Constant { 479 | val: Constant::Int(1), 480 | }, 481 | Node::Variable { 482 | name: "a".to_string(), 483 | indices: vec![], 484 | }, 485 | Node::Mul { 486 | terms: vec![ 487 | Node::Variable { 488 | name: "b".to_string(), 489 | indices: vec![], 490 | }, 491 | Node::Constant { 492 | val: Constant::Int(0), 493 | }, 494 | ], 495 | }, 496 | Node::Variable { 497 | name: "c".to_string(), 498 | indices: vec![], 499 | }, 500 | ], 501 | }; 502 | let mut ctx = Context::new(); 503 | 504 | ctx.insert( 505 | "a".to_string(), 506 | Symbol { 507 | data_type: DataType::Int, 508 | val: None, 509 | indices: vec![], 510 | }, 511 | ); 512 | ctx.insert( 513 | "b".to_string(), 514 | Symbol { 515 | data_type: DataType::Int, 516 | val: None, 517 | indices: vec![], 518 | }, 519 | ); 520 | ctx.insert( 521 | "c".to_string(), 522 | Symbol { 523 | data_type: DataType::Int, 524 | val: Some(Constant::Int(2)), 525 | indices: vec![], 526 | }, 527 | ); 528 | 529 | assert!(ast.fold_const(&mut ctx)); 530 | 531 | let expected_ast = Node::Add { 532 | terms: vec![ 533 | Node::Variable { 534 | name: "a".to_string(), 535 | indices: vec![], 536 | }, 537 | Node::Constant { 538 | val: Constant::Int(3), 539 | }, 540 | ], 541 | }; 542 | assert_eq!(format!("{:?}", ast), format!("{:?}", expected_ast)); 543 | } 544 | } 545 | -------------------------------------------------------------------------------- /compiler/src/context.rs: -------------------------------------------------------------------------------- 1
| use crate::ast::{Constant, DataType}; 2 | use std::collections::HashMap; 3 | 4 | #[derive(Clone, Debug)] 5 | pub struct Symbol { 6 | pub data_type: DataType, 7 | /// Populated if the value is known at compile time. Note, arrays 8 | /// cannot be constants 9 | pub val: Option<Constant>, 10 | /// If this is an array, then specify the ranges of the indices 11 | pub indices: Vec, 12 | } 13 | 14 | #[derive(Debug)] 15 | pub struct Context { 16 | /// Stack of contexts. Guaranteed to have at least one context 17 | stack: Vec<HashMap<String, Symbol>>, 18 | } 19 | 20 | impl Context { 21 | pub fn new() -> Self { 22 | Self { 23 | stack: vec![HashMap::new()], 24 | } 25 | } 26 | 27 | pub fn insert(&mut self, name: String, sym: Symbol) { 28 | if self.get(&name).is_some() { 29 | panic!("Variable '{}' already declared", name); 30 | } 31 | self.stack.last_mut().as_mut().unwrap().insert(name, sym); 32 | } 33 | 34 | pub fn get<'a>(&'a self, name: &str) -> Option<&'a Symbol> { 35 | for i in 0..self.stack.len() { 36 | let i = self.stack.len() - 1 - i; 37 | if let Some(s) = self.stack[i].get(name) { 38 | return Some(s); 39 | } 40 | } 41 | return None; 42 | } 43 | 44 | /// Push a new context in 45 | pub fn push(&mut self) { 46 | self.stack.push(HashMap::new()); 47 | } 48 | 49 | /// Remove the last context 50 | pub fn pop(&mut self) { 51 | self.stack.pop(); 52 | } 53 | } 54 | -------------------------------------------------------------------------------- /compiler/src/lib.rs: -------------------------------------------------------------------------------- 1 | pub mod ast; 2 | pub mod context; 3 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from typing import Optional, Union 3 | import z3 4 | 5 | 6 | class ModelConfig: 7 | # Number of flows 8 | N: int 9 | # Jitter parameter (in timesteps) 10 | D: int 11 | # RTT (in timesteps) 12 | R: int 13 | # Number of timesteps 14 |
T: int 15 | # Link rate 16 | C: float 17 | # Packets cannot be dropped below this threshold 18 | buf_min: Optional[float] 19 | # Packets have to be dropped above this threshold 20 | buf_max: Optional[float] 21 | # Number of dupacks before sender declares loss 22 | dupacks: Optional[float] 23 | # Congestion control algorithm 24 | cca: str 25 | # If false, we'll use a model that is more restrictive but does not compose 26 | compose: bool 27 | # Additive increase parameter used by various CCAs 28 | alpha: Union[float, z3.ArithRef] = 1.0 29 | # Whether or not to use pacing in various CCA 30 | pacing: bool 31 | # If compose is false, wastage can only happen if queue length < epsilon 32 | epsilon: str 33 | # Whether to turn on unsat_core for all variables 34 | unsat_core: bool 35 | # Whether to simplify output before plotting/saving 36 | simplify: bool 37 | # Whether AIMD can additively increase irrespective of losses. If true, the 38 | # the algorithm is more like cubic and has interesting failure modes 39 | aimd_incr_irrespective: bool 40 | 41 | # These config variables are calculated automatically 42 | calculate_qdel: bool 43 | 44 | def __init__(self, 45 | N: int, 46 | D: int, 47 | R: int, 48 | T: int, 49 | C: float, 50 | buf_min: Optional[float], 51 | buf_max: Optional[float], 52 | dupacks: Optional[float], 53 | cca: str, 54 | compose: bool, 55 | alpha: Optional[float], 56 | pacing: bool, 57 | epsilon: str, 58 | unsat_core: bool, 59 | simplify: bool, 60 | aimd_incr_irrespective: bool = False): 61 | self.__dict__ = locals() 62 | self.calculate_qdel = cca in ["copa"] or N > 1 63 | 64 | @staticmethod 65 | def get_argparse() -> argparse.ArgumentParser: 66 | parser = argparse.ArgumentParser(add_help=False) 67 | parser.add_argument("-N", "--num-flows", type=int, default=1) 68 | parser.add_argument("-D", type=int, default=1) 69 | parser.add_argument("-R", "--rtt", type=int, default=1) 70 | parser.add_argument("-T", "--time", type=int, default=10) 71 | 
parser.add_argument("-C", "--rate", type=float, default=1) 72 | parser.add_argument("--buf-min", type=float, default=None) 73 | parser.add_argument("--buf-max", type=float, default=None) 74 | parser.add_argument("--dupacks", type=float, default=None) 75 | parser.add_argument( 76 | "--cca", 77 | type=str, 78 | default="const", 79 | choices=["const", "aimd", "copa", "bbr", "fixed_d", "any"]) 80 | parser.add_argument("--no-compose", action="store_true") 81 | parser.add_argument("--alpha", type=float, default=None) 82 | parser.add_argument("--pacing", 83 | action="store_const", 84 | const=True, 85 | default=False) 86 | parser.add_argument( 87 | "--epsilon", 88 | type=str, 89 | default="zero", 90 | choices=["zero", "lt_alpha", "lt_half_alpha", "gt_alpha"]) 91 | parser.add_argument("--unsat-core", action="store_true") 92 | parser.add_argument("--simplify", action="store_true") 93 | parser.add_argument("--aimd-incr-irrespective", action="store_true") 94 | 95 | return parser 96 | 97 | @classmethod 98 | def from_argparse(cls, args: argparse.Namespace): 99 | return cls(args.num_flows, args.D, args.rtt, args.time, args.rate, 100 | args.buf_min, args.buf_max, args.dupacks, args.cca, 101 | not args.no_compose, args.alpha, args.pacing, args.epsilon, 102 | args.unsat_core, args.simplify, args.aimd_incr_irrespective) 103 | 104 | @classmethod 105 | def default(cls): 106 | return cls.from_argparse(cls.get_argparse().parse_args(args=[])) 107 | -------------------------------------------------------------------------------- /copa_plot.py: -------------------------------------------------------------------------------- 1 | from multi_flow import ModelConfig 2 | from cache import QueryResult 3 | import matplotlib 4 | import matplotlib.pyplot as plt 5 | import numpy as np 6 | from typing import Dict, List, Optional, Tuple, Union 7 | import pickle as pkl 8 | 9 | def plot_model(m: Dict[str, Union[float, bool]], cfg: ModelConfig): 10 | # matplotlib.rc('xtick', labelsize=20) 11 | # 
matplotlib.rc('ytick', labelsize=20) 12 | matplotlib.rc("font", size=40) 13 | 14 | def to_arr(name: str, n: Optional[int] = None) -> np.array: 15 | if n is None: 16 | names = [f"{name}_{t}" for t in range(cfg.T)] 17 | else: 18 | names = [f"{name}_{n},{t}" for t in range(cfg.T)] 19 | res = [] 20 | for n in names: 21 | if n in m: 22 | res.append(m[n]) 23 | else: 24 | res.append(-1) 25 | return np.array(res) 26 | 27 | # Configure the plotting 28 | fig, ax1 = plt.subplots(1, 1, sharex=True) 29 | fig.set_size_inches(18.5, 10.5) 30 | ax1.grid(True) 31 | 32 | times = [t for t in range(cfg.T)] 33 | ct = np.asarray([cfg.C * t for t in range(cfg.T)]) 34 | 35 | wasted = to_arr("wasted") 36 | out = to_arr("tot_out") 37 | inp = to_arr("tot_inp") 38 | 39 | between = [(0, inp[0])] 40 | for t in range(1, len(out)): 41 | between.append((t, out[t])) 42 | between.append((t, inp[t])) 43 | between.append((len(out) - 1, out[-1])) 44 | between_times, between_bytes = zip(*between) 45 | ax1.plot(between_times, between_bytes, color="orange", linestyle="--", label='S₁(t) = A₂(t)') 46 | 47 | 48 | wasted[-1] = wasted[-2] + 0.9 49 | ax1.plot(times, ct - wasted, 50 | color='black', marker='o', label='Bounds', linewidth=3) 51 | ax1.plot(times[cfg.D:], (ct - wasted)[:-cfg.D], 52 | color='black', marker='o', linewidth=3) 53 | ax1.plot(times, out, 54 | color='red', marker='o', label='S(t)') 55 | ax1.plot(times, inp, 56 | color='blue', marker='o', label='A(t)') 57 | # ax1.plot(times, to_arr("tot_inp") - to_arr("tot_lost"), 58 | # color='lightblue', marker='o', label='Total Ingress Accepted') 59 | 60 | 61 | ax1.set_xlabel("Time (Rₘ)") 62 | ax1.set_ylabel("Cumulative bytes (in BDP)") 63 | x_left, x_right = ax1.get_xlim() 64 | y_low, y_high = ax1.get_ylim() 65 | ax1.set_aspect(abs((x_right-x_left)/(y_low-y_high))/2) 66 | ax1.legend() 67 | plt.savefig('/home/venkatar/Downloads/copa-z3.pdf') 68 | plt.show() 69 | 70 | if __name__ == "__main__": 71 | f = open("cached/d2c00c416b9fd84c.cached", "rb") 72 | 
qres: QueryResult = pkl.load(f) 73 | plot_model(qres.model, qres.cfg) 74 | -------------------------------------------------------------------------------- /copa_proofs.py: -------------------------------------------------------------------------------- 1 | from z3 import And, Or 2 | 3 | from config import ModelConfig 4 | from model import make_solver 5 | from pyz3_utils import run_query 6 | 7 | 8 | def prove_steady_state(timeout=10): 9 | # This analysis is for infinite buffer size 10 | 11 | c = ModelConfig.default() 12 | c.cca = "copa" 13 | # We only need compose=False to prove cwnd increases/doesn't 14 | # fall. Otherwise, this assumption is not needed (of course, we *can* make 15 | # the assumption if we want) 16 | c.compose = True 17 | c.calculate_qdel = True 18 | 19 | # The last cwnd value that is chosen completely freely. We'll treat this as 20 | # the initial cwnd 21 | dur = c.R + c.D - 1 22 | 23 | # If cwnd > 4 BDP + alpha, cwnd will decrease by at-least alpha 24 | s, v = make_solver(c) 25 | # Lemma's assumption 26 | # We are looking at infinite buffer, no loss case here and in the paper 27 | s.add(And(v.L[0] == 0, v.L[-1] == 0)) 28 | s.add(v.alpha < (1 / 3) * c.C * c.R) 29 | s.add(v.c_f[0][dur] > 4*c.C*c.R + v.alpha) 30 | # Lemma's statement's converse 31 | s.add(v.c_f[0][-1] >= v.c_f[0][dur] - v.alpha) 32 | print("Proving that cwnd will decrease when it is too big") 33 | qres = run_query(c, s, v, timeout) 34 | print(qres.satisfiable) 35 | assert(qres.satisfiable == "unsat") 36 | 37 | # If queue length is > 4 BDP + 2 alpha and cwnd < 4 BDP + alpha, queue 38 | # length decreases by at-least alpha and cwnd will not increase beyond its bound 39 | s, v = make_solver(c) 40 | # Lemma's assumption 41 | s.add(And(v.L[0] == 0, v.L[-1] == 0)) 42 | s.add(v.alpha < (1 / 5) * c.C * c.R) 43 | s.add(v.c_f[0][dur] <= 4*c.C*c.R + v.alpha) 44 | s.add(v.A[0] - v.S[0] > 4*c.C*c.R + 2*v.alpha) 45 | # Lemma's statement's converse 46 | s.add(Or( 47 | v.A[-1] - v.S[-1] > v.A[0] - v.S[0]
- v.alpha, 48 | v.c_f[0][-1] > 4*c.C*c.R + v.alpha)) 49 | print("Proving that if queue is too big and cwnd is small enough, then " 50 | "queue will fall") 51 | qres = run_query(c, s, v, timeout) 52 | print(qres.satisfiable) 53 | assert(qres.satisfiable == "unsat") 54 | 55 | # If cwnd < BDP - alpha and queue length < 4 BDP + 2 alpha, cwnd increases 56 | # by at-least alpha and queue length does not increase its bound 57 | c.T = 15 58 | c.compose = False # we definitely need it to prove cwnd increases 59 | s, v = make_solver(c) 60 | # Lemma's assumption 61 | s.add(And(v.L[0] == 0, v.L[-1] == 0)) 62 | s.add(v.alpha < (1 / 4) * c.C * c.R) 63 | s.add(v.c_f[0][dur] < c.C*c.R - v.alpha) 64 | s.add(v.A[0] - v.S[0] <= 4*c.C*c.R + 2*v.alpha) 65 | # Lemma's statement's converse 66 | s.add(Or( 67 | And( 68 | v.c_f[0][-1] < c.C*c.R - v.alpha, 69 | v.c_f[0][-1] < v.c_f[0][dur] + v.alpha), 70 | v.A[-1] - v.S[-1] > 4*c.C*c.R + 2*v.alpha)) 71 | print("Proving that if cwnd is too small and the queue is small enough, " 72 | "cwnd increases") 73 | qres = run_query(c, s, v, timeout) 74 | print(qres.satisfiable) 75 | assert(qres.satisfiable == "unsat") 76 | 77 | # If Copa has entered steady state, it does not leave it 78 | c.T = 10 79 | c.compose = False 80 | s, v = make_solver(c) 81 | ors = [] 82 | # Lemma's assumption 83 | s.add(v.alpha < (1 / 7) * c.C * c.R) 84 | s.add(And(v.L[0] == 0, v.L[-1] == 0)) 85 | s.add(v.c_f[0][dur] >= c.C*c.R - v.alpha) 86 | s.add(v.c_f[0][dur] <= 4*c.C*c.R + 2*v.alpha) 87 | s.add(v.A[0] - v.S[0] <= 4*c.C*c.R + 2*v.alpha) 88 | # Lemma's statement's converse 89 | ors.append(v.c_f[0][-1] > 4*c.C*c.R + v.alpha) 90 | ors.append(v.c_f[0][-1] < c.C*c.R - v.alpha) 91 | ors.append(v.A[-1] - v.S[-1] > 4*c.C*c.R + 2*v.alpha) 92 | s.add(Or(*ors)) 93 | print("Proving that if Copa has entered steady state, it will " 94 | "remain there") 95 | qres = run_query(c, s, v, timeout) 96 | print(qres.satisfiable) 97 | assert(qres.satisfiable == "unsat") 98 | 99 | 100 | if 
__name__ == "__main__": 101 | prove_steady_state() 102 | -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | Under construction. Coming soon! 2 | -------------------------------------------------------------------------------- /example_queries.py: -------------------------------------------------------------------------------- 1 | from z3 import And, Not, Or 2 | 3 | from config import ModelConfig 4 | from model import make_solver 5 | from plot import plot_model 6 | from pyz3_utils import MySolver, run_query 7 | from utils import make_periodic 8 | 9 | 10 | def bbr_low_util(timeout=10): 11 | '''Finds an example trace where BBR has < 10% utilization. It can be made 12 | arbitrarily small, since BBR can get arbitrarily small throughput in our 13 | model. 14 | 15 | You can simplify the solution somewhat by setting simplify=True, but that 16 | can cause small numerical errors which makes the solution inconsistent. See 17 | README for details. 18 | 19 | ''' 20 | c = ModelConfig.default() 21 | c.compose = True 22 | c.cca = "bbr" 23 | # Simplification isn't necessary, but makes the output a bit easier to 24 | # understand 25 | c.simplify = False 26 | s, v = make_solver(c) 27 | # Consider the no loss case for simplicity 28 | s.add(v.L[0] == 0) 29 | # Ask for < 10% utilization. 
Can be made arbitrarily small 30 | s.add(v.S[-1] - v.S[0] < 0.1 * c.C * c.T) 31 | make_periodic(c, s, v, 2 * c.R) 32 | qres = run_query(c, s, v, timeout) 33 | print(qres.satisfiable) 34 | if str(qres.satisfiable) == "sat": 35 | plot_model(qres.model, c, qres.v) 36 | 37 | 38 | def bbr_test(timeout=10): 39 | c = ModelConfig.default() 40 | c.compose = True 41 | c.cca = "bbr" 42 | c.buf_min = 0.5 43 | c.buf_max = 0.5 44 | c.T = 8 45 | # Simplification isn't necessary, but makes the output a bit easier to 46 | # understand 47 | c.simplify = False 48 | s, v = make_solver(c) 49 | # Consider the no loss case for simplicity 50 | s.add(v.L[0] == 0) 51 | # Ask for < 10% utilization. Can be made arbitrarily small 52 | #s.add(v.S[-1] - v.S[0] < 0.1 * c.C * c.T) 53 | s.add(v.L[-1] - v.L[0] >= 0.5 * (v.S[-1] - v.S[0])) 54 | s.add(v.A[0] == 0) 55 | s.add(v.r_f[0][0] < c.C) 56 | s.add(v.r_f[0][1] < c.C) 57 | s.add(v.r_f[0][2] < c.C) 58 | make_periodic(c, s, v, 2 * c.R) 59 | qres = run_query(c, s, v, timeout) 60 | print(qres.satisfiable) 61 | if str(qres.satisfiable) == "sat": 62 | plot_model(qres.model, c, qres.v) 63 | 64 | 65 | def copa_low_util(timeout=10): 66 | '''Finds an example where Copa gets < 10% utilization. This is with the default 67 | model that composes. If c.compose = False, then CCAC cannot find an example 68 | where utilization is below 50%. copa_proofs.py proves bounds on Copa's 69 | performance in the non-composing model. When c.compose = True, Copa can get 70 | arbitrarily low throughput 71 | 72 | ''' 73 | c = ModelConfig.default() 74 | c.compose = False 75 | c.cca = "copa" 76 | c.simplify = False 77 | c.calculate_qdel = True 78 | c.unsat_core = False 79 | c.T = 10 80 | s, v = make_solver(c) 81 | # Consider the no loss case for simplicity 82 | s.add(v.L[0] == v.L[-1]) 83 | # 10% utilization. 
Can be made arbitrarily small 84 | s.add(v.S[-1] - v.S[0] < 0.1 * c.C * c.T) 85 | make_periodic(c, s, v, c.R + c.D) 86 | 87 | print(s.to_smt2(), file = open("/tmp/ccac.smt2", "w")) 88 | s.check() 89 | print(s.statistics()) 90 | qres = run_query(c, s, v, timeout) 91 | print(qres.satisfiable) 92 | if str(qres.satisfiable) == "sat": 93 | plot_model(qres.model, c, qres.v) 94 | 95 | 96 | def aimd_premature_loss(timeout=60): 97 | '''Finds a case where AIMD bursts 2 BDP packets where buffer size = 2 BDP and 98 | cwnd <= 2 BDP. Here 1BDP is due to an ack burst and another BDP is because 99 | AIMD just detected 1BDP of loss. This analysis created the example 100 | discussed in section 6 of the paper. As a result, cwnd can reduce to 1 BDP 101 | even when buffer size is 2BDP, whereas in a fluid model it never goes below 102 | 1.5 BDP. 103 | 104 | ''' 105 | c = ModelConfig.default() 106 | c.cca = "aimd" 107 | c.buf_min = 2 108 | c.buf_max = 2 109 | c.simplify = False 110 | c.T = 5 111 | 112 | s, v = make_solver(c) 113 | 114 | # Start with zero loss and zero queue, so CCAC is forced to generate an 115 | # example trace *from scratch* showing how bad behavior can happen in a 116 | # network that was perfectly normal to begin with 117 | s.add(v.L[0] == 0) 118 | # Restrict alpha to small values, otherwise CCAC can output obvious and 119 | # uninteresting behavior 120 | s.add(v.alpha <= 0.1 * c.C * c.R) 121 | 122 | # Does there exist a time where loss happened while cwnd <= 1? 
123 | conds = [] 124 | for t in range(2, c.T - 1): 125 | conds.append( 126 | And( 127 | v.c_f[0][t] <= 2, 128 | v.Ld_f[0][t + 1] - v.Ld_f[0][t] >= 129 | 1, # Burst due to loss detection 130 | v.S[t + 1 - c.R] - v.S[t - c.R] >= 131 | c.C + 1, # Burst of BDP acks 132 | v.A[t + 1] >= 133 | v.A[t] + 2, # Sum of the two bursts 134 | v.L[t+1] > v.L[t] 135 | )) 136 | 137 | # We don't want an example with timeouts 138 | for t in range(c.T): 139 | s.add(Not(v.timeout_f[0][t])) 140 | 141 | s.add(Or(*conds)) 142 | 143 | qres = run_query(c, s, v, timeout) 144 | print(qres.satisfiable) 145 | if str(qres.satisfiable) == "sat": 146 | plot_model(qres.model, c, qres.v) 147 | 148 | 149 | if __name__ == "__main__": 150 | import sys 151 | 152 | funcs = { 153 | "aimd_premature_loss": aimd_premature_loss, 154 | "bbr_low_util": bbr_low_util, 155 | "copa_low_util": copa_low_util 156 | } 157 | usage = f"Usage: python3 example_queries.py <{'|'.join(funcs.keys())}>" 158 | 159 | if len(sys.argv) != 2: 160 | print("Expected exactly one command") 161 | print(usage) 162 | exit(1) 163 | cmd = sys.argv[1] 164 | if cmd in funcs: 165 | funcs[cmd]() 166 | else: 167 | print("Command not recognized") 168 | print(usage) 169 | -------------------------------------------------------------------------------- /model.py: -------------------------------------------------------------------------------- 1 | from typing import Optional, Tuple 2 | from z3 import And, Sum, Implies, Or, Not, If 3 | 4 | from cca_aimd import cca_aimd 5 | from cca_bbr import cca_bbr 6 | from cca_copa import cca_copa 7 | from config import ModelConfig 8 | from pyz3_utils import MySolver 9 | from variables import Variables 10 | 11 | 12 | def monotone(c: ModelConfig, s: MySolver, v: Variables): 13 | for t in range(1, c.T): 14 | for n in range(c.N): 15 | s.add(v.A_f[n][t] >= v.A_f[n][t - 1]) 16 | s.add(v.Ld_f[n][t] >= v.Ld_f[n][t - 1]) 17 | s.add(v.S_f[n][t] >= v.S_f[n][t - 1]) 18 | s.add(v.L_f[n][t] >= v.L_f[n][t - 1]) 19 | 20 | s.add( 
21 |                 v.A_f[n][t] - v.L_f[n][t] >= v.A_f[n][t - 1] - v.L_f[n][t - 1])
22 |         s.add(v.W[t] >= v.W[t - 1])
23 | 
24 | 
25 | def initial(c: ModelConfig, s: MySolver, v: Variables):
26 |     for n in range(c.N):
27 |         # Making these positive actually matters. What would negative
28 |         # rate or loss even mean?
29 |         s.add(v.c_f[n][0] > 0)
30 |         s.add(v.r_f[n][0] > 0)
31 |         s.add(v.L_f[n][0] >= 0)
32 |         s.add(v.Ld_f[n][0] >= 0)
33 | 
34 |         # These are invariant to y-shift. However, it does make the results
35 |         # easier to interpret if they start from 0
36 |         s.add(v.S_f[n][0] == 0)
37 | 
38 | 
39 | def relate_tot(c: ModelConfig, s: MySolver, v: Variables):
40 |     ''' Relate total values to per-flow values '''
41 |     for t in range(c.T):
42 |         s.add(v.A[t] == Sum([v.A_f[n][t] for n in range(c.N)]))
43 |         s.add(v.L[t] == Sum([v.L_f[n][t] for n in range(c.N)]))
44 |         s.add(v.S[t] == Sum([v.S_f[n][t] for n in range(c.N)]))
45 | 
46 | 
47 | def network(c: ModelConfig, s: MySolver, v: Variables):
48 |     for t in range(c.T):
49 |         for n in range(c.N):
50 |             s.add(v.S_f[n][t] <= v.A_f[n][t] - v.L_f[n][t])
51 | 
52 |         s.add(v.S[t] <= c.C * t - v.W[t])
53 |         if t >= c.D:
54 |             s.add(c.C * (t - c.D) - v.W[t - c.D] <= v.S[t])
55 |         else:
56 |             # The constraint is the most slack when the black line is steepest. 
So
57 |             # we'll say there was no wastage when t < 0
58 |             s.add(c.C * (t - c.D) - v.W[0] <= v.S[t])
59 | 
60 |         if c.compose:
61 |             if t > 0:
62 |                 s.add(
63 |                     Implies(v.W[t] > v.W[t - 1],
64 |                             v.A[t] - v.L[t] <= c.C * t - v.W[t]))
65 |         else:
66 |             if t > 0:
67 |                 s.add(
68 |                     Implies(v.W[t] > v.W[t - 1],
69 |                             v.A[t] - v.L[t] <= v.S[t] + v.epsilon))
70 | 
71 |         if c.buf_min is not None:
72 |             if t > 0:
73 |                 r = sum([v.r_f[n][t] for n in range(c.N)])
74 |                 s.add(
75 |                     Implies(
76 |                         v.L[t] > v.L[t - 1], v.A[t] - v.L[t] >= c.C *
77 |                         (t - 1) - v.W[t - 1] + c.buf_min
78 |                         # And(v.A[t] - v.L[t] >= c.C*(t-1) - v.W[t-1] + c.buf_min,
79 |                         #     r > c.C,
80 |                         #     c.C*(t-1) - v.W[t-1] + c.buf_min
81 |                         #     - (v.A[t-1] - v.L[t-1]) < r - c.C
82 |                         # )
83 |                     ))
84 |         else:
85 |             s.add(v.L[t] == v.L[0])
86 | 
87 |         # Enforce buf_max if given
88 |         if c.buf_max is not None:
89 |             s.add(v.A[t] - v.L[t] <= c.C * t - v.W[t] + c.buf_max)
90 | 
91 | 
92 | def loss_detected(c: ModelConfig, s: MySolver, v: Variables):
93 |     for n in range(c.N):
94 |         for t in range(c.T):
95 |             for dt in range(c.T):
96 |                 if t - c.R - dt < 0:
97 |                     continue
98 |                 # Loss is detectable through dupacks
99 |                 detectable = v.A_f[n][t-c.R-dt] - v.L_f[n][t-c.R-dt]\
100 |                     + v.dupacks <= v.S_f[n][t-c.R]
101 | 
102 |                 s.add(
103 |                     Implies(And(Not(v.timeout_f[n][t]), detectable),
104 |                             v.Ld_f[n][t] >= v.L_f[n][t - c.R - dt]))
105 |                 s.add(
106 |                     Implies(And(Not(v.timeout_f[n][t]), Not(detectable)),
107 |                             v.Ld_f[n][t] <= v.L_f[n][t - c.R - dt]))
108 | 
109 |             # We implement an RTO scheme that magically triggers when S(t) ==
110 |             # A(t) - L(t). While this is not implementable in reality, it is
111 |             # still realistic. First, if a CCAC version of the CCA times out,
112 |             # then a real implementation will also timeout. The timeout may
113 |             # occur after a different duration than in the real world. The user
114 |             # should be mindful of this and not take the timeout duration
115 |             # literally. 
Nevertheless, this difference has no bearing on
116 |             # subsequent behavior.
117 | 
118 |             # This is also the only *legitimate* case where we want our CCA to
119 |             # timeout. A CCAC adversary can cause a real implementation to
120 |             # timeout by keeping RTTVAR=0 and then suddenly delaying packets by
121 |             # D seconds. This counter-example is uninteresting. Hence
122 |             # we usually want to avoid getting such counter-examples in
123 |             # CCAC. Our timeout strategy sidesteps this issue.
124 | 
125 |             if t < c.R:
126 |                 s.add(v.timeout_f[n][t] == False)
127 |             else:
128 |                 s.add(v.timeout_f[n][t] == And(
129 |                     v.S_f[n][t - c.R] < v.A_f[n][t - 1],  # outstanding bytes
130 |                     v.S_f[n][t - c.R] == v.A_f[n][t - c.R] -
131 |                     v.L_f[n][t - c.R]))
132 |             s.add(Implies(v.timeout_f[n][t], v.Ld_f[n][t] == v.L_f[n][t]))
133 | 
134 |             s.add(v.Ld_f[n][t] <= v.L_f[n][t - c.R])
135 | 
136 | 
137 | def calculate_qdel(c: ModelConfig, s: MySolver, v: Variables):
138 |     # Figure out the time when the bytes being output at time t were
139 |     # first input
140 |     for t in range(c.T):
141 |         for dt in range(c.T):
142 |             if dt > t:
143 |                 s.add(Not(v.qdel[t][dt]))
144 |                 continue
145 |             s.add(v.qdel[t][dt] == Or(
146 |                 And(
147 |                     v.S[t] != v.S[t - 1],
148 |                     And(v.A[t - dt - 1] - v.L[t - dt - 1] < v.S[t],
149 |                         v.A[t - dt] - v.L[t - dt] >= v.S[t])),
150 |                 And(v.S[t] == v.S[t - 1], v.qdel[t - 1][dt])))
151 | 
152 |         # We don't know what happened at t < 0, so we'll let the solver pick
153 |         # non-deterministically
154 |         s.add(
155 |             Implies(And(v.S[t] != v.S[t - 1], v.A[0] - v.L[0] < v.S[t - 1]),
156 |                     Not(v.qdel[t][t - 1])))
157 | 
158 | 
159 | def multi_flows(c: ModelConfig, s: MySolver, v: Variables):
160 |     assert (c.calculate_qdel)
161 |     for t in range(c.T):
162 |         for n in range(c.N):
163 |             for dt in range(c.T):
164 |                 if t - dt - 1 < 0:
165 |                     continue
166 |                 s.add(
167 |                     Implies(v.qdel[t][dt], v.S_f[n][t] > v.A_f[n][t - dt - 1]))
168 | 
169 | 
170 | def epsilon_alpha(c: ModelConfig, s: MySolver, v: Variables):
171 |     if 
not c.compose:
172 |         if c.epsilon == "zero":
173 |             s.add(v.epsilon == 0)
174 |         elif c.epsilon == "lt_alpha":
175 |             s.add(v.epsilon < v.alpha)
176 |         elif c.epsilon == "lt_half_alpha":
177 |             s.add(v.epsilon < v.alpha * 0.5)
178 |         elif c.epsilon == "gt_alpha":
179 |             s.add(v.epsilon > v.alpha)
180 |         else:
181 |             assert (False)
182 | 
183 | 
184 | def cwnd_rate_arrival(c: ModelConfig, s: MySolver, v: Variables):
185 |     for n in range(c.N):
186 |         for t in range(c.T):
187 |             if t >= c.R:
188 |                 assert (c.R >= 1)
189 |                 # Arrival due to cwnd
190 |                 A_w = v.S_f[n][t - c.R] + v.Ld_f[n][t] + v.c_f[n][t]
191 |                 A_w = If(A_w >= v.A_f[n][t - 1], A_w, v.A_f[n][t - 1])
192 |                 # Arrival due to rate
193 |                 A_r = v.A_f[n][t - 1] + v.r_f[n][t]
194 |                 # Net arrival
195 |                 s.add(v.A_f[n][t] == If(A_w >= A_r, A_r, A_w))
196 |             else:
197 |                 # NOTE: This is different in this new version. Here anything
198 |                 # can happen. No restrictions
199 |                 pass
200 | 
201 | 
202 | def min_send_quantum(c: ModelConfig, s: MySolver, v: Variables):
203 |     '''Every timestep, the sender must send either 0 bytes or at least 1 MSS
204 |     (i.e. alpha) bytes. While we do not recommend using these constraints
205 |     everywhere, without them AIMD could avoid triggering loss detection by
206 |     sending tiny packets that sum to less than beta. This is not possible in
207 |     the real world and should be ruled out.
208 | ''' 209 | 210 | for n in range(c.N): 211 | for t in range(1, c.T): 212 | s.add( 213 | Or(v.S_f[n][t - 1] == v.S_f[n][t], 214 | v.S_f[n][t - 1] + v.alpha <= v.S_f[n][t])) 215 | 216 | 217 | def cca_const(c: ModelConfig, s: MySolver, v: Variables): 218 | for n in range(c.N): 219 | for t in range(c.T): 220 | s.add(v.c_f[n][t] == v.alpha) 221 | 222 | if c.pacing: 223 | s.add(v.r_f[n][t] == v.alpha / c.R) 224 | else: 225 | s.add(v.r_f[n][t] >= c.C * 100) 226 | 227 | 228 | def make_solver(c: ModelConfig, 229 | s: Optional[MySolver] = None, 230 | v: Optional[Variables] = None) -> Tuple[MySolver, Variables]: 231 | if s is None: 232 | s = MySolver() 233 | if v is None: 234 | v = Variables(c, s) 235 | 236 | if c.unsat_core: 237 | s.set(unsat_core=True) 238 | 239 | monotone(c, s, v) 240 | initial(c, s, v) 241 | relate_tot(c, s, v) 242 | network(c, s, v) 243 | loss_detected(c, s, v) 244 | epsilon_alpha(c, s, v) 245 | if c.calculate_qdel: 246 | calculate_qdel(c, s, v) 247 | if c.N > 1: 248 | assert (c.calculate_qdel) 249 | multi_flows(c, s, v) 250 | cwnd_rate_arrival(c, s, v) 251 | 252 | if c.cca == "const": 253 | cca_const(c, s, v) 254 | elif c.cca == "aimd": 255 | cca_aimd(c, s, v) 256 | elif c.cca == "bbr": 257 | cca_bbr(c, s, v) 258 | elif c.cca == "copa": 259 | cca_copa(c, s, v) 260 | elif c.cca == "any": 261 | pass 262 | else: 263 | assert(False) 264 | 265 | return (s, v) 266 | 267 | 268 | if __name__ == "__main__": 269 | from plot import plot_model 270 | from pyz3_utils import run_query 271 | from utils import make_periodic 272 | 273 | c = ModelConfig(N=1, 274 | D=1, 275 | R=1, 276 | T=10, 277 | C=1, 278 | buf_min=1, 279 | buf_max=1, 280 | dupacks=None, 281 | cca="copa", 282 | compose=False, 283 | alpha=None, 284 | pacing=False, 285 | epsilon="zero", 286 | unsat_core=False, 287 | simplify=False) 288 | c.aimd_incr_irrespective = True 289 | 290 | s, v = make_solver(c) 291 | dur = c.R + c.D # + c.R + 2 * c.D 292 | # Consider the no loss case for simplicity 293 | 
s.add(v.L[0] == 0)
294 |     s.add(v.alpha < 1 / 4)
295 |     # s.add(v.c_f[0][0] == v.c_f[1][0])
296 |     # s.add(v.A_f[0][0] == v.A_f[1][0])
297 |     # s.add(v.A_f[0][0] == 0)
298 |     # s.add(v.L[dur] == 0)
299 |     s.add(v.S[-1] - v.S[0] < 0.5625 * c.C * (c.T - 1))
300 |     # s.add(v.S_f[0][-1] - v.S_f[1][-1] > 0.8 * c.C * c.T)
301 |     make_periodic(c, s, v, dur)
302 |     # cca_aimd_make_periodic(c, s, v)
303 |     qres = run_query(c, s, v, timeout=120)
304 |     print(qres.satisfiable)
305 |     if str(qres.satisfiable) == "sat":
306 |         assert qres.model is not None
307 |         plot_model(qres.model, c, qres.v)
308 | 
--------------------------------------------------------------------------------
/model_properties/.gitignore:
--------------------------------------------------------------------------------
1 | *.dll
2 | *.mdb
--------------------------------------------------------------------------------
/model_properties/inf_buffer_compatible.dfy:
--------------------------------------------------------------------------------
1 | predicate sorted(s: seq<real>)
2 | {
3 |   forall j, k :: 0 <= j <= k < |s| ==> s[j] <= s[k]
4 | }
5 | 
6 | // If all neighbors are in sorted order, 'sorted' is true
7 | lemma SortedNeighbor(s: seq<real>)
8 |   requires forall i :: 0 <= i < i + 1 < |s| ==> s[i] <= s[i+1]
9 |   ensures sorted(s)
10 | {
11 |   forall j, k | 0 <= j <= k < |s| {
12 |     SortedNeighborHelper(s, j, k);
13 |   }
14 | }
15 | 
16 | lemma SortedNeighborHelper(s: seq<real>, j: nat, k: nat)
17 |   requires forall i :: 0 <= i < i + 1 < |s| ==> s[i] <= s[i+1]
18 |   requires 0 <= j <= k < |s|
19 |   ensures s[j] <= s[k] {
20 |   var i := j;
21 |   while i < k
22 |     invariant 0 <= j <= i <= k <= |s|
23 |     invariant s[j] <= s[i]
24 |   {
25 |     i := i + 1;
26 |   }
27 | }
28 | 
29 | predicate bounded_topps(opps: seq<real>, C: real, ku: real, kl: real)
30 | {
31 |   sorted(opps) &&
32 |   forall t :: 0 <= t < |opps| ==> t as real * C - kl <= opps[t] <= t as real * C + ku
33 | }
34 | 
35 | function run_inf_link(inp: seq<real>, opps: seq<real>) : (out: seq<real>)
36 |   requires sorted(inp) 
37 |   requires sorted(opps)
38 | {}
39 | 
40 | // Definition of a link with an infinitely large buffer
41 | /*method run_inf_link(inp: array<real>, opps: array<real>) returns (out: array<real>)
42 |   requires inp.Length == opps.Length
43 |   requires sorted(inp)
44 |   requires sorted(out)
45 | {
46 |   var t := 1;
47 |   var queue_len := if inp[0] > opps[0] then {
48 |     inp[0] - opps[0]
49 |   } else {
50 |     0
51 |   };
52 | 
53 |   while t < inp.Length || queue_len > 0
54 |     invariant queue_len >= 0
55 |   {
56 |     queue_len := queue_len + inp[t] - inp[t-1];
57 |     var change := if queue_len > (out[t] - out[t-1]) then {
58 |       out[t] - out[t-1]
59 |     } else {
60 |       queue_len
61 |     };
62 |     queue_len := queue_len - change;
63 |     t := t + 1;
64 |   }
65 |   return out;
66 | }*/
67 | 
68 | // Add a constant to a sequence
69 | function method seq_add(inp: seq<real>, val: real) : (out: seq<real>)
70 |   ensures |inp| == |out|
71 |   decreases |inp|
72 |   ensures forall i :: 0 <= i < |inp| ==> inp[i] + val == out[i]
73 | {
74 |   if |inp| == 0 then
75 |     []
76 |   else if |inp| == 1 then
77 |     [inp[0] + val]
78 |   else
79 |     [inp[0] + val] + seq_add(inp[1..], val)
80 | }
81 | 
82 | method link_cbr(inp: seq<real>, C: real) returns (opps: seq<real>)
83 |   requires sorted(inp)
84 |   requires |inp| > 0 ==> inp[0] >= 0.0
85 |   requires C > 0.0
86 |   ensures bounded_topps(opps, C, 0.0, 0.0)
87 | {
88 |   if |inp| == 0 {
89 |     return [];
90 |   }
91 | 
92 |   var t: nat := 1;
93 |   var tot_timesteps := 0;
94 |   var queue := inp[0];
95 |   assert queue >= 0.0;
96 | 
97 |   while t < |inp|
98 |     decreases |inp| - t, queue
99 |     invariant 0 <= t <= |inp|
100 |     invariant queue >= 0.0
101 |   {
102 |     if t < |inp| {
103 |       queue := queue + inp[t] - inp[t-1];
104 |       t := t + 1;
105 |     }
106 |     assert C >= 0.0;
107 |     queue := if queue < C then 0.0 else queue - C;
108 | 
109 |     tot_timesteps := tot_timesteps + 1;
110 |   }
111 | 
112 | 
113 |   // After the input is finished, clear the queue
114 |   assert queue / C >= 0.0;
115 |   tot_timesteps := tot_timesteps + (queue / C + 1.0).Floor;
116 |   assert tot_timesteps 
>= 0;
117 | 
118 |   // Generate the sequence of the appropriate length and return
119 |   var res := new real[tot_timesteps];
120 |   forall t | 0 <= t < tot_timesteps
121 |   {
122 |     res[t] := t as real * C;
123 |   }
124 |   return res[..];
125 | }
126 | 
127 | // Implements a link of the given rate that sends a burst every
128 | // 'agg' timesteps
129 | method link_aggr(tot_timesteps: nat, C: real, agg: nat) returns (opps: seq<real>)
130 |   requires agg > 0
131 |   requires C > 0.0
132 |   ensures bounded_topps(opps, C, 0.0, C * agg as real)
133 | {
134 |   var res := new real[tot_timesteps];
135 |   var t := 0;
136 |   while t < tot_timesteps
137 |     invariant 0 <= t <= tot_timesteps
138 |     invariant forall i :: 0 <= i < t ==> res[i] == C * ((i / agg) * agg) as real
139 |   {
140 |     res[t] := C * ((t / agg) * agg) as real;
141 |     t := t + 1;
142 |   }
143 | 
144 |   opps := res[..];
145 | 
146 |   // Prove that it is sorted
147 |   forall t | 0 <= t < t + 1 < |opps|
148 |     ensures opps[t] <= opps[t+1];
149 |   {
150 |     assert t <= t + 1;
151 |     assert t / agg <= (t + 1) / agg;
152 |     assert agg > 0;
153 |     assert (t / agg) * agg <= ((t+1) / agg) * agg;
154 |   }
155 |   SortedNeighbor(opps);
156 |   assert sorted(opps);
157 | 
158 |   return opps;
159 | }
--------------------------------------------------------------------------------
/model_properties/possible_dafny_bug.dfy:
--------------------------------------------------------------------------------
1 | method link_aggr(C: real, agg: int) returns (opps: seq<real>)
2 |   requires agg > 0
3 |   requires C > 0.0
4 | {
5 |   var tot_timesteps := 10;
6 | 
7 |   var res := new real[tot_timesteps];
8 |   var t := 0;
9 |   while t < tot_timesteps
10 |     invariant 0 <= t <= tot_timesteps
11 |     invariant forall i :: 0 <= i < t ==> res[i] == C * ((i / agg) * agg) as real
12 |   {
13 |     res[t] := C * ((t / agg) * agg) as real;
14 |     t := t + 1;
15 |   }
16 | 
17 |   // Prove it fits within the bounds
18 |   forall t | 0 <= t < tot_timesteps
19 |     ensures res[t] <= C * t as real
20 |     ensures res[t] > C * t as 
real - C * agg as real 21 | { 22 | assert res[t] == C * ((t / agg) * agg) as real; 23 | } 24 | opps := res[..]; 25 | 26 | // Prove that it is sorted 27 | forall t | 0 <= t < t + 1 < |opps| 28 | ensures opps[t] <= opps[t+1]; 29 | { 30 | assert t <= t + 1; 31 | assert t / agg <= (t + 1) / agg; 32 | assert (t / agg) * agg <= ((t+1) / agg) * agg; 33 | } 34 | } -------------------------------------------------------------------------------- /model_properties/proofs.v: -------------------------------------------------------------------------------- 1 | Require Import Coq.Arith.PeanoNat. 2 | Require Import Psatz. 3 | 4 | Module CongestionControl. 5 | (** Time is represented as a natural number, in 'ticks'. This is not a problem, 6 | as we can make a tick correspond to an arbitrarily small amount of real 7 | time. *) 8 | Definition Time := nat. 9 | 10 | (** Units of data (e.g. bytes) *) 11 | Definition Bytes := nat. 12 | 13 | (** Units of data transmitted per unit time *) 14 | Definition Rate := nat. 15 | 16 | (** The assertion that f is non-decreasing *) 17 | Definition monotone (f : (Time -> Bytes)) : Prop := 18 | forall t1 t2, t1 < t2 -> (f t1) <= (f t2). 19 | 20 | (** A trace of what the server did and how much input it got *) 21 | Record Trace : Set := mkTrace { 22 | (* Parameters of the link *) 23 | C : Rate; 24 | D : Time; 25 | (* Minimum buffer size *) 26 | Buf : Bytes; 27 | 28 | (* Data comprising the trace *) 29 | (* Upper limit is C * t - wasted t. 
Lower limit is (upper limit at time t - D) *)
30 |     wasted : Time -> Bytes;
31 |     out : Time -> Bytes;
32 |     lost : Time -> Bytes;
33 |     (* Number of bytes that have come in (irrespective of whether they were lost) *)
34 |     inp : Time -> Bytes;
35 | 
36 |     (* Constraints on out *)
37 |     constraint_u : forall t, (out t) + (wasted t) <= C * t;
38 |     constraint_l : forall t, (out t) + (wasted (t - D)) >= C * (t - D);
39 | 
40 |     (* The server can waste transmission opportunities if inp <= upper *)
41 |     cond_waste : forall t, wasted t < wasted (S t) ->
42 |                       inp (S t) - lost (S t) + wasted (S t) <= C * S t;
43 | 
44 |     (* Can only lose packets if inp t > constraint_u t + Buf *)
45 |     cond_lost : forall t, lost t < lost (S t) ->
46 |                      inp (S t) - lost (S t) > C * (S t) - (wasted (S t)) + Buf;
47 | 
48 |     (* Can't output more bytes than we got *)
49 |     out_le_inp : forall t, out t <= inp t - lost t;
50 | 
51 |     (* Everything should be monotonic (non-decreasing) *)
52 |     monotone_wasted : monotone wasted;
53 |     monotone_out : monotone out;
54 |     monotone_lost : monotone lost;
55 |     monotone_inp : monotone inp;
56 | 
57 |     (* Where everything starts *)
58 |     zero_wasted : forall t, t <= 0 -> wasted t = 0;
59 |     zero_out : out 0 = 0;
60 |     zero_lost : lost 0 = 0;
61 |     zero_inp : inp 0 = 0;
62 |   }.
63 | 
64 |   (*(** The vertical gap between the upper and lower constraints is bounded *)
65 |   Theorem constraint_vert_gap_bound :
66 |     forall s : Trace, forall t: Time,
67 |     (C s * t - wasted s t) -
68 |     (C s * (t - D s) - wasted s (t - D s)) <= D s * C s.
69 |   Proof.
70 |     intros s t.
71 |     (* Change the goal to a simpler form *)
72 |     assert (wasted s (t - D s) - wasted s t <= 0 ->
73 |             (C s * t - wasted s t) -
74 |             (C s * (t - D s) - wasted s (t - D s)) <= D s * C s). {
75 |       intro H.
76 |     }
77 | 
78 |   (** If the buffer of the second link is bigger than the D * C of the preceding
79 |   link and the second link is faster, then the second link cannot drop packets.
80 | *) 81 | Theorem trace_no_subsequent_loss : 82 | forall (s1 s2 : Trace), 83 | (C s1) <= (C s2) /\ 84 | (inp s2) = (out s1) /\ 85 | (C s1) * (D s1) < (Buf s2) -> 86 | forall t, (lost s2 t) = 0. 87 | Proof. 88 | intros s1 s2 [Hc12 [H12 Hbuf_D]] t. 89 | 90 | induction t. 91 | - apply (zero_lost s2). 92 | - (* Did lost s2 (S t) increase? *) 93 | pose (Nat.eq_0_gt_0_cases (lost s2 (S t))) as Hl. 94 | destruct Hl. assumption. exfalso. 95 | rewrite <- IHt in H. clear IHt. 96 | 97 | (* Get the condition for increase in lost s2 *) 98 | pose (cond_lost s2 t) as Hl_cond. apply Hl_cond in H. clear Hl_cond. 99 | rewrite H12 in H. 100 | 101 | (* Expand the goal to prove with a new condition. This will help with induction *) 102 | assert (wasted s1 t >= loss_thresh s2 t /\ 103 | lost s2 t = 0 -> lost s2 t = 0) as H. 104 | { intro. destruct H. apply H0. } 105 | apply H. clear H. 106 | 107 | induction t. 108 | - rewrite (zero_loss_thresh s2). rewrite (zero_lost s2). lia. 109 | - (* Split IHt *) 110 | destruct IHt as [IHt_upper IHt_lost]. 111 | (* Assert the first part of the induction separately, so we can use it later *) 112 | assert (wasted s1 (S t) >= loss_thresh s2 (S t)) as IHSt_upper. { 113 | (* Prove some monotonicity theorems for convenience *) 114 | pose (monotone_wasted s1) as Htmp. specialize (Htmp t (S t)). 115 | assert (t < S t) as HWmon. auto. apply Htmp in HWmon. clear Htmp. 116 | pose (monotone_inp s2) as Htmp. specialize (Htmp t (S t)). 117 | assert (t < S t) as HImon. auto. apply Htmp in HImon. clear Htmp. 118 | 119 | (* Create cases: either loss_thresh increased or it didn't *) 120 | remember (loss_thresh s2 (S t) - loss_thresh s2 t) as incr. 121 | destruct incr. 122 | * (* loss_thresh didn't increase *) 123 | assert (loss_thresh s2 (S t) = loss_thresh s2 t). { 124 | pose (monotone_loss_thresh s2) as Hm. specialize (Hm t (S t)). 125 | assert (t < S t) as Htmp. auto. apply Hm in Htmp. lia. 126 | } 127 | rewrite H. 128 | lia. 129 | * (* loss_thresh increased. 
So use cond_loss_thresh *) 130 | assert (loss_thresh s2 t < loss_thresh s2 (S t)) as Hincr. lia. 131 | clear Heqincr. 132 | 133 | apply (cond_loss_thresh s2) in Hincr. destruct Hincr as [Hempty Hthresh]. 134 | pose (constraint_l s1 t) as Hls1. rewrite <- H12 in Hls1. rewrite Hc12 in Hls1. 135 | 136 | lia. 137 | } 138 | 139 | split. 140 | + (* We already proves this above *) assumption. 141 | + (* Create cases: either lost increased or it didn't *) 142 | remember (lost s2 (S t)) as lost_s2_St. 143 | destruct lost_s2_St. reflexivity. 144 | exfalso. 145 | assert (lost s2 (S t) > 0) as H_gt_0. lia. 146 | clear Heqlost_s2_St. clear lost_s2_St. 147 | 148 | (* If lost increased, packets in queue must have increased the threshold 149 | (C * t - loss_thresh t) *) 150 | rewrite <- IHt_lost in H_gt_0. 151 | apply (cond_lost s2) in H_gt_0. 152 | 153 | assert (C s2 * S t - loss_thresh s2 (S t) >= inp s2 (S t) - lost s2 (S t)). { 154 | rewrite H12. 155 | pose (constraint_u s1 (S t)) as Hu. lia. 156 | } 157 | 158 | (* Exploit contradiction *) 159 | lia. 160 | Qed.*) 161 | 162 | 163 | 164 | Theorem trace_composes : 165 | forall (s1 s2 : Trace), 166 | (C s1) = (C s2) /\ 167 | (inp s2) = (out s1) -> 168 | exists (sc : Trace), 169 | (D sc) = (D s1) + (D s2) /\ 170 | (C sc) = (C s1) /\ 171 | (Buf sc) = (Buf s1) /\ 172 | (inp sc) = (inp s1) /\ 173 | (out sc) = (out s2) /\ 174 | forall t, (lost sc t) = (lost s1 t) + (lost s2 t) 175 | . 176 | Proof. 177 | intros s1 s2 [Hc H12]. 178 | 179 | (* Note: We will set (wasted sc) = (wasted s1) and (lost sc) = (lost s1) *) 180 | 181 | (* Prove constraint_u *) 182 | assert (forall t, (out s2 t) + (wasted s1 t) <= (C s1) * t) as H_eg_constraint_u. { 183 | intro t. 184 | (* Proof: (out s2 t) <= (inp s2 t) - (lost s2 t) <= (inp s2 t) = (out s1 t) *) 185 | apply Nat.le_trans with (m:=(inp s2 t) + (wasted s1 t)). 186 | apply Nat.add_le_mono_r. 187 | apply Nat.le_trans with (m:=(inp s2 t) - (lost s2 t)). apply (out_le_inp s2). 188 | apply Nat.le_sub_l. 
189 | rewrite H12. 190 | apply (constraint_u s1). 191 | } 192 | 193 | (* Apply trace_no_subsequent_loss and keep for future use *) 194 | (* assert (forall t, lost s2 t = 0) as Hloss2. { 195 | apply trace_no_subsequent_loss with (s1:=s1). repeat split; assumption. 196 | } **) 197 | 198 | (* No loss for now *) 199 | assert (forall t, lost s2 t = 0) as Hloss2. { admit. } 200 | 201 | (* Intuition: upper of s2 >= lower of s1. This is equivalent to the following *) 202 | assert (forall t, wasted s2 t <= wasted s1 (t - D s1) + C s1 * D s1) as H_s2_upper_ge_s1_lower. { 203 | intro t. 204 | 205 | (* For t < D, this is trivially true since lower = 0 *) 206 | pose (Nat.lt_trichotomy t (D s1)) as Ht_le_D_cases. 207 | destruct Ht_le_D_cases as [Ht_lt_D | Ht_ge_D]. 208 | assert (t - D s1 = 0). { lia. } 209 | rewrite H. clear H. 210 | assert (wasted s1 0 = 0). { pose (zero_wasted s1 0). lia. } 211 | rewrite H. clear H. 212 | pose (constraint_u s2 t) as H. 213 | assert (wasted s2 t <= C s2 * t). { lia. } 214 | assert (wasted s2 t <= C s2 * D s2). { lia. } 215 | 216 | induction t. 217 | - pose (zero_wasted s2 0). pose (zero_wasted s1 (0 - D s1)). lia. 218 | - pose Nat.lt_trichotomy as Hcases. specialize (Hcases (wasted s2 t) (wasted s2 (S t))). 219 | destruct Hcases as [Hgt|Heq_gt]. 220 | + (* Case: (wasted s2) increased *) 221 | apply (cond_waste s2) in Hgt. 222 | rewrite H12 in Hgt. 223 | assert ((C s1) * (S t - D s1) - (wasted s1 (S t - D s1)) <= (out s1 (S t))) as H1. { 224 | pose (constraint_l s1 (S t)). lia. 225 | } 226 | rewrite (Hloss2 (S t)) in Hgt. 227 | assert (out s1 (S t) <= C s2 * S t - wasted s2 (S t)). { lia. } 228 | assert (C s1 * (S t - D s1) - wasted s1 (S t - D s1) <= C s1 * S t - wasted s2 (S t)). { lia. } 229 | assert (C s1 * (S t - D s1) <= C s1 * S t + wasted s1 (S t - D s1) - wasted s2 (S t)). { lia. } 230 | assert (wasted s2 (S t) + C s1 * (S t - D s1) <= C s1 * S t + wasted s1 (S t - D s1)). { lia. } 231 | 232 | rewrite <- Hc in Hgt. 233 | lia. 
234 | + destruct Heq_gt as [Heq|Hlt]. 235 | * (* Case: (wasted s2) remains constant *) 236 | (* (wasted s1) is monotonic *) 237 | pose (monotone_wasted s1) as H1. unfold monotone in H1. 238 | specialize (H1 t (S t)). assert (t < S t) as Hmon. { lia. } apply H1 in Hmon. 239 | rewrite <- Heq. 240 | lia. 241 | * (* Case: (wasted s2) decreases, but this cannot happen *) 242 | exfalso. 243 | (* (wasted s2) is monotonic *) 244 | pose (monotone_wasted s2) as H1. unfold monotone in H1. 245 | specialize (H1 t (S t)). assert (t < S t) as Hmon. { lia. } apply H1 in Hmon. 246 | lia. 247 | } 248 | 249 | (* prove constraint_l *) 250 | assert (forall t, (out s2 t) + (wasted s1 t) >= (C s1) * t - (K s1 + K s2)) as H_eg_constraint_l. { 251 | intro t. specialize (H_s2_upper_ge_s1_lower t). 252 | pose (constraint_l s2) as Hl2. specialize (Hl2 t). 253 | lia. 254 | } 255 | 256 | (* cond_waste is the same as for s1. Hence no need to prove *) 257 | 258 | (* prove out_le_inp *) 259 | assert (forall t, out s2 t <= inp s1 t - (lost s1 t + lost s2 t)) as H_eg_out_le_inp. { 260 | intro t. 261 | pose (out_le_inp s1) as Hs1. specialize (Hs1 t). 262 | pose (out_le_inp s2) as Hs2. specialize (Hs2 t). 263 | rewrite H12 in Hs2. 264 | rewrite (Hloss2 t) in Hs2. rewrite Nat.sub_0_r in Hs2. 265 | rewrite (Hloss2 t). rewrite Nat.add_0_r. 266 | apply Nat.le_trans with (m:=(out s1 t)); assumption. 267 | } 268 | 269 | (* Now we are ready to construct our example *) 270 | remember (mkTrace 271 | (C s1) 272 | (K s1 + K s2) 273 | (wasted s1) 274 | (out s2) 275 | (lost s1) 276 | (inp s1) 277 | (loss_thresh s1) 278 | H_eg_constraint_u 279 | H_eg_constraint_l 280 | (cond_waste s1) 281 | H_eg_out_le_inp 282 | (monotone_wasted s1) 283 | (monotone_out s2) 284 | (monotone_inp s1) 285 | (zero_wasted s1) 286 | (zero_out s2) 287 | (zero_inp s1) 288 | ) as example. 289 | 290 | (* Show example has the required properties *) 291 | exists example. 292 | repeat split; rewrite Heqexample; reflexivity. 293 | Qed. 
294 | End CongestionControl. 295 | -------------------------------------------------------------------------------- /old/analyze_aimd.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import matplotlib.pyplot as plt 3 | import numpy as np 4 | from z3 import And, If, Or, Real 5 | 6 | from binary_search import BinarySearch 7 | from cache import run_query 8 | from questions import find_bound, find_cwnd_incr_bound,\ 9 | find_const_cwnd_util_lbound, find_periodic_low_cwnd 10 | from multi_flow import ModelConfig, make_solver, freedom_duration 11 | from my_solver import MySolver 12 | 13 | # In units of 1 BDP 14 | # buf_sizes = np.asarray( 15 | # list(np.linspace(0.1, 1.1, 5)) + list(np.linspace(1.1, 3.1, 6))) 16 | # buf_sizes = [0.1, 0.5, 1, 1.1, 1.45, 1.5, 2, 2.25, 2.5, 2.75, 3] 17 | # buf_sizes = [0.1, 0.9, 1.3, 1.6, 1.7, 2, 3] 18 | buf_sizes = [1.9] 19 | # buf_sizes = [0.1, 0.25, 0.5, 0.75, 0.9, 1, 1.1, 1.15, 1.2, 1.25, 1.5, 1.75, 20 | # 1.9, 2, 2.1, 2.25, 2.5, 2.75, 3, 3.5, 4] 21 | 22 | 23 | def loss_thresh(cfg: ModelConfig, err: float, timeout: float): 24 | global buf_sizes 25 | assert(cfg.N == 1) 26 | buf_sizes = np.asarray(buf_sizes) * (cfg.C * cfg.R) 27 | max_gap = Real("alpha") + cfg.C * (cfg.R + cfg.D) 28 | 29 | def max_cwnd_f(cfg: ModelConfig): 30 | return cfg.C*(cfg.R + cfg.D) + cfg.buf_min # + 2 * Real("alpha") 31 | 32 | def gap(t: int, cfg: ModelConfig): 33 | return Real(f"tot_lost_{t}") - Real(f"loss_detected_0,{t}") 34 | 35 | def test(cfg: ModelConfig, thresh: float): 36 | max_cwnd = max_cwnd_f(cfg) 37 | s = make_solver(cfg) 38 | s.add(Or(*[ 39 | And( 40 | Real(f"tot_lost_{t}") > Real(f"tot_lost_{t-1}"), 41 | Real(f"cwnd_0,{t}") < thresh, 42 | # Real(f"tot_lost_{t-1}") == 0 43 | ) 44 | for t in range(3, cfg.T) 45 | ])) 46 | # s.add(Real("tot_lost_3") == 0) 47 | # s.add(gap(0, cfg) == 0) 48 | 49 | s.add(gap(0, cfg) <= max_gap) 50 | s.add(Real("cwnd_0,0") < max_cwnd) 51 | 52 | s.add(Real("alpha") < 
0.1 * cfg.C * cfg.R) 53 | # s.add(Real("alpha") < cfg.buf_min / 2) 54 | return s 55 | 56 | T_orig = cfg.T 57 | cwnd_threshes = [] 58 | for buf_size in buf_sizes: 59 | cfg.buf_max = buf_size 60 | cfg.buf_min = buf_size 61 | max_cwnd = max_cwnd_f(cfg) 62 | cfg.T = T_orig 63 | 64 | if buf_size >= 2 * cfg.C * cfg.R: 65 | cfg.T = 15 66 | 67 | print(f"Testing buffer size {buf_size}") 68 | 69 | # If cwnd > max_cwnd, it will fall 70 | s = make_solver(cfg) 71 | s.add(Real("tot_lost_0") == Real(f"tot_lost_{cfg.T-1}")) 72 | s.add(Real(f"cwnd_0,{cfg.T-1}") > max_cwnd) 73 | # Eliminate timeouts where we just stop sending packets 74 | for t in range(cfg.T): 75 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_lost_{t}") 76 | > Real(f"tot_out_{t}")) 77 | s.add(Real("alpha") < 0.1 * cfg.C * cfg.R) 78 | # qres = run_query(s, cfg, timeout) 79 | # print("tested max cwnd: ", qres.satisfiable) 80 | # assert(qres.satisfiable == "unsat") 81 | 82 | # If cwnd < max_cwnd, it will stay there 83 | s = make_solver(cfg) 84 | s.add(Real("cwnd_0,0") < max_cwnd) 85 | s.add(Real(f"cwnd_0,{cfg.T-1}") > max_cwnd) 86 | s.add(Real("alpha") < 0.1 * cfg.C * cfg.R) 87 | # qres = run_query(s, cfg, timeout) 88 | # print("Tested max cwnd stay: ", qres.satisfiable) 89 | # assert(qres.satisfiable == "unsat") 90 | 91 | # If gap > max_gap, it will fall by at-least C 92 | s = make_solver(cfg) 93 | s.add(gap(cfg.T-1, cfg) > max_gap) 94 | s.add(gap(0, cfg) - cfg.C < gap(cfg.T-1, cfg)) 95 | for t in range(cfg.T): 96 | s.add(Real(f"cwnd_0,{t}") < max_cwnd) 97 | # Eliminate timeouts where we just stop sending packets 98 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_lost_{t}") 99 | > Real(f"tot_out_{t}")) 100 | # s.add(Real("alpha") < cfg.C * cfg.R * 0.1) 101 | # qres = run_query(s, cfg, timeout) 102 | # print("Tested loss detect: ", qres.satisfiable) 103 | # assert(qres.satisfiable == "unsat") 104 | 105 | s = make_solver(cfg) 106 | s.add(gap(0, cfg) < max_gap) 107 | s.add(gap(cfg.T-1, cfg) >= max_gap) 108 | s.add(Real("alpha") < 
0.1 * cfg.C * cfg.R) 109 | for t in range(cfg.T): 110 | # Eliminate timeouts where we just stop sending packets 111 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_lost_{t}") 112 | > Real(f"tot_out_{t}")) 113 | # qres = run_query(s, cfg, timeout) 114 | # print("Tested gap remains low: ", qres.satisfiable) 115 | # assert(qres.satisfiable == "unsat") 116 | 117 | if True: 118 | # cfg.T = 5 119 | if buf_size <= cfg.C * (cfg.R + cfg.D): 120 | thresh = buf_size - Real("alpha") 121 | else: 122 | thresh = buf_size + cfg.C * (cfg.R - 1) - Real("alpha") 123 | s = test(cfg, thresh) 124 | qres = run_query(s, cfg, timeout) 125 | print("Tested loss threshold: ", qres.satisfiable) 126 | assert(qres.satisfiable == "unsat") 127 | 128 | s = test(cfg, thresh + 0.1) 129 | qres = run_query(s, cfg, timeout) 130 | print("Tested loss threshold + 0.1: ", qres.satisfiable) 131 | assert(qres.satisfiable == "sat") 132 | continue 133 | 134 | cwnd_thresh = find_bound(test, cfg, 135 | BinarySearch(0, max_cwnd, err), timeout) 136 | print(cwnd_thresh) 137 | cwnd_threshes.append(cwnd_thresh) 138 | print(list(buf_sizes), cwnd_threshes) 139 | 140 | 141 | def single_flow_util( 142 | cfg: ModelConfig, err: float, timeout: float 143 | ): 144 | ''' Find a steady-state such that if it enters steady state, it will remain 145 | there ''' 146 | global buf_sizes 147 | buf_sizes = buf_sizes / (cfg.C * cfg.R) 148 | 149 | def cwnd_stay_bound(cfg: ModelConfig, thresh: float) -> MySolver: 150 | s = make_solver(cfg) 151 | conds = [] 152 | dur = freedom_duration(cfg) 153 | for n in range(cfg.N): 154 | for t in range(dur): 155 | s.add(Real(f"cwnd_{n},{t}") >= thresh) 156 | for t in range(dur, cfg.T): 157 | # We need all the last freedom_duration(cfg) timesteps to be 158 | # large so we can apply induction to extend theorem to infinity 159 | conds.append(Real(f"cwnd_{n},{t}") < thresh) 160 | s.add(Or(*conds)) 161 | assert(cfg.N == 1) 162 | s.add(Real("tot_inp_0") - Real("tot_lost_0") - (0-Real("wasted_0")) 163 | <= 
Real("cwnd_0,0") - cfg.C*cfg.R) 164 | return s 165 | 166 | # Plot as a function of buffer size (droptail) 167 | cwnd_bounds = [] 168 | util_bounds = [] 169 | T = cfg.T 170 | 171 | for buf_size in np.asarray(buf_sizes) * cfg.C * cfg.R: 172 | cfg.buf_min = buf_size 173 | cfg.buf_max = buf_size 174 | 175 | # Queue size will eventually match cwnd. We use this in 176 | # cwnd_stay_bound to make it tighter 177 | assert(cfg.N == 1) 178 | s = make_solver(cfg) 179 | s.add(Real("cwnd_0,0") < cfg.C * cfg.R + buf_size) 180 | q_bound = Real(f"cwnd_0,{T-1}") - cfg.C*cfg.R 181 | q_bound = If(q_bound >= 0, q_bound, 0) 182 | s.add(Real(f"tot_inp_{T-1}") - Real(f"tot_lost_{T-1}") 183 | - (cfg.C*(T-1) - Real(f"wasted_{T-1}")) 184 | > q_bound) 185 | # Loss did not happen recently 186 | s.add(Real(f"cwnd_0,{T-1}") > Real(f"cwnd_0,{T-2}")) 187 | qres = run_query(s, cfg, timeout=timeout) 188 | print("Queue will eventually be cwnd limited", qres.satisfiable) 189 | assert(qres.satisfiable == "unsat") 190 | 191 | incr_bound = find_cwnd_incr_bound(cfg, None, err, timeout) 192 | print(f"For buffer size {buf_size} incr_bound={incr_bound}") 193 | 194 | max_cwnd = cfg.C * cfg.R + cfg.buf_max 195 | 196 | # Same as find_cwnd_stay_bound 197 | print(f"Testing init cwnd for stay = {incr_bound[0]} BDP") 198 | s = cwnd_stay_bound(cfg, incr_bound[0]) 199 | qres = run_query(s, cfg, timeout=timeout) 200 | print(qres.satisfiable) 201 | 202 | if qres.satisfiable != "unsat": 203 | print("Could not prove bound. 
Will search for a new stay bound") 204 |             stay_bound = find_bound(cwnd_stay_bound, cfg, 205 |                                     BinarySearch(0, max_cwnd, err), timeout) 206 |             if stay_bound[0] > incr_bound[0]: 207 |                 print("Failed to prove any bounds: ", stay_bound) 208 |                 return 209 |             bound = stay_bound[0] 210 |         else: 211 |             # This is the definitive bound 212 |             bound = incr_bound[0] 213 | 214 |         # This is a definitive bound 215 |         cwnd_thresh = bound 216 |         #util_bound = find_const_cwnd_util_lbound( 217 |         #    cfg, cwnd_thresh, err, timeout) 218 |         util_bound = [0] 219 |         print(f"For buffer size {buf_size} util_bound={util_bound}") 220 | 221 |         cwnd_bounds.append(cwnd_thresh) 222 |         util_bounds.append(util_bound[0] * 100) 223 | 224 |         print([x for x in zip(buf_sizes, cwnd_bounds, util_bounds)]) 225 | 226 |     fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True) 227 |     ax1.plot(buf_sizes, cwnd_bounds) 228 |     ax2.plot(buf_sizes, util_bounds) 229 |     ax1.set_ylabel("Steady state cwnd (in BDP)") 230 |     ax2.set_ylabel("Minimum utilization %") 231 |     ax2.set_xlabel("Buffer size (in BDP)") 232 |     plt.savefig("/tmp/single_flow_util.svg") 233 |     plt.show() 234 |     print(buf_sizes, cwnd_bounds, util_bounds) 235 | 236 | 237 | def plot_periodic_low_util( 238 |         cfg: ModelConfig, err: float, timeout: float 239 | ): 240 |     global buf_sizes 241 |     assert(cfg.N == 1) 242 |     buf_sizes = np.asarray(buf_sizes) * cfg.C * cfg.R 243 | 244 |     def model_cons(cfg: ModelConfig, thresh: float): 245 |         dur = cfg.R 246 |         s = make_solver(cfg) 247 | 248 |         for t in range(dur): 249 |             t0 = t+cfg.T-dur 250 |             s.add(Real(f"losts_0,{t}") - Real(f"loss_detected_0,{t}") 251 |                   == Real(f"losts_0,{t0}") - Real(f"loss_detected_0,{t0}")) 252 |             s.add(Real(f"inp_0,{t}") - Real(f"losts_0,{t}") - Real(f"out_0,{t}") 253 |                   == Real(f"inp_0,{t0}") - Real(f"losts_0,{t0}") - Real(f"out_0,{t0}")) 254 |             s.add(cfg.C*t - Real(f"wasted_{t}") - Real(f"out_0,{t}") 255 |                   == cfg.C*t0 - Real(f"wasted_{t0}") - Real(f"out_0,{t0}")) 256 |             s.add(Real(f"losts_0,{t}") - Real(f"loss_detected_0,{t}") 
== Real(f"losts_0,{t0}") - Real(f"loss_detected_0,{t0}")) 258 | s.add(Real(f"tot_out_{cfg.T-1}") - Real("tot_out_0") 259 | < thresh * cfg.C * (cfg.T - 1)) 260 | s.add(Real("cwnd_0,0") == Real(f"cwnd_0,{cfg.T-1}")) 261 | 262 | # Eliminate timeouts where we just stop sending packets 263 | for t in range(cfg.T): 264 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_lost_{t}") 265 | > Real(f"tot_out_{t}")) 266 | 267 | return s 268 | 269 | cwnd_bounds = [] 270 | for buf_size in buf_sizes: 271 | cfg.buf_min = buf_size 272 | cfg.buf_max = buf_size 273 | 274 | bound = find_bound(model_cons, cfg, BinarySearch(0, 1, err), timeout) 275 | print(f"Util bound {bound}") 276 | cwnd_bounds.append(bound) 277 | 278 | print([x for x in zip(buf_sizes, cwnd_bounds)]) 279 | fig, ax = plt.subplots(1, 1) 280 | ax.plot(buf_sizes, cwnd_bounds) 281 | ax.set_ylabel("Steady state cwnd (in BDP)") 282 | ax.set_xlabel("Buffer size (in BDP)") 283 | plt.show() 284 | 285 | 286 | if __name__ == "__main__": 287 | cfg_args = ModelConfig.get_argparse() 288 | common_args = argparse.ArgumentParser(add_help=False) 289 | common_args.add_argument("--err", type=float, default=0.05) 290 | common_args.add_argument("--timeout", type=float, default=10) 291 | 292 | parser = argparse.ArgumentParser() 293 | subparsers = parser.add_subparsers(title="subcommand", dest="subcommand") 294 | 295 | tpt_bound_args = subparsers.add_parser( 296 | "loss_thresh", parents=[cfg_args, common_args]) 297 | 298 | tpt_bound_args = subparsers.add_parser( 299 | "single_flow_util", parents=[cfg_args, common_args]) 300 | 301 | tpt_bound_args = subparsers.add_parser( 302 | "plot_periodic_low_util", parents=[cfg_args, common_args]) 303 | 304 | args = parser.parse_args() 305 | cfg = ModelConfig.from_argparse(args) 306 | 307 | if args.subcommand == "loss_thresh": 308 | loss_thresh(cfg, args.err, args.timeout) 309 | elif args.subcommand == "single_flow_util": 310 | single_flow_util(cfg, args.err, args.timeout) 311 | elif args.subcommand == 
"plot_periodic_low_util": 312 | plot_periodic_low_util(cfg, args.err, args.timeout) 313 | else: 314 | print(f"Unrecognized command '{args.subcommand}'") 315 | -------------------------------------------------------------------------------- /old/analyze_copa.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from typing import Optional 3 | from z3 import If, Or, Real 4 | 5 | from binary_search import BinarySearch 6 | from cache import run_query 7 | from questions import find_bound 8 | from multi_flow import ModelConfig, freedom_duration, make_solver 9 | 10 | 11 | def copa_steady_state( 12 | cfg: ModelConfig, err: float, timeout: float 13 | ): 14 | alpha_thresh = 0.1 * cfg.C * cfg.R 15 | q_thresh = 4 * cfg.C * cfg.R + 2 * Real("alpha") 16 | cwnd_thresh = cfg.C * cfg.R - Real("alpha") 17 | cwnd_thresh_u = 4 * cfg.C * cfg.R + 2 * Real("alpha") 18 | T = cfg.T 19 | 20 | dur = freedom_duration(cfg) 21 | 22 | # If queue < q_thresh and cwnd < cwnd_thresh, cwnd increases by at-least 23 | # alpha / 2 24 | s = make_solver(cfg) 25 | conds = [] 26 | for n in range(cfg.N): 27 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh) 28 | conds.append(Real(f"cwnd_{n},{T-1}") 29 | < Real(f"cwnd_{n},{dur-1}") + Real("alpha")) 30 | s.add(Or(*conds)) 31 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") <= q_thresh) 32 | s.add(Real("alpha") < alpha_thresh) 33 | qres = run_query(s, cfg, timeout=timeout) 34 | print("Cwnd increases:", qres.satisfiable) 35 | 36 | # If queue < q_thresh and cwnd < cwnd_thresh, queue never exceeds q_thresh 37 | s = make_solver(cfg) 38 | for n in range(cfg.N): 39 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh) 40 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") <= q_thresh) 41 | s.add(Real(f"tot_inp_{T-1}") - Real(f"tot_out_{T-1}") > q_thresh) 42 | s.add(Real("alpha") < alpha_thresh) 43 | qres = run_query(s, cfg, timeout=timeout) 44 | print("Queue remains small: ", qres.satisfiable) 45 | 
46 | # If Copa makes it to the steady state, it stays there 47 | s = make_solver(cfg) 48 | conds = [] 49 | for n in range(cfg.N): 50 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh_u) 51 | s.add(Real(f"cwnd_{n},{dur-1}") >= cwnd_thresh) 52 | conds.append(Real(f"cwnd_{n},{T-1}") < cwnd_thresh) 53 | conds.append(Real(f"cwnd_{n},{T-1}") > cwnd_thresh_u) 54 | conds.append( 55 | Real(f"tot_inp_{T-1}") - Real(f"tot_out_{T-1}") > q_thresh) 56 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") <= q_thresh) 57 | s.add(Real("alpha") < alpha_thresh) 58 | s.add(Or(*conds)) 59 | qres = run_query(s, cfg, timeout=timeout) 60 | print("Stays there: ", qres.satisfiable) 61 | 62 | # If queue > q_thresh and cwnd <= cwnd_thresh_u, queue will fall by 63 | # at least alpha and cwnd never exceeds cwnd_thresh_u 64 | s = make_solver(cfg) 65 | conds = [] 66 | for n in range(cfg.N): 67 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh_u) 68 | conds.append(Real(f"cwnd_{n},{T-1}") > cwnd_thresh_u) 69 | conds.append(Real(f"tot_inp_{T-1}") - Real(f"tot_out_{T-1}") 70 | >= Real("tot_inp_0") - Real("alpha")) 71 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") > q_thresh) 72 | s.add(Or(*conds)) 73 | s.add(Real("alpha") < alpha_thresh) 74 | qres = run_query(s, cfg, timeout=timeout) 75 | print("Queue always falls", qres.satisfiable) 76 | 77 | # If cwnd > cwnd_thresh_u, cwnd will fall by at-least alpha 78 | s = make_solver(cfg) 79 | conds = [] 80 | for n in range(cfg.N): 81 | s.add(Real(f"cwnd_{n},{dur-1}") > cwnd_thresh_u) 82 | conds.append(Real(f"cwnd_{n},{T-1}") 83 | >= Real(f"cwnd_{n},{dur-1}") - Real("alpha")) 84 | s.add(Or(*conds)) 85 | s.add(Real("alpha") < alpha_thresh) 86 | qres = run_query(s, cfg, timeout=timeout) 87 | print("Cwnd always falls", qres.satisfiable) 88 | 89 | 90 | def copa_performance(cfg: ModelConfig, err: float, timeout: float): 91 | ''' Given steady state, determine Copa's performance ''' 92 | alpha_thresh = cfg.C * cfg.R 93 | q_thresh = 4 * cfg.C * cfg.R 
+ 2 * Real("alpha") 94 | cwnd_thresh = cfg.C * cfg.R - Real("alpha") 95 | cwnd_thresh_u = 4 * cfg.C * cfg.R + 2 * Real("alpha") 96 | 97 | dur = freedom_duration(cfg) 98 | 99 | def util(cfg: ModelConfig, thresh: float): 100 | s = make_solver(cfg) 101 | for n in range(cfg.N): 102 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh_u) 103 | s.add(Real(f"cwnd_{n},{dur-1}") >= cwnd_thresh) 104 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") <= q_thresh) 105 | s.add(Real("alpha") < alpha_thresh) 106 | 107 | s.add(Real(f"tot_out_{cfg.T-1}") - Real("tot_out_0") 108 | < thresh * cfg.C * (cfg.T - 1)) 109 | return s 110 | 111 | def min_q_len(cfg: ModelConfig, thresh: float): 112 | s = make_solver(cfg) 113 | for n in range(cfg.N): 114 | s.add(Real(f"cwnd_{n},{dur-1}") <= cwnd_thresh_u) 115 | s.add(Real(f"cwnd_{n},{dur-1}") >= cwnd_thresh) 116 | s.add(Real(f"tot_inp_{dur-1}") - Real(f"tot_out_{dur-1}") <= q_thresh) 117 | s.add(Real("alpha") < alpha_thresh) 118 | s.add(Real("alpha") > 0.1) 119 | 120 | for t in range(dur, cfg.T): 121 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_out_{t}") >= thresh + 2 * Real("alpha")) 122 | # s.add(Real(f"tot_inp_{t}") - (cfg.C * t - Real(f"wasted_{t}")) >= thresh + 2 * Real("alpha")) 123 | return s 124 | 125 | util_bound = find_bound(util, cfg, BinarySearch(0, 1, err), timeout) 126 | print(f"Utilization bounds: {util_bound}") 127 | 128 | min_q_bound = find_bound(min_q_len, cfg, 129 | BinarySearch(0, 5*cfg.C*cfg.R, err), 130 | timeout, reverse=True) 131 | print(f"Utilization bounds: {min_q_bound}") 132 | 133 | 134 | def copa_fairness( 135 | cfg: ModelConfig, err: float, timeout: float 136 | ): 137 | def abs(expr): 138 | return If(expr >= 0, expr, -expr) 139 | 140 | cfg.N = 2 141 | dur = freedom_duration(cfg) 142 | s = make_solver(cfg) 143 | # s.add(Real(f"losts_0,{cfg.T-1}") > 0) 144 | # s.add(Real(f"losts_1,{cfg.T-1}") > 0) 145 | # s.add(Real(f"tot_lost_{cfg.T-1}") > 0) 146 | 147 | s.add(abs(Real(f"cwnd_0,{dur-1}") - 
Real(f"cwnd_1,{dur-1}")) 148 | < abs(Real(f"cwnd_0,{cfg.T-1}") - Real(f"cwnd_1,{cfg.T-1}"))) 149 | # s.add(abs(Real(f"out_0,{dur-1}") - Real(f"out_1,{dur-1}")) 150 | # < abs(Real(f"out_0,{cfg.T-1}") - Real(f"out_1,{cfg.T-1}"))) 151 | # s.add(Real(f"tot_inp_0") == 0) 152 | qres = run_query(s, cfg, timeout=timeout) 153 | print("Unfairness never increases:", qres.satisfiable) 154 | 155 | 156 | if __name__ == "__main__": 157 | cfg_args = ModelConfig.get_argparse() 158 | common_args = argparse.ArgumentParser(add_help=False) 159 | common_args.add_argument("--err", type=float, default=0.05) 160 | common_args.add_argument("--timeout", type=float, default=10) 161 | 162 | parser = argparse.ArgumentParser() 163 | subparsers = parser.add_subparsers(title="subcommand", dest="subcommand") 164 | 165 | subparsers.add_parser("steady_state", parents=[cfg_args, common_args]) 166 | subparsers.add_parser("performance", parents=[cfg_args, common_args]) 167 | subparsers.add_parser("fairness", parents=[cfg_args, common_args]) 168 | 169 | args = parser.parse_args() 170 | cfg = ModelConfig.from_argparse(args) 171 | 172 | if args.subcommand == "steady_state": 173 | copa_steady_state(cfg, args.err, args.timeout) 174 | elif args.subcommand == "performance": 175 | copa_performance(cfg, args.err, args.timeout) 176 | elif args.subcommand == "fairness": 177 | copa_fairness(cfg, args.err, args.timeout) 178 | else: 179 | assert(False) 180 | -------------------------------------------------------------------------------- /old/analyze_fixed_d.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from z3 import Real, And 3 | 4 | from cache import run_query 5 | from multi_flow import ModelConfig, freedom_duration, make_solver 6 | 7 | 8 | def fixed_d_util(cfg: ModelConfig, timeout: float): 9 | dur = freedom_duration(cfg) 10 | 11 | cwnd_thresh = 0.1 # cfg.C * (cfg.R + cfg.D) 12 | mult_incr = (cfg.R + cfg.D) / cfg.R 13 | 14 | cfg.T = 2 * dur - 1 15 | 
16 | # How much does it increase 17 | s = make_solver(cfg) 18 | for t in range(dur): 19 | s.add(Real(f"cwnd_0,{t}") <= cwnd_thresh) 20 | s.add(And( 21 | Real(f"cwnd_0,{cfg.T-1}") < cwnd_thresh, 22 | Real(f"cwnd_0,{dur-1}") * mult_incr > Real(f"cwnd_0,{cfg.T-1}"))) 23 | qres = run_query(s, cfg, timeout=timeout) 24 | print("Does it increase reliably?", qres.satisfiable) 25 | 26 | s = make_solver(cfg) 27 | for t in range(dur): 28 | s.add(Real(f"cwnd_0,{t}") > cwnd_thresh) 29 | s.add(Real(f"wasted_{cfg.T-1}") > Real(f"wasted_{cfg.T-2}")) 30 | qres = run_query(s, cfg, timeout=timeout) 31 | print("Once we cross the threshold, do we ever waste?", qres.satisfiable) 32 | 33 | 34 | if __name__ == "__main__": 35 | cfg_args = ModelConfig.get_argparse() 36 | common_args = argparse.ArgumentParser(add_help=False) 37 | # common_args.add_argument("--err", type=float, default=0.05) 38 | common_args.add_argument("--timeout", type=float, default=10) 39 | 40 | parser = argparse.ArgumentParser() 41 | subparsers = parser.add_subparsers(title="subcommand", dest="subcommand") 42 | 43 | tpt_bound_args = subparsers.add_parser( 44 | "util", parents=[cfg_args, common_args]) 45 | 46 | args = parser.parse_args() 47 | cfg = ModelConfig.from_argparse(args) 48 | 49 | if args.cca != "fixed_d": 50 | print("Warning: this analysis really only applies to fixed_d") 51 | 52 | if args.subcommand == "util": 53 | fixed_d_util(cfg, args.timeout) 54 | -------------------------------------------------------------------------------- /old/func_repr.py: -------------------------------------------------------------------------------- 1 | from z3 import ForAll, Implies, Int, IntSort 2 | from my_solver import MySolver 3 | 4 | if __name__ == "__main__": 5 | C = 1 6 | D = 2 7 | 8 | s = MySolver() 9 | 10 | A = s.Function("A", IntSort(), IntSort()) 11 | S = s.Function("S", IntSort(), IntSort()) 12 | W = s.Function("W", IntSort(), IntSort()) 13 | 14 | x, y = Int("x"), Int("y") 15 | 16 | s.add(ForAll([x, y], Implies(x <= 
y, A(x) <= A(y)))) 17 |     s.add(ForAll([x, y], Implies(x <= y, S(x) <= S(y)))) 18 |     s.add(ForAll([x, y], Implies(x <= y, W(x) <= W(y)))) 19 | 20 |     s.add(ForAll([x], S(x) <= A(x))) 21 |     s.add(ForAll([x], S(x) <= C * x - W(x))) 22 |     s.add(ForAll([x], S(x + D) >= C * x - W(x))) 23 |     # s.add(ForAll([x], Implies(W(x) > W(x-1), A(x) <= C * x - W(x)))) 24 | 25 |     print(s.to_smt2()) 26 |     sat = s.check() 27 |     print(sat) 28 |     if sat == "sat": 29 |         print(s.model()) 30 | -------------------------------------------------------------------------------- /old/multi_flow.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | from fractions import Fraction 3 | import matplotlib 4 | import matplotlib.pyplot as plt 5 | import numpy as np 6 | from typing import Dict, List, Optional, Tuple, Union 7 | from z3 import Bool, Real, Int, Sum, Implies, Not, And, Or, If 8 | import z3 9 | 10 | from my_solver import MySolver 11 | 12 | 13 | z3.set_param('parallel.enable', True) 14 | z3.set_param('parallel.threads.max', 8) 15 | 16 | 17 | def model_to_dict(model: z3.ModelRef) -> Dict[str, Union[Fraction, bool]]: 18 |     ''' Utility function that takes a z3 model and extracts its variables to a 19 |     dict''' 20 |     decls = model.decls() 21 |     res: Dict[str, Union[Fraction, bool]] = {} 22 |     for d in decls: 23 |         val = model[d] 24 |         if type(val) == z3.BoolRef: 25 |             res[d.name()] = bool(val) 26 |         elif type(val) == z3.IntNumRef: 27 |             res[d.name()] = Fraction(val.as_long()) 28 |         else: 29 |             # Assume it is numeric 30 |             res[d.name()] = val.as_fraction() 31 |     return res 32 | 33 | 34 | class Link: 35 |     def __init__( 36 |         self, 37 |         inps: List[List[Real]], 38 |         rates: List[List[Real]], 39 |         s: MySolver, 40 |         C: float, 41 |         D: int, 42 |         buf_min: Optional[float], 43 |         buf_max: Optional[float], 44 |         compose: bool = True, 45 |         name: str = '' 46 |     ): 47 |         ''' Creates a link given `inps` and returns (`out`, `lost`). `inps` is 48 |         a list of `inp`, one for each sender. 
If buf_min is None, there won't 49 |         be any losses. If compose is False, a weaker model that doesn't compose 50 |         is used ''' 51 | 52 |         assert(len(inps) > 0) 53 |         N = len(inps) 54 |         T = len(inps[0]) 55 | 56 |         tot_inp = [s.Real('tot_inp%s_%d' % (name, t)) for t in range(T)] 57 |         outs = [[s.Real('out%s_%d,%d' % (name, n, t)) for t in range(T)] 58 |                 for n in range(N)] 59 |         tot_out = [s.Real('tot_out%s_%d' % (name, t)) for t in range(T)] 60 |         losts = [[s.Real('losts%s_%d,%d' % (name, n, t)) for t in range(T)] 61 |                  for n in range(N)] 62 |         tot_lost = [s.Real('tot_lost%s_%d' % (name, t)) for t in range(T)] 63 |         wasted = [s.Real('wasted%s_%d' % (name, t)) for t in range(T)] 64 | 65 |         if not compose: 66 |             epsilon = s.Real('epsilon%s' % name) 67 |             s.add(epsilon >= 0) 68 | 69 |         max_dt = T 70 |         # If qdel[t][dt] is true, it means that the bytes exiting at t were 71 |         # input at time t - dt. If out[t] == out[t-1], then qdel[t][dt] == 72 |         # false forall dt. Else, qdel[t][dt] == true for exactly one dt 73 |         # If qdel[t][dt] == true then inp[t - dt - 1] < out[t] <= inp[t - dt] 74 |         qdel = [[s.Bool('qdel%s_%d,%d' % (name, t, dt)) for dt in range(max_dt)] 75 |                 for t in range(T)] 76 | 77 |         for t in range(0, T): 78 |             # Things are monotonic 79 |             if t > 0: 80 |                 s.add(tot_out[t] >= tot_out[t-1]) 81 |                 s.add(wasted[t] >= wasted[t-1]) 82 |                 # Derived from the fact that the black lines are monotonic 83 |                 s.add(wasted[t] <= wasted[t-1] + C) 84 |                 s.add(tot_lost[t] >= tot_lost[t-1]) 85 |                 s.add(tot_inp[t] - tot_lost[t] >= tot_inp[t-1] - tot_lost[t-1]) 86 |                 for n in range(N): 87 |                     s.add(outs[n][t] >= outs[n][t-1]) 88 |                     s.add(losts[n][t] >= losts[n][t-1]) 89 | 90 |             # Add up the totals 91 |             s.add(tot_inp[t] == Sum(*[x[t] for x in inps])) 92 |             s.add(tot_out[t] == Sum(*[x[t] for x in outs])) 93 |             s.add(tot_lost[t] == Sum(*[x[t] for x in losts])) 94 | 95 |             # Constrain out using tot_inp and the black lines 96 |             s.add(tot_out[t] <= tot_inp[t] - tot_lost[t]) 97 |             s.add(tot_out[t] <= C * t - wasted[t]) 98 |             if t >= 
D: 99 | s.add(tot_out[t] >= C * (t - D) - wasted[t - D]) 100 | else: 101 | # We do not know what wasted was at t < 0, but it couldn't have 102 | # been bigger than wasted[0], so this bound is valid (and 103 | # incidentally, tight) 104 | s.add(tot_out[t] >= C * (t - D) - wasted[0]) 105 | 106 | # Condition when wasted is allowed to increase 107 | if t > 0: 108 | if compose: 109 | s.add(Implies( 110 | wasted[t] > wasted[t-1], 111 | C * t - wasted[t] >= tot_inp[t] - tot_lost[t] 112 | )) 113 | else: 114 | s.add(Implies( 115 | wasted[t] > wasted[t-1], 116 | tot_inp[t] - tot_lost[t] <= tot_out[t] + epsilon 117 | )) 118 | 119 | if buf_min is not None: 120 | # When can loss happen? 121 | if t > 0: 122 | tot_rate = sum([rates[n][t-1] for n in range(N)]) 123 | if True: 124 | s.add(Implies( 125 | tot_lost[t] > tot_lost[t-1], 126 | And(tot_inp[t] - tot_lost[t] > C*(t-1) - wasted[t-1] + buf_min, 127 | tot_rate > C, 128 | C*(t-1) - wasted[t-1] + buf_min - (tot_inp[t-1] - tot_lost[t-1]) < (tot_rate - C)) 129 | )) 130 | else: 131 | s.add(Implies( 132 | tot_lost[t] > tot_lost[t-1], 133 | tot_inp[t] - tot_lost[t] >= C*t - wasted[t] + buf_min)) 134 | else: 135 | # Note: Initial loss is unconstrained 136 | pass 137 | else: 138 | s.add(tot_lost[t] == 0) 139 | 140 | # Enforce buf_max if given 141 | if buf_max is not None: 142 | s.add(tot_inp[t] - tot_lost[t] <= C*t - wasted[t] + buf_max) 143 | 144 | # Figure out the time when the bytes being output at time t were 145 | # first input 146 | for dt in range(max_dt): 147 | if t - dt - 1 < 0: 148 | s.add(qdel[t][dt] == False) 149 | continue 150 | 151 | s.add(qdel[t][dt] == Or( 152 | And( 153 | tot_out[t] != tot_out[t-1], 154 | And(tot_inp[t - dt - 1] - tot_lost[t - dt - 1] 155 | < tot_out[t], 156 | tot_inp[t - dt] - tot_lost[t - dt] >= tot_out[t]) 157 | ), 158 | And( 159 | tot_out[t] == tot_out[t-1], 160 | qdel[t-1][dt] 161 | ))) 162 | 163 | # Figure out how many packets were output from each flow. 
Ensure 164 |             # that out[n][t] > inp[n][t-dt-1], but leave the rest free for the 165 |             # adversary to choose 166 |             for n in range(N): 167 |                 for dt in range(max_dt): 168 |                     if t - dt - 1 < 0: 169 |                         continue 170 |                     s.add(Implies(qdel[t][dt], outs[n][t] > inps[n][t-dt-1])) 171 | 172 |         # Initial conditions 173 |         if buf_max is not None: 174 |             s.add(tot_inp[0] - tot_lost[0] <= -wasted[0] + buf_max) 175 |         # if buf_min is not None: 176 |         #     s.add(tot_inp[0] - tot_lost[0] >= - wasted[0] + buf_min) 177 |         s.add(tot_out[0] == 0) 178 |         for n in range(N): 179 |             s.add(outs[n][0] == 0) 180 | 181 |         self.tot_inp = tot_inp 182 |         self.inps = inps 183 |         self.outs = outs 184 |         self.tot_out = tot_out 185 |         self.losts = losts 186 |         self.tot_lost = tot_lost 187 |         self.wasted = wasted 188 |         self.max_dt = max_dt 189 |         self.qdel = qdel 190 | 191 | 192 | class ModelConfig: 193 |     # Number of flows 194 |     N: int 195 |     # Jitter parameter (in timesteps) 196 |     D: int 197 |     # RTT (in timesteps) 198 |     R: int 199 |     # Number of timesteps 200 |     T: int 201 |     # Link rate 202 |     C: float 203 |     # Packets cannot be dropped below this threshold 204 |     buf_min: Optional[float] 205 |     # Packets have to be dropped above this threshold 206 |     buf_max: Optional[float] 207 |     # Number of dupacks before sender declares loss 208 |     dupacks: Optional[float] 209 |     # Congestion control algorithm 210 |     cca: str 211 |     # If false, we'll use a model that is more restrictive but does not compose 212 |     compose: bool 213 |     # Additive increase parameter used by various CCAs 214 |     alpha: Union[float, z3.ArithRef] = 1.0 215 |     # Whether or not to use pacing in various CCAs 216 |     pacing: bool 217 |     # If compose is false, wastage can only happen if queue length < epsilon 218 |     epsilon: str 219 |     # Whether we should track unsat core when solving. 
Disables caching 220 | unsat_core: bool 221 | 222 | def __init__( 223 | self, 224 | N: int, 225 | D: int, 226 | R: int, 227 | T: int, 228 | C: float, 229 | buf_min: Optional[float], 230 | buf_max: Optional[float], 231 | dupacks: Optional[float], 232 | cca: str, 233 | compose: bool, 234 | alpha: Optional[float], 235 | pacing: bool, 236 | epsilon: str, 237 | unsat_core: bool 238 | ): 239 | self.__dict__ = locals() 240 | 241 | @staticmethod 242 | def get_argparse() -> argparse.ArgumentParser: 243 | parser = argparse.ArgumentParser(add_help=False) 244 | parser.add_argument("-N", "--num-flows", type=int, default=1) 245 | parser.add_argument("-D", type=int, default=1) 246 | parser.add_argument("-R", "--rtt", type=int, default=1) 247 | parser.add_argument("-T", "--time", type=int, default=10) 248 | parser.add_argument("-C", "--rate", type=float, default=1) 249 | parser.add_argument("--buf-min", type=float, default=None) 250 | parser.add_argument("--buf-max", type=float, default=None) 251 | parser.add_argument("--dupacks", type=float, default=None) 252 | parser.add_argument("--cca", type=str, default="const", 253 | choices=["const", "aimd", "copa", 254 | "copa_multiflow", "bbr", "fixed_d"]) 255 | parser.add_argument("--no-compose", action="store_true") 256 | parser.add_argument("--alpha", type=float, default=None) 257 | parser.add_argument("--pacing", action="store_const", const=True, 258 | default=False) 259 | parser.add_argument("--epsilon", type=str, default="zero", 260 | choices=["zero", "lt_alpha", "lt_half_alpha", 261 | "gt_alpha"]) 262 | parser.add_argument("--unsat-core", action="store_const", const=True, 263 | default=False) 264 | 265 | return parser 266 | 267 | @classmethod 268 | def from_argparse(cls, args: argparse.Namespace): 269 | return cls( 270 | args.num_flows, 271 | args.D, 272 | args.rtt, 273 | args.time, 274 | args.rate, 275 | args.buf_min, 276 | args.buf_max, 277 | args.dupacks, 278 | args.cca, 279 | not args.no_compose, 280 | args.alpha, 281 | 
args.pacing, 282 | args.epsilon, 283 | args.unsat_core) 284 | 285 | 286 | def make_solver(cfg: ModelConfig) -> MySolver: 287 | # Configuration 288 | N = cfg.N 289 | C = cfg.C 290 | D = cfg.D 291 | R = cfg.R 292 | T = cfg.T 293 | buf_min = cfg.buf_min 294 | buf_max = cfg.buf_max 295 | dupacks = cfg.dupacks 296 | cca = cfg.cca 297 | compose = cfg.compose 298 | alpha = cfg.alpha 299 | pacing = cfg.pacing 300 | s = MySolver() 301 | if cfg.unsat_core: 302 | s.set(unsat_core=True) 303 | 304 | inps = [[s.Real('inp_%d,%d' % (n, t)) for t in range(T)] 305 | for n in range(N)] 306 | cwnds = [[s.Real('cwnd_%d,%d' % (n, t)) for t in range(T)] 307 | for n in range(N)] 308 | rates = [[s.Real('rate_%d,%d' % (n, t)) for t in range(T)] 309 | for n in range(N)] 310 | # Number of bytes that have been detected as lost so far (per flow) 311 | loss_detected = [[s.Real('loss_detected_%d,%d' % (n, t)) for t in range(T)] 312 | for n in range(N)] 313 | 314 | lnk = Link(inps, rates, s, C, D, buf_min, buf_max, compose=compose, name='') 315 | 316 | if alpha is None: 317 | alpha = s.Real('alpha') 318 | s.add(alpha > 0) 319 | if dupacks is None: 320 | dupacks = s.Real('dupacks') 321 | s.add(dupacks >= 0) 322 | s.add(dupacks == 3 * alpha) 323 | 324 | if not cfg.compose: 325 | if cfg.epsilon == "zero": 326 | s.add(Real("epsilon") == 0) 327 | elif cfg.epsilon == "lt_alpha": 328 | s.add(Real("epsilon") < alpha) 329 | elif cfg.epsilon == "lt_half_alpha": 330 | s.add(Real("epsilon") < alpha * 0.5) 331 | elif cfg.epsilon == "gt_alpha": 332 | s.add(Real("epsilon") > alpha) 333 | else: 334 | assert(False) 335 | 336 | # Figure out when we can detect losses 337 | max_loss_dt = T 338 | for n in range(N): 339 | for t in range(T): 340 | for dt in range(max_loss_dt): 341 | if t > 0: 342 | s.add(loss_detected[n][t] >= loss_detected[n][t-1]) 343 | if t - R - dt < 0: 344 | continue 345 | detectable = lnk.inps[n][t-R-dt] - lnk.losts[n][t-R-dt] + dupacks <= lnk.outs[n][t-R] 346 | s.add(Implies( 347 | detectable, 
348 | loss_detected[n][t] >= lnk.losts[n][t - R - dt] 349 | )) 350 | s.add(Implies( 351 | Not(detectable), 352 | loss_detected[n][t] <= lnk.losts[n][t - R - dt] 353 | )) 354 | s.add(loss_detected[n][t] <= lnk.losts[n][t - R]) 355 | for t in range(R): 356 | s.add(loss_detected[n][t] == 0) 357 | 358 | # Set inps based on cwnds and rates 359 | for t in range(T): 360 | for n in range(N): 361 | # Max value due to cwnd 362 | if t >= R: 363 | inp_w = lnk.outs[n][t - R] + loss_detected[n][t] + cwnds[n][t] 364 | else: 365 | inp_w = cwnds[n][t] 366 | if t > 0: 367 | inp_w = If(inp_w < inps[n][t-1], inps[n][t-1], inp_w) 368 | 369 | # Max value due to rate 370 | if t > 0: 371 | inp_r = inps[n][t-1] + rates[n][t] 372 | 373 | # Max of the two values 374 | s.add(inps[n][t] == If(inp_w < inp_r, inp_w, inp_r)) 375 | else: 376 | # Unconstrained 377 | pass 378 | 379 | # Congestion control 380 | if cca == "const": 381 | assert(freedom_duration(cfg) == 0) 382 | for n in range(N): 383 | for t in range(T): 384 | s.add(cwnds[n][t] == alpha) 385 | s.add(rates[n][t] == C * 10) 386 | elif cca == "aimd": 387 | # The last send sequence number at which a loss was detected 388 | last_loss = [[s.Real('last_loss_%d,%d' % (n, t)) for t in range(T)] 389 | for n in range(N)] 390 | next_incr = [[s.Real('next_incr_%d,%d' % (n, t)) for t in range(T)] 391 | for n in range(N)] 392 | for n in range(N): 393 | assert(freedom_duration(cfg) == 1) 394 | s.add(cwnds[n][0] > 0) 395 | s.add(last_loss[n][0] == 0) 396 | s.add(next_incr[n][0] == cwnds[n][0]) 397 | for t in range(T): 398 | if pacing: 399 | s.add(rates[n][t] == 2 * cwnds[n][t] / R) 400 | else: 401 | s.add(rates[n][t] == C * 100) 402 | if t > 0: 403 | # We compare last_loss to outs[t-1-R] (and not outs[t-R]) 404 | # because otherwise it is possible to react to the same loss 405 | # twice 406 | if t > R+1: 407 | decrease = And( 408 | loss_detected[n][t] > loss_detected[n][t-1], 409 | last_loss[n][t-1] <= lnk.outs[n][t-1-R] 410 | ) 411 | else: 412 | 
decrease = loss_detected[n][t] > loss_detected[n][t-1] 413 | 414 |                     # Whether 1 RTT has passed and we can increase 415 |                     can_incr = And(next_incr[n][t-1] <= lnk.outs[n][t], 416 |                                    last_loss[n][t-1] <= lnk.outs[n][t-1-R]) 417 | 418 |                     s.add(Implies(decrease, 419 |                                   last_loss[n][t] == lnk.inps[n][t] + dupacks)) 420 |                     s.add(Implies(Not(decrease), 421 |                                   last_loss[n][t] == last_loss[n][t-1])) 422 | 423 |                     s.add(Implies(decrease, cwnds[n][t] == cwnds[n][t-1] / 2)) 424 |                     s.add(Implies(And(Not(decrease), can_incr), 425 |                                   And(cwnds[n][t] == cwnds[n][t-1] + alpha, 426 |                                       next_incr[n][t] == next_incr[n][t-1] + cwnds[n][t]))) 427 |                     s.add(Implies(And(Not(decrease), Not(can_incr)), 428 |                                   And(cwnds[n][t] == cwnds[n][t-1], 429 |                                       next_incr[n][t] == next_incr[n][t-1]))) 430 |                     # s.add(Implies(decrease, 431 |                     #               next_incr[n][t] == next_incr[n][t-1] + cwnds[n][t])) 432 |     elif cca == "fixed_d": 433 |         for n in range(N): 434 |             for t in range(T): 435 |                 diff = 2 * R + 2 * D 436 |                 assert(freedom_duration(cfg) == R + diff + 1) 437 |                 if t - R - diff < 0: 438 |                     s.add(cwnds[n][t] > 0) 439 |                     s.add(rates[n][t] == cwnds[n][t] / R) 440 |                 else: 441 |                     if t % (R + diff) == 0: 442 |                         cwnd = lnk.outs[n][t-R] - lnk.outs[n][t-R-diff] + alpha 443 |                         s.add(cwnds[n][t] == cwnd) 444 |                         s.add(rates[n][t] == cwnds[n][t] / R) 445 |                     else: 446 |                         s.add(cwnds[n][t] == cwnds[n][t-1]) 447 |                         s.add(rates[n][t] == rates[n][t-1]) 448 |     elif cca == "copa": 449 |         for n in range(N): 450 |             for t in range(T): 451 |                 assert(freedom_duration(cfg) == R + D) 452 |                 if t - freedom_duration(cfg) < 0: 453 |                     s.add(cwnds[n][t] > 0) 454 |                 else: 455 |                     incr_alloweds, decr_alloweds = [], [] 456 |                     for dt in range(lnk.max_dt): 457 |                         # Whether we are allowed to increase/decrease 458 |                         incr_allowed = s.Bool("incr_allowed_%d,%d,%d" % (n, t, dt)) 459 |                         decr_allowed = s.Bool("decr_allowed_%d,%d,%d" % (n, t, dt)) 460 |                         # Warning: Adversary here is too powerful if D > 1. 
Add 461 | # a constraint for every point between t-1 and t-1-D 462 | assert(D == 1) 463 | s.add(incr_allowed 464 | == And( 465 | lnk.qdel[t-R][dt], 466 | cwnds[n][t-1] * max(0, dt-1) <= alpha*(R+max(0, dt-1)))) 467 | s.add(decr_allowed 468 | == And( 469 | lnk.qdel[t-R-D][dt], 470 | cwnds[n][t-1] * dt >= alpha * (R + dt))) 471 | incr_alloweds.append(incr_allowed) 472 | decr_alloweds.append(decr_allowed) 473 | # If inp is high at the beginning, qdel can be arbitrarily 474 | # large 475 | decr_alloweds.append(lnk.tot_out[t-R] < lnk.tot_inp[0]) 476 | 477 | incr_allowed = Or(*incr_alloweds) 478 | decr_allowed = Or(*decr_alloweds) 479 | 480 | # Either increase or decrease cwnd 481 | incr = s.Bool("incr_%d,%d" % (n, t)) 482 | decr = s.Bool("decr_%d,%d" % (n, t)) 483 | s.add(Or( 484 | And(incr, Not(decr)), 485 | And(Not(incr), decr))) 486 | s.add(Implies(incr, incr_allowed)) 487 | s.add(Implies(decr, decr_allowed)) 488 | s.add(Implies(incr, cwnds[n][t] == cwnds[n][t-1]+alpha/R)) 489 | sub = cwnds[n][t-1] - alpha / R 490 | s.add(Implies(decr, cwnds[n][t] 491 | == If(sub < alpha, alpha, sub))) 492 | 493 | # Basic constraints 494 | s.add(cwnds[n][t] > 0) 495 | # Pacing 496 | s.add(rates[n][t] == cwnds[n][t] / R) 497 | # s.add(rates[n][t] == 50) 498 | 499 | elif cca == "copa_multiflow": 500 | copa_qdel = [s.Real(f"copa_qdel,{t}") for t in range(T)] 501 | for t in range(T): 502 | # Warning: Adversary here is too powerful if D > 1. 
Add 503 | # a constraint for every point between t-1 and t-1-D 504 | assert(D == 1) 505 | if t - freedom_duration(cfg) < 0: 506 | continue 507 | for dt in range(lnk.max_dt): 508 | s.add(Implies(lnk.qdel[t-R][dt], 509 | copa_qdel[t] >= max(0, dt-1))) 510 | s.add(Implies(lnk.qdel[t-R-D][dt], 511 | copa_qdel[t] <= dt)) 512 | 513 | for n in range(N): 514 | for t in range(T): 515 | assert(freedom_duration(cfg) == R + D) 516 | if t - freedom_duration(cfg) < 0: 517 | s.add(cwnds[n][t] > 0) 518 | else: 519 | incr = cwnds[n][t] * copa_qdel[t]\ 520 | <= alpha * (R + copa_qdel[t]) 521 | 522 | s.add(Implies(incr, cwnds[n][t] == cwnds[n][t-1]+alpha/R)) 523 | sub = cwnds[n][t-1] - alpha / R 524 | s.add(Implies(Not(incr), cwnds[n][t] 525 | == If(sub < alpha, alpha, sub))) 526 | 527 | # Basic constraints 528 | s.add(cwnds[n][t] > 0) 529 | # Pacing 530 | s.add(rates[n][t] == cwnds[n][t] / R) 531 | # s.add(rates[n][t] == 50) 532 | 533 | elif cca == "bbr": 534 | cycle_start = [[s.Real(f"cycle_start_{n},{t}") for t in range(T)] 535 | for n in range(N)] 536 | states = [[s.Int(f"states_{n},{t}") for t in range(T)] for n in range(N)] 537 | nrtts = [[s.Int(f"nrtts_{n},{t}") for t in range(T)] for n in range(N)] 538 | new_rates = [[s.Real(f"new_rates_{n},{t}") for t in range(T)] 539 | for n in range(N)] 540 | for n in range(N): 541 | s.add(states[n][0] == 0) 542 | s.add(nrtts[n][0] == 0) 543 | assert(freedom_duration(cfg) == R + 1) 544 | s.add(cycle_start[n][0] <= lnk.inps[n][0]) 545 | s.add(cycle_start[n][0] >= lnk.outs[n][0]) 546 | # for t in range(R + 1): 547 | # s.add(rates[n][t] > 0) 548 | # s.add(cwnds[n][t] == 2 * rates[n][t] * R) 549 | s.add(rates[n][0] > 0) 550 | s.add(cwnds[n][0] == 2 * R * rates[n][0]) 551 | 552 | for t in range(1, T): 553 | for dt in range(1, T): 554 | if t - dt < 0: 555 | continue 556 | # Has the cycle ended?
We look at each dt separately so we 557 | # don't need to add non-linear constraints 558 | if t - dt == 0: 559 | ended = And(cycle_start[n][t-dt] == cycle_start[n][t-1], 560 | lnk.outs[n][t-R] >= cycle_start[n][t-1]) 561 | else: 562 | ended = And(cycle_start[n][t-dt] == cycle_start[n][t-1], 563 | cycle_start[n][t-dt] > cycle_start[n][t-dt-1], 564 | lnk.outs[n][t-R] >= cycle_start[n][t-1]) 565 | r1 = (lnk.outs[n][t] - lnk.outs[n][t-dt]) / dt 566 | r2 = (lnk.outs[n][t] - lnk.outs[n][t-dt-1]) / (dt+1) 567 | 568 | # The new rate should be between r1 and r2 569 | s.add(Implies(And(ended, r1 >= r2), 570 | And(r1 >= new_rates[n][t], 571 | r2 <= new_rates[n][t]))) 572 | s.add(Implies(And(ended, r2 >= r1), 573 | And(r1 <= new_rates[n][t], 574 | r2 >= new_rates[n][t]))) 575 | # Useful in case `ended` is not true for any dt because 576 | # tot_inp_0 >> tot_out_0 577 | s.add(new_rates[n][t] > 0) 578 | 579 | 580 | # Find the maximum rate in the last 10 RTTs 581 | max_rates = [s.Real(f"max_rates_{n},{t},{dt}") for dt in range(t+1)] 582 | s.add(max_rates[0] == new_rates[n][t]) 583 | # Create a separate variable for easy access from the plotting script 584 | max_rate = s.Real(f"max_rate_{n},{t}") 585 | s.add(max_rate == max_rates[-1]) 586 | for dt in range(1, t+1): 587 | assert(t - dt >= 0) 588 | calc = nrtts[n][t] - nrtts[n][t-dt] < 10 589 | s.add(Implies(calc, 590 | max_rates[dt] 591 | == If(max_rates[dt-1] > new_rates[n][t-dt], 592 | max_rates[dt-1], new_rates[n][t-dt]))) 593 | s.add(Implies(Not(calc), 594 | max_rates[dt] == max_rates[dt-1])) 595 | 596 | if t - R < 0: 597 | ended = False 598 | else: 599 | ended = lnk.outs[n][t-R] >= cycle_start[n][t-1] 600 | 601 | # Cycle did not end. Things remain the same 602 | s.add(Implies(Not(ended), 603 | And(new_rates[n][t] == new_rates[n][t-1], 604 | cycle_start[n][t] == cycle_start[n][t-1], 605 | states[n][t] == states[n][t-1], 606 | nrtts[n][t] == nrtts[n][t-1]))) 607 | 608 | # Things changed. 
Update states, cycle_start and cwnds 609 | s.add(Implies(ended, 610 | And(cycle_start[n][t] <= lnk.inps[n][t], 611 | cycle_start[n][t] >= lnk.inps[n][t-1], 612 | states[n][t] == If(states[n][t-1] < 4, 613 | states[n][t-1] + 1, 614 | 0), 615 | nrtts[n][t] == nrtts[n][t-1] + 1))) 616 | 617 | # If the cycle *just* ended, then the new rate can be anything 618 | # between the old rate and the new rate 619 | # in_between_rate = s.Real(f"in_between_rate_{n},{t}") 620 | # s.add(Implies(And(ended, max_rate >= rates[n][t-1]), 621 | # And(max_rate >= in_between_rate, 622 | # rates[n][t-1] <= in_between_rate))) 623 | # s.add(Implies(And(ended, max_rate <= rates[n][t-1]), 624 | # And(max_rate <= in_between_rate, 625 | # rates[n][t-1] >= in_between_rate))) 626 | # s.add(Implies(Not(ended), in_between_rate == max_rate)) 627 | 628 | # Implement min cwnd 629 | in_between_rate = If(max_rate < alpha / R, alpha / R, max_rate) 630 | 631 | s.add(rates[n][t] == If(states[n][t] == 0, 632 | 1.25 * in_between_rate, 633 | If(states[n][t] == 1, 634 | 0.75 * in_between_rate, 635 | in_between_rate))) 636 | s.add(cwnds[n][t] == 2 * R * in_between_rate) 637 | 638 | else: 639 | print("Unrecognized cca") 640 | exit(1) 641 | return s 642 | 643 | 644 | def freedom_duration(cfg: ModelConfig) -> int: 645 | ''' The amount of time for which the cc can pick any cwnd ''' 646 | if cfg.cca == "const": 647 | return 0 648 | elif cfg.cca == "aimd": 649 | return 1 650 | elif cfg.cca == "fixed_d": 651 | return 3 * cfg.R + 2 * cfg.D + 1 652 | elif cfg.cca == "copa": 653 | return cfg.R + cfg.D 654 | elif cfg.cca == "copa_multiflow": 655 | return cfg.R + cfg.D 656 | elif cfg.cca == "bbr": 657 | return cfg.R + 1 658 | else: 659 | assert(False) 660 | 661 | 662 | def plot_model(m: Dict[str, Union[float, bool]], cfg: ModelConfig): 663 | def to_arr(name: str, n: Optional[int] = None) -> np.ndarray: 664 | if n is None: 665 | names = [f"{name}_{t}" for t in range(cfg.T)] 666 | else: 667 | names = [f"{name}_{n},{t}" for t 
in range(cfg.T)] 668 | res = [] 669 | for n in names: 670 | if n in m: 671 | res.append(float(m[n])) 672 | else: 673 | res.append(-1) 674 | return np.asarray(res) 675 | 676 | if cfg.alpha is None: 677 | alpha = float(m["alpha"]) 678 | else: 679 | alpha = cfg.alpha 680 | 681 | # Print the constants we picked 682 | if cfg.dupacks is None: 683 | print("dupacks = ", m["dupacks"]) 684 | print("alpha = ", alpha) 685 | print(f"BDP = {cfg.C * cfg.R / alpha} packets") 686 | if not cfg.compose: 687 | print("epsilon = ", m["epsilon"]) 688 | for n in range(cfg.N): 689 | print(f"Init cwnd for flow {n}: ", 690 | to_arr("cwnd", n)[:freedom_duration(cfg)]) 691 | 692 | # Configure the plotting 693 | fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True) 694 | fig.set_size_inches(18.5, 10.5) 695 | ax1.grid(True) 696 | ax2.grid(True) 697 | ax3.grid(True) 698 | ax1.set_ylabel("Cum. Packets") 699 | ax3.set_ylabel("Packets") 700 | ax2.set_xticks(range(0, cfg.T)) 701 | ax2.yaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) 702 | 703 | # Create 3 y-axes in the second plot 704 | ax2_rtt = ax2.twinx() 705 | ax2_rate = ax2.twinx() 706 | ax2.set_ylabel("Cwnd") 707 | ax2_rtt.set_ylabel("RTT") 708 | ax2_rate.set_ylabel("Rate") 709 | ax2_rate.spines["right"].set_position(("axes", 1.05)) 710 | ax2_rate.spines["right"].set_visible(True) 711 | 712 | linestyles = ['--', ':', '-.', '-'] 713 | adj = 0 # np.asarray([C * t for t in range(T)]) 714 | times = [t for t in range(cfg.T)] 715 | ct = np.asarray([cfg.C * t for t in range(cfg.T)]) 716 | 717 | ax1.plot(times, (ct - to_arr("wasted")) / alpha, 718 | color='black', marker='o', label='Bound', linewidth=3) 719 | ax1.plot(times[cfg.D:], (ct - to_arr("wasted"))[:-cfg.D] / alpha, 720 | color='black', marker='o', linewidth=3) 721 | ax1.plot(times, to_arr("tot_out") / alpha, 722 | color='red', marker='o', label='Total Egress') 723 | ax1.plot(times, to_arr("tot_inp") / alpha, 724 | color='blue', marker='o', label='Total Ingress') 725 | 
ax1.plot(times, (to_arr("tot_inp") - to_arr("tot_lost")) / alpha, 726 | color='lightblue', marker='o', label='Total Ingress Accepted') 727 | 728 | # Print incr/decr allowed 729 | if cfg.cca == "copa": 730 | print("Copa queueing delay calculation. Format [incr/decr/qdel]") 731 | for n in range(cfg.N): 732 | print(f"Flow {n}") 733 | for t in range(cfg.T): 734 | print("{:<3}".format(t), end=": ") 735 | for dt in range(cfg.T): 736 | iname = f"incr_allowed_{n},{t},{dt}" 737 | dname = f"decr_allowed_{n},{t},{dt}" 738 | qname = f"qdel_{t},{dt}" 739 | if iname not in m: 740 | print(f" - /{int(m[qname])}", end=" ") 741 | else: 742 | print(f"{int(m[iname])}/{int(m[dname])}/{int(m[qname])}", 743 | end=" ") 744 | print("") 745 | 746 | col_names: List[str] = ["wasted", "tot_out", "tot_inp", "tot_lost"] 747 | per_flow: List[str] = ["loss_detected", "last_loss", "cwnd", "rate"] 748 | if cfg.cca == "bbr": 749 | per_flow.extend([f"max_rate", "states"]) 750 | 751 | cols: List[Tuple[str, Optional[int]]] = [(x, None) for x in col_names] 752 | for n in range(cfg.N): 753 | for x in per_flow: 754 | if x == "last_loss" and cfg.cca != "aimd": 755 | continue 756 | cols.append((x, n)) 757 | col_names.append(f"{x}_{n}") 758 | 759 | print("\n", "=" * 30, "\n") 760 | print(("t " + "{:<15}" * len(col_names)).format(*col_names)) 761 | for t, vals in enumerate(zip(*[list(to_arr(*c)) for c in cols])): 762 | v = ["%.10f" % v for v in vals] 763 | print(f"{t: <2}", ("{:<15}" * len(v)).format(*v)) 764 | 765 | # Calculate RTT (misnomer. 
Really just qdel) 766 | rtts_u, rtts_l, rtt_times = [], [], [] 767 | for t in range(cfg.T): 768 | rtt_u, rtt_l = None, None 769 | for dt in range(cfg.T): 770 | if m[f"qdel_{t},{dt}"]: 771 | assert(rtt_l is None) 772 | rtt_l = max(0, dt - 1) 773 | if t >= cfg.D: 774 | if m[f"qdel_{t-cfg.D},{dt}"]: 775 | assert(rtt_u is None) 776 | rtt_u = dt 777 | else: 778 | rtt_u = 0 779 | if rtt_u is None: 780 | rtt_u = rtt_l 781 | if rtt_l is None: 782 | rtt_l = rtt_u 783 | if rtt_l is not None: 784 | assert(rtt_u is not None) 785 | rtt_times.append(t) 786 | rtts_u.append(rtt_u) 787 | rtts_l.append(rtt_l) 788 | ax2_rtt.fill_between(rtt_times, rtts_u, rtts_l, 789 | color='lightblue', label='RTT', alpha=0.5) 790 | ax2_rtt.plot(rtt_times, rtts_u, rtts_l, 791 | color='blue', marker='o') 792 | 793 | for n in range(cfg.N): 794 | args = {'marker': 'o', 'linestyle': linestyles[n]} 795 | 796 | ax2.plot(times, to_arr("cwnd", n) / alpha, 797 | color='black', label='Cwnd %d' % n, **args) 798 | ax2_rate.plot(times, to_arr("rate", n) / alpha, 799 | color='orange', label='Rate %d' % n, **args) 800 | 801 | queue = to_arr("tot_inp") - to_arr("tot_lost") - to_arr("tot_out") 802 | toks = ct - to_arr("wasted") - to_arr("tot_out") 803 | wasted = to_arr("wasted") 804 | prev_toks = [toks[0]] 805 | if cfg.buf_min is not None: 806 | loss_thresh = [- wasted[0] + cfg.buf_min] 807 | for t in range(1, len(times)): 808 | prev_toks.append(toks[t] + wasted[t] - wasted[t-1]) 809 | if cfg.buf_min is not None: 810 | loss_thresh.append(cfg.C * (t-1) - wasted[t-1] 811 | - to_arr("tot_out")[t] + cfg.buf_min) 812 | prev_toks = np.asarray(prev_toks) 813 | if cfg.buf_min is not None: 814 | loss_thresh = np.asarray(loss_thresh) 815 | 816 | ax3.plot(times, queue / alpha, 817 | label="Queue length", marker="o", color="blue") 818 | ax3.plot(times, toks / alpha, 819 | label="Tokens", marker="o", color="black") 820 | ax3.fill_between(times, prev_toks / alpha, toks / alpha, 821 | color="lightgrey") 822 | ax3.plot(times[1:], 
(to_arr("tot_out")[1:] - to_arr("tot_out")[:-1]) / alpha, 823 | label="Service", marker="o", color="red") 824 | if cfg.buf_min is not None: 825 | ax3.plot(times, loss_thresh / alpha, label="Loss thresh", 826 | marker="o", color="green") 827 | for t in range(1, len(times)): 828 | if m[f"tot_lost_{t}"] > m[f"tot_lost_{t-1}"]: 829 | ax1.plot([t], [0.1 / alpha], marker="X", color="red") 830 | ax2.plot([t], [0.1 / alpha], marker="X", color="red") 831 | ax3.plot([t], [0.1 / alpha], marker="X", color="red") 832 | if m[f"loss_detected_0,{t}"] > m[f"loss_detected_0,{t-1}"]: 833 | ax1.plot([t], [0.1 / alpha], marker="X", color="orange") 834 | ax2.plot([t], [0.1 / alpha], marker="X", color="orange") 835 | ax3.plot([t], [0.1 / alpha], marker="X", color="orange") 836 | 837 | ax1.set_ylim(0, to_arr("tot_inp")[-1] / alpha) 838 | if cfg.buf_min is not None: 839 | ax3.set_ylim(0, max(np.max(queue), np.max(toks[:-1]), np.max(loss_thresh[:-1])) / alpha) 840 | else: 841 | ax3.set_ylim(0, max(np.max(queue), np.max(toks[:-1])) / alpha) 842 | 843 | ax1.legend() 844 | ax2.legend() 845 | ax2_rtt.legend() 846 | ax2_rate.legend() 847 | ax3.legend() 848 | plt.savefig('multi_flow_plot.svg') 849 | plt.show() 850 | 851 | 852 | if __name__ == "__main__": 853 | cfg = ModelConfig( 854 | N=1, 855 | D=1, 856 | R=1, 857 | T=15, 858 | C=1, 859 | buf_min=None, 860 | buf_max=None, 861 | dupacks=None, 862 | cca="bbr", 863 | compose=True, 864 | alpha=None, 865 | pacing=False, 866 | epsilon="zero", 867 | unsat_core=False) 868 | s = make_solver(cfg) 869 | dur = freedom_duration(cfg) 870 | 871 | # Query constraints 872 | 873 | # Low utilization 874 | s.add(Real(f"cwnd_0,{freedom_duration(cfg)-1}") >= s.Real(f"cwnd_0,{cfg.T-1}")) 875 | s.add(Real(f"tot_out_{cfg.T-1}") - Real("tot_out_0") < 0.1 * cfg.C * (cfg.T - 1)) 876 | s.add(Real("tot_lost_0") == 0) 877 | 878 | # AIMD loss 879 | # cons = [] 880 | # for t in range(1, cfg.T): 881 | # cons.append(And(Real(f"tot_lost_{t}") > Real(f"tot_lost_{t-1}"), 882 | # 
Real(f"cwnd_0,{t}") <= 1.1)) 883 | # s.add(Or(*cons)) 884 | # s.add(Real("tot_lost_0") == 0) 885 | 886 | # Run the model 887 | from questions import run_query 888 | from clean_output import simplify_solution 889 | 890 | satisfiable = run_query(s, cfg) 891 | satisfiable = s.check() 892 | print(satisfiable) 893 | if str(satisfiable) != 'sat': 894 | if cfg.unsat_core: 895 | print(s.unsat_core()) 896 | exit() 897 | m = s.model() 898 | m = model_to_dict(m) 899 | 900 | m = simplify_solution(cfg, m, s.assertions()) 901 | 902 | plot_model(m, cfg) 903 | -------------------------------------------------------------------------------- /old/questions.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import pickle as pkl 3 | from typing import Callable, Optional, Tuple 4 | from z3 import And, Or, Real, Solver 5 | 6 | from binary_search import BinarySearch, sat_to_val 7 | from cache import QueryResult, run_query 8 | from multi_flow import ModelConfig, freedom_duration, make_solver, plot_model 9 | 10 | 11 | def find_bound(model_cons: Callable[[ModelConfig, float], Solver], 12 | cfg: ModelConfig, search: BinarySearch, timeout: float, 13 | reverse: bool = False): 14 | while True: 15 | thresh = search.next_pt() 16 | if thresh is None: 17 | break 18 | s = model_cons(cfg, thresh) 19 | 20 | print(f"Testing threshold = {thresh}") 21 | qres = run_query(s, cfg, timeout=timeout) 22 | 23 | print(qres.satisfiable) 24 | search.register_pt(thresh, sat_to_val(qres.satisfiable, reverse)) 25 | return search.get_bounds() 26 | 27 | 28 | def find_lower_tpt_bound( 29 | cfg: ModelConfig, err: float, timeout: float 30 | ) -> Tuple[float, Optional[Tuple[float, float]], float]: 31 | ''' Finds a bound in terms of percentage used ''' 32 | search = BinarySearch(0.0, 1.0, err) 33 | while True: 34 | pt = search.next_pt() 35 | if pt is None: 36 | break 37 | 38 | s = make_solver(cfg) 39 | 40 | # Add constraints that allow us to extend this finite time 
result to 41 | # infinity via induction. 42 | 43 | # We start at timestep 1, so inp and out can have any value in the 44 | # beginning 45 | 46 | # Rate is high at some point in the future. We use 0 to t (and not 47 | # 1 to t-1 or 0 to t+1) because 0 to 1 is already required to have a 48 | # high rate, so our bound can exploit that. 49 | 50 | 51 | # We do not take this idea further and pick 0 to t+1 because after 52 | # induction we say that we have a sequence of *non-overlapping* 53 | # intervals with a high rate, therefore the whole period also has a 54 | # high rate. This doesn't work if the intervals overlap, since the 55 | # overlapping portion could have a very high rate which will be double 56 | # counted. 57 | proven_cond = [] 58 | for t in range(2, cfg.T): 59 | proven_cond.append( 60 | Or( 61 | Real("tot_out_%d" % (t + 1)) - Real("tot_out_%d" % t) 62 | <= cfg.C*pt, 63 | Real("tot_out_%d" % t)-Real("tot_out_1") <= cfg.C*(t-1)*pt 64 | ) 65 | ) 66 | s.add(And( 67 | Real("tot_out_2") - Real("tot_out_1") >= cfg.C * pt, 68 | And(*proven_cond) 69 | )) 70 | 71 | # Utilization 72 | s.add(Real("tot_out_%d" % (cfg.T - 1)) < cfg.C * (cfg.T - 1) * pt) 73 | 74 | print(f"Testing {pt * 100}% utilization") 75 | qres = run_query(s, cfg, timeout=timeout) 76 | 77 | print(qres.satisfiable) 78 | search.register_pt(pt, sat_to_val(qres.satisfiable)) 79 | return search.get_bounds() 80 | 81 | 82 | def find_const_cwnd_util_lbound( 83 | cfg: ModelConfig, cwnd_thresh: float, err: float, timeout: float 84 | ): 85 | ''' Find a (possibly loose) bound on the minimum utilization it will 86 | eventually achieve if initial cwnds are all greater than the given threshold.
87 | ''' 88 | 89 | search = BinarySearch(0, 1.0, err) 90 | while True: 91 | pt = search.next_pt() 92 | if pt is None: 93 | break 94 | 95 | s = make_solver(cfg) 96 | 97 | for n in range(cfg.N): 98 | for t in range(freedom_duration(cfg)): 99 | s.add(Real(f"cwnd_{n},{t}") >= cwnd_thresh) 100 | 101 | s.add(Real(f"tot_out_{cfg.T-1}") < pt * cfg.C * (cfg.T - 1)) 102 | 103 | print(f"Testing {pt * 100}% utilization") 104 | qres = run_query(s, cfg, timeout=timeout) 105 | 106 | print(qres.satisfiable) 107 | search.register_pt(pt, sat_to_val(qres.satisfiable)) 108 | return search.get_bounds() 109 | 110 | 111 | def find_cwnd_incr_bound( 112 | cfg: ModelConfig, max_cwnd: Optional[float], err: float, timeout: float 113 | ): 114 | ''' Find a threshold such that, if the cwnd starts below this threshold, it 115 | would increase past that threshold at the end of the timeframe. Then 116 | invoke find_const_cwnd_util_lbound. ''' 117 | if max_cwnd is None: 118 | if cfg.buf_max is None: 119 | print("Error: Neither max_cwnd nor buf_max are specified") 120 | return 121 | max_cwnd = cfg.C * cfg.R + cfg.buf_max 122 | # In multiple of BDP 123 | search = BinarySearch(0.01, max_cwnd, err) 124 | while True: 125 | thresh = search.next_pt() 126 | if thresh is None: 127 | break 128 | 129 | s = make_solver(cfg) 130 | 131 | conds = [] 132 | dur = freedom_duration(cfg) 133 | for n in range(cfg.N): 134 | for t in range(dur): 135 | s.add(Real(f"cwnd_{n},{t}") <= thresh) 136 | # We need all the last freedom_duration(cfg) timesteps to be 137 | # large so we can apply induction to extend theorem to infinity 138 | conds.append(And( 139 | Real(f"cwnd_{n},{cfg.T-1-t}") 140 | < Real(f"cwnd_{n},{dur-1}") + Real("alpha"), 141 | Real(f"cwnd_{n},{cfg.T-1-t}") < thresh)) 142 | s.add(Or(*conds)) 143 | 144 | print(f"Testing init cwnd = {thresh}") 145 | qres = run_query(s, cfg, timeout=timeout) 146 | 147 | print(qres.satisfiable) 148 | search.register_pt(thresh, sat_to_val(qres.satisfiable)) 149 | 150 | return 
search.get_bounds() 151 | 152 | 153 | def cwnd_stay_bound(cfg: ModelConfig, thresh: float) -> Solver: 154 | s = make_solver(cfg) 155 | conds = [] 156 | dur = freedom_duration(cfg) 157 | for n in range(cfg.N): 158 | for t in range(dur): 159 | s.add(Real(f"cwnd_{n},{t}") >= thresh) 160 | for t in range(dur, cfg.T): 161 | # We need all the last freedom_duration(cfg) timesteps to be 162 | # large so we can apply induction to extend theorem to infinity 163 | conds.append(Real(f"cwnd_{n},{t}") < thresh) 164 | s.add(Or(*conds)) 165 | return s 166 | 167 | 168 | def find_periodic_low_util( 169 | cfg: ModelConfig, no_loss: bool, err: float, timeout: float, 170 | ): 171 | T = cfg.T 172 | 173 | search = BinarySearch(0, 1, err) 174 | while True: 175 | util = search.next_pt() 176 | if util is None: 177 | break 178 | 179 | s = make_solver(cfg) 180 | 181 | # Make sure everything is periodic 182 | dur = freedom_duration(cfg) 183 | for n in range(cfg.N): 184 | for t in range(dur): 185 | tf = cfg.T-dur+t 186 | s.add(Real(f"cwnd_{n},{t}") == Real(f"cwnd_{n},{tf}")) 187 | 188 | s.add(Real(f"inp_{n},0") - Real(f"out_{n},0") 189 | == Real(f"inp_{n},{T-1}") - Real(f"out_{n},{T-1}")) 190 | s.add(Real(f"losts_{n},0") - Real(f"loss_detected_{n},0") 191 | == Real(f"losts_{n},{T-1}") - Real(f"loss_detected_{n},{T-1}")) 192 | 193 | if no_loss: 194 | s.add(Real(f"losts_{n},0") == 0) 195 | s.add(Real(f"tot_out_{cfg.T-1}") < util * cfg.C * (T - 1)) 196 | 197 | # Eliminate timeouts where we just stop sending packets 198 | for t in range(cfg.T): 199 | s.add(Real(f"tot_inp_{t}") - Real(f"tot_lost_{t}") 200 | > Real(f"tot_out_{t}")) 201 | 202 | print(f"Testing {util * 100}% utilization") 203 | qres = run_query(s, cfg, timeout=timeout) 204 | 205 | print(qres.satisfiable) 206 | search.register_pt(util, sat_to_val(qres.satisfiable)) 207 | return search.get_bounds() 208 | 209 | 210 | def find_periodic_low_cwnd( 211 | cfg: ModelConfig, no_loss: bool, err: float, timeout: float 212 | ): 213 | T = cfg.T 214 
| 215 | search = BinarySearch(0, cfg.C * T + cfg.buf_max, err) 216 | while True: 217 | thresh = search.next_pt() 218 | if thresh is None: 219 | break 220 | 221 | s = make_solver(cfg) 222 | 223 | # Make sure everything is periodic 224 | dur = freedom_duration(cfg) 225 | for n in range(cfg.N): 226 | for t in range(dur): 227 | tf = cfg.T-dur+t 228 | s.add(Real(f"cwnd_{n},{t}") == Real(f"cwnd_{n},{tf}")) 229 | 230 | s.add(Real(f"inp_{n},0") - Real(f"losts_{n},0") - Real(f"out_{n},0") 231 | == Real(f"inp_{n},{T-1}") - Real(f"losts_{n},{T-1}") - Real(f"out_{n},{T-1}")) 232 | s.add(Real(f"losts_{n},0") - Real(f"loss_detected_{n},0") 233 | == Real(f"losts_{n},{T-1}") - Real(f"loss_detected_{n},{T-1}")) 234 | 235 | if no_loss: 236 | s.add(Real(f"losts_{n},0") == 0) 237 | 238 | s.add(Or(*[Real(f"cwnd_{n},{t}") < thresh for t in range(T)])) 239 | 240 | s.add(Real("tot_out_0") - (-Real("wasted_0")) 241 | == Real(f"tot_out_{T-1}") - (cfg.C * (T-1) - Real(f"wasted_{T-1}"))) 242 | 243 | print(f"Testing cwnd thresh {thresh / (cfg.C * cfg.R)} BDP") 244 | qres = run_query(s, cfg, timeout=timeout) 245 | 246 | print(qres.satisfiable) 247 | search.register_pt(thresh, sat_to_val(qres.satisfiable)) 248 | return search.get_bounds() 249 | 250 | 251 | if __name__ == "__main__": 252 | cfg_args = ModelConfig.get_argparse() 253 | common_args = argparse.ArgumentParser(add_help=False) 254 | common_args.add_argument("--err", type=float, default=0.05) 255 | common_args.add_argument("--timeout", type=float, default=10) 256 | 257 | parser = argparse.ArgumentParser() 258 | subparsers = parser.add_subparsers(title="subcommand", dest="subcommand") 259 | 260 | tpt_bound_args = subparsers.add_parser( 261 | "tpt_bound", parents=[cfg_args, common_args]) 262 | 263 | cwnd_incr_bound_args = subparsers.add_parser( 264 | "cwnd_incr_bound", 265 | parents=[cfg_args, common_args]) 266 | cwnd_incr_bound_args.add_argument( 267 | "--max-cwnd", type=float, required=False, 268 | help="As a multiple of BDP, the max cwnd 
threshold we should consider") 269 | 270 | cwnd_stay_bound_args = subparsers.add_parser( 271 | "cwnd_stay_bound", 272 | parents=[cfg_args, common_args]) 273 | cwnd_stay_bound_args.add_argument( 274 | "--max-cwnd", type=float, required=False, 275 | help="As a multiple of BDP, the max cwnd threshold we should consider") 276 | 277 | const_cwnd_util_lbound_args = subparsers.add_parser( 278 | "const_cwnd_util_lbound", 279 | parents=[cfg_args, common_args]) 280 | const_cwnd_util_lbound_args.add_argument( 281 | "--cwnd-thresh", type=float, required=True) 282 | 283 | periodic_low_util_args = subparsers.add_parser( 284 | "periodic_low_util", 285 | parents=[cfg_args, common_args]) 286 | periodic_low_util_args.add_argument("--no-loss", action="store_const", 287 | const=True, default=False) 288 | 289 | periodic_low_cwnd_args = subparsers.add_parser( 290 | "periodic_low_cwnd", 291 | parents=[cfg_args, common_args]) 292 | periodic_low_cwnd_args.add_argument("--no-loss", action="store_const", 293 | const=True, default=False) 294 | 295 | plot_args = subparsers.add_parser("plot") 296 | plot_args.add_argument("cache_file_name") 297 | 298 | args = parser.parse_args() 299 | if args.subcommand != "plot": 300 | cfg = ModelConfig.from_argparse(args) 301 | 302 | if args.subcommand == "tpt_bound": 303 | bounds = find_lower_tpt_bound( 304 | cfg, args.err, args.timeout) 305 | print(bounds) 306 | elif args.subcommand == "cwnd_incr_bound": 307 | bounds = find_cwnd_incr_bound( 308 | cfg, args.max_cwnd, args.err, args.timeout) 309 | print(bounds) 310 | elif args.subcommand == "cwnd_stay_bound": 311 | bounds = find_bound(cwnd_stay_bound, cfg, 312 | BinarySearch(0, cfg.C*cfg.R+cfg.buf_max, args.err), 313 | args.timeout) 314 | print(bounds) 315 | elif args.subcommand == "const_cwnd_util_lbound": 316 | bounds = find_const_cwnd_util_lbound( 317 | cfg, args.cwnd_thresh, args.err, args.timeout) 318 | print(bounds) 319 | elif args.subcommand == "periodic_low_util": 320 | bounds = find_periodic_low_util( 
321 | cfg, args.no_loss, args.err, args.timeout) 322 | print(bounds) 323 | elif args.subcommand == "periodic_low_cwnd": 324 | bounds = find_periodic_low_cwnd( 325 | cfg, args.no_loss, args.err, args.timeout) 326 | print(bounds) 327 | elif args.subcommand == "plot": 328 | try: 329 | f = open(args.cache_file_name, 'rb') 330 | qres: QueryResult = pkl.load(f) 331 | except Exception as e: 332 | print("Exception while loading cached file") 333 | print(e) 334 | print(qres.satisfiable) 335 | if qres.satisfiable == "sat": 336 | print(qres) 337 | assert(qres.model is not None) 338 | plot_model(qres.model, qres.cfg) 339 | -------------------------------------------------------------------------------- /plot.py: -------------------------------------------------------------------------------- 1 | import matplotlib 2 | import matplotlib.pyplot as plt 3 | import numpy as np 4 | import pickle as pkl 5 | import sys 6 | from typing import Any, List, Optional, Tuple, Union 7 | 8 | from pyz3_utils import QueryResult 9 | from config import ModelConfig 10 | from utils import ModelDict 11 | from variables import VariableNames 12 | 13 | 14 | def plot_model(m: ModelDict, c: ModelConfig, v: VariableNames): 15 | def to_arr(names: Union[List[str], List[List[str]], str], 16 | n: Optional[int] = None, frac=False) -> np.ndarray: 17 | if type(names) == str: 18 | # Sometimes name is a str, for instance when it is an internal CCA 19 | # variable and not available in Variables.
In this case, we 20 | # directly convert to list 21 | if n is None: 22 | names = [f"{names}_{t}" for t in range(c.T)] 23 | else: 24 | names = [f"{names}_{n},{t}" for t in range(c.T)] 25 | else: 26 | if n is not None: 27 | assert type(names[0]) == list 28 | names = names[n] 29 | else: 30 | assert type(names[0]) == str 31 | res = [] 32 | for n in names: 33 | if n in m: 34 | res.append(m[n]) 35 | else: 36 | res.append(-1) 37 | if frac: 38 | return res 39 | return np.array(res) 40 | 41 | # Print the constants we picked 42 | # if c.dupacks is None: 43 | # print("dupacks = ", m[v.dupacks]) 44 | if c.cca in ["aimd", "fixed_d", "copa"] and c.alpha is None: 45 | print("alpha = ", v.alpha) 46 | if not c.compose: 47 | print("epsilon = ", v.epsilon) 48 | 49 | # Configure the plotting 50 | fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True) 51 | fig.set_size_inches(18.5, 10.5) 52 | ax1.grid(True) 53 | ax2.grid(True) 54 | ax2.set_xticks(range(0, c.T)) 55 | ax2.yaxis.set_major_locator(matplotlib.ticker.MaxNLocator(integer=True)) 56 | 57 | # Create 3 y-axes in the second plot 58 | ax2_rtt = ax2.twinx() 59 | ax2_rate = ax2.twinx() 60 | ax2.set_ylabel("Cwnd") 61 | ax2.set_xlabel("Time") 62 | ax2_rtt.set_ylabel("Q Delay") 63 | ax2_rate.set_ylabel("Rate") 64 | ax2_rate.spines["right"].set_position(("axes", 1.05)) 65 | ax2_rate.spines["right"].set_visible(True) 66 | 67 | linestyles = ['--', ':', '-.', '-'] 68 | adj = 0 # np.asarray([C * t for t in range(T)]) 69 | times = [t for t in range(c.T)] 70 | ct = np.asarray([c.C * t for t in range(c.T)]) 71 | 72 | ax1.plot(times, ct - np.asarray(v.W), 73 | color='black', marker='o', label='Bound', linewidth=3) 74 | ax1.plot(times[c.D:], (ct - to_arr("wasted"))[:-c.D], 75 | color='black', marker='o', linewidth=3) 76 | ax1.plot(times, np.asarray(v.S), 77 | color='red', marker='o', label='Total Service') 78 | ax1.plot(times, np.asarray(v.A), 79 | color='blue', marker='o', label='Total Arrival') 80 | ax1.plot(times, np.asarray(v.A) - np.asarray(v.L), 
81 | color='lightblue', marker='o', label='Total Arrival Accepted') 82 | 83 | # Print incr/decr allowed 84 | if c.cca == "copa": 85 | print("Copa queueing delay calculation. Format [incr/decr/qdel]") 86 | for n in range(c.N): 87 | print(f"Flow {n}") 88 | for t in range(c.T): 89 | print("{:<3}".format(t), end=": ") 90 | for dt in range(c.T): 91 | iname = f"incr_allowed_{n},{t},{dt}" 92 | dname = f"decr_allowed_{n},{t},{dt}" 93 | qname = f"qdel_{t},{dt}" 94 | if iname not in m: 95 | print(f" - /{int(m[qname])}", end=" ") 96 | else: 97 | print( 98 | f"{int(m[iname])}/{int(m[dname])}/{int(m[qname])}", 99 | end=" ") 100 | print("") 101 | 102 | acc_flows: List[Any] = [v.W, v.S, v.A, v.L] 103 | acc_flows_names: List[str] = ["W", "S", "A", "L"] 104 | per_flow: List[Any] = [v.Ld_f, v.c_f, v.r_f] 105 | per_flow_names: List[str] = ["Ld_f", "c_f", "r_f"] 106 | if c.cca == "aimd": 107 | per_flow.append("last_loss") 108 | if c.cca == "bbr": 109 | for n in range(c.N): 110 | print("BBR start state = ", m[f"bbr_start_state_{n}"]) 111 | per_flow.extend(["max_rate"]) 112 | 113 | # def printable(names) -> str: 114 | # '''Create a human friendly name from the list after stripping 115 | # "{n},{t}"''' 116 | # if type(names) == str: 117 | # name = x 118 | # else: 119 | # assert type(names) == list 120 | # if type(names[0]) == str: 121 | # name = names[0] 122 | # elif type(names[0]) == list: 123 | # name = names[0][0] 124 | # name = '_'.join(name.split('_')[:-1]) 125 | # return name 126 | 127 | cols = acc_flows 128 | col_names: List[str] = acc_flows_names 129 | for n in range(c.N): 130 | for (x, name) in zip(per_flow, per_flow_names): 131 | cols.append(x[n]) 132 | col_names.append(f"{name}_{n}") 133 | 134 | # Print when we timed out 135 | for n in range(c.N): 136 | print(f"Flow {n} timed out at: ", 137 | [t for t in range(c.T) if m[f"timeout_{n},{t}"]]) 138 | 139 | print("\n", "=" * 30, "\n") 140 | print(("t " + "{:<15}" * len(col_names)).format(*col_names)) 141 | for t, vals in 
enumerate(zip(*[c for c in cols])): 142 | vals = ["%.10f" % float(v) for v in vals] 143 | print(f"{t: <2}", ("{:<15}" * len(vals)).format(*vals)) 144 | 145 | for n in range(c.N): 146 | args = {'marker': 'o', 'linestyle': linestyles[n]} 147 | 148 | if c.N > 1: 149 | ax1.plot(times, to_arr("service", n) - adj, 150 | color='red', label='Egress %d' % n, **args) 151 | ax1.plot(times, to_arr("arrival", n) - adj, 152 | color='blue', label='Ingress %d' % n, **args) 153 | 154 | ax1.plot(times, to_arr("losts", n) - adj, 155 | color='orange', label='Num lost %d' % n, **args) 156 | ax1.plot(times, to_arr("loss_detected", n)-adj, 157 | color='yellow', label='Num lost detected %d' % n, **args) 158 | 159 | ax2.plot(times, to_arr("cwnd", n), 160 | color='black', label='Cwnd %d' % n, **args) 161 | ax2_rate.plot(times, to_arr("rate", n), 162 | color='orange', label='Rate %d' % n, **args) 163 | 164 | # Determine queuing delay 165 | if not c.simplify and c.calculate_qdel: 166 | # This doesn't work with simplification, since numerical errors creep 167 | # up 168 | qdel_low = [] 169 | qdel_high = [] 170 | A = v.A 171 | L = v.L 172 | S = v.S 173 | for t in range(c.T): 174 | dt_found = None 175 | if t > 0 and S[t] == S[t-1]: 176 | assert(dt_found is None) 177 | qdel_low.append(qdel_low[-1]) 178 | qdel_high.append(qdel_high[-1]) 179 | dt_found = qdel_low[-1] 180 | continue 181 | for dt in range(t): 182 | if A[t-dt] - L[t-dt] == S[t] \ 183 | and (t-dt == 0 or A[t-dt] - L[t-dt] != A[t-dt-1] - L[t-dt-1]): 184 | assert(dt_found is None) 185 | dt_found = dt 186 | qdel_low.append(dt) 187 | qdel_high.append(dt) 188 | if A[t-dt-1] - L[t-dt-1] < S[t] and A[t-dt] - L[t-dt] > S[t]: 189 | assert(dt_found is None) 190 | dt_found = dt 191 | qdel_low.append(dt) 192 | qdel_high.append(dt+1) 193 | if A[0] - L[0] > S[t]: 194 | # Only lower bound is known 195 | assert(dt_found is None) 196 | qdel_low.append(t) 197 | qdel_high.append(1e9) # Infinity 198 | dt_found = t 199 | if A[0] - L[0] == S[t]: 200 | 
                assert(dt_found is None)
                qdel_low.append(t)
                qdel_high.append(t)
                dt_found = t
            assert(dt_found is not None)
        max_qdel = max([x for x in qdel_high if x != 1e9])
        ax2_rtt.set_ylim(min(qdel_low), max_qdel)
        ax2_rtt.fill_between(times, qdel_high, qdel_low,
                             color="skyblue", alpha=0.5, label="Q Delay")

    ax1.legend()
    ax2.legend(loc="upper left")
    ax2_rate.legend(loc="upper center")
    ax2_rtt.legend(loc="upper right")
    plt.savefig('multi_flow_plot.svg')
    plt.show()


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python3 plot.py cache_file_name", file=sys.stderr)
        exit(1)
    try:
        f = open(sys.argv[1], 'rb')
        qres: QueryResult = pkl.load(f)
    except Exception as e:
        print("Exception while loading cached file")
        print(e)
        exit(1)

    print(qres.satisfiable)
    if qres.satisfiable == "sat":
        assert(qres.model is not None)
        plot_model(qres.model, qres.cfg)
    else:
        print("The query was unsatisfiable, so there is nothing to plot")
--------------------------------------------------------------------------------
/test_cca_aimd.py:
--------------------------------------------------------------------------------
import unittest
from z3 import And, Not, Or

from model import Variables, cwnd_rate_arrival, epsilon_alpha,\
    initial, loss_detected, monotone, network, relate_tot
from cca_aimd import AIMDVariables, can_incr, cca_aimd
from config import ModelConfig
from pyz3_utils import MySolver


class TestCCAAimd(unittest.TestCase):
    def test_can_incr(self):
        def create():
            c = ModelConfig.default()
            c.aimd_incr_irrespective = False
            c.simplify = True
            s = MySolver()
            v = Variables(c, s)

            monotone(c, s, v)
            initial(c, s, v)
            relate_tot(c, s, v)
            network(c, s, v)
            loss_detected(c, s, v)
            epsilon_alpha(c, s, v)
            cwnd_rate_arrival(c, s, v)
            return (c, s, v)

        # There exists a network trace where cwnd can increase
        c, s, v = create()
        cv = AIMDVariables(c, s)
        can_incr(c, s, v, cv)
        s.add(Or(*[cv.incr_f[0][t] for t in range(c.T)]))
        sat = s.check()
        self.assertEqual(str(sat), "sat")

        # If S increases enough, cwnd will always increase
        c, s, v = create()
        cv = cca_aimd(c, s, v)
        conds = []
        for t in range(4, 5):  # range(1, c.T):
            conds.append(And(
                v.S[t] - v.S[t-1] >= v.c_f[0][t],
                Not(cv.incr_f[0][t])))
        s.add(Or(*conds))
        sat = s.check()
        self.assertEqual(str(sat), "unsat")


if __name__ == '__main__':
    unittest.main()
--------------------------------------------------------------------------------
/test_clean_output.py:
--------------------------------------------------------------------------------
import unittest
from clean_output import LinearVars, eval_smt, anded_constraints, \
    get_linear_vars, substitute_if
from z3 import And, Bool, If, Implies, Or, Real, Solver


class TestCleanOutput(unittest.TestCase):
    def test_eval_smt(self):
        s = Solver()
        s.add(Real('a') < Real('b'))
        s.add(Implies(Bool('x'), Real('a') > Real('b')))
        s.add(Or(Bool('x'), Bool('y')))

        self.assertFalse(eval_smt({"a": 0, "b": 1, "x": False, "y": False},
                                  s.assertions()))
        self.assertFalse(eval_smt({"a": 0, "b": 1, "x": True, "y": False},
                                  s.assertions()))
        self.assertTrue(eval_smt({"a": 0, "b": 1, "x": False, "y": True},
                                 s.assertions()))

    def test_anded_constraints(self):
        s = Solver()
        e1 = Real("a") < Real("b")
        e2 = Real("b") + Real("c") >= Real("a")
        e3 = Real("a") == 100
        e4 = Real("b") > 101
        e5 = Real("a") <= 1
        e6 = Real("a") + 2 > Real("b")
        s.add(And(e1, e2))
        s.add(Implies(e3, e4))
        s.add(Or(e5, e6))

        # self.assertEqual(
        #     set(anded_constraints({"a": 0, "b": 1, "c": -1},
        #                           s.assertions())),
        #     set([e1, e2, e5]))
        # self.assertEqual(
        #     set(anded_constraints({"a": 1, "b": 2, "c": 0},
        #                           s.assertions())),
        #     set([e1, e2, e5]))
        # self.assertEqual(
        #     set(anded_constraints({"a": 100, "b": 101.5, "c": -0.5},
        #                           s.assertions())),
        #     set([e1, e2, e3, e4, e6]))

    def test_get_linear_vars(self):
        self.assertEqual(
            get_linear_vars(Real("a") + 2 * Real("b") - 1),
            LinearVars({"a": 1.0, "b": 2.0}, -1.0)
        )

        self.assertEqual(
            get_linear_vars(Real("a") + 0.5 - (Real("c") + 2 * Real("b")) - 1),
            LinearVars({"a": 1, "b": -2, "c": -1}, -0.5)
        )

    def test_substitute_if(self):
        e = If(Real("a") < Real("b"), Real("a"), Real("b"))
        self.assertEqual(
            substitute_if({"a": 0, "b": 1}, e),
            (Real("a"), [Real("a") < Real("b")])
        )
        self.assertEqual(
            substitute_if({"a": 1, "b": 0}, e),
            (Real("b"), [Real("a") < Real("b")])
        )
        self.assertEqual(
            substitute_if({"a": 1, "b": 1}, Real("c") == e),
            (Real("c") == Real("b"), [Real("a") < Real("b")])
        )
        self.assertEqual(
            substitute_if({"a": 1, "b": 1}, Real("a") + Real("b") >= 0),
            (Real("a") + Real("b") >= 0, [])
        )


if __name__ == "__main__":
    unittest.main()
--------------------------------------------------------------------------------
/test_model.py:
--------------------------------------------------------------------------------
import unittest
from z3 import And, Implies, Or

from config import ModelConfig
from model import Variables, calculate_qdel, initial, monotone, make_solver, \
    network, relate_tot
from pyz3_utils import MySolver


class TestModel(unittest.TestCase):
    def test_tot_inp_out(self):
        for N in [1, 2]:
            c = ModelConfig.default()
            c.N = N
            s = MySolver()
            v = Variables(c, s)
            monotone(c, s, v)
            relate_tot(c, s, v)

            conds = []
            for t in range(1, c.T):
                conds.append(v.A[t] < v.A[t-1])
                conds.append(v.L[t] < v.L[t-1])
                conds.append(v.S[t] < v.S[t-1])
                conds.append(v.A[t] - v.L[t]
                             < v.A[t-1] - v.L[t-1])
            s.add(Or(*conds))

            sat = s.check()
            self.assertEqual(str(sat), "unsat")

    def test_service_extreme(self):
        def csv():
            c = ModelConfig.default()
            s = MySolver()
            v = Variables(c, s)
            monotone(c, s, v)
            network(c, s, v)
            return (c, s, v)

        c, s, v = csv()
        s.add(v.S[c.T-1] - v.S[0] > c.C * c.T)
        sat = s.check()
        self.assertEqual(str(sat), "unsat")

        c, s, v = csv()
        s.add(v.S[c.T-1] - v.S[0] == c.C * c.T)
        sat = s.check()
        self.assertEqual(str(sat), "sat")

        c, s, v = csv()
        s.add(v.S[c.T-1] - v.S[0] < 0)
        sat = s.check()
        self.assertEqual(str(sat), "unsat")

        c, s, v = csv()
        s.add(v.S[c.T-1] - v.S[0] == 0)
        sat = s.check()
        self.assertEqual(str(sat), "sat")

    def test_cca_const(self):
        c = ModelConfig.default()
        c.cca = "const"
        for cwnd in [c.C * c.R / 2, c.C * c.R, 2 * c.C * c.R, 3 * c.C * c.R]:
            util_bound = max((c.T-1) * cwnd / (c.R + c.D), c.T*c.C)
            c.alpha = cwnd

            s, v = make_solver(c)
            s.add(v.S[c.T-1] - v.S[0] > util_bound)
            s.add(v.A[0] == v.S[0])
            sat = s.check()
            self.assertEqual(str(sat), "unsat")

            s, v = make_solver(c)
            s.add(v.S[c.T-1] - v.S[0] <= util_bound)
            s.add(v.A[0] == v.S[0])
            sat = s.check()
            self.assertEqual(str(sat), "sat")

    def test_qdel(self):
        c = ModelConfig.default()
        c.calculate_qdel = True
        s = MySolver()
        v = Variables(c, s)

        monotone(c, s, v)
        initial(c, s, v)
        relate_tot(c, s, v)
        network(c, s, v)
        calculate_qdel(c, s, v)

        # The only case where two dts can be true at the same time is when we
        # don't get any fresh acks
        conds = []
        for t in range(c.T):
            for dt1 in range(c.T):
                for dt2 in range(c.T):
                    if dt1 == dt2:
                        continue
                    conds.append(And(v.qdel[t][dt1], v.qdel[t][dt2]))
                    # s.add(Implies(
                    #     And(v.qdel[t][dt1], v.qdel[t][dt2]),
                    #     v.S[t]))
        s.add(Or(*conds))
        sat = s.check()

        self.assertEqual(str(sat), "unsat")


if __name__ == '__main__':
    unittest.main()
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
from fractions import Fraction
from typing import Callable, Dict, Union
import z3

from config import ModelConfig
from pyz3_utils import BinarySearch, MySolver

ModelDict = Dict[str, Union[Fraction, bool]]


def model_to_dict(model: z3.ModelRef) -> ModelDict:
    ''' Utility function that takes a z3 model and extracts its variables to a
    dict'''
    decls = model.decls()
    res: ModelDict = {}
    for d in decls:
        val = model[d]
        if type(val) == z3.BoolRef:
            res[d.name()] = bool(val)
        elif type(val) == z3.IntNumRef:
            res[d.name()] = Fraction(val.as_long())
        else:
            # Assume it is numeric
            res[d.name()] = val.as_fraction()
    return res


def make_periodic(c, s, v, dur: int):
    '''A utility function that makes the solution periodic. A periodic solution
    means the same pattern can repeat indefinitely. If we don't make it
    periodic, CCAC might output examples where the cwnd is very low to begin
    with, and *therefore* the utilization is low. If we are looking for
    examples that hold in steady-state, then making things periodic is an easy
    way to do that.

    `dur` is the number of timesteps for which the cwnd of our CCA is
    arbitrary.
    They are arbitrary to ensure the solver can pick any initial
    conditions. For AIMD dur=1, for Copa dur=c.R+c.D, for BBR dur=2*c.R

    '''
    s.add(v.A[-1] - v.L[-1] - (c.C * (c.T - 1) - v.W[-1]) == v.A[0] - v.L[0] -
          (-v.W[0]))
    for n in range(c.N):
        s.add(v.A_f[n][-1] - v.L_f[n][-1] - v.S_f[n][-1] == v.A_f[n][0] -
              v.L_f[n][0] - v.S_f[n][0])
        s.add(v.L_f[n][-1] - v.Ld_f[n][-1] == v.L_f[n][0] - v.Ld_f[n][0])
        for dt in range(dur):
            s.add(v.c_f[n][c.T - 1 - dt] == v.c_f[n][dur - 1 - dt])
            s.add(v.r_f[n][c.T - 1 - dt] == v.r_f[n][dur - 1 - dt])


# Note: this relies on `cache.run_query` (the query cache described in the
# README) and a `sat_to_val` helper, which map "sat"/"unsat"/"unknown" to the
# values BinarySearch expects; they must be in scope when find_bound is called
def find_bound(model_cons: Callable[[ModelConfig, float], MySolver],
               cfg: ModelConfig, search: BinarySearch, timeout: float):
    while True:
        thresh = search.next_pt()
        if thresh is None:
            break
        s = model_cons(cfg, thresh)

        print(f"Testing threshold = {thresh}")
        qres = cache.run_query(s, cfg, timeout=timeout)

        print(qres.satisfiable)
        search.register_pt(thresh, sat_to_val(qres.satisfiable))
    return search.get_bounds()
--------------------------------------------------------------------------------
/variables.py:
--------------------------------------------------------------------------------
from typing import Any, List, Optional, Tuple

from config import ModelConfig
from pyz3_utils import MySolver
import pyz3_utils


class Variables(pyz3_utils.Variables):
    ''' Some variables that everybody uses '''

    def __init__(self, c: ModelConfig, s: MySolver,
                 name: Optional[str] = None):
        T = c.T

        # Add a prefix to all names so we can have multiple Variables instances
        # in one solver
        if name is None:
            pre = ""
        else:
            pre = name + "__"
        self.pre = pre

        # Naming convention: X_f denotes per-flow values (note, we only study
        # the single-flow case in the paper)

        # Cumulative number of bytes sent by flow n till time t
        self.A_f = [[s.Real(f"{pre}arrival_{n},{t}") for t in range(T)]
                    for n in range(c.N)]
        # Sum of A_f across all flows
        self.A = [s.Real(f"{pre}tot_arrival_{t}") for t in range(T)]
        # Congestion window for flow n at time t
        self.c_f = [[s.Real(f"{pre}cwnd_{n},{t}") for t in range(T)]
                    for n in range(c.N)]
        # Pacing rate for flow n at time t
        self.r_f = [[s.Real(f"{pre}rate_{n},{t}") for t in range(T)]
                    for n in range(c.N)]
        # Cumulative number of losses detected (by duplicate acknowledgements
        # or timeout) by flow n till time t
        self.Ld_f = [[s.Real(f"{pre}loss_detected_{n},{t}")
                      for t in range(T)]
                     for n in range(c.N)]
        # Cumulative number of bytes served from the server for flow n till
        # time t. The acks corresponding to these bytes will reach the sender
        # at time t+c.R
        self.S_f = [[s.Real(f"{pre}service_{n},{t}") for t in range(T)]
                    for n in range(c.N)]
        # Sum of S_f across all flows
        self.S = [s.Real(f"{pre}tot_service_{t}") for t in range(T)]
        # Cumulative number of bytes lost for flow n till time t
        self.L_f = [[s.Real(f"{pre}losts_{n},{t}") for t in range(T)]
                    for n in range(c.N)]
        # Sum of L_f for all flows
        self.L = [s.Real(f"{pre}tot_lost_{t}") for t in range(T)]
        # Cumulative number of bytes wasted by the server till time t
        self.W = [s.Real(f"{pre}wasted_{t}") for t in range(T)]
        # Whether or not flow n is timing out at time t
        self.timeout_f = [[s.Bool(f"{pre}timeout_{n},{t}") for t in range(T)]
                          for n in range(c.N)]

        # If qdel[t][dt] is true, it means that the bytes exiting at t were
        # input at time t - dt. If out[t] == out[t-1], then qdel[t][dt] ==
        # qdel[t-1][dt], since qdel isn't really defined (since no packets were
        # output), so we default to saying we experience the RTT of the last
        # received packet.

        # This is only computed when calculate_qdel=True since not all CCAs
        # require it. Of the CCAs implemented so far, only Copa requires it
        if c.calculate_qdel:
            self.qdel = [[s.Bool(f"{pre}qdel_{t},{dt}") for dt in range(T)]
                         for t in range(T)]

        # This is for the non-composing model where waste is allowed only when
        # A - L and S come within epsilon of each other. See 'config' for
        # how epsilon can be configured
        if not c.compose:
            self.epsilon = s.Real(f"{pre}epsilon")

        # The number of dupacks that need to arrive before we declare that a
        # loss has occurred by dupacks. Z3 can usually pick any amount. You can
        # also set dupacks = 3 * alpha to emulate the usual behavior
        if c.dupacks is None:
            self.dupacks = s.Real(f"{pre}dupacks")
            s.add(self.dupacks >= 0)
        else:
            self.dupacks = c.dupacks

        # The MSS. Since C=1 (arbitrary units), C / alpha sets the link rate in
        # MSS/timestep. Typically we allow Z3 to pick any value it wants to
        # search through the set of all possible link rates
        if c.alpha is None:
            self.alpha = s.Real(f"{pre}alpha")
            s.add(self.alpha > 0)
        else:
            self.alpha = c.alpha


class VariableNames:
    ''' Class with the same structure as Variables, but with just the names '''
    def __init__(self, v: Variables):
        for x in v.__dict__:
            if type(v.__dict__[x]) == list:
                self.__dict__[x] = self.to_names(v.__dict__[x])
            else:
                self.__dict__[x] = str(v.__dict__[x])

    @classmethod
    def to_names(cls, x: List[Any]):
        res = []
        for y in x:
            if type(y) == list:
                res.append(cls.to_names(y))
            else:
                if type(y) in [bool, int, float, tuple]:
                    res.append(y)
                else:
                    res.append(str(y))
        return res
--------------------------------------------------------------------------------
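A note on the queueing-delay bracketing that `plot.py` performs on a solved model: given the cumulative arrival (A), loss (L) and service (S) curves, it brackets, for each time t, how many timesteps ago the bytes being served at t entered the queue. The sketch below illustrates the same idea standalone; it is hypothetical (the helper name `qdel_bounds` is not part of the code base), uses plain Python numbers instead of Z3 fractions, and omits the carry-forward case where no new bytes are served (`S[t] == S[t-1]`):

```python
def qdel_bounds(A, L, S):
    """Bracket the queueing delay at each time t. The bytes leaving the queue
    at time t (cumulative service S[t]) entered it when cumulative arrivals
    net of losses (A - L) reached S[t]; dt measures how long ago that was."""
    T = len(S)
    net = [a - l for a, l in zip(A, L)]  # cumulative bytes entering the queue
    low, high = [], []
    for t in range(T):
        for dt in range(t + 1):
            i = t - dt
            # Exact match: the bytes leaving at t entered exactly at i = t - dt
            if net[i] == S[t] and (i == 0 or net[i - 1] != net[i]):
                low.append(dt)
                high.append(dt)
                break
            # Bracketed: they entered somewhere between times i - 1 and i
            if i >= 1 and net[i - 1] < S[t] < net[i]:
                low.append(dt)
                high.append(dt + 1)
                break
        else:
            # They entered before time 0; only a lower bound of t is known
            low.append(t)
            high.append(float("inf"))
    return low, high
```

The `inf` in the upper bound mirrors the `1e9` sentinel that `plot.py` uses when only a lower bound on the delay is known.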