├── TVMfuzz ├── __init__.py ├── colors.py ├── utils.py ├── getAST.py └── LICENSE ├── .gitignore ├── INSTALL.pdf ├── google1ff9623bb489f1ce.html ├── dataset ├── dataset.xlsx └── drawing_script.R ├── A Comprehensive Study of Deep Learning Compiler Bugs.pdf ├── buggyFile ├── 1.py ├── 2.py ├── 3.py ├── 8.py ├── 6.py └── 7.py ├── REQUIREMENTS.txt ├── tests ├── test_typecall.py ├── test_ir_well_formed.py ├── test_op_fast_math.py ├── test_simplify_fc_transpose.py ├── test_pass_fast_math.py ├── test_pass_simplify_expr.py ├── test_pass_to_graph_normal_form.py ├── test_pass_lambda_lift.py ├── test_type_functor.py ├── test_pass_eta_expand.py ├── test_pass_simplify_inference.py ├── test_dynamic_op_level5.py ├── test_dynamic_op_level6.py ├── test_analysis_get_calibration_data.py ├── test_sparse_dense_convert.py ├── test_analysis_extract_fused_functions.py ├── test_dynamic_op_level4.py ├── test_memory_passes.py ├── test_pass_to_cps.py ├── test_cpp_build_module.py ├── test_annotated_regions.py ├── test_op_qnn_quantize.py ├── test_call_graph.py ├── test_backend_interpreter.py ├── test_tuning.py ├── test_pass_mac_count.py ├── test_op_qnn_subtract.py ├── test_dynamic_op_level10.py ├── test_pass_legalize.py ├── test_analysis_basic_block_normal_form.py ├── test_json_compact.py ├── test_op_qnn_concatenate.py ├── test_pass_defunctionalization.py ├── test_pass_to_a_normal_form.py ├── test_pass_context_analysis.py ├── test_external_codegen.py ├── test_dynamic_op_level3.py ├── test_op_qnn_mul.py ├── test_pass_check_kind.py └── test_dynamic_op_level2.py ├── run.py ├── STATUS.md ├── README.md └── LICENSE /TVMfuzz/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | byproduct/ -------------------------------------------------------------------------------- /INSTALL.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ShenQingchao/DLCstudy/HEAD/INSTALL.pdf -------------------------------------------------------------------------------- /google1ff9623bb489f1ce.html: -------------------------------------------------------------------------------- 1 | google-site-verification: google1ff9623bb489f1ce.html -------------------------------------------------------------------------------- /dataset/dataset.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ShenQingchao/DLCstudy/HEAD/dataset/dataset.xlsx -------------------------------------------------------------------------------- /A Comprehensive Study of Deep Learning Compiler Bugs.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ShenQingchao/DLCstudy/HEAD/A Comprehensive Study of Deep Learning Compiler Bugs.pdf -------------------------------------------------------------------------------- /buggyFile/1.py: -------------------------------------------------------------------------------- 1 | # v0.7 Type Problem High-Level IR Transformation fixed 2 | from tvm import relay 3 | import tvm 4 | cObhB=relay.var('x') 5 | B28Qg=relay.var('xjym') 6 | Xzba1=relay.add(cObhB,B28Qg) 7 | pDL7X=relay.Function([cObhB,B28Qg],Xzba1) 8 | xqvpP=tvm.IRModule.from_expr(pDL7X) 9 | 
-------------------------------------------------------------------------------- /buggyFile/2.py: -------------------------------------------------------------------------------- 1 | # v0.7 Type Problem High-Level IR Transformation fixed 2 | import tvm 3 | from tvm import relay 4 | 5 | M7BIu=tvm.IRModule() 6 | YDc6p=relay.GlobalTypeVar('box') 7 | fzeFg=relay.TypeVar('') 8 | ErplN=relay.TypeData(YDc6p,[fzeFg],[]) 9 | M7BIu[YDc6p]=ErplN 10 | M7BIu[YDc6p]=ErplN 11 | 12 | 13 | -------------------------------------------------------------------------------- /buggyFile/3.py: -------------------------------------------------------------------------------- 1 | # v0.7 Tensor Shape Problem High-Level IR Transformation fixed 2 | import tvm 3 | import tvm.relay.transform as transform 4 | from tvm import relay 5 | 6 | zqKSk=relay.var('y2','uint64') 7 | hNVPr=relay.split(zqKSk,3,axis=0) 8 | Gko2H=hNVPr.astuple() 9 | QLiMS=relay.Function([zqKSk],Gko2H) 10 | EeNPI=tvm.IRModule() 11 | EeNPI['main']=QLiMS -------------------------------------------------------------------------------- /TVMfuzz/colors.py: -------------------------------------------------------------------------------- 1 | 2 | 3 | def Red(string): 4 | return '\033[1;31m' + string + '\033[0m' 5 | 6 | def Green(string): 7 | return '\033[1;32m' + string + '\033[0m' 8 | 9 | def Yellow(string): 10 | return '\033[1;33m' + string + '\033[0m' 11 | 12 | def Blue(string): 13 | return '\033[1;34m' + string + '\033[0m' 14 | 15 | def Magenta(string): 16 | return '\033[1;35m' + string + '\033[0m' 17 | 18 | def Cyan(string): 19 | return '\033[1;36m' + string + '\033[0m' -------------------------------------------------------------------------------- /REQUIREMENTS.txt: -------------------------------------------------------------------------------- 1 | Hardware environment requirements: 2 | * 4 GB memory 3 | * 4 GB storage 4 | 5 | 6 | Software environment requirements: 7 | * Docker 8 | * R language 9 | * RStudio (version 4.0.3 (2020-10-10) is recommended) 10 | 11 | Detailed information: 12 | To run the artifact, a machine with at least 4 GB of RAM and 4 GB of storage is needed. 13 | In addition, Docker and R need to be installed on the machine. 14 | Specifically, Docker is used to run TVMFuzz, while R and RStudio are required to run the drawing script. 
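(Illustrative usage sketch only; the authoritative, step-by-step commands are given in the INSTALL file. The image name and the location of the provided docker file are assumptions, not documented values.) docker build -t tvmfuzz-artifact <directory containing the provided docker file> # build the container image; docker run -it tvmfuzz-artifact # inside the container, TVMFuzz is driven by: python run.py; Rscript dataset/drawing_script.R # or open the script in RStudio to reproduce the figures from dataset/dataset.xlsx. 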
15 | -------------------------------------------------------------------------------- /buggyFile/8.py: -------------------------------------------------------------------------------- 1 | 2 | import tvm.topi.testing 3 | import tvm.relay.transform 4 | import tvm.relay.testing 5 | import tvm 6 | import tvm.relay as relay 7 | import pytest 8 | import tvm.testing 9 | from tvm.tir.expr import * 10 | from tvm.relay.dataflow_pattern import * 11 | 12 | warning=pytest.raises(tvm.error.DiagnosticError) 13 | 14 | def assert_graph_equal(lhs, rhs): 15 | tvm.ir.assert_structural_equal(lhs, rhs, map_free_vars=True) 16 | 17 | 18 | def graph_equal(lhs, rhs): 19 | return tvm.ir.structural_equal(lhs, rhs, map_free_vars=True) 20 | 21 | 22 | def roundtrip_expr(expr): 23 | text = tvm.relay.Expr.astext(expr, show_meta_data=False) 24 | x = tvm.parser.parse_expr(text) 25 | assert_graph_equal(x, expr) 26 | 27 | 28 | def roundtrip(expr): 29 | x = tvm.parser.fromtext(expr.astext()) 30 | assert_graph_equal(x, expr) 31 | 32 | 33 | def parse_text(code): 34 | expr = tvm.parser.parse_expr(code) 35 | roundtrip_expr(expr) 36 | return expr 37 | 38 | with warning: 39 | res=parse_text('''meta[random_entry][15123]''') 40 | 41 | -------------------------------------------------------------------------------- /tests/test_typecall.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | from tvm import relay 4 | from tvm.relay import transform 5 | 6 | 7 | def test_dup_type(): 8 | a = relay.TypeVar("a") 9 | av = relay.Var("av", a) 10 | make_id = relay.Function([av], relay.Tuple([av, av]), None, [a]) 11 | t = relay.scalar_type("float32") 12 | b = relay.Var("b", t) 13 | mod = tvm.IRModule.from_expr(make_id(b)) 14 | mod = transform.InferType()(mod) 15 | inferred = mod["main"].body 16 | assert inferred.checked_type == relay.TupleType([t, t]) 17 | 18 | 19 | def test_id_type(): 20 | mod = tvm.IRModule() 21 | id_type = relay.GlobalTypeVar("id") 22 | a = relay.TypeVar("a") 23 | mod[id_type] = relay.TypeData(id_type, [a], []) 24 | 25 | b = relay.TypeVar("b") 26 | make_id = relay.Var("make_id", relay.FuncType([b], id_type(b), [b])) 27 | t = relay.scalar_type("float32") 28 | b = relay.Var("b", t) 29 | mod["main"] = relay.Function([make_id, b], make_id(b)) 30 | mod = transform.InferType()(mod) 31 | assert mod["main"].body.checked_type == id_type(t) -------------------------------------------------------------------------------- /buggyFile/6.py: -------------------------------------------------------------------------------- 1 | # 0.8 dev Incorrect Exception Handling High-Level IR Transformation not fixed 2 | from tvm.relay import create_executor 3 | import tvm.relay as relay 4 | import tvm.topi.testing 5 | import numpy as np 6 | import tvm 7 | from tvm.relay.dataflow_pattern import * 8 | import tvm.testing 9 | from tvm.tir.expr import * 10 | from tvm.relay.testing import run_infer_type 11 | 12 | def C(x): 13 | return relay.expr.const(x, 'float32') 14 | 15 | def approx_exp(x): 16 | x = relay.minimum(relay.maximum(x, C((- 88.0))), C(88.0)) 17 | x = (C(127.0) + (x * C(1.44269504))) 18 | xf = relay.floor(x) 19 | i = relay.cast(xf, 'int32') 20 | x = (x - xf) 21 | Y = (C(0.99992522) + (x * (C(0.69583354) + (x * (C(0.22606716) + (x * C(0.078024523))))))) 22 | exponent = relay.left_shift(i, relay.expr.const(23, 'int32')) 23 | exponent = relay.reinterpret(exponent, 'float32') 24 | return (exponent * Y) 25 | 26 | def approximate_sigmoid(x): 27 | y = approx_exp(x) 28 | return (y / (y + 
C(1.0))) 29 | 30 | def approximate_tanh(x): 31 | x = (x * C(2.0)) 32 | y = approx_exp(x) 33 | return ((y - C(1.0)) / (y + C(1.0))) 34 | 35 | a = relay.var('a', relay.TensorType((1000,), 'float32')) 36 | y = approximate_sigmoid(a) 37 | yy = run_infer_type(y) 38 | data = np.linspace((- 5), 5, 100).astype('float32') 39 | intrp = create_executor() 40 | op_res = intrp.evaluate(y, {a: relay.const(data)}) 41 | -------------------------------------------------------------------------------- /run.py: -------------------------------------------------------------------------------- 1 | import ast 2 | from TVMfuzz.elements import ingredient 3 | import os 4 | from TVMfuzz.colors import * 5 | from TVMfuzz.elements import * 6 | import random 7 | 8 | if not os.path.exists('byproduct'): 9 | import platform 10 | osType = platform.system() 11 | if osType == 'Windows': 12 | os.makedirs('byproduct') 13 | 14 | elif osType == 'Linux': 15 | os.makedirs('byproduct') 16 | 17 | record_path = 'byproduct/astTree.txt' 18 | 19 | dir = 'tests/' 20 | print(Red('dir: '+ dir)) 21 | filelist = os.listdir(dir) 22 | fileID = 0 23 | os.environ['funcID'] = str(0) 24 | os.environ['isFunc'] = str('False') 25 | 26 | import random 27 | random.shuffle(filelist) 28 | 29 | for file in filelist: 30 | 31 | helperStatDef_global.clear() 32 | helperStatDef_local.clear() 33 | funcDefs.clear() 34 | 35 | fileID += 1 36 | os.environ['fileID'] = str(fileID) 37 | from TVMfuzz.getAST import * 38 | 39 | file_path = dir + file 40 | 41 | with open(file_path, 'r') as source: 42 | tree_node = ast.parse(source.read()) 43 | 44 | with open(record_path, 'w') as astTree: 45 | astTree.write(ast.dump(tree_node, indent=2)) 46 | 47 | NodeTransformer().visit(tree_node) 48 | 49 | f = open('byproduct/log.txt', 'w') 50 | for ing in ingredient: 51 | f.write('~~~~~~~~~~~~~~~~~~~~\n') 52 | f.write(str(ing) + '\n') 53 | 54 | from TVMfuzz.generation import generate 55 | generate() 56 | -------------------------------------------------------------------------------- /STATUS.md: -------------------------------------------------------------------------------- 1 | **Apply for the badges:** Artifacts Evaluated & Artifacts Available 2 | 3 | **The reasons why we believe that the artifact deserves the "Artifact Available" and "Artifact Evaluated" badges:** 4 | 5 | Our artifact contains all relevant files and code and is publicly available. It can easily be replicated. Therefore, we apply for the badges "Artifact Available" and "Artifact Evaluated". 6 | 7 | A public GitHub repository is used to store the artifacts, which contain all required materials. In addition, we use Zenodo to archive our GitHub repository releases. Therefore, we believe that the artifact deserves the "Artifact Available" badge. 8 | 9 | Our artifact has two components: the labeled dataset from our empirical study and our bug detection tool TVMFuzz. The dataset provides the basic support for our paper. It contains the basic information of the 603 studied bugs and the extra information that we manually labeled. To reproduce the figures based on the dataset in our paper, a drawing script is provided. TVMFuzz is a tool we designed to fuzz TVM. To facilitate the reproduction process, we also provide a Docker file. The detailed installation guide, usage method, and result analysis are all included in the INSTALL file. In addition, the artifact is open-sourced under the Apache-2.0 License. 10 | With these artifacts, other researchers can easily replicate our work. 
Therefore, we believe that the artifact deserves the "Artifact Evaluated " badge. 11 | 12 | -------------------------------------------------------------------------------- /tests/test_ir_well_formed.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | from tvm import relay 4 | from tvm.relay.analysis import well_formed 5 | from tvm.relay.prelude import Prelude 6 | 7 | 8 | def test_let(): 9 | x = relay.Var("x") 10 | assert well_formed(x) 11 | v = relay.Constant(tvm.nd.array(10)) 12 | ty = None 13 | let = relay.Let(x, v, x) 14 | assert well_formed(let) 15 | assert not well_formed(relay.Let(x, v, let)) 16 | f = relay.Function([x], x, ty) 17 | assert well_formed(f) 18 | assert well_formed(relay.Let(relay.Var("y"), f, relay.Let(relay.Var("z"), f, v))) 19 | 20 | 21 | def test_tuple(): 22 | x = relay.Var("x") 23 | assert well_formed(x) 24 | v = relay.Constant(tvm.nd.array(10)) 25 | let = relay.Let(x, v, x) 26 | assert well_formed(let) 27 | assert well_formed(relay.Tuple([v, v])) 28 | assert not well_formed(relay.Tuple([let, relay.Let(x, v, x)])) 29 | 30 | 31 | def test_tuple_get_item(): 32 | t = relay.Var("t") 33 | assert well_formed(relay.TupleGetItem(t, 2)) 34 | 35 | 36 | def test_adt(): 37 | mod = tvm.IRModule() 38 | p = Prelude(mod) 39 | x = relay.Var("x") 40 | some_case = relay.Clause(relay.PatternConstructor(p.some, [relay.PatternVar(x)]), x) 41 | default_case = relay.Clause(relay.PatternVar(x), x) 42 | m0 = relay.Match(p.none(), [default_case]) 43 | m1 = relay.Match(p.none(), [some_case, default_case]) 44 | assert well_formed(m0) 45 | assert not well_formed(m1) -------------------------------------------------------------------------------- /TVMfuzz/utils.py: -------------------------------------------------------------------------------- 1 | import random 2 | import numpy as np 3 | 4 | def varNameGenerator(oneSet): 5 | name = '' 6 | while True: 7 | space = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 8 | 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'] 9 | space += ([i.upper() for i in space]) 10 | name = ''.join(random.choices(space, k=1)) 11 | space += ['1','2','3','4','5','6','7','8','9','0'] 12 | name += ''.join(random.choices(space, k=4)) 13 | if name not in oneSet: 14 | oneSet.add(name) 15 | break 16 | return name 17 | 18 | def levenshtein(seq1, seq2): 19 | size_x = len(seq1) + 1 20 | size_y = len(seq2) + 1 21 | matrix = np.zeros((2, size_y)) 22 | for x in range(2): 23 | matrix [x, 0] = x 24 | for y in range(size_y): 25 | matrix [0, y] = y 26 | 27 | for x in range(1, size_x): 28 | if x >= 2: 29 | matrix[x%2, 0] = x 30 | for y in range(1, size_y): 31 | if seq1[x-1] == seq2[y-1]: 32 | matrix [x%2,y] = min( 33 | matrix[(x-1)%2, y] + 1, 34 | matrix[(x-1)%2, y-1], 35 | matrix[x%2, y-1] + 1 36 | ) 37 | else: 38 | matrix [x%2,y] = min( 39 | matrix[(x-1)%2,y] + 1, 40 | matrix[(x-1)%2,y-1] + 1, 41 | matrix[x%2,y-1] + 1 42 | ) 43 | return (matrix[(size_x - 1)%2, size_y - 1]) 44 | -------------------------------------------------------------------------------- /buggyFile/7.py: -------------------------------------------------------------------------------- 1 | # 0.8 dev Incorrect Exception Handling High-Level IR Transformation not fixed 2 | import tvm.testing 3 | import tvm.relay.testing 4 | import pytest 5 | import tvm.relay.transform 6 | from tvm.tir.expr import * 7 | from tvm.relay.dataflow_pattern import * 8 | import tvm.topi.testing 9 | import tvm.relay as relay 10 | 11 | 12 | 
warning=pytest.raises(tvm.error.DiagnosticError) 13 | 14 | SEMVER = '#[version = "0.0.5"]\n' 15 | 16 | def assert_graph_equal(lhs, rhs): 17 | tvm.ir.assert_structural_equal(lhs, rhs, map_free_vars=True) 18 | 19 | 20 | def graph_equal(lhs, rhs): 21 | return tvm.ir.structural_equal(lhs, rhs, map_free_vars=True) 22 | 23 | def roundtrip_expr(expr): 24 | text = tvm.relay.Expr.astext(expr, show_meta_data=False) 25 | x = tvm.parser.parse_expr(text) 26 | assert_graph_equal(x, expr) 27 | 28 | def roundtrip(expr): 29 | x = tvm.parser.fromtext(expr.astext()) 30 | assert_graph_equal(x, expr) 31 | 32 | def parse_text(code): 33 | expr = tvm.parser.parse_expr(code) 34 | roundtrip_expr(expr) 35 | return expr 36 | 37 | def parses_as(code, expr): 38 | parsed = parse_text(code) 39 | result = graph_equal(parsed, expr) 40 | return result 41 | 42 | def parse_module(code): 43 | mod = tvm.parser.parse((SEMVER + code)) 44 | roundtrip(mod) 45 | return mod 46 | 47 | with warning: 48 | parse_module((''' 49 | %s 50 | 51 | type List[A] { 52 | Cons(A, List[A]), 53 | Nil, 54 | } 55 | ''' % ''' 56 | type List[A] { 57 | Cons(A, List[A]), 58 | Nil, 59 | } 60 | ''')) 61 | 62 | -------------------------------------------------------------------------------- /tests/test_op_fast_math.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import scipy 3 | from scipy import special 4 | import tvm 5 | import tvm.testing 6 | import tvm.relay as relay 7 | from tvm import topi 8 | from tvm import te 9 | from tvm.contrib import graph_runtime 10 | 11 | 12 | def test_fastmath(): 13 | def test_apply(relay_op, name, f_numpy, low, high, step, dtype="float32"): 14 | a_np = np.arange(low, high, step).astype(dtype) 15 | b_np = f_numpy(a_np) 16 | 17 | x = relay.var("x", shape=a_np.shape, dtype="float32") 18 | y = relay_op(x) 19 | func = relay.Function([x], y) 20 | mod = tvm.IRModule.from_expr(func) 21 | 22 | with tvm.transform.PassContext(opt_level=3, required_pass=["FastMath"]): 23 | graph, lib, params = relay.build(mod, target="llvm", params=None) 24 | 25 | # Check that the op related to fast math have been convered to function in lib 26 | func_name = "fused_" + name 27 | assert lib.get_function(func_name) 28 | 29 | ctx = tvm.cpu(0) 30 | m = graph_runtime.create(graph, lib, ctx) 31 | # Set inputs 32 | m.set_input("x", tvm.nd.array(a_np, ctx)) 33 | m.set_input(**params) 34 | # Execute 35 | m.run() 36 | # Get outputs 37 | tvm_output = m.get_output(0) 38 | tvm.testing.assert_allclose(tvm_output.asnumpy(), b_np, rtol=1e-5, atol=1e-5) 39 | 40 | test_apply(relay.exp, "fast_exp", np.exp, low=-88, high=88, step=0.01) 41 | test_apply(relay.erf, "fast_erf", scipy.special.erf, low=-10, high=10, step=0.01) 42 | test_apply(relay.tanh, "fast_tanh", np.tanh, low=-10, high=10, step=0.01) 43 | -------------------------------------------------------------------------------- /tests/test_simplify_fc_transpose.py: -------------------------------------------------------------------------------- 1 | import itertools 2 | 3 | import numpy as np 4 | import scipy.sparse as sp 5 | 6 | 7 | import tvm 8 | from tvm.ir import IRModule 9 | from tvm import relay 10 | from tvm.relay.data_dep_optimization import simplify_fc_transpose 11 | 12 | 13 | def run_func(func, params, x): 14 | with tvm.transform.PassContext(opt_level=3): 15 | lib = relay.build(func, "llvm", params=params) 16 | 17 | from tvm.contrib import graph_runtime 18 | 19 | ctx = tvm.cpu(0) 20 | dtype = "float32" 21 | m = 
graph_runtime.GraphModule(lib["default"](ctx)) 22 | # set inputs 23 | m.set_input("data", tvm.nd.array(x.astype(dtype))) 24 | # execute 25 | m.run() 26 | # get outputs 27 | tvm_output = m.get_output(0) 28 | return tvm_output.asnumpy() 29 | 30 | 31 | def test_simplify_fc_transpose(): 32 | data = relay.var("data", shape=(1, 32), dtype="float32") 33 | x = relay.nn.relu(data) 34 | w1 = relay.var("w1", shape=(32, 64), dtype="float32") 35 | y = relay.nn.dense(x, relay.transpose(w1, axes=[1, 0])) 36 | z = relay.nn.relu(y) 37 | w2 = relay.var("w2", shape=(64, 16), dtype="float32") 38 | zz = relay.nn.dense(z, relay.transpose(w2, axes=[1, 0])) 39 | func = relay.Function(relay.analysis.free_vars(zz), zz) 40 | params = { 41 | "w1": tvm.nd.array(np.random.uniform(-1, 1, (32, 64)).astype("float32")), 42 | "w2": tvm.nd.array(np.random.uniform(-1, 1, (64, 16)).astype("float32")), 43 | } 44 | x_np = np.random.randn(1, 32).astype("float32") 45 | old_result = run_func(func, params, x_np) 46 | 47 | new_func, new_params = simplify_fc_transpose.convert(func, params) 48 | new_result = run_func(new_func, new_params, x_np) 49 | np.testing.assert_allclose(old_result, new_result, atol=1e-5, rtol=1e-5) -------------------------------------------------------------------------------- /tests/test_pass_fast_math.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm.ir import IRModule 3 | from tvm import relay 4 | from tvm.relay.transform import FastMath 5 | 6 | 7 | def test_exp(): 8 | x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32") 9 | y = relay.exp(x) 10 | func = relay.Function([x], y) 11 | mod = tvm.IRModule.from_expr(func) 12 | 13 | fast_mod = FastMath()(mod) 14 | assert "fast_exp" in fast_mod.astext() 15 | 16 | # Check that FastMath option works for relay.build. 17 | with tvm.transform.PassContext(opt_level=3, required_pass=["FastMath"]): 18 | fast_mod = relay.optimize(mod, target="llvm", params=None) 19 | assert "fast_exp" in fast_mod[0].astext() 20 | 21 | 22 | def test_tanh(): 23 | x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32") 24 | y = relay.tanh(x) 25 | func = relay.Function([x], y) 26 | mod = tvm.IRModule.from_expr(func) 27 | 28 | fast_mod = FastMath()(mod) 29 | assert "fast_tanh" in fast_mod.astext() 30 | 31 | # Check that FastMath option works for relay.build. 32 | with tvm.transform.PassContext(opt_level=3, required_pass=["FastMath"]): 33 | fast_mod = relay.optimize(mod, target="llvm", params=None) 34 | assert "fast_tanh" in fast_mod[0].astext() 35 | 36 | 37 | def test_erf(): 38 | x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32") 39 | y = relay.erf(x) 40 | func = relay.Function([x], y) 41 | mod = tvm.IRModule.from_expr(func) 42 | 43 | fast_mod = FastMath()(mod) 44 | assert "fast_erf" in fast_mod.astext() 45 | 46 | # Check that FastMath option works for relay.build. 
47 | with tvm.transform.PassContext(opt_level=3, required_pass=["FastMath"]): 48 | fast_mod = relay.optimize(mod, target="llvm", params=None) 49 | assert "fast_erf" in fast_mod[0].astext() -------------------------------------------------------------------------------- /tests/test_pass_simplify_expr.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import relay 3 | from tvm.relay import transform 4 | from tvm.relay.testing import run_opt_pass 5 | 6 | 7 | def test_simplify_reshape(): 8 | def before(): 9 | x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32") 10 | w = relay.var("w", shape=(32, 16, 3, 3), dtype="float32") 11 | y = relay.nn.conv2d(x, w, padding=(1, 1)) 12 | y = relay.reshape(y, newshape=(1, 16, -1)) 13 | y = relay.reshape(y, newshape=(4, 8, -1, 16)) 14 | y = relay.reverse_reshape(y, newshape=(32, 0, -1)) 15 | return relay.Function([x, w], y) 16 | 17 | def expected(): 18 | x = relay.var("x", shape=(1, 16, 16, 16), dtype="float32") 19 | w = relay.var("w", shape=(32, 16, 3, 3), dtype="float32") 20 | y = relay.nn.conv2d(x, w, padding=(1, 1)) 21 | y = relay.reshape(y, newshape=(32, 16, 16)) 22 | return relay.Function([x, w], y) 23 | 24 | def symbolic(): 25 | b = tvm.te.size_var("b") 26 | x = relay.var("x", shape=(b, 16, 16, 16), dtype="float32") 27 | w = relay.var("w", shape=(32, 16, 3, 3), dtype="float32") 28 | y = relay.nn.conv2d(x, w, padding=(1, 1)) 29 | y = relay.reshape(y, newshape=(1, 16, -1)) 30 | y = relay.reshape(y, newshape=(4, 8, -1, 16)) 31 | y = relay.reverse_reshape(y, newshape=(32, 0, -1)) 32 | return relay.Function([x, w], y) 33 | 34 | z = before() 35 | zz = run_opt_pass(z, transform.SimplifyExpr()) 36 | after = run_opt_pass(expected(), transform.InferType()) 37 | assert tvm.ir.structural_equal(zz, after) 38 | 39 | z = symbolic() 40 | zz = run_opt_pass(z, transform.SimplifyExpr()) 41 | after = run_opt_pass(symbolic(), transform.InferType()) 42 | assert tvm.ir.structural_equal(zz, after) -------------------------------------------------------------------------------- /tests/test_pass_to_graph_normal_form.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import relay 4 | from tvm.relay import op, create_executor, transform 5 | from tvm.relay.analysis import Feature 6 | from tvm.relay.analysis import detect_feature 7 | 8 | 9 | def run_opt_pass(expr, opt_pass): 10 | mod = tvm.IRModule.from_expr(expr) 11 | mod = opt_pass(mod) 12 | entry = mod["main"] 13 | return entry if isinstance(expr, relay.Function) else entry.body 14 | 15 | 16 | def check_eval(expr, args, expected_result, mod=None, rtol=1e-07): 17 | if mod is None: 18 | mod = tvm.IRModule() 19 | 20 | ctx = tvm.context("llvm", 0) 21 | intrp = create_executor(mod=mod, ctx=ctx, target="llvm") 22 | 23 | result = intrp.evaluate(expr)(*args) 24 | np.testing.assert_allclose(result.asnumpy(), expected_result, rtol=rtol) 25 | 26 | 27 | def test_implicit_share(): 28 | x = relay.Var("x") 29 | y = relay.Var("y") 30 | z = relay.Var("z") 31 | body = relay.Let(z, op.add(y, y), op.add(z, z)) 32 | body = relay.Let(y, op.add(x, x), body) 33 | f = relay.Function([], relay.Let(x, relay.const(1), body)) 34 | g = run_opt_pass(f, transform.ToGraphNormalForm()) 35 | assert Feature.fLet in detect_feature(f) 36 | assert not Feature.fLet in detect_feature(g) 37 | check_eval(f, [], 8.0) 38 | check_eval(g, [], 8.0) 39 | 40 | 41 | def test_round_trip(): 42 | x = relay.Var("x") 43 | y = 
relay.Var("y") 44 | z = relay.Var("z") 45 | body = relay.Let(z, op.add(y, y), op.add(z, z)) 46 | body = relay.Let(y, op.add(x, x), body) 47 | f = relay.Function([], relay.Let(x, relay.const(1), body)) 48 | g = run_opt_pass(f, transform.ToGraphNormalForm()) 49 | h = run_opt_pass(g, transform.ToANormalForm()) 50 | assert Feature.fLet in detect_feature(f) 51 | assert not Feature.fLet in detect_feature(g) 52 | check_eval(f, [], 8.0) 53 | check_eval(g, [], 8.0) 54 | check_eval(h, [], 8.0) -------------------------------------------------------------------------------- /tests/test_pass_lambda_lift.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | 4 | import tvm 5 | from tvm import te 6 | from tvm import relay 7 | from tvm.relay import transform 8 | 9 | 10 | def test_basic(): 11 | mod = tvm.IRModule() 12 | x2 = relay.var("x2", shape=(10, 5)) 13 | y2 = relay.var("y2", shape=(1, 5)) 14 | level2_func = relay.Function([x2, y2], relay.op.add(x2, y2)) 15 | 16 | x1 = relay.var("x1", shape=(10, 5)) 17 | y1 = relay.var("y1", shape=(1, 5)) 18 | level1_func = relay.Function([x1, y1], level2_func(x1, y1)) 19 | 20 | mod["main"] = level1_func 21 | new_mod = transform.LambdaLift()(mod) 22 | assert len(new_mod.functions) == 2 23 | 24 | 25 | def test_closure(): 26 | mod = tvm.IRModule() 27 | 28 | x = relay.var("x", shape=(2,)) 29 | y = relay.var("y", shape=(2,)) 30 | inner_func = relay.Function([x], x + y) 31 | outer_func = relay.Function([y], inner_func) 32 | clo = outer_func(relay.ones(shape=(2,), dtype="float32")) 33 | mod["main"] = relay.Function([], relay.Call(clo, [relay.zeros(shape=(2,), dtype="float32")])) 34 | 35 | new_mod = transform.LambdaLift()(mod) 36 | assert len(new_mod.functions) == 3 37 | 38 | 39 | def test_recursive(): 40 | mod = tvm.IRModule() 41 | 42 | x = relay.var("x", shape=(2,)) 43 | i = relay.var("i", shape=(), dtype="int32") 44 | s = relay.var("s", shape=(2,)) 45 | cond = i < relay.const(10, dtype="int32") 46 | 47 | loop = relay.var("while_loop") 48 | sb = relay.scope_builder.ScopeBuilder() 49 | with sb.if_scope(cond): 50 | ii = i + relay.const(1, dtype="int32") 51 | ss = s + x 52 | sb.ret(loop(ii, ss)) 53 | with sb.else_scope(): 54 | sb.ret(s) 55 | func = relay.Function([i, s], sb.get()) 56 | 57 | ret = relay.Let( 58 | loop, func, loop(relay.const(0, dtype="int32"), relay.zeros(shape=(2,), dtype="float32")) 59 | ) 60 | mod["main"] = relay.Function([x], ret) 61 | 62 | new_mod = transform.LambdaLift()(mod) 63 | assert len(new_mod.functions) == 2 -------------------------------------------------------------------------------- /tests/test_type_functor.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | from tvm import relay 4 | from tvm.relay import TypeFunctor, TypeMutator, TypeVisitor 5 | from tvm.relay.ty import ( 6 | TypeVar, 7 | IncompleteType, 8 | TensorType, 9 | FuncType, 10 | TupleType, 11 | TypeRelation, 12 | RefType, 13 | GlobalTypeVar, 14 | TypeCall, 15 | ) 16 | from tvm.relay.adt import TypeData 17 | 18 | 19 | def check_visit(typ): 20 | try: 21 | ef = TypeFunctor() 22 | ef.visit(typ) 23 | assert False 24 | except NotImplementedError: 25 | pass 26 | 27 | ev = TypeVisitor() 28 | ev.visit(typ) 29 | 30 | tvm.ir.assert_structural_equal(TypeMutator().visit(typ), typ, map_free_vars=True) 31 | 32 | 33 | def test_type_var(): 34 | tv = TypeVar("a") 35 | check_visit(tv) 36 | 37 | 38 | def test_incomplete_type(): 39 | it = 
IncompleteType() 40 | check_visit(it) 41 | 42 | 43 | def test_tensor_type(): 44 | tt = TensorType([]) 45 | check_visit(tt) 46 | 47 | 48 | def test_func_type(): 49 | tv = TypeVar("tv") 50 | tt = relay.TensorType(tvm.runtime.convert([1, 2, 3]), "float32") 51 | ft = FuncType([tt], tt, type_params=[tv]) 52 | check_visit(ft) 53 | 54 | 55 | def test_tuple_type(): 56 | tt = TupleType([TupleType([])]) 57 | check_visit(tt) 58 | 59 | 60 | def test_type_relation(): 61 | func = tvm.ir.EnvFunc.get("tvm.relay.type_relation.Broadcast") 62 | attrs = tvm.ir.make_node("attrs.TestAttrs", name="attr", padding=(3, 4)) 63 | tp = TypeVar("tp") 64 | tf = FuncType([], TupleType([]), [], []) 65 | tt = TensorType([1, 2, 3], "float32") 66 | tr = TypeRelation(func, [tp, tf, tt], 2, attrs) 67 | 68 | check_visit(tr) 69 | 70 | 71 | def test_ref_type(): 72 | rt = RefType(TupleType([])) 73 | check_visit(rt) 74 | 75 | 76 | def test_global_type_var(): 77 | gtv = GlobalTypeVar("gtv") 78 | check_visit(gtv) 79 | 80 | 81 | def test_type_call(): 82 | tc = TypeCall(GlobalTypeVar("tf"), [TupleType([])]) 83 | check_visit(tc) 84 | 85 | 86 | def test_type_data(): 87 | td = TypeData(GlobalTypeVar("td"), [TypeVar("tv")], []) 88 | check_visit(td) -------------------------------------------------------------------------------- /tests/test_pass_eta_expand.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import numpy as np 4 | 5 | import tvm 6 | from tvm import te 7 | from tvm import relay 8 | import tvm.relay.transform as _transform 9 | 10 | 11 | def test_eta_expand_global_var(): 12 | mod = tvm.parser.fromtext( 13 | r""" 14 | #[version = "0.0.5"] 15 | def @aux(%x: Tensor[(), int32]) -> Tensor[(), int32] { 16 | %x 17 | } 18 | def @main() -> fn(Tensor[(), int32]) -> Tensor[(), int32] { 19 | @aux 20 | } 21 | """ 22 | ) 23 | seq = tvm.transform.Sequential([_transform.EtaExpand(expand_global_var=True)]) 24 | with tvm.transform.PassContext(opt_level=3): 25 | mod = seq(mod) 26 | expected = tvm.parser.fromtext( 27 | r""" 28 | #[version = "0.0.5"] 29 | def @aux(%x: Tensor[(), int32]) -> Tensor[(), int32] { 30 | %x 31 | } 32 | def @main() -> fn(Tensor[(), int32]) -> Tensor[(), int32] { 33 | fn (%x: Tensor[(), int32]) -> Tensor[(), int32] { 34 | @aux(%x) 35 | } 36 | } 37 | """ 38 | ) 39 | tvm.ir.assert_structural_equal(mod["main"], expected["main"], map_free_vars=True) 40 | 41 | 42 | def test_eta_expand_constructor(): 43 | mod = tvm.parser.fromtext( 44 | r""" 45 | #[version = "0.0.5"] 46 | type List[A] { 47 | Cons(A, List[A]), 48 | Nil, 49 | } 50 | def @main[A]() -> fn(A, List[A]) -> List[A] { 51 | Cons 52 | } 53 | """ 54 | ) 55 | seq = tvm.transform.Sequential([_transform.EtaExpand(expand_constructor=True)]) 56 | with tvm.transform.PassContext(opt_level=3): 57 | mod = seq(mod) 58 | expected = tvm.parser.fromtext( 59 | r""" 60 | #[version = "0.0.5"] 61 | type List[A] { 62 | Cons(A, List[A]), 63 | Nil, 64 | } 65 | def @main[A]() -> fn(A, List[A]) -> List[A] { 66 | fn [A](%x: A, %xs: List[A]) -> List[A] { 67 | Cons(%x, %xs) 68 | } 69 | } 70 | """ 71 | ) 72 | tvm.ir.assert_structural_equal(mod["main"], expected["main"], map_free_vars=True) -------------------------------------------------------------------------------- /tests/test_pass_simplify_inference.py: -------------------------------------------------------------------------------- 1 | from tvm.ir import IRModule, structural_equal 2 | from tvm import relay as rly 3 | from tvm.relay.transform import SimplifyInference 4 | 5 | 6 | def 
test_simplify_batchnorm(dtype="float32"): 7 | def simple_bn(x, gamma, beta, moving_mean, moving_var, axis=1, epsilon=1e-5, shape=None): 8 | # expect = (x - moving_mean) / sqrt(moving_var + eps) * gamma + beta 9 | scale = rly.multiply( 10 | rly.const(1, dtype) / rly.sqrt(moving_var + rly.const(epsilon, dtype)), gamma 11 | ) 12 | shift = rly.add(rly.multiply(rly.negative(moving_mean), scale), beta) 13 | num_newaxis = len(shape) - (axis + 1) 14 | if num_newaxis: 15 | scale = rly.expand_dims(scale, axis=1, num_newaxis=num_newaxis) 16 | shift = rly.expand_dims(shift, axis=1, num_newaxis=num_newaxis) 17 | return x * scale + shift 18 | 19 | def check(dim, axis, nstep): 20 | eps = 0.01 21 | ttype1 = rly.TensorType(tuple(10 for i in range(dim)), dtype) 22 | ttype2 = rly.TensorType((10,), dtype) 23 | x = rly.var("x", ttype1) 24 | beta = rly.var("beta", ttype2) 25 | gamma = rly.var("gamma", ttype2) 26 | moving_var = rly.var("moving_var", ttype2) 27 | moving_mean = rly.var("moving_mean", ttype2) 28 | y1, y2 = x, x 29 | 30 | for _ in range(nstep): 31 | y1, _, _ = rly.nn.batch_norm( 32 | y1 + rly.const(1, dtype), 33 | gamma, 34 | beta, 35 | moving_mean, 36 | moving_var, 37 | epsilon=eps, 38 | axis=axis, 39 | ) 40 | y1 = rly.nn.dropout(y1) 41 | y2 = simple_bn( 42 | y2 + rly.const(1, dtype), 43 | gamma, 44 | beta, 45 | moving_mean, 46 | moving_var, 47 | epsilon=eps, 48 | axis=axis, 49 | shape=ttype1.shape, 50 | ) 51 | 52 | mod = IRModule.from_expr(y1) 53 | simplify = SimplifyInference() 54 | mod = simplify(mod) 55 | y1 = mod["main"].body 56 | 57 | assert structural_equal(y1, y2, map_free_vars=True) 58 | 59 | check(2, 1, 1) 60 | check(4, 1, 1) 61 | check(4, 0, 3) -------------------------------------------------------------------------------- /tests/test_dynamic_op_level5.py: -------------------------------------------------------------------------------- 1 | import math 2 | import numpy as np 3 | import tvm 4 | from tvm import te 5 | from tvm import relay 6 | from tvm.relay import transform 7 | from tvm.relay.testing import run_infer_type 8 | import tvm.topi.testing 9 | import tvm.testing 10 | 11 | 12 | def test_resize_infer_type(): 13 | n, c, h, w = te.size_var("n"), te.size_var("c"), te.size_var("h"), te.size_var("w") 14 | x = relay.var("x", relay.TensorType((n, c, h, w), "int8")) 15 | size = relay.var("size", relay.TensorType((2,), "int8")) 16 | z = relay.image.resize(x, size) 17 | zz = run_infer_type(z) 18 | assert zz.checked_type == relay.TensorType((n, c, relay.Any(), relay.Any()), "int8") 19 | 20 | 21 | @tvm.testing.uses_gpu 22 | def test_resize(): 23 | def verify_resize(dshape, scale, method, layout): 24 | if layout == "NHWC": 25 | size = (dshape[1] * scale, dshape[2] * scale) 26 | else: 27 | size = (dshape[2] * scale, dshape[3] * scale) 28 | size = np.array(size).astype("int64") 29 | x_data = np.random.uniform(size=dshape).astype("float32") 30 | if method == "bilinear": 31 | ref_res = tvm.topi.testing.bilinear_resize_python(x_data, size, layout) 32 | else: 33 | ref_res = tvm.topi.testing.upsampling_python(x_data, (scale, scale), layout) 34 | x = relay.var("x", relay.TensorType(dshape, "float32")) 35 | size_var = relay.var("size", relay.TensorType((2,), "int64")) 36 | 37 | coord_trans = "asymmetric" if method == "nearest_neighbor" else "align_corners" 38 | z = relay.image.resize( 39 | x, size_var, layout, method, coordinate_transformation_mode=coord_trans 40 | ) 41 | 42 | zz = run_infer_type(z) 43 | func = relay.Function([x, size_var], z) 44 | 45 | for target, ctx in tvm.testing.enabled_targets(): 
46 | for kind in ["vm", "debug"]: 47 | mod = tvm.ir.IRModule.from_expr(func) 48 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 49 | op_res = intrp.evaluate()(x_data, size) 50 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-4, atol=1e-6) 51 | 52 | for method in ["bilinear", "nearest_neighbor"]: 53 | for layout in ["NCHW", "NHWC"]: 54 | verify_resize((1, 4, 4, 4), 2, method, layout) 55 | verify_resize((2, 8, 17, 20), 7, method, layout) -------------------------------------------------------------------------------- /tests/test_dynamic_op_level6.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | from tvm import relay 5 | import tvm.testing 6 | 7 | 8 | @tvm.testing.uses_gpu 9 | def test_dynamic_topk(): 10 | def verify_topk(k, axis, ret_type, is_ascend, dtype): 11 | shape = (20, 100) 12 | x = relay.var("x", relay.TensorType(shape, "float32")) 13 | k_var = relay.var("x", relay.TensorType((1,), "float32")) 14 | out = relay.topk(x, k_var, axis, ret_type, is_ascend, dtype) 15 | if isinstance(out, relay.expr.TupleWrapper): 16 | out = out.astuple() 17 | func = relay.Function([x, k_var], out) 18 | 19 | np_data = np.random.uniform(size=shape).astype("float32") 20 | if is_ascend: 21 | np_indices = np.argsort(np_data, axis=axis) 22 | else: 23 | np_indices = np.argsort(-np_data, axis=axis) 24 | kk = k if k >= 1 else shape[axis] 25 | if axis == 0: 26 | np_indices = np_indices[:kk, :] 27 | np_values = np.zeros(np_indices.shape).astype("float32") 28 | for i in range(shape[1]): 29 | np_values[:, i] = np_data[np_indices[:, i], i] 30 | else: 31 | np_indices = np_indices[:, :kk] 32 | np_values = np.zeros(np_indices.shape).astype("float32") 33 | for i in range(shape[0]): 34 | np_values[i, :] = np_data[i, np_indices[i, :]] 35 | np_indices = np_indices.astype(dtype) 36 | 37 | for target, ctx in tvm.testing.enabled_targets(): 38 | for kind in ["vm", "debug"]: 39 | mod = tvm.ir.IRModule.from_expr(func) 40 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 41 | op_res = intrp.evaluate()(np_data, np.array([k]).astype("float32")) 42 | if ret_type == "both": 43 | tvm.testing.assert_allclose(op_res[0].asnumpy(), np_values) 44 | tvm.testing.assert_allclose(op_res[1].asnumpy(), np_indices) 45 | elif ret_type == "values": 46 | tvm.testing.assert_allclose(op_res.asnumpy(), np_values) 47 | else: 48 | tvm.testing.assert_allclose(op_res.asnumpy(), np_indices) 49 | 50 | np.random.seed(0) 51 | for k in [0, 1, 5]: 52 | for axis in [0, -1, 1]: 53 | for ret_type in ["both", "values", "indices"]: 54 | verify_topk(k, axis, ret_type, True, "int64") 55 | verify_topk(k, axis, ret_type, False, "float32") -------------------------------------------------------------------------------- /tests/test_analysis_get_calibration_data.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | 3 | import tvm 4 | import tvm.relay.testing 5 | from tvm import relay 6 | from tvm.relay import transform 7 | from tvm.relay.analysis import get_calibration_data 8 | 9 | 10 | def check_data_size(mod, data): 11 | assert len(data) == len(mod.functions) - 1 12 | for key, value in mod.functions.items(): 13 | if key.name_hint != "main": 14 | assert len(data[key]["inputs"]) == len(value.params) 15 | if isinstance(value.body, relay.Tuple): 16 | assert len(data[key]["outputs"]) == len(value.body.fields) 17 | else: 18 | assert len(data[key]["outputs"]) == 1 19 
| 20 | 21 | def test_simple_graph(): 22 | # A module with two subgraphs 23 | mod = tvm.IRModule() 24 | 25 | x0 = relay.var("x0", shape=(8, 8)) 26 | y0 = relay.var("y0", shape=(8, 8)) 27 | z0 = x0 + y0 28 | z1 = x0 - y0 29 | z2 = relay.Tuple((z0, z1)) 30 | f0 = relay.Function([x0, y0], z2) 31 | f0 = f0.with_attr("Compiler", "test_graph") 32 | g0 = relay.GlobalVar("g0") 33 | mod[g0] = f0 34 | 35 | x1 = relay.var("x1", shape=(8, 8)) 36 | y1 = relay.var("y1", shape=(8, 8)) 37 | z1 = x1 - y1 38 | f1 = relay.Function([x1, y1], z1) 39 | f1 = f1.with_attr("Compiler", "test_graph") 40 | g1 = relay.GlobalVar("g1") 41 | mod[g1] = f1 42 | 43 | x = relay.var("x", shape=(8, 8)) 44 | y = relay.var("y", shape=(8, 8)) 45 | z = relay.var("z", shape=(8, 8)) 46 | c0 = relay.Call(g0, [x, y]) 47 | c1 = relay.Call(g1, [relay.TupleGetItem(c0, 0), z]) 48 | fm = relay.Function([x, y, z], c1) 49 | mod["main"] = fm 50 | 51 | x_data = np.random.rand(8, 8).astype("float32") 52 | y_data = np.random.rand(8, 8).astype("float32") 53 | z_data = np.random.rand(8, 8).astype("float32") 54 | data = get_calibration_data(mod, {"x": x_data, "y": y_data, "z": z_data}) 55 | 56 | # Check the number and orders 57 | check_data_size(mod, data) 58 | 59 | 60 | def test_mobilenet_dnnl(): 61 | # if not tvm.get_global_func("relay.ext.dnnl", True): 62 | # print("skip because DNNL codegen is not available") 63 | # return 64 | 65 | dtype = "float32" 66 | ishape = (1, 3, 224, 224) 67 | mod, params = relay.testing.mobilenet.get_workload(batch_size=1, dtype="float32") 68 | 69 | mod = transform.AnnotateTarget(["dnnl"])(mod) 70 | mod = transform.MergeCompilerRegions()(mod) 71 | mod = transform.PartitionGraph()(mod) 72 | 73 | i_data = np.random.uniform(0, 1, ishape).astype(dtype) -------------------------------------------------------------------------------- /tests/test_sparse_dense_convert.py: -------------------------------------------------------------------------------- 1 | import itertools 2 | 3 | import numpy as np 4 | import scipy.sparse as sp 5 | 6 | 7 | import tvm 8 | from tvm.ir import IRModule 9 | from tvm import relay 10 | 11 | 12 | def random_bsr_matrix(M, N, BS_R, BS_C, density, dtype="float32"): 13 | Y = np.zeros((M, N), dtype=dtype) 14 | assert M % BS_R == 0 15 | assert N % BS_C == 0 16 | nnz = int(density * M * N) 17 | num_blocks = int(nnz / (BS_R * BS_C)) + 1 18 | candidate_blocks = np.asarray(list(itertools.product(range(0, M, BS_R), range(0, N, BS_C)))) 19 | assert candidate_blocks.shape[0] == M // BS_R * N // BS_C 20 | chosen_blocks = candidate_blocks[ 21 | np.random.choice(candidate_blocks.shape[0], size=num_blocks, replace=False) 22 | ] 23 | for i in range(len(chosen_blocks)): 24 | r, c = chosen_blocks[i] 25 | Y[r : r + BS_R, c : c + BS_C] = np.random.randn(BS_R, BS_C) 26 | s = sp.bsr_matrix(Y, blocksize=(BS_R, BS_C)) 27 | assert s.data.shape == (num_blocks, BS_R, BS_C) 28 | assert s.data.size >= nnz 29 | assert s.indices.shape == (num_blocks,) 30 | assert s.indptr.shape == (M // BS_R + 1,) 31 | return s 32 | 33 | 34 | def run_func(func, params, x): 35 | with tvm.transform.PassContext(opt_level=3): 36 | graph, lib, new_params = relay.build(func, "llvm", params=params) 37 | 38 | from tvm.contrib import graph_runtime 39 | 40 | ctx = tvm.cpu(0) 41 | dtype = "float32" 42 | m = graph_runtime.create(graph, lib, ctx) 43 | # set inputs 44 | m.set_input("data", tvm.nd.array(x.astype(dtype))) 45 | m.set_input(**new_params) 46 | # execute 47 | m.run() 48 | # get outputs 49 | tvm_output = m.get_output(0) 50 | return tvm_output.asnumpy() 
51 | 52 | 53 | def test_bsr_sparse_dense(): 54 | data = relay.var("data", shape=(1, 128), dtype="float32") 55 | x = relay.nn.relu(data) 56 | w = relay.var("weight", shape=(768, 128), dtype="float32") 57 | y = relay.nn.dense(x, w) 58 | z = relay.nn.relu(y) 59 | func = relay.Function(relay.analysis.free_vars(z), z) 60 | 61 | params = {"weight": tvm.nd.array(random_bsr_matrix(768, 128, 32, 1, 0.1).todense())} 62 | 63 | x_np = np.random.randn(1, 128).astype("float32") 64 | # dense output 65 | dense_output = run_func(func, params, x_np) 66 | # sparse 67 | sparse_func, params = relay.data_dep_optimization.bsr_dense.convert(func, params, (32, 1), 0.2) 68 | sparse_output = run_func(sparse_func, params, x_np) 69 | np.testing.assert_allclose(sparse_output, dense_output, atol=1e-5, rtol=1e-5) -------------------------------------------------------------------------------- /tests/test_analysis_extract_fused_functions.py: -------------------------------------------------------------------------------- 1 | """Test function extraction""" 2 | import tvm 3 | from tvm import relay 4 | from tvm.relay.testing.synthetic import get_workload 5 | 6 | 7 | def get_conv_net(): 8 | """This gets the net for a case described in fuse_ops.cc: 9 | 10 | conv2d 11 | / | \ 12 | / | \ 13 | op op op 14 | \ | / 15 | \ | / 16 | elemwise add 17 | | 18 | """ 19 | dshape = (1, 1, 5, 1) 20 | x = relay.var("x", shape=dshape) 21 | y = relay.nn.conv2d(x, relay.var("w1"), kernel_size=(3, 3), padding=(1, 1), channels=1) 22 | 23 | x1 = relay.nn.conv2d(y, relay.var("w2"), kernel_size=(3, 3), padding=(1, 1), channels=1) 24 | x2 = relay.nn.conv2d(y, relay.var("w3"), kernel_size=(3, 3), padding=(1, 1), channels=1) 25 | x3 = relay.nn.conv2d(y, relay.var("w4"), kernel_size=(3, 3), padding=(1, 1), channels=1) 26 | 27 | z = relay.add(x1, x2) 28 | z = relay.add(x3, z) 29 | 30 | return tvm.IRModule.from_expr(z) 31 | 32 | 33 | def get_conv2d(): 34 | x = relay.var("x", shape=(1, 56, 56, 64)) 35 | weight1 = relay.var("weight1", shape=(3, 3, 64, 32)) 36 | y = relay.nn.conv2d( 37 | x, 38 | weight1, 39 | channels=32, 40 | kernel_size=(3, 3), 41 | padding=(1, 1), 42 | data_layout="NHWC", 43 | kernel_layout="HWIO", 44 | ) 45 | return tvm.IRModule.from_expr(y) 46 | 47 | 48 | def test_extract_identity(): 49 | mod = get_conv2d() 50 | items = relay.analysis.extract_fused_functions(mod) 51 | assert len(items) == 1 52 | 53 | mod["main"] = mod["main"].with_attr("Primitive", tvm.tir.IntImm("int32", 1)) 54 | tvm.ir.structural_equal(list(items.values())[0], mod["main"]) 55 | 56 | 57 | def test_extract_conv_net(): 58 | mod = get_conv_net() 59 | items = relay.analysis.extract_fused_functions(mod) 60 | functions = list(items.values()) 61 | assert len(functions) == 2 62 | x = functions[0] 63 | y = functions[1] 64 | 65 | def is_conv(func): 66 | conv2d = relay.op.op.get("nn.conv2d") 67 | call_node = func.body 68 | return call_node.op == conv2d 69 | 70 | def is_conv_add(func): 71 | add = relay.op.op.get("add") 72 | call_node = func.body 73 | maybe_conv_module = tvm.IRModule.from_expr(call_node.args[0]) 74 | return call_node.op == add and is_conv(maybe_conv_module["main"]) 75 | 76 | # Function traversal order isn't obvious, so checking both orders is more consistent 77 | assert (is_conv(x) and is_conv_add(y)) or (is_conv_add(x) and is_conv(y)) 78 | 79 | 80 | def test_extract_resnet(): 81 | mod, _params = get_workload() 82 | items = relay.analysis.extract_fused_functions(mod) 83 | assert len(items) == 6 84 | 
-------------------------------------------------------------------------------- /tests/test_dynamic_op_level4.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | import numpy as np 4 | from tvm import relay 5 | from tvm.relay import transform 6 | from tvm.relay.testing import run_infer_type 7 | import tvm.topi.testing 8 | 9 | @tvm.testing.uses_gpu 10 | def test_dynamic_strided_slice(): 11 | def verify(dshape, begin, end, strides, output, slice_mode="end", test_ref=True, dtype="int32"): 12 | x = relay.var("x", relay.TensorType(dshape, "float32")) 13 | ndim = len(dshape) 14 | begin = begin if begin else [0] * ndim 15 | end = end if end else list(dshape) 16 | if strides: 17 | if len(strides) == 1: 18 | strides = strides * ndim 19 | else: 20 | strides = [1] * ndim 21 | 22 | # target numpy result 23 | x_data = np.random.uniform(size=dshape).astype("float32") 24 | ref_res = tvm.topi.testing.strided_slice_python(x_data, begin, end, strides, slice_mode) 25 | data = [x_data, np.array(begin), np.array(end)] 26 | 27 | begin = relay.const(begin, dtype=dtype) 28 | end = relay.const(end, dtype=dtype) 29 | 30 | if strides: 31 | data.append(np.array(strides)) 32 | strides = relay.const(strides, dtype=dtype) 33 | z = relay.strided_slice(x, begin=begin, end=end, strides=strides, slice_mode=slice_mode) 34 | else: 35 | z = relay.strided_slice(x, begin=begin, end=end, slice_mode=slice_mode) 36 | func = relay.Function([x], z) 37 | 38 | func = run_infer_type(func) 39 | text = func.astext() 40 | 41 | if not test_ref: 42 | return 43 | for target, ctx in tvm.testing.enabled_targets(): 44 | mod = tvm.ir.IRModule.from_expr(func) 45 | intrp = relay.create_executor("vm", mod=mod, ctx=ctx, target=target) 46 | op_res = intrp.evaluate()(x_data) 47 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res) 48 | 49 | verify( 50 | (1, 224, 224, 3), 51 | [0, 20, 20, 0], 52 | [1, 140, 140, 3], 53 | [1, 1, 1, 1], 54 | (1, 120, 120, 3), 55 | dtype="int64", 56 | ) 57 | verify((3, 4, 3), [1, 1, 0], [4, 4, 3], [2, 1, 1], (1, 3, 3), dtype="int16") 58 | verify((3, 4, 3), [0, 0, 0], [4, -5, 4], [1, -1, 2], (3, 1, 2)) 59 | verify((3, 4, 3), [1, 1, 0], [4, 4, 3], None, (2, 3, 3)) 60 | verify((3, 4, 3), [1, 1, 0], [4, 1000, 3], None, (2, 3, 3)) 61 | verify((3, 4, 3), [1, 1, 0], [4, 4, 4], None, (2, 3, 3)) 62 | verify((3, 4, 3), [1, 1, 0], [4, 4, 3], None, (2, 3, 3)) 63 | verify((3, 4, 3), [1, -1, 0], [4, -5, 3], [2, -1, 1], (1, 4, 3)) 64 | verify((3, 4, 3), [1, -1, 0], [2, -3, 3], [1, -1, 1], (1, 2, 3)) 65 | verify( 66 | (3, 4, 3), [1, 0, 0], [3, -1, 3], [1, 1, 1], (2, 4, 3), slice_mode="size", test_ref=False 67 | ) 68 | verify((3, 4, 3), [1, 0, 0], [-1, 2, 3], [1, 1, 1], (2, 2, 3), slice_mode="size", test_ref=True) 69 | -------------------------------------------------------------------------------- /tests/test_memory_passes.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | import numpy as np 4 | from tvm import relay 5 | from tvm.relay import memory_alloc 6 | 7 | 8 | def check_memory_plan(func, check_fn): 9 | # Build Module 10 | mod = tvm.IRModule().from_expr(func) 11 | 12 | # Convert arguments. 13 | args = [] 14 | for param in func.params: 15 | param = param.type_annotation 16 | sh = [int(sh) for sh in param.shape] 17 | data = np.random.rand(*sh).astype(param.dtype) 18 | args.append(tvm.nd.array(data)) 19 | 20 | # Compute without memory planning. 
21 | ex = relay.create_executor("vm", mod) 22 | no_plan_result = ex.evaluate(mod["main"])(*args) 23 | 24 | # Compute with memory planning. 25 | with tvm.transform.PassContext(opt_level=1, disabled_pass=["MemoryPlan"]): 26 | plan_result = ex.evaluate(mod["main"])(*args) 27 | 28 | # Compute Python result. 29 | py_res = check_fn(*[arg.asnumpy() for arg in args]) 30 | 31 | # First check that the two VM results agree. 32 | np.testing.assert_allclose(no_plan_result.asnumpy(), plan_result.asnumpy()) 33 | 34 | # Finally check that the results match the Python result. 35 | np.testing.assert_allclose(plan_result.asnumpy(), py_res) 36 | 37 | 38 | def storage_type(mod): 39 | return relay.TypeCall(mod.get_global_type_var("Storage"), []) 40 | 41 | 42 | def test_tyck_alloc_storage(): 43 | mod = tvm.IRModule() 44 | mod.import_from_std("core.rly") 45 | 46 | 47 | def test_tyck_alloc_tensor(): 48 | mod = tvm.IRModule() 49 | mod.import_from_std("core.rly") 50 | sto = relay.Var("x", storage_type(mod)) 51 | sh = relay.const(np.array([1, 2]), dtype="int64") 52 | at = relay.op.memory.alloc_tensor(sto, relay.const(0, dtype="int64"), sh) 53 | mod["main"] = relay.Function([sto], at) 54 | relay.transform.InferType()(mod) 55 | 56 | 57 | def check_add(x): 58 | return x + x 59 | 60 | 61 | def test_add(): 62 | x = relay.var("x", shape=(2,)) 63 | z = x + x 64 | func = relay.Function( 65 | [ 66 | x, 67 | ], 68 | z, 69 | ) 70 | check_memory_plan(func, check_add) 71 | 72 | 73 | def check_add_sub(x, y): 74 | z = x + x 75 | return z - y 76 | 77 | 78 | def test_add_sub(): 79 | x = relay.var("x", shape=(10,)) 80 | y = relay.var("y", shape=(10,)) 81 | z = x + x 82 | z = z - y 83 | func = relay.Function([x, y], z) 84 | check_memory_plan(func, check_add_sub) 85 | 86 | 87 | def check_no_fuse(x, y, w): 88 | z = x + y 89 | return np.matmul(z, np.transpose(w)) 90 | 91 | 92 | def test_no_fuse(): 93 | x = relay.var("x", shape=(5, 1)) 94 | y = relay.var("y", shape=(5, 1)) 95 | w = relay.var("w", shape=(5, 1)) 96 | z = x + y 97 | out = relay.op.nn.dense(z, w) 98 | func = relay.Function([x, y, w], out) 99 | check_memory_plan(func, check_no_fuse) 100 | -------------------------------------------------------------------------------- /tests/test_pass_to_cps.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import relay 4 | from tvm.relay.analysis import detect_feature 5 | from tvm.relay.transform import to_cps, un_cps 6 | from tvm.relay.analysis import Feature 7 | from tvm.relay.prelude import Prelude 8 | from tvm.relay.testing import make_nat_expr, rand, run_infer_type, run_opt_pass 9 | from tvm.relay import create_executor 10 | from tvm.relay import transform 11 | 12 | 13 | def test_id(): 14 | x = relay.var("x", shape=[]) 15 | id = run_infer_type(relay.Function([x], x)) 16 | id_cps = run_infer_type(to_cps(id)) 17 | 18 | 19 | def test_double(): 20 | t = relay.TypeVar("t") 21 | x = relay.var("x", t) 22 | f = relay.var("f", relay.FuncType([t], t)) 23 | double = run_infer_type(relay.Function([f, x], f(f(x)), t, [t])) 24 | double_cps = run_infer_type(to_cps(double)) 25 | 26 | 27 | # make sure cps work for recursion. 
28 | def test_recursion(): 29 | mod = tvm.IRModule() 30 | p = Prelude(mod) 31 | # add_nat_definitions(p) 32 | shape = (10, 10) 33 | dtype = "float32" 34 | t = relay.TensorType(shape, dtype) 35 | x = relay.var("x", t) 36 | double = relay.Function([x], x + x) 37 | i = relay.var("i", t) 38 | func = relay.Function([i], p.nat_iterate(double, make_nat_expr(p, 3))(i)) 39 | mod["main"] = func 40 | mod["main"] = to_cps(mod["main"], mod=mod) 41 | mod["main"] = un_cps(mod["main"]) 42 | ex = create_executor(mod=mod) 43 | i_nd = rand(dtype, *shape) 44 | forward = ex.evaluate()(i_nd) 45 | tvm.testing.assert_allclose(forward.asnumpy(), 8 * i_nd.asnumpy()) 46 | 47 | def test_cps_pe(): 48 | def destroy_ref(x): 49 | x = run_infer_type(x) 50 | x = to_cps(x) 51 | x = run_infer_type(x) 52 | y = un_cps(x) 53 | y = run_infer_type(y) 54 | x = run_opt_pass( 55 | x, 56 | tvm.transform.Sequential( 57 | [transform.PartialEvaluate(), transform.DeadCodeElimination(inline_once=True)] 58 | ), 59 | ) 60 | assert Feature.fRefCreate not in detect_feature(x) 61 | 62 | unit = relay.Function([], relay.const(0.0, dtype="float32")) 63 | f_ref = relay.Var("f_ref") 64 | 65 | one = relay.const(1.0, dtype="float32") 66 | two = relay.const(2.0, dtype="float32") 67 | cond = relay.var(shape=(), dtype="uint1", name_hint="cond") 68 | true_branch = relay.RefWrite(f_ref, relay.Function([], one)) 69 | false_branch = relay.RefWrite(f_ref, relay.Function([], two)) 70 | if_expr = relay.If(cond, true_branch, false_branch) 71 | 72 | stmt = relay.Let( 73 | f_ref, 74 | relay.RefCreate(unit), 75 | relay.Let(relay.Var("x"), if_expr, relay.Call(relay.RefRead(f_ref), [])), 76 | ) 77 | 78 | F = relay.Function([cond], stmt) 79 | destroy_ref(F) 80 | 81 | G = relay.Function([cond], relay.If(cond, one, two)) 82 | G = run_infer_type(G) 83 | G = relay.transform.gradient(G) 84 | destroy_ref(G) 85 | 86 | x = relay.var("x", shape=(1, 16)) 87 | y = relay.var("y", shape=(1, 16)) 88 | z = relay.var("z", shape=(1, 16)) 89 | cond = relay.var("cond", shape=(), dtype="uint1") 90 | H = relay.If(cond, x, y) 91 | H = relay.add(H, z) 92 | H = relay.Function([cond, x, y, z], H) 93 | H = run_infer_type(H) 94 | H = relay.transform.gradient(H) 95 | destroy_ref(H) -------------------------------------------------------------------------------- /tests/test_cpp_build_module.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | from tvm import relay 5 | from tvm.contrib.nvcc import have_fp16 6 | import tvm.testing 7 | 8 | def test_basic_build(): 9 | tgt = "llvm" 10 | ctx = tvm.cpu() 11 | # func 12 | a = relay.var("a", dtype="float32", shape=(16, 8)) 13 | b = relay.var("b", dtype="float32", shape=(8, 8)) 14 | c = relay.var("c", dtype="float32", shape=(16, 8)) 15 | x = relay.nn.dense(a, b) 16 | y = relay.nn.relu(x) 17 | z = y + c 18 | func = relay.Function([a, b, c], z) 19 | A = tvm.nd.array(np.random.uniform(-1, 1, (16, 8)).astype("float32"), ctx=ctx) 20 | B = tvm.nd.array(np.random.uniform(-1, 1, (8, 8)).astype("float32"), ctx=ctx) 21 | C = tvm.nd.array(np.random.uniform(-1, 1, (16, 8)).astype("float32"), ctx=ctx) 22 | params = {"b": B, "c": C} 23 | # build 24 | targets = {tvm.tir.IntImm("int32", ctx.device_type): tgt} 25 | mod = tvm.IRModule.from_expr(func) 26 | func_in_mod = mod["main"] 27 | assert mod["main"] == func_in_mod, "cannot compare function to itself" 28 | 29 | lib = relay.build(mod, targets, "llvm", params=params) 30 | assert mod["main"] == func_in_mod, "relay.build 
changed module in-place" 31 | 32 | # test 33 | rt = tvm.contrib.graph_runtime.GraphModule(lib["default"](ctx)) 34 | rt.set_input("a", A) 35 | rt.run() 36 | out = rt.get_output(0) 37 | 38 | np.testing.assert_allclose( 39 | out.asnumpy(), 40 | np.maximum(np.dot(A.asnumpy(), B.asnumpy().T), 0) + C.asnumpy(), 41 | atol=1e-5, 42 | rtol=1e-5, 43 | ) 44 | 45 | 46 | @tvm.testing.requires_cuda 47 | def test_fp16_build(): 48 | dtype = "float16" 49 | 50 | ctx = tvm.gpu(0) 51 | x = relay.var("x", dtype=dtype, shape=(4, 4)) 52 | y = relay.var("y", dtype=dtype, shape=(4, 4)) 53 | z = x + y 54 | func = relay.Function([x, y], z) 55 | X = tvm.nd.array(np.random.uniform(-1, 1, (4, 4)).astype(dtype), ctx=ctx) 56 | Y = tvm.nd.array(np.random.uniform(-1, 1, (4, 4)).astype(dtype), ctx=ctx) 57 | params = { 58 | "x": X, 59 | "y": Y, 60 | } 61 | 62 | # build 63 | g_json, mmod, params = relay.build(func, "cuda", params=params) 64 | 65 | # test 66 | rt = tvm.contrib.graph_runtime.create(g_json, mmod, ctx) 67 | rt.load_params(relay.save_param_dict(params)) 68 | rt.run() 69 | out = rt.get_output(0) 70 | 71 | np.testing.assert_allclose(out.asnumpy(), X.asnumpy() + Y.asnumpy(), atol=1e-5, rtol=1e-5) 72 | 73 | 74 | # @tvm.testing.parametrize_targets("llvm", "cuda") 75 | def test_fp16_conversion(): 76 | target = 'llvm' 77 | ctx = tvm.cpu() 78 | 79 | n = 10 80 | 81 | for (src, dst) in [("float32", "float16"), ("float16", "float32")]: 82 | x = relay.var("x", relay.TensorType((n,), src)) 83 | y = x.astype(dst) 84 | func = relay.Function([x], y) 85 | 86 | # init input 87 | X = tvm.nd.array(n * np.random.randn(n).astype(src) - n / 2) 88 | 89 | # build 90 | with tvm.transform.PassContext(opt_level=1): 91 | g_json, mmod, params = relay.build(tvm.IRModule.from_expr(func), target) 92 | 93 | # test 94 | rt = tvm.contrib.graph_runtime.create(g_json, mmod, ctx) 95 | rt.set_input("x", X) 96 | rt.run() 97 | out = rt.get_output(0) 98 | 99 | np.testing.assert_allclose(out.asnumpy(), X.asnumpy().astype(dst), atol=1e-5, rtol=1e-5) 100 | 101 | 102 | -------------------------------------------------------------------------------- /tests/test_annotated_regions.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import relay 3 | from tvm.relay.op.annotation import compiler_begin, compiler_end 4 | 5 | 6 | def check_region(region_set, target, args, nodes, rets): 7 | region = region_set.get_region(args[0]) 8 | assert region 9 | assert target == region.target 10 | assert set(args) == set(region.args) 11 | assert set(nodes) == set(region.nodes) 12 | assert set(rets) == set(region.rets) 13 | 14 | 15 | def test_region_set_creator_diamond(): 16 | data = relay.var("data", shape=(10, 10)) 17 | cb_1 = compiler_begin(data, "test_target") 18 | O_1 = relay.abs(cb_1) 19 | ce_1 = compiler_end(O_1, "test_target") 20 | ce_2 = compiler_end(O_1, "test_target") 21 | cb_2 = compiler_begin(ce_1, "test_target") 22 | O_2 = relay.nn.relu(cb_2) 23 | ce_3 = compiler_end(O_2, "test_target") 24 | cb_d = compiler_begin(ce_2, "default") 25 | X = relay.tanh(cb_d) 26 | ce_d = compiler_end(X, "default") 27 | cb_3 = compiler_begin(ce_3, "test_target") 28 | cb_4 = compiler_begin(ce_d, "test_target") 29 | O_3 = relay.add(cb_3, cb_4) 30 | ce_4 = compiler_end(O_3, "test_target") 31 | diamond = relay.Function([data], ce_4) 32 | 33 | region_set = relay.analysis.AnnotatedRegionSet( 34 | diamond, relay.op.get("annotation.compiler_begin"), relay.op.get("annotation.compiler_end") 35 | ) 36 | assert len(region_set) == 4 37 | 
check_region( 38 | region_set, 39 | "test_target", 40 | [cb_1], 41 | [cb_1, O_1, ce_1, ce_2], 42 | [ce_1, ce_2], 43 | ) 44 | check_region( 45 | region_set, 46 | "test_target", 47 | [cb_2], 48 | [cb_2, O_2, ce_3], 49 | [ce_3], 50 | ) 51 | check_region( 52 | region_set, 53 | "default", 54 | [cb_d], 55 | [cb_d, X, ce_d], 56 | [ce_d], 57 | ) 58 | check_region( 59 | region_set, 60 | "test_target", 61 | [cb_3, cb_4], 62 | [cb_3, cb_4, O_3, ce_4], 63 | [ce_4], 64 | ) 65 | 66 | 67 | def test_region_set_creator_merged(): 68 | data = relay.var("data", shape=(10, 10)) 69 | cb_1 = compiler_begin(data, "test_target") 70 | O_1 = relay.abs(cb_1) 71 | ce_2 = compiler_end(O_1, "test_target") 72 | O_2 = relay.nn.relu(O_1) 73 | ce_3 = compiler_end(O_2, "test_target") 74 | cb_d = compiler_begin(ce_2, "default") 75 | X = relay.tanh(cb_d) 76 | ce_d = compiler_end(X, "default") 77 | cb_3 = compiler_begin(ce_3, "test_target") 78 | cb_4 = compiler_begin(ce_d, "test_target") 79 | O_3 = relay.add(cb_3, cb_4) 80 | O_4 = relay.add(cb_3, cb_4) 81 | O_5 = relay.Tuple([O_3, O_4]) 82 | ce_4 = compiler_end(O_5, "test_target") 83 | merged = relay.Function([data], ce_4) 84 | 85 | region_set = relay.analysis.AnnotatedRegionSet( 86 | merged, relay.op.get("annotation.compiler_begin"), relay.op.get("annotation.compiler_end") 87 | ) 88 | assert len(region_set) == 3 89 | check_region( 90 | region_set, 91 | "test_target", 92 | [cb_1], 93 | [cb_1, O_1, O_2, ce_2, ce_3], 94 | [ce_2, ce_3], 95 | ) 96 | check_region( 97 | region_set, 98 | "default", 99 | [cb_d], 100 | [cb_d, X, ce_d], 101 | [ce_d], 102 | ) 103 | check_region( 104 | region_set, 105 | "test_target", 106 | [cb_3, cb_4], 107 | [cb_3, cb_4, O_3, O_4, O_5, ce_4], 108 | [ce_4], 109 | ) 110 | 111 | 112 | if __name__ == "__main__": 113 | test_region_set_creator_diamond() 114 | test_region_set_creator_merged() 115 | -------------------------------------------------------------------------------- /tests/test_op_qnn_quantize.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | import numpy as np 4 | from tvm import relay 5 | from tvm.contrib import graph_runtime 6 | 7 | def quantize_test_driver(in_dtype, quant_args, axis, out_dtype, in_data, verify_output_data): 8 | shape = in_data.shape 9 | input_data = relay.var("input_data", shape=shape, dtype=in_dtype) 10 | output_zero_point = relay.const(quant_args["out_zero_point"]) 11 | output_scale = relay.const(quant_args["out_scale"]) 12 | quantized_output = relay.qnn.op.quantize( 13 | input_data, 14 | output_scale=output_scale, 15 | output_zero_point=output_zero_point, 16 | axis=axis, 17 | out_dtype=out_dtype, 18 | ) 19 | mod = relay.Function(relay.analysis.free_vars(quantized_output), quantized_output) 20 | mod = tvm.IRModule.from_expr(mod) 21 | with tvm.transform.PassContext(opt_level=3): 22 | graph, lib, params = relay.build(mod, "llvm", params=None) 23 | rt_mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0)) 24 | rt_mod.set_input(input_data=in_data) 25 | rt_mod.set_input(**params) 26 | rt_mod.run() 27 | res = rt_mod.get_output(0).asnumpy() 28 | np.testing.assert_equal(res, verify_output_data) 29 | assert res.dtype == out_dtype 30 | 31 | 32 | def test_float32_to_uint8(): 33 | data = ( 34 | np.array([-63.5, -63, -62.5, -62, -61.5, 62, 62.5, 63, 63.5, 64]) 35 | .astype("float32") 36 | .reshape((2, 5)) 37 | ) 38 | output = np.array([0, 1, 2, 3, 4, 251, 252, 253, 254, 255]).astype("uint8").reshape((2, 5)) 39 | quant_args = {"out_zero_point": np.int32(127), 
"out_scale": np.float32(0.5)} 40 | quantize_test_driver( 41 | in_dtype="float32", 42 | quant_args=quant_args, 43 | axis=-1, 44 | out_dtype="uint8", 45 | in_data=data, 46 | verify_output_data=output, 47 | ) 48 | 49 | 50 | def test_float32_to_int8(): 51 | data = ( 52 | np.array([-63.5, -63, -62.5, -62, -61.5, 62, 62.5, 63, 63.5, 64]) 53 | .astype("float32") 54 | .reshape((2, 5)) 55 | ) 56 | output = ( 57 | np.array([-128, -127, -126, -125, -124, 123, 124, 125, 126, 127]) 58 | .astype("int8") 59 | .reshape((2, 5)) 60 | ) 61 | quant_args = {"out_zero_point": np.int32(-1), "out_scale": np.float32(0.5)} 62 | quantize_test_driver( 63 | in_dtype="float32", 64 | quant_args=quant_args, 65 | axis=-1, 66 | out_dtype="int8", 67 | in_data=data, 68 | verify_output_data=output, 69 | ) 70 | 71 | 72 | def test_channelwise_axis_0(): 73 | data = ( 74 | np.array([-63.5, -63, -62.5, -62, -61.5, 30, 31, 31.5, 31.75, 32]) 75 | .astype("float32") 76 | .reshape((2, 5)) 77 | ) 78 | output = np.array([0, 1, 2, 3, 4, 243, 247, 249, 250, 251]).astype("uint8").reshape((2, 5)) 79 | quant_args = { 80 | "out_zero_point": np.array([127, 123]).astype("int32"), 81 | "out_scale": np.array([0.5, 0.25]).astype("float32"), 82 | } 83 | 84 | quantize_test_driver( 85 | in_dtype="float32", 86 | quant_args=quant_args, 87 | axis=0, 88 | out_dtype="uint8", 89 | in_data=data, 90 | verify_output_data=output, 91 | ) 92 | 93 | 94 | def test_channelwise_axis_1(): 95 | data = np.transpose( 96 | np.array([-63.5, -63, -62.5, -62, -61.5, 30, 31, 31.5, 31.75, 32]) 97 | .astype("float32") 98 | .reshape((2, 5)) 99 | ) 100 | output = np.transpose( 101 | np.array([0, 1, 2, 3, 4, 243, 247, 249, 250, 251]).astype("uint8").reshape((2, 5)) 102 | ) 103 | quant_args = { 104 | "out_zero_point": np.array([127, 123]).astype("int32"), 105 | "out_scale": np.array([0.5, 0.25]).astype("float32"), 106 | } 107 | 108 | quantize_test_driver( 109 | in_dtype="float32", 110 | quant_args=quant_args, 111 | axis=1, 112 | out_dtype="uint8", 113 | in_data=data, 114 | verify_output_data=output, 115 | ) 116 | -------------------------------------------------------------------------------- /tests/test_call_graph.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import tvm 3 | from tvm import relay 4 | 5 | def test_callgraph_construct(): 6 | mod = tvm.IRModule({}) 7 | x = relay.var("x", shape=(2, 3)) 8 | y = relay.var("y", shape=(2, 3)) 9 | mod["g1"] = relay.Function([x, y], x + y) 10 | call_graph = relay.analysis.CallGraph(mod) 11 | assert "g1" in str(call_graph) 12 | assert tvm.ir.structural_equal(mod, call_graph.module) 13 | 14 | 15 | def test_print_element(): 16 | mod = tvm.IRModule({}) 17 | x0 = relay.var("x0", shape=(2, 3)) 18 | y0 = relay.var("y0", shape=(2, 3)) 19 | mod["g0"] = relay.Function([x0, y0], x0 + y0) 20 | x1 = relay.var("x1", shape=(2, 3)) 21 | y1 = relay.var("y1", shape=(2, 3)) 22 | mod["g1"] = relay.Function([x1, y1], x1 - y1) 23 | call_graph = relay.analysis.CallGraph(mod) 24 | 25 | assert "#refs = 0" in str(call_graph.print_var("g0")) 26 | assert "#refs = 0" in str(call_graph.print_var("g1")) 27 | 28 | 29 | def test_global_call_count(): 30 | mod = tvm.IRModule({}) 31 | x0 = relay.var("x0", shape=(2, 3)) 32 | y0 = relay.var("y0", shape=(2, 3)) 33 | g0 = relay.GlobalVar("g0") 34 | mod[g0] = relay.Function([x0, y0], x0 + y0) 35 | x1 = relay.var("x1", shape=(2, 3)) 36 | y1 = relay.var("y1", shape=(2, 3)) 37 | g1 = relay.GlobalVar("g1") 38 | mod[g1] = relay.Function([x1, y1], g0(x1, y1)) 39 | call_graph = 
relay.analysis.CallGraph(mod) 40 | 41 | p0 = relay.var("p0", shape=(2, 3)) 42 | p1 = relay.var("p1", shape=(2, 3)) 43 | func = relay.Function([p0, p1], g0(p0, p1) * g1(p0, p1)) 44 | mod["main"] = func 45 | call_graph = relay.analysis.CallGraph(mod) 46 | 47 | assert call_graph.global_call_count(g0) == 0 48 | assert call_graph.global_call_count(g1) == 1 49 | assert call_graph.global_call_count("main") == 2 50 | 51 | 52 | def test_ref_count(): 53 | mod = tvm.IRModule({}) 54 | x0 = relay.var("x0", shape=(2, 3)) 55 | y0 = relay.var("y0", shape=(2, 3)) 56 | g0 = relay.GlobalVar("g0") 57 | mod[g0] = relay.Function([x0, y0], x0 + y0) 58 | x1 = relay.var("x1", shape=(2, 3)) 59 | y1 = relay.var("y1", shape=(2, 3)) 60 | g1 = relay.GlobalVar("g1") 61 | mod[g1] = relay.Function([x1, y1], x1 - y1) 62 | call_graph = relay.analysis.CallGraph(mod) 63 | 64 | p0 = relay.var("p0", shape=(2, 3)) 65 | p1 = relay.var("p1", shape=(2, 3)) 66 | func = relay.Function([p0, p1], g0(p0, p1) * g1(p0, p1)) 67 | mod["main"] = func 68 | call_graph = relay.analysis.CallGraph(mod) 69 | 70 | assert call_graph.ref_count(g0) == 1 71 | assert call_graph.ref_count(g1) == 1 72 | assert call_graph.ref_count("main") == 0 73 | 74 | 75 | def test_nested_ref(): 76 | mod = tvm.IRModule({}) 77 | x0 = relay.var("x0", shape=(2, 3)) 78 | y0 = relay.var("y0", shape=(2, 3)) 79 | g0 = relay.GlobalVar("g0") 80 | mod[g0] = relay.Function([x0, y0], x0 + y0) 81 | x1 = relay.var("x1", shape=(2, 3)) 82 | y1 = relay.var("y1", shape=(2, 3)) 83 | g1 = relay.GlobalVar("g1") 84 | mod[g1] = relay.Function([x1, y1], g0(x1, y1)) 85 | call_graph = relay.analysis.CallGraph(mod) 86 | 87 | p0 = relay.var("p0", shape=(2, 3)) 88 | p1 = relay.var("p1", shape=(2, 3)) 89 | func = relay.Function([p0, p1], g0(p0, p1) * g1(p0, p1)) 90 | mod["main"] = func 91 | call_graph = relay.analysis.CallGraph(mod) 92 | 93 | assert call_graph.ref_count(g0) == 2 94 | assert call_graph.ref_count(g1) == 1 95 | assert call_graph.ref_count("main") == 0 96 | 97 | def test_recursive_func(): 98 | mod = tvm.IRModule({}) 99 | 100 | x = relay.var("x", shape=[], dtype="int32") 101 | fn0 = relay.Function([x], x) 102 | gx = relay.GlobalVar("gx") 103 | mod[gx] = fn0 104 | 105 | sum_up = relay.GlobalVar("sum_up") 106 | i = relay.var("i", shape=[], dtype="int32") 107 | sb = relay.ScopeBuilder() 108 | 109 | func = relay.Function([i], sb.get(), ret_type=relay.TensorType([], "int32")) 110 | func = func.with_attr("Compiler", "a") 111 | mod[sum_up] = func 112 | iarg = relay.var("i", shape=[], dtype="int32") 113 | mod["main"] = relay.Function([iarg], sum_up(iarg)) 114 | call_graph = relay.analysis.CallGraph(mod) 115 | -------------------------------------------------------------------------------- /tests/test_backend_interpreter.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | import tvm.testing 5 | from tvm import nd 6 | from tvm import relay 7 | from tvm.runtime import container 8 | from tvm.relay.backend.interpreter import RefValue, ConstructorValue 9 | from tvm.relay.scope_builder import ScopeBuilder 10 | from tvm.relay import testing, create_executor 11 | 12 | def test_tuple_value(): 13 | tv = container.tuple_object([relay.const(1), relay.const(2), relay.const(3)]) 14 | np.testing.assert_allclose(tv[0].data.asnumpy(), 1) 15 | np.testing.assert_allclose(tv[1].data.asnumpy(), 2) 16 | np.testing.assert_allclose(tv[2].data.asnumpy(), 3) 17 | 18 | 19 | def test_tuple_getitem(): 20 | two = 
relay.add(relay.const(1), relay.const(1)) 21 | func = relay.Function([], relay.TupleGetItem(relay.Tuple([relay.const(1), relay.const(2)]), 0)) 22 | 23 | def test_id(): 24 | x = relay.var("x", "float32") 25 | ident = relay.Function([x], x) 26 | one = np.array(1.0, "float32") 27 | 28 | def test_add_const(): 29 | two = relay.add(relay.const(1), relay.const(1)) 30 | func = relay.Function([], two) 31 | 32 | def test_mul_param(): 33 | x = relay.var("x", shape=(10, 10)) 34 | y = relay.var("y", shape=(1, 10)) 35 | func = relay.Function([x, y], relay.multiply(x, y)) 36 | x_data = np.random.rand(10, 10).astype("float32") 37 | y_data = np.random.rand(1, 10).astype("float32") 38 | 39 | def test_equal(): 40 | i = relay.var("i", shape=[], dtype="int32") 41 | j = relay.var("i", shape=[], dtype="int32") 42 | z = relay.equal(i, j) 43 | func = relay.Function([i, j], z, ret_type=relay.TensorType([], "bool")) 44 | i_data = relay.const(0, "int32") 45 | j_data = relay.const(0, "int32") 46 | 47 | def test_subtract(): 48 | i = relay.var("i", shape=[], dtype="int32") 49 | sub = relay.subtract(i, relay.const(1, dtype="int32")) 50 | func = relay.Function([i], sub, ret_type=relay.TensorType([], "int32")) 51 | i_data = np.array(1, dtype="int32") 52 | 53 | def test_ref(): 54 | mod = tvm.IRModule() 55 | three_with_ref = relay.GlobalVar("three_with_ref") 56 | i = relay.Var("i") 57 | iv = relay.Var("iv") 58 | u = relay.Var("u") 59 | uv = relay.Var("uv") 60 | body = relay.add(iv, uv) 61 | body = relay.Let(uv, relay.RefRead(i), body) 62 | body = relay.Let(u, relay.RefWrite(i, relay.const(2)), body) 63 | body = relay.Let(iv, relay.RefRead(i), body) 64 | body = relay.Let(i, relay.RefCreate(relay.const(1)), body) 65 | mod[three_with_ref] = relay.Function([], body) 66 | 67 | def test_binds(): 68 | x = relay.var("x") 69 | y = relay.add(x, x) 70 | intrp = create_executor("debug") 71 | xx = np.ones((10, 20)) 72 | res = intrp.evaluate(y, binds={x: xx}).asnumpy() 73 | tvm.testing.assert_allclose(xx + xx, res) 74 | 75 | def test_kwargs_params(): 76 | x = relay.var("x", shape=(1, 10)) 77 | y = relay.var("y", shape=(1, 10)) 78 | z = relay.var("z", shape=(1, 10)) 79 | f = relay.Function([x, y, z], x + y + z) 80 | x_data = np.random.rand(1, 10).astype("float32") 81 | y_data = np.random.rand(1, 10).astype("float32") 82 | z_data = np.random.rand(1, 10).astype("float32") 83 | params = {"y": y_data, "z": z_data} 84 | intrp = create_executor("debug") 85 | res = intrp.evaluate(f)(x_data, **params) 86 | tvm.testing.assert_allclose(res.asnumpy(), x_data + y_data + z_data) 87 | 88 | def test_tuple_passing(): 89 | x = relay.var( 90 | "x", 91 | type_annotation=relay.ty.TupleType( 92 | [relay.ty.TensorType((), "int64"), relay.ty.TensorType((), "int64")] 93 | ), 94 | ) 95 | 96 | fn = relay.Function([x], relay.expr.TupleGetItem(x, 0)) 97 | mod = tvm.IRModule({}) 98 | gv = relay.GlobalVar("main") 99 | mod[gv] = fn 100 | mod = relay.transform.InferType()(mod) 101 | 102 | ctx = tvm.cpu() 103 | target = tvm.target.Target("llvm") 104 | exec = relay.create_executor(mod=mod, ctx=ctx, target=target) 105 | f = exec.evaluate(gv) 106 | # First use a Python tuple. 107 | out = f((10, 8)) 108 | tvm.testing.assert_allclose(out.asnumpy(), np.array(10)) 109 | # Second use a tuple value. 
110 | value_tuple = container.tuple_object([nd.array(np.array(11)), nd.array(np.array(12))]) 111 | out = f(value_tuple) 112 | tvm.testing.assert_allclose(out.asnumpy(), np.array(11)) -------------------------------------------------------------------------------- /tests/test_tuning.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import time 3 | 4 | import tvm 5 | from tvm import te 6 | 7 | from tvm import autotvm 8 | from tvm.autotvm.tuner import RandomTuner 9 | 10 | import tvm.testing 11 | 12 | 13 | @autotvm.template("testing/conv2d_no_batching") 14 | def conv2d_no_batching(N, H, W, CI, CO, KH, KW): 15 | """An example template for testing""" 16 | assert N == 1, "Only consider batch_size = 1 in this template" 17 | 18 | data = te.placeholder((N, CI, H, W), name="data") 19 | kernel = te.placeholder((CO, CI, KH, KW), name="kernel") 20 | 21 | rc = te.reduce_axis((0, CI), name="rc") 22 | ry = te.reduce_axis((0, KH), name="ry") 23 | rx = te.reduce_axis((0, KW), name="rx") 24 | 25 | conv = te.compute( 26 | (N, CO, H - KH + 1, W - KW + 1), 27 | lambda nn, ff, yy, xx: te.sum( 28 | data[nn, rc, yy + ry, xx + rx] * kernel[ff, rc, ry, rx], axis=[rc, ry, rx] 29 | ), 30 | tag="conv2d_nchw", 31 | ) 32 | 33 | s = te.create_schedule([conv.op]) 34 | 35 | output = conv 36 | OL = s.cache_write(conv, "local") 37 | 38 | # create cache stage 39 | AA = s.cache_read(data, "shared", [OL]) 40 | WW = s.cache_read(kernel, "shared", [OL]) 41 | AL = s.cache_read(AA, "local", [OL]) 42 | WL = s.cache_read(WW, "local", [OL]) 43 | 44 | # tile and bind spatial axes 45 | n, f, y, x = s[output].op.axis 46 | cfg = autotvm.get_config() 47 | cfg.define_split("tile_f", cfg.axis(f), num_outputs=4) 48 | cfg.define_split("tile_y", cfg.axis(y), num_outputs=4) 49 | cfg.define_split("tile_x", cfg.axis(x), num_outputs=4) 50 | bf, vf, tf, fi = cfg["tile_f"].apply(s, output, f) 51 | by, vy, ty, yi = cfg["tile_y"].apply(s, output, y) 52 | bx, vx, tx, xi = cfg["tile_x"].apply(s, output, x) 53 | kernel_scope = n # this is the scope to attach global config inside this kernel 54 | 55 | s[output].bind(bf, te.thread_axis("blockIdx.z")) 56 | s[output].bind(by, te.thread_axis("blockIdx.y")) 57 | s[output].bind(bx, te.thread_axis("blockIdx.x")) 58 | s[output].bind(vf, te.thread_axis("vthread")) 59 | s[output].bind(vy, te.thread_axis("vthread")) 60 | s[output].bind(vx, te.thread_axis("vthread")) 61 | s[output].bind(tf, te.thread_axis("threadIdx.z")) 62 | s[output].bind(ty, te.thread_axis("threadIdx.y")) 63 | s[output].bind(tx, te.thread_axis("threadIdx.x")) 64 | s[output].reorder(n, bf, by, bx, vf, vy, vx, tf, ty, tx, fi, yi, xi) 65 | s[OL].compute_at(s[output], tx) 66 | 67 | # tile and bind reduction axes 68 | n, f, y, x = s[OL].op.axis 69 | rc, ry, rx = s[OL].op.reduce_axis 70 | cfg.define_split("tile_rc", cfg.axis(rc), num_outputs=3) 71 | cfg.define_split("tile_ry", cfg.axis(ry), num_outputs=3) 72 | cfg.define_split("tile_rx", cfg.axis(rx), num_outputs=3) 73 | rco, rcm, rci = cfg["tile_rc"].apply(s, OL, rc) 74 | ryo, rym, ryi = cfg["tile_rx"].apply(s, OL, ry) 75 | rxo, rxm, rxi = cfg["tile_ry"].apply(s, OL, rx) 76 | s[OL].reorder(rco, ryo, rxo, rcm, rym, rxm, rci, ryi, rxi, n, f, y, x) 77 | 78 | s[AA].compute_at(s[OL], rxo) 79 | s[WW].compute_at(s[OL], rxo) 80 | s[AL].compute_at(s[OL], rxm) 81 | s[WL].compute_at(s[OL], rxm) 82 | 83 | # cooperative fetching 84 | for load in [AA, WW]: 85 | n, f, y, x = s[load].op.axis 86 | fused = s[load].fuse(n, f, y, x) 87 | tz, fused = 
s[load].split(fused, nparts=cfg["tile_f"].size[2]) 88 | ty, fused = s[load].split(fused, nparts=cfg["tile_y"].size[2]) 89 | tx, fused = s[load].split(fused, nparts=cfg["tile_x"].size[2]) 90 | s[load].bind(tz, te.thread_axis("threadIdx.z")) 91 | s[load].bind(ty, te.thread_axis("threadIdx.y")) 92 | s[load].bind(tx, te.thread_axis("threadIdx.x")) 93 | 94 | # tune unroll 95 | cfg.define_knob("auto_unroll_max_step", [0, 512, 1500]) 96 | cfg.define_knob("unroll_explicit", [0, 1]) 97 | s[output].pragma(kernel_scope, "auto_unroll_max_step", cfg["auto_unroll_max_step"].val) 98 | s[output].pragma(kernel_scope, "unroll_explicit", cfg["unroll_explicit"].val) 99 | 100 | return s, [data, kernel, conv] 101 | 102 | 103 | def get_sample_task(target=tvm.target.cuda(), target_host=None): 104 | """return a sample task for testing""" 105 | task = autotvm.task.create( 106 | "testing/conv2d_no_batching", 107 | args=(1, 7, 7, 512, 512, 3, 3), 108 | target=target, 109 | target_host=target_host, 110 | ) 111 | return task, target 112 | 113 | 114 | @tvm.testing.parametrize_targets("cuda", "opencl") 115 | def test_tuning(target, ctx): 116 | # init task 117 | task, target = get_sample_task(target, None) 118 | logging.info("%s", task.config_space) 119 | 120 | measure_option = autotvm.measure_option(autotvm.LocalBuilder(), autotvm.LocalRunner()) 121 | 122 | tuner = RandomTuner(task) 123 | tuner.tune(n_trial=20, measure_option=measure_option) -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 1. TVMfuzz 2 | 3 | 4 | ## introduction 5 | 6 | TVMfuzz is a demo project for fuzzing TVM, a widely-used Deep Learning Compiler, based on the findings in **A Comprehensive Study of Deep Learning Compiler Bugs**. TVMfuzz is capable of analyzing the interrelationships among statements and building test programs from the existing test files in TVM. 7 | 8 | This project involves only 3 folders and 1 script. 9 | 10 | + buggyFile: includes 8 bug-triggering programs found by TVMfuzz 11 | + tests: includes 53 effective test files in TVM for analysis 12 | + TVMfuzz: includes all the implementation of the major features and functions of TVMfuzz 13 | + run.py: the script for building test programs 14 | 15 | After running *run.py*, a new folder named *byproduct* will be created, and it contains 3 extra files: 16 | 17 | + asTree.txt: illustrates the ASTs of the test files with the help of the Python package *ast* 18 | + log.txt: records the interrelationships among all involved statements of interest 19 | + program.py: the generated test program 20 | 21 | 22 | 23 | ## Reproducibility 24 | 25 | ### TVMfuzz 26 | 27 | To spare reviewers the laborious task of building experimental environments, we have created a Docker image and pushed it to Docker Hub. The version of TVM installed in our image is 0.7, consistent with the one used in our experiments. 28 | You can download the image and reproduce our experiments on TVMfuzz by following the **[INSTALL.pdf](https://github.com/ShenQingchao/DLCstudy/blob/master/INSTALL.pdf)** file. 29 | 30 | 31 | 32 | # 2. Dataset 33 | 34 | ## introduction 35 | 36 | This dataset is the basic support for the paper: **A Comprehensive Study of Deep Learning Compiler Bugs**. 37 | 38 | We collected the closed and merged pull requests responsible for fixing bugs from their GitHub repositories over a 15-month period. 
In total, we collected 1,361 bug-fixing pull requests and identified 603 bugs, including 318 TVM bugs, 145 Glow bugs, and 140 nGraph bugs. 39 | 40 | All the bugs are recorded in the Excel table, and the bugs of each compiler are displayed in a separate worksheet. 41 | 42 | ## repository 43 | 44 | The repositories corresponding to these three compilers are as follows. Since some model loaders of nGraph are in separate repositories, we also collected the related data over the same time period. 45 | 46 | TVM: https://github.com/apache/tvm 47 | 48 | Glow: https://github.com/pytorch/glow 49 | 50 | nGraph: 51 | 52 | https://github.com/NervanaSystems/ngraph 53 | 54 | https://github.com/NervanaSystems/ngraph-tf (one model loader of nGraph) 55 | 56 | https://github.com/NervanaSystems/ngraph-onnx (one model loader of nGraph) 57 | 58 | ## information 59 | 60 | For each worksheet, the following information is shown: 61 | 62 | - the name of the compiler 63 | - pr_id: short for pull request id 64 | - the title of the pull request (pr) 65 | - the URL directing to this pr 66 | - the date when this pr was published 67 | - the number of comments involved 68 | - the number of files involved and their names 69 | - the symptom of this bug 70 | - the stage of this bug 71 | - the top root cause of this bug 72 | - sub_causes: short for subcategories of root causes 73 | - the related framework of the Model Loading bugs 74 | 75 | ## Plotting 76 | In order to better reproduce the figures in the paper, we provide a drawing script (**drawing_script.R**), which can generate all the graphs in our paper. To view the generated graphs conveniently, we recommend that you use RStudio to run this script. 77 | First, you need to download the **dataset** folder in this repository to your computer. 78 | 79 | Second, you need to run the script (`drawing_script.R`) with RStudio, and then all the figures in our paper will be generated one by one. 80 | 81 | Notes: 82 | 1. The dataset file (**dataset.xlsx**) should be placed in the same directory as the **drawing_script.R** file. 83 | 2. If the run crashes with the message "\`path\` does not exist: ‘dataset.xlsx’", you need to set the **working directory** to the source file location. 84 | 85 | ## Citation 86 | Please cite our paper if this work is helpful to you. 
87 | ``` 88 | @inproceedings{10.1145/3468264.3468591, 89 | author = {Shen, Qingchao and Ma, Haoyang and Chen, Junjie and Tian, Yongqiang and Cheung, Shing-Chi and Chen, Xiang}, 90 | title = {A Comprehensive Study of Deep Learning Compiler Bugs}, 91 | year = {2021}, 92 | isbn = {9781450385626}, 93 | publisher = {Association for Computing Machinery}, 94 | address = {New York, NY, USA}, 95 | url = {https://doi.org/10.1145/3468264.3468591}, 96 | doi = {10.1145/3468264.3468591}, 97 | booktitle = {Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering}, 98 | pages = {968–980}, 99 | numpages = {13}, 100 | keywords = {Deep Learning, Compiler Testing, Empirical Study, Deep Learning Compiler Bug}, 101 | location = {Athens, Greece}, 102 | series = {ESEC/FSE 2021} 103 | } 104 | ``` 105 | -------------------------------------------------------------------------------- /tests/test_pass_mac_count.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | from tvm import relay 5 | from tvm.relay import analysis, transform 6 | 7 | 8 | def run_opt_pass(expr, opt_pass): 9 | assert isinstance(opt_pass, tvm.transform.Pass) 10 | mod = tvm.IRModule.from_expr(expr) 11 | mod = opt_pass(mod) 12 | entry = mod["main"] 13 | return entry if isinstance(expr, relay.Function) else entry.body 14 | 15 | 16 | def test_gemm(): 17 | n = 512 18 | k = 1024 19 | m = 256 20 | dshape1 = (n, k) 21 | dshape2 = (m, k) 22 | data1 = relay.var("data1", shape=dshape1) 23 | data2 = relay.var("data2", shape=dshape2) 24 | gemm = relay.nn.dense(data1, data2) 25 | func = relay.Function([data1, data2], relay.Tuple(tvm.runtime.convert([gemm]))) 26 | func = run_opt_pass(func, transform.InferType()) 27 | compute_count = analysis.get_total_mac_number(func) 28 | expect_count = n * m * k 29 | assert compute_count == expect_count 30 | 31 | 32 | def test_conv(): 33 | batch_size = 1 34 | input_channel = 3 35 | h = 224 36 | w = 224 37 | output_channel = 64 38 | kh = 7 39 | kw = 7 40 | h_padding = 1 41 | w_padding = 1 42 | oh = h + h_padding * 2 - kh + 1 43 | ow = w + w_padding * 2 - kw + 1 44 | dshape = (batch_size, input_channel, h, w) 45 | weight = relay.var("weight", shape=(output_channel, input_channel, kh, kw)) 46 | data = relay.var("data", shape=dshape) 47 | conv2d = relay.nn.conv2d( 48 | data, weight, channels=output_channel, kernel_size=(kh, kw), padding=(h_padding, w_padding) 49 | ) 50 | func = relay.Function([data, weight], relay.Tuple(tvm.runtime.convert([conv2d]))) 51 | func = run_opt_pass(func, transform.InferType()) 52 | compute_count = analysis.get_total_mac_number(func) 53 | expect_count = batch_size * input_channel * oh * ow * output_channel * kh * kw 54 | assert compute_count == expect_count 55 | 56 | 57 | def test_simple_network(): 58 | batch_size = 1 59 | dshape = (batch_size, 64, 56, 56) 60 | weight_conv = relay.var("weight_conv", shape=(64, 64, 3, 3)) 61 | data1 = relay.var("data1", shape=dshape) 62 | data2 = relay.var("data2", shape=dshape) 63 | weight_dense = relay.var("weight_dense", shape=(1, 56 * 56 * 64)) 64 | 65 | conv2d_1 = relay.nn.conv2d(data1, weight_conv, channels=64, kernel_size=(3, 3), padding=(1, 1)) 66 | conv2d_2 = relay.nn.conv2d(data2, weight_conv, channels=64, kernel_size=(3, 3), padding=(1, 1)) 67 | add = relay.add(conv2d_1, conv2d_2) 68 | flattened = relay.nn.batch_flatten(add) 69 | dense_1 = relay.nn.dense(flattened, weight_dense) 70 | 
71 | func = relay.Function( 72 | [data1, data2, weight_conv, weight_dense], 73 | relay.Tuple(tvm.runtime.convert([conv2d_1, conv2d_2, dense_1, add, flattened])), 74 | ) 75 | # alter the CONV 2D data layout to test 76 | func = run_opt_pass(func, transform.AlterOpLayout()) 77 | compute_count = analysis.get_total_mac_number(func) 78 | expect_count = 231411712 79 | assert compute_count == expect_count 80 | 81 | 82 | def test_depthwise_conv2d(): 83 | batch_size = 1 84 | dshape = (batch_size, 64, 56, 56) 85 | weight_conv = relay.var("weight_depthwiseconv", shape=(64, 1, 3, 3)) 86 | data1 = relay.var("data1", shape=dshape) 87 | data2 = relay.var("data2", shape=dshape) 88 | depthwise_conv2d_1 = relay.nn.conv2d( 89 | data1, weight_conv, kernel_size=(3, 3), padding=(1, 1), groups=64 90 | ) 91 | depthwise_conv2d_2 = relay.nn.conv2d( 92 | data2, weight_conv, kernel_size=(3, 3), padding=(1, 1), groups=64 93 | ) 94 | add = relay.add(depthwise_conv2d_1, depthwise_conv2d_2) 95 | func = relay.Function( 96 | [data1, data2, weight_conv], 97 | relay.Tuple(tvm.runtime.convert([depthwise_conv2d_1, depthwise_conv2d_2, add])), 98 | ) 99 | func = run_opt_pass(func, transform.InferType()) 100 | compute_count = analysis.get_total_mac_number(func) 101 | assert compute_count == 2 * np.prod(dshape) * 3 * 3 102 | 103 | 104 | def test_conv_2d_transpose(): 105 | batch_size = 1 106 | input_channel = 3 107 | h = 224 108 | w = 224 109 | output_channel = 64 110 | kh = 7 111 | kw = 7 112 | h_padding = 1 113 | w_padding = 1 114 | oh = h - h_padding * 2 + kh - 1 115 | ow = w - w_padding * 2 + kw - 1 116 | dshape = (batch_size, input_channel, h, w) 117 | weight = relay.var("weight", shape=(input_channel, output_channel, kh, kw)) 118 | data = relay.var("data", shape=dshape) 119 | conv2d_transpose = relay.nn.conv2d_transpose( 120 | data, weight, channels=output_channel, kernel_size=(kh, kw), padding=(h_padding, w_padding) 121 | ) 122 | func = relay.Function([data, weight], relay.Tuple(tvm.runtime.convert([conv2d_transpose]))) 123 | func = run_opt_pass(func, transform.InferType()) 124 | compute_count = analysis.get_total_mac_number(func) 125 | expect_count = batch_size * input_channel * oh * ow * output_channel * kh * kw 126 | assert compute_count == expect_count -------------------------------------------------------------------------------- /tests/test_op_qnn_subtract.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | import numpy as np 3 | from tvm import relay 4 | 5 | def qnn_subtract_driver(x_datas, y_datas, golden_outputs, scale_and_zp, data_dtype="uint8"): 6 | # all x, y and golden outputs should be of the same length 7 | assert len(x_datas) == len(y_datas) 8 | assert len(y_datas) == len(golden_outputs) 9 | 10 | x = relay.var("x", shape=(1, 4), dtype=data_dtype) 11 | y = relay.var("y", shape=(1, 4), dtype=data_dtype) 12 | lhs_scale = relay.const(scale_and_zp["lhs_scale"], "float32") 13 | lhs_zp = relay.const(scale_and_zp["lhs_zp"], "int32") 14 | rhs_scale = relay.const(scale_and_zp["rhs_scale"], "float32") 15 | rhs_zp = relay.const(scale_and_zp["rhs_zp"], "int32") 16 | output_scale = relay.const(scale_and_zp["output_scale"], "float32") 17 | output_zp = relay.const(scale_and_zp["output_zp"], "int32") 18 | z = relay.qnn.op.subtract( 19 | lhs=x, 20 | rhs=y, 21 | lhs_scale=lhs_scale, 22 | lhs_zero_point=lhs_zp, 23 | rhs_scale=rhs_scale, 24 | rhs_zero_point=rhs_zp, 25 | output_scale=output_scale, 26 | output_zero_point=output_zp, 27 | ) 28 | func = relay.Function([x, y], z) 29 | 
mod = tvm.IRModule.from_expr(func) 30 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 31 | func = mod["main"] 32 | for i in range(0, len(x_datas)): 33 | x_data = x_datas[i] 34 | y_data = y_datas[i] 35 | golden_output = golden_outputs[i] 36 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 37 | op_res = intrp.evaluate(func)(x_data, y_data) 38 | np.testing.assert_equal(op_res.asnumpy(), golden_output) 39 | 40 | 41 | def test_tflite_same_io_qnn_params(): 42 | scale_and_zp = { 43 | "lhs_scale": 0.00784314, 44 | "lhs_zp": 127, 45 | "rhs_scale": 0.00784314, 46 | "rhs_zp": 127, 47 | "output_scale": 0.00784314, 48 | "output_zp": 127, 49 | } 50 | x_datas = [ 51 | np.array((140, 153, 165, 178)).reshape((1, 4)), 52 | np.array((25, 153, 178, 216)).reshape((1, 4)), 53 | np.array((25, 153, 216, 165)).reshape((1, 4)), 54 | ] 55 | y_datas = [ 56 | np.array((204, 178, 165, 140)).reshape((1, 4)), 57 | np.array((204, 178, 191, 25)).reshape((1, 4)), 58 | np.array((204, 178, 25, 191)).reshape((1, 4)), 59 | ] 60 | golden_outputs = [ 61 | np.array((63, 102, 127, 165)).reshape((1, 4)), 62 | np.array((0, 102, 114, 255)).reshape((1, 4)), 63 | np.array((0, 102, 255, 101)).reshape((1, 4)), 64 | ] 65 | qnn_subtract_driver(x_datas, y_datas, golden_outputs, scale_and_zp) 66 | 67 | 68 | def test_tflite_different_io_qnn_params(): 69 | scale_and_zp = { 70 | "lhs_scale": 0.0156863, 71 | "lhs_zp": 127, 72 | "rhs_scale": 0.0117647, 73 | "rhs_zp": 85, 74 | "output_scale": 0.0235294, 75 | "output_zp": 128, 76 | } 77 | x_datas = [ 78 | np.array((76, 140, 153, 172)).reshape((1, 4)), 79 | np.array((133, 140, 146, 153)).reshape((1, 4)), 80 | np.array((76, 140, 172, 146)).reshape((1, 4)), 81 | ] 82 | y_datas = [ 83 | np.array((136, 119, 128, 17)).reshape((1, 4)), 84 | np.array((136, 119, 111, 94)).reshape((1, 4)), 85 | np.array((136, 119, 17, 128)).reshape((1, 4)), 86 | ] 87 | golden_outputs = [ 88 | np.array((68, 120, 123, 192)).reshape((1, 4)), 89 | np.array((106, 120, 128, 140)).reshape((1, 4)), 90 | np.array((68, 120, 192, 119)).reshape((1, 4)), 91 | ] 92 | qnn_subtract_driver(x_datas, y_datas, golden_outputs, scale_and_zp) 93 | 94 | 95 | def test_saturation(): 96 | # Same params 97 | scale_and_zp = { 98 | "lhs_scale": 0.125, 99 | "lhs_zp": 0, 100 | "rhs_scale": 0.125, 101 | "rhs_zp": 0, 102 | "output_scale": 0.125, 103 | "output_zp": 0, 104 | } 105 | x_data = [np.array((255, 1, 1, 0)).reshape((1, 4))] 106 | y_data = [np.array((255, 255, 128, 0)).reshape((1, 4))] 107 | golden_output = [np.array((0, 0, 0, 0)).reshape((1, 4))] 108 | qnn_subtract_driver(x_data, y_data, golden_output, scale_and_zp) 109 | 110 | # Same params, different scale 111 | scale_and_zp = { 112 | "lhs_scale": 0.125, 113 | "lhs_zp": 0, 114 | "rhs_scale": 0.125, 115 | "rhs_zp": 0, 116 | "output_scale": 0.25, 117 | "output_zp": 0, 118 | } 119 | x_data = [np.array((255, 1, 200, 0)).reshape((1, 4))] 120 | y_data = [np.array((255, 255, 127, 0)).reshape((1, 4))] 121 | golden_output = [np.array((0, 0, 36, 0)).reshape((1, 4))] 122 | qnn_subtract_driver(x_data, y_data, golden_output, scale_and_zp) 123 | 124 | # All params different 125 | scale_and_zp = { 126 | "lhs_scale": 0.5, 127 | "lhs_zp": 0, 128 | "rhs_scale": 0.25, 129 | "rhs_zp": 0, 130 | "output_scale": 0.125, 131 | "output_zp": 0, 132 | } 133 | x_data = [np.array((255, 0, 1, 0)).reshape((1, 4))] 134 | y_data = [np.array((0, 128, 64, 0)).reshape((1, 4))] 135 | golden_output = [np.array((255, 0, 0, 0)).reshape((1, 4))] 136 | qnn_subtract_driver(x_data, y_data, golden_output, scale_and_zp) 
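# (Note: a rough hand check of the first case in test_saturation above, assuming the
# usual qnn requantization q_out = clip(round((q_x - zp_x) * s_x / s_out
# - (q_y - zp_y) * s_y / s_out) + zp_out, 0, 255) for a uint8 output: with all scales
# equal to 0.125 and all zero points 0, this reduces to clip(q_x - q_y, 0, 255), so
# (255, 1, 1, 0) - (255, 255, 128, 0) = (0, -254, -127, 0) clamps to (0, 0, 0, 0),
# which matches golden_output.)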
137 | 138 | 139 | if __name__ == "__main__": 140 | test_tflite_same_io_qnn_params() 141 | test_tflite_different_io_qnn_params() 142 | test_saturation() 143 | -------------------------------------------------------------------------------- /tests/test_dynamic_op_level10.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import relay 4 | from tvm.relay.testing import run_infer_type 5 | import tvm.topi.testing 6 | import random 7 | import tvm.testing 8 | 9 | 10 | @tvm.testing.uses_gpu 11 | def test_broadcast_to(): 12 | def verify_more_dynamic_broadcast_to(x_shape, out_shape): 13 | rank = len(out_shape) 14 | dtype = "float32" 15 | shape_type = "int64" 16 | reshape_shape = relay.Var("shape", relay.ty.TensorType((len(x_shape),), shape_type)) 17 | broadcast_shape = relay.Var("shape", relay.ty.TensorType((rank,), shape_type)) 18 | x = relay.Var("x", relay.ty.TensorType((np.prod(x_shape),), dtype)) 19 | r = relay.reshape(x, reshape_shape) 20 | z = relay.broadcast_to(r, broadcast_shape) 21 | 22 | func = relay.Function([x, reshape_shape, broadcast_shape], z) 23 | 24 | x = np.random.uniform(size=np.prod(x_shape)).astype(dtype) 25 | ref_res = np.broadcast_to(np.reshape(x, x_shape), out_shape) 26 | for target, ctx in tvm.testing.enabled_targets(): 27 | for kind in ["vm", "debug"]: 28 | mod = tvm.ir.IRModule.from_expr(func) 29 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 30 | op_res = intrp.evaluate(func)( 31 | x, np.array(x_shape).astype(shape_type), np.array(out_shape).astype(shape_type) 32 | ) 33 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-5) 34 | 35 | verify_more_dynamic_broadcast_to((4, 3), (3, 4, 3)) 36 | 37 | def verify_broadcast_to(x_shape, out_shape): 38 | rank = len(out_shape) 39 | dtype = "float32" 40 | shape_type = "int64" 41 | dyn_shape = relay.Var("shape", relay.ty.TensorType((rank,), shape_type)) 42 | x = relay.Var("x", relay.ty.TensorType(x_shape, dtype)) 43 | z = relay.broadcast_to(x, dyn_shape) 44 | zz = run_infer_type(z) 45 | 46 | assert zz.checked_type == relay.ty.TensorType((relay.Any(),) * rank, dtype) 47 | 48 | func = relay.Function([x, dyn_shape], z) 49 | 50 | x = np.random.uniform(size=x_shape).astype(dtype) 51 | ref_res = np.broadcast_to(x, out_shape) 52 | for target, ctx in tvm.testing.enabled_targets(): 53 | for kind in ["vm", "debug"]: 54 | mod = tvm.ir.IRModule.from_expr(func) 55 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 56 | op_res = intrp.evaluate(func)(x, np.array(out_shape).astype(shape_type)) 57 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-5) 58 | 59 | verify_broadcast_to((1,), (1, 1, 1)) 60 | verify_broadcast_to((1, 1), (4, 1, 1)) 61 | verify_broadcast_to((4, 1), (1, 4, 3)) 62 | 63 | 64 | @tvm.testing.uses_gpu 65 | def test_dyn_broadcast_to(): 66 | dtype = "uint8" 67 | rank = 3 68 | shape_type = "int64" 69 | dyn_shape = relay.Var("shape", relay.ty.TensorType((rank,), shape_type)) 70 | x_shape = (1,) 71 | x = relay.Var("x", relay.ty.TensorType(x_shape, dtype)) 72 | z = relay.broadcast_to(x, dyn_shape) 73 | zz = run_infer_type(z) 74 | 75 | assert zz.checked_type == relay.ty.TensorType((relay.Any(),) * rank, dtype) 76 | 77 | func = relay.Function([x, dyn_shape], z) 78 | 79 | x = np.random.uniform(size=x_shape).astype(dtype) 80 | dyn_shape = (1,) * rank 81 | ref_res = np.broadcast_to(x, dyn_shape) 82 | for target, ctx in tvm.testing.enabled_targets(): 83 | for kind in ["vm", "debug"]: 84 | mod = 
tvm.ir.IRModule.from_expr(func) 85 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 86 | op_res = intrp.evaluate(func)(x, np.array(dyn_shape).astype(shape_type)) 87 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-5) 88 | 89 | 90 | @tvm.testing.uses_gpu 91 | def test_dyn_one_hot(): 92 | def _get_oshape(indices_shape, depth, axis): 93 | oshape = [] 94 | true_axis = len(indices_shape) if axis == -1 else axis 95 | ndim = len(indices_shape) + 1 96 | indices_index = 0 97 | for i in range(0, ndim): 98 | if i == true_axis: 99 | oshape.append(depth) 100 | else: 101 | oshape.append(indices_shape[indices_index]) 102 | indices_index += 1 103 | 104 | return oshape 105 | 106 | def _verify(indices_shape, depth, on_value, off_value, axis, dtype): 107 | indices = relay.var("indices", relay.TensorType(indices_shape, "int32")) 108 | depth_var = relay.var("depth", relay.TensorType((), "int32")) 109 | on_value_const = relay.const(on_value) 110 | off_value_const = relay.const(off_value) 111 | out = relay.one_hot(indices, on_value_const, off_value_const, depth_var, axis, dtype) 112 | func = relay.Function([indices, depth_var], out) 113 | indices_np = np.random.randint(0, depth, size=indices_shape).astype("int32") 114 | out_np = tvm.topi.testing.one_hot(indices_np, on_value, off_value, depth, axis, dtype) 115 | for target, ctx in tvm.testing.enabled_targets(): 116 | for kind in ["vm", "debug"]: 117 | mod = tvm.ir.IRModule.from_expr(func) 118 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 119 | out_relay = intrp.evaluate()(indices_np, np.array(depth).astype("int32")) 120 | tvm.testing.assert_allclose(out_relay.asnumpy(), out_np) 121 | 122 | _verify((3,), 3, 1, 0, -1, "int32") 123 | _verify((3,), 3, 1.0, 0.0, -1, "float32") 124 | _verify((2, 2), 5, 2, -2, 0, "int32") 125 | _verify((2, 2), 5, 0.5, -0.5, 1, "float32") 126 | _verify((3, 2, 4, 5), 6, 1, 0, 1, "int32") 127 | _verify((3, 2, 4, 5), 6, 1.0, 0.0, 0, "float32") -------------------------------------------------------------------------------- /tests/test_pass_legalize.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | 5 | from tvm import relay 6 | from tvm.contrib import graph_runtime 7 | from tvm.relay import transform, analysis 8 | from tvm.relay.testing.temp_op_attr import TempOpAttr 9 | 10 | 11 | def run_opt_pass(expr, passes): 12 | passes = passes if isinstance(passes, list) else [passes] 13 | mod = tvm.IRModule.from_expr(expr) 14 | seq = tvm.transform.Sequential(passes) 15 | with tvm.transform.PassContext(opt_level=3): 16 | mod = seq(mod) 17 | entry = mod["main"] 18 | return entry if isinstance(expr, relay.Function) else entry.body 19 | 20 | 21 | def test_legalize(): 22 | """Test directly replacing an operator with a new one""" 23 | 24 | def before(): 25 | x = relay.var("x", shape=(1, 64, 56, 56)) 26 | weight = relay.var("weight", shape=(64, 64, 3, 3)) 27 | y = relay.nn.conv2d(x, weight, channels=64, kernel_size=(3, 3), padding=(1, 1)) 28 | y = relay.nn.relu(y) 29 | y = relay.Function([x, weight], y) 30 | return y 31 | 32 | def legalize_conv2d(attrs, inputs, types): 33 | data, weight = inputs 34 | weight = relay.multiply(weight, relay.const(2.0, "float32")) 35 | return relay.nn.conv2d(data, weight, **attrs) 36 | 37 | def expected(): 38 | x = relay.var("x", shape=(1, 64, 56, 56)) 39 | weight = relay.var("weight", shape=(64, 64, 3, 3)) 40 | y = relay.nn.conv2d( 41 | x, 42 | relay.multiply(weight, 
relay.const(2.0, "float32")), 43 | channels=64, 44 | kernel_size=(3, 3), 45 | padding=(1, 1), 46 | ) 47 | y = relay.nn.relu(y) 48 | y = relay.Function([x, weight], y) 49 | return y 50 | 51 | with TempOpAttr("nn.conv2d", "FTVMLegalize", legalize_conv2d): 52 | a = before() 53 | a = run_opt_pass(a, transform.Legalize()) 54 | b = run_opt_pass(expected(), transform.InferType()) 55 | 56 | assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a) 57 | 58 | 59 | def test_legalize_none(): 60 | """Test doing nothing by returning 'None' """ 61 | 62 | def before(): 63 | x = relay.var("x", shape=(1, 64, 56, 56)) 64 | y = relay.nn.global_max_pool2d(x) 65 | y = relay.Function([x], y) 66 | return y 67 | 68 | called = [False] 69 | 70 | def legalize_conv2d(attrs, inputs, types): 71 | called[0] = True 72 | return None 73 | 74 | with TempOpAttr("nn.global_max_pool2d", "FTVMLegalize", legalize_conv2d): 75 | a = before() 76 | a = run_opt_pass(a, transform.Legalize()) 77 | b = run_opt_pass(before(), transform.InferType()) 78 | 79 | assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a) 80 | assert called[0] 81 | 82 | 83 | def test_legalize_multiple_ops(): 84 | """Test directly replacing an operator with a new one""" 85 | 86 | def before(): 87 | x = relay.var("x", shape=(1, 64, 56, 56)) 88 | weight = relay.var("weight", shape=(64, 64, 3, 3)) 89 | y = relay.nn.conv2d(x, weight, channels=64, kernel_size=(3, 3), padding=(1, 1)) 90 | y = relay.nn.relu(y) 91 | y = relay.Function([x, weight], y) 92 | return y 93 | 94 | def legalize_conv2d(attrs, inputs, types): 95 | data, weight = inputs 96 | weight = relay.multiply(weight, relay.const(2.0, "float32")) 97 | return relay.nn.conv2d(data, weight, **attrs) 98 | 99 | def legalize_relu(attrs, inputs, types): 100 | data = inputs[0] 101 | add = relay.add(tvm.relay.const(0, "float32"), data) 102 | return relay.nn.relu(add) 103 | 104 | def expected(): 105 | x = relay.var("x", shape=(1, 64, 56, 56)) 106 | weight = relay.var("weight", shape=(64, 64, 3, 3)) 107 | y = relay.nn.conv2d( 108 | x, 109 | relay.multiply(weight, relay.const(2.0, "float32")), 110 | channels=64, 111 | kernel_size=(3, 3), 112 | padding=(1, 1), 113 | ) 114 | y = relay.add(tvm.relay.const(0, "float32"), y) 115 | y = relay.nn.relu(y) 116 | y = relay.Function([x, weight], y) 117 | return y 118 | 119 | with TempOpAttr("nn.conv2d", "FTVMLegalize", legalize_conv2d): 120 | with TempOpAttr("nn.relu", "FTVMLegalize", legalize_relu): 121 | a = before() 122 | a = run_opt_pass(a, transform.Legalize()) 123 | b = run_opt_pass(expected(), transform.InferType()) 124 | 125 | assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a) 126 | 127 | 128 | def test_legalize_multi_input(): 129 | """Test directly replacing an operator with a new one""" 130 | 131 | def before(): 132 | x = relay.var("x", shape=(1, 64, 56, 56)) 133 | y = relay.var("y", shape=(1, 64, 56, 20)) 134 | z = relay.var("z", shape=(1, 64, 56, 10)) 135 | func = relay.concatenate([x, y, z], axis=3) 136 | func = relay.Function([x, y, z], func) 137 | return func 138 | 139 | def legalize_concatenate(attrs, inputs, types): 140 | # Check that the correct multi-input case is handled. 
141 | assert len(inputs) == 1 142 | assert isinstance(inputs[0], tvm.relay.expr.Tuple) 143 | assert len(types) == 2 144 | assert isinstance(types[0], tvm.relay.ty.TupleType) 145 | assert isinstance(types[1], tvm.relay.ty.TensorType) 146 | return None 147 | 148 | def expected(): 149 | x = relay.var("x", shape=(1, 64, 56, 56)) 150 | y = relay.var("y", shape=(1, 64, 56, 20)) 151 | z = relay.var("z", shape=(1, 64, 56, 10)) 152 | func = relay.concatenate([x, y, z], axis=3) 153 | func = relay.Function([x, y, z], func) 154 | return func 155 | 156 | with TempOpAttr("concatenate", "FTVMLegalize", legalize_concatenate): 157 | a = before() 158 | a = run_opt_pass(a, transform.Legalize()) 159 | b = run_opt_pass(expected(), transform.InferType()) 160 | 161 | assert tvm.ir.structural_equal(a, b), "Actual = \n" + str(a) -------------------------------------------------------------------------------- /tests/test_analysis_basic_block_normal_form.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import tvm 3 | from tvm import relay 4 | from tvm.relay.analysis import check_basic_block_normal_form 5 | 6 | 7 | def test_one_block(): 8 | x = relay.var("x") 9 | y = relay.add(x, x) 10 | z = relay.add(x, y) 11 | check_basic_block_normal_form(z) 12 | 13 | def test_let(): 14 | x = relay.var("x") 15 | y = relay.var("y") 16 | body = relay.Let(y, x, y) 17 | check_basic_block_normal_form(body) 18 | 19 | @pytest.mark.xfail(raises=tvm.error.TVMError) 20 | def test_invalid_if(): 21 | cond = relay.var("cond", dtype="bool", shape=()) 22 | shared = relay.var("shared") 23 | true_branch = shared 24 | false_branch = relay.add(shared, shared) 25 | body = relay.If(cond, true_branch, false_branch) 26 | """ 27 | The program below violates basic block normal form, as the scope of %shared 28 | is ambiguous and should not be in that of true branch. 29 | 30 | free_var %cond: bool 31 | if (%cond) { 32 | free_var %shared 33 | %shared 34 | } else { 35 | add(%shared, %shared) 36 | } 37 | """ 38 | check_basic_block_normal_form(body) 39 | 40 | 41 | def test_valid_if(): 42 | cond = relay.var("cond", dtype="bool", shape=()) 43 | shared = relay.var("shared") 44 | true_branch = shared 45 | false_branch = relay.add(shared, shared) 46 | body = relay.If(cond, true_branch, false_branch) 47 | shared_bound = relay.var("shared_bound", shape=(1,), dtype="float32") 48 | body = relay.Let(shared, shared_bound, body) 49 | """ 50 | The program below uses let binding to control the scope of %shared, which 51 | follows the basic block normal form. 
52 | 53 | free_var %shared_bound: Tensor[(1), float32] 54 | let %shared = %shared_bound; 55 | free_var %cond: bool 56 | if (%cond) { 57 | %shared 58 | } else { 59 | add(%shared, %shared) 60 | } 61 | """ 62 | check_basic_block_normal_form(body) 63 | 64 | 65 | @pytest.mark.xfail(raises=tvm.error.TVMError) 66 | def test_invalid_if2(): 67 | """ 68 | fn (%x: float32) { 69 | %0 = equal(%x, 2f); 70 | if (%0) { 71 | %1 = add(%x, 1f); 72 | multiply(%1, 2f) 73 | } else { 74 | multiply(%1, 1f) 75 | } 76 | } 77 | """ 78 | x = relay.var("x", shape=(), dtype="float32") 79 | one = relay.const(1, dtype="float32") 80 | two = relay.const(2, dtype="float32") 81 | v1 = relay.add(x, one) 82 | v2 = relay.equal(x, two) 83 | true_branch = relay.multiply(v1, two) 84 | false_branch = relay.multiply(v1, one) 85 | body = relay.If(v2, true_branch, false_branch) 86 | func = relay.Function([x], body) 87 | check_basic_block_normal_form(func) 88 | 89 | 90 | def test_valid_if2(): 91 | """ 92 | fn (%x: float32) { 93 | let %v1 = add(%x, 1f); 94 | %0 = equal(%x, 2f); 95 | if (%0) { 96 | multiply(%v1, 2f) 97 | } else { 98 | multiply(%v1, 1f) 99 | } 100 | } 101 | """ 102 | x = relay.var("x", shape=(), dtype="float32") 103 | one = relay.const(1, dtype="float32") 104 | two = relay.const(2, dtype="float32") 105 | v1 = relay.var("v1") 106 | v2 = relay.equal(x, two) 107 | true_branch = relay.multiply(v1, two) 108 | false_branch = relay.multiply(v1, one) 109 | body = relay.If(v2, true_branch, false_branch) 110 | body = relay.Let(v1, relay.add(x, one), body) 111 | func = relay.Function([x], body) 112 | check_basic_block_normal_form(func) 113 | 114 | 115 | @pytest.mark.xfail(raises=tvm.error.TVMError) 116 | def test_func(): 117 | x = relay.var("x", shape=(1,), dtype="float32") # , a) 118 | y = relay.var("y", shape=(1,), dtype="float32") # , a) 119 | z = relay.var("z", shape=(1,), dtype="float32") # , a) 120 | x2 = relay.add(x, x) 121 | func_a = relay.Function([y], relay.add(x2, y)) # , a, [a]) 122 | func_b = relay.Function([z], relay.add(x2, z)) # , a, [a]) 123 | body = relay.Tuple([func_a, func_b]) 124 | body = relay.Function([x], body) 125 | """ 126 | fn (%x: Tensor[(1), float32]) { 127 | %1 = fn (%y: Tensor[(1), float32]) { 128 | %0 = add(%x, %x); 129 | add(%0, %y) 130 | }; 131 | %2 = fn (%z: Tensor[(1), float32]) { 132 | add(%0, %z) 133 | }; 134 | (%1, %2) 135 | } 136 | """ 137 | check_basic_block_normal_form(body) 138 | 139 | 140 | @pytest.mark.xfail(raises=tvm.error.TVMError) 141 | def test_higher_order_return(): 142 | x = relay.var("x", shape=(1,), dtype="float32") # , a) 143 | y = relay.var("y", shape=(1,), dtype="float32") # , a) 144 | z = relay.var("z", shape=(1,), dtype="float32") # , a) 145 | x2 = relay.add(x, x) 146 | func_a = relay.Function([y], relay.add(x2, y)) # , a, [a]) 147 | func_b = relay.Function([z], relay.add(x2, z)) # , a, [a]) 148 | body = relay.Tuple([func_a, func_b]) 149 | body = relay.Function([x], body) 150 | """ 151 | fn (%x: Tensor[(1), float32]) { 152 | %1 = fn (%y: Tensor[(1), float32]) { 153 | %0 = add(%x, %x); 154 | add(%0, %y) 155 | }; 156 | %2 = fn (%z: Tensor[(1), float32]) { 157 | add(%0, %z) 158 | }; 159 | (%1, %2) 160 | } 161 | """ 162 | check_basic_block_normal_form(body) 163 | 164 | 165 | @pytest.mark.xfail(raises=tvm.error.TVMError) 166 | def test_higher_order_nested(): 167 | x = relay.var("x", dtype="float32", shape=(1,)) 168 | s = relay.var("s", dtype="float32", shape=(1,)) 169 | shared = relay.add(s, s) 170 | func_true = relay.Function([x], relay.add(x, shared)) 171 | choice_t = 
relay.FuncType([], relay.scalar_type("bool")) 172 | f = relay.Var("f", choice_t) 173 | z = relay.Var("z") 174 | body = relay.If(f(), func_true, relay.Function([z], relay.add(z, shared))) 175 | top = relay.Function([f, s], body) 176 | """ 177 | fn (%f: fn () -> bool, %s: Tensor[(1), float32]) { 178 | %0 = %f(); 179 | if (%0) { 180 | fn (%x: Tensor[(1), float32]) { 181 | %1 = add(%s, %s); 182 | add(%x, %1) 183 | } 184 | } else { 185 | fn (%z) { 186 | add(%z, %1) 187 | } 188 | } 189 | } 190 | """ 191 | check_basic_block_normal_form(top) -------------------------------------------------------------------------------- /tests/test_json_compact.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import relay 3 | from tvm import te 4 | import json 5 | 6 | 7 | def test_type_var(): 8 | # type var in 0.6 9 | nodes = [ 10 | {"type_key": ""}, 11 | {"type_key": "relay.TypeVar", "attrs": {"kind": "0", "span": "0", "var": "2"}}, 12 | {"type_key": "Variable", "attrs": {"dtype": "int32", "name": "in0"}}, 13 | ] 14 | data = { 15 | "root": 1, 16 | "nodes": nodes, 17 | "attrs": {"tvm_version": "0.6.0"}, 18 | "b64ndarrays": [], 19 | } 20 | tvar = tvm.ir.load_json(json.dumps(data)) 21 | assert isinstance(tvar, tvm.ir.TypeVar) 22 | assert tvar.name_hint == "in0" 23 | nodes[1]["type_key"] = "relay.GlobalTypeVar" 24 | tvar = tvm.ir.load_json(json.dumps(data)) 25 | assert isinstance(tvar, tvm.ir.GlobalTypeVar) 26 | assert tvar.name_hint == "in0" 27 | 28 | 29 | def test_var(): 30 | # type var in 0.6 31 | nodes = [ 32 | {"type_key": ""}, 33 | { 34 | "type_key": "relay.Var", 35 | "attrs": {"_checked_type_": "0", "span": "0", "type_annotation": "0", "vid": "2"}, 36 | }, 37 | {"type_key": "relay.Id", "attrs": {"name_hint": "a3"}}, 38 | {"type_key": "relay.TensorType", "attrs": {"dtype": "float32", "shape": "4", "span": "0"}}, 39 | {"type_key": "Array", "data": [5, 6]}, 40 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "16"}}, 41 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "8"}}, 42 | ] 43 | data = { 44 | "root": 1, 45 | "nodes": nodes, 46 | "attrs": {"tvm_version": "0.6.0"}, 47 | "b64ndarrays": [], 48 | } 49 | tvar = tvm.ir.load_json(json.dumps(data)) 50 | assert isinstance(tvar, relay.Var) 51 | assert tvar.name_hint == "a3" 52 | 53 | 54 | def test_incomplete_type(): 55 | nodes = [ 56 | {"type_key": ""}, 57 | {"type_key": "relay.IncompleteType", "attrs": {"kind": "0", "span": "0"}}, 58 | ] 59 | data = { 60 | "root": 1, 61 | "nodes": nodes, 62 | "attrs": {"tvm_version": "0.6.0"}, 63 | "b64ndarrays": [], 64 | } 65 | tvar = tvm.ir.load_json(json.dumps(data)) 66 | assert isinstance(tvar, tvm.ir.IncompleteType) 67 | 68 | 69 | def test_func_tuple_type(): 70 | nodes = [ 71 | {"type_key": ""}, 72 | { 73 | "type_key": "relay.FuncType", 74 | "attrs": { 75 | "arg_types": "2", 76 | "ret_type": "3", 77 | "span": "0", 78 | "type_constraints": "6", 79 | "type_params": "5", 80 | }, 81 | }, 82 | {"type_key": "Array"}, 83 | {"type_key": "relay.TupleType", "attrs": {"fields": "4", "span": "0"}}, 84 | {"type_key": "Array"}, 85 | {"type_key": "Array"}, 86 | {"type_key": "Array"}, 87 | ] 88 | data = { 89 | "root": 1, 90 | "nodes": nodes, 91 | "attrs": {"tvm_version": "0.6.0"}, 92 | "b64ndarrays": [], 93 | } 94 | tvar = tvm.ir.load_json(json.dumps(data)) 95 | assert isinstance(tvar, tvm.ir.FuncType) 96 | 97 | 98 | def test_global_var(): 99 | nodes = [ 100 | {"type_key": ""}, 101 | { 102 | "type_key": "relay.GlobalVar", 103 | "attrs": 
{"_checked_type_": "0", "name_hint": "x", "span": "0"}, 104 | }, 105 | ] 106 | data = { 107 | "root": 1, 108 | "nodes": nodes, 109 | "attrs": {"tvm_version": "0.6.0"}, 110 | "b64ndarrays": [], 111 | } 112 | tvar = tvm.ir.load_json(json.dumps(data)) 113 | assert isinstance(tvar, tvm.ir.GlobalVar) 114 | nodes = [ 115 | {"type_key": ""}, 116 | {"type_key": "GlobalVar", "attrs": {"_checked_type_": "0", "name_hint": "x", "span": "0"}}, 117 | ] 118 | data = { 119 | "root": 1, 120 | "nodes": nodes, 121 | "attrs": {"tvm_version": "0.6.0"}, 122 | "b64ndarrays": [], 123 | } 124 | tvar = tvm.ir.load_json(json.dumps(data)) 125 | assert isinstance(tvar, tvm.ir.GlobalVar) 126 | 127 | 128 | def test_op(): 129 | nodes = [{"type_key": ""}, {"type_key": "relay.Op", "global_key": "nn.conv2d"}] 130 | data = { 131 | "root": 1, 132 | "nodes": nodes, 133 | "attrs": {"tvm_version": "0.6.0"}, 134 | "b64ndarrays": [], 135 | } 136 | op = tvm.ir.load_json(json.dumps(data)) 137 | assert op == relay.op.get("nn.conv2d") 138 | 139 | 140 | def test_tir_var(): 141 | nodes = [ 142 | {"type_key": ""}, 143 | {"type_key": "Variable", "attrs": {"dtype": "int32", "name": "x"}}, 144 | {"type_key": "SizeVar", "attrs": {"dtype": "int32", "name": "y"}}, 145 | ] 146 | data = { 147 | "root": 1, 148 | "nodes": nodes, 149 | "attrs": {"tvm_version": "0.6.0"}, 150 | "b64ndarrays": [], 151 | } 152 | x = tvm.ir.load_json(json.dumps(data)) 153 | assert isinstance(x, tvm.tir.Var) 154 | assert x.name == "x" 155 | data["root"] = 2 156 | y = tvm.ir.load_json(json.dumps(data)) 157 | assert isinstance(y, tvm.tir.SizeVar) 158 | assert y.name == "y" 159 | 160 | 161 | def test_str_map(): 162 | nodes = [ 163 | {"type_key": ""}, 164 | {"type_key": "StrMap", "keys": ["z", "x"], "data": [2, 3]}, 165 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "2"}}, 166 | {"type_key": "Max", "attrs": {"a": "4", "b": "10", "dtype": "int32"}}, 167 | {"type_key": "Add", "attrs": {"a": "5", "b": "9", "dtype": "int32"}}, 168 | {"type_key": "Add", "attrs": {"a": "6", "b": "8", "dtype": "int32"}}, 169 | {"type_key": "tir.Var", "attrs": {"dtype": "int32", "name": "7", "type_annotation": "0"}}, 170 | {"type_key": "runtime.String", "repr_str": "x"}, 171 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "1"}}, 172 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "2"}}, 173 | {"type_key": "IntImm", "attrs": {"dtype": "int32", "value": "100"}}, 174 | ] 175 | data = { 176 | "root": 1, 177 | "nodes": nodes, 178 | "attrs": {"tvm_version": "0.6.0"}, 179 | "b64ndarrays": [], 180 | } 181 | x = tvm.ir.load_json(json.dumps(data)) -------------------------------------------------------------------------------- /tests/test_op_qnn_concatenate.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | import numpy as np 4 | from tvm import relay 5 | from tvm.contrib import graph_runtime 6 | import tvm.topi.testing 7 | 8 | 9 | def test_same_io_qnn_params(): 10 | data_dtype = "int32" 11 | axis = 0 12 | x_data = np.arange(-32, 32, 1).reshape(1, 64).astype(data_dtype) 13 | y_data = np.arange(-64, 64, 2).reshape(1, 64).astype(data_dtype) 14 | x_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 15 | y_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 16 | zero = relay.const(0, "int32") 17 | 18 | x = relay.var("x", shape=(1, 64), dtype=data_dtype) 19 | y = relay.var("y", shape=(1, 64), dtype=data_dtype) 20 | z = relay.qnn.op.concatenate( 21 | (x, y), 22 | 
input_scales=(x_scale, y_scale), 23 | input_zero_points=(zero, zero), 24 | output_scale=y_scale, 25 | output_zero_point=zero, 26 | axis=axis, 27 | ) 28 | 29 | func = relay.Function([x, y], z) 30 | mod = tvm.IRModule.from_expr(func) 31 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 32 | func = mod["main"] 33 | 34 | golden_output = np.concatenate((x_data, y_data), axis=axis) 35 | 36 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 37 | op_res = intrp.evaluate(func)(x_data, y_data) 38 | np.testing.assert_equal(op_res.asnumpy(), golden_output) 39 | 40 | 41 | def test_different_io_qnn_params(): 42 | data_dtype = "int32" 43 | axis = 0 44 | x_data = np.arange(-32, 32, 1).reshape(1, 64).astype(data_dtype) 45 | y_data = np.arange(-64, 64, 2).reshape(1, 64).astype(data_dtype) 46 | 47 | x_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 48 | y_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 49 | x_zero_point = relay.const(3, "int32") 50 | y_zero_point = relay.const(4, "int32") 51 | 52 | x = relay.var("x", shape=(1, 64), dtype=data_dtype) 53 | y = relay.var("y", shape=(1, 64), dtype=data_dtype) 54 | z = relay.qnn.op.concatenate( 55 | (x, y), 56 | input_scales=(x_scale, y_scale), 57 | input_zero_points=(x_zero_point, y_zero_point), 58 | output_scale=y_scale, 59 | output_zero_point=relay.const(1, "int32"), 60 | axis=axis, 61 | ) 62 | 63 | func = relay.Function([x, y], z) 64 | mod = tvm.IRModule.from_expr(func) 65 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 66 | func = mod["main"] 67 | 68 | golden_output = np.concatenate((x_data - 2, y_data - 3), axis=axis) 69 | 70 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 71 | op_res = intrp.evaluate(func)(x_data, y_data) 72 | np.testing.assert_equal(op_res.asnumpy(), golden_output) 73 | 74 | 75 | def test_few_same_io_qnn_params(): 76 | data_dtype = "int32" 77 | axis = 0 78 | x_data = np.arange(-32, 32, 1).reshape(1, 64).astype(data_dtype) 79 | y_data = np.arange(-64, 64, 2).reshape(1, 64).astype(data_dtype) 80 | 81 | x_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 82 | y_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 83 | x_zero_point = relay.const(0, "int32") 84 | y_zero_point = relay.const(1, "int32") 85 | 86 | x = relay.var("x", shape=(1, 64), dtype=data_dtype) 87 | y = relay.var("y", shape=(1, 64), dtype=data_dtype) 88 | z = relay.qnn.op.concatenate( 89 | (x, y), 90 | input_scales=(x_scale, y_scale), 91 | input_zero_points=(x_zero_point, y_zero_point), 92 | output_scale=y_scale, 93 | output_zero_point=relay.const(1, "int32"), 94 | axis=axis, 95 | ) 96 | 97 | func = relay.Function([x, y], z) 98 | mod = tvm.IRModule.from_expr(func) 99 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 100 | func = mod["main"] 101 | 102 | golden_output = np.concatenate((x_data + 1, y_data), axis=axis) 103 | 104 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 105 | op_res = intrp.evaluate(func)(x_data, y_data) 106 | np.testing.assert_equal(op_res.asnumpy(), golden_output) 107 | 108 | 109 | def test_same_i_qnn_params(): 110 | data_dtype = "int32" 111 | axis = 0 112 | x_data = np.arange(-32, 32, 1).reshape(1, 64).astype(data_dtype) 113 | y_data = np.arange(-64, 64, 2).reshape(1, 64).astype(data_dtype) 114 | 115 | x_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 116 | y_scale = relay.const((62 + 64) / (np.power(2, 32) - 1.0), "float32") 117 | x_zero_point = relay.const(0, "int32") 118 | y_zero_point = 
relay.const(0, "int32") 119 | 120 | x = relay.var("x", shape=(1, 64), dtype=data_dtype) 121 | y = relay.var("y", shape=(1, 64), dtype=data_dtype) 122 | z = relay.qnn.op.concatenate( 123 | (x, y), 124 | input_scales=(x_scale, y_scale), 125 | input_zero_points=(x_zero_point, y_zero_point), 126 | output_scale=y_scale, 127 | output_zero_point=relay.const(1, "int32"), 128 | axis=axis, 129 | ) 130 | 131 | func = relay.Function([x, y], z) 132 | mod = tvm.IRModule.from_expr(func) 133 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 134 | func = mod["main"] 135 | 136 | golden_output = np.concatenate((x_data + 1, y_data + 1), axis=axis) 137 | 138 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 139 | op_res = intrp.evaluate(func)(x_data, y_data) 140 | np.testing.assert_equal(op_res.asnumpy(), golden_output) 141 | 142 | 143 | def test_call_input(): 144 | # This tests the case where the input to concatenate is not explicitly a 145 | # tuple node but is instead a call node. 146 | x_data = np.ones(shape=(64,)).astype("uint8") 147 | 148 | x = relay.var("x", shape=(64,), dtype="uint8") 149 | x_scale = relay.const(1, "float32") 150 | y_scale = relay.const(1, "float32") 151 | x_zero_point = relay.const(0, "int32") 152 | y_zero_point = relay.const(0, "int32") 153 | 154 | tup = relay.split(x, 2, axis=0) 155 | z = relay.qnn.op.concatenate( 156 | tup, 157 | input_scales=(x_scale, y_scale), 158 | input_zero_points=(x_zero_point, y_zero_point), 159 | output_scale=y_scale, 160 | output_zero_point=relay.const(0, "int32"), 161 | axis=0, 162 | ) 163 | func = relay.Function([x], z) 164 | 165 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 166 | op_res = intrp.evaluate(func)(x_data) 167 | np.testing.assert_equal(op_res.asnumpy(), x_data) 168 | -------------------------------------------------------------------------------- /tests/test_pass_defunctionalization.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import numpy as np 3 | 4 | import tvm 5 | from tvm import relay 6 | from tvm.relay.backend.interpreter import ConstructorValue 7 | from tvm.relay import transform, ExprVisitor, TypeVisitor 8 | from tvm.relay.testing import Prelude 9 | 10 | # determine if type t is a FuncType or has a nested FuncType 11 | def has_func_type(t): 12 | class FuncTypeVisitor(TypeVisitor): 13 | def __init__(self): 14 | super().__init__() 15 | self.has_func = False 16 | 17 | def visit_func_type(self, ftt): 18 | self.has_func = True 19 | 20 | ftvisitor = FuncTypeVisitor() 21 | ftvisitor.visit(t) 22 | return ftvisitor.has_func 23 | 24 | 25 | # determine whether a program has any higher order functions 26 | # a higher order function is defined as one that: 27 | # - has function type arguments 28 | # - returns a function 29 | def assert_no_higher_order_functions(expr, mod): 30 | class CheckFirstOrderVisitor(ExprVisitor): 31 | def __init__(self, mod): 32 | super().__init__() 33 | self.mod = mod 34 | self.hof = [] 35 | self.visited_gv = set() 36 | 37 | def visit_call(self, call): 38 | is_higher_order = False 39 | # check return type 40 | if has_func_type(call.checked_type): 41 | is_higher_order = True 42 | # check argument types 43 | for a in call.args: 44 | if has_func_type(a.checked_type): 45 | is_higher_order = True 46 | # if it is higher order, save it for debugging later 47 | if is_higher_order: 48 | self.hof.append(call) 49 | super().visit_call(call) 50 | 51 | def visit_global_var(self, gv): 52 | # visit global vars to visit entire 
program 53 | if gv not in self.visited_gv: 54 | self.visited_gv.add(gv) 55 | self.visit(self.mod[gv]) 56 | 57 | mod = transform.InferType()(mod) 58 | check_fo_visitor = CheckFirstOrderVisitor(mod) 59 | check_fo_visitor.visit(expr) 60 | 61 | nl = "\n--------\n" 62 | errmsg = f"""found {len(check_fo_visitor.hof)} higher order functions: 63 | {nl.join(expr.astext() for expr in check_fo_visitor.hof)}""" 64 | 65 | assert len(check_fo_visitor.hof) == 0, errmsg 66 | 67 | 68 | # assert that a program is defunctionalized and returns 69 | # defunctionalized module 70 | # assumes program starts from mod['main'] 71 | def defunctionalized(mod): 72 | mod = transform.InferType()(mod) 73 | mod["main"] = transform.Defunctionalization(mod["main"], mod) 74 | mod = transform.InferType()(mod) 75 | assert_no_higher_order_functions(mod["main"], mod) 76 | 77 | return mod 78 | 79 | 80 | # adt list to python list 81 | def to_list(mod, l): 82 | list = mod.get_global_type_var("List") 83 | list_adt = mod[list] 84 | cons = list_adt.constructors[0] 85 | nil = list_adt.constructors[1] 86 | 87 | assert isinstance(l, ConstructorValue) 88 | val = l 89 | ret = [] 90 | while True: 91 | if val.tag == cons.tag: 92 | ret.append(val.fields[0].asnumpy()) 93 | val = val.fields[1] 94 | else: 95 | assert val.tag == nil.tag 96 | break 97 | return ret 98 | 99 | 100 | # list to adt list 101 | def to_adt_list(mod, arr): 102 | expr = mod["main"] 103 | l = mod.get_global_type_var("List") 104 | list_adt = mod[l] 105 | cons = list_adt.constructors[0] 106 | nil = list_adt.constructors[1] 107 | 108 | li = nil() 109 | for a in arr: 110 | li = cons(relay.const(a), li) 111 | ex = relay.create_executor(mod=mod) 112 | adt = ex.evaluate(li) 113 | mod["main"] = expr 114 | return adt 115 | 116 | 117 | def test_simple(): 118 | code = """ 119 | #[version = "0.0.5"] 120 | def @simple[A, B](%f: fn(A) -> B, %xs: A) -> B { 121 | %f(%xs) 122 | } 123 | def @main(%l: Tensor[(5, 5), float32]) -> Tensor[(5, 5), float32] { 124 | %0 = fn[A](%x: A) -> A { 125 | %x 126 | }; 127 | @simple(%0, %l) 128 | } 129 | """ 130 | mod = tvm.parser.fromtext(code) 131 | defunc_mod = defunctionalized(mod) 132 | 133 | input = np.random.rand(5, 5).astype("float32") 134 | 135 | ex = relay.create_executor("debug", mod=mod) 136 | defunc_ex = relay.create_executor("debug", mod=defunc_mod) 137 | 138 | out = ex.evaluate()(input) 139 | defunc_out = defunc_ex.evaluate()(input) 140 | 141 | np.testing.assert_equal(out.asnumpy(), defunc_out.asnumpy()) 142 | 143 | 144 | def test_global_recursion(): 145 | code = """ 146 | #[version = "0.0.5"] 147 | type List[A] { 148 | Cons(A, List[A]), 149 | Nil, 150 | } 151 | def @id[A](%x: A) -> A { 152 | %x 153 | } 154 | def @map[A, B](%f: fn(A) -> B, %xs: List[A]) -> List[B] { 155 | match (%xs) { 156 | Cons(%x, %rest) => Cons(%f(%x), @map(%f, %rest)), 157 | Nil => Nil, 158 | } 159 | } 160 | def @main(%l: List[float32]) -> List[float32] { 161 | @map(@id, %l) 162 | } 163 | """ 164 | mod = tvm.parser.fromtext(code) 165 | defunc_mod = defunctionalized(mod) 166 | 167 | input = np.random.rand(10).astype("float32") 168 | 169 | ex = relay.create_executor("debug", mod=mod) 170 | defunc_ex = relay.create_executor("debug", mod=defunc_mod) 171 | 172 | out = ex.evaluate(mod["main"])(to_adt_list(mod, input)) 173 | defunc_out = defunc_ex.evaluate()(to_adt_list(defunc_mod, input)) 174 | 175 | np.testing.assert_array_equal(to_list(mod, out), to_list(defunc_mod, defunc_out)) 176 | 177 | 178 | def test_recursive_datatype(): 179 | # CPS will create recursive datatype 180 | 
code = """ 181 | #[version = "0.0.5"] 182 | type List[A] { 183 | Cons(A, List[A]), 184 | Nil, 185 | } 186 | def @sum(%f: fn(int32) -> int32, %k: List[int32]) -> int32 { 187 | match (%k) { 188 | Cons(%x, %rest) => %0 = fn(%n) { 189 | %x + %f(%n) 190 | }; 191 | @sum(%0, %rest), 192 | Nil => %f(0), 193 | } 194 | } 195 | def @id[A](%x: A) -> A { 196 | %x 197 | } 198 | def @main(%l: List[int32]) -> int32 { 199 | @sum(@id, %l) 200 | } 201 | """ 202 | mod = tvm.parser.fromtext(code) 203 | defunc_mod = defunctionalized(mod) 204 | 205 | input = np.random.randint(1, 100, 10) 206 | 207 | ex = relay.create_executor("debug", mod=mod) 208 | defunc_ex = relay.create_executor("debug", mod=defunc_mod) 209 | 210 | out = ex.evaluate(mod["main"])(to_adt_list(mod, input)) 211 | defunc_out = defunc_ex.evaluate()(to_adt_list(defunc_mod, input)) 212 | 213 | tvm.testing.assert_allclose(out.asnumpy(), defunc_out.asnumpy()) -------------------------------------------------------------------------------- /TVMfuzz/getAST.py: -------------------------------------------------------------------------------- 1 | import ast 2 | import os 3 | from TVMfuzz.colors import * 4 | from TVMfuzz.analyzeSyntax import dealWithStatement, dealWithImport 5 | from TVMfuzz.ASTutils import * 6 | import random 7 | from TVMfuzz.elements import * 8 | import copy 9 | 10 | class NodeTransformer(ast.NodeTransformer): 11 | 12 | def visit_ClassDef(self, node): 13 | helperFuncDef[node.name] = node 14 | funcDefs.append(node) 15 | 16 | def visit_FunctionDef(self, FunctionDef, func=None): 17 | 18 | copy = False 19 | if FunctionDef.args.args: 20 | copy = True 21 | else: 22 | for ele in FunctionDef.body: 23 | if isinstance(ele, ast.Return): 24 | copy = True 25 | break 26 | if FunctionDef.name not in forbiddenFuncDef: 27 | helperFuncDef[FunctionDef.name] = FunctionDef 28 | tp = () 29 | if func and func in helperStatDef_local: 30 | tp = helperStatDef_local[func] 31 | funcDefParents[FunctionDef] = tuple(helperStatDef_global) + \ 32 | tuple(funcDefs) + \ 33 | tp 34 | funcDefs.append(FunctionDef) 35 | if copy: 36 | functionDefNames.add(FunctionDef.name) 37 | 38 | elif not random.randint(0, 14): 39 | 40 | funcID = int(os.getenv('funcID')) 41 | funcID += 1 42 | os.environ['isFunc'] = 'True' 43 | os.environ['funcID'] = str(funcID) 44 | function_body = FunctionDef.body 45 | for function_element in function_body: 46 | 47 | if isinstance(function_element, ast.Assign): 48 | AssignNode(function_element, func=FunctionDef) 49 | elif isinstance(function_element, ast.Expr): 50 | self.visit_Expr(function_element) 51 | elif isinstance(function_element, ast.With): 52 | self.visit_With(function_element, func=FunctionDef) 53 | elif isinstance(function_element, ast.FunctionDef): 54 | self.visit_FunctionDef(function_element, func=FunctionDef) 55 | elif isinstance(function_element, ast.ClassDef): 56 | self.visit_ClassDef(function_element) 57 | 58 | def visit_WithItems(self, With, surround=None, indent=0): 59 | 60 | items = With.items 61 | param = pWith() 62 | for item in items: 63 | 64 | param1 = None 65 | param2 = None 66 | if hasattr(item, 'context_expr'): 67 | param1 = recognizeMultiAssignment(item.context_expr, indent=indent) 68 | if isinstance(param1, pFunc): 69 | randomname = varNameGenerator(varnamesRead) 70 | pfunc = pFunc(funcName=param1.funcName, 71 | params=param1.params, 72 | suffix=param1.suffix, 73 | Type='function', 74 | restricted=param1.restricted, 75 | indent=indent) 76 | if pfunc.restricted: 77 | pfunc.add_Type('restrictedFunc') 78 | 
pfunc.add_surround(surround) 79 | pfunc.add_child(param) 80 | vparam = pVar(randomname) 81 | param1 = copy.deepcopy(vparam) 82 | dealWithStatement(param=pfunc, varobjects=[vparam]) 83 | param.add_parent(pfunc) 84 | param1.update_varTofunc_ele(pfunc) 85 | 86 | elif isinstance(param1, pVar): 87 | pass 88 | 89 | else: 90 | raise Exception(Cyan('with context_expr\'s type is not handled: ' \ 91 | + str(param1))) 92 | if item.optional_vars: 93 | param2 = recognizeMultiAssignment(item.optional_vars, indent=indent) 94 | param.add_withitem((param1, param2)) 95 | param.add_surround(surround) 96 | param.add_indent(indent) 97 | 98 | return param 99 | 100 | def visit_WithBody(self, With, param, indent, func): 101 | for ele in With.body: 102 | if isinstance(ele, ast.Expr): 103 | self.visit_Expr(ele, param, indent+1) 104 | elif isinstance(ele, ast.With): 105 | self.visit_With(ele, param, indent+1) 106 | elif isinstance(ele, ast.Assign): 107 | AssignNode(ele, param, indent+1, func=func) 108 | 109 | def visit_With(self, With, surround=None, indent=0, func=None): 110 | param = self.visit_WithItems(With, surround=surround, indent=indent) 111 | dealWithStatement(param=param) 112 | self.visit_WithBody(With, param, indent, func) 113 | 114 | def visit_Import(self, Import): 115 | for name in Import.names: 116 | if hasattr(name, 'asname') and name.asname: 117 | dealWithImport('import', 118 | importWhat=name.name, 119 | asWhat=name.asname) 120 | else: 121 | dealWithImport('import', 122 | importWhat=name.name) 123 | 124 | def visit_ImportFrom(self, ImportFrom): 125 | for name in ImportFrom.names: 126 | if hasattr(name, 'asname') and name.asname: 127 | dealWithImport('fromImport', 128 | fromWhat=ImportFrom.module, 129 | importWhat=name.name, 130 | asWhat=name.asname) 131 | else: 132 | dealWithImport('fromImport', 133 | fromWhat=ImportFrom.module, 134 | importWhat=name.name) 135 | 136 | def visit_Assign(self, Assign): 137 | AssignNode(Assign) 138 | 139 | def visit_Expr(self, Expr, surround=None, indent=0): 140 | if isinstance(Expr.value, ast.Call): 141 | param = recognizeMultiAssignment(value=Expr.value, 142 | indent=indent, 143 | surround=surround) 144 | if not isinstance(param, pFunc): 145 | raise Exception('Type error! 
Expect pFunc but receive ' + str(type(param))) 146 | if not param.restricted: 147 | dealWithStatement(param=param) 148 | else: 149 | dealWithStatement(param=param) 150 | 151 | def visit_If(self, node): 152 | pass 153 | 154 | def visit_While(self, node): 155 | pass 156 | 157 | def visit_For(self, node): 158 | pass 159 | -------------------------------------------------------------------------------- /tests/test_pass_to_a_normal_form.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import te 4 | from tvm import relay 5 | from tvm.relay.analysis import detect_feature 6 | from tvm.relay import op, create_executor, transform 7 | from tvm.relay.prelude import Prelude 8 | from tvm.relay.testing import count 9 | from tvm.relay.analysis import Feature 10 | 11 | 12 | def run_opt_pass(expr, passes): 13 | passes = passes if isinstance(passes, list) else [passes] 14 | mod = tvm.IRModule.from_expr(expr) 15 | seq = tvm.transform.Sequential(passes) 16 | with tvm.transform.PassContext(opt_level=3): 17 | mod = seq(mod) 18 | entry = mod["main"] 19 | return entry if isinstance(expr, relay.Function) else entry.body 20 | 21 | 22 | def check_eval(expr, expected_result, mod=None, rtol=1e-07): 23 | ctx = tvm.context("llvm", 0) 24 | intrp = create_executor(mod=mod, ctx=ctx, target="llvm") 25 | 26 | result = intrp.evaluate(expr) 27 | np.testing.assert_allclose(result.asnumpy(), expected_result, rtol=rtol) 28 | 29 | 30 | def test_explicit_bound(): 31 | x = relay.const(1) 32 | y = op.add(x, x) 33 | z = op.add(y, y) 34 | f = relay.Function([], op.add(z, z)) 35 | assert not Feature.fLet in detect_feature(f) 36 | anf = run_opt_pass(f, transform.ToANormalForm()) 37 | assert Feature.fLet in detect_feature(anf) 38 | check_eval(f(), 8.0) 39 | check_eval(anf(), 8.0) 40 | 41 | 42 | # test that the construction order does not matter, 43 | # and is instead ordered by the scope and by post-dfs ordering. 
44 | def test_order(): 45 | z = relay.const(3) 46 | y = relay.const(2) 47 | x = relay.const(1) 48 | val = x + y * z 49 | check_eval(val, 7.0) 50 | anf = run_opt_pass(val, [transform.ToANormalForm(), transform.InferType()]) 51 | a = relay.Var("a", relay.IncompleteType()) 52 | b = relay.Var("b", relay.IncompleteType()) 53 | c = relay.Var("c", relay.IncompleteType()) 54 | d = relay.Var("d", relay.IncompleteType()) 55 | e = relay.Var("e", relay.IncompleteType()) 56 | expected_output = e 57 | expected_output = relay.Let(e, a + d, expected_output) 58 | expected_output = relay.Let(d, b * c, expected_output) 59 | expected_output = relay.Let(c, z, expected_output) 60 | expected_output = relay.Let(b, y, expected_output) 61 | expected_output = relay.Let(a, x, expected_output) 62 | expected_output = run_opt_pass(expected_output, transform.InferType()) 63 | assert tvm.ir.structural_equal(anf, expected_output) 64 | 65 | 66 | def test_if(): 67 | cond = relay.const(True) 68 | x = relay.If(cond, relay.const(2), relay.const(3)) 69 | anf = run_opt_pass(x, [transform.ToANormalForm(), transform.InferType()]) 70 | a = relay.Var("a", relay.IncompleteType()) 71 | b = relay.Var("b", relay.IncompleteType()) 72 | c = relay.Var("c", relay.IncompleteType()) 73 | d = relay.Var("d", relay.IncompleteType()) 74 | true_branch = relay.Let(a, relay.const(2), a) 75 | false_branch = relay.Let(b, relay.const(3), b) 76 | expected_output = relay.If(c, true_branch, false_branch) 77 | expected_output = relay.Let(d, expected_output, d) 78 | expected_output = relay.Let(c, cond, expected_output) 79 | expected_output = run_opt_pass(expected_output, transform.InferType()) 80 | assert tvm.ir.structural_equal(anf, expected_output) 81 | 82 | 83 | # make sure we don't infinite loop. 84 | # it is too large so we won't check for the exact program. 
85 | def test_recursion(): 86 | """ 87 | Program: 88 | let f(n: i32) -> i32 = { 89 | m = (n * 2) 90 | if (n == 0) { 91 | return m; 92 | } else { 93 | return m + f(n - 1); 94 | } 95 | } 96 | f(5); 97 | """ 98 | mod = tvm.IRModule() 99 | i64 = relay.TensorType((), "int64") 100 | f = relay.GlobalVar("f") 101 | n = relay.Var("n", i64) 102 | m = n * relay.const(2, "int64") 103 | funcbody = relay.If( 104 | relay.equal(n, relay.const(0, "int64")), m, m + f(n - relay.const(1, "int64")) 105 | ) 106 | value = relay.Function([n], funcbody, i64, []) 107 | mod[f] = value 108 | check_eval(f(relay.const(5, "int64")), 30.0, mod=mod) 109 | old_f = mod[f] 110 | mod = transform.ToANormalForm()(mod) 111 | f = mod[f] 112 | check_eval(f(relay.const(5, "int64")), 30.0, mod=mod) 113 | 114 | 115 | def test_ref(): 116 | i = relay.Var("i") 117 | iv = relay.Var("iv") 118 | u = relay.Var("u") 119 | uv = relay.Var("uv") 120 | body = relay.add(iv, uv) 121 | body = relay.Let(uv, relay.RefRead(i), body) 122 | body = relay.Let(u, relay.RefWrite(i, relay.const(2)), body) 123 | body = relay.Let(iv, relay.RefRead(i), body) 124 | body = relay.Let(i, relay.RefCreate(relay.const(1)), body) 125 | check_eval(body, 3) 126 | opt_body = run_opt_pass(body, transform.ToANormalForm()) 127 | check_eval(opt_body, 3) 128 | 129 | 130 | def test_nat_add(): 131 | mod = tvm.IRModule() 132 | p = Prelude(mod) 133 | # add_nat_definitions(p) 134 | nat = p.nat 135 | add = p.add 136 | s = p.s 137 | z = p.z 138 | ctx = tvm.context("llvm", 0) 139 | intrp = create_executor(mod=mod, ctx=ctx, target="llvm") 140 | assert mod[add].checked_type == relay.FuncType([nat(), nat()], nat()) 141 | assert count(p, intrp.evaluate(add(s(z()), s(z())))) == 2 142 | expr = add(s(z()), s(z())) 143 | f = relay.GlobalVar("f") 144 | mod[f] = relay.Function([], expr) 145 | mod = transform.ToANormalForm()(mod) 146 | expr = mod["f"] 147 | assert count(p, intrp.evaluate(expr.body)) == 2 148 | assert Feature.fLet in detect_feature(mod[add]) 149 | 150 | 151 | def test_let(): 152 | x = relay.Var("x") 153 | y = relay.Var("y") 154 | d = relay.const(4.0, "float32") 155 | body = relay.Let(y, x, x + y) 156 | body = relay.Let(x, d, body) 157 | check_eval(body, 8) 158 | opt_body = run_opt_pass(body, transform.ToANormalForm()) 159 | check_eval(opt_body, 8) 160 | 161 | 162 | def test_function(): 163 | t = relay.TensorType((), "float32") 164 | x = relay.Var("x", t) 165 | f = relay.Function([x], x + x) 166 | d = relay.const(4.0, "float32") 167 | anf_f = run_opt_pass(f, transform.ToANormalForm()) 168 | assert isinstance(anf_f, relay.Function) 169 | check_eval(f(d), 8) 170 | check_eval(anf_f(d), 8) 171 | 172 | 173 | def test_gradient_if(): 174 | x = relay.var("a", shape=(1, 16)) 175 | y = relay.var("y", shape=(1, 16)) 176 | cond = relay.var("cond", shape=(), dtype="uint1") 177 | net = relay.If(cond, x, x) 178 | net = relay.add(x, net) 179 | net = relay.Function([cond, x, y], net) 180 | mod = tvm.IRModule.from_expr(net) 181 | mod = relay.transform.ToANormalForm()(mod) 182 | mod["main"] = relay.transform.gradient(mod["main"], mode="higher_order") 183 | mod = relay.transform.ToANormalForm()(mod) -------------------------------------------------------------------------------- /tests/test_pass_context_analysis.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | 4 | import tvm 5 | from tvm import relay 6 | from tvm.relay import expr as _expr 7 | from tvm.relay.analysis import context_analysis 8 | 9 | 10 | def 
test_device_copy(): 11 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 12 | return 13 | 14 | mod = tvm.IRModule() 15 | x = relay.var("x", shape=(2, 3)) 16 | copy = relay.op.device_copy(x, tvm.cpu(), tvm.gpu()) 17 | out = copy + relay.const(np.random.rand(2, 3)) 18 | glb_var = relay.GlobalVar("main") 19 | mod[glb_var] = relay.Function([x], out) 20 | ca = context_analysis(mod, tvm.cpu()) 21 | 22 | cpu_dev = tvm.cpu().device_type 23 | gpu_dev = tvm.gpu().device_type 24 | for expr, dev in ca.items(): 25 | if isinstance(expr, _expr.Call): 26 | assert dev[0].value == gpu_dev 27 | elif isinstance(expr, _expr.Var): 28 | assert dev[0].value == cpu_dev 29 | elif isinstance(expr, _expr.Constant): 30 | assert dev[0].value == gpu_dev 31 | 32 | 33 | def test_shape_func(): 34 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 35 | return 36 | 37 | mod = tvm.IRModule() 38 | data_shape = (relay.Any(),) 39 | x = relay.var("x", shape=data_shape) 40 | y = relay.op.vm.shape_of(x) 41 | z = relay.nn.relu(y) 42 | p0 = relay.var("p0", shape=data_shape) 43 | fn = relay.Function([p0], z) 44 | out = relay.var("out", shape=(1,), dtype="int64") 45 | ins = relay.Tuple([y]) 46 | outs = relay.Tuple([out]) 47 | is_inputs = [False] 48 | shape_func = relay.op.vm.shape_func(fn, ins, outs, is_inputs) 49 | mod["main"] = relay.Function([x, out], shape_func) 50 | ca = context_analysis(mod, tvm.gpu()) 51 | main = mod["main"] 52 | 53 | cpu_dev = tvm.cpu().device_type 54 | gpu_dev = tvm.gpu().device_type 55 | assert main.params[0] in ca and ca[main.params[0]][0].value == gpu_dev 56 | # The output of shape func should be on cpu. 57 | assert main.params[1] in ca and ca[main.params[1]][0].value == cpu_dev 58 | # shape func is the body and it should be on cpu 59 | assert main.body in ca and ca[main.body][0].value == cpu_dev 60 | 61 | 62 | def test_vm_shape_of(): 63 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 64 | return 65 | 66 | mod = tvm.IRModule() 67 | data_shape = (relay.Any(),) 68 | x = relay.var("x", shape=data_shape) 69 | y = relay.op.vm.shape_of(x) 70 | mod["main"] = relay.Function([x], y) 71 | ca = context_analysis(mod, tvm.gpu()) 72 | main = mod["main"] 73 | 74 | cpu_dev = tvm.cpu().device_type 75 | gpu_dev = tvm.gpu().device_type 76 | assert main.params[0] in ca and ca[main.params[0]][0].value == gpu_dev 77 | assert main.body in ca and ca[main.body][0].value == cpu_dev 78 | 79 | 80 | def test_alloc_storage(): 81 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 82 | return 83 | 84 | mod = tvm.IRModule() 85 | mod.import_from_std("core.rly") 86 | size = relay.Var("size", relay.scalar_type("int64")) 87 | alignment = relay.Var("alignment", relay.scalar_type("int64")) 88 | # allocate a chunk of memory on gpu. 
89 | sto = relay.op.memory.alloc_storage(size, alignment, tvm.gpu()) 90 | mod["main"] = relay.Function([size, alignment], sto) 91 | ca = context_analysis(mod, tvm.gpu()) 92 | main = mod["main"] 93 | body = main.body 94 | 95 | cpu_dev = tvm.cpu().device_type 96 | gpu_dev = tvm.gpu().device_type 97 | # Inputs are unified with alloc storage inputs which are on cpu 98 | assert main.params[0] in ca and ca[main.params[0]][0].value == cpu_dev 99 | assert main.params[1] in ca and ca[main.params[1]][0].value == cpu_dev 100 | 101 | assert isinstance(body, relay.Call) and len(body.args) == 2 102 | # size of alloc_storage is on cpu 103 | assert body.args[0] in ca and ca[body.args[0]][0].value == cpu_dev 104 | # alignment of alloc_storage is on cpu 105 | assert body.args[1] in ca and ca[body.args[1]][0].value == cpu_dev 106 | # alloc_storage is on gpu as specified 107 | assert body in ca and ca[body][0].value == gpu_dev 108 | 109 | 110 | def test_alloc_tensor(): 111 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 112 | return 113 | 114 | mod = tvm.IRModule() 115 | mod.import_from_std("core.rly") 116 | sto_type = relay.TypeCall(mod.get_global_type_var("Storage"), []) 117 | sto = relay.Var("x", sto_type) 118 | sh = relay.const(np.array([3, 2]), dtype="int64") 119 | at = relay.op.memory.alloc_tensor(sto, relay.const(0, dtype="int64"), sh) 120 | mod["main"] = relay.Function([sto], at) 121 | ca = context_analysis(mod, tvm.gpu()) 122 | main = mod["main"] 123 | body = main.body 124 | 125 | cpu_dev = tvm.cpu().device_type 126 | gpu_dev = tvm.gpu().device_type 127 | # Input of the function falls back to the default device gpu 128 | assert main.params[0] in ca and ca[main.params[0]][0].value == gpu_dev 129 | 130 | assert isinstance(body, relay.Call) and len(body.args) == 3 131 | # storage of alloc_tensor falls back to the default device gpu 132 | assert body.args[0] in ca and ca[body.args[0]][0].value == gpu_dev 133 | # shape of alloc_tensor is on cpu 134 | assert body.args[1] in ca and ca[body.args[1]][0].value == cpu_dev 135 | # alloc_tensor keeps the same device context as storage which is on gpu 136 | assert body in ca and ca[body][0].value == gpu_dev 137 | 138 | 139 | def test_vm_reshape_tensor(): 140 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 141 | return 142 | 143 | x = relay.var("x", shape=(2, 8), dtype="float32") 144 | shape = relay.const([-1, 4, 2], dtype="int64") 145 | y = relay.op.vm.reshape_tensor(x, shape, [2, 4, 2]) 146 | mod = tvm.IRModule() 147 | mod["main"] = relay.Function([x], y) 148 | ca = context_analysis(mod, tvm.gpu()) 149 | main = mod["main"] 150 | body = main.body 151 | 152 | cpu_dev = tvm.cpu().device_type 153 | gpu_dev = tvm.gpu().device_type 154 | # Input of the function falls back to the default device gpu 155 | assert main.params[0] in ca and ca[main.params[0]][0].value == gpu_dev 156 | 157 | # data of reshape_tensor falls back to the default device gpu 158 | assert body.args[0] in ca and ca[body.args[0]][0].value == gpu_dev 159 | # shape of reshape_tensor is on cpu 160 | assert body.args[1] in ca and ca[body.args[1]][0].value == cpu_dev 161 | # reshape_tensor sits on the same device as the data 162 | assert body in ca and ca[body][0].value == gpu_dev 163 | 164 | 165 | def test_dynamic_input(): 166 | if not tvm.testing.device_enabled("cuda") or not tvm.gpu(0).exist: 167 | return 168 | 169 | mod = tvm.IRModule() 170 | data_shape = (relay.Any(), relay.Any()) 171 | x0 = relay.var("x0", shape=data_shape) 172 | x1 = relay.var("x1", 
shape=data_shape) 173 | mod["main"] = relay.Function([x0, x1], x0 + x1) 174 | 175 | compiler = relay.vm.VMCompiler() 176 | mod, _ = compiler.optimize(mod, target="cuda") 177 | ca = context_analysis(mod, tvm.cpu()) 178 | main = mod["main"] 179 | 180 | gpu_dev = tvm.gpu().device_type 181 | assert main.params[0] in ca and ca[main.params[0]][0].value == gpu_dev 182 | assert main.params[1] in ca and ca[main.params[1]][0].value == gpu_dev 183 | assert main.body in ca and ca[main.body][0].value == gpu_dev -------------------------------------------------------------------------------- /tests/test_external_codegen.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import numpy as np 4 | 5 | import tvm 6 | from tvm import te 7 | import tvm.relay.testing 8 | import tvm.relay.transform 9 | from tvm import relay 10 | from tvm import runtime 11 | 12 | def set_external_func_attr(func, compiler, ext_symbol): 13 | func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1)) 14 | func = func.with_attr("Compiler", compiler) 15 | func = func.with_attr("global_symbol", ext_symbol) 16 | return func 17 | 18 | 19 | def test_multi_node_subgraph(): 20 | x = relay.var("x", shape=(10, 10)) 21 | w0 = relay.var("w0", shape=(10, 10)) 22 | w1 = relay.var("w1", shape=(10, 10)) 23 | w2 = relay.var("w2", shape=(10, 10)) 24 | w3 = relay.var("w3", shape=(10, 10)) 25 | w4 = relay.var("w4", shape=(10, 10)) 26 | w5 = relay.var("w5", shape=(10, 10)) 27 | w6 = relay.var("w6", shape=(10, 10)) 28 | w7 = relay.var("w7", shape=(10, 10)) 29 | 30 | # subgraph0 31 | x0 = relay.var("x0", shape=(10, 10)) 32 | w00 = relay.var("w00", shape=(10, 10)) 33 | w01 = relay.var("w01", shape=(10, 10)) 34 | w02 = relay.var("w02", shape=(10, 10)) 35 | z00 = relay.add(x0, w00) 36 | p00 = relay.subtract(z00, w01) 37 | q00 = relay.multiply(p00, w02) 38 | subgraph0 = relay.Function([x0, w00, w01, w02], q00) 39 | subgraph0 = set_external_func_attr(subgraph0, "ccompiler", "ccompiler_0") 40 | call0 = relay.Call(subgraph0, [x, w0, w1, w2]) 41 | 42 | # subgraph1 43 | x1 = relay.var("x1", shape=(10, 10)) 44 | w10 = relay.var("w10", shape=(10, 10)) 45 | w11 = relay.var("w11", shape=(10, 10)) 46 | w12 = relay.var("w12", shape=(10, 10)) 47 | z10 = relay.add(x1, w10) 48 | p10 = relay.subtract(z10, w11) 49 | q10 = relay.multiply(p10, w12) 50 | subgraph1 = relay.Function([x1, w10, w11, w12], q10) 51 | subgraph1 = set_external_func_attr(subgraph1, "ccompiler", "ccompiler_1") 52 | call1 = relay.Call(subgraph1, [x, w3, w4, w5]) 53 | 54 | # Other parts on TVM 55 | z2 = relay.add(x, w6) 56 | q2 = relay.subtract(z2, w7) 57 | 58 | r = relay.concatenate((call0, call1, q2), axis=0) 59 | f = relay.Function([x, w0, w1, w2, w3, w4, w5, w6, w7], r) 60 | mod = tvm.IRModule() 61 | mod["main"] = f 62 | mod = relay.transform.InferType()(mod) 63 | 64 | x_data = np.random.rand(10, 10).astype("float32") 65 | w_data = [] 66 | for _ in range(8): 67 | w_data.append(np.random.rand(10, 10).astype("float32")) 68 | 69 | 70 | def test_extern_gcc_single_op(): 71 | x = relay.var("x", shape=(8, 8)) 72 | y = relay.var("y", shape=(8, 8)) 73 | 74 | x0 = relay.var("x0", shape=(8, 8)) 75 | y0 = relay.var("y0", shape=(8, 8)) 76 | z = x0 + y0 77 | f = relay.Function([x0, y0], z) 78 | f = set_external_func_attr(f, "ccompiler", "ccompiler_0") 79 | call = relay.Call(f, [x, y]) 80 | mod = tvm.IRModule.from_expr(call) 81 | x_data = np.random.rand(8, 8).astype("float32") 82 | y_data = np.random.rand(8, 8).astype("float32") 83 | 84 | 85 | 86 | 
def test_extern_gcc_single_op_int(): 87 | x = relay.var("x", shape=(8, 8), dtype="int32") 88 | y = relay.var("y", shape=(8, 8), dtype="int32") 89 | 90 | x0 = relay.var("x0", shape=(8, 8), dtype="int32") 91 | y0 = relay.var("y0", shape=(8, 8), dtype="int32") 92 | z = x0 + y0 93 | f = relay.Function([x0, y0], z) 94 | f = set_external_func_attr(f, "ccompiler", "ccompiler_0") 95 | call = relay.Call(f, [x, y]) 96 | mod = tvm.IRModule.from_expr(call) 97 | x_data = np.random.rand(8, 8).astype("int32") 98 | y_data = np.random.rand(8, 8).astype("int32") 99 | 100 | 101 | 102 | def test_extern_gcc(): 103 | x = relay.var("x", shape=(2, 2)) 104 | y = relay.var("y", shape=(2, 2)) 105 | 106 | # subgraph for mul 107 | x0 = relay.var("x0", shape=(2, 2)) 108 | y0 = relay.var("y0", shape=(2, 2)) 109 | mul = x0 * y0 110 | mul = relay.Function([x0, y0], mul) 111 | mul = set_external_func_attr(mul, "ccompiler", "ccompiler_2") 112 | call_mul = relay.Call(mul, [y, y]) 113 | 114 | # subgraph for add 115 | x1 = relay.var("x1", shape=(2, 2)) 116 | y1 = relay.var("y1", shape=(2, 2)) 117 | add = x1 + y1 118 | add = relay.Function([x1, y1], add) 119 | add = set_external_func_attr(add, "ccompiler", "ccompiler_1") 120 | call_add = relay.Call(add, [x, x]) 121 | 122 | # subgraph for sub 123 | x2 = relay.var("x2", shape=(2, 2)) 124 | y2 = relay.var("y2", shape=(2, 2)) 125 | sub = x2 - y2 126 | sub = relay.Function([x2, y2], sub) 127 | sub = set_external_func_attr(sub, "ccompiler", "ccompiler_0") 128 | call_sub = relay.Call(sub, [call_mul, call_add]) 129 | mod = tvm.IRModule.from_expr(call_sub) 130 | 131 | x_data = np.random.rand(2, 2).astype("float32") 132 | y_data = np.random.rand(2, 2).astype("float32") 133 | 134 | 135 | 136 | def test_extern_dnnl(): 137 | # if not tvm.get_global_func("relay.ext.dnnl", True): 138 | # print("skip because DNNL codegen is not available") 139 | # return 140 | 141 | dtype = "float32" 142 | ishape = (1, 32, 14, 14) 143 | w1shape = (32, 1, 3, 3) 144 | data0 = relay.var("data0", shape=(ishape), dtype=dtype) 145 | weight0 = relay.var("weight0", shape=(w1shape), dtype=dtype) 146 | 147 | data1 = relay.var("data0", shape=(ishape), dtype=dtype) 148 | weight1 = relay.var("weight0", shape=(w1shape), dtype=dtype) 149 | weight2 = relay.var("weight1", shape=(w1shape), dtype=dtype) 150 | depthwise_conv2d_1 = relay.nn.conv2d( 151 | data1, weight1, kernel_size=(3, 3), padding=(1, 1), groups=32 152 | ) 153 | depthwise_conv2d_2 = relay.nn.conv2d( 154 | depthwise_conv2d_1, weight2, kernel_size=(3, 3), padding=(1, 1), groups=32 155 | ) 156 | out = relay.add(depthwise_conv2d_1, depthwise_conv2d_2) 157 | 158 | f = relay.Function([data1, weight1, weight2], out) 159 | ref_mod = tvm.IRModule() 160 | ref_mod["main"] = f 161 | 162 | f = set_external_func_attr(f, "dnnl", "dnnl_0") 163 | call = relay.Call(f, [data0, weight0, weight0]) 164 | mod = tvm.IRModule.from_expr(call) 165 | 166 | i_data = np.random.uniform(0, 1, ishape).astype(dtype) 167 | w_data = np.random.uniform(0, 1, w1shape).astype(dtype) 168 | 169 | ref_ex = relay.create_executor("graph", mod=ref_mod, ctx=tvm.cpu()) 170 | ref_res = ref_ex.evaluate()(i_data, w_data, w_data) 171 | 172 | def test_extern_dnnl_const(): 173 | dtype = "float32" 174 | ishape = (1, 32, 14, 14) 175 | w1shape = (32, 1, 3, 3) 176 | data0 = relay.var("data0", shape=(ishape), dtype=dtype) 177 | w_data = np.random.uniform(0, 1, w1shape).astype(dtype) 178 | 179 | data1 = relay.var("data0", shape=(ishape), dtype=dtype) 180 | weight1 = relay.const(w_data, dtype=dtype) 181 | weight2 = 
relay.const(w_data, dtype=dtype) 182 | depthwise_conv2d_1 = relay.nn.conv2d( 183 | data1, weight1, kernel_size=(3, 3), padding=(1, 1), groups=32 184 | ) 185 | depthwise_conv2d_2 = relay.nn.conv2d( 186 | depthwise_conv2d_1, weight2, kernel_size=(3, 3), padding=(1, 1), groups=32 187 | ) 188 | out = relay.add(depthwise_conv2d_1, depthwise_conv2d_2) 189 | 190 | f = relay.Function([data1], out) 191 | ref_mod = tvm.IRModule() 192 | ref_mod["main"] = f 193 | 194 | f = set_external_func_attr(f, "dnnl", "dnnl_0") 195 | call = relay.Call(f, [data0]) 196 | mod = tvm.IRModule.from_expr(call) 197 | 198 | i_data = np.random.uniform(0, 1, ishape).astype(dtype) 199 | 200 | ref_ex = relay.create_executor("graph", mod=ref_mod, ctx=tvm.cpu()) 201 | ref_res = ref_ex.evaluate()(i_data) 202 | 203 | -------------------------------------------------------------------------------- /tests/test_dynamic_op_level3.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | import tvm 4 | from tvm import te 5 | from tvm import relay 6 | from tvm.relay import create_executor, transform 7 | from tvm.relay.testing import check_grad, run_infer_type 8 | import tvm.testing 9 | 10 | 11 | def verify_func(func, data, ref_res): 12 | assert isinstance(data, list) 13 | for target, ctx in tvm.testing.enabled_targets(): 14 | for kind in ["vm", "debug"]: 15 | mod = tvm.ir.IRModule.from_expr(func) 16 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 17 | op_res = intrp.evaluate()(*data) 18 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-5) 19 | relay.backend.compile_engine.get().clear() 20 | 21 | 22 | @tvm.testing.uses_gpu 23 | def test_dyn_reshape(): 24 | def verify_reshape(shape, newshape, oshape): 25 | x = relay.var("x", relay.TensorType(shape, "float32")) 26 | y = relay.var("y", relay.TensorType((len(newshape),), "int64")) 27 | z = relay.reshape(x, y) 28 | 29 | func = relay.Function([x, y], z) 30 | x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32") 31 | x_data = np.ones(shape).astype("float32") 32 | ref_res = np.reshape(x_data, oshape) 33 | check_grad( 34 | run_infer_type(func), 35 | inputs=[x_data, np.array(newshape).astype("int64")], 36 | test_inputs=[x_data], 37 | eps=1e-3, 38 | ) 39 | verify_func(func, [x_data, np.array(newshape).astype("int64")], ref_res) 40 | 41 | verify_reshape((2, 3, 4), (8, 3), (8, 3)) 42 | verify_reshape((4, 7), (2, 7, 2), (2, 7, 2)) 43 | verify_reshape((2, 3, 4), (4, 0, 2), (4, 3, 2)) 44 | verify_reshape((2, 3, 4), (2, 0, 0), (2, 3, 4)) 45 | verify_reshape((2, 3, 4), (0, -1), (2, 12)) 46 | verify_reshape((2, 3, 4), (-1, 0), (8, 3)) 47 | verify_reshape((2, 3, 4), (-3, 4), (6, 4)) 48 | verify_reshape((2, 3, 4, 5), (-3, -3), (6, 20)) 49 | verify_reshape((2, 3, 4), (0, -3), (2, 12)) 50 | 51 | 52 | @tvm.testing.uses_gpu 53 | def test_dyn_shape_reshape(): 54 | def verify_reshape(shape, newshape, oshape): 55 | x = relay.var("x", relay.TensorType(shape, "float32")) 56 | y = relay.var("y", relay.TensorType(newshape, "float32")) 57 | z = relay.reshape(x, relay.shape_of(y)) 58 | 59 | func = relay.Function([x, y], z) 60 | x_data = np.random.uniform(low=-1, high=1, size=shape).astype("float32") 61 | y_data = np.random.uniform(low=-1, high=1, size=newshape).astype("float32") 62 | ref_res = np.reshape(x_data, oshape) 63 | check_grad(run_infer_type(func), inputs=[x_data, y_data], eps=1e-3) 64 | verify_func(func, [x_data, y_data], ref_res) 65 | 66 | verify_reshape((2, 3, 4), (8, 3), (8, 3)) 
67 | verify_reshape((4, 7), (2, 7, 2), (2, 7, 2)) 68 | 69 | 70 | @tvm.testing.uses_gpu 71 | def test_dyn_tile(): 72 | def verify_tile(dshape, reps): 73 | x = relay.var("x", relay.TensorType(dshape, "float32")) 74 | r = relay.var("reps", relay.TensorType((len(reps),), "float32")) 75 | z = relay.tile(x, r) 76 | 77 | func = relay.Function([x, r], z) 78 | x_data = np.random.uniform(low=-1, high=1, size=dshape).astype("float32") 79 | ref_res = np.tile(x_data, reps=reps) 80 | reps_data = np.array(reps).astype("float32") 81 | verify_func(func, [x_data, np.array(reps).astype("float32")], ref_res) 82 | 83 | verify_tile((2, 3, 4), (3, 2, 1)) 84 | verify_tile((2, 3, 4), (1, 2)) 85 | verify_tile((2, 3), (3, 2, 1)) 86 | 87 | 88 | @tvm.testing.uses_gpu 89 | def test_dyn_zeros_ones(): 90 | def verify_zeros_ones(shape, dtype): 91 | for op, ref in [(relay.zeros, np.zeros), (relay.ones, np.ones)]: 92 | rank = len(shape) 93 | dyn_shape = relay.Var("shape", relay.ty.TensorType((rank,), "int64")) 94 | y = op(dyn_shape, dtype) 95 | yy = run_infer_type(y) 96 | assert yy.checked_type == relay.ty.TensorType((relay.Any(),) * rank, dtype) 97 | 98 | func = relay.Function([dyn_shape], y) 99 | ref_res = ref(shape, dtype) 100 | verify_func(func, [np.array(shape).astype("int64")], ref_res.astype("int64")) 101 | 102 | verify_zeros_ones((1, 3), "int64") 103 | verify_zeros_ones((8, 9, 1, 2), "float32") 104 | 105 | 106 | @tvm.testing.uses_gpu 107 | def test_dyn_full(): 108 | def verify_full(fill_value, src_shape, dtype): 109 | x = relay.var("x", relay.scalar_type(dtype)) 110 | rank = len(src_shape) 111 | dyn_src_shape = relay.var("dyn_scr_shape", relay.ty.TensorType((rank,), "int64")) 112 | z = relay.full(x, dyn_src_shape, dtype) 113 | func = relay.Function([x, dyn_src_shape], z) 114 | ref_res = np.full(src_shape, fill_value).astype(dtype) 115 | 116 | verify_func( 117 | func, [np.array(fill_value).astype(dtype), np.array(src_shape).astype("int64")], ref_res 118 | ) 119 | 120 | verify_full(4, (1, 3, 4, 4), "int32") 121 | verify_full(4, (1, 3, 4, 4), "int64") 122 | verify_full(4.0, (2, 50), "float32") 123 | 124 | 125 | @tvm.testing.uses_gpu 126 | def test_dyn_sparse_to_dense(): 127 | def verify_sparse_to_dense(sparse_indices, sparse_values, default_value, output_shape, xpected): 128 | sparse_indices_data = np.array(sparse_indices) 129 | sparse_values_data = np.array(sparse_values) 130 | default_value_data = np.array(default_value) 131 | output_shape_data = np.array(output_shape) 132 | 133 | a = relay.var( 134 | "a", relay.TensorType(sparse_indices_data.shape, str(sparse_indices_data.dtype)) 135 | ) 136 | b = relay.var( 137 | "b", relay.TensorType(sparse_values_data.shape, str(sparse_values_data.dtype)) 138 | ) 139 | output_shape_var = relay.var( 140 | "output_shape", relay.TensorType(output_shape_data.shape, str(output_shape_data.dtype)) 141 | ) 142 | if default_value is None: 143 | args = [a, b, output_shape_var] 144 | d = relay.sparse_to_dense(a, output_shape_var, b) 145 | else: 146 | c = relay.var( 147 | "c", relay.TensorType(default_value_data.shape, str(default_value_data.dtype)) 148 | ) 149 | args = [a, b, c, output_shape_var] 150 | d = relay.sparse_to_dense(a, output_shape_var, b, c) 151 | 152 | zz = run_infer_type(d) 153 | assert len(zz.checked_type.shape) == len(output_shape) 154 | 155 | func = relay.Function(args, d) 156 | 157 | if default_value is None: 158 | arguments = [sparse_indices_data, sparse_values_data, output_shape_data] 159 | else: 160 | arguments = [ 161 | sparse_indices_data, 162 | sparse_values_data, 
163 | default_value_data, 164 | output_shape_data, 165 | ] 166 | 167 | verify_func(func, arguments, xpected) 168 | 169 | verify_sparse_to_dense(1, 3, 0, [5], [0, 3, 0, 0, 0]) # scalar 170 | verify_sparse_to_dense([0, 1, 4], [3, 3, 3], 0, [5], [3, 3, 0, 0, 3]) # vector 171 | verify_sparse_to_dense( 172 | [[0, 0], [1, 2]], [1, 2], 0, [3, 4], [[1, 0, 0, 0], [0, 0, 2, 0], [0, 0, 0, 0]] 173 | ) # nXd 174 | verify_sparse_to_dense( 175 | [[0, 0, 0], [1, 2, 3]], 176 | [1, 2], 177 | 4, 178 | [2, 3, 4], 179 | [[[1, 4, 4, 4], [4, 4, 4, 4], [4, 4, 4, 4]], [[4, 4, 4, 4], [4, 4, 4, 4], [4, 4, 4, 2]]], 180 | ) # nXd 181 | verify_sparse_to_dense( 182 | [0, 1, 4], [3.1, 3.1, 3.1], 3.5, [5], [3.1, 3.1, 3.5, 3.5, 3.1] 183 | ) # floats 184 | verify_sparse_to_dense(1, 3, None, [5], [0, 3, 0, 0, 0]) # default value not specified 185 | -------------------------------------------------------------------------------- /dataset/drawing_script.R: -------------------------------------------------------------------------------- 1 | library(ggplot2) 2 | library(readxl) 3 | library(plyr) 4 | library(patchwork) 5 | library(corrplot) 6 | 7 | bugs <- read_excel(path = 'dataset.xlsx', sheet="ALL") 8 | bugs$project=factor(bugs$project, levels = c("TVM","Glow","nGraph")) 9 | #bugs$`root causes` = factor(bugs$`root causes`, levels = c("Crash","Wrong Code","Bad Performance","Hang","Build Failure","Unreported")) 10 | tvm_bugs = subset(bugs, project=="TVM") 11 | glow_bugs = subset(bugs, project=="Glow") 12 | ngraph_bugs = subset(bugs, project=="nGraph") 13 | bugs_number = as.vector(table(bugs$project)) 14 | bugs_number = c() 15 | bugs_number[1:6] <- 318 # tvm 16 | bugs_number[7:12] <- 145 #glow 17 | bugs_number[13:17] <- 140 # nGraph no hang 18 | 19 | sym_palette <- c("#F4ACB7","#FFD3DB","#DDEEF4","#ACD9E9","#74ADD1","#4575B4") 20 | sym_palette <- rev(sym_palette) 21 | 22 | my_axis_size <- 9.5 23 | my_title_size <- 12 24 | my_legend_size <- 9.5 25 | my_axis_color="black" 26 | 27 | 28 | # plot root causes_stack 29 | 30 | 31 | # order in my order 32 | causes_data <- table(bugs$`root causes`) 33 | causes_data <- sort(causes_data, decreasing = F) 34 | causes_name <- as.data.frame(causes_data)$Var1 35 | causes_name <- as.array(causes_name) 36 | 37 | bugs$`root causes` = factor(bugs$`root causes`, levels = causes_name) 38 | 39 | causes_palette <- c("#D2E6F1","#9ECAE1","#3182BD") #blue 40 | bugs$project=factor(bugs$project, levels = c("nGraph","Glow","TVM")) 41 | p <- ggplot(bugs, aes(y=`root causes`, fill = project)) 42 | p + geom_bar( alpha = 0.9) + 43 | xlab("") + 44 | ylab("") + 45 | scale_fill_manual(values = causes_palette, guide = guide_legend(reverse=TRUE))+ 46 | theme(panel.background = element_rect(fill = "#efefef") 47 | ,axis.text = element_text( face="bold", 48 | size = my_axis_size, 49 | colour = my_axis_color), 50 | legend.position = "bottom" 51 | ,legend.title=element_blank() 52 | ,legend.text = element_text(size = my_legend_size, 53 | face = "bold"), 54 | )+ 55 | geom_text(aes(label=..count..), 56 | stat = 'count', 57 | position = position_stack(vjust = 0.5)) 58 | 59 | ggsave("causes.pdf") 60 | 61 | 62 | ################ plot stage_pie ########################## 63 | stage_palette <- c("#DEEBF7","#9ECAE1","#3182BD") #blue !!! 
64 | 65 | 66 | pie_function <- function(input_data, title_){ 67 | temp <- subset(input_data, !is.na(input_data$stages)) 68 | total_ <- sum(table(temp$stages)) # bug number in each stage 69 | pie1 <- ggplot(temp, aes(x=1 ,fill=stages)) 70 | pie1 <- pie1 + geom_bar() + 71 | coord_polar(theta = "y")+ 72 | scale_fill_manual(values = stage_palette, 73 | breaks = c("Model Loading","High-Level IR Transformation","Low-Level IR Transformation"), 74 | )+ 75 | labs(title=title_)+ 76 | theme_void() + 77 | theme(axis.text=element_blank(), 78 | axis.ticks=element_blank(), 79 | plot.title = element_text(hjust = 0.5, 80 | vjust = -60, 81 | face = "bold", 82 | size = my_title_size, 83 | colour = my_axis_color), 84 | legend.title=element_blank(), 85 | legend.text = element_text(size = my_legend_size, face = "bold"), 86 | legend.position = "right") + 87 | # guides(fill=FALSE)+ 88 | geom_text(aes(label=paste(round(..count.. / total_,4)*100,"%", sep = ''),x=1.1), 89 | stat = 'count', size=4.8, position = position_stack(vjust =0.5 )) 90 | 91 | } 92 | 93 | pie1 <- pie_function(tvm_bugs,"TVM") 94 | pie2 <- pie_function(glow_bugs, "Glow") 95 | pie3 <- pie_function(ngraph_bugs, "nGraph") 96 | 97 | p <- pie1 +pie2 +pie3 +plot_layout(ncol=3, guides = 'collect') 98 | p 99 | ggsave("stage.pdf", height=4) 100 | 101 | 102 | 103 | # ################### correlation ########################### 104 | 105 | bugs <- read_excel(path = 'bug_github2 (16).xlsx', sheet="All in one") 106 | tvm_bugs = subset(bugs, project=="TVM") 107 | glow_bugs = subset(bugs, project=="Glow") 108 | ngraph_bugs = subset(bugs, project=="nGraph") 109 | bugs_number = as.vector(table(bugs$project)) 110 | bugs_number = c() 111 | bugs_number[1:12] <- as.vector(table(bugs$project))[1] # 12 categories of root causes, including: other 112 | bugs_number[13:24] <- as.vector(table(bugs$project))[2] 113 | bugs_number[25:36] <- as.vector(table(bugs$project))[3] 114 | 115 | row_name <- c("TVM", "Glow", "nGraph") 116 | ########################## root causes ########################## 117 | a <- table(bugs$`root causes`, bugs$project) 118 | a <- a/bugs_number 119 | glow_ <- a[1:12] 120 | ngraph_ <- a[13:24] 121 | tvm_ <- a[25:36] 122 | 123 | cor_tvm_glow_causes <- cor(glow_,tvm_, method = "spearman") 124 | cor_tvm_ngraph_causes <- cor(tvm_,ngraph_,method = "spearman") 125 | cor_glow_ngraph_causes <- cor(ngraph_,glow_,method = "spearman") 126 | 127 | res_causes <- c(1, cor_tvm_glow_causes, cor_tvm_ngraph_causes, 128 | cor_tvm_glow_causes, 1, cor_glow_ngraph_causes, 129 | cor_tvm_ngraph_causes, cor_glow_ngraph_causes, 1) 130 | 131 | my_color <- c() 132 | my_color[1:16] <- "grey" 133 | #my_color[17:20] <- c("#E6F5D0","#B8E186","#7FBC41","#4D9221") # green!!!!!! 134 | my_color[17:20]<- c("white","#9ECAE1","#438DC3","#3182BD") #blue !!! 
135 | 136 | 137 | col_my=colorRampPalette(my_color) 138 | 139 | res_cor <- matrix(data=res_causes, nrow = 3, ncol = 3, dimnames = list(row_name, row_name)) 140 | p1 <- corrplot::corrplot(corr=res_cor, 141 | method = "color", 142 | order = "AOE", 143 | col = col_my(400), 144 | addCoef.col = "black", 145 | type = "lower", 146 | diag=T, 147 | bg="white", 148 | outline=TRUE, 149 | rect.col="blue", 150 | number.cex=1.1, 151 | 152 | tl.srt = 0, 153 | tl.offset=1, 154 | tl.pos = "ld", 155 | tl.col="black", 156 | tl.cex = 1.2, 157 | 158 | cl.lim = c(.7, 1), 159 | cl.pos = "b", # legend in the bottom 160 | cl.ratio = .3, # legend width 161 | #cl.align.text = "l", 162 | cl.length=4, 163 | cl.cex = 1,) 164 | p1 165 | 166 | ###################### symptoms #################################### 167 | b <- table(bugs$`symptoms`, bugs$project) 168 | # b <- b/bugs_number 169 | glow_symptoms <- b[1:6] 170 | ngraph_symptoms <- b[7:12] 171 | tvm_symptoms <- b[13:18] 172 | 173 | 174 | cor_tvm_glow_sym <- cor(glow_symptoms,tvm_symptoms, method = "spearman") 175 | cor_tvm_ngraph_sym <- cor(tvm_symptoms,ngraph_symptoms,method = "spearman") 176 | cor_glow_ngraph_sym <- cor(ngraph_symptoms,glow_symptoms,method = "spearman") 177 | 178 | res <- c(1, cor_tvm_glow_sym, cor_tvm_ngraph_sym, 179 | cor_tvm_glow_sym, 1, cor_glow_ngraph_sym, 180 | cor_tvm_ngraph_sym, cor_glow_ngraph_sym, 1) 181 | 182 | 183 | res_cor <- matrix(data=res, nrow = 3, ncol = 3, dimnames = list(row_name, row_name)) 184 | 185 | 186 | p1 <- corrplot::corrplot(corr=res_cor, 187 | method = "color", 188 | order = "AOE", 189 | col = col_my(400), 190 | addCoef.col = "black", 191 | type = "lower", 192 | diag=T, 193 | bg="white", 194 | outline=TRUE, 195 | rect.col="blue", 196 | number.cex=1.1, 197 | 198 | tl.srt = 0, 199 | tl.offset=1, 200 | tl.pos = "ld", 201 | tl.col="black", 202 | tl.cex = 1.2, 203 | 204 | cl.lim = c(.7, 1), 205 | cl.pos = "b", # legend in the right 206 | cl.ratio = .3, # legend width 207 | #cl.align.text = "l", 208 | 209 | cl.length=4, 210 | cl.cex = 1,) 211 | p1 212 | 213 | 214 | -------------------------------------------------------------------------------- /tests/test_op_qnn_mul.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | import numpy as np 4 | from tvm import relay 5 | from tvm.contrib import graph_runtime 6 | import tvm.topi.testing 7 | 8 | # "unquantize" a quantized tensor 9 | def recover(data, scale, zp): 10 | return scale * (np.asarray(data) - zp) 11 | 12 | 13 | def generate_golden_output(x_recovered, y_recovered, scale, zp): 14 | mul = x_recovered * y_recovered 15 | output = np.around(mul / scale + zp) 16 | 17 | q_min = np.iinfo(np.uint8).min 18 | q_max = np.iinfo(np.uint8).max 19 | return np.clip(output, q_min, q_max) 20 | 21 | 22 | def test_tflite_same_io_qnn_params(): 23 | data_dtype = "uint8" 24 | 25 | lhs_scale = rhs_scale = output_scale = 0.00784314 26 | lhs_zero_point = rhs_zero_point = output_zero_point = 127 27 | 28 | x = relay.var("x", shape=(1, 4), dtype=data_dtype) 29 | y = relay.var("y", shape=(1, 4), dtype=data_dtype) 30 | z = relay.qnn.op.mul( 31 | lhs=x, 32 | rhs=y, 33 | lhs_scale=relay.const(lhs_scale, "float32"), 34 | lhs_zero_point=relay.const(lhs_zero_point, "int32"), 35 | rhs_scale=relay.const(rhs_scale, "float32"), 36 | rhs_zero_point=relay.const(rhs_zero_point, "int32"), 37 | output_scale=relay.const(output_scale, "float32"), 38 | output_zero_point=relay.const(output_zero_point, "int32"), 39 | ) 40 | 41 | func = 
relay.Function([x, y], z) 42 | mod = tvm.IRModule.from_expr(func) 43 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 44 | func = mod["main"] 45 | 46 | x_datas = [ 47 | np.array((1, 153, 2, 178)).reshape((1, 4)), 48 | np.array((25, 1, 178, 216)).reshape((1, 4)), 49 | np.array((25, 153, 1, 165)).reshape((1, 4)), 50 | ] 51 | y_datas = [ 52 | np.array((204, 178, 1, 8)).reshape((1, 4)), 53 | np.array((204, 178, 191, 1)).reshape((1, 4)), 54 | np.array((204, 178, 1, 191)).reshape((1, 4)), 55 | ] 56 | 57 | for i in range(0, 3): 58 | x_data = x_datas[i] 59 | y_data = y_datas[i] 60 | 61 | x_rec = recover(x_data, lhs_scale, lhs_zero_point) 62 | y_rec = recover(y_data, rhs_scale, rhs_zero_point) 63 | golden = generate_golden_output(x_rec, y_rec, output_scale, output_zero_point) 64 | 65 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 66 | op_res = intrp.evaluate(func)(x_data, y_data) 67 | 68 | np.testing.assert_equal(op_res.asnumpy(), np.uint8(golden)) 69 | 70 | 71 | def test_tflite_different_io_qnn_params(): 72 | data_dtype = "uint8" 73 | 74 | lhs_scale = 0.0156863 75 | lhs_zero_point = 127 76 | rhs_scale = 0.0117647 77 | rhs_zero_point = 85 78 | output_scale = 0.0235294 79 | output_zero_point = 128 80 | 81 | x = relay.var("x", shape=(1, 4), dtype=data_dtype) 82 | y = relay.var("y", shape=(1, 4), dtype=data_dtype) 83 | z = relay.qnn.op.mul( 84 | lhs=x, 85 | rhs=y, 86 | lhs_scale=relay.const(lhs_scale, "float32"), 87 | lhs_zero_point=relay.const(lhs_zero_point, "int32"), 88 | rhs_scale=relay.const(rhs_scale, "float32"), 89 | rhs_zero_point=relay.const(rhs_zero_point, "int32"), 90 | output_scale=relay.const(output_scale, "float32"), 91 | output_zero_point=relay.const(output_zero_point, "int32"), 92 | ) 93 | 94 | func = relay.Function([x, y], z) 95 | mod = tvm.IRModule.from_expr(func) 96 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 97 | func = mod["main"] 98 | 99 | x_datas = [ 100 | np.array((76, 140, 153, 172)).reshape((1, 4)), 101 | np.array((133, 140, 146, 153)).reshape((1, 4)), 102 | np.array((76, 140, 172, 146)).reshape((1, 4)), 103 | ] 104 | y_datas = [ 105 | np.array((136, 119, 128, 17)).reshape((1, 4)), 106 | np.array((136, 119, 111, 94)).reshape((1, 4)), 107 | np.array((136, 119, 17, 128)).reshape((1, 4)), 108 | ] 109 | 110 | for i in range(0, 3): 111 | x_data = x_datas[i] 112 | y_data = y_datas[i] 113 | 114 | x_rec = recover(x_data, lhs_scale, lhs_zero_point) 115 | y_rec = recover(y_data, rhs_scale, rhs_zero_point) 116 | golden = generate_golden_output(x_rec, y_rec, output_scale, output_zero_point) 117 | 118 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 119 | op_res = intrp.evaluate(func)(x_data, y_data) 120 | np.testing.assert_equal(op_res.asnumpy(), np.uint8(golden)) 121 | 122 | 123 | def test_saturation(): 124 | # Same params 125 | data_dtype = "uint8" 126 | lhs_scale = rhs_scale = output_scale = 0.125 127 | lhs_zero_point = rhs_zero_point = output_zero_point = 0 128 | 129 | x = relay.var("x", shape=(1, 4), dtype=data_dtype) 130 | y = relay.var("y", shape=(1, 4), dtype=data_dtype) 131 | z = relay.qnn.op.mul( 132 | lhs=x, 133 | rhs=y, 134 | lhs_scale=relay.const(lhs_scale, "float32"), 135 | lhs_zero_point=relay.const(lhs_zero_point, "int32"), 136 | rhs_scale=relay.const(rhs_scale, "float32"), 137 | rhs_zero_point=relay.const(rhs_zero_point, "int32"), 138 | output_scale=relay.const(output_scale, "float32"), 139 | output_zero_point=relay.const(output_zero_point, "int32"), 140 | ) 141 | 142 | func = relay.Function([x, y], z) 143 | mod = 
tvm.IRModule.from_expr(func) 144 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 145 | func = mod["main"] 146 | 147 | x_data = np.array((255, 1, 1, 0)).reshape((1, 4)) 148 | y_data = np.array((255, 255, 128, 0)).reshape((1, 4)) 149 | 150 | x_rec = recover(x_data, lhs_scale, lhs_zero_point) 151 | y_rec = recover(y_data, rhs_scale, rhs_zero_point) 152 | 153 | golden = generate_golden_output(x_rec, y_rec, output_scale, output_zero_point) 154 | 155 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 156 | op_res = intrp.evaluate(func)(x_data, y_data) 157 | np.testing.assert_equal(op_res.asnumpy(), np.uint8(golden)) 158 | 159 | # Same params, different scale 160 | 161 | lhs_scale = rhs_scale = 0.125 162 | output_scale = 0.25 163 | 164 | z = relay.qnn.op.mul( 165 | lhs=x, 166 | rhs=y, 167 | lhs_scale=relay.const(lhs_scale, "float32"), 168 | lhs_zero_point=relay.const(lhs_zero_point, "int32"), 169 | rhs_scale=relay.const(rhs_scale, "float32"), 170 | rhs_zero_point=relay.const(rhs_zero_point, "int32"), 171 | output_scale=relay.const(output_scale, "float32"), 172 | output_zero_point=relay.const(output_zero_point, "int32"), 173 | ) 174 | 175 | func = relay.Function([x, y], z) 176 | mod = tvm.IRModule.from_expr(func) 177 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 178 | func = mod["main"] 179 | 180 | x_data = np.array((255, 1, 1, 0)).reshape((1, 4)) 181 | y_data = np.array((255, 255, 127, 0)).reshape((1, 4)) 182 | 183 | x_rec = recover(x_data, lhs_scale, lhs_zero_point) 184 | y_rec = recover(y_data, rhs_scale, rhs_zero_point) 185 | 186 | golden = generate_golden_output(x_rec, y_rec, output_scale, output_zero_point) 187 | 188 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 189 | op_res = intrp.evaluate(func)(x_data, y_data) 190 | np.testing.assert_equal(op_res.asnumpy(), np.uint8(golden)) 191 | 192 | # All params different 193 | 194 | lhs_scale = 0.5 195 | rhs_scale = 0.25 196 | output_scale = 0.125 197 | 198 | z = relay.qnn.op.mul( 199 | lhs=x, 200 | rhs=y, 201 | lhs_scale=relay.const(lhs_scale, "float32"), 202 | lhs_zero_point=relay.const(lhs_zero_point, "int32"), 203 | rhs_scale=relay.const(rhs_scale, "float32"), 204 | rhs_zero_point=relay.const(rhs_zero_point, "int32"), 205 | output_scale=relay.const(output_scale, "float32"), 206 | output_zero_point=relay.const(output_zero_point, "int32"), 207 | ) 208 | 209 | func = relay.Function([x, y], z) 210 | mod = tvm.IRModule.from_expr(func) 211 | mod = relay.qnn.transform.CanonicalizeOps()(mod) 212 | func = mod["main"] 213 | 214 | x_data = np.array((255, 0, 1, 0)).reshape((1, 4)) 215 | y_data = np.array((0, 128, 64, 0)).reshape((1, 4)) 216 | 217 | x_rec = recover(x_data, lhs_scale, lhs_zero_point) 218 | y_rec = recover(y_data, rhs_scale, rhs_zero_point) 219 | 220 | golden = generate_golden_output(x_rec, y_rec, output_scale, output_zero_point) 221 | 222 | intrp = relay.create_executor("graph", ctx=tvm.cpu(0), target="llvm") 223 | op_res = intrp.evaluate(func)(x_data, y_data) 224 | np.testing.assert_equal(op_res.asnumpy(), np.uint8(golden)) 225 | 226 | -------------------------------------------------------------------------------- /tests/test_pass_check_kind.py: -------------------------------------------------------------------------------- 1 | import tvm 2 | from tvm import te 3 | from tvm import relay 4 | from tvm.relay.analysis import check_kind 5 | import pytest 6 | 7 | def test_typevar_kind(): 8 | # returns the same kind 9 | tp1 = relay.TypeVar("tp1", relay.TypeKind.Type) 10 | tp2 = 
relay.TypeVar("tp2", relay.TypeKind.ShapeVar) 11 | tp3 = relay.TypeVar("tp3", relay.TypeKind.Constraint) 12 | 13 | assert check_kind(tp1) == relay.TypeKind.Type 14 | assert check_kind(tp2) == relay.TypeKind.ShapeVar 15 | assert check_kind(tp3) == relay.TypeKind.Constraint 16 | 17 | 18 | def test_tuple_kind(): 19 | # only contain type kinds 20 | tp = relay.TypeVar("tp", relay.TypeKind.Type) 21 | tt = relay.TensorType(tvm.runtime.convert([1, 2, 3]), "float32") 22 | tf = relay.FuncType( 23 | tvm.runtime.convert([]), tt, tvm.runtime.convert([]), tvm.runtime.convert([]) 24 | ) 25 | fields = tvm.runtime.convert([tp, tf, tt]) 26 | 27 | tup_ty = relay.TupleType(fields) 28 | assert check_kind(tup_ty) == relay.TypeKind.Type 29 | 30 | 31 | def test_func_kind(): 32 | # only contain type kinds 33 | tp1 = relay.TypeVar("tp1", relay.TypeKind.Type) 34 | tp2 = relay.TypeVar("tp2", relay.TypeKind.Type) 35 | 36 | shape = tvm.runtime.convert([1, 2, 3]) 37 | dtype = "float32" 38 | tensor_type = relay.TensorType(shape, dtype) 39 | 40 | tr = relay.TypeRelation(None, tvm.runtime.convert([tensor_type, tp1]), 1, None) 41 | 42 | type_params = tvm.runtime.convert([tp1, tp2]) 43 | type_constraints = tvm.runtime.convert([tr]) 44 | arg_types = tvm.runtime.convert([tp1, tensor_type]) 45 | ret_type = relay.TupleType(tvm.runtime.convert([tp2, tensor_type])) 46 | 47 | tf = relay.FuncType(arg_types, ret_type, type_params, type_constraints) 48 | assert check_kind(tf) == relay.TypeKind.Type 49 | 50 | 51 | def test_ref_kind(): 52 | # only contain type kinds 53 | tt = relay.TensorType(tvm.runtime.convert([1, 2, 3]), "float32") 54 | ft = relay.FuncType( 55 | tvm.runtime.convert([]), tt, tvm.runtime.convert([]), tvm.runtime.convert([]) 56 | ) 57 | 58 | rt1 = relay.RefType(tt) 59 | assert check_kind(rt1) == relay.TypeKind.Type 60 | rt2 = relay.RefType(ft) 61 | assert check_kind(rt2) == relay.TypeKind.Type 62 | rt3 = relay.RefType(relay.TupleType([rt1, rt2])) 63 | assert check_kind(rt3) == relay.TypeKind.Type 64 | 65 | 66 | def test_relation_kind(): 67 | # only have type kinds for arguments 68 | tp = relay.TypeVar("tp", relay.TypeKind.Type) 69 | tt = relay.TensorType(tvm.runtime.convert([1, 2, 3]), "float32") 70 | tf = relay.FuncType( 71 | tvm.runtime.convert([]), tt, tvm.runtime.convert([]), tvm.runtime.convert([]) 72 | ) 73 | args = tvm.runtime.convert([tf, tt, tp]) 74 | 75 | tr = relay.TypeRelation(None, args, 2, None) 76 | assert check_kind(tr) == relay.TypeKind.Constraint 77 | 78 | 79 | def test_global_typevar_kind(): 80 | v1 = relay.GlobalTypeVar("gtv1", relay.TypeKind.AdtHandle) 81 | v2 = relay.GlobalTypeVar("gtv2", relay.TypeKind.Type) 82 | 83 | assert check_kind(v1) == relay.TypeKind.AdtHandle 84 | assert check_kind(v2) == relay.TypeKind.Type 85 | 86 | 87 | def test_typecall_kind(): 88 | gtv = relay.GlobalTypeVar("gtv") 89 | 90 | mod = tvm.IRModule() 91 | data = relay.TypeData(gtv, [], []) 92 | mod[gtv] = data 93 | empty_call = relay.TypeCall(gtv, []) 94 | assert check_kind(empty_call, mod) == relay.TypeKind.Type 95 | 96 | new_mod = tvm.IRModule() 97 | tv = relay.TypeVar("tv") 98 | new_data = relay.TypeData(gtv, [tv], []) 99 | new_mod[gtv] = new_data 100 | call = relay.TypeCall(gtv, [relay.TupleType([])]) 101 | assert check_kind(call, new_mod) == relay.TypeKind.Type 102 | 103 | 104 | @pytest.mark.xfail(raises=tvm.error.TVMError) 105 | def test_invalid_tuple_kind(): 106 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 107 | tp2 = relay.TypeVar("tp2", relay.TypeKind.BaseType) 108 | tp3 = relay.TypeVar("tp3", 
relay.TypeKind.Constraint) 109 | fields = tvm.runtime.convert([tp1, tp2, tp3]) 110 | 111 | tup_ty = relay.TupleType(fields) 112 | check_kind(tup_ty) 113 | 114 | 115 | @pytest.mark.xfail(raises=tvm.error.TVMError) 116 | def test_invalid_func_kind(): 117 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 118 | tp2 = relay.TypeVar("tp2", relay.TypeKind.BaseType) 119 | tp3 = relay.TypeVar("tp3", relay.TypeKind.Constraint) 120 | 121 | type_params = tvm.runtime.convert([tp1, tp2, tp3]) 122 | type_constraints = tvm.runtime.convert([]) 123 | arg_types = tvm.runtime.convert([tp1, tp2]) 124 | ret_type = tp3 125 | 126 | tf = relay.FuncType(arg_types, ret_type, type_params, type_constraints) 127 | check_kind(tf) 128 | 129 | 130 | @pytest.mark.xfail(raises=tvm.error.TVMError) 131 | def test_invalid_ref_kind(): 132 | tp = relay.TypeVar("tp", relay.TypeKind.ShapeVar) 133 | rt = relay.RefType(tp) 134 | check_kind(rt) 135 | 136 | 137 | @pytest.mark.xfail(raises=tvm.error.TVMError) 138 | def test_invalid_relation_kind(): 139 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 140 | tp2 = relay.TypeVar("tp2", relay.TypeKind.BaseType) 141 | tp3 = relay.TypeVar("tp3", relay.TypeKind.Constraint) 142 | args = tvm.runtime.convert([tp1, tp2, tp3]) 143 | 144 | func = tvm.ir.EnvFunc.get("tvm.relay.type_relation.Broadcast") 145 | tr = relay.TypeRelation(func, args, 2, None) 146 | check_kind(tr) 147 | 148 | 149 | @pytest.mark.xfail(raises=tvm.error.TVMError) 150 | def test_typecall_invalid_callee(): 151 | # global type var must be an ADT handle 152 | gtv = relay.GlobalTypeVar("v1", relay.TypeKind.Type) 153 | check_kind(relay.TypeCall(gtv, [])) 154 | 155 | 156 | @pytest.mark.xfail(raises=tvm.error.TVMError) 157 | def test_typecall_invalid_args(): 158 | # args must all be type kind 159 | mod = tvm.IRModule() 160 | gtv = relay.GlobalTypeVar("v1") 161 | data = relay.TypeData(gtv, [], []) 162 | mod[gtv] = data 163 | 164 | check_kind(relay.TypeCall(gtv, [data])) 165 | 166 | 167 | @pytest.mark.xfail(raises=tvm.error.TVMError) 168 | def test_typecall_invalid_num_args(): 169 | mod = tvm.IRModule() 170 | gtv = relay.GlobalTypeVar("v1") 171 | tv = relay.TypeVar("tv") 172 | data = relay.TypeData(gtv, [tv], []) 173 | mod[gtv] = data 174 | check_kind(relay.TypeCall(gtv, [])) 175 | 176 | 177 | @pytest.mark.xfail(raises=tvm.error.TVMError) 178 | def test_func_with_invalid_ret_type(): 179 | tp1 = relay.TypeVar("tp1", relay.TypeKind.Type) 180 | tp2 = relay.TypeVar("tp2", relay.TypeKind.ShapeVar) 181 | tf = relay.FuncType( 182 | tvm.runtime.convert([tp1]), tp2, tvm.runtime.convert([tp1, tp2]), tvm.runtime.convert([]) 183 | ) 184 | 185 | check_kind(tf) 186 | 187 | 188 | @pytest.mark.xfail(raises=tvm.error.TVMError) 189 | def test_func_with_invalid_arg_types(): 190 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 191 | tp2 = relay.TypeVar("tp2", relay.TypeKind.Type) 192 | tf = relay.FuncType( 193 | tvm.runtime.convert([tp1]), tp2, tvm.runtime.convert([tp1, tp2]), tvm.runtime.convert([]) 194 | ) 195 | 196 | check_kind(tf) 197 | 198 | 199 | @pytest.mark.xfail(raises=tvm.error.TVMError) 200 | def test_func_with_invalid_tuple(): 201 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 202 | 203 | ret_type = relay.TupleType(tvm.runtime.convert([tp1, tp1, tp1])) 204 | 205 | tf = relay.FuncType( 206 | tvm.runtime.convert([]), ret_type, tvm.runtime.convert([tp1]), tvm.runtime.convert([]) 207 | ) 208 | check_kind(tf) 209 | 210 | 211 | @pytest.mark.xfail(raises=tvm.error.TVMError) 212 | def test_func_with_invalid_relation(): 213 | tp1 = 
relay.TypeVar("tp1", relay.TypeKind.Type) 214 | tp2 = relay.TypeVar("tp2", relay.TypeKind.ShapeVar) 215 | tp3 = relay.TypeVar("tp3", relay.TypeKind.Constraint) 216 | 217 | func = tvm.ir.EnvFunc.get("tvm.relay.type_relation.Identity") 218 | tr = relay.TypeRelation(func, tvm.runtime.convert([tp2, tp3]), 1, None) 219 | 220 | tf = relay.FuncType( 221 | tvm.runtime.convert([tp1]), 222 | tp1, 223 | tvm.runtime.convert([tp1, tp2, tp3]), 224 | tvm.runtime.convert([tr]), 225 | ) 226 | check_kind(tf) 227 | 228 | 229 | @pytest.mark.xfail(raises=tvm.error.TVMError) 230 | def test_tuple_with_invalid_func(): 231 | tensor_type = relay.TensorType(tvm.runtime.convert([1, 2, 3]), "float32") 232 | 233 | tp1 = relay.TypeVar("tp1", relay.TypeKind.ShapeVar) 234 | tf = relay.FuncType( 235 | tvm.runtime.convert([]), tp1, tvm.runtime.convert([tp1]), tvm.runtime.convert([]) 236 | ) 237 | 238 | tup_ty = relay.TupleType(tvm.runtime.convert([tensor_type, tf])) 239 | check_kind(tup_ty) -------------------------------------------------------------------------------- /tests/test_dynamic_op_level2.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import tvm 3 | from tvm import relay 4 | from tvm import te 5 | from tvm.relay.testing import enabled_targets 6 | import random 7 | # from test_dynamic_op_level3 import verify_func 8 | import tvm.topi.testing 9 | from tvm.relay.testing import run_infer_type 10 | 11 | 12 | @tvm.testing.uses_gpu 13 | def test_dyn_upsampling_run(): 14 | def verify_upsampling(dshape, scale_h, scale_w, layout, method, align_corners=False): 15 | 16 | if layout == "NCHW": 17 | (n, c, h, w) = dshape 18 | x_data = np.random.uniform(size=(n, c, h, w)).astype("float32") 19 | 20 | elif layout == "NHWC": 21 | (n, h, w, c) = dshape 22 | x_data = np.random.uniform(size=(n, h, w, c)).astype("float32") 23 | 24 | if method == "nearest_neighbor": 25 | ref_res = tvm.topi.testing.upsampling_python(x_data, (scale_h, scale_w), layout) 26 | else: 27 | ref_res = tvm.topi.testing.bilinear_resize_python( 28 | x_data, (int(round(h * scale_h)), int(round(w * scale_w))), layout 29 | ) 30 | x = relay.Var("x", relay.TensorType(dshape, "float32")) 31 | scale_h_var = relay.var("scale_h", relay.TensorType((), "float32")) 32 | scale_w_var = relay.var("scale_h", relay.TensorType((), "float32")) 33 | 34 | z = relay.nn.upsampling( 35 | x, scale_h_var, scale_w_var, method=method, layout=layout, align_corners=align_corners 36 | ) 37 | zz = run_infer_type(z) 38 | func = relay.Function([x, scale_h_var, scale_w_var], z) 39 | 40 | for target, ctx in tvm.testing.enabled_targets(): 41 | for kind in ["vm", "debug"]: 42 | mod = tvm.ir.IRModule.from_expr(func) 43 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 44 | op_res = intrp.evaluate()( 45 | x_data, np.array(scale_h).astype("float32"), np.array(scale_w).astype("float32") 46 | ) 47 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-4, atol=1e-6) 48 | 49 | verify_upsampling((1, 16, 32, 32), 3, 2.0, "NCHW", "nearest_neighbor") 50 | verify_upsampling((1, 16, 32, 32), 5, 2.0, "NCHW", "bilinear", True) 51 | verify_upsampling((1, 16, 32, 32), 2.0, 6, "NHWC", "nearest_neighbor") 52 | verify_upsampling((1, 16, 32, 32), 2.0, 2.0, "NHWC", "bilinear", True) 53 | 54 | 55 | # tests upsampling type inference with scale_h passed in as a constant and scale_w as a variable 56 | @tvm.testing.uses_gpu 57 | def test_dyn_upsampling_infer_type_const(): 58 | n, c, h, w = te.size_var("n"), te.size_var("c"), 
te.size_var("h"), te.size_var("w") 59 | 60 | data = relay.var("data", relay.TensorType((n, c, h, w), "int8")) 61 | scale_w = relay.Var("scale_w", relay.TensorType((), "float32")) 62 | 63 | z = relay.nn.upsampling(data, 2.0, scale_w) 64 | zz = run_infer_type(z) 65 | assert zz.checked_type == relay.TensorType((n, c, relay.Any(), relay.Any()), "int8") 66 | 67 | 68 | @tvm.testing.uses_gpu 69 | def test_dyn_upsampling3d_run(): 70 | def verify_upsampling3d( 71 | dshape, scale_d, scale_h, scale_w, layout, method, coord_trans="half_pixel" 72 | ): 73 | 74 | if layout == "NCDHW": 75 | (n, c, d, h, w) = dshape 76 | x_data = np.random.uniform(size=(n, c, d, h, w)).astype("float32") 77 | 78 | elif layout == "NDHWC": 79 | (n, d, h, w, c) = dshape 80 | x_data = np.random.uniform(size=(n, d, h, w, c)).astype("float32") 81 | 82 | if method == "nearest_neighbor": 83 | ref_res = tvm.topi.testing.upsampling3d_python( 84 | x_data, (scale_d, scale_h, scale_w), layout 85 | ) 86 | else: 87 | ref_res = tvm.topi.testing.trilinear_resize3d_python( 88 | x_data, 89 | (int(round(d * scale_d)), int(round(h * scale_h)), int(round(w * scale_w))), 90 | layout, 91 | ) 92 | x = relay.Var("x", relay.TensorType(dshape, "float32")) 93 | scale_d_var = relay.var("scale_d", relay.TensorType((), "float32")) 94 | scale_h_var = relay.var("scale_h", relay.TensorType((), "float32")) 95 | scale_w_var = relay.var("scale_h", relay.TensorType((), "float32")) 96 | 97 | z = relay.nn.upsampling3d( 98 | x, 99 | scale_d_var, 100 | scale_h_var, 101 | scale_w_var, 102 | method=method, 103 | layout=layout, 104 | coordinate_transformation_mode=coord_trans, 105 | ) 106 | zz = run_infer_type(z) 107 | func = relay.Function([x, scale_d_var, scale_h_var, scale_w_var], z) 108 | 109 | for target, ctx in enabled_targets(): 110 | for kind in ["vm", "debug"]: 111 | mod = tvm.ir.IRModule.from_expr(func) 112 | intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target) 113 | op_res = intrp.evaluate()( 114 | x_data, 115 | np.array(scale_d).astype("float32"), 116 | np.array(scale_h).astype("float32"), 117 | np.array(scale_w).astype("float32"), 118 | ) 119 | tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-4, atol=1e-6) 120 | 121 | verify_upsampling3d((1, 1, 1, 1, 1), 2, 3, 4, "NCDHW", "nearest_neighbor") 122 | verify_upsampling3d((1, 8, 16, 16, 16), 2.0, 3.0, 4.0, "NCDHW", "nearest_neighbor") 123 | verify_upsampling3d((1, 8, 16, 16, 16), 2.0, 5.0, 1.0, "NCDHW", "trilinear", "align_corners") 124 | verify_upsampling3d((1, 20, 3, 4, 16), 2.0, 2.0, 2.0, "NDHWC", "nearest_neighbor") 125 | verify_upsampling3d((1, 8, 4, 16, 15), 2.0, 2.0, 2.0, "NDHWC", "trilinear", "align_corners") 126 | 127 | 128 | # tests upsampling type inference with scale_h passed in as a constant and scale_w as a variable 129 | def test_dyn_upsampling3d_infer_type_const(): 130 | n, c, d, h, w = ( 131 | te.size_var("n"), 132 | te.size_var("c"), 133 | te.size_var("d"), 134 | te.size_var("h"), 135 | te.size_var("w"), 136 | ) 137 | 138 | data = relay.var("data", relay.TensorType((n, c, d, h, w), "int8")) 139 | scale_d = relay.Var("scale_h", relay.TensorType((), "float32")) 140 | scale_w = relay.Var("scale_w", relay.TensorType((), "float32")) 141 | 142 | z = relay.nn.upsampling3d(data, scale_d, 2.0, scale_w, layout="NCDHW", method="trilinear") 143 | zz = run_infer_type(z) 144 | assert zz.checked_type == relay.TensorType( 145 | (n, c, relay.Any(), relay.Any(), relay.Any()), "int8" 146 | ) 147 | 148 | 149 | @tvm.testing.uses_gpu 150 | def test_dyn_pad(): 151 | def 
verify_pad(dshape, pad_width, pad_val, dtype): 152 | x = relay.var("x", relay.TensorType(dshape, dtype)) 153 | ndim = len(dshape) 154 | pad_width_var = relay.var("pad_width_var", relay.TensorType((ndim, 2), "int64")) 155 | pad_val_var = relay.var("pad_val_var", relay.TensorType((), dtype)) 156 | y = relay.nn.pad(x, pad_width_var, pad_val_var) 157 | yy = run_infer_type(y) 158 | 159 | assert yy.checked_type == relay.ty.TensorType((relay.Any(),) * ndim, dtype) 160 | func = relay.Function([x, pad_width_var, pad_val_var], y) 161 | data = np.random.uniform(size=dshape).astype(dtype) 162 | ref_res = np.pad(data, pad_width, "constant", constant_values=(((pad_val,) * 2),) * ndim) 163 | pad_width = np.array(pad_width).astype("int64") 164 | 165 | # verify_func(func, [data, pad_width, np.array(pad_val).astype(dtype)], ref_res) 166 | 167 | def verify_pad_default_fill(dshape, pad_width, dtype): 168 | x = relay.var("x", relay.TensorType(dshape, dtype)) 169 | ndim = len(dshape) 170 | pad_width_var = relay.var("pad_width_var", relay.TensorType((ndim, 2), "int64")) 171 | y = relay.nn.pad(x, pad_width_var) 172 | yy = run_infer_type(y) 173 | 174 | assert yy.checked_type == relay.ty.TensorType((relay.Any(),) * ndim, dtype) 175 | func = relay.Function([x, pad_width_var], y) 176 | data = np.random.uniform(size=dshape).astype(dtype) 177 | ref_res = np.pad(data, pad_width) 178 | pad_width = np.array(pad_width).astype("int64") 179 | 180 | # verify_func(func, [data, pad_width], ref_res) 181 | 182 | verify_pad((4, 10, 7, 7), ((1, 1), (2, 2), (3, 3), (4, 4)), 2.0, "int32") 183 | verify_pad((2, 7), ((1, 4), (2, 2)), 4.0, "float64") 184 | verify_pad_default_fill((4, 10, 7, 7), ((1, 1), (2, 2), (3, 3), (4, 4)), "float64") 185 | verify_pad_default_fill((2, 7), ((1, 4), (2, 2)), "int32") 186 | 187 | 188 | if __name__ == "__main__": 189 | test_dyn_pad() 190 | test_dyn_upsampling_infer_type_const() 191 | test_dyn_upsampling_run() 192 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | 3 | Version 2.0, January 2004 4 | 5 | http://www.apache.org/licenses/ 6 | 7 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 8 | 9 | 1. Definitions. 10 | 11 | "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 16 | 17 | "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 18 | 19 | "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
20 | 21 | "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 22 | 23 | "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 24 | 25 | "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 26 | 27 | "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 28 | 29 | "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 30 | 31 | 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 32 | 33 | 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 34 | 35 | 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: 36 | 37 | You must give any other recipients of the Work or Derivative Works a copy of this License; and 38 | You must cause any modified files to carry prominent notices stating that You changed the files; and 39 | You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and 40 | If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 41 | 42 | You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 43 | 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 44 | 45 | 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 46 | 47 | 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 48 | 49 | 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 50 | 51 | 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 52 | 53 | END OF TERMS AND CONDITIONS 54 | 55 | Copyright 2021 The Authors: Qingchao Shen, Haoyang Ma, Junjie Chen, Yongqiang Tian, Shing-Chi Cheung, Xiang Che 56 | 57 | Licensed under the Apache License, Version 2.0 (the "License"); 58 | you may not use this file except in compliance with the License. 59 | You may obtain a copy of the License at 60 | 61 | http://www.apache.org/licenses/LICENSE-2.0 62 | 63 | Unless required by applicable law or agreed to in writing, software 64 | distributed under the License is distributed on an "AS IS" BASIS, 65 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 66 | See the License for the specific language governing permissions and 67 | limitations under the License. -------------------------------------------------------------------------------- /TVMfuzz/LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | 3 | Version 2.0, January 2004 4 | 5 | http://www.apache.org/licenses/ 6 | 7 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 8 | 9 | 1. Definitions. 10 | 11 | "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. 16 | 17 | "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 18 | 19 | "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
20 | 21 | "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 22 | 23 | "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 24 | 25 | "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 26 | 27 | "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 28 | 29 | "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 30 | 31 | 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 32 | 33 | 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 34 | 35 | 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: 36 | 37 | You must give any other recipients of the Work or Derivative Works a copy of this License; and 38 | You must cause any modified files to carry prominent notices stating that You changed the files; and 39 | You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and 40 | If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 41 | 42 | You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 43 | 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 44 | 45 | 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 46 | 47 | 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 48 | 49 | 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 50 | 51 | 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 52 | 53 | END OF TERMS AND CONDITIONS 54 | 55 | Copyright 2021 The Authors: Qingchao Shen, Haoyang Ma, Junjie Chen, Yongqiang Tian, Shing-Chi Cheung, Xiang Che 56 | 57 | Licensed under the Apache License, Version 2.0 (the "License"); 58 | you may not use this file except in compliance with the License. 59 | You may obtain a copy of the License at 60 | 61 | http://www.apache.org/licenses/LICENSE-2.0 62 | 63 | Unless required by applicable law or agreed to in writing, software 64 | distributed under the License is distributed on an "AS IS" BASIS, 65 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 66 | See the License for the specific language governing permissions and 67 | limitations under the License. --------------------------------------------------------------------------------