├── .gitignore
└── readme.md

/.gitignore:
.vscode/

/readme.md:
# WNNX Models

wnnx is a new model format that exports models directly from PyTorch without going through ONNX. Models can be run with the **wnn** inference framework, whose runtime library is also incredibly simple and insanely small, just a few KB.

Despite its tiny size, **wnn** achieves extremely fast speed and low memory consumption on both x86 and ARM platforms.

**wnnx_models** contains some popular models exported from PyTorch. They can be viewed with:

```
pip install wnetron
wnetron .
```

Once wnnx support is merged into the official `netron` repo, you will be able to view *wnnx* models with `netron` as well.


## Results

- `SimpleFC`:

![](https://raw.githubusercontent.com/jinfagang/public_images/master/20220622155647.png)

- `MobilenetV3`:

![](https://raw.githubusercontent.com/jinfagang/public_images/master/20220622155829.png)

- `DynamicLayernorm`:

![](https://raw.githubusercontent.com/jinfagang/public_images/master/20220622160247.png)


As you can see, wnnx can fully replace ONNX as an option for model inference. It has several strengths compared with ONNX:

- Serialized with FlatBuffers instead of the much heavier Protobuf;
- Easy to convert from a trained PyTorch model (see the sketch below);
- No **glue** ops at all, so the whole graph stays extremely clean;
- Can be efficiently inferenced via **wnn**.
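Since wnnx exports straight from a `torch.nn.Module`, the conversion step should amount to a single call. Below is a minimal sketch of what that could look like; the `wnnx` import and the `wnnx.export` signature are assumptions modeled on `torch.onnx.export`, not the confirmed wnnx API, so check the wnn documentation for the real entry point.

```
import torch
import torch.nn as nn

# Hypothetical import: the actual wnnx exporter package name may differ.
import wnnx


class SimpleFC(nn.Module):
    """A toy fully-connected model, standing in for the SimpleFC result above."""

    def __init__(self, in_dim=128, out_dim=10):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.fc(x)


model = SimpleFC().eval()
dummy_input = torch.randn(1, 128)

# Assumed export call: traces the model on the dummy input and writes a
# .wnnx file directly, with no intermediate ONNX graph.
wnnx.export(model, dummy_input, "simple_fc.wnnx")
```

The exported `simple_fc.wnnx` file is what `wnetron .` would then pick up for visualization, as shown in the Results section above.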