├── .gitignore
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── MODEL_CARD.md
├── README.md
├── download.sh
├── example.py
├── llama
│   ├── __init__.py
│   ├── generation.py
│   ├── model.py
│   └── tokenizer.py
├── requirements.txt
└── setup.py

/.gitignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/.gitignore
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/CODE_OF_CONDUCT.md
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/CONTRIBUTING.md
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/LICENSE
--------------------------------------------------------------------------------
/MODEL_CARD.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/MODEL_CARD.md
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/README.md
--------------------------------------------------------------------------------
/download.sh:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/download.sh
--------------------------------------------------------------------------------
/example.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/example.py
--------------------------------------------------------------------------------
/llama/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/llama/__init__.py
--------------------------------------------------------------------------------
/llama/generation.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/llama/generation.py
--------------------------------------------------------------------------------
/llama/model.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/llama/model.py
--------------------------------------------------------------------------------
/llama/tokenizer.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/llama/tokenizer.py
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch
fairscale
fire
sentencepiece
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/b0kch01/llama-cpu/HEAD/setup.py
--------------------------------------------------------------------------------
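
Note: only requirements.txt has its contents inlined above; every other entry is a link to the raw file. As a quick orientation, the sketch below shows how the sentencepiece dependency from requirements.txt is typically wrapped by a LLaMA-style tokenizer such as llama/tokenizer.py. This is a minimal illustration under stated assumptions: the class name LlamaTokenizer, the BOS/EOS handling, and the tokenizer.model path are hypothetical, not this fork's actual code (see the linked llama/tokenizer.py for that).

# Hypothetical sketch of a SentencePiece-backed tokenizer wrapper.
# Class name, method signatures, and BOS/EOS policy are illustrative assumptions;
# llama/tokenizer.py (linked above) is the fork's real implementation.
from sentencepiece import SentencePieceProcessor


class LlamaTokenizer:
    def __init__(self, model_path: str):
        # tokenizer.model is the SentencePiece model shipped with the LLaMA weights
        self.sp = SentencePieceProcessor(model_file=model_path)
        self.bos_id = self.sp.bos_id()
        self.eos_id = self.sp.eos_id()
        self.pad_id = self.sp.pad_id()

    def encode(self, text: str, bos: bool = True, eos: bool = False) -> list[int]:
        ids = self.sp.encode(text)  # SentencePiece returns a list of token ids
        if bos:
            ids = [self.bos_id] + ids
        if eos:
            ids = ids + [self.eos_id]
        return ids

    def decode(self, ids: list[int]) -> str:
        return self.sp.decode(ids)


if __name__ == "__main__":
    tok = LlamaTokenizer("tokenizer.model")  # path is illustrative
    print(tok.encode("Hello, world", bos=True, eos=False))

The tokenizer.model file referenced here is distributed alongside the LLaMA checkpoints fetched by download.sh; the torch, fairscale, and fire dependencies are used by the model and the example.py command-line entry point rather than by the tokenizer.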