├── .gitignore
├── .streamlit
│   └── config.toml
├── LICENSE
├── README.md
├── app
│   ├── __init__.py
│   └── app.py
├── launcher.sh
├── models
│   └── config.json
├── requirements.txt
├── smoothquant
│   ├── modeling_llama.py
│   ├── requirements.txt
│   └── run_generation.py
└── static
    └── llama2-chat-demo.gif

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/.gitignore

--------------------------------------------------------------------------------
/.streamlit/config.toml:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/.streamlit/config.toml

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/LICENSE

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/README.md

--------------------------------------------------------------------------------
/app/__init__.py:
--------------------------------------------------------------------------------


--------------------------------------------------------------------------------
/app/app.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/app/app.py

--------------------------------------------------------------------------------
/launcher.sh:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/launcher.sh

--------------------------------------------------------------------------------
/models/config.json:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/models/config.json

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/requirements.txt

--------------------------------------------------------------------------------
/smoothquant/modeling_llama.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/smoothquant/modeling_llama.py

--------------------------------------------------------------------------------
/smoothquant/requirements.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/smoothquant/requirements.txt

--------------------------------------------------------------------------------
/smoothquant/run_generation.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/smoothquant/run_generation.py

--------------------------------------------------------------------------------
/static/llama2-chat-demo.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/aahouzi/llama2-chatbot-cpu/HEAD/static/llama2-chat-demo.gif
--------------------------------------------------------------------------------