├── .env.example
├── .gitignore
├── README.md
├── main.py
└── requirements.txt

/.env.example:
--------------------------------------------------------------------------------
REPLICATE_API_TOKEN=""
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
*.env
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Streamlit Chat Application with Replicate LlamaV2 Model

This is a simple, interactive chat application powered by Streamlit and the Replicate LlamaV2 model. It uses Streamlit for the front-end interface and Replicate's LlamaV2 model to generate responses to user input.

## Prerequisites:

- Python 3.8 or higher
- Streamlit
- Streamlit-chat
- Python-dotenv
- Replicate

## Quickstart

1. Clone the repository

```bash
git clone <repository-url>
cd <repository-directory>
```

2. Install the dependencies

```bash
pip install -r requirements.txt
```

3. Set the environment variables

Create a `.env` file in the root of your project (you can copy `.env.example` as a starting point) and add your Replicate API token.

```bash
# .env
REPLICATE_API_TOKEN=<your-replicate-api-token>
```

4. Run the Streamlit app

```bash
streamlit run main.py
```

## Usage

Type your message in the text input box and press Enter. The model generates a response, which is displayed in the chat window.

## How it Works

The `generate_response` function sends the user's input to the Replicate LlamaV2 model, collects the streamed output chunks, and joins them into a single response, which is then displayed on the Streamlit interface. A commented walkthrough of this streaming step appears in `main.py` below.

## Contributing

Contributions are welcome! Please read the [contributing guidelines](CONTRIBUTING.md) before getting started.

## License

This project is licensed under the terms of the MIT license. See the [LICENSE](LICENSE.md) file.

## Sponsors

✨ Find profitable ideas faster: [Exploding Insights](https://explodinginsights.com/)
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import streamlit as st
from dotenv import load_dotenv
from streamlit_chat import message
import replicate

# Load REPLICATE_API_TOKEN from the .env file so the replicate client can use it.
load_dotenv()

def generate_response(human_input):
    output = replicate.run(
        "replicate/llama70b-v2-chat:e951f18578850b652510200860fc4ea62b3b16fac280f83ff32282f87bbd2e48",
        input={"prompt": human_input},
    )
    # The replicate/llama70b-v2-chat model can stream output as it's running.
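    # Added note: replicate.run returns an iterator that yields the generated
    # text in small chunks as the model produces it; the loop below gathers
    # those chunks and joins them into the final reply (each chunk typically
    # carries its own whitespace, so no separator is needed).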
    # Collect all response parts into a list
    response_parts = []
    for item in output:
        response_parts.append(item)

    # Join with no separator: the streamed chunks already include spacing.
    response = "".join(response_parts)

    return response

st.title("🤖 Replicate LlamaV2 Chat")

# Keep the conversation history in Streamlit's session state so it survives reruns.
if 'generated' not in st.session_state:
    st.session_state['generated'] = []

if 'past' not in st.session_state:
    st.session_state['past'] = []

def get_text():
    input_text = st.text_input(" ", key="input")
    return input_text

user_input = get_text()

if user_input:
    output = generate_response(user_input)
    st.session_state.past.append(user_input)
    st.session_state.generated.append(output)

# Render the conversation, oldest first, alternating user and model messages.
if st.session_state['generated']:
    for i in range(len(st.session_state['generated'])):
        message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')
        message(st.session_state["generated"][i], key=str(i))
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
altair==5.0.1
annotated-types==0.5.0
attrs==23.1.0
blinker==1.6.2
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.2.0
click==8.1.6
decorator==5.1.1
gitdb==4.0.10
GitPython==3.1.32
idna==3.4
importlib-metadata==6.8.0
Jinja2==3.1.2
jsonschema==4.18.4
jsonschema-specifications==2023.7.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
mdurl==0.1.2
numpy==1.25.1
packaging==23.1
pandas==2.0.3
Pillow==9.5.0
protobuf==4.23.4
pyarrow==12.0.1
pydantic==2.0.3
pydantic_core==2.3.0
pydeck==0.8.0
Pygments==2.15.1
Pympler==1.0.1
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
pytz-deprecation-shim==0.1.0.post0
referencing==0.30.0
replicate==0.9.0
requests==2.31.0
rich==13.4.2
rpds-py==0.9.2
six==1.16.0
smmap==5.0.0
streamlit==1.25.0
streamlit-chat==0.1.1
tenacity==8.2.2
toml==0.10.2
toolz==0.12.0
tornado==6.3.2
typing_extensions==4.7.1
tzdata==2023.3
tzlocal==4.3.1
urllib3==2.0.4
validators==0.20.0
watchdog==3.0.0
zipp==3.16.2
--------------------------------------------------------------------------------