├── README.md
└── notebook.ipynb
/README.md:
--------------------------------------------------------------------------------
# Running Ollama in GitHub Codespaces

Learn how to efficiently run Ollama in GitHub Codespaces for free.

# :notebook_with_decorative_cover: Table of Contents

- [What is a Codespace?](#star2-what-is-a-codespace)
- [What is Ollama?](#star2-what-is-ollama)
- [Setting Up Ollama in GitHub Codespaces](#wrench-setting-up-ollama-in-github-codespaces)
- [Ollama Model Library](#books-ollama-model-library)

## :star2: What is a Codespace?

A codespace is a cloud-hosted development environment tailored for coding. GitHub Codespaces allows you to customize your project by committing configuration files to your repository, creating a consistent and repeatable environment for all users. For more details, refer to the [Introduction to dev containers](https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers).

## :star2: What is Ollama?

Ollama is an open-source project designed to simplify running Large Language Models (LLMs) on local machines. It provides a simple command-line interface and a local HTTP API that make advanced AI models accessible and customizable.

## :wrench: Setting Up Ollama in GitHub Codespaces

Follow these steps to set up and run Ollama in a GitHub Codespace:

### 1. Open a Codespace
- Navigate to your repository on GitHub.
- Click the `Code` button and open the `Codespaces` tab.
- Select an existing codespace, or create a new one if you don't have any.
### 2. Install Ollama
- Open the terminal in your codespace.
- Run the following command to download and install Ollama:
```sh
curl -fsSL https://ollama.com/install.sh | sh
```

### 3. Verify the Installation
- Type `ollama` in the terminal to verify the installation:
```sh
ollama
```
- If the installation is successful, you will see a list of available Ollama commands.

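You can also print just the installed version as a quick check (the exact output format may vary between releases):

```sh
# Print the installed Ollama version
ollama --version
```
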
### 4. Start Ollama
- Run the following command to start the Ollama server:
```sh
ollama serve
```
- Note that `ollama serve` runs in the foreground and occupies the terminal, so open a second terminal for the next step, or use the background-run sketch below.

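If you prefer to stay in a single terminal, one common pattern is to run the server in the background. This is a minimal sketch; `ollama.log` is just an example file name:

```sh
# Start the Ollama server in the background, keeping it alive after the shell exits
nohup ollama serve > ollama.log 2>&1 &

# Check the log to confirm the server is up (it listens on port 11434 by default)
tail ollama.log
```
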
### 5. Run and Chat with Llama 3
- To run and interact with the Llama 3 model, use the following command:
```sh
ollama run llama3
```
- The first run downloads the model weights (about 4.7GB for the default 8B variant), so it may take a few minutes.

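You can also pass a prompt directly on the command line, or query the HTTP API that `ollama serve` exposes on its default port 11434. A minimal sketch; the prompt text is just an example:

```sh
# One-shot generation: pass the prompt as an argument instead of opening a chat
ollama run llama3 "Why is the sky blue?"

# The same request through the REST API; "stream": false returns one JSON response
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```
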
## :books: Ollama Model Library

Ollama provides a variety of models that you can download and use. Visit the [Ollama model library](https://ollama.com/library) for a complete list.

### Example Models

Here are some example models available for use:

| Model              | Parameters | Size  | Command                        |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3            | 8B         | 4.7GB | `ollama run llama3`            |
| Llama 3            | 70B        | 40GB  | `ollama run llama3:70b`        |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`              |
| Phi 3 Medium       | 14B        | 7.9GB | `ollama run phi3:medium`       |
| Gemma              | 2B         | 1.4GB | `ollama run gemma:2b`          |
| Gemma              | 7B         | 4.8GB | `ollama run gemma:7b`          |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`         |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`             |

> **Important:**
> Ensure that the machine type of your codespace meets the following RAM requirements:
> - At least 8 GB of RAM for 7B models
> - At least 16 GB of RAM for 13B models
> - At least 32 GB of RAM for 33B models

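You can also download a model ahead of time without starting a chat; for example:

```sh
# Download model weights without opening a chat session
ollama pull phi3

# List the models available locally
ollama list
```
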
By following these steps, you can set up and run Ollama efficiently within GitHub Codespaces, leveraging its cloud-based environment to explore and utilize powerful LLMs.

--------------------------------------------------------------------------------
/notebook.ipynb:
--------------------------------------------------------------------------------
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Run Ollama in Google Colab",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "# Run Ollama in Google Colab\n",
        "Learn how to set up and run Ollama in Google Colab."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "## What is Ollama?\n",
        "Ollama is an open-source project that provides a powerful and user-friendly platform for running Large Language Models (LLMs) on your local machine. It simplifies the process of using advanced AI models."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "## Setting Up Ollama in Google Colab"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "download-and-install-ollama"
      },
      "source": [
        "# Download and install Ollama\n",
        "!curl -fsSL https://ollama.com/install.sh | sh"
      ],
      "execution_count": 1,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "verify-installation"
      },
      "source": [
        "# Verify the installation\n",
        "!ollama"
      ],
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Ollama CLI version x.x.x\n",
            "Usage: ollama [command] [options]\n",
            "... (more commands) ...\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "start-ollama"
      },
      "source": [
        "# Start the Ollama server in the background so this cell returns;\n",
        "# a plain `!ollama serve` would block the notebook indefinitely.\n",
        "# The server listens on http://localhost:11434 by default.\n",
        "!nohup ollama serve > ollama.log 2>&1 &"
      ],
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "run-llama3"
      },
      "source": [
        "# Run Llama 3 with a one-shot prompt; the interactive chat REPL\n",
        "# is not usable inside a notebook cell, so pass the prompt as an argument.\n",
        "!ollama run llama3 \"Why is the sky blue?\""
      ],
      "execution_count": 4,
      "outputs": []
    },
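    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "### Querying the server over HTTP\n",
        "Once the server is running, it also exposes a REST API on its default port 11434. The cell below is a minimal sketch; the prompt text is just an example."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "query-rest-api"
      },
      "source": [
        "# Send one non-streaming generation request to the local Ollama server\n",
        "!curl http://localhost:11434/api/generate -d '{\"model\": \"llama3\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}'"
      ],
      "execution_count": null,
      "outputs": []
    },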
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "## Available Models\n",
        "Ollama supports a variety of models that you can use. Below are some example models with their respective commands."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "| Model | Parameters | Size | Command |\n",
        "| ------------------ | ---------- | ----- | ----------------------------- |\n",
        "| Llama 3 | 8B | 4.7GB | `!ollama run llama3` |\n",
        "| Llama 3 | 70B | 40GB | `!ollama run llama3:70b` |\n",
        "| Phi 3 Mini | 3.8B | 2.3GB | `!ollama run phi3` |\n",
        "| Phi 3 Medium | 14B | 7.9GB | `!ollama run phi3:medium` |\n",
        "| Gemma | 2B | 1.4GB | `!ollama run gemma:2b` |\n",
        "| Gemma | 7B | 4.8GB | `!ollama run gemma:7b` |\n",
        "| Mistral | 7B | 4.1GB | `!ollama run mistral` |\n",
        "| Moondream 2 | 1.4B | 829MB | `!ollama run moondream` |\n",
        "| Neural Chat | 7B | 4.1GB | `!ollama run neural-chat` |\n",
        "| Starling | 7B | 4.1GB | `!ollama run starling-lm` |\n",
        "| Code Llama | 7B | 3.8GB | `!ollama run codellama` |\n",
        "| Llama 2 Uncensored | 7B | 3.8GB | `!ollama run llama2-uncensored`|\n",
        "| LLaVA | 7B | 4.5GB | `!ollama run llava` |\n",
        "| Solar | 10.7B | 6.1GB | `!ollama run solar` |"
      ]
    },
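    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "You can also download a model ahead of time without starting a chat; a quick sketch:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "pull-and-list-models"
      },
      "source": [
        "# Download model weights without opening a chat session, then list local models\n",
        "!ollama pull phi3\n",
        "!ollama list"
      ],
      "execution_count": null,
      "outputs": []
    },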
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text"
      },
      "source": [
        "> **Important:**\n",
        "> Ensure that your Colab runtime meets the following RAM requirements:\n",
        "> - At least 8 GB of RAM for 7B models\n",
        "> - At least 16 GB of RAM for 13B models\n",
        "> - At least 32 GB of RAM for 33B models"
      ]
    }
  ]
}
--------------------------------------------------------------------------------