├── LICENSE
└── README.md


/LICENSE:
--------------------------------------------------------------------------------
Creative Commons Legal Code

CC0 1.0 Universal

    CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
    LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
    ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
    INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
    REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
    PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
    THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
    HEREUNDER.

Statement of Purpose

The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").

Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.

For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.

1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:

  i. the right to reproduce, adapt, distribute, perform, display,
     communicate, and translate a Work;
 ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
     likeness depicted in a Work;
 iv. rights protecting against unfair competition in regards to a Work,
     subject to the limitations in paragraph 4(a), below;
  v. rights protecting the extraction, dissemination, use and reuse of data
     in a Work;
 vi. database rights (such as those arising under Directive 96/9/EC of the
     European Parliament and of the Council of 11 March 1996 on the legal
     protection of databases, and under any national implementation
     thereof, including any amended or successor version of such
     directive); and
vii. other similar, equivalent or corresponding rights throughout the
     world based on applicable law or treaty, and any national
     implementations thereof.

2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.

3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.

4. Limitations and Disclaimers.

 a. No trademark or patent rights held by Affirmer are waived, abandoned,
    surrendered, licensed or otherwise affected by this document.
 b. Affirmer offers the Work as-is and makes no representations or
    warranties of any kind concerning the Work, express, implied,
    statutory or otherwise, including without limitation warranties of
    title, merchantability, fitness for a particular purpose, non
    infringement, or the absence of latent or other defects, accuracy, or
    the present or absence of errors, whether or not discoverable, all to
    the greatest extent permissible under applicable law.
 c. Affirmer disclaims responsibility for clearing rights of other persons
    that may apply to the Work or any use thereof, including without
    limitation any person's Copyright and Related Rights in the Work.
    Further, Affirmer disclaims responsibility for obtaining any necessary
    consents, permissions or other rights required for any use of the
    Work.
 d. Affirmer understands and acknowledges that Creative Commons is not a
    party to this document and has no duty or obligation with respect to
    this CC0 or use of the Work.


--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
<div align="center">
    <h1>Awesome Totally Open ChatGPT</h1>
    <a href="https://github.com/sindresorhus/awesome"><img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg"/></a>
</div>

ChatGPT is GPT-3.5 finetuned with RLHF (Reinforcement Learning from Human Feedback) for human instruction and chat.

The projects below feature alternative instruction-finetuned language models for chat.
Projects are **not** counted if they are:
- Frontend projects that simply call OpenAI's APIs.
- Using language models that are not finetuned for human instruction or chat.

Tags:
-   Bare: only source code; no data, no model weights, no chat system
-   Standard: data and model weights available; bare chat via API
-   Full: data and model weights available, plus a full chat system including TUI and GUI
-   Complicated: semi-open source, not really open source, based on a closed model, etc.

Other relevant lists:
- [yaodongC/awesome-instruction-dataset](https://github.com/yaodongC/awesome-instruction-dataset): A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca)

# Table of Contents
1. [The template](#the-template)
2. [The list](#the-list)
   - [lucidrains/PaLM-rlhf-pytorch](#lucidrainspalm-rlhf-pytorch)
   - [togethercomputer/OpenChatKit](#togethercomputeropenchatkit)
   - [oobabooga/text-generation-webui](#oobaboogatext-generation-webui)
   - [KoboldAI/KoboldAI-Client](#koboldaikoboldai-client)
   - [LAION-AI/Open-Assistant](#laion-aiopen-assistant)
   - [tatsu-lab/stanford_alpaca](#tatsu-labstanford_alpaca)
     - [Other LLaMA-derived projects](#other-llama-derived-projects)
   - [BlinkDL/ChatRWKV](#blinkdlchatrwkv)
   - [THUDM/ChatGLM-6B](#thudmchatglm-6b)
   - [bigscience-workshop/xmtf](#bigscience-workshopxmtf)
   - [carperai/trlx](#carperaitrlx)
   - [databrickslabs/dolly](#databrickslabsdolly)
   - [LianjiaTech/BELLE](#lianjiatechbelle)
   - [ethanyanjiali/minChatGPT](#ethanyanjialiminchatgpt)
   - [cerebras/Cerebras-GPT](#cerebrascerebras-gpt)
   - [TavernAI/TavernAI](#tavernaitavernai)
   - [Cohee1207/SillyTavern](#cohee1207sillytavern)
   - [h2oai/h2ogpt](#h2oaih2ogpt)
   - [mlc-ai/web-llm](#mlc-aiweb-llm)
   - [Stability-AI/StableLM](#stability-aistablelm)
   - [clue-ai/ChatYuan](#clue-aichatyuan)
   - [OpenLMLab/MOSS](#openlmlabmoss)

# The template

Append the new project at the end of the file:

```markdown
## [{owner}/{project-name}](https://github.com/link/to/project)

Description goes here

Tags: Bare/Standard/Full/Complicated
```

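Filled in, the template yields an entry like the first one in the list:

```markdown
## [lucidrains/PaLM-rlhf-pytorch](https://github.com/lucidrains/PaLM-rlhf-pytorch)

Implementation of RLHF on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Tags: Bare
```
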
# The list

## [lucidrains/PaLM-rlhf-pytorch](https://github.com/lucidrains/PaLM-rlhf-pytorch)

Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.

Tags: Bare

## [togethercomputer/OpenChatKit](https://github.com/togethercomputer/OpenChatKit)

OpenChatKit provides a powerful, open-source base to create both specialized and general-purpose chatbots for various applications.

Related links:
- [spaces/togethercomputer/OpenChatKit](https://huggingface.co/spaces/togethercomputer/OpenChatKit)

Tags: Full

## [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)

A Gradio web UI for running large language models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.

Tags: Full

## [KoboldAI/KoboldAI-Client](https://github.com/KoboldAI/KoboldAI-Client)

A browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author’s Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.

Tags: Full

## [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.

Related links:
- [huggingface.co/OpenAssistant](https://huggingface.co/OpenAssistant)
- [r/OpenAssistant/](https://www.reddit.com/r/OpenAssistant/)

Tags: Full

## [tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)

The repo for the Stanford Alpaca project, which aims to build and share an instruction-following LLaMA model.

Tags: Complicated

### Other LLaMA-derived projects

- [pointnetwork/point-alpaca](https://github.com/pointnetwork/point-alpaca) Released weights recreated from Stanford Alpaca, an experiment in fine-tuning LLaMA on a synthetic instruction dataset.
- [tloen/alpaca-lora](https://github.com/tloen/alpaca-lora) Code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA).
- [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) Port of LLaMA inference to C/C++ for running on CPUs; supports Alpaca, GPT4All, etc.
- [setzer22/llama-rs](https://github.com/setzer22/llama-rs) Rust port of the llama.cpp project.
- [juncongmoo/chatllama](https://github.com/juncongmoo/chatllama) Open-source implementation of a LLaMA-based ChatGPT, runnable on a single GPU.
- [Lightning-AI/lit-llama](https://github.com/Lightning-AI/lit-llama) Implementation of the LLaMA language model based on nanoGPT.
- [nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) Demo, data, and code to train an assistant-style large language model on ~800k GPT-3.5-Turbo generations, based on LLaMA.
- [hpcaitech/ColossalAI#ColossalChat](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat) An open-source solution for cloning ChatGPT with a complete RLHF pipeline.
- [lm-sys/FastChat](https://github.com/lm-sys/FastChat) An open platform for training, serving, and evaluating large language model based chatbots.
- [nsarrazin/serge](https://github.com/nsarrazin/serge) A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.

## [BlinkDL/ChatRWKV](https://github.com/BlinkDL/ChatRWKV)

ChatRWKV is like ChatGPT, but powered by the RWKV (100% RNN) language model and open source.

Tags: Full

## [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)

ChatGLM-6B is an open bilingual language model based on the General Language Model (GLM) framework, with 6.2 billion parameters. With quantization, users can deploy it locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).

Related links:

- Alternative Web UI: [Akegarasu/ChatGLM-webui](https://github.com/Akegarasu/ChatGLM-webui)
- Slim version (removes 20K image tokens to reduce memory usage): [silver/chatglm-6b-slim](https://huggingface.co/silver/chatglm-6b-slim)
- Finetune ChatGLM-6B using low-rank adaptation (LoRA): [lich99/ChatGLM-finetune-LoRA](https://github.com/lich99/ChatGLM-finetune-LoRA)
- Deploying ChatGLM on Modelz: [tensorchord/modelz-ChatGLM](https://github.com/tensorchord/modelz-ChatGLM)
- Docker image with a built-in playground UI and an OpenAI-compatible streaming API, using [Basaran](https://github.com/hyperonym/basaran): [peakji92/chatglm:6b](https://hub.docker.com/r/peakji92/chatglm/tags)

Tags: Full

## [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)

This repository provides an overview of all components used to create BLOOMZ, mT0, and xP3, introduced in the paper [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786).

Related links:
- [bigscience/bloomz](https://huggingface.co/bigscience/bloomz)
- [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base)

Tags: Standard

## [carperai/trlx](https://github.com/carperai/trlx)

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF), supporting online RL up to 20B parameters and offline RL for larger models. Basically what you would use to finetune GPT into ChatGPT.

Tags: Bare

## [databrickslabs/dolly](https://github.com/databrickslabs/dolly)

Databricks' dolly-v2-12b is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. It is based on pythia-12b, trained on ~15k instruction/response fine-tuning records ([databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data)) generated by Databricks employees across capability domains from the InstructGPT paper.

Related links:
- [dolly-v2-12b model card (commercially usable)](https://huggingface.co/databricks/dolly-v2-12b)
- [dolly-v1-6b model card](https://huggingface.co/databricks/dolly-v1-6b)

Tags: Standard

## [LianjiaTech/BELLE](https://github.com/LianjiaTech/BELLE)

The goal of this project is to promote the development of the open-source community for Chinese-language large-scale conversational models. It optimizes for Chinese performance on top of the original Stanford Alpaca, and the model is finetuned using only data generated via ChatGPT (no other data). This repo contains: 175 Chinese seed tasks used for generating the data, code for generating the data, 0.5M generated examples used for fine-tuning the model, and a model finetuned from BLOOMZ-7B1-mt on the data generated by this project.

Related links:
- [English readme](https://github.com/LianjiaTech/BELLE#-belle-be-large-language-model-engine-1)

Tags: Standard

## [ethanyanjiali/minChatGPT](https://github.com/ethanyanjiali/minChatGPT)

A minimal example of aligning language models with RLHF, similar to ChatGPT.

Related links:
- [huggingface.co/ethanyanjiali/minChatGPT](https://huggingface.co/ethanyanjiali/minChatGPT)

Tags: Standard

## [cerebras/Cerebras-GPT](https://huggingface.co/cerebras/Cerebras-GPT-6.7B)

7 open-source GPT-3-style models ranging from 111 million to 13 billion parameters, trained using the [Chinchilla](https://arxiv.org/abs/2203.15556) formula. Model weights have been released under a permissive license (Apache 2.0, in particular).

Related links:
- [Announcement](https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/)
- [Models with other parameter counts](https://huggingface.co/cerebras)

Tags: Standard

## [TavernAI/TavernAI](https://github.com/TavernAI/TavernAI)

Atmospheric adventure chat using the **Pygmalion** AI language model by default, as well as other backends such as **KoboldAI**, ChatGPT, and GPT-4.

Tags: Full

## [Cohee1207/SillyTavern](https://github.com/Cohee1207/SillyTavern)

SillyTavern is a fork of TavernAI 1.2.8 which is under more active development and has added many major features. At this point the two can be thought of as completely independent programs. On its own, Tavern is useless, as it is just a user interface; you need access to an AI backend that can act as the roleplay character. Various backends are supported: the OpenAI API (GPT), KoboldAI (running either locally or on Google Colab), and more.

Tags: Full

## [h2oai/h2ogpt](https://github.com/h2oai/h2ogpt)

h2oGPT - The world's best open-source GPT:
- Open-source repository with fully permissive, commercially usable code, data, and models
- Code for preparing large open-source datasets as instruction datasets for fine-tuning of large language models (LLMs), including prompt engineering
- Code for fine-tuning large language models (currently up to 20B parameters) on commodity hardware and enterprise GPU servers (single or multi node)
- Code to run a chatbot on a GPU server, with a shareable endpoint and a Python client API
- Code to evaluate and compare the performance of fine-tuned LLMs

Related links:
- [h2oGPT 20B](https://gpt.h2o.ai/)
- [🤗 h2oGPT 12B #1](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot)
- [🤗 h2oGPT 12B #2](https://huggingface.co/spaces/h2oai/h2ogpt-chatbot2)

Tags: Full

## [mlc-ai/web-llm](https://github.com/mlc-ai/web-llm)

Brings large language models and chat to web browsers. Everything runs inside the browser with no server support.

Related links:
- https://mlc.ai/web-llm

Tags: Full

## [Stability-AI/StableLM](https://github.com/Stability-AI/StableLM)

This repository contains Stability AI's ongoing development of the StableLM series of language models and will be continuously updated with new checkpoints.

Related links:
- [huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat](https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat)
- [StableVicuna](https://github.com/Stability-AI/StableLM#stablevicuna), an RLHF fine-tune of Vicuna-13B v0, which is itself a fine-tune of LLaMA-13B.

Tags: Full

## [clue-ai/ChatYuan](https://github.com/clue-ai/ChatYuan)

ChatYuan: a large language model for dialogue in Chinese and English. (The repos are mostly in Chinese.)

Related links:
- [Partially translated English readme](https://github.com/nichtdax/awesome-totally-open-chatgpt/issues/18#issuecomment-1492826662)

Tags: Full

## [OpenLMLab/MOSS](https://github.com/OpenLMLab/MOSS)

MOSS: an open-source, tool-augmented conversational language model from Fudan University. (Most examples are in Chinese.)

Related links:
- [English readme](https://github.com/OpenLMLab/MOSS/blob/main/README_en.md)

Tags: Full


--------------------------------------------------------------------------------