|         |                |
| ------- | -------------- |
| Author  | Matt Rickard   |
| Twitter | @mattrickard   |
| GitHub  | r2d4/react-llm |
| License | MIT            |
When the device lacks WebGPU support, LLamaTab renders the serialized `gpuDevice` object and explains: "Sorry! LLamaTab is not supported on your device. Reason: `{gpuDevice.unsupportedReason}`. LLamaTab requires a device with WebGPU support."

A Large Language Model that runs entirely in the browser via WebGPU. No data is sent to a server. Loading the model for the first time may take a few minutes. Afterwards, the model will load instantly.

While the model loads, the UI shows "Loading model...", then "Reticulating splines..." while progress is below 50% and "Herding Llamas..." once it passes 50%, along with the progress percentage, `{Math.floor(progress * 100 * 100) / 100}%`.
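A rough reconstruction of that loading/unsupported UI as a React component is sketched below. Only the messages and the progress thresholds come from the source; the component name, props, and markup are assumptions.

```tsx
import * as React from "react";

// Sketch only: component name, props, and markup are assumptions;
// the copy and the progress logic mirror the description above.
type GpuDevice = { unsupportedReason?: string };

export function LoadingScreen({
  gpuDevice,
  progress,
}: {
  gpuDevice: GpuDevice;
  progress: number;
}) {
  // Unsupported devices get an explanation instead of the loader.
  if (gpuDevice.unsupportedReason) {
    return (
      <div>
        {JSON.stringify(gpuDevice)}
        <p>
          Sorry! LLamaTab is not supported on your device. Reason:{" "}
          {gpuDevice.unsupportedReason}. LLamaTab requires a device with WebGPU
          support.
        </p>
      </div>
    );
  }

  return (
    <div>
      <p>A Large Language Model that runs entirely in the browser via WebGPU.</p>
      <p>
        No data is sent to a server. Loading the model for the first time may
        take a few minutes. Afterwards, the model will load instantly.
      </p>
      <p>Loading model...</p>
      {progress < 0.5 && <p>Reticulating splines...</p>}
      {progress > 0.5 && <p>Herding Llamas...</p>}
      {/* progress is in [0, 1]; show it as a percentage with two decimals */}
      <p>{Math.floor(progress * 100 * 100) / 100}%</p>
    </div>
  );
}
```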
A template for the prompt. `$TEXT` will be replaced by the input text. The current template is rendered from `{prompt}`.
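For illustration, substituting the input into such a template could look like the sketch below; the helper name and the example template string are hypothetical.

```ts
// Hypothetical helper: replaces the $TEXT placeholder with the user's input.
function applyTemplate(template: string, text: string): string {
  return template.replace("$TEXT", text);
}

// Example template (not the app's actual default).
const template = "Summarize the following text in one sentence:\n\n$TEXT";

console.log(
  applyTemplate(template, "WebGPU exposes modern GPU features to the web.")
);
// Logs the template with the input text in place of $TEXT.
```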
For react-llm, the unsupported notice reads: "Sorry, unsupported! Reason: `{gpuDevice.unsupportedReason}`." react-llm runs models in the browser with WebGPU and only works in Google Chrome v113 and above on Desktop with supported GPUs.
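A minimal sketch of how such a support check can be written against the standard WebGPU API (`navigator.gpu`); the function name, return shape, and reason strings here are illustrative, not react-llm's actual values.

```ts
// Illustrative WebGPU support check; field names and messages are assumptions.
export async function checkWebGpu(): Promise<{
  supported: boolean;
  unsupportedReason?: string;
}> {
  const gpu = (navigator as any).gpu;
  if (!gpu) {
    return {
      supported: false,
      unsupportedReason:
        "WebGPU is not available in this browser; desktop Chrome 113+ is required.",
    };
  }

  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await gpu.requestAdapter();
  if (!adapter) {
    return {
      supported: false,
      unsupportedReason: "No suitable GPU adapter was found.",
    };
  }

  return { supported: true };
}
```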
A Large Language Model that runs entirely in the browser with WebGPU.

No data is sent to the server. Conversations are cached in local storage.

WebGPU is only supported in desktop Google Chrome 113 and above.

Powered by Apache TVM and the MLC Relax Runtime. Vicuna was trained by LMSys.
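As a rough illustration of the local-storage caching mentioned above, under the assumption of a made-up storage key and conversation shape (react-llm's actual storage format may differ):

```ts
// Illustrative only: the storage key and conversation shape are assumptions.
type Message = { role: "user" | "assistant"; text: string };
type Conversation = { id: string; title: string; messages: Message[] };

const STORAGE_KEY = "conversations";

export function saveConversations(conversations: Conversation[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(conversations));
}

export function loadConversations(): Conversation[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Conversation[]) : [];
}
```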