├── .github
└── workflows
│ ├── process_mdx_file.yml
│ └── tutorials-embeddings.yml
├── blog
├── ar
│ └── README.md
├── en
│ ├── 100k-community-members.mdx
│ ├── 11-speech-to-text-apps.mdx
│ ├── 12-days-12-launches-openai-kicks-off-with-an-unleashed-o1-and-dollar200-pro-tier.mdx
│ ├── 3-ai-game-changers-for-building-apps.mdx
│ ├── 3d-ai-models-hackathon.mdx
│ ├── 5-apps-ideas-you-can-build-with-langchain.mdx
│ ├── 5-proven-strategies-for-startups-to-boost-visibility.mdx
│ ├── 50-000-amazing-ai-builders-in-one-place.mdx
│ ├── 7-amazing-ai-apps-ideas-you-can-build-with-anthropic-claude.mdx
│ ├── 7-app-ideas-you-can-build-with-stable-diffusion.mdx
│ ├── 7-apps-ideas-you-can-build-with-gpt-3.mdx
│ ├── 8-mind-blowing-things-you-might-have-missed-about-gpt-4.mdx
│ ├── How_to_Use_Flux:_Unlocking_the_Potential_of_AI_Image_Generation.mdx
│ ├── a-40k-special-participants-story.mdx
│ ├── a-brand-new-startup.mdx
│ ├── ai-101-a-comprehensive-guide-to-understanding-artificial-intelligence-in-2023.mdx
│ ├── ai-agents-hackathon-summary.mdx
│ ├── ai-agents-revolution.mdx
│ ├── ai-hackathon-tutorial-step-1-how-to-prepare-for-the-ai-hackathon.mdx
│ ├── ai-in-business-how-use-ai-to-stay-ahead-of-the-competition.mdx
│ ├── ai-in-enterprise-in-2023.mdx
│ ├── ai-industry-trends-in-2023.mdx
│ ├── ai-podcasts-you-need-to-listen-to-in-2022.mdx
│ ├── ai-revolution-for-game-industry-best-best-ai-text-and-image-to-3d-generation.mdx
│ ├── ai-revolution-for-game-industry-best-best-image-to-3d-ai-tools.mdx
│ ├── ai-revolution-for-game-industry-best-txt-to-texture-tools.mdx
│ ├── ai-revolution-for-movie-industry-best-ai-tools-for-scriptwriters.mdx
│ ├── ai-slingshot-building-the-best-ai-startup-hackathon-3-winners.mdx
│ ├── ai-worldmap-the-ai-landscape-in-2023.mdx
│ ├── ai21-labs-hackathon-2-winner-announcement.mdx
│ ├── ai21-labs-hackathon-winner-announcement.mdx
│ ├── aria-allegro-multimodal-hack-summary.mdx
│ ├── autogpt-hackathon-celebrating-innovation-and-announcing-the-winners.mdx
│ ├── babyagi-and-autogpt-which-one-to-choose.mdx
│ ├── benin-hackathon-summary.mdx
│ ├── best-4-text-to-video-ai-tools.mdx
│ ├── best-5-ai-detection-tools.mdx
│ ├── best-7-ai-podcasts-you-should-listen-to-in-2023.mdx
│ ├── beta-testing-and-customer-feedback-loops.mdx
│ ├── build-fast-ship-fast-hack-summary.mdx
│ ├── build-your-ai-startup-hack-summary.mdx
│ ├── build-your-ai-startup-hackathon-2-winner-announcement.mdx
│ ├── build-your-ai-startup-hackathon-winner-announcement.mdx
│ ├── building-with-google-cloud-ai-vertex-hackathon-insights.mdx
│ ├── chatgpt-api-and-whisper-api-global-hackathon-winner-announcement.mdx
│ ├── chatgpt-plugins-are-out.mdx
│ ├── chatgpt-update-chatgpt-plugins-are-out.mdx
│ ├── claude-vs-chatgpt.mdx
│ ├── co-founders-agreement.mdx
│ ├── codestral-hackathon-summary.mdx
│ ├── cogx-fest-online-hackathon-summary.mdx
│ ├── cohere-and-qdrant-multilingual-semantic-search-hackathon-winner-announcement.mdx
│ ├── cohere-coral-hackathon-winners-and-main-highlights.mdx
│ ├── cohere-hackathon-winner-announcement.mdx
│ ├── cohere-thanksgiving-hackathon-winner-announcement.mdx
│ ├── create-ai-artwork-free-2023.mdx
│ ├── day-3-12-openai-sora-just-changed-video-creation-forever.mdx
│ ├── day-4-12-openai-canvas-goes-public-running-python-inside-chatgpt.mdx
│ ├── day-5-12-openai-chatgpt-goes-native-on-2-billion-apple-devices.mdx
│ ├── doge-hack-summary.mdx
│ ├── driving-voice-ai-innovation-highlights-from-the-elevenlabs-hackathon.mdx
│ ├── edge-runners-3-2-hack-summary.mdx
│ ├── enhance-your-website-with-the-top-10-ai-plugins-for-chatgpt-2023.mdx
│ ├── essential-skills-every-first-time-founder-should-develop.mdx
│ ├── evolutionizing-business-with-autogpt-why-you-should-use-autogpt.mdx
│ ├── falcon-hackathon-summary.mdx
│ ├── falcon-llms-hackathon-sponsored-by-gaia-summary-and-winners.mdx
│ ├── falcon-models-explore.mdx
│ ├── from-hackathons-to-market-leadership.mdx
│ ├── from-poc-to-series-a.mdx
│ ├── gemini-ultra-hackathon-summary.mdx
│ ├── gemma-2-ai-challenge-summary.mdx
│ ├── generative-ai-ai-transformers.mdx
│ ├── generative-ai-generative-ai-models.mdx
│ ├── generative-ai-hackathon-winner-announcement.mdx
│ ├── generative-ai-in-action-real-world-applications-and-examples.mdx
│ ├── generative-ai-platforms-enable-rapid-application-development.mdx
│ ├── giving-voice-to-the-community-new-features-on-lablabai.mdx
│ ├── gpt-4o-highlights.mdx
│ ├── gpt4-trulens-hackathon-winners-and-summary.mdx
│ ├── guidelines-for-creating-a-project-pitch.mdx
│ ├── how-ai-startups-can-capitalize-on-spacex.mdx
│ ├── how-large-language-models-will-change-your-business.mdx
│ ├── how-magicdevs-ltm-2-mini-is-redefining-ais-ability-to-handle-vast-contexts.mdx
│ ├── how-openais-structured-outputs-are-transforming-api-reliability-and-developer-control.mdx
│ ├── how-optimized-data-curation-is-changing-the-game.mdx
│ ├── how-to-access-o1-models.mdx
│ ├── how-to-build-an-ai-startup.mdx
│ ├── how-to-create-an-image-with-ai-5-top-ai-generative-apps.mdx
│ ├── how-to-improve-business-with-ai-google-workspace-and-microsoft-365-copilot.mdx
│ ├── how-to-start-a-career-in-the-fastest-growing-industry-on-earth.mdx
│ ├── how-to-use-cohere-top-5-business-ideas-with-cohere-api.mdx
│ ├── how-to-use-gpt-4-for-content-moderation.mdx
│ ├── ibm-watsonx-assistant-hack-summary.mdx
│ ├── ibm-watsonx-hack-summary.mdx
│ ├── identifying-your-ideal-customer.mdx
│ ├── innovating-with-ai-at-the-ai21-labs-hackathon.mdx
│ ├── inside-lablab-next-cohort1.mdx
│ ├── inside-the-mind-of-chatgot.mdx
│ ├── lablabai-ai-hackathons-summary-the-participants-story.mdx
│ ├── langflow-hack-summary.mdx
│ ├── learn-ai-in-2024.mdx
│ ├── llama-2-hackathon-with-clarifai.mdx
│ ├── llama-3.1-unleashing-the-open-source-ai-revolution.mdx
│ ├── llama-impact-hack-rome-summary.mdx
│ ├── llama-impact-hackathon-sf-summary.mdx
│ ├── llama-impact-pan-latam-hack-summary.mdx
│ ├── llama-stack-comparison-blog.mdx
│ ├── llama3-hackathon-summary.mdx
│ ├── llama3.2-blog.mdx
│ ├── lokahi-healthcare-accelerator-hack-summary.mdx
│ ├── managing-stress-and-avoiding-burnout-as-a-startup-founder.mdx
│ ├── maximizing-website-performance-with-chatgpt-plugins.mdx
│ ├── measuring-growth-key-metrics-for-ai-startups.mdx
│ ├── mistral-large-2.mdx
│ ├── model-distillation-openais-solution-for-efficient-ai-deployment.mdx
│ ├── navigating-the-transition-from-employee-to-founder.mdx
│ ├── news-for-developers-september-2023.mdx
│ ├── next-hackathon-2-edge-runners-summary.mdx
│ ├── nextgen-gpt-ai-hackathon-summary.mdx
│ ├── niche-vs-broad-market-strategies.mdx
│ ├── openai-assistants-vs-llama-index-mongodb-recap.mdx
│ ├── openai-day-2-reinforcement-fine-tuning-brings-strategic-shift-in-ai-development.mdx
│ ├── openai-hackathon-winner-announcement.mdx
│ ├── openai-stack-hack-winner-announcement.mdx
│ ├── openai-whisper-ai-hackathon-summary-the-participants-story-part-2.mdx
│ ├── openai-whisper-announcement.mdx
│ ├── openai-whisper-hackathon-summary-the-participants-story.mdx
│ ├── rag-advanced-hackathon-summary.mdx
│ ├── reasoning-with-o1-hack-summary.mdx
│ ├── redis-sidequest-winner-announcement.mdx
│ ├── redis.mdx
│ ├── revolutionizing-ai-unveiling-the-winning-creations-from-the-langchain-x-gpt-agents-hackathon.mdx
│ ├── revolutionizing-business-with-autogpt-how-to-use-autogpt-in-your-business.mdx
│ ├── salz21-ai-hackathon-winner-announcement.mdx
│ ├── semantic-search-ai-hackathon-winner-announcement.mdx
│ ├── stable-diffusion-hackathon-the-first-ai-art-competition-summary.mdx
│ ├── stable-diffusion-hackathon-winner-announcement.mdx
│ ├── stable-diffusion-new-model-sdxl-beta.mdx
│ ├── state-of-the-art-ai-5-apps-you-can-build-with-coheres-multilingual-text-understanding-model.mdx
│ ├── state-of-the-art-ai-a-30-k-special-participants-story.mdx
│ ├── state-of-the-art-ai-coheres-multilingual-text-understanding-model.mdx
│ ├── state-of-the-art-ai-get-the-most-out-of-your-ai-hackathon.mdx
│ ├── state-of-the-art-ai-how-to-start-your-career-in-ai-industry.mdx
│ ├── state-of-the-art-ai-how-to-use-ai-to-start-your-business.mdx
│ ├── state-of-the-art-ai-jason-calacanis-the-greatest-startup-investor-of-all-time-number-one-prediction-for-2023.mdx
│ ├── state-of-the-art-ai-lablabai-founders-as-the-riseup-lineup.mdx
│ ├── state-of-the-art-ai-lablabai-on-product-hunt.mdx
│ ├── state-of-the-art-ai-should-every-developer-know-ai-nowadays.mdx
│ ├── state-of-the-art-ai-stable-diffusion-models.mdx
│ ├── state-of-the-art-ai-top-10-ai-technologies-in-2023.mdx
│ ├── state-of-the-art-ai-top-5-ai-apps-you-can-build-with-cohere-api.mdx
│ ├── state-of-the-art-ai-two-online-ai-hackathons-you-should-visit-in-december.mdx
│ ├── state-of-the-art-ai-why-2023-will-be-the-year-of-ai.mdx
│ ├── the-ai-breakthroughs-of-2023.mdx
│ ├── the-best-7-ai-interior-design-tools.mdx
│ ├── the-best-ai-agents-in-2023.mdx
│ ├── the-founders-mindset-cultivating-growth-and-adaptability.mdx
│ ├── the-mondaycom-ai-app-hackathon-winners-are-here.mdx
│ ├── the-power-of-autogpt-exploring-intelligent-agents.mdx
│ ├── the-power-of-the-assistants-api-by-openai.mdx
│ ├── the-search-for-a-co-founder.mdx
│ ├── the-unavoidable-challenges-every-startup-founder-faces.mdx
│ ├── the_new_features_in_grok_2.mdx
│ ├── this-week-in-ai-exploring-the-latest-from-metagpt-and-gpt4-and-more.mdx
│ ├── title-stable-diffusion-hackathon-summary-the-winners-story.mdx
│ ├── title-state-of-the-art-ai-5-apps-you-can-build-with-chatgpt.mdx
│ ├── tldr-era-is-here-why-you-should-use-ai-in-everyday-life.mdx
│ ├── transform-your-ideas-into-3d-models-instantly-with-ai.mdx
│ ├── understanding-openai-swarm-a-framework-for-multi-agent-systems.mdx
│ ├── unleashing-creativity-with-ai-top-5-app-ideas-for-your-next-ai-hackathon.mdx
│ ├── unlocking-new-dimensions-a-deep-dive-into-openais-vision-fine-tuning-with-gpt-4o.mdx
│ ├── unraveling-the-stable-diffusion-hackathon-winners-and-their-masterpieces.mdx
│ ├── vectara-hackathon-overview.mdx
│ ├── were-growing-with-our-community-new-features-on-lablabai.mdx
│ ├── what-are-llms-and-how-do-large-language-models-work.mdx
│ ├── what-is-ai-summarizing.mdx
│ ├── what-is-antrophic-claude-and-how-to-get-access-to-it.mdx
│ ├── what-is-autogpt-and-how-can-i-benefit-from-it.mdx
│ ├── what-is-babyagi-and-how-can-i-benefit-from-it.mdx
│ ├── what-is-gpt-4-vision.mdx
│ ├── whisper-openai-hackathon-winners.mdx
│ ├── why-building-your-ai-startup-is-your-best-bet-in-2023.mdx
│ ├── why-should-i-use-ai21-labs-technology.mdx
│ └── winners-of-the-leap-2024-oasis-ai-hackathon.mdx
└── readme.md
├── package.json
├── readme.md
├── script_output.txt
├── scripts
├── embedTutorials.js
└── process_images.py
├── technologies
├── README.md
├── agentops
│ └── index.mdx
├── ai21-labs
│ └── index.mdx
├── ai71
│ ├── falcon-2-11b-vlm.mdx
│ ├── falcon-2-11b.mdx
│ ├── falcon-llm.mdx
│ └── index.mdx
├── aiml-api
│ └── index.mdx
├── anthropic
│ ├── claude.mdx
│ └── index.mdx
├── arize
│ └── index.mdx
├── audiocraft
│ └── index.mdx
├── autogen
│ └── index.mdx
├── autogpt
│ └── index.mdx
├── aws-sagemaker
│ └── index.mdx
├── babyagi
│ └── index.mdx
├── bert
│ └── index.mdx
├── camel-ai
│ └── index.mdx
├── camel
│ └── index.mdx
├── chroma
│ └── index.mdx
├── clarifai
│ └── index.mdx
├── codium
│ └── index.mdx
├── cohere
│ ├── classify.mdx
│ ├── coral.mdx
│ ├── embed.mdx
│ ├── generate.mdx
│ ├── index.mdx
│ ├── neural-search.mdx
│ └── rerank.mdx
├── composio
│ └── index.mdx
├── crew-ai
│ └── index.mdx
├── cursor
│ └── index.mdx
├── data-resources-ai-for-connectivity
│ └── index.mdx
├── deepseek
│ ├── deepseek-r1.mdx
│ ├── deepseek-v3.mdx
│ └── index.mdx
├── easyocr
│ └── index.mdx
├── elevenlabs
│ └── index.mdx
├── featherless
│ └── index.mdx
├── fine-tuner-ai
│ └── index.mdx
├── fuyu-8b.mdx
├── fuyu
│ └── index.mdx
├── gan
│ └── index.mdx
├── generative-agents
│ └── index.mdx
├── get3d
│ └── index.mdx
├── godmode
│ └── index.mdx
├── google-colab
│ └── index.mdx
├── google
│ ├── chirp.mdx
│ ├── codey.mdx
│ ├── gemini-ai.mdx
│ ├── gemma-2.mdx
│ ├── gemma.mdx
│ ├── generative-ai-studio.mdx
│ ├── imagen.mdx
│ ├── index.mdx
│ ├── model-garden.mdx
│ └── palm.mdx
├── gorilla
│ └── index.mdx
├── groq
│ └── index.mdx
├── grounded-sam
│ └── index.mdx
├── ibm
│ ├── granite.mdx
│ ├── index.mdx
│ ├── watsonx-ai.mdx
│ └── watsonx-assistant.mdx
├── langchain
│ ├── index.mdx
│ └── opengpts.mdx
├── langflow
│ └── index.mdx
├── llama-meta-rome-datasets
│ └── index.mdx
├── llama3
│ └── index.mdx
├── llamaindex
│ └── index.mdx
├── llava
│ └── index.mdx
├── lokahi-hackathon-datasets
│ └── index.mdx
├── ltm-2-mini
│ └── index.mdx
├── meta
│ ├── index.mdx
│ ├── llama-2.mdx
│ ├── llama-3.1.mdx
│ ├── llama-3.2.mdx
│ ├── llama.mdx
│ └── seamlessm4t.mdx
├── metagpt
│ └── index.mdx
├── microsoft
│ ├── index.mdx
│ └── phi-3.mdx
├── minds-db
│ └── index.mdx
├── mistral-ai
│ └── index.mdx
├── monday
│ ├── index.mdx
│ ├── monday-ai-assistant.mdx
│ └── mondaycom.mdx
├── mongodb
│ └── index.mdx
├── multion
│ └── index.mdx
├── nomicai
│ ├── gpt4all.mdx
│ └── index.mdx
├── novita
│ └── index.mdx
├── open-elm
│ └── index.mdx
├── open-interpreter
│ └── index.mdx
├── openai
│ ├── assistants-api.mdx
│ ├── chatgpt.mdx
│ ├── codex.mdx
│ ├── dall-e-2.mdx
│ ├── dall-e-mini.mdx
│ ├── gpt-4-vision.mdx
│ ├── gpt-4o.mdx
│ ├── gpt3-5.mdx
│ ├── gpt3.mdx
│ ├── gpt4.mdx
│ ├── gpts.mdx
│ ├── image-generation-api.mdx
│ ├── index.mdx
│ ├── o1.mdx
│ ├── openai-gym.mdx
│ ├── point-e.mdx
│ ├── shap-e.mdx
│ └── whisper.mdx
├── pinecone
│ └── index.mdx
├── portkey
│ └── index.mdx
├── privategpt
│ └── index.mdx
├── qdrant
│ └── index.mdx
├── redis
│ └── index.mdx
├── reinforcement-learning
│ └── index.mdx
├── replit
│ └── index.mdx
├── restack
│ └── index.mdx
├── rhymes-ai
│ └── index.mdx
├── sambanova
│ └── index.mdx
├── sdxl
│ └── index.mdx
├── snowflake
│ └── index.mdx
├── solo-tech
│ └── index.mdx
├── stability-ai
│ ├── index.mdx
│ ├── stable-diffusion.mdx
│ ├── stable-lm.mdx
│ ├── stable-video.mdx
│ └── stablecode.mdx
├── stable-dreamfusion
│ └── index.mdx
├── stanford-alpaca
│ └── index.mdx
├── streamlit
│ └── index.mdx
├── superagi
│ └── index.mdx
├── tabbyml
│ ├── index.mdx
│ └── tabby.mdx
├── template.mdx
├── text-generation-webui
│ └── index.mdx
├── tiny-llama
│ └── index.mdx
├── together-ai
│ └── index.mdx
├── tonic-ai
│ └── index.mdx
├── toolhouse
│ └── index.mdx
├── trae-ide
│ └── index.mdx
├── truera
│ ├── index.mdx
│ └── trulens.mdx
├── twelvelabs
│ └── index.mdx
├── unstructuredio
│ └── index.mdx
├── upstage
│ ├── index.mdx
│ └── solar-pro-preview.mdx
├── vectara
│ └── index.mdx
├── vectorboard
│ └── index.mdx
├── vercel
│ └── index.mdx
├── weaviate
│ └── index.mdx
├── webgpu
│ └── index.mdx
├── x-ai
│ ├── grok.mdx
│ └── index.mdx
├── yi-llms
│ └── index.mdx
├── yolo
│ ├── index.mdx
│ ├── yolov5.mdx
│ ├── yolov6.mdx
│ ├── yolov7.mdx
│ └── yolov8.mdx
└── zilliz
│ └── index.mdx
├── topics
├── app
│ └── chatbot
│ │ └── index.json
├── appTechnology
│ └── openai
│ │ ├── gpt3.json
│ │ └── index.json
└── readme.md
└── tutorials
├── README.md
├── ar
├── README.md
└── create-a-simple-app-with-openai-gpt-4-and-streamlit.mdx
├── en
├── Hackernoon_ Going Global.docx
├── How_to_Use_AI_to_Create_Components_for_Figma:_A_Beginner’s_Guide.mdx
├── agentops-tutorial.mdx
├── agents-retrieval-chatbot.mdx
├── ai-agents-tutorial-how-to-use-and-create-them.mdx
├── ai-ml-tutorial.mdx
├── ai21-labs-streamlit-tutorial-sport-guesser.mdx
├── ai21-labs-tutorial-building-ai-powered-blog-editor.mdx
├── ai21-labs-tutorial-how-to-adopt-ai21-to-your-ai-project.mdx
├── ai21-labs-tutorial-how-to-create-a-contextual-answers-app.mdx
├── ai21-labs-tutorial-how-to-create-a-text-improver-app.mdx
├── ai21-labs-tutorial-how-to-use-the-playground.mdx
├── ai21-memory.mdx
├── ai21-sd-tutorial.mdx
├── ai71-platform-guide.mdx
├── allegro-beginner-tut.mdx
├── anthropic-claude-simple-chatbot.mdx
├── anthropic-claude-summarization.mdx
├── anthropic-claude-tutorial-building-a-simple-and-safe-collaborative-writing-app.mdx
├── anthropic-tutorial-how-to-build-your-own-judicial-ai-assistant.mdx
├── anthropics-claude-and-langchain-tutorial-bulding-personal-assistant-app.mdx
├── applying-stable-diffusion-api-to-google-colab.mdx
├── aria-api-tutorial.mdx
├── arxiv-summarizer-related-papers.mdx
├── audiocraft-tutorial.mdx
├── auto-gpt-advanced-tutorial-creating-ai-generated-linkedin-content.mdx
├── auto-gpt-forge-tutorial.mdx
├── auto-gpt-tutorial-how-to-set-up-auto-gpt.mdx
├── autogpt-tutorial-building-ai-agent-powered-research-assistant-app.mdx
├── autogpt-tutorial-creating-a-research-assistant-with-auto-gpt-forge.mdx
├── autogpt-tutorial-how-to-set-up-your-own-ai-bot-in-under-30-minutes.mdx
├── autogpt-tutorial-how-to-use-and-create-agent-for-coding-game.mdx
├── beginner-level-tutorial-on-using-Llama2model.mdx
├── beginners-introduction-llm-open-ai-low-code.mdx
├── best-practices-for-deploying-ai-agents-with-the-llama-stack.mdx
├── beyond-commands-teaching-claude-to-drive-your-computer.mdx
├── bing-chatbot-tutorial.mdx
├── building-a-multimodal-edge-application-with-llama-32-and-llama-guard.mdx
├── building-an-ai-powered-personal-health-dashboard-with-falcon-180b.mdx
├── building-an-intelligent-ai-agent-for-content-moderation-with-structured-output.mdx
├── building-efficient-ai-models-with-openais-model-distillation-a-comprehensive-guide.mdx
├── camel-tutorial-building-communicative-agents-for-large-scale-language-model-exploration.mdx
├── chatgpt-guide.mdx
├── chatgpt-how-to-improve-your-work-with-chatgpt.mdx
├── chatgpt-plugin-tutorial-how-to-create-chatgpt-plugin.mdx
├── chatgpt-plugin-tutorial.mdx
├── chatgpt-tutorial-how-to-create-a-website-with-chatgpt.mdx
├── chatgpt-tutorial-how-to-easily-improve-your-coding-skills-with-chatgpt.mdx
├── chatgpt-tutorial-how-to-integrate-chatgpt-and-whisper-api-into-your-project.mdx
├── chatgpt-tutorial-how-to-use-chatgpt-for-seo.mdx
├── chatgpt-tutorial-how-to-use-chatgpt-to-create-your-marketing-strategy.mdx
├── chirp-tutorial-how-to-use-chirp-model-on-google-cloud.mdx
├── choosing-the-right-ai-model-for-synthetic-data-a-deep-dive.mdx
├── chroma-stable-diffusion-tutorial.mdx
├── chroma-tutorial-with-anthropics-claude-model-for-enhancing-the-chatbot-knowledge-base.mdx
├── chroma-tutorial-with-cohere-platform-building-helpdesk-app-for-superheroes.mdx
├── chroma-tutorial-with-openais-gpt-35-model-for-memory-feature-in-chatbot.mdx
├── chroma-tutorial-with-stable-diffusion-building-simple-image-generation-gallery-app-with-semantic-search-capabilities.mdx
├── cohere-chat-summarizer.mdx
├── cohere-chatbot.mdx
├── cohere-content-moderation.mdx
├── cohere-headline-classify-nextjs.mdx
├── cohere-how-to-use-the-new-multilingual-model.mdx
├── cohere-in-depth-sentiment-analysis-of-reviews.mdx
├── cohere-models-guide.mdx
├── cohere-playground.mdx
├── cohere-posts-summarizer.mdx
├── cohere-product-description-generator.mdx
├── cohere-rerank-model-the-solution-for-search-ai-application.mdx
├── cohere-rerank-tutorial.mdx
├── cohere-semantic-search.mdx
├── cohere-tabular-data-requests.mdx
├── cohere-text-classifier.mdx
├── cohere-text-embedder.mdx
├── cohere-tutorial-answer-bot.mdx
├── cohere-tutorial-entity-extraction.mdx
├── cohere-tutorial-how-to-create-a-dogs-breed-recognition-api.mdx
├── composio-tutorial.mdx
├── conversational-ai-and-personalized-advertising.mdx
├── crafting-engaging-stories-with-ai-building-an-interactive-media-app.mdx
├── craiyon-tutorial.mdx
├── creating-workflows-in-clarifai-community-a-comprehensive-tutorial.mdx
├── crewai-multi-agent-system.mdx
├── deep-learning-introduction.mdx
├── developing-intelligent-agents-with-crewai.mdx
├── e-learning-with-llama-tutorial.mdx
├── easyocr-and-gpt-extraction-summarization.mdx
├── efficient-vector-similarity-search-with-redis-a-step-by-step-tutorial.mdx
├── elevenlabs-langchain-tutorial-how-to-create-custom-podcast-generator-streamlit-app.mdx
├── elevenlabs-tutorial-adding-a-witty-narrator-into-your-minecraft-game-via-simple-mod.mdx
├── elevenlabs-tutorial-build-your-fully-voiced-ai-powered-brainstorming-partner-app.mdx
├── elevenlabs-tutorial-building-ai-powered-auto-dubbing-service.mdx
├── elevenlabs-tutorial-building-simple-word-spelling-app.mdx
├── elevenlabs-tutorial.mdx
├── enhancing-large-language-models-with-long-document-interaction-a-comprehensive-tutorial.mdx
├── esrgan.mdx
├── falcon-llms-models-tutorial.mdx
├── fine-tuning-llama3.mdx
├── fine-tuning-phi3.mdx
├── fine-tuning-tinyllama.mdx
├── getting-started-with-ai21labs-for-non-techies.mdx
├── going-global-how-coheres-multilingual-model-is-helping-businesses-connect-and-succeed-worldwide.mdx
├── google-ai-studio.mdx
├── gpt-4-tutorial-how-to-build-a-website-with-bing-chatbot.mdx
├── gpt-4-tutorial-how-to-integrate-gpt-4-into-your-app.mdx
├── gpt-4-tutorial-how-to-use-gpt-4-built-in-bing.mdx
├── gpt-trip-scheduler.mdx
├── gpt3-streamlit.mdx
├── gpt3.mdx
├── gpt4all-sd-tutorial.mdx
├── guide-to-ibm-watsonx-assistant.mdx
├── how-to-get-started-with-clarifai.mdx
├── how-to-get-started-with-clarify.mdx
├── how-to-get-started-with-stablecode.mdx
├── how-to-get-started-with-superagi.mdx
├── how-to-integrate-stable-diffusion-into-your-existing-project.mdx
├── how-to-protect-api-key-in-hackathons.mdx
├── how-to-use-ai21labs.mdx
├── how-to-use-babyagi.mdx
├── how-to-use-collaborators-and-organization-features-within-the-clarifai-platform.mdx
├── how-to-use-generative-ai-studio-by-google.mdx
├── how-to-use-github-for-your-hackathon-project.mdx
├── how-to-use-llama-2-model-with-langchain-on-clarifai.mdx
├── image-generator.mdx
├── imagen-vertexai-tutorial.mdx
├── integrating-dall-e-2-api-with-trulens-elevating-image-generation-capabilities.mdx
├── koboldai-tutorial-using-free-and-open-source-ai-models-to-craft-fun-and-quirky-stories.mdx
├── llama3-with-ollama.mdx
├── llama3.1-multilingual.mdx
├── llama3.2-vision-cooking-tutorial.mdx
├── llava-fuyu-8b-integration-tutorial-crafting-an-automated-social-media-ad-generato.mdx
├── making-ai-smarter-and-smaller-a-practical-guide-to-efficient-model-training.mdx
├── mastering-ai-content-creation-leveraging-llama-3-and-groq-api.mdx
├── mastering-generative-models-on-the-clarifai-platform.mdx
├── midjourney.mdx
├── model-evaluation-tutorial-with-clarifai.mdx
├── monday-app-LLM-debug-tutorial.mdx
├── monday-first-api-call.mdx
├── monday-langchain-ai-agent-with-tools.mdx
├── monday-palm-tutorial.mdx
├── monday-stable-diffusion-tutorial.mdx
├── mongodb-agent.mdx
├── natural-language-to-sql-codex.mdx
├── openai-assistants-api-unleashed.mdx
├── openais-swarm-a-deep-dive-into-multi-agent-orchestration-for-everyone.mdx
├── palm2-tutorial-building-character-based-chatbot-app-using-powerful-ai-model.mdx
├── palm2-tutorial.mdx
├── prototyping-with-stable-diffusion-webui.mdx
├── python-vscode-beginner-tutorial.mdx
├── qdrant-cohere-tutorial.mdx
├── question-and-answer-on-your-data-with-qdrant.mdx
├── redis-langchain-ecommerce-chatbot.mdx
├── run-stable-diffusion-on-gcp.mdx
├── setting-up-jupyter-notebook.mdx
├── shape-e-tutorial-how-to-set-up-and-use-shap-e-model.mdx
├── stable-diffusion-api.mdx
├── stable-diffusion-img2img.mdx
├── stable-diffusion-inpainting.mdx
├── stable-diffusion-interpolation.mdx
├── stable-diffusion-lambda-diffuser.mdx
├── stable-diffusion-lexica.mdx
├── stable-diffusion-prompt-engineering.mdx
├── stable-diffusion-tutorial-build-a-text-to-image-generator-with-stable-diffusion-nextjs-and-vercel.mdx
├── stable-diffusion-tutorial-how-to-create-video-with-text-prompts.mdx
├── stable-diffusion-tutorial-how-to-make-an-ai-art-with-qr-code.mdx
├── stable-diffusion-vercel.mdx
├── stable-diffusion-webui.mdx
├── streamline-your-trello-workflow-with-synapse-copilot.mdx
├── streamlit-deploy-tutorial.mdx
├── superagi-tutorial-generate-a-codebase-and-push-it-to-github.mdx
├── t2i-assistant-redis.mdx
├── task-specific-apis-ai21.mdx
├── travel-with-aria-and-allegro.mdx
├── trulens-and-openai-gpt4-turbo-crafting-advanced-customer-service-solution.mdx
├── trulens-google-vertex-ai-tutorial-building-rag-applications.mdx
├── trulens-google-vertex-ai-tutorial-improve-the-customers-support.mdx
├── trulens-tutorial-langchain-chatbot.mdx
├── unleashing-the-power-of-gpt-4o-a-comprehensive-guide.mdx
├── upstage-tutorial.mdx
├── using_flux_in_replicate.mdx
├── vectara-advanced-app-tutorial-showcase-the-creation-of-vectara-app-in-legal-or-customer-support-use-case.mdx
├── vectara-beginner-app-tutorial-showcase-the-creation-of-vectara-app-in-legal-use-case.mdx
├── vectara-chat-essentials-harness-ai-for-next-gen-hackathon-chatbot.mdx
├── vectara-hackathon-guide.mdx
├── visionary-data-leveraging-trulens-with-mongodb-and-llamaindex.mdx
├── watsonx-ai-guide.mdx
├── whisper-api-flask-docker.mdx
├── whisper-api-gpt3-flask-docker.mdx
├── whisper-sd.mdx
├── whisper-transcribe-youtube-video.mdx
├── whisper-transcription-and-speaker-identification.mdx
├── whisper-tutorial.mdx
├── why-you-should-use-chatgpt-in-your-business.mdx
├── xai-beginner-tutorial.mdx
├── xai-custom-workflows.mdx
├── xai-dynamic-content-generation.mdx
└── yolov7.mdx
└── template.mdx

/.github/workflows/tutorials-embeddings.yml:
--------------------------------------------------------------------------------
name: AI Chatbot Update Workflow

on:
  pull_request:
    types: [closed]
    branches:
      - main

jobs:
  update-ai-chatbot:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '16.x' # Use the latest LTS version of Node.js
          always-auth: false
          check-latest: true
      - name: Install Dependencies
        run: npm install

      - name: Run Script
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
        run: node scripts/embedTutorials.js
--------------------------------------------------------------------------------
/blog/ar/README.md:
--------------------------------------------------------------------------------
# This is the folder for blog posts in the Arabic language
--------------------------------------------------------------------------------
/blog/en/gemini-ultra-hackathon-summary.mdx:
--------------------------------------------------------------------------------
---
title: "Gemini Ultra 1.0 Hackathon Summary"
description: "The Gemini Ultra 1.0 AI Hackathon brought together nearly 1,500 creators and over 220 teams, all focused on showcasing their AI innovation skills in a thrilling challenge."
image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/84f198ba-65fc-4a32-15e8-46c81722fe00/full"
authorUsername: "Mexorsu"
---

## **🌟Event Overview**

The Gemini Ultra 1.0 AI Hackathon was an incredible gathering of creative minds focused on AI innovation. The event attracted nearly 1,500 creators and over 220 teams, all eager to showcase their skills in this exciting challenge.

## **🎉Prizes and Opportunities**

The lablab NEXT acceleration program is a prestigious award aimed at propelling the most promising projects forward by providing invaluable resources, mentorship, and support. Participants gain access to a network of industry experts, potential investors, and advanced tools, all designed to help transform their innovative ideas into scalable, successful solutions. This prize was more than just recognition; it was a launchpad for turning visionary concepts into reality.

## **🏆Highlights and Winners**

Participants were tasked with developing innovative AI applications and tools across various fields. Utilizing state-of-the-art technologies, teams were encouraged to push tech boundaries to the limit.
And here are our winners:

🥇1st Place: [Food Genie](https://lablab.ai/event/gemini-ultra-hackathon/auh/food-genie), an AI-powered assistant that provides detailed nutrition information, healthy meal options, and food storage tips, promoting informed dietary choices and healthier lifestyles.

🥈2nd Place: [Games Gemini Ultra Simulator](https://lablab.ai/event/gemini-ultra-hackathon/games/games-gemini-ultra-simulator) - The project showcased Gemini's strengths in conceptualizing complex projects, creating a narrative-rich simulation environment with realistic agent interactions, and utilizing synthetic data for AI training and problem-solving.

🥉3rd Place: [ML Based Compliance Application and Chatbot](https://lablab.ai/event/gemini-ultra-hackathon/titan-warriors/ml-based-compliance-application-and-chatbot) - This application uses Gemini Pro to automate IT audits by gathering necessary artifacts, assessing asset patch levels, tracking license compliance, and answering common audit-related queries via a chatbot.

## **🦾Conclusion**

The hackathon was not just a competition but a community experience. Participants enjoyed the collaborative environment, learning opportunities, and networking with industry leaders. Many found the blend of competition and cooperation to be a highlight of their experience. The focus will remain on fostering innovation and building a strong AI community. Stay tuned for more opportunities to push the boundaries of AI technology.
--------------------------------------------------------------------------------
/blog/en/generative-ai-hackathon-winner-announcement.mdx:
--------------------------------------------------------------------------------
---
title: "Generative AI Hackathon: Winner Announcement"
description: "A summary of the Generative AI Hackathon’s winners and a presentation of their projects."
image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/e340ef40-8a7e-46a5-de81-a2c7cbc2c500/full"
authorUsername: "Olesia"
---

## The results are in!

The Generative AI Hackathon has come to an end and the results are in! With over 1,400 participants, 127 teams, and 27 amazing projects submitted, the lablab.ai team is proud to announce the winners of the Generative AI Hackathon!

### 🎉 First place:

**Phoeniks team for their project “Baith al suroor”** 🥇

This Artificial Intelligence powered solution uses the power of Cohere’s Command language model to generate descriptions of desired designs and the Stable Diffusion generative model to create relevant images and transitional videos.

Congratulations to the team! And if you want to know more about Phoeniks’s project, [visit the submission page](https://lablab.ai/event/undefined/phoeniks/Baith%20al%20suroor).

### 🎉 Second place:

**A tie between two teams: Mental Health AI and SundarAI** 🥈

Mental Health AI’s project [“I Rene”](https://lablab.ai/event/generative-ai-hackathon/Mental%20Health%20AI/i-rene) uses Cohere’s Conversant AI tool to provide CBT for users, while SundarAI’s project [“vidSummarizer”](https://lablab.ai/event/generative-ai-hackathon/vidsummarizer/vidsummarizer) helps users analyze sentiment and summarize audio and video files into simple text within seconds, making them more accessible to those with hearing disabilities!

## More AI Hackathons to come!

The lablab.ai team would like to thank all the participants for their energy and enthusiasm throughout the event. We would also like to give special thanks to our partners, [Cohere](https://cohere.ai/), for their support in making this hackathon a success 🙌

Congratulations again to the winners and all the participants!
31 | 32 | We look forward to seeing you at our next event 🚀You can see list of the upcoming AI Hackathons and informations regarding theme and technology on our [event page.](https://lablab.ai/event) 33 | -------------------------------------------------------------------------------- /blog/en/guidelines-for-creating-a-project-pitch.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "How to create an AI pitch: Guidelines for Creating a Project Pitch" 3 | description: "A great presentation is a key to impress judges and a unique opportunity to catch investors eyes. So here are best tips for you to make your pitch interesting and understendable for audience." 4 | image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/8ec5e26c-08ed-479a-ac86-95da88716900/full" 5 | authorUsername: "Olesia" 6 | --- 7 | 8 | ## 📚 Guidelines for Creating a Project Pitch 9 | 10 | A good presentation is critically important, because it will form the impression of your product for the judges and for the community (which can be your potential users). And if you decide to continue developing the project after the Hackathon, it will be a great pitch that you can use for customers, investors or mentors in the future. 11 | 12 | In this article, we're going to show you how to prepare an amazing professional presentation for your project, so that everyone will want to try it out! 13 | 14 | ## 📈 Here's a few pro tips you need to know to make people excited about your product! 15 | 16 | 1. Start from the idea and problem (users' pain) that your product can solve 17 | 2. Make a detailed explanation of how your product works, and the technologies you used for this 18 | 3. Make people excited about your project through a case study of how the user interaction with your product will look like. For this, a screen recording of how you perform a common user's interaction might be perfect. 19 | 4. 
Do not forget to mention the prospects for the product and where it can be applied - this is a very important question to ask yourself in order to understand whether it is worth continuing to develop your idea and investing your time and money in it. 20 | 5. Make sure that your presentation is no longer than 5 minutes. 21 | 6. Last but not least, try not to overload the slides with text - most people won't read it. Try to be concise, using 2-3 sentences maximum to describe each point that we listed above. And preferably allocate most of the time to the showcase of how your product actually works - this is always the most interesting part. 22 | 23 | ### Check out our video guide: 24 | 25 | 26 | 27 | So, this is how you can make people fans of your project even if they have never used it yet. And of course, your great presentation will impress the judges and help your team win the Hackathon! 28 | 29 | We wish you success, and don't forget that in case of any questions you can reach out to us on Discord. We'll be more than happy to help. 30 | -------------------------------------------------------------------------------- /blog/en/openai-whisper-announcement.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Whisper OpenAI: New AI-based technology to change everything!" 3 | description: "A sneak peek into OpenAI's Whisper, a voice recognition system that changes the rules of the game." 4 | image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/cc65e22b-0996-4f8d-9673-993c0976ed00/full" 5 | authorUsername: "ezzcodeezzlife" 6 | --- 7 | 8 | ## Whisper from OpenAI - voice recognition system that changes the rules of the game 9 | OpenAI has just announced an amazing new development in the world of AI - the release of Whisper, a neural net designed for English speech recognition. 10 | 11 | The model was trained on 680,000 hours of audio data (that's about 77 years!),
which is said to give it a high level of robustness and accuracy, approaching human-level performance. 12 | 13 | This is a huge step forward in the field of AI and has the potential to change the way many businesses operate. 14 | 15 | There are lots of applications for voice recognition technology, including customer service, virtual assistants, and hands-free controls. This technology is often used in call centers, where it can route calls or transcribe customer interactions. 16 | 17 | ## Generative AI Model with unique Whispering Approach 18 | The new Whisper system from OpenAI could significantly improve the accuracy of these apps, making them much more reliable. This development is also significant for businesses that are developing virtual assistants. 19 | 20 | These systems often use voice recognition to interact with users and perform tasks. With the release of Whisper, these systems can now be developed with the goal of achieving human-level accuracy. This could lead to more natural and efficient interactions with users. 21 | 22 | Overall, the release of Whisper from OpenAI is a huge development in the field of voice recognition. This system has the potential to help businesses that rely on this technology grow in a really positive way. 23 | 24 | Among other things, with the help of the Whisper model you can create your own AI product or integrate it with other AI tools. 25 | 26 | You can build with Whisper by taking part in the Whisper OpenAI Hackathon. With access to the latest AI tools, plus mentoring and support from the top people in AI, you can build your AI product in 48 hours (and even get support if you want to continue working on it). It's not too late to join the Hackathon, which takes place October 14-17. 27 | 28 | We can't wait to see how it will be used in the future!
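For readers who want to try Whisper themselves, here is a minimal transcription sketch using the open-source `whisper` Python package; the model size and the audio filename are placeholder assumptions for illustration:

```python
# pip install openai-whisper  (the open-source package for the Whisper model)
import whisper

# "base" is one of several model sizes; larger models trade speed for accuracy
model = whisper.load_model("base")

# "meeting.mp3" is a placeholder - substitute any audio file you want transcribed
result = model.transcribe("meeting.mp3")
print(result["text"])
```

Running this downloads the model weights on first use, so expect a short wait before the first transcription.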
29 | -------------------------------------------------------------------------------- /blog/en/state-of-the-art-ai-coheres-multilingual-text-understanding-model.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Cohere's Multilingual Text Understanding Model" 3 | description: "A showcase of Cohere’s newest model and examples of how to use it" 4 | image: "https://storage.googleapis.com/lablab-static-eu/images/blog/DALL%C2%B7E%202022-12-13%2016.07.03%20-%20fall%20of%20tower%20of%20the%20babylon%20thanks%20to%20new%20multilingual%20model%2C%20%20pieter%20bruegel's%20style.png" 5 | authorUsername: "Bamboleo" 6 | --- 7 | 8 | A new breakthrough from Cohere - the Multilingual Text Understanding Model! 9 | 10 | If you went offline for the last 24 hours you might have missed it, but don’t worry - the lablab.ai team has you covered. On 12 December 2022, Cohere released a new [multilingual model](https://txt.cohere.ai/multilingual/). Long story short - it’s the industry’s first multilingual text understanding model that supports 100+ languages. It also delivers 3 times better performance than existing open-source models. We all know what that means - a better tool to serve users all over the world. 11 | 12 | And with [Cohere’s Semantic Search AI Hackathon](https://lablab.ai/event/semantic-search-hackathon) around the corner (16-23 December), we would like to give you some examples of how you can use this cutting-edge technology to benefit your AI Hackathon project and create a working prototype for your upcoming million-dollar startup! 13 | 14 | ## So… how can I use a Multilingual Text Understanding Model in my AI Hackathon? 15 | 16 | 1. Machine Translation: Quickly and accurately translate multilingual text into any language. 17 | 18 | 2. Text Analytics: Automatically extract insights from multilingual text. 19 | 20 | 3.
Natural Language Processing: Identify parts of speech, entities, and other natural language components in multilingual text. 21 | 22 | 4. Language Identification: Automatically identify the language of multilingual text. 23 | 24 | 5. Text Generation: Generate natural-sounding and high-quality text in any language. 25 | 26 | 6. Text Summarization: Quickly summarize multilingual text. 27 | 28 | 7. Document Classification: Automatically classify multilingual text documents. 29 | 30 | With Cohere’s Multilingual Text Understanding Model, AI hackathons can become much more multilingual, allowing developers to quickly and accurately create solutions that use multilingual text. This model is an invaluable tool for anyone looking to create the next great AI product. 31 | 32 | And why not try it out during an event where one of the prizes is a chance to meet Cohere’s founder, grab a virtual coffee, and record a video of your demo that will be promoted on Cohere's channels? Sounds unreal? No, it’s an AI Hackathon from lablab.ai! 33 | -------------------------------------------------------------------------------- /blog/en/state-of-the-art-ai-lablabai-founders-as-the-riseup-lineup.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "lablab.ai founders as the RiseUp Lineup!" 3 | description: "During this year's RiseUp Summit taking place in Saudi Arabia, our founders, Paweł Czech and Mathias Asberg, will give a keynote speech on state-of-the-art AI!" 4 | image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/ef67365a-cd63-4639-10f6-59fb8efcbe00/full" 5 | authorUsername: "Zakrz" 6 | --- 7 | 8 | During this year's [**RiseUp Summit**](https://riseupsaudi.com/home) taking place in Saudi Arabia, our founders, Paweł Czech and Mathias Asberg, will give a keynote speech on state-of-the-art AI! 9 | 10 | ## What is RiseUp and why should you follow it?
11 | 12 | RiseUp is a platform that connects startups to the most relevant resources worldwide. 13 | 14 | Over a couple of days, some of the world's greatest minds will share their insights on three topics: capital, tech, and creative. So if you follow the panels, you will get a condensed view of what’s hot today and how our future will be shaped. Definitely a place to be and to grow from. 15 | 16 | ## So where and when will Paweł and Mathias give a talk? 17 | 18 | Both of our founders will give a talk on the tech stage. Mathias's talk, “How artificial intelligence is transforming 19 | the way we build & operate software”, will take place on the 21st of November at 3.30 PM, and Paweł's talk, “Generative AI and 20 | its impact on venture capital”, will take place the same day, 21st of November, starting at 7 PM. 21 | 22 | ## And can I join the AI industry? 23 | 24 | Of course! If you want to become a part of the most booming industry and change your career, join lablab.ai's community - a community of builders, creators, and innovators - and shape your future with AI. 25 | Enroll in our Hackathons and, with the assistance of our mentors, create a functional prototype in 48 hours and start your own startup! 26 | 27 | Check upcoming Hackathons [**here.**](https://lablab.ai/event/stable-diffusion-hackathon) 28 | -------------------------------------------------------------------------------- /blog/readme.md: -------------------------------------------------------------------------------- 1 | # Blog 2 | 3 | Repo to handle submitting & updating blog posts on lablab.ai 4 | 5 | ## How to publish a new blog post on lablab 6 | 7 | In this guide you will learn how to publish blog posts on lablab. 8 | 9 | ## General information 10 | 11 | - [Check out our example blog posts page](https://github.com/lablab-ai/community-content/blob/main/blog/en/ai-in-business-how-use-ai-to-stay-ahead-of-the-competition.mdx) 12 | - Please **don’t** copy the content from other websites!
13 | - Please **don’t** use AI content generators to create the content for this page! 14 | - Make sure the blog post has a **clear structure**. Use a minimum of three H2 headings, including one for the introduction, another for the topic input, and the last one to summarize all previously discussed points. Additionally, use H3 subheadings for every significant point you cover. 15 | 16 | ## If you want to publish a new blog post on lablab.ai, follow these steps: 17 | 18 | 1. Create an mdx file for the blog post with the post title in slug format (you can slugify [here](https://slugify.online/)) as the filename in this GitHub repository: [https://github.com/lablab-ai/community-content/edit/main/blog/](https://github.com/lablab-ai/community-content/edit/main/blog/) 19 | 20 | 2. Please keep in mind we might change the filename and title of the blog post to make it more SEO friendly. 21 | 22 | 3. For each blog post page, include the following information: 23 | 24 | - **title**: Title of the post 25 | - **description**: Description of the post 26 | - **authorUsername**: Your username on lablab.ai 27 | 28 | 4. To add an image, use the `{img_alt},` component. 29 | 30 | 5. After you create a PR, we will check the blog post content and merge it if everything is fine. 31 | 32 | Finally, visit our GitHub repo and add AI blog posts there; get inspiration from the existing pages when creating your own: [https://github.com/lablab-ai/community-content/edit/main/blog/](https://github.com/lablab-ai/community-content/edit/main/blog/). 33 | 34 | ## How to add a page? 35 | 36 | 1. Write it! 37 | 2. Create two pull requests: 38 | - to the `community-content` branch - so that our internal system can check whether your files contain plagiarism/AI-generated content (required) 39 | - to the `main` branch 40 | 41 | ## Adding as sponsored content/Adding sponsored banner 42 | 43 | 1.
To add a sponsor as an author and link to the sponsor's website, add the following to the top of the file under the mandatory fields: 44 | - **sponsor**: Sponsored by ... 45 | - **sponsorUrl**: https://... 46 | 47 | Keep in mind that you still have to add a valid authorUsername! 48 | 49 | 2. To place a sponsor banner inside the post --> add `` somewhere in the post 50 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "embeddings-tutorials", 3 | "version": "0.14.301", 4 | "description": "A project to update AI chatbot with tutorials using embeddings.", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "dependencies": { 10 | "openai": "^4.28.0", 11 | "@pinecone-database/pinecone": "^2.0.1", 12 | "node-fetch": "^3.3.2" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | [![Open Bounties](https://img.shields.io/endpoint?url=https%3A%2F%2Fconsole.algora.io%2Fapi%2Fshields%2Flablab-ai%2Fbounties%3Fstatus%3Dopen)](https://console.algora.io/org/lablab-ai/bounties?status=open) 2 | [![Rewarded Bounties](https://img.shields.io/endpoint?url=https%3A%2F%2Fconsole.algora.io%2Fapi%2Fshields%2Flablab-ai%2Fbounties%3Fstatus%3Dcompleted)](https://console.algora.io/org/lablab-ai/bounties?status=completed) 3 | 4 | # Community powered content on lablab.ai 5 | 6 | Welcome to the lablab.ai Community Content Repository! This repository serves as a platform for our community members to publish blogs, tutorials, and technology pages related to artificial intelligence. Our goal is to create a collaborative and engaging environment where individuals can contribute to the development of content on our website.
7 | 8 | ## How to Contribute 9 | 10 | Here you can learn how to publish: 11 | 12 | - [Tutorials](https://github.com/lablab-ai/community-content/blob/main/tutorials/README.md) 13 | - [Technology pages](https://github.com/lablab-ai/community-content/blob/main/technologies/README.md) 14 | - [Blog posts](https://github.com/lablab-ai/community-content/blob/main/blog/readme.md) 15 | 16 | ## Top contributors 17 | 18 | 19 | 20 | 21 | Leaderboard of lablab-ai 22 | 23 | 24 | -------------------------------------------------------------------------------- /script_output.txt: -------------------------------------------------------------------------------- 1 | Processing file: blog/en/day-5-12-openai-chatgpt-goes-native-on-2-billion-apple-devices.mdx 2 | Downloading image from URL: https://iili.io/2WrRKrX.md.jpg 3 | Successfully downloaded image to: images/2WrRKrX.md.jpg 4 | Uploading image: images/2WrRKrX.md.jpg 5 | Successfully uploaded image. Variant URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/0c441e48-c819-4f9e-c8af-55772a512700/full 6 | Replaced image with new URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/0c441e48-c819-4f9e-c8af-55772a512700/full 7 | Deleted local image: images/2WrRKrX.md.jpg 8 | Downloading image from URL: https://iili.io/2WrRB7s.md.jpg 9 | Successfully downloaded image to: images/2WrRB7s.md.jpg 10 | Uploading image: images/2WrRB7s.md.jpg 11 | Successfully uploaded image. Variant URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/e5089f92-a916-4b4a-94b4-504f3671ee00/full 12 | Replaced image with new URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/e5089f92-a916-4b4a-94b4-504f3671ee00/full 13 | Deleted local image: images/2WrRB7s.md.jpg 14 | Downloading image from URL: https://iili.io/2WrRq2n.md.jpg 15 | Successfully downloaded image to: images/2WrRq2n.md.jpg 16 | Uploading image: images/2WrRq2n.md.jpg 17 | Successfully uploaded image.
Variant URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/28100a0f-73ee-4fa0-6513-3a968d9a0500/full 18 | Replaced image with new URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/28100a0f-73ee-4fa0-6513-3a968d9a0500/full 19 | Deleted local image: images/2WrRq2n.md.jpg 20 | Downloading image from URL: https://iili.io/2WrRCkG.md.jpg 21 | Successfully downloaded image to: images/2WrRCkG.md.jpg 22 | Uploading image: images/2WrRCkG.md.jpg 23 | Successfully uploaded image. Variant URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/a621c10a-3bf4-4c37-c6ce-3e17aae73200/full 24 | Replaced image with new URL: https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/a621c10a-3bf4-4c37-c6ce-3e17aae73200/full 25 | Deleted local image: images/2WrRCkG.md.jpg 26 | Successfully processed file: blog/en/day-5-12-openai-chatgpt-goes-native-on-2-billion-apple-devices.mdx 27 | CHANGES_MADE 28 | -------------------------------------------------------------------------------- /technologies/agentops/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "AgentOps" 3 | description: "AgentOps is a platform for monitoring, debugging, and optimizing AI agents in development and production. It offers session replays, metrics dashboards, and custom reporting to track performance, costs, and interactions." 4 | --- 5 | 6 | # AgentOps 7 | 8 | AgentOps is a comprehensive platform designed for monitoring, debugging, and optimizing AI agents in both development and production environments. It provides advanced tools such as session replays, metrics dashboards, and custom reporting, enabling developers to track the performance, cost, and interactions of their AI agents in real-time. 9 | 10 | Some of the out-of-the-box integrations include: 11 | 12 | - CrewAI, 13 | - Autogen, 14 | - Langchain, 15 | - Cohere, 16 | - LiteLLM, 17 | - MultiOn. 
18 | 19 | This wide compatibility ensures seamless integration with a diverse range of AI systems and development environments. 20 | 21 | | General | | 22 | | --- | --- | 23 | | Author | AgentOps, Inc. | 24 | | Release Date | 2023 | 25 | | Website | https://www.agentops.ai/ | 26 | | Documentation | https://docs.agentops.ai/v1/introduction | 27 | | Technology Type | Monitoring Tool | 28 | 29 | ## Key Features 30 | 31 | - **LLM Cost Management:** Track and manage the costs associated with large language models (LLMs). 32 | 33 | - **Session Replays:** Replay agent sessions to analyze interactions and identify issues. 34 | 35 | - **Custom Reporting:** Generate tailored reports to meet specific analytical needs. 36 | 37 | - **Recursive Thought Detection:** Monitor recursive thinking patterns in agents to ensure optimal performance. 38 | 39 | - **Time Travel Debugging:** Debug and audit agent behaviors at any point in their operational timeline. 40 | 41 | - **Compliance and Security:** Built-in features to ensure that agents operate within security and compliance standards. 42 | 43 | 44 | ## Start Building with AgentOps 45 | 46 | AgentOps offers developers powerful tools to enhance the monitoring and management of AI agents. With easy integration through SDKs, it provides real-time insights into the performance and behavior of agents. Developers are encouraged to explore community-built use cases and applications to unlock the full potential of AgentOps. 
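As a rough sketch of what instrumenting an agent run with the Python SDK can look like (the placeholder API key and the exact helper names should be checked against the AgentOps quickstart):

```python
import agentops

# Placeholder key for illustration - real keys come from the AgentOps dashboard
agentops.init(api_key="<AGENTOPS_API_KEY>")

# ... run your agent or LLM calls here; calls made through supported
# integrations (e.g. LiteLLM, LangChain) are recorded into the session ...

# Mark how the session ended so it shows up correctly in the dashboard
agentops.end_session("Success")
```

The recorded session then appears in the dashboard with its replay, token costs, and metrics.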
47 | 48 | 👉 [Start building with AgentOps](https://docs.agentops.ai/v1/quickstart) 49 | 50 | 👉 [Examples](https://docs.agentops.ai/v1/examples/examples) 51 | 52 | 53 | 54 | 55 | -------------------------------------------------------------------------------- /technologies/ai71/falcon-2-11b-vlm.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Falcon 2 11B VLM" 3 | author: "ai71" 4 | description: "Falcon 2 11B VLM is an advanced variant of the Falcon 2 series, developed by the Technology Innovation Institute (TII). This model is designed to excel in multimodal tasks, bridging the gap between visual and linguistic data processing." 5 | --- 6 | 7 | # About Falcon 2 11B VLM 8 | Falcon2-11B-VLM is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora. To bring vision capabilities, the pretrained CLIP ViT-L/14 vision encoder is integrated with the chat-finetuned Falcon2-11B model and trained on image-text data. To enhance the VLM's perception of fine-grained details with respect to small objects in images, a dynamic high-resolution encoding mechanism is employed for image inputs. 9 | The model is built on the same robust foundation as Falcon 2 11B, featuring 11 billion parameters. It matches or exceeds the performance of other leading models, such as Meta’s Llama 3 8B and Google’s Gemma 7B, particularly in tasks that require vision-language integration.
10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | May 13, 2024 | 14 | | Author | [AI71](https://ai71.ai/) | 15 | | Model card | https://huggingface.co/tiiuae/falcon-11B | 16 | | Type | Vision Language Model | 17 | 18 | 19 | ## Falcon 2 Tutorials 20 | 21 | -------------------------------------------------------------------------------- /technologies/ai71/falcon-2-11b.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Falcon 2 11B" 3 | author: "ai71" 4 | description: "The Falcon 2 11B is an advanced AI language model developed by the Technology Innovation Institute (TII) in Abu Dhabi. Building on the success of its predecessor, Falcon 1, this model is part of the Falcon series, known for its impressive capabilities in natural language processing (NLP) and AI research." 5 | --- 6 | 7 | # About Falcon 2 11B 8 | Falcon 2 11B is an 11-billion-parameter causal decoder-only language model built by the Technology Innovation Institute (TII) and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora. It builds on the success of the original Falcon models and matches or exceeds the performance of other leading open models in its size class, such as Meta’s Llama 3 8B and Google’s Gemma 7B. The model is multilingual and also serves as the foundation for the Falcon 2 11B VLM vision-language variant.
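As an illustrative sketch of loading the model locally, assuming the standard Hugging Face `transformers` generation workflow, the publicly listed `tiiuae/falcon-11B` checkpoint, and enough GPU memory for an 11B-parameter model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision to fit the weights in memory
    device_map="auto",           # spread layers across available devices
)

inputs = tokenizer("The Falcon series of language models", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Generation parameters such as sampling temperature can be added to `generate` to tune the output style.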
15 | 16 | | General | | 17 | | --- | --- | 18 | | Release date | May 13, 2024 | 19 | | Author | [TIIUAE](https://falconllm.tii.ae/) | 20 | | Model card | https://huggingface.co/tiiuae/falcon-11B | 21 | | Type | Large Language Model | 22 | 23 | 24 | ## Falcon 2 Tutorials 25 | 26 | -------------------------------------------------------------------------------- /technologies/anthropic/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Anthropic" 3 | description: "Anthropic’s research on the Constitutional AI training approach focuses on developing AI systems that are safe by design 4 | and aligned with human values." 5 | --- 6 | 7 | # Anthropic 8 | 9 | Anthropic’s research on the Constitutional AI training approach focuses on developing AI systems that are safe by design 10 | and aligned with human values. By prioritizing safety, Anthropic aims to create capable and corrigible AI systems that 11 | are safe for humans to use. 12 | 13 | | General | | 14 | | ------- | ----------------------------------------------- | 15 | | Company | [Anthropic](https://www.anthropic.com/) | 16 | | Founded | 2021 | 17 | | Discord | https://discord.gg/lablab-ai-877056448956346408 | 18 | 19 | ## Anthropic Claude 20 | 21 | Claude is your friendly and versatile AI language model that can assist you as a company representative, research assistant, creative partner, or task automator. 22 | 23 | Claude is safe, clever, and yours. Built with safety at its core and with industry-leading security practices, Claude can be customized to handle complex multi-step instructions and help you achieve your tasks. 24 | 25 | You can easily use Claude in your app; all the necessary APIs, boilerplates, and tutorials explaining how to do so can be found on our [Claude tech page](/tech/anthropic/claude).
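As a brief sketch of calling Claude from Python with Anthropic's official SDK (the model identifier below is only an example; check Anthropic's documentation for current model names):

```python
from anthropic import Anthropic

client = Anthropic()  # reads the ANTHROPIC_API_KEY environment variable

message = client.messages.create(
    model="claude-3-haiku-20240307",  # example model id; pick one from the docs
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize our refund policy in two sentences."}
    ],
)
print(message.content[0].text)
```

A `system` parameter can also be passed to `messages.create` to set the assistant's role and tone for the whole conversation.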
26 | 27 | --- 28 | -------------------------------------------------------------------------------- /technologies/autogen/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "AutoGen" 3 | description: "AutoGen is an open-source framework for creating multi-agent systems with LLMs, enabling AI agents to interact with humans and tools. It's modular and ideal for automating complex workflows efficiently." 4 | --- 5 | 6 | # AutoGen 7 | AutoGen is an advanced open-source framework developed by Chi Wang designed to simplify the creation of multi-agent systems powered by large language models (LLMs). The platform allows developers to build conversational AI agents that can interact with each other, humans, and various tools in a coordinated manner. AutoGen is highly modular and supports a wide range of applications, making it an essential tool for developers looking to implement complex, automated workflows with minimal manual intervention. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | Chi Wang | 12 | | Release Date | September 2023 | 13 | | Website | https://microsoft.github.io/autogen/ | 14 | | Repository | https://github.com/microsoft/autogen | 15 | | Documentation | https://microsoft.github.io/autogen/docs/Getting-Started | 16 | | Discord | https://discord.com/invite/pAbnFJrkgZ | 17 | | Technology Type | AI/ML Framework | 18 | 19 | ## Key Features 20 | 21 | - **Multi-Agent Framework:** Facilitates the design of agents with specialized roles, enabling them to communicate and collaborate efficiently. 22 | 23 | - **Enhanced LLM Inference:** Provides advanced APIs for improving LLM performance, reducing inference costs. 24 | 25 | - **Customizable Workflows:** Supports complex, dynamic workflows by allowing agents to interact through conversational patterns, enabling seamless automation. 
26 | 27 | - **Tool Integration:** Agents can be configured to use external tools, adding flexibility and enhancing their problem-solving capabilities. 28 | 29 | - **Human-in-the-Loop:** Integrates human feedback into the workflow, allowing for oversight and intervention when necessary. 30 | 31 | 32 | ## Start Building with AutoGen 33 | 34 | AutoGen simplifies the development of complex AI applications by providing a robust framework for creating multi-agent systems. With its modular design, developers can quickly build and customize AI workflows that combine LLMs, human intelligence, and various tools to tackle intricate tasks. Whether you are looking to automate customer support, enhance software development processes, or optimize supply chains, AutoGen offers the flexibility and power needed to create sophisticated AI-driven solutions. Explore the community-built use cases and applications to see the full potential of what AutoGen can do. 35 | 36 | 👉 [Start building with AutoGen](https://microsoft.github.io/autogen/docs/Getting-Started) 37 | 38 | 👉 [Examples](https://microsoft.github.io/autogen/docs/Examples) 39 | 40 | -------------------------------------------------------------------------------- /technologies/aws-sagemaker/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "AWS SageMaker" 3 | description: "AWS SageMaker is a fully managed machine learning service that enables developers to quickly and easily build, train, and deploy machine learning models at scale. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models." 4 | --- 5 | 6 | # AWS SageMaker 7 | Amazon SageMaker is a fully-managed machine learning service that makes it easy for data scientists 8 | and developers to quickly and easily build, train, and deploy models into a production-ready hosted 9 | environment. 
It provides an integrated Jupyter notebook instance for convenient access to data sources 10 | for exploration and analysis, eliminating the need to manage servers. Additionally, it offers optimized 11 | implementations of common machine learning algorithms that are designed to run efficiently on large data sets in 12 | distributed environments. SageMaker also allows users to bring their own algorithms and frameworks and 13 | offers flexible distributed training that can be adapted to individual workflows. Models can be quickly 14 | deployed into a secure and scalable environment from SageMaker Studio or the SageMaker console. 15 | 16 | | General | | 17 | | --- | --- | 18 | | Release date | November 29, 2017 | 19 | | Author | [AWS](https://aws.amazon.com) | 20 | | Type | Machine Learning Service | 21 | 22 | --- 23 | 24 | ### AWS Amazon SageMaker Libraries 25 | Discover the AWS Amazon SageMaker libraries and SDKs. 26 | 27 | * [AWS Amazon SageMaker](https://aws.amazon.com/sagemaker) Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows 28 | * [SageMaker Tutorials](https://aws.amazon.com/sagemaker/getting-started/) Machine Learning Tutorials with Amazon SageMaker 29 | -------------------------------------------------------------------------------- /technologies/babyagi/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "BabyAGI API, Libraries & Extensions" 3 | description: "BabyAGI is a Python project that demonstrates an AI-powered task management system that uses OpenAI and Pinecone APIs to create, prioritize and execute tasks." 4 | --- 5 | 6 | # BabyAGI API, Libraries & Plugins 7 | BabyAGI is a Python project that demonstrates an AI-powered task management system that uses the OpenAI and Pinecone APIs to create, prioritize, and execute tasks.
The system creates tasks based on the result of previous tasks and a predefined objective, and uses the LLM's capabilities to create new tasks aligned with that objective. It runs in an infinite loop that pulls tasks from a task list, sends them to the execution agent, enriches the results using Pinecone, and creates new tasks based on the objective and the result of the previous task. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Release date | April 2, 2023 | 12 | | Repository | https://github.com/yoheinakajima/babyagi/tree/main | 13 | | Type | Autonomous Agent | 14 | 15 | ## Start building with BabyAGI 16 | We have collected the best BabyAGI libraries and resources to help you get started building with BabyAGI today. To see what others are building with BabyAGI, check out the community-built [BabyAGI Use Cases and Applications](/apps/tech/babyagi). 17 | 18 | ## BabyAGI Extensions 19 | BabyAGI allows extended capabilities through extensions. Learn more by browsing the [official documentation](https://docs.agpt.co/plugins/) 20 | 21 | ### BabyAGI Tutorials 22 | 23 | 24 | --- 25 | 26 | ### BabyAGI Resources 27 | Kickstart your development with BabyAGI and include it in your next project. 28 | 29 | * [LangChain implementation of BabyAGI](https://python.langchain.com/en/latest/use_cases/autonomous_agents/baby_agi.html) 30 | 31 | --- 32 | -------------------------------------------------------------------------------- /technologies/bert/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "BERT" 3 | description: "BERT stands for Bidirectional Encoder Representations from Transformers. You can fine-tune it and get state-of-the-art results in a wide variety of natural language processing tasks" 4 | --- 5 | 6 | # BERT 7 | The BERT paper by Jacob Devlin was released not long after the publication of the first GPT model. 8 | It achieved significant improvements on many important NLP benchmarks, such as GLUE.
Since then, 9 | its ideas have influenced many state-of-the-art models in language understanding. Bidirectional 10 | Encoder Representations from Transformers (BERT) is a natural language processing (NLP) technique 11 | that was proposed in 2018. (NLP is the field of artificial intelligence aiming for computers to read, 12 | analyze, interpret and derive meaning from text and spoken words. This practice combines linguistics, 13 | statistics, and machine learning to assist computers in ‘understanding’ human language.) BERT is based 14 | on the idea of pretraining a transformer model on a large corpus of text and then fine-tuning it for 15 | specific NLP tasks. The transformer model is a deep learning model that is designed to handle sequential 16 | data, such as text. The bidirectional transformer architecture stacks encoders from the original transformer 17 | on top of each other. This allows the model to better capture the context of the text. 18 | 19 | | General | | 20 | | --- | --- | 21 | | Release date | 2018 | 22 | | Author | Google | 23 | | Repository | https://github.com/google-research/bert | 24 | | Type | Masked language model | 25 | 26 | 27 | ### Libraries 28 | 29 | * [BERT Model](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4) Get the basic BERT pre-trained model from TensorFlow Hub and fine-tune it to your needs 30 | * [Text Classification with BERT](https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f) How to leverage a pre-trained BERT model from Hugging Face to classify the text of news articles 31 | * [Question Answering with a fine-tuned BERT](https://towardsdatascience.com/question-answering-with-a-fine-tuned-bert-bc4dafd45626) using Hugging Face Transformers and PyTorch on the CoQA dataset from Stanford 32 | 33 | --- 34 | -------------------------------------------------------------------------------- /technologies/camel/index.mdx: --------------------------------------------------------------------------------
1 | --- 2 | title: "CAMEL" 3 | description: "CAMEL stands for Communicative Agents for 'Mind' Exploration of Large Scale Language Model Society. The purpose of this framework is to enhance the collaboration among AI chat agents to achieve tasks with minimal human involvement." 4 | --- 5 | 6 | # CAMEL 7 | CAMEL stands for Communicative Agents for "Mind" Exploration of Large Scale Language Model Society. The purpose of this framework is to enhance the collaboration among AI chat agents to achieve tasks with minimal human involvement. The project aims to come up with effective ways of understanding the thinking patterns of these agents and finding solutions to the challenges they encounter while working together in a scalable manner. CAMEL provides a role-playing approach and inception prompting to guide chat agents in completing tasks that align with human intentions. Research into communicative agents can greatly benefit from this approach as it enables us to thoroughly examine and assess agent behavior and abilities. By addressing cooperation challenges, CAMEL can enhance the performance of conversational AI systems. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Release date | March 31, 2023 | 12 | | Repository | https://github.com/lightaime/camel | 13 | | Type | Autonomous Agent Simulation | 14 | 15 | ## Start building with CAMEL 16 | We have collected the best CAMEL libraries and resources to help you get started building with CAMEL today. To see what others are building with CAMEL, check out the community built [CAMEL Use Cases and Applications](/apps/tech/camel). 17 | 18 | ### CAMEL Tutorials 19 | 20 | 21 | --- 22 | 23 | ### CAMEL resources 24 | Kickstart your development with CAMEL and include it in your next project.
25 | 26 | * [Project website](https://www.camel-ai.org/) 27 | * [LangChain implementation of CAMEL](https://python.langchain.com/en/latest/use_cases/agent_simulations/camel_role_playing.html) 28 | * [Arxiv paper](https://arxiv.org/abs/2303.17760) 29 | 30 | --- 31 | -------------------------------------------------------------------------------- /technologies/chroma/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Chroma" 3 | description: "Chroma is an AI-native open-source embedding database. The fastest way to build Python or JavaScript LLM apps with memory." 4 | --- 5 | 6 | # Chroma 7 | 8 | Chroma is building the database that learns. It is an open-source AI-native embedding database. 9 | Chroma makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. 10 | It is the fastest way to build Python or JavaScript LLM apps with memory. 11 | 12 | | General | | 13 | | ----------- | -------------------------- | 14 | | Release date | 2023 | 15 | | Author | [Chroma](https://www.trychroma.com/) | 16 | | Type | Embedding database | 17 | 18 | --- 19 | 20 | ## Tutorials 21 | 22 | Great tutorials on how to build with Chroma 23 | 24 | 25 | 26 | ### Chroma - Helpful Resources 27 | 28 | Check these out to become a Chroma master! 29 | 30 | - [Chroma](https://www.trychroma.com/) Chroma website 31 | - [Chroma Docs](https://docs.trychroma.com/) Documentation of Chroma 32 | - [Chroma Repo](https://github.com/chroma-core/chroma) Open-source Chroma repository 33 | 34 | ### Chroma - Clients 35 | 36 | Connect Chroma to your project! 37 | 38 | - [Chroma Python](https://docs.trychroma.com/getting-started?lang=py) Python client for Chroma 39 | - [Chroma JavaScript](https://docs.trychroma.com/getting-started?lang=js) High-performance Chroma client for JavaScript 40 | 41 | 42 | ### Chroma - Integrations 43 | 44 | Get started with Chroma!
45 | 46 | - [Chroma LangChain integration with JavaScript](https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/chroma) 47 | - [LangChain and Chroma integration](https://blog.langchain.dev/langchain-chroma/) 48 | - [Chroma and LangChain demo](https://github.com/hwchase17/chroma-langchain) 49 | - [Llama Using Chroma Vector Stores](https://gpt-index.readthedocs.io/en/latest/how_to/integrations/vector_stores.html) 50 | - [Llama Chroma Loader](https://llamahub.ai/l/chroma) 51 | 52 | --- 53 | -------------------------------------------------------------------------------- /technologies/codium/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Codium" 3 | description: "Revolutionizing Pull Request Reviews with AI-Powered Tools" 4 | --- 5 | 6 | # Codium AI: Revolutionizing Pull Request Reviews 7 | 8 | Codium AI is at the forefront of transforming the way developers review pull requests with its suite of AI-powered tools. By leveraging advanced AI algorithms, Codium AI aims to streamline the pull request review process, providing developers with automated analysis, feedback, suggestions, and more. This technology empowers developers to code smarter, create more value, and boost confidence when pushing their code changes. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [Codium AI](https://www.codium.ai) | 13 | | Repository | https://github.com/Codium-ai/pr-agent | 14 | | Type | AI-Powered Code Review and Testing Tool | 15 | 16 | 17 | ## Start building with Codium AI 18 | Visit the [GitHub repository](https://github.com/Codium-ai/pr-agent) to explore and access all the Codium AI resources. Here, you can find code samples, documentation, and more to help you integrate Codium AI into your development workflow. 19 | 20 | ### Codium Tutorials 21 | 22 | --- 23 | 24 | ### Codium AI Resources 25 | Explore a wealth of Codium AI resources to enhance your pull request review process.
These resources provide a great head start when revolutionizing your code review workflow. 26 | 27 | * [Codium AI GitHub](https://github.com/Codium-ai/pr-agent) 28 | * [Codium Platform](https://www.codium.ai) 29 | 30 | --- 31 | -------------------------------------------------------------------------------- /technologies/cohere/classify.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Classify" 3 | description: "Access large language models that can understand text and take appropriate action — like highlighting a post that violates your community guidelines, or triggering accurate chatbot responses. Just set your parameters, and Classify will do the rest." 4 | --- 5 | 6 | # Cohere Classify 7 | Cohere Classify is a large language model endpoint that classifies text content. 8 | Classify organizes information for more effective content moderation, analysis, and chatbot experiences. 9 | 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | November 15, 2021 | 14 | | Author | [Cohere](https://cohere.ai/) | 15 | | Documentation | https://docs.cohere.ai/reference/classify | 16 | | Type | Autoregressive, Transformer, Language model | 17 | | Discord | https://discord.gg/lablab-ai-877056448956346408 | 18 | 19 | ## Start building with Cohere Classify 20 | To see what others are building with Cohere Classify, check out the community built [Cohere Use Cases and Applications](/apps/tech/cohere). 21 | 22 | 23 | ### Cohere Classify Tutorials 24 | 25 | --- 26 | 27 | 28 | ### Cohere Classify Boilerplates 29 | Kickstart your development with a Cohere Classify based boilerplate. Boilerplates are a great way to get a head start when building your next project with Classify. 30 | 31 | * [Next(js) with Frontend](https://github.com/lablab-ai/nextjs-cohere-boilerplate) 32 | 33 | 34 | --- 35 | 36 | ### Cohere Classify Libraries 37 | A curated list of libraries and technologies to help you build great projects with Cohere Classify.
38 | 39 | * [Cohere Node SDK](https://github.com/cohere-ai/cohere-node) simplifies interacting with the API in Node.js environments 40 | * [Cohere Classify Go SDK](https://github.com/cohere-ai/cohere-go) simplifies interacting with the API in Go environments 41 | * [Cohere Classify Python SDK](https://github.com/cohere-ai/cohere-python) simplifies interacting with the API in Python environments 42 | 43 | 44 | ### Awesome Cohere Classify resources 45 | Complementary resources that will help you build even better applications 46 | * [Cohere Playground](https://dashboard.cohere.ai/playground/generate) Interact with the Cohere API through their playground 47 | * [Langchain](https://lablab.ai/tech/langchain) Toolset for building applications powered by LLMs 48 | 49 | --- 50 | -------------------------------------------------------------------------------- /technologies/cohere/coral.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Cohere Coral" 3 | author: "Cohere" 4 | description: "A knowledge assistant for enterprises, supercharging productivity." 5 | --- 6 | 7 | # Cohere Coral: Supercharge Enterprise Productivity 8 | 9 | Cohere Coral is a powerful knowledge assistant designed to empower enterprises by enhancing the productivity of their most critical teams. With its cutting-edge technology, Coral provides verifiable insights and answers from your documents, backed with citations, ensuring confidence in the information you receive. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Author | [Cohere](https://lablab.ai/tech/cohere) | 14 | | Platform | https://cohere.com/coral | 15 | | Type | Knowledge Assistant | 16 | 17 | 18 | ## Start Building with Cohere Coral 19 | Cohere Coral is a versatile knowledge assistant with a wide range of capabilities to assist your teams. Whether you need step-by-step instructions, data summaries, or explanations, Coral can be customized to fit your unique job functions.
It also seamlessly integrates with various data sources, enhancing its knowledge base. 20 | 21 | ### Cohere Coral Tutorials 22 | 23 | --- 24 | 25 | ### Cohere Coral Resources 26 | 27 | Here are some valuable resources to help you build powerful projects and make the most of Cohere Coral: 28 | 29 | * [Cohere Coral Page](https://cohere.com/coral) 30 | 31 | --- 32 | -------------------------------------------------------------------------------- /technologies/composio/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Composio" 3 | description: "Composio is a versatile platform that enhances AI agents with 100+ tools for tasks like GitHub management, API integration, and automation. It supports major AI frameworks and offers managed authorization and extensibility." 4 | --- 5 | 6 | # Composio 7 | 8 | Composio is a robust platform designed to enhance AI agents by equipping them with a wide array of tools to perform complex tasks seamlessly. Whether it's managing GitHub repositories, integrating with APIs, or automating workflows, Composio provides a versatile, scalable solution for developers. The platform offers support for over 100 tools and is compatible with major AI frameworks like OpenAI, Langchain, CrewAI, and many others. With features like managed authorization, high accuracy, and extensibility, Composio is tailored for developers looking to integrate sophisticated AI-powered functionalities into their applications.
9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | ComposioHQ | 13 | | Release Date | July 17, 2024 | 14 | | Website | https://composio.dev/ | 15 | | Repository | https://github.com/ComposioHQ/composio | 16 | | Documentation | https://docs.composio.dev/introduction/intro/overview | 17 | | Technology Type | AI Agent Integration Platform | 18 | 19 | 20 | ## Key Features 21 | 22 | - **Extensive Toolset:** Over 100 tools across various categories, including software, OS, browser, and search utilities. 23 | 24 | - **Framework Compatibility:** Works seamlessly with frameworks like Langchain, OpenAI, and more. 25 | 26 | - **Managed Authorization:** Simplifies authentication across multiple protocols (OAuth, API keys, etc.). 27 | 28 | - **Accuracy:** Enhanced tool accuracy, offering up to 40% better performance in AI agent tasks. 29 | 30 | - **Extensibility:** Easily add new tools, frameworks, and authorization protocols. 31 | 32 | 33 | ## Start Building with Composio 34 | 35 | Composio is a cutting-edge platform that empowers developers to integrate powerful AI agents into their applications with minimal effort. Whether you’re working with OpenAI or another major framework, Composio simplifies the process of creating intelligent workflows and automations. Dive into the documentation and explore the community-driven use cases to see how others are leveraging Composio to build innovative solutions. 
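To make the idea of a tool-integration layer concrete, here is a minimal, stand-alone sketch of the pattern such platforms implement: tools are registered by name with a human-readable description (which an agent framework would hand to the LLM as its "available tools"), and tool calls requested by the model are dispatched by name. This is NOT Composio's actual API; all names and the stub handler are illustrative assumptions.

```python
# Conceptual sketch of a tool-registry layer (not Composio's real API).
# An agent platform maps named tools to callables, publishes their
# descriptions to the LLM, and dispatches the model's tool calls by name.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., Any]


@dataclass
class ToolRegistry:
    tools: Dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def schema(self) -> List[dict]:
        # What the framework would expose to the LLM as available tools.
        return [{"name": t.name, "description": t.description}
                for t in self.tools.values()]

    def execute(self, name: str, **kwargs: Any) -> Any:
        # Dispatch a tool call the LLM requested by name.
        return self.tools[name].handler(**kwargs)


registry = ToolRegistry()
# Stub handler standing in for a real integration (e.g. a GitHub API call).
registry.register(Tool("github_star_count",
                       "Return a (stubbed) star count for a repository.",
                       lambda repo: {"repo": repo, "stars": 42}))

result = registry.execute("github_star_count", repo="ComposioHQ/composio")
```

A real platform layers managed authorization (OAuth tokens, API keys) and argument validation on top of this dispatch step, which is exactly the bookkeeping Composio advertises taking off the developer's hands.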
36 | 37 | 👉 [Start building with Composio](https://docs.composio.dev/introduction/intro/quickstart) 38 | 39 | 👉 [Composio Guides](https://docs.composio.dev/patterns/functions/multiple-users) 40 | 41 | 👉 [Examples](https://docs.composio.dev/examples/examples/Example) 42 | 43 | 44 | 45 | 46 | -------------------------------------------------------------------------------- /technologies/deepseek/deepseek-r1.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "DeepSeek R1" 3 | author: "deepseek" 4 | description: "DeepSeek R1 is a reasoning-focused language model trained with large-scale reinforcement learning, delivering strong performance on math, code, and multi-step reasoning tasks." 5 | --- 6 | 7 | # DeepSeek R1 8 | 9 | | General | | 10 | | -------------- | ---------------------------------------------------------------- | 11 | | Release date | 2025 | 12 | | Author | [DeepSeek](https://www.deepseek.com) | 13 | | Website | [DeepSeek Models](https://www.deepseek.com) | 14 | | Repository | https://github.com/deepseek-ai | 15 | | Type | Reasoning Language Model | 16 | 17 | DeepSeek R1 is a reasoning-focused language model trained with large-scale reinforcement learning. It exposes its chain of thought before producing a final answer and delivers strong performance on math, code, and multi-step reasoning tasks.
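DeepSeek exposes the model through an OpenAI-compatible chat completions API, so a request is an ordinary JSON chat payload. The sketch below builds such a payload; the endpoint URL and the model name `deepseek-reasoner` follow DeepSeek's API docs at the time of writing, so verify them against the current documentation before use.

```python
# Sketch of preparing a request for DeepSeek's OpenAI-compatible
# chat completions endpoint. Endpoint URL and model name are taken
# from DeepSeek's API docs and may change; verify before use.
import json

API_URL = "https://api.deepseek.com/chat/completions"


def build_chat_request(prompt: str, model: str = "deepseek-reasoner") -> dict:
    # Standard OpenAI-style chat payload; the reasoning model returns its
    # chain of thought separately from the final answer.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


payload = build_chat_request("Prove that the square root of 2 is irrational.")
body = json.dumps(payload)
# To send: POST `body` to API_URL with an "Authorization: Bearer <key>"
# header, e.g. with requests.post(API_URL, headers=headers, data=body).
```

Because the format is OpenAI-compatible, existing OpenAI client libraries can usually be pointed at DeepSeek's base URL instead of hand-building requests like this.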
18 | 19 | ## Key Features 20 | - **Mixture-of-Experts Architecture**: 671B total parameters, with 37B activated per token 21 | - **Chain-of-Thought Reasoning**: Exposes its reasoning steps before producing a final answer 22 | - **Open Weights**: Released under the MIT license 23 | - **Distilled Variants**: Smaller dense models (1.5B to 70B, based on Qwen and Llama) for lighter-weight deployments 24 | 25 | ### Useful Links 26 | 👉 [Deepseek R1 Paper](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf) 27 | 👉 [Access on Hugging Face](https://huggingface.co/deepseek-ai/DeepSeek-R1) 28 | 👉 [Try Deepseek](https://deepseek.com) 29 | 👉 [API Documentation](https://api-docs.deepseek.com/) 30 | -------------------------------------------------------------------------------- /technologies/deepseek/deepseek-v3.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "DeepSeek V3" 3 | author: "deepseek" 4 | description: "DeepSeek V3 is an advanced language model for complex reasoning and code generation, featuring a 128K context window and multi-task capabilities." 5 | --- 6 | 7 | # DeepSeek V3 8 | 9 | | General | | 10 | | -------------- | ---------------------------------------------------------------- | 11 | | Release date | 2024 | 12 | | Author | [DeepSeek](https://www.deepseek.com) | 13 | | Website | [DeepSeek Models](https://www.deepseek.com/models) | 14 | | Repository | https://github.com/deepseek-ai | 15 | | Type | MoE (Mixture of Experts) Language Model | 16 | 17 | The DeepSeek V3 model represents DeepSeek's most advanced general-purpose architecture, designed for complex reasoning tasks and code generation. With enhanced context handling and improved instruction following, this model excels in technical applications and enterprise deployments. 18 | 19 | ## Key Features 20 | - **Scale**: 671B total parameters (37B activated per token), optimized for math, code, and multilingual tasks
21 | - **Code Generation**: Supports 12+ programming languages 22 | - **Advanced Reasoning**: Chain-of-thought capabilities for multi-step problems 23 | - **Enterprise-Grade Security**: Built-in content filtering and compliance features 24 | - **Speed**: 3x faster generation than previous versions (60 TPS) 25 | - **Open-Source**: FP8/BF16 weights available on Hugging Face 26 | 27 | ### Useful Links 28 | 👉 [**Local Deployment Guide for DeepSeek V3**](https://www.deepseekv3.com/en/blog/deepseek-deploy-guide) 29 | 👉 [**Model Weights on Hugging Face**](https://huggingface.co/deepseek-ai) 30 | 👉 [**API Documentation**](https://api-docs.deepseek.com/) 31 | 👉 [**Deepseek V3 Paper**](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf) 32 | 👉 [**Performance Highlights**](https://github.com/deepseek-ai/DeepSeek-V3) 33 | -------------------------------------------------------------------------------- /technologies/easyocr/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Easy OCR" 3 | description: "EasyOCR is a Python package for Optical Character Recognition. It is a general OCR that can read both natural scene text and dense text in documents. It supports more than 80 languages!" 4 | --- 5 | 6 | # EasyOCR 7 | EasyOCR is a Python package for Optical Character Recognition built on deep learning. 8 | It is a general-purpose OCR that can read both natural scene text and dense text in documents, 9 | and it supports more than 80 languages and all popular writing scripts.
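In typical use, EasyOCR is driven through a `Reader` object whose `readtext` method returns a list of `(bounding_box, text, confidence)` triples. The sketch below shows that flow; the image path is a placeholder, and the confidence-filter helper is an illustrative addition, not part of the library.

```python
# Sketch of typical EasyOCR usage. `easyocr.Reader` and `readtext` are the
# library's documented entry points; "label.png" is a placeholder path and
# `filter_by_confidence` is an illustrative helper, not an EasyOCR API.

def filter_by_confidence(results, min_conf=0.5):
    # `readtext` yields (bounding_box, text, confidence) triples;
    # keep only detections the model is reasonably sure about.
    return [(box, text, conf) for box, text, conf in results
            if conf >= min_conf]


def run_ocr(image_path, languages=("en",), min_conf=0.5):
    import easyocr  # pip install easyocr
    reader = easyocr.Reader(list(languages))  # loads detection + recognition models
    return filter_by_confidence(reader.readtext(image_path), min_conf)

# Example (requires easyocr and a real image):
#   for _box, text, conf in run_ocr("label.png"):
#       print(f"{conf:.2f}  {text}")
```

The first `Reader` construction downloads model weights, so in a service it is usually created once and reused across images.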
10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | June 09, 2019 | 14 | | Type | Optical Character Recognition library | 15 | 16 | --- 17 | 18 | ### EasyOCR Libraries 19 | Discover EasyOCR 20 | 21 | * [EasyOCR repository](https://github.com/JaidedAI/EasyOCR) Ready-to-use OCR with 80+ supported languages and all popular writing scripts including: Latin, Chinese, Arabic, Devanagari, Cyrillic, etc. 22 | * [EasyOCR DEMO](https://www.jaided.ai/easyocr/) EasyOCR demo from Jaided AI, who created this open-source library 23 | -------------------------------------------------------------------------------- /technologies/elevenlabs/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "ElevenLabs" 3 | description: "ElevenLabs is a voice technology research company, developing the most compelling AI speech software for publishers and creators." 4 | --- 5 | 6 | # ElevenLabs 7 | 8 | ElevenLabs is a voice technology research company, developing the most compelling AI speech software for publishers and creators. 9 | The goal is to instantly convert spoken audio between languages. 10 | ElevenLabs was founded in 2022 by best friends: Piotr, an ex-Google machine learning engineer, and Mati, an ex-Palantir deployment strategist. 11 | It's backed by Credo Ventures, Concept Ventures and other angel investors, founders, strategic operators and former executives from the industry. 12 | 13 | | General | | 14 | | ------------ | -------------------------------------- | 15 | | Release date | 2022 | 16 | | Author | [ElevenLabs](https://elevenLabs.io) | 17 | | Type | Voice technology research | 18 | 19 | --- 20 | 21 | ## Products 22 | 23 | ### [Speech Synthesis](https://beta.elevenlabs.io/speech-synthesis) 24 | 25 | The Speech Synthesis tool lets you convert any writing to professional audio.
Powered by a deep learning model, Speech Synthesis lets you voice anything from a single sentence to a whole book in top quality, at a fraction of the time and resources traditionally involved in recording. 26 | 27 | ### [VoiceLab](https://beta.elevenlabs.io/voice-lab) 28 | 29 | Design entirely new synthetic voices or clone your own voice. The generative AI model lets you create completely new voices from scratch, while the voice cloning model learns any speech profile from just a minute of audio. 30 | 31 | ## Resources 32 | 33 | Useful resources on how to build with ElevenLabs 34 | 35 | 36 | 37 | ### ElevenLabs - Helpful Resources 38 | 39 | Check these out to become an ElevenLabs master! 40 | 41 | - [Education](https://beta.elevenlabs.io/education) How to use ElevenLabs technology 42 | - [Help Center](https://help.elevenlabs.io/hc/en-us) Advice and answers from the ElevenLabs team 43 | - [Tutorials](https://beta.elevenlabs.io/tutorials) Learn how to make the most of ElevenLabs 44 | - [API Docs](https://api.elevenlabs.io/docs) API docs for developers 45 | - [ElevenLabs GitHub](https://github.com/elevenLabs) ElevenLabs's open-source repositories 46 | 47 | --- 48 | -------------------------------------------------------------------------------- /technologies/fine-tuner-ai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "FineTuner.ai" 3 | description: "FineTuner.ai is a no-code AI platform that lets users create and deploy custom AI components, without any coding required." 4 | --- 5 | 6 | # FineTuner.ai 7 | 8 | FineTuner.ai is a no-code AI platform that lets users create and deploy custom AI agents and components without any coding. The platform offers an intuitive UI/UX and rapid API deployment, making AI development simpler for users, allowing them to focus on their unique use cases and ideas.
9 | 10 | | General | | 11 | | ------------ | ------------------------------------------------------ | 12 | | Author | [FineTuner.ai](https://fine-Tuner.ai) | 13 | | Type | AI development platform | 14 | 15 | --- 16 | 17 | ## Advantages 18 | 19 | ### No Coding Required 20 | 21 | Our user-friendly UI/UX ensures a seamless experience, enabling you to build and customize AI agents with just a few clicks. 22 | 23 | ### Rapid Deployment 24 | 25 | With FineTuner.ai, you can deploy AI agents and components via an API in a matter of minutes, significantly reducing development time. 26 | 27 | ### Comprehensive Support 28 | 29 | FineTuner.ai handles the hosting of both front-end and back-end, connects to vector databases and LLMs, and offers extensive support, so you can focus on your unique use case. 30 | 31 | --- 32 | 33 | ## Quick links 34 | 35 | - [Custom Chatbot](https://help.fine-tuner.ai/use-cases/custom-chatbot) 36 | - [Agents](https://help.fine-tuner.ai/use-cases/agents) 37 | - [Autonomous Agents](https://help.fine-tuner.ai/use-cases/autonomous-agents) 38 | - [Embeddings](https://help.fine-tuner.ai/use-cases/embeddings) 39 | - [Model Fine Tuning](https://help.fine-tuner.ai/use-cases/model-fine-tuning) 40 | 41 | --- 42 | 43 | ## Get Started 44 | 45 | We've put together some helpful guides for you to get set up with our product quickly and easily: 46 | 47 | - [Getting set up](https://help.fine-tuner.ai/fundamentals/getting-set-up) 48 | - [How to set up an AI-powered chatbot](https://help.fine-tuner.ai/fundamentals/getting-set-up/how-to-set-up-an-ai-powered-chatbot) 49 | 50 | --- 51 | 52 | [Tutorials](https://help.fine-tuner.ai/fundamentals/getting-set-up) 53 | 54 | Get a walkthrough on getting started with FineTuner.ai.
55 | 56 | --- 57 | 58 | ## Resources 59 | 60 | ### FineTuner.ai - Helpful Resources 61 | 62 | If you need further assistance, check out these resources: 63 | 64 | - [Documentation](https://help.fine-tuner.ai/fine-tuner.ai/overview/what-is-fine-tuner.ai) In-depth guide to use FineTuner.ai 65 | 66 | --- -------------------------------------------------------------------------------- /technologies/fuyu-8b.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Fuyu-8B" 3 | author: "Adept AI" 4 | description: "Explore Fuyu-8B, a versatile multimodal text and image transformer model designed for digital agents." 5 | --- 6 | 7 | # Fuyu-8B: A Leap in Multimodal AI 8 | Fuyu-8B is an innovative multimodal text and image transformer model developed by Adept AI, designed to empower digital agents with advanced image understanding and natural language processing capabilities. 9 | 10 | This small-sized model offers exciting possibilities for various applications and is available on HuggingFace. Here, we dive into the details of Fuyu-8B, highlighting its unique features and potential applications. 11 | 12 | | General | | 13 | | --- | --- | 14 | | Release date | Month XX, 20XX | 15 | | Author | [Adept AI](https://www.adept.ai) | 16 | | Repository | [Fuyu-8B on HuggingFace](https://huggingface.co/adept/fuyu-8b) | 17 | | Type | Multimodal Text and Image Transformer | 18 | 19 | 20 | ## Start Building with Fuyu-8B 21 | 22 | Fuyu-8B is a groundbreaking model that simplifies the world of multimodal AI. Here are some key features and capabilities that make it stand out: 23 | 24 | - **Simplified Architecture**: Fuyu-8B boasts a much simpler architecture and training procedure compared to other multimodal models. This simplicity makes it easier to understand, scale, and deploy for a wide range of applications. 25 | 26 | - **Digital Agent-Focused**: This model is purpose-built for digital agents, making it adept at handling various tasks.
It can support arbitrary image resolutions, answer questions about graphs and diagrams, respond to UI-based queries, and perform fine-grained localization on screen images. 27 | 28 | - **Remarkable Speed**: Fuyu-8B is designed for speed. It can provide responses for large images in less than 100 milliseconds, ensuring rapid interactions with digital agents. 29 | 30 | - **Performance**: Despite being optimized for specific use cases, Fuyu-8B performs well on standard image understanding benchmarks, including visual question-answering and natural image captioning. 31 | 32 | Please note that the released model is a base model. Depending on your specific use case, you may need to fine-tune it for tasks like verbose captioning or multimodal chat. Fuyu-8B has proven to be highly adaptable through few-shot learning and fine-tuning for a variety of use cases. 33 | 34 | ### Fuyu-8B Tutorials 35 | 36 | --- 37 | 38 | ### Fuyu-8B Resources 39 | 40 | Here are some valuable resources to help you get the most out of Fuyu-8B: 41 | 42 | * [Fuyu-8B Documentation](https://huggingface.co/adept/fuyu-8b): Comprehensive documentation for the model, including usage guides and tips. 43 | * [Adept AI Blog Post](https://www.adept.ai/blog/fuyu-8b): Read Adept AI's official blog post for the latest insights and announcements about Fuyu-8B. 44 | 45 | 46 | --- 47 | -------------------------------------------------------------------------------- /technologies/fuyu/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Fuyu-8B Model: Multi-Modal Text and Image Transformer" 3 | description: "Empowering developers with a versatile, fast-response multi-modal text and image transformer for digital agents." 4 | --- 5 | 6 | # Fuyu-8B Model 7 | Fuyu-8B, developed by Adept AI, is a cutting-edge multi-modal text and image transformer tailored specifically for digital agents.
Released in Month XX, 20XX, this model is optimized for swift response times (under 100 milliseconds) while excelling in a range of image-related tasks. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [Adept AI](https://www.adept.ai) | 12 | | Repository | [Fuyu-8B on HuggingFace](https://huggingface.co/adept/fuyu-8b) | 13 | | Type | Multi-modal text and image transformer | 14 | 15 | ## Model Capabilities 16 | 17 | Fuyu-8B is engineered to empower digital agents with a diverse set of capabilities, including: 18 | 19 | - **Image-Related Queries:** Efficient handling of image-related queries and tasks. 20 | - **Fine-Grained Image Localization:** Precision in identifying and localizing elements within images. 21 | - **Rapid Responses for Large Images:** Swift and accurate responses even with large-sized images. 22 | 23 | ### Fuyu-8B Tutorials 24 | 25 | --- 26 | 27 | ### Fuyu-8B Model Resources 28 | 29 | To leverage the potential of Fuyu-8B, explore these resources: 30 | 31 | - [Fuyu-8B Documentation](https://www.adept.ai/docs/fuyu-8b): Detailed documentation for implementation and usage. 32 | - [Adept AI Blog Post on Fuyu-8B](https://www.adept.ai/blog/fuyu-8b): Gain deeper insights and official announcements about Fuyu-8B. 33 | --- 34 | -------------------------------------------------------------------------------- /technologies/gan/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "GAN networks" 3 | description: "GAN models contain two neural network models - a generator and a discriminator. They compete with each other, improving every epoch: one generates images, the other verifies them." 4 | --- 5 | 6 | # GAN 7 | GANs (Generative Adversarial Networks) are machine learning models used to generate realistic data that resembles the data they were trained on. These models consist of two components, a generator and a discriminator, both of which are neural networks.
During the training process the generator takes a random input and tries to produce data that is realistic enough to fool the discriminator, while the discriminator tries to identify the fake data generated by the generator. In this way both networks are trained in a feedback loop, where the generator tries to produce more realistic data to fool the discriminator and the discriminator becomes better at distinguishing real data from fake data. This process is what gives GANs their name: they are adversarial because the two networks are competing against each other. 9 | As the two networks continue to improve, the generated data becomes more and more similar to the training data, resulting in highly realistic and diverse output. GANs have been used in a variety of applications, including image and video generation, music synthesis, and natural language processing. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | 2014 | 14 | | Type | Machine Learning Framework | 15 | 16 | --- 17 | 18 | ### Discover GAN 19 | 20 | * [GAN Paper](https://arxiv.org/abs/1406.2661) Original Generative Adversarial Network Paper 21 | -------------------------------------------------------------------------------- /technologies/generative-agents/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Generative Agents" 3 | description: "Generative Agents are computational software that simulate human behavior in interactive applications." 4 | --- 5 | 6 | # Generative Agents 7 | Generative Agents are computer programs designed to replicate human actions and responses within interactive software. To create believable individual and group behavior, they utilize memory, reflection, and planning in combination. These agents have the ability to recall past experiences, make inferences about themselves and others, and devise strategies based on their surroundings.
They have a wide range of applications, including creating immersive environments, rehearsing interpersonal communication, and prototyping. In a simulated world resembling The Sims, automated agents can interact, build relationships, and collaborate on group tasks while users watch and intervene as necessary. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Release date | April 7, 2023 | 12 | | Type | Autonomous Agent Simulation | 13 | 14 | ## Start building with Generative Agents 15 | We have collected the best Generative Agents libraries and resources to help you get started building with Generative Agents today. To see what others are building with Generative Agents, check out the community built [Generative Agents Use Cases and Applications](/apps/tech/generative-agents). 16 | 17 | ### Generative Agents Tutorials 18 | 19 | 20 | --- 21 | 22 | ### Generative Agents resources 23 | Kickstart your development with Generative Agents. 24 | 25 | * [LangChain implementation of Generative Agents](https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html) 26 | * [Arxiv paper](https://arxiv.org/abs/2304.03442) 27 | 28 | --- 29 | -------------------------------------------------------------------------------- /technologies/godmode/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Godmode AI" 3 | description: "Godmode AI is a new AI platform that gives access to innovative AI agents like Auto-GPT and BabyAGI." 4 | --- 5 | 6 | # Godmode: Enhancing Auto-GPT and BabyAGI 7 | Godmode AI is a new platform that gives access to innovative AI agents like Auto-GPT and BabyAGI. While AI is still developing, Godmode allows people to utilize these technologies even in the early stages. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Platform | [Godmode](https://godmode.space) | 12 | | Type | Web platform, powered by AI Agents | 13 | 14 | 15 | ## What is Godmode AI?
16 | Godmode AI is a web application developed by FOLLGAD that requires a JavaScript-enabled browser. It utilizes AI agents to analyze input data and generate creative outputs. The key features include: 17 | 18 | - Conversational interface to get AI perspectives 19 | - Ability to explore hypothetical scenarios 20 | - Generates unique ideas and creative content 21 | - Accessible to anyone without technical skills 22 | 23 | The tool is inspired by Auto-GPT and BabyAGI and supports models like GPT-3.5 and GPT-4. It aims to harness the potential of AI agents to provide innovative insights. 24 | 25 | 26 | ### What can I use Godmode for? 27 | With Godmode AI, users can unlock new possibilities and insights. Some potential use cases include: 28 | 29 | - Getting a new perspective on markets to launch products in 30 | - Generating resignation letters or other business documents 31 | - Exploring hypothetical scenarios, e.g. an advanced pre-ice-age civilization 32 | - Creative writing and brainstorming sessions 33 | 34 | The AI analyzes prompts and generates creative outputs ranging from text to images. This unlocks innovative ideas people may not have considered before. 35 | 36 | 37 | ## How to get started with Godmode? 38 | Godmode has an intuitive user interface that does not need any technical expertise. Users can simply: 39 | 40 | 1. Visit [Godmode.space](https://godmode.space) and create an account 41 | 2. Input a text prompt or question 42 | 3. View generated outputs from the AI agent 43 | 4. Refine and customize as needed 44 | 45 | The conversational nature allows users to guide the AI by building on responses with follow-up prompts. 46 | 47 | ### AI Agents Tutorials 48 | 49 | --- 50 | 51 | ### Godmode Libraries 52 | A curated list of libraries and technologies to help you build great projects with Godmode. 
53 | 54 | * [Godmode Platform](https://godmode.space) 55 | * [Demo video](https://twitter.com/_Lonis_/status/1646641412182536196) 56 | 57 | --- 58 | -------------------------------------------------------------------------------- /technologies/google-colab/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Google Colab" 3 | description: "Google Colab is a free cloud service that allows users to collaborate on Jupyter notebooks. Notebooks can be shared with other users and can be made public. It provides a virtual machine with 12GB of RAM and an NVIDIA Tesla K80 GPU. Google Colab is available to anyone with a Google account." 4 | --- 5 | 6 | # Google Colab 7 | Colaboratory, or “Colab” for short, is a product from Google Research. Colab allows anybody to 8 | write and execute arbitrary Python code through the browser, and is especially well suited to 9 | machine learning, data analysis and education. More technically, Colab is a hosted Jupyter notebook 10 | service that requires no setup to use, while providing access free of charge to computing resources including GPUs. 11 | 12 | | General | | 13 | | --- | --- | 14 | | Type | Data analysis and machine learning tool | 15 | 16 | --- 17 | 18 | ### Discover Google Colab 19 | 20 | * [Google Colab](https://colab.research.google.com) Here you can create your first Colab notebook and choose from several examples to get you started -------------------------------------------------------------------------------- /technologies/google/chirp.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Chirp" 3 | author: "google" 4 | description: "Meet Chirp, the revolutionary speech-to-text technology powered by Google AI, bringing unparalleled accuracy and language support to speech recognition services." 
5 | --- 6 | 7 | # Google AI's Chirp: Cutting-Edge Speech-to-Text Technology 8 | 9 | Chirp represents the latest breakthrough in speech-to-text processing, developed by Google AI and integrated into Google Cloud's Speech API. This revolutionary model boasts 2 billion parameters and leverages self-supervised learning from millions of hours of audio and 28 billion text sentences across more than 100 languages. Chirp achieves a remarkable 98% speech recognition accuracy in English and a 300% relative improvement in several languages spoken by less than 10 million people. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | 2023 | 14 | | Author | [Google AI](https://ai.google/) | 15 | | Type | Speech-to-Text | 16 | 17 | ## Standout Capabilities 18 | 19 | - **Broad Language Support:** Chirp caters to over 100 languages, ensuring top-notch speech recognition for a wide array of languages and accents. 20 | - **Unparalleled Accuracy:** With 98% speech recognition accuracy in English and notable enhancements in other languages, Chirp sets a new industry standard. 21 | - **Massive Model Size:** Chirp's 2-billion-parameter model outpaces previous speech models to deliver superior performance. 22 | - **Innovative Training Approach:** Chirp's encoder is initially trained with an enormous amount of unsupervised (unlabeled) audio data from 100+ languages, followed by fine-tuning for transcription in each specific language using smaller supervised datasets. 23 | 24 | 25 | ## Start Building with Chirp 26 | 27 | We have collected the best Chirp libraries and resources to help you get started and build state-of-the-art speech-to-text applications. 28 | 29 | ### Chirp Libraries 30 | 31 | A curated list of libraries and technologies to help you build great projects with Chirp. 
32 | 33 | * [Client Libraries](https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries?hl=en) 34 | 35 | ### Chirp Boilerplates 36 | 37 | Kickstart your development with a Chirp-based boilerplate. Boilerplates are a great way to get a head start when building your next project with Chirp. 38 | 39 | * [Quickstart](https://cloud.google.com/speech-to-text/v2/docs/transcribe-client-libraries?hl=en) 40 | 41 | --- 42 | 43 | -------------------------------------------------------------------------------- /technologies/google/codey.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Codey" 3 | author: "google" 4 | description: "Discover Codey, the AI-powered coding assistant developed by Google AI, designed to transform software development with its advanced code generation capabilities across various programming languages." 5 | --- 6 | 7 | # Codey: Google AI's Revolutionary Coding Assistant 8 | 9 | Introducing Codey, a cutting-edge AI-driven coding assistant that reshapes software development by enhancing productivity and streamlining workflows. Developed by Google AI and built on the powerful PaLM 2, Codey supports over 20 programming languages, including Python, Java, JavaScript, Go, Google Standard SQL, and TypeScript. Leveraging large language models, Codey assists developers in a variety of coding tasks, optimizing speed, improving code quality, and bridging skill gaps. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | 2023 | 14 | | Author | [Google AI](https://ai.google/) | 15 | | Type | AI-driven Coding Assistant | 16 | 17 | ## Key Capabilities 18 | 19 | - **Wide Language Support:** Codey is designed to work with over 20 programming languages, offering assistance for a diverse range of development scenarios. 20 | - **Advanced Code Completion:** Codey delivers expertly crafted code suggestions based on the developer's input and context, significantly accelerating the coding process. 
21 | - **Dynamic Code Generation:** By sequentially generating code in response to developers' natural language prompts, Codey streamlines the coding experience and saves valuable time and effort. 22 | - **Interactive Code Chat:** Developers can leverage Codey's chat functionality to interact with an intelligent bot, addressing debugging issues, consulting documentation, learning new concepts, and resolving code-related queries, thus overcoming development challenges with ease. 23 | 24 | ## Wide-Ranging Applications 25 | 26 | Codey's advanced capabilities are integrated into numerous Google platforms, such as Colab, Android Studio, Google Cloud, and Google Search, providing an array of benefits to developers, including: 27 | 28 | - Accelerating coding speeds with context-sensitive suggestions. 29 | - Elevating code quality through AI-assisted code snippets. 30 | - Balancing skill gaps by offering accessible guidance and support to both novice and expert developers. -------------------------------------------------------------------------------- /technologies/google/gemini-ai.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Introducing Gemini: The Future of AI" 3 | author: "google" 4 | description: "Explore Gemini, Google DeepMind's most advanced AI model, blending multimodality and high efficiency." 5 | --- 6 | 7 | # Gemini AI 8 | 9 | Gemini AI represents a groundbreaking achievement in the field of artificial intelligence, developed by Google DeepMind. It's a model that epitomizes the blend of multimodality and efficiency, designed to work seamlessly across various platforms, from data centers to mobile devices. 
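For developers, Gemini is exposed through the Generative Language REST API. The stdlib-only sketch below shows roughly how a text-only `generateContent` call can be assembled; the endpoint path, payload shape, response layout, and `gemini-pro` model name are assumptions based on the public v1beta API, and the API key is a placeholder, so verify all of them against the official documentation before use.

```python
import json
from urllib import request

API_KEY = "YOUR_API_KEY"  # placeholder: create a key in Google AI Studio

def build_gemini_request(prompt: str, model: str = "gemini-pro"):
    """Build the URL and JSON body for a text-only generateContent call.

    The endpoint path and payload shape follow the public v1beta
    Generative Language API; double-check them against the docs.
    """
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent?key={API_KEY}"
    )
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, payload

def call_gemini(prompt: str) -> str:
    """Send the request and pull the text out of the first candidate."""
    url, payload = build_gemini_request(prompt)
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # network call; needs a valid key
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

Separating request construction from the network call keeps the payload logic easy to inspect and test before spending quota.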
10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | December 13, 2023 | 14 | | Author | [Google DeepMind](https://deepmind.google) | 15 | | Type | Multimodal AI model | 16 | 17 | 18 | ## Introducing Gemini AI 19 | 20 | Demis Hassabis, CEO and Co-Founder of Google DeepMind, introduces **[Gemini AI](https://deepmind.google/technologies/gemini/#introduction)** as the culmination of a lifelong passion for AI and neuroscience. Gemini AI aims to create intuitive, multimodal AI models, extending beyond traditional smart software to a more holistic, assistant-like experience. 21 | 22 | ### **Key Highlights of Gemini AI:** 23 | 24 | - **Multimodal Capabilities:** Gemini AI is designed to understand and process various types of information, including text, code, audio, image, and video. 25 | - **Flexibility:** Efficient across platforms, from data centers to mobile devices. 26 | - **Optimized Versions:** Gemini Ultra, Pro, and Nano, each tailored for specific requirements. 27 | - **Advanced Performance:** Leading performance in various benchmarks, surpassing human expertise in some areas. 28 | - **Next-Generation Capabilities:** Natively multimodal, trained across different modalities for superior performance. 29 | - **Advanced Coding:** Capable of understanding and generating high-quality code in multiple programming languages. 30 | 31 | ### **Gemini AI and Google's Ecosystem:** 32 | 33 | - **Enhanced with Google's Infrastructure:** Utilizes Google’s Tensor Processing Units (TPUs) for optimized performance. 34 | - **Integration Across Products:** From Google Bard to Pixel 8 Pro, Gemini AI is being rolled out in a variety of Google products. 35 | 36 | ### **Responsibility and Safety:** 37 | 38 | - **Comprehensive Safety Evaluations:** Rigorous testing for bias, toxicity, and other potential risks. 
39 | - **Collaborative Development:** Engagement with external experts and adherence to Google's AI Principles. 40 | 41 | ### **Availability and Access:** 42 | 43 | - **Gemini API:** Accessible via Google AI Studio or Google Cloud Vertex AI starting December 13. 44 | - **AICore for Android Developers:** Build with Gemini Nano on Android 14, starting with Pixel 8 Pro devices. 45 | -------------------------------------------------------------------------------- /technologies/google/generative-ai-studio.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Generative AI Studio" 3 | author: "google" 4 | description: "Discover the capabilities of Google's Vertex AI, a managed environment that simplifies the integration and deployment of generative AI and foundation models for production." 5 | --- 6 | 7 | # Google's Generative AI Studio 8 | Experience the power of Google's Vertex AI through Generative AI Studio, a managed environment that streamlines the interaction, customization, and deployment of foundation models for production applications. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | 2023 | 13 | | Author | [Google](https://cloud.google.com) | 14 | | Documentation | [Link](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/generative-ai-studio) | 15 | | Type | Generative AI Model Management | 16 | 17 | ## Start building with Generative AI Studio 18 | Explore the best Generative AI Studio resources and libraries to help you get started with building projects using Google's Vertex AI today. 19 | 20 | ### Generative AI Studio Links 21 | 22 | A curated list of libraries and resources to help you build outstanding projects with Generative AI Studio. 23 | 24 | * [Generative AI Studio Official Documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/generative-ai-studio) Learn how to use Generative AI Studio with the official documentation. 
25 | * [Google's Foundation Models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models) Browse the wide array of foundation models available in Generative AI Studio, such as PaLM, Imagen, Codey, and Chirp. 26 | * [Generative AI Studio API Quickstart](https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/api-quickstart) Get started with the Generative AI Studio API with this helpful quickstart guide. 27 | 28 | --- -------------------------------------------------------------------------------- /technologies/google/imagen.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Imagen" 3 | author: "google" 4 | description: "Discover Imagen, Google's AI marvel that transforms text into stunning visuals, setting a new benchmark in the realm of text-to-image diffusion models." 5 | --- 6 | 7 | # Imagen: A Pioneering Text-to-Image Diffusion Model 8 | 9 | Discover Imagen, an awe-inspiring text-to-image diffusion model that brilliantly merges photorealistic image synthesis with an unparalleled language comprehension mechanism. Born out of rigorous research by Google's Brain Team, Imagen harnesses the exceptional capabilities of large transformer language models for text understanding, while tapping into the prowess of diffusion models to generate high-definition images. 10 | 11 | ## Unearthing Imagen's Key Insights and Features 12 | 13 | - Imagen showcases the extraordinary potential of generic large language models (like T5) when pretrained on text-only data, proving their effectiveness at encoding language for image creation. 14 | - Scaling up the language model in Imagen boosts both sample fidelity and image-text alignment, yielding more significant improvements than scaling up the image diffusion model. 15 | - Imagen sets new benchmarks, achieving a stunning Fréchet Inception Distance (FID) score of 7.27 on the COCO dataset, despite never having been trained on COCO. 
16 | - Human evaluators have determined that Imagen's image-text alignment is on par with the reference images from the COCO dataset itself, signaling its exceptional performance. 17 | 18 | Embrace Imagen, the pinnacle of text-to-image technology, and explore a new frontier of AI-driven image generation capabilities. 19 | 20 | ### Imagen Links 21 | Kickstart your development with Imagen. 22 | 23 | * [Imagen Documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/image/overview) 24 | -------------------------------------------------------------------------------- /technologies/google/model-garden.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Model Garden" 3 | author: "google" 4 | description: "Discover the power of Google's Model Garden on Vertex AI, a comprehensive hub for exploring, interacting, and implementing a diverse array of AI models to accelerate your ML journey." 5 | --- 6 | 7 | # Google's Model Garden on Vertex AI: A Treasure Trove of ML Assets 8 | 9 | Embark on your machine learning journey with a centralized platform to explore, uncover, and interact with an array of models from Google and its partners. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | 2023 | 14 | | Author | [Google](https://cloud.google.com/vertex-ai) | 15 | | Repository | [tensorflow/models](https://github.com/tensorflow/models) | 16 | | Type | AI Model Repository | 17 | 18 | ## Start building with Model Garden on Vertex AI 19 | We have collected the best Model Garden resources to help you get started with Google's Model Garden on Vertex AI today. 20 | 21 | ### Model Garden on Vertex AI Links, Boilerplates & Libraries 22 | Kickstart your development with a Model Garden-based boilerplate. Boilerplates are a great way to get a head start when building your next project with Model Garden on Vertex AI. 
23 | 24 | * [Model Garden on Vertex AI](https://console.cloud.google.com/vertex-ai/model-garden) This link will help you get started with Model Garden on Vertex AI. There are many boilerplates linked for each model. 25 | * [Model Garden Documentation](https://cloud.google.com/vertex-ai/docs/start/explore-models) 26 | 27 | 28 | -------------------------------------------------------------------------------- /technologies/google/palm.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "PaLM" 3 | author: "Google" 4 | description: "Discover PaLM 2, Google's groundbreaking large language model, excelling in complex reasoning, multilingual translation, and coding expertise. Experience the next-generation AI technology that is setting new standards in the field of artificial intelligence." 5 | --- 6 | 7 | # PaLM 2: Google's Revolutionary Large Language Model 8 | 9 | PaLM 2 is a groundbreaking large language model developed by Google, setting extraordinary standards in the realm of artificial intelligence. Outperforming its predecessors and other LLMs in various aspects, PaLM 2 showcases unparalleled capabilities in complex reasoning, multilingual translation, and coding expertise. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | May 2023 | 14 | | Author | [Google](https://ai.google/) | 15 | | Type | Large Language Model | 16 | 17 | ## Experience PaLM 2: Where Unparalleled Performance Meets Responsible Innovation 18 | 19 | We have collected the best PaLM 2 resources to help you explore the capabilities and potential applications of Google's cutting-edge large language model. Unleash the power of PaLM 2 with these valuable resources and see for yourself the future of artificial intelligence. 20 | 21 | --- 22 | 23 | ### PaLM 2 Links 24 | Explore curated libraries and resources to help you build outstanding projects with PaLM 2, Google's transformative large language model. 
25 | 26 | * [PaLM 2 Documentation](https://ai.google/static/documents/palm2techreport.pdf) - Dive into the in-depth technical report of PaLM 2, providing valuable insights into its development and capabilities. 27 | 28 | * [PaLM 2 Overview](https://ai.google/discover/palm2/) - Discover the key features, capabilities, and evolution of PaLM 2, Google's groundbreaking large language model. 29 | 30 | * [PaLM 2 Developer Guide](https://developers.generativeai.google/guide) - Access a comprehensive guide to using Google's generative AI models like PaLM 2, offering a plethora of support and resources for developers. -------------------------------------------------------------------------------- /technologies/gorilla/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Gorilla" 3 | description: "Gorilla is a powerful LLM that can accurately invoke 1,600+ APIs by understanding natural language queries." 4 | --- 5 | 6 | # Gorilla: Large Language Model Connected with Massive APIs 7 | 8 | Gorilla is a state-of-the-art Large Language Model (LLM) designed to understand and accurately invoke over 1,600 APIs by interpreting natural language queries. Its primary aim is to reduce hallucination and offer a user-friendly experience for both developers and non-developers. Gorilla is built by fine-tuning LLaMA weights and harnesses the power of transformer architecture to provide semantically and syntactically correct API calls. 9 | 10 | ## Key Features 11 | 12 | - Accurately invoke 1,600+ APIs using natural language queries 13 | - Reduce hallucination in LLMs 14 | - User-friendly and adaptable to various needs and tools 15 | - Open-source and constantly evolving with community contributions 16 | 17 | ### Setup and Usage 18 | 19 | To get started with Gorilla, follow these steps: 20 | 21 | 1. 
Visit the [Gorilla GitHub repository](https://github.com/ShishirPatil/gorilla) and follow the instructions to set up your local environment. 22 | 2. Try Gorilla with the provided [Google Colab notebook](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing), or run it locally using the instructions in the `inference` folder. 23 | 3. Explore the APIBench dataset and evaluation code released by the Gorilla team. 24 | 4. Contribute your API to the growing APIZoo, following the [contribution guide](https://github.com/ShishirPatil/gorilla/tree/main/data/README.md). 25 | 26 | ### Integration with Other Tools 27 | 28 | Gorilla is designed to work seamlessly with other LLM tools, such as LangChain, Toolformer, and AutoGPT. Its adaptability makes it an ideal candidate for integration into a wide range of applications and toolchains. 29 | 30 | ### System Requirements 31 | 32 | To use Gorilla, you must have Python 3.10 or later installed; earlier versions are not supported. It is recommended to have a reliable internet connection and sufficient computational resources for optimal performance. 33 | 34 | ### Community and Support 35 | 36 | Gorilla is an open-source project, and the team behind it actively encourages community contributions and support. You can join the [Gorilla Discord server](https://discord.gg/3apqwwME) to engage with the community, ask questions, or provide feedback. If you'd like to contribute to the project, follow the [APIZoo contribution guide](https://github.com/ShishirPatil/gorilla/tree/main/data/README.md) and be a part of the growing API ecosystem. 
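In its retrieval-aware mode, Gorilla prepends relevant API documentation to the prompt before the model generates a call. The toy sketch below illustrates that prompt-building pattern with a deliberately crude token-overlap retriever; it is only an illustration of the idea, not Gorilla's actual retrieval code, and the example API doc strings are made up.

```python
def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query tokens found in the doc."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve_api_doc(query: str, api_docs: list) -> str:
    """Pick the API doc string that best matches the query."""
    return max(api_docs, key=lambda doc: score(query, doc))

def build_prompt(query: str, api_docs: list) -> str:
    """Prepend the retrieved doc to the query, mirroring the
    retrieval-aware prompting pattern described for Gorilla."""
    doc = retrieve_api_doc(query, api_docs)
    return f"Use this API documentation for reference: {doc}\n{query}"

# Hypothetical one-line API docs standing in for real APIBench entries.
api_docs = [
    "torchvision.models.resnet50: loads a ResNet-50 image classification model",
    "transformers.pipeline('translation'): translates text between languages",
]
print(build_prompt("translate english text to french", api_docs))
```

A real deployment would use an embedding-based retriever over the APIZoo, but the prompt layout (retrieved doc first, user query after) stays the same.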
37 | -------------------------------------------------------------------------------- /technologies/grounded-sam/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Grounded-Segment-Anything" 3 | description: "Grounded-Segment-Anything is a framework that combines Grounding DINO and Segment Anything to detect and segment objects in images using text prompts. The project also incorporates other models like Stable-Diffusion, Tag2Text, and BLIP for various tasks like image generation and automatic labeling." 4 | --- 5 | 6 | # Grounded-Segment-Anything 7 | 8 | Grounded-Segment-Anything is a framework that combines [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO) and [Segment Anything](https://github.com/facebookresearch/segment-anything) to detect and segment objects in images using text prompts. The project also incorporates other models like [Stable-Diffusion](https://github.com/CompVis/stable-diffusion), [Tag2Text](https://github.com/xinyu1205/Tag2Text), and [BLIP](https://github.com/salesforce/lavis) for various tasks like image generation and automatic labeling. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | March 31, 2023 | 13 | | Repository | https://github.com/IDEA-Research/Grounded-Segment-Anything | 14 | | Type | Image Segmentation and Detection | 15 | 16 | **🔥 Highlighted Projects** 17 | 18 | - Check out [Automated Dataset Annotation and Evaluation with GroundingDINO and SAM](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/automated-dataset-annotation-and-evaluation-with-grounding-dino-and-sam.ipynb), an amazing tutorial on automatic labeling! Thanks to [Piotr Skalski](https://github.com/SkalskiP) and [Roboflow](https://github.com/roboflow/notebooks)! 19 | - Check out the [Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once) demo! 
It supports segmenting with various types of prompts (text, point, scribble, referring image, etc.) and any combination of prompts. 20 | - Check out [OpenSeeD](https://github.com/IDEA-Research/OpenSeeD) for interactive segmentation with box input to generate masks. 21 | - Visual instruction tuning with GPT-4! Please check out the multimodal model **LLaVA**: [[Project Page](https://llava-vl.github.io/)] [[Paper](https://arxiv.org/abs/2304.08485)] [[Demo](https://llava.hliu.cc/)] [[Data](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)] [[Model](https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0)] 22 | 23 | --- 24 | -------------------------------------------------------------------------------- /technologies/langchain/opengpts.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenGPTs: Customizable AI Model Framework" 3 | author: "LangChain" 4 | description: "Empower your projects with customizable language models, prompt precision, and extensive tool integration through OpenGPTs by LangChain." 5 | --- 6 | 7 | # OpenGPTs 8 | OpenGPTs, powered by LangChain's technology stack, offers developers a versatile framework for harnessing AI capabilities. Leveraging over 60 language models, LangSmith's prompt customization, and a suite of 100+ tools, OpenGPTs provides unparalleled control and flexibility in AI model configurations. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [LangChain](https://lablab.ai/tech/langchain) | 13 | | Repository | [GitHub - LangChain OpenGPTs](https://github.com/langchain-ai/opengpts) | 14 | | Type | Customizable AI Model Framework | 15 | 16 | 17 | ## Framework Overview 18 | 19 | OpenGPTs serves as a customizable AI framework, allowing users to fine-tune language models, prompts, tools, vector databases, retrieval algorithms, and chat history databases. 
This level of control surpasses direct usage of OpenAI, enabling developers to interact with APIs directly and craft tailored user interfaces. 20 | 21 | ### Technology Tutorials 22 | 23 | 24 | ### Customization 25 | 26 | - **1. Language Models (LLMs):** Select from over 60 LLMs integrated with LangChain. Note the varying prompts required for different models. 27 | - **2. Prompt Customization:** Debug and fine-tune prompts with LangSmith for enhanced accuracy. 28 | - **3. Tool Integration:** Access a diverse suite of 100+ tools provided by LangChain or easily create custom tools. 29 | - **4. Vector Databases:** Choose from 60+ vector database integrations within LangChain. 30 | - **5. Retrieval Algorithms:** Optimize retrieval algorithms based on project requirements. 31 | - **6. Chat History Databases:** Tailor chat history databases to suit specific project needs. 32 | 33 | ### Agent Types (Default): 34 | 35 | 1. "GPT 3.5 Turbo" 36 | 2. "GPT 4" 37 | 3. "Azure OpenAI" 38 | 4. "Claude 2" 39 | 40 | OpenGPTs' appeal lies in its high level of customization compared to direct usage of OpenAI. Users gain control over language model selection, seamless addition of custom tools, and direct API utilization. Furthermore, developers can craft custom UIs as needed. 41 | 42 | Utilize OpenGPTs to harness the power of AI tailored precisely to your project requirements. 43 | 44 | For a deeper dive into usage and configuration, refer to the [OpenGPTs Documentation](https://github.com/langchain-ai/opengpts#quickstart). 45 | -------------------------------------------------------------------------------- /technologies/llamaindex/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "LlamaIndex" 3 | description: "LlamaIndex connects data sources to large language models, providing tools to ingest, index, and query data for building powerful AI apps augmented by your knowledge." 
4 | --- 5 | 6 | # LlamaIndex: a Data Framework for LLM Applications 7 | LlamaIndex is an open source data framework that allows you to connect custom data sources to large language models (LLMs) like GPT-4, Claude, Cohere LLMs or AI21 Studio. It provides tools for ingesting, indexing, and querying data to build powerful AI applications augmented by your own knowledge. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [LlamaIndex](https://www.llamaindex.ai) | 12 | | Repository | https://github.com/jerryjliu/llama_index | 13 | | Type | Data framework for LLM applications | 14 | 15 | 16 | ## Key Features of LlamaIndex 17 | - Data Ingestion: Easily connect to existing data sources like APIs, documents, databases, etc. and ingest data in various formats. 18 | - Data Indexing: Store and structure ingested data for optimized retrieval and usage with LLMs. Integrate with vector stores and databases. 19 | - Query Interface: LlamaIndex provides a simple prompt-based interface to query your indexed data. Ask a question in natural language and get an LLM-powered response augmented with your data. 20 | - Flexible & Customizable: LlamaIndex is designed to be highly flexible. You can customize data connectors, indices, retrieval, and other components to fit your use case. 21 | 22 | ### How to Get Started with LlamaIndex 23 | LlamaIndex is open source and available on GitHub. [Visit the repo](https://github.com/run-llama/llama-index) to install the Python package, access documentation, guides, examples, and join the community: 24 | 25 | ### AI Tutorials 26 | 27 | --- 28 | 29 | ### LlamaIndex Libraries 30 | A curated list of libraries and technologies to help you build great projects with LlamaIndex. 
31 | 32 | * [LlamaIndex Documentation](https://gpt-index.readthedocs.io/en/latest/) 33 | * [GPT Index](https://pypi.org/project/gpt-index/) 34 | * [LlamaHub](https://llamahub.ai) - the community library of data loaders 35 | * [LlamaLab](https://github.com/run-llama/llama-lab) - cutting-edge AGI projects using LlamaIndex 36 | 37 | --- 38 | -------------------------------------------------------------------------------- /technologies/llava/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "LLaVA" 3 | description: "A state-of-the-art multimodal language and vision model achieving impressive chat capabilities and setting new accuracy benchmarks." 4 | --- 5 | 6 | # LLaVA: Large Language and Vision Assistant 7 | 8 | LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding, achieving impressive chat capabilities in the spirit of the multimodal GPT-4 and setting a new state-of-the-art accuracy on Science QA. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | November 20, 2023 | 13 | | Repository | https://github.com/haotian-liu/LLaVA | 14 | | Type | Multimodal Language and Vision Model | 15 | 16 | ## What is LLaVA? 17 | 18 | Visual Instruction Tuning: LLaVA, short for Large Language-and-Vision Assistant, represents a significant leap in multimodal AI models. 19 | 20 | With a focus on visual instruction tuning, LLaVA has been engineered to rival the capabilities of GPT-4V, demonstrating its exceptional prowess in understanding both language and vision. This state-of-the-art model excels in tasks ranging from impressive chatbot interactions to setting a new standard in science question-answering accuracy, achieving a remarkable 92.53%. 
With LLaVA's innovative approach to instruction-following data and the effective combination of vision and language models, it promises a versatile solution for diverse applications, marking a significant milestone in the field of multimodal AI. 21 | 22 | ### LLaVA Tutorials 23 | 24 | --- 25 | 26 | ### LLaVA Libraries 27 | A curated list of libraries and technologies to help you build great projects with LLaVA. 28 | 29 | * [LLaVA Research Paper](https://llava-vl.github.io) 30 | * [GitHub Repo](https://github.com/haotian-liu/LLaVA) 31 | * [LLaVA Demo](https://llava.hliu.cc) 32 | * [Model Zoo](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md) 33 | 34 | --- 35 | -------------------------------------------------------------------------------- /technologies/meta/llama.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "LLaMA" 3 | author: "meta" 4 | description: "LLaMA (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in the subfield of AI. It is available in multiple sizes (7B, 13B, 33B, and 65B parameters) and aims to democratize access to large language models by requiring less computing power and resources for training and deployment." 5 | --- 6 | 7 | # LLaMA (Large Language Model Meta AI) 8 | 9 | LLaMA is a state-of-the-art foundational large language model designed to help researchers advance their work in the subfield of AI. It is available in multiple sizes (7B, 13B, 33B, and 65B parameters) and aims to democratize access to large language models by requiring less computing power and resources for training and deployment. LLaMA is developed by the FAIR team of Meta AI and has been trained on a large set of unlabeled data, making it ideal for fine-tuning for a variety of tasks. 
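Since the model comes in several sizes, a quick back-of-the-envelope calculation helps match a checkpoint to the available hardware. The sketch below estimates the memory needed for the weights alone at 16-bit precision (2 bytes per parameter); it is a rough rule of thumb, not an official requirement, and inference also needs room for activations and the KV cache.

```python
def fp16_weights_gib(n_params_billions: float) -> float:
    """Memory for the weights alone at 16-bit precision (2 bytes per
    parameter). Activations, optimizer state, and KV cache add more."""
    return n_params_billions * 1e9 * 2 / 2**30

# Estimate weight memory for each released LLaMA size.
for size in (7, 13, 33, 65):
    print(f"LLaMA-{size}B: ~{fp16_weights_gib(size):.0f} GiB of weights")
```

By this estimate the 7B model fits on a single consumer GPU, while the 65B model needs multiple accelerators or aggressive quantization.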
10 | 11 | | General | | 12 | | -------------------- | -------------------------------------------------------- | 13 | | Release date | 2023 | 14 | | Author | [Meta AI FAIR Team](https://research.facebook.com/ai) | 15 | | Model sizes | 7B, 13B, 33B, 65B parameters | 16 | | Model Architecture | Transformer | 17 | | Training data source | CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange | 18 | | Supported languages | 20 languages with Latin and Cyrillic alphabets | 19 | 20 | ## Start building with LLaMA 21 | 22 | LLaMA provides an opportunity for researchers and developers to study large language models and explore their applications in various domains. To get started with LLaMA, you can access its code through the [GitHub repository](https://github.com/facebookresearch/llama). 23 | 24 | ### LLaMA Links 25 | 26 | Important links about LLaMA in one place: 27 | 28 | - [LLaMA Research Paper](https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/) - Read the research paper about LLaMA to learn more about the model and its development process. 29 | - [LLaMA GitHub Repository](https://github.com/facebookresearch/llama) - Access the LLaMA model, code, and resources on GitHub. 30 | 31 | --- -------------------------------------------------------------------------------- /technologies/meta/seamlessm4t.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "SeamlessM4T" 3 | author: "meta" 4 | description: "Explore SeamlessM4T, a multilingual AI model for speech/text translation in 100+ languages." 5 | --- 6 | 7 | # SeamlessM4T by Meta AI 8 | SeamlessM4T is a cutting-edge AI model developed by Meta AI, designed for seamless speech and text translation across more than 100 languages. Whether you need speech-to-speech, speech-to-text, or text-to-text translation, SeamlessM4T offers state-of-the-art capabilities to meet your communication needs. 
9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [Meta AI](https://lablab.ai/tech/meta) | 13 | | Repository | https://github.com/facebookresearch/seamless_communication | 14 | | Type | Multilingual AI Model | 15 | 16 | 17 | ## Start building with SeamlessM4T 18 | 19 | We have compiled essential resources to help you kickstart your journey with SeamlessM4T. Dive into the world of multilingual translation and discover how you can leverage SeamlessM4T's power in your applications. 20 | 21 | --- 22 | 23 | ### Hackathon Preparation 24 | 25 | If you're gearing up for a hackathon or want to experiment with SeamlessM4T, here's what you can do: 26 | 27 | * **Access the Code**: Visit the [GitHub repo](https://github.com/facebookresearch/seamless_communication) to access the source code of SeamlessM4T. You can explore the codebase, contribute, or use it as a reference for your projects. 28 | * **Try the Demo**: Head over to [HuggingFace](https://huggingface.co/spaces/facebook/seamless_m4t) to experience the capabilities of SeamlessM4T in action. The demo allows you to experiment with various translation scenarios. 29 | * **Build Your Applications**: Leverage SeamlessM4T to build your own applications or integrations. Whether it's language translation, speech recognition, or text analysis, SeamlessM4T can power your AI projects. 30 | * **Responsible AI**: Ensure you follow best practices for responsible AI development. Be mindful of issues related to model bias and toxicity, and refer to Meta's responsible AI framework for guidance. 31 | 32 | For more in-depth insights and updates about SeamlessM4T, explore the [Meta AI blog](https://ai.meta.com/blog/seamless-m4t/).
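To make the hackathon checklist above concrete, here is a minimal text-to-text translation sketch using the Hugging Face `transformers` integration. This assumes `transformers >= 4.35` (which ships SeamlessM4T support) and the `facebook/hf-seamless-m4t-medium` checkpoint; the official `seamless_communication` package exposes its own, different API, so check the model card before relying on these names.

```python
def translate_text(text: str, src_lang: str = "eng", tgt_lang: str = "fra") -> str:
    """Text-to-text translation with SeamlessM4T via Hugging Face transformers.

    A sketch, not an official recipe: model id and class names are assumptions
    based on the transformers SeamlessM4T integration.
    """
    # pip install transformers sentencepiece torch
    from transformers import AutoProcessor, SeamlessM4TForTextToText

    processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
    model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium")

    # Languages use three-letter codes, e.g. "eng", "fra", "deu".
    inputs = processor(text=text, src_lang=src_lang, return_tensors="pt")
    tokens = model.generate(**inputs, tgt_lang=tgt_lang)
    return processor.decode(tokens[0], skip_special_tokens=True)

# translate_text("Hello, how are you?")  # downloads several GB of weights on first run
```

The same model classes handle speech-to-text and speech-to-speech variants; text-to-text is simply the lightest mode to demo at a hackathon.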
33 | 34 | --- 35 | -------------------------------------------------------------------------------- /technologies/metagpt/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "MetaGPT" 3 | description: "Empower your software projects with MetaGPT, a collaborative AI model for complex tasks." 4 | --- 5 | 6 | # MetaGPT: Collaborative AI for Complex Tasks 7 | 8 | MetaGPT is a groundbreaking AI technology, designed to transform the landscape of software development. This innovative AI model can be thought of as a collaborative software entity, bringing together different roles within a software company to streamline complex tasks. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | August 2023 | 13 | | Repository | https://github.com/geekan/MetaGPT | 14 | | Type | Collaborative AI Agent | 15 | 16 | 17 | ## Start Building with MetaGPT 18 | 19 | If you're ready to harness the power of MetaGPT for your software development projects, dive right in and start building! 20 | 21 | 22 | ### MetaGPT Tutorials 23 | 24 | 25 | --- 26 | 27 | ### MetaGPT Libraries 28 | 29 | * [MetaGPT Documentation](https://github.com/geekan/MetaGPT#documentation) 30 | 31 | --- 32 | 33 | ## Learn More About MetaGPT 34 | 35 | 36 | * [Installation Guide](https://github.com/geekan/MetaGPT#installation): Get MetaGPT up and running quickly with step-by-step instructions. 37 | * [MetaGPT's Abilities](https://github.com/geekan/MetaGPT#metagpts-abilities): Explore the full range of capabilities MetaGPT offers. 38 | * [MetaGPT Quickstart](https://deepwisdom.feishu.cn/wiki/CyY9wdJc4iNqArku3Lncl4v8n2b): Dive into a quickstart guide to kickstart your projects with MetaGPT. 39 | * [Product Demo](https://github.com/geekan/MetaGPT#demo): See MetaGPT in action with a product demo.
40 | * [MetaGPT Discord Community](https://discord.com/invite/ZRHeExS6xv): Join the MetaGPT community on Discord to connect with other users and developers. 41 | * [MetaGPT Research Paper](https://arxiv.org/abs/2308.00352): Read the research paper to understand the technical details behind MetaGPT's capabilities. 42 | 43 | MetaGPT is the future of software development, bringing unprecedented efficiency and innovation to your projects. 44 | -------------------------------------------------------------------------------- /technologies/mistral-ai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Mistral AI: Frontier AI in Your Hands" 3 | description: "Empowering AI enthusiasts with cutting-edge Mistral AI models." 4 | --- 5 | 6 | # Mistral AI: Frontier AI in Your Hands 7 | 8 | Mistral AI is at the forefront of pushing the boundaries of artificial intelligence. Their commitment to open models and community-driven innovation sets them apart. Discover Mistral 7B, their latest breakthrough in AI technology. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [Mistral AI](https://docs.mistral.ai) | 13 | | Repository | [GitHub](https://github.com/mistralai/) | 14 | | Type | Large Language Model | 15 | 16 | 17 | ## Introduction 18 | 19 | Mistral 7B v0.1 is Mistral AI's first Large Language Model (LLM). A Large Language Model (LLM) is an artificial intelligence algorithm trained on massive amounts of data that is able to generate coherent text and perform various natural language processing tasks. 20 | 21 | The raw model weights are downloadable from the [documentation](https://docs.mistral.ai) and on [GitHub](https://github.com/mistralai/). 22 | 23 | A Docker image bundling vLLM, a fast Python inference server, with everything required to run the model is provided to quickly spin up a completion API on any major cloud provider with NVIDIA GPUs. 24 | 25 | ### Where to Start?
26 | 27 | If you are interested in deploying the Mistral AI LLM on your infrastructure, check out the [Quickstart](https://docs.mistral.ai). If you want to use the API served by a deployed instance, go to the [Interacting with the model page](https://docs.mistral.ai/api) or view the [API specification](https://docs.mistral.ai/api). 28 | 29 | ## Mistral AI Resources 30 | 31 | * [Quickstart Guide](https://docs.mistral.ai) 32 | * [API Documentation](https://docs.mistral.ai/api) 33 | * [Cloud Deployment](https://docs.mistral.ai/category/cloud-deployment) 34 | * [Usage Guidelines](https://docs.mistral.ai/category/usage) 35 | * [Large Language Models](https://docs.mistral.ai/category/large-language-models) 36 | 37 | 38 | ### Mistral AI Tutorials 39 | 40 | --- 41 | -------------------------------------------------------------------------------- /technologies/monday/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "monday" 3 | description: "Explore the various tools, resources, and integrations for the Monday.com platform to enhance and customize your workflow management experience." 4 | --- 5 | 6 | # Monday.com Platform Overview 7 | 8 | Monday.com is a versatile work operating system that enables teams to manage their projects, workflows, and everyday tasks more efficiently. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | 2012 | 13 | | Author | [Monday.com](https://www.monday.com/) | 14 | | Marketplace | [Link](https://monday.com/marketplace/) | 15 | 16 | ## Monday.com API 17 | 18 | Check out the [Monday.com API documentation](https://developer.monday.com/api-reference/docs) to learn more about the API and how to use it. 19 | 20 | ## Monday.com Apps Marketplace 21 | 22 | Visit the [Monday.com apps marketplace](https://monday.com/marketplace/) for more tools, integrations, and resources that complement the Monday.com platform. 
You can extend the functionality of your workflows by using in-built apps, as well as build your own custom apps. 23 | 24 | --- -------------------------------------------------------------------------------- /technologies/monday/monday-ai-assistant.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Monday AI Assistant" 3 | author: "monday" 4 | description: "Discover the power of Artificial Intelligence with Monday AI Assistant, a versatile platform for developers to create AI apps for the Monday marketplace." 5 | --- 6 | 7 | # Monday AI Assistant: Boost Your Productivity 8 | Get ready to harness the power of Artificial Intelligence in your workspaces. Monday AI Assistant, a new and innovative platform, is designed to enhance your productivity and revolutionize the way you work. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | May, 2023 | 13 | | Author | [Monday.com](https://www.monday.com/) | 14 | | Type | AI-driven productivity assistant | 15 | 16 | ## Level up your Monday workspace with AI 17 | Monday AI Assistant offers a range of cutting-edge features that can streamline your everyday tasks and elevate efficiency. From generating tasks and composing emails to summarizing complex content and building formulas, Monday AI Assistant will transform your workspace like never before. 18 | 19 | ### Monday AI Assistant Beta Features 20 | - Task Generation 21 | - Email Composing and Rephrasing 22 | - Complex Task Summarization 23 | - Building Formulas 24 | 25 | ### Monday AI Assistant Marketplace 26 | Monday AI Assistant opens up new possibilities for developers by offering access to an extensive market for AI-powered apps. Bring your innovative ideas to life and make the future of work smarter and more efficient with Monday AI Assistant. 
27 | 28 | ### Helpful Resources 29 | - [Monday AI Assistant Announcement](https://community.monday.com/t/introducing-monday-ai-assistant/56341) 30 | - [Monday Marketplace](https://monday.com/marketplace) 31 | 32 | -------------------------------------------------------------------------------- /technologies/monday/mondaycom.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Monday.com: Team Collaboration and Task Management" 3 | author: "monday" 4 | description: "Streamline your team's workflow and optimize collaboration with Monday.com, an all-in-one platform for task management, communication, and project tracking." 5 | --- 6 | 7 | # monday.com: Improve Team Collaboration and Task Management 8 | 9 | monday.com is an intuitive work operating system that simplifies collaboration, task management, and project tracking for teams of all sizes. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | 2012 | 14 | | Author | [monday.com](https://monday.com/) | 15 | | Marketplace | [monday.com Marketplace](https://monday.com/marketplace) | 16 | | Type | Work OS, Task Management, and Collaboration | 17 | 18 | ## monday.com Templates 19 | Get started with ready-made templates from the Monday.com template center. These templates are designed for various industries, businesses, and teams and can be easily customized to fit your needs. Explore the [monday.com Templates](https://monday.com/templates) to find your perfect fit. 20 | 21 | Looking for your next app idea? Get more app ideas here: [monday.com/appdeveloper/appideas](https://monday.com/appdeveloper/appideas) 22 | 23 | --- 24 | ## monday.com API and Integrations 25 | Bring your workflow to the next level by utilizing Monday.com's API and integration possibilities.
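The API is GraphQL over a single HTTPS endpoint, authenticated with a personal token in the `Authorization` header. As a minimal sketch (the boards query is an illustrative example; see the API reference for the full schema):

```python
import json
import urllib.request

MONDAY_API_URL = "https://api.monday.com/v2"


def build_boards_query(limit: int = 5) -> bytes:
    """Build a GraphQL payload that lists board ids and names."""
    query = f"query {{ boards (limit: {limit}) {{ id name }} }}"
    return json.dumps({"query": query}).encode("utf-8")


def fetch_boards(api_token: str) -> dict:
    """POST the query to the v2 endpoint; the token goes in the Authorization header."""
    req = urllib.request.Request(
        MONDAY_API_URL,
        data=build_boards_query(),
        headers={"Authorization": api_token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# fetch_boards("YOUR_API_TOKEN")  # requires a token from your monday.com profile settings
```

Mutations (creating items, updating columns) go through the same endpoint with a `mutation { ... }` query string.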
26 | 27 | * [monday.com API Documentation](https://developer.monday.com/api-reference/docs) 28 | * [monday.com App integration documentation](https://developer.monday.com/apps/docs) 29 | --- 30 | 31 | ## monday.com Pricing 32 | Find the pricing package that suits your organization's needs with [Monday.com Pricing](https://monday.com/pricing). 33 | 34 | --- 35 | 36 | ### monday.com Help and Contact 37 | Explore the [Monday.com Help Center](https://monday.com/help) for guides, articles, and support resources, or get in touch with the team for further assistance. 38 | -------------------------------------------------------------------------------- /technologies/mongodb/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "MongoDB" 3 | description: "A powerful and flexible document-based NoSQL database that has gained popularity among developers." 4 | --- 5 | 6 | # MongoDB: The Developer Data Platform 7 | Get the latest and greatest with MongoDB 6.0! Build faster and smarter with a developer data platform built on the leading modern database. Whether you’re tackling transactional, search, analytics, or mobile use cases, MongoDB has you covered. Plus, enjoy a common query interface and a data model that developers love. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Platform | [MongoDB](https://www.mongodb.com) | 12 | | Repository | https://github.com/mongodb/mongo | 13 | | Type | NoSQL Database | 14 | 15 | 16 | ### Key features 17 | **Build Faster:** 18 | Ship and iterate 3–5x faster using MongoDB’s document data model. 19 | Enjoy a unified query interface for any use case. 20 | 21 | **Scale Further:** 22 | Whether it’s your first customer or 20 million users, meet your performance SLAs in any environment. 23 | 24 | **Safety First:** 25 | Ensure high availability, protect data integrity, and meet security and compliance standards for mission-critical workloads.
26 | 27 | **Fully Managed in the Cloud:** 28 | Start in seconds and scale to millions with MongoDB Atlas, our cloud services. 29 | Explore a multi-cloud developer data platform for various use cases, from transactional to analytical. 30 | 31 | **Mobile Real-Time Data at the Edge:** 32 | Launch secure mobile apps with native, edge-to-cloud sync and automatic conflict resolution. 33 | 34 | **Self-Managed Option:** 35 | Run MongoDB anywhere, from your laptop to your data center. 36 | 37 | **Community Edition:** 38 | Our distributed document database is where it all began. 39 | Free forever with seamless migration to Atlas. 40 | 41 | **Enterprise Advanced:** 42 | For robust features and support, consider the enterprise version. 43 | 44 | **Built by Developers, for Developers:** 45 | MongoDB’s document data model maps to how developers think and code. 46 | 47 | ### What You Can Do with MongoDB 48 | It’s a flexible document data model that lets you ship and iterate faster. Enjoy a unified query interface for all use cases. Whether it’s your first customer or 20 million users worldwide, MongoDB ensures performance SLAs in any environment. Easily ensure high availability, protect data integrity, and meet security and compliance standards for your mission-critical workloads. Plus, MongoDB Atlas provides fully managed cloud services, and you can run it anywhere, from your laptop to your data center. 49 | 50 | ### AI Tutorials 51 | 52 | --- 53 | -------------------------------------------------------------------------------- /technologies/multion/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "MultiOn" 3 | description: "MultiOn, the world's first personal AI Agent and Life Copilot." 4 | --- 5 | 6 | # MultiOn: Your Personal AI Agent & Life Copilot 7 | 8 | MultiOn is a cutting-edge AI assistant designed to simplify your life by performing various tasks autonomously. 
With MultiOn, you can order food, book flights, and even get workplace certifications - all on autopilot. As the world's first personal AI agent, it introduces a new era of convenience and efficiency. 9 | 10 | ## Key Features 11 | 12 | - Autonomously perform tasks like food ordering and flight booking 13 | - Seamlessly integrates with various platforms 14 | - User-friendly and intuitive interface 15 | - Constantly evolving and learning from user interactions 16 | 17 | ### Setup and Usage 18 | 19 | To get started with MultiOn, follow these steps: 20 | 21 | 1. Visit the [MultiOn website](https://www.multion.ai/) and request access. 22 | 2. Once you receive access, follow the on-screen instructions to set up your personal AI agent. 23 | 3. Explore and instruct MultiOn to perform tasks by typing in natural language commands. 24 | 4. Monitor the task progress on the MultiOn dashboard. 25 | 26 | ### Integration with Other Tools 27 | 28 | MultiOn is designed to work seamlessly with various platforms and services. MultiOn can integrate and interact autonomously. 29 | 30 | ### System Requirements 31 | 32 | MultiOn is a web-based platform and requires a stable internet connection for optimal performance. It is compatible with all modern web browsers. 33 | 34 | ### Join the Future of Autonomous Apps 35 | 36 | Embrace the future of autonomous applications with MultiOn, your personal AI agent. Whether you need to send an autonomous meeting invitation or order a burger from your favorite restaurant, MultiOn is built to be intuitive, adaptive, and empowering. 
37 | -------------------------------------------------------------------------------- /technologies/nomicai/gpt4all.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "GPT4All" 3 | author: "NomicAI" 4 | description: "GPT4All is an open-source ecosystem of on-edge large language models that run locally on consumer-grade CPUs, providing a powerful and customizable AI assistant." 5 | --- 6 | 7 | # GPT4All 8 | 9 | GPT4All is an open-source ecosystem of on-edge large language models that run locally on consumer-grade CPUs. It offers a powerful and customizable AI assistant for a variety of tasks, including answering questions, writing content, understanding documents, and generating code. 10 | 11 | GPT4All is supported and maintained by Nomic AI, which aims to make it easier for individuals and enterprises to train and deploy their own large language models on the edge. 12 | 13 | | General | | 14 | | ----------- | --------------------------------------- | 15 | | Release date | 2023 | 16 | | Author | [Nomic AI](https://www.nomic.ai/) | 17 | | Type | Natural Language Processing | 18 | 19 | ## Start building with GPT4All 20 | 21 | To start building with GPT4All, visit the [GPT4All website](https://gpt4all.io/) and follow the installation instructions for your operating system. 22 | 23 | --- 24 | 25 | ### GPT4All Libraries 26 | 27 | A curated list of libraries to help you build great projects with GPT4All. 
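As a quick taste of the Python bindings listed below, a minimal local-generation sketch. The model name is one example from the GPT4All catalog; any compatible model file works, and it is downloaded automatically on first use.

```python
def complete_locally(prompt: str, model_name: str = "orca-mini-3b-gguf2-q4_0.gguf") -> str:
    """Generate text fully offline on a consumer-grade CPU.

    A sketch against the `gpt4all` Python bindings; the model file is
    fetched on first use, so the first call is slow.
    """
    from gpt4all import GPT4All  # pip install gpt4all

    model = GPT4All(model_name)
    with model.chat_session():  # keeps multi-turn context for follow-up prompts
        return model.generate(prompt, max_tokens=64)

# complete_locally("Name three uses for a local LLM.")
```

No API key and no network round-trip per request, which is the point of an on-edge model: the same call works on a plane or behind an air-gapped firewall.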
28 | 29 | - [GPT4All Website](https://gpt4all.io/) 30 | - [GPT4All Documentation](https://docs.gpt4all.io/) 31 | - [Python Bindings](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python) 32 | - [Typescript Bindings](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/typescript) 33 | - [GoLang Bindings](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/golang) 34 | - [C# Bindings](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/csharp) 35 | 36 | --- 37 | 38 | ### GPT4All Examples 39 | 40 | - [GPT4All Chat Client](https://gpt4all.io/) 41 | - [GPT4All in Python](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) 42 | 43 | --- 44 | 45 | For more information on GPT4All, including installation instructions, technical reports, and contribution guidelines, visit the [GPT4All GitHub repository](https://github.com/nomic-ai/gpt4all). 46 | -------------------------------------------------------------------------------- /technologies/nomicai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "NomicAI" 3 | description: "Nomic's goal is to improve the explainability and accessibility of AI." 4 | --- 5 | 6 | # NomicAI 7 | 8 | NomicAI is a company focusing on improving the explainability and accessibility of artificial intelligence (AI). They aim to address the challenges brought about by the rapid rise and concentration of AI technologies in a small number of well-funded AI labs. NomicAI's products include Atlas, a data engine equipped with a scalable embedding space explorer, and GPT4All, an open-source, open-data ecosystem of edge language models. 
9 | 10 | | General | | 11 | | --- | --- | 12 | | Company | [NomicAI](https://home.nomic.ai/) | 13 | | Area served | Worldwide | 14 | 15 | ## Key Products and Services 16 | 17 | NomicAI offers products and services aimed at improving the accessibility, understanding, and control of AI technologies. Their key products and services include: 18 | 19 | - **Atlas**: A data engine featuring the world's most scalable embedding space explorer, allowing users to visualize, organize, curate, search, and share massive datasets in their browsers. Atlas gives users insights into the data AI models learn from and helps them understand the associations AI models establish. 20 | 21 | - **GPT4All**: An ecosystem of open-source, open-data edge language models designed to ensure unprecedented access to AI technology. GPT4All allows anyone to benefit from AI, regardless of hardware. 22 | 23 | --- 24 | -------------------------------------------------------------------------------- /technologies/novita/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Novita" 3 | description: "Novita is a scalable AI cloud platform offering over 200 powerful APIs for LLMs, image/video generation, speech, and more. With serverless GPU infrastructure, seamless model deployment, and Hugging Face integration, Novita makes advanced AI accessible and affordable for developers and enterprises." 4 | --- 5 | 6 | # Novita 7 | 8 | Novita is a scalable AI cloud platform offering over 200 powerful APIs for LLMs, image/video generation, speech, and more. With serverless GPU infrastructure, seamless model deployment, and Hugging Face integration, Novita makes advanced AI accessible and affordable for developers and enterprises.
9 | 10 | | General | | 11 | | ----------- | --------------------------- | 12 | | Release date | 2023 | 13 | | Author | [Novita](https://novita.ai/) | 14 | | Type | Multimodal AI Cloud Platform | 15 | 16 | --- 17 | 18 | ### Novita - APIs & Services 19 | 20 | Explore Novita’s wide range of ready-to-use APIs for building AI applications 21 | 22 | - [All Models](https://novita.ai/models) Over 200 pre-integrated APIs including text-to-image, speech synthesis, and video generation 23 | - [Quickstart Guide](https://novita.ai/docs/guides/quickstart) Get started with Novita in minutes 24 | - [API Docs](https://novita.ai/docs) Full documentation for integrating APIs 25 | - [Serverless GPU Inference](https://novita.ai/gpus) Scalable, pay-as-you-go inference infrastructure 26 | 27 | --- 28 | 29 | ### Novita - Integrations 30 | 31 | Tools and integrations to supercharge your workflow 32 | 33 | - [Novita on Hugging Face](https://blogs.novita.ai/novita-ai-is-now-available-on-hugging-face/) Use Novita as your inference backend directly from Hugging Face 34 | 35 | --- 36 | 37 | ### Novita - Tutorials & Demos 38 | 39 | Written guides and demos to help you get started 40 | 41 | - [How to Use Novita APIs](https://novita.ai/docs/guides/quickstart) End-to-end guide to integrating Novita APIs 42 | - [Deploy Your Custom Model](https://novita.ai/) Scale your custom models with Novita support 43 | - [Video-to-Image & Diffusion APIs](https://novita.ai/models) Visual AI tools with just an API call 44 | 45 | --- 46 | 47 | ### Novita - Key Resources 48 | 49 | Important links to explore the platform 50 | 51 | - [Homepage](https://novita.ai/) Start here 52 | - [GPU Pricing](https://novita.ai/gpus) Compare pricing for A100, RTX 4090, L40S and more 53 | - [Referral Program](https://novita.ai/referral) Invite friends and earn up to $500 in credits 54 | - [Blog & Updates](https://blogs.novita.ai/) News, feature drops, and tutorials 55 | 56 | --- 57 | 
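For LLM endpoints specifically, Novita can be called through an OpenAI-compatible interface. A sketch follows; the base URL and model id are assumptions to verify against the API docs and model catalog linked above.

```python
def novita_chat(prompt: str, api_key: str) -> str:
    """Call a Novita-hosted LLM through an OpenAI-compatible endpoint.

    Assumptions: the base URL and the model id below are illustrative;
    confirm both against the Novita API docs before use.
    """
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="https://api.novita.ai/v3/openai", api_key=api_key)
    resp = client.chat.completions.create(
        model="meta-llama/llama-3.1-8b-instruct",  # hypothetical id; see the model catalog
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# novita_chat("Summarize what Novita offers.", "YOUR_NOVITA_API_KEY")
```

Because the interface mirrors the OpenAI client, existing OpenAI-based code can usually be pointed at Novita by swapping only the base URL, key, and model id.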
-------------------------------------------------------------------------------- /technologies/open-interpreter/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Open Interpreter" 3 | description: "An open-source, locally running implementation of OpenAI's Code Interpreter." 4 | --- 5 | 6 | # Open Interpreter - Get Started With The Technology 7 | Open Interpreter is an open-source, locally running implementation of OpenAI's Code Interpreter. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [KillianLucas](https://github.com/KillianLucas) | 12 | | Repository | https://github.com/KillianLucas/open-interpreter | 13 | | Type | AI Development Tool | 14 | 15 | 16 | ## Start building with Open Interpreter 17 | 18 | Open Interpreter empowers language models with code execution capabilities in your local environment. It supports Python, JavaScript, Shell, and more. Engage with it through a ChatGPT-like interface in your terminal by running '$ interpreter' after installation. 19 | 20 | This provides a natural-language interface for various tasks: 21 | 22 | * Edit photos, videos, PDFs, and more. 23 | * Control a Chrome browser for research. 24 | * Analyze large datasets, create plots, and perform data cleaning. 25 | 26 | Note, code execution requires user approval. 27 | 28 | ### Open Interpreter Tutorials 29 | 30 | --- 31 | 32 | ## Open Interpreter Libraries and Resources 33 | 34 | A curated list of libraries and technologies to help you build great projects with Open Interpreter.
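Besides the `$ interpreter` terminal interface described above, Open Interpreter can be driven from Python. A sketch against the early module-level API (`pip install open-interpreter`); newer releases expose `from interpreter import interpreter` instead, so check the repo for the version you install.

```python
def run_task(prompt: str):
    """Send one natural-language task to Open Interpreter from Python.

    A sketch against the early module-level API; attribute names may
    differ in newer releases.
    """
    import interpreter  # pip install open-interpreter

    interpreter.auto_run = False  # keep the default: ask before executing generated code
    return interpreter.chat(prompt)

# run_task("Plot the first 20 Fibonacci numbers.")
```

Keeping `auto_run` disabled preserves the approval step mentioned above, so every generated shell or Python snippet is shown to you before it touches your machine.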
35 | 36 | * [GitHub Repo](https://github.com/KillianLucas/open-interpreter) 37 | * [Technology Demo](https://github.com/KillianLucas/open-interpreter#demo) 38 | * [Quick Start Guide](https://github.com/KillianLucas/open-interpreter#quick-start) 39 | * [Contribution](https://github.com/KillianLucas/open-interpreter#contributing) 40 | 41 | --- 42 | -------------------------------------------------------------------------------- /technologies/openai/assistants-api.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Assistants API" 3 | author: "OpenAI" 4 | description: "Simplify AI integration with OpenAI's Assistants API, empowering developers to create AI assistants effortlessly." 5 | --- 6 | 7 | # OpenAI's Assistants API 8 | OpenAI's Assistants API simplifies AI integration for developers, eliminating the need for managing conversation histories and providing access to tools like Code Interpreter and Retrieval. The API also allows developers to integrate their own tools, making it a versatile platform for AI assistant development. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [OpenAI](https://lablab.ai/tech/openai) | 13 | | Documentation | [Link](https://platform.openai.com/docs/assistants/overview) | 14 | | Type | AI Assistant | 15 | 16 | 17 | ## Model Overview 18 | 19 | The Assistants API enables developers to create AI assistants using OpenAI models and tools. It supports various functionalities such as managing conversation threads, triggering responses, and integrating customized tools. 20 | 21 | ### Assistants API Tutorials 22 | 23 | --- 24 | 25 | ### Technology Resources 26 | - [Assistants API Documentation](https://platform.openai.com/docs/assistants/overview) - Comprehensive documentation for integrating and utilizing the Assistants API efficiently. 27 | 28 | The Assistants API allows developers to construct AI assistants within their applications. 
An assistant can leverage models, tools, and knowledge to respond to user queries effectively. Presently supporting Code Interpreter, Retrieval, and Function calling, the API aims to introduce more tools developed by OpenAI while also allowing user-provided tools on the platform. 29 | 30 | To explore its capabilities, developers can use the Assistants Playground or follow the integration guide in the official documentation. The integration process involves defining an Assistant, enabling tools, managing conversation threads, and triggering responses. 31 | --- 32 | -------------------------------------------------------------------------------- /technologies/openai/codex.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Codex" 3 | author: "OpenAI" 4 | description: "OpenAI Codex is an artificial intelligence system that enables developers to translate natural language into code & much more." 5 | --- 6 | 7 | # OpenAI Codex 8 | 9 | OpenAI Codex is an artificial intelligence model developed by OpenAI. It parses natural language and generates code in response. It is used to power GitHub Copilot, a programming autocompletion tool. Codex is a descendant of OpenAI's GPT-3 model, fine-tuned for use in programming applications. OpenAI has released an API for Codex in closed beta. Based on GPT-3, a neural network trained on text, Codex has additionally been trained on 159 gigabytes of Python code from 54 million GitHub repositories. 
You can find more information here [https://openai.com/blog/openai-codex/](https://openai.com/blog/openai-codex/) 10 | 11 | | General | | 12 | | ----------- | ---------------------------------------------------------------- | 13 | | Release date | August 31, 2021 | 14 | | Author | [OpenAI](https://openai.com/) | 15 | | Repository | - | 16 | | Type | Autoregressive, Transformer, Language model | 17 | 18 | ## Start building with Codex 19 | 20 | We have collected the best Codex libraries and resources to help you get started to build with Codex today. To see what others are building with Codex, check out the community built [Codex Use Cases and Applications](/apps/tech/openai/codex). 21 | 22 | 23 | 24 | --- 25 | 26 | ### Boilerplates 27 | 28 | Kickstart your development with a Codex based boilerplate. Boilerplates are a great way to get a head start when building your next project with Codex. 29 | 30 | - [Codex Boilerplate](https://github.com/lablab-ai/codex-python-boilerplate) Create a function just by typing what it should do, with the help of OpenAI Codex. 31 | 32 | --- 33 | 34 | ### Libraries 35 | 36 | A curated list of libraries and technologies to help you build great projects with Codex. 37 | 38 | - [OpenAI Codex Python Client Library](https://github.com/openai/openai-python) The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language. 39 | - [OpenAI Codex NodeJS Client Library](https://www.npmjs.com/package/openai) The OpenAI Node.js library provides convenient access to the OpenAI API from Node.js applications. 40 | 41 | --- 42 | -------------------------------------------------------------------------------- /technologies/openai/dall-e-2.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "DALL-E 2" 3 | author: "OpenAI" 4 | description: "DALL-E 2 is a new AI system that can create realistic images and art from a description in natural language."
5 | --- 6 | 7 | # OpenAI DALL-E 2 8 | 9 | DALL-E 2 is a new version of DALL-E, a generative model that creates original images from natural-language descriptions. 10 | DALL-E 2 has 3.5B parameters, making it large but not nearly as large as GPT-3. Interestingly, it is smaller than its predecessor, which had 12B parameters. 11 | Despite its smaller size, DALL-E 2 generates images at 4x the resolution of DALL-E. Additionally, human judges prefer DALL-E 2 over DALL-E more than 70% of the time for caption matching and photorealism. 12 | 13 | | General | | 14 | | ----------- | ---------------------------------------------------------------- | 15 | | Release date | April 6, 2022 | 16 | | Author | [OpenAI](https://openai.com/) | 17 | | Repository | - | 18 | | Type | Autoregressive, Transformer, Image generation model | 19 | 20 | ## Start building with DALL-E 2 21 | 22 | We have collected the best DALL-E 2 libraries and resources to help you get started building with DALL-E 2 today. To see what others are building with DALL-E 2, check out the community built [DALL-E 2 Use Cases and Applications](/apps/tech/openai/dall-e-2). 23 | 24 | ### OpenAI DALL-E 2 Tutorials 25 | 26 | 27 | 28 | --- 29 | 30 | ### Libraries 31 | 32 | A curated list of libraries and technologies to help you build great projects with DALL-E 2.
33 | 34 | - [Blog](https://openai.com/blog/dall-e-api-now-available-in-public-beta/) OpenAI's blog post about the DALL-E 2 API 35 | - [Documentation](https://beta.openai.com/docs/guides/images) DALL-E 2 Documentation 36 | - [API](https://beta.openai.com/docs/guides/images/usage) DALL-E 2 API Documentation 37 | 38 | --- 39 | -------------------------------------------------------------------------------- /technologies/openai/dall-e-mini.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenAI Dall-e Mini" 3 | author: "OpenAI" 4 | description: "DALL-E mini is an open-source take on OpenAI's DALL-E that is specifically designed to generate small, high-quality images." 5 | --- 6 | 7 | # OpenAI Dall-e Mini 8 | DALL-E mini is a small, open-source artificial intelligence (AI) system inspired by OpenAI's DALL-E and developed by the open-source community. 9 | It is based on the same idea as OpenAI's larger DALL-E model, but is designed to be more 10 | accessible and easier to use. The system generates images from text input.
11 | 12 | | General | | 13 | | --- | --- | 14 | | Release date | January 2021 | 15 | | Author | [OpenAI](https://openai.com/) | 16 | | Research | https://openai.com/research/dall-e | 17 | | Type | Deep learning model | 18 | 19 | --- 20 | 21 | ### Libraries 22 | Discover DALL-E mini 23 | 24 | * [DALL-E mini on Hugging Face](https://huggingface.co/spaces/dalle-mini/dalle-mini) Interactive DALL-E mini demo on Hugging Face Spaces 25 | * [Craiyon](https://www.craiyon.com/) Formerly DALL-E mini 26 | * [DALL-E mini on GitHub](https://github.com/borisdayma/dalle-mini) DALL-E mini Python package and Colab example 27 | -------------------------------------------------------------------------------- /technologies/openai/gpt-4o.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "GPT-4o" 3 | author: "OpenAI" 4 | description: "The GPT-4o family is a multimodal model series for efficient text, image, and audio processing." 5 | --- 6 | 7 | 8 | # GPT-4o Family of Models 9 | 10 | ## Overview 11 | The GPT-4o family, introduced in mid-2024, is OpenAI’s advanced multimodal series designed for high-efficiency, interactive applications. Built on the foundation of the GPT-4 architecture, the 4o models process text, images, and audio, supporting highly responsive and nuanced interactions. 12 | 13 | ## Key Features 14 | 15 | 1. **Multimodal Capabilities** – Handles text, images, and voice, making it versatile for applications across domains such as customer support, education, and content creation. 16 | 2. **Real-Time Voice Interaction** – Responds to audio inputs with minimal latency, allowing for natural, conversational exchanges. 17 | 3. **Multilingual Support** – Supports over 50 languages, enabling global accessibility and adaptability. 18 | 4.
**Cost-Effectiveness** – The model runs twice as fast as GPT-4 Turbo at 50% lower cost, making it attractive for businesses with high interaction volumes. 19 | 20 | ## Variations 21 | 22 | - **GPT-4o Base** – Designed for general multimodal applications, optimized for balanced performance across text, image, and audio inputs. 23 | - **GPT-4o Mini** – A smaller, cost-effective version for high-demand, lower-cost applications, ideal for scaling large deployments. 24 | 25 | ## Applications 26 | 27 | - **Customer Support** – Enables real-time support across text, audio, and images, enhancing user experience. 28 | - **Content Creation and Translation** – Automates content generation and accurate translation across multiple languages. 29 | - **Accessibility Solutions** – Enhances accessibility tools for people with disabilities, using voice and visual processing. 30 | 31 | ## Getting Started with GPT-4o 32 | 33 | While a dedicated tech page is forthcoming, OpenAI offers APIs for developers to experiment with the GPT-4o family in various interactive and multimodal applications. 34 | -------------------------------------------------------------------------------- /technologies/openai/gpt3-5.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "GPT-3.5" 3 | author: "OpenAI" 4 | description: "GPT stands for Generative Pre-trained Transformer; GPT-3.5 is a fine-tuned version of GPT-3. It is an autoregressive language model that uses deep learning to produce human-like text. The GPT-3.5 family includes models such as gpt-3.5-turbo and text-davinci-003." 5 | --- 6 | 7 | # OpenAI GPT-3.5 8 | 9 | GPT-3.5 is a set of models that improve on GPT-3 and can understand as well as generate natural language or code. It is an autoregressive large language model (LLM) from [OpenAI](https://lablab.ai/tech/openai) that uses deep learning to produce human-like text.
It is a fine-tuned version of GPT-3, the third-generation language prediction model in the GPT series created by OpenAI. GPT-3.5 has garnered significant attention for its ability to understand and generate human-like text. With 175 billion parameters, GPT-3.5 stood as one of the largest and most powerful language models ever constructed at the time of its release. 10 | 11 | GPT-3.5's performance stems not only from its sheer size but also from its refined architecture. Harnessing the power of deep learning, GPT-3.5 delivers consistently accurate and relevant results, raising the standard for language models. 12 | 13 | | General | | 14 | | ----------- | ---------------------------------------------------------------- | 15 | | Release date | March 15, 2022 | 16 | | Author | [OpenAI](https://openai.com/) | 17 | | Type | Autoregressive, Transformer, Language model | 18 | 19 | ## Start building with OpenAI GPT-3.5 20 | 21 | OpenAI GPT-3.5 has a rich ecosystem of libraries and resources to help you get started. We have collected the best GPT-3.5 libraries and resources to help you get started building with GPT-3.5 today. To see what others are building with GPT-3.5, check out the community built [GPT-3 Use Cases and Applications](/apps/tech/openai/gpt3). 22 | 23 | ### OpenAI GPT-3.5 Tutorials 24 | 25 | 26 | 27 | ### OpenAI GPT-3.5 Boilerplates 28 | 29 | Kickstart your development with a GPT-3.5-based boilerplate. Boilerplates are a great way to get a head start on your next project with GPT-3.5. 30 | 31 | --- 32 | 33 | ### OpenAI GPT-3.5 Libraries 34 | 35 | A curated list of libraries and technologies to help you build great projects with GPT-3.5.
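Before diving into the libraries, it helps to see the shape of a Chat Completions request for a GPT-3.5 model. The Python and Node.js client libraries construct and send this JSON for you; the prompt text below is an arbitrary example:

```python
# Shape of a Chat Completions request body for a GPT-3.5 model
# (sent to POST /v1/chat/completions). The client libraries build
# this JSON for you; the message contents are arbitrary examples.
request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GPT-3.5 in one sentence."},
    ],
    "temperature": 0.7,  # higher values produce more varied output
}

# The assistant's reply comes back in choices[0].message.content.
roles = [m["role"] for m in request["messages"]]
```

The optional `system` message sets the assistant's behavior; the `user` message carries the actual prompt.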
36 | 37 | --- 38 | -------------------------------------------------------------------------------- /technologies/openai/gpt4.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenAI GPT-4" 3 | author: "OpenAI" 4 | description: "GPT-4 is OpenAI's 4th generation Generative Pre-trained Transformer. It is a multimodal large language model that uses deep learning to produce human-like text, accepting image and text inputs. GPT-4 is OpenAI's most advanced system, producing safer and more useful responses." 5 | --- 6 | 7 | # OpenAI GPT-4 8 | GPT-4 is OpenAI's 4th-generation Generative Pre-trained Transformer, the successor to GPT-3.5. 9 | It is a multimodal large language model that uses deep learning to generate human-like text, and it 10 | can accept both image and text inputs. GPT-4 is OpenAI's most advanced system, producing safer and more useful 11 | responses; it hallucinates less than previous models and is a major step forward. There are two versions 12 | with context windows of 8192 and 32768 tokens, a significant improvement over GPT-3.5 and GPT-3. 13 | OpenAI introduced the concept of a "system message" that can be given to chat-optimized versions of GPT-4 in 14 | order to set the tone and task of the response. 15 | 16 | | General | | 17 | | --- | --- | 18 | | Release date | March 14, 2023 | 19 | | Author | [OpenAI](https://openai.com/) | 20 | | Research | https://openai.com/research/gpt-4 | 21 | | Type | Autoregressive, Transformer, Language model | 22 | 23 | --- 24 | 25 | ### Libraries 26 | A curated list of libraries and technologies to help you build great projects with GPT-4. 27 | 28 | * [Research Paper](https://openai.com/research/gpt-4) OpenAI's GPT-4 research paper 29 | * [Product Page](https://openai.com/product/gpt-4) GPT-4 Product description page 30 | * [Playground](https://beta.openai.com/playground) Quickly create and test your GPT-4 app ideas.
You need to choose Mode: 'Chat' to be able to use GPT-4 31 | -------------------------------------------------------------------------------- /technologies/openai/gpts.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Custom GPTs: Tailored AI Models for Diverse Tasks" 3 | author: "OpenAI" 4 | description: "Empower your workflows with tailored AI models. Custom GPTs offer versatile solutions for personalized tasks and seamless integration. Create, customize, and share your GPTs effortlessly." 5 | --- 6 | 7 | # Custom GPTs: Redefining AI Customization 8 | Custom GPTs redefine your ChatGPT experience, enabling the creation of personalized AI models tailored to specific tasks. Whether you strive to optimize your project management workflows, simplify complex legal jargon, or generate captivating marketing copy, Custom GPTs offer a versatile solution to meet your needs. 9 | 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | November 6, 2023 | 14 | | Author | [OpenAI](https://lablab.ai/tech/openai) | 15 | | Type | Customizable AI model | 16 | 17 | 18 | Custom GPTs extend ChatGPT's capabilities, enabling tailored AI models that seamlessly integrate external data sources and cater to personal, corporate, or public requirements without any coding skills. 19 | 20 | ## Key Features 21 | 22 | - **Tailored Solutions**: Customize AI models for various needs. 23 | - **Real-World Integration**: Seamlessly incorporate external data sources. 24 | - **Simplified Creation**: Build AI models without coding knowledge. 25 | 26 | For more details and updates, refer to the [OpenAI Blog - Introducing GPTs](https://openai.com/blog/introducing-gpts) and explore diverse [Use Cases](https://openai.com/chatgpt#do-more-with-gpts) demonstrating the potential of Custom GPTs.
27 | 28 | 29 | ### GPTs Tutorials 30 | 31 | --- 32 | -------------------------------------------------------------------------------- /technologies/openai/image-generation-api.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "DALL·E Image Generation API" 3 | author: "OpenAI" 4 | description: "Explore image generation capabilities with the cutting-edge DALL·E API." 5 | --- 6 | 7 | # Image Generation DALL·E API 8 | The DALL·E API empowers users to create, manipulate, and generate images through textual prompts, showcasing unprecedented advancements in AI-driven image creation. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [OpenAI](https://lablab.ai/tech/openai) | 13 | | Documentation | [Link](https://platform.openai.com/docs/guides/images/image-generation?context=node) | 14 | | Type | AI Image Generation | 15 | 16 | 17 | ## Introduction to DALL·E Image Generation 18 | 19 | The DALL·E API facilitates image creation and manipulation through textual prompts. Whether generating images from scratch or altering existing ones, DALL·E's cutting-edge capabilities revolutionize the creative process. 20 | 21 | DALL·E, offered in two versions - DALL·E 3 and DALL·E 2, enables three core functionalities: 22 | 23 | - **Creating Images from Text Prompts**: Both DALL·E versions 24 | - **Editing Images with Textual Input**: Exclusive to DALL·E 2 25 | - **Generating Variations of Existing Images**: Exclusive to DALL·E 2 26 | 27 | 28 | ### State-of-the-Art Image Generation 29 | 30 | DALL·E's flexibility spans artistic to photorealistic images, responding adeptly to natural language descriptions. Continual advancements in image quality, latency, scalability, and usability underscore OpenAI's commitment to pushing the boundaries of AI-driven image creation. 
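The "Creating Images from Text Prompts" functionality described above boils down to a prompt plus a few sizing parameters. This sketch shows the shape of a generation request as plain data; parameter names follow the OpenAI Images API, and the prompt is an arbitrary example:

```python
# Shape of a DALL·E image-generation request body
# (sent to POST /v1/images/generations). The client libraries
# build this JSON for you; the prompt is an arbitrary example.
request = {
    "model": "dall-e-3",
    "prompt": "a watercolor painting of a lighthouse at dawn",
    "n": 1,               # number of images to generate
    "size": "1024x1024",  # output resolution
}
```

The response contains a URL (or base64 data) for each generated image; the edit and variation endpoints accept an input image alongside similar parameters.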
31 | 32 | ### Built-in Moderation 33 | 34 | With insights from deploying DALL·E to millions worldwide, the API incorporates trust and safety measures, including filters for sensitive content. This commitment to responsible deployment ensures developers can focus on innovation while relying on built-in mitigations. 35 | 36 | For inspiring use cases and comprehensive insights, explore the [OpenAI article](https://openai.com/blog/dall-e-api-now-available-in-public-beta). 37 | 38 | ### Image Generation DALL·E API Tutorials 39 | 40 | --- 41 | -------------------------------------------------------------------------------- /technologies/openai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenAI" 3 | description: "OpenAI is an American artificial intelligence (AI) research laboratory. OpenAI systems run on an Azure-based supercomputing platform from Microsoft." 4 | --- 5 | 6 | # OpenAI Overview 7 | 8 | **About OpenAI** 9 | OpenAI is a leading AI research lab founded in 2015, focused on creating friendly AGI (Artificial General Intelligence) that is safe and beneficial for humanity. The organization develops state-of-the-art AI models and tools across various domains, including natural language processing, image generation, and voice recognition. 10 | 11 | ## General Information 12 | 13 | | Attribute | Details | 14 | |---------------|-----------------------------------------------| 15 | | **Company** | OpenAI | 16 | | **Founded** | December 11, 2015 | 17 | | **Repository**| [GitHub](https://github.com/openai/) | 18 | | **Discord** | [Join the OpenAI channel on Discord](https://discord.com/invite/openai) | 19 | 20 | ## Popular Models and Technologies 21 | 22 | This is a quick summary of some of OpenAI's widely adopted and impactful models: 23 | 24 | 1. 
**[GPT-4](https://openai.com/gpt-4/)** – The fourth-generation language model, multimodal, capable of handling text and images with advanced reasoning and safety features. 25 | 2. **[GPT-3](https://openai.com/gpt-3/)** – Known for its versatility, GPT-3 is used in diverse applications such as chatbots, content creation, and interactive experiences. 26 | 3. **[GPT-4o Family](https://openai.com/hello-gpt-4o/)** – A multimodal powerhouse, GPT-4o extends OpenAI’s capabilities in text, image, and voice applications. 27 | 4. **[o1 Series](https://openai.com/introducing-openai-o1-preview/)** – Optimized for reasoning and complex problem-solving in fields like math and coding. 28 | 5. **[Whisper](https://openai.com/whisper/)** – A robust automatic speech recognition (ASR) model handling multiple languages and accents with impressive accuracy. 29 | 6. **[DALL-E 2](https://openai.com/dall-e-2/)** – A model generating realistic images from text descriptions, popular in creative fields for visual content creation. 30 | 7. **[Codex](https://openai.com/openai-codex/)** – Powering GitHub Copilot, Codex converts natural language into code, facilitating faster programming and code generation. 31 | 32 | ## Integrating OpenAI's Technology 33 | 34 | OpenAI provides extensive documentation, APIs, and resources for developers to implement its models across diverse applications. While specific tech pages for individual models are in development, we encourage developers to leverage OpenAI’s unified resources. 35 | -------------------------------------------------------------------------------- /technologies/openai/openai-gym.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenAI gym" 3 | author: "OpenAI" 4 | description: "OpenAI gym is a toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Pinball." 
5 | --- 6 | 7 | # OpenAI gym 8 | Gym is a toolkit from OpenAI that offers a wide array of simulated environments (e.g. Atari games, board games, 2D and 3D physical simulations) for you to train agents, benchmark them, and create new Reinforcement Learning algorithms. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | April 27, 2016 | 13 | | Author | [OpenAI](https://openai.com/) | 14 | | Repository | https://github.com/openai/gym | 15 | | Type | Reinforcement Learning | 16 | 17 | --- 18 | 19 | ### Libraries 20 | A curated list of libraries and technologies to help you play with OpenAI Gym. 21 | 22 | * [Gym Library](https://www.gymlibrary.dev/) Gym is a standard API for reinforcement learning, and a diverse collection of reference environments 23 | * [OpenAI Gym repository](https://github.com/openai/gym) Gym is an open source Python library for developing and comparing reinforcement learning algorithms by providing a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API. Since its release, Gym's API has become the field standard for doing this. 24 | * [Gymnasium](https://github.com/Farama-Foundation/Gymnasium) Gymnasium is an open source Python library for developing and comparing reinforcement learning algorithms by providing a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API. This is a fork of OpenAI's Gym library. -------------------------------------------------------------------------------- /technologies/openai/shap-e.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "OpenAI Shap-E" 3 | author: "OpenAI" 4 | description: "OpenAI Shap-E is a text-to-3D generation model."
5 | --- 6 | 7 | # OpenAI Shap-E 8 | Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | May 2023 | 13 | | Author | [OpenAI](https://openai.com/) | 14 | | Research Paper | https://arxiv.org/abs/2305.02463 | 15 | | Repository | https://github.com/openai/shap-e | 16 | | Type | Conditional generative model for 3D assets | 17 | 18 | --- 19 | 20 | ### OpenAI Shap-E Tutorials 21 | 22 | 23 | 24 | --- 25 | 26 | ### Libraries 27 | A curated list of libraries and technologies to help you play with OpenAI Shap-E. 28 | 29 | * [Shap-E Hugging Face Space](https://huggingface.co/spaces/hysts/Shap-E) 30 | * [OpenAI Shap-E repository](https://github.com/openai/shap-e) 31 | * [Sample text to 3D](https://github.com/openai/shap-e/blob/main/shap_e/examples/sample_text_to_3d.ipynb) 32 | * [Encode Model](https://github.com/openai/shap-e/blob/main/shap_e/examples/encode_model.ipynb) 33 | -------------------------------------------------------------------------------- /technologies/pinecone/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Pinecone" 3 | description: "Discover the power of Pinecone for lightning-fast vector similarity search." 4 | --- 5 | 6 | # Pinecone: Next-Gen Vector Similarity Search 7 | 8 | Pinecone is a cutting-edge technology provider specializing in vector similarity search. Founded in 2020, Pinecone offers a scalable and efficient solution for searching through high-dimensional data.
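The core operation behind a vector database like Pinecone is nearest-neighbor search over embeddings. A toy sketch in plain Python shows the underlying idea; Pinecone does this at scale with approximate indexes, and the ids and vectors here are invented examples:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# A tiny "index" of id -> embedding. Real embeddings have hundreds
# or thousands of dimensions; 3-d vectors keep the sketch readable.
index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.7, 0.7, 0.0],
    "doc-c": [0.0, 0.0, 1.0],
}

query = [0.9, 0.1, 0.0]
# Rank stored vectors by similarity to the query vector.
ranked = sorted(index, key=lambda k: cosine(index[k], query), reverse=True)
```

A vector database replaces the linear scan in `sorted` with an approximate index so that queries stay fast over millions of vectors.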
9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [Pinecone](https://www.pinecone.io/) | 13 | | Repository | https://github.com/pinecone-io | 14 | | Type | Vector database for ML apps | 15 | 16 | ## Key Features 17 | 18 | - Swiftly finds similar items in vast datasets, providing precise results for recommendations and searches 19 | - Offers near-instant responses, ideal for applications needing quick feedback 20 | - Integrates into existing applications with minimal setup 21 | - Handles large datasets and ensures consistent performance as data grows 22 | 23 | ### Start building with Pinecone's products 24 | 25 | Pinecone offers a suite of products designed to streamline vector similarity search and accelerate innovation in various fields. Dive into Pinecone's offerings and unleash the potential of your data-driven applications. Don't forget to explore the apps created with Pinecone technology showcased during lablab.ai hackathons! 26 | 27 | ### List of Pinecone's products 28 | 29 | ## Pinecone SDK 30 | 31 | The Pinecone SDK empowers developers to integrate vector similarity search capabilities into their applications seamlessly. With easy-to-use APIs and robust documentation, developers can leverage the power of Pinecone's technology to enhance search experiences and unlock new insights. 32 | 33 | ## Pinecone Console 34 | 35 | The Pinecone Console provides a user-friendly interface for managing and querying vector indexes. With intuitive controls and real-time monitoring features, users can efficiently navigate through vast datasets and optimize search performance. 36 | 37 | ## Pinecone Hub 38 | 39 | Pinecone Hub is a centralized repository of pre-trained embeddings and models, offering a treasure trove of resources for accelerating development cycles. From image recognition to natural language processing, Pinecone Hub provides access to a diverse range of embeddings for various use cases. 
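At the code level, the SDK described above centers on two operations: upserting vectors with ids, then querying by vector. This sketch shows the argument shapes as plain data; keyword names follow the Pinecone Python client, and the ids and values are invented examples:

```python
# Shapes of the two core Pinecone operations, as plain data.
# Keyword names follow the Pinecone Python client (index.upsert /
# index.query); ids, values, and metadata are invented examples.
upsert_request = {
    "vectors": [
        {"id": "doc-a", "values": [0.1, 0.2, 0.3], "metadata": {"topic": "intro"}},
    ],
    "namespace": "example",
}

query_request = {
    "vector": [0.1, 0.2, 0.3],   # embedding of the query
    "top_k": 3,                  # number of nearest neighbors to return
    "include_metadata": True,
    "namespace": "example",
}
```

A query returns the `top_k` nearest stored vectors by similarity, each with its id, score, and (optionally) metadata.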
40 | 41 | ### System Requirements 42 | 43 | Pinecone runs on Linux, macOS, and Windows systems, needing a minimum of 4 GB RAM and sufficient storage for datasets. A multicore processor is recommended for optimal performance, with stable internet for cloud access. Modern browsers with JavaScript support are necessary, while GPU acceleration is optional for enhanced performance. 44 | -------------------------------------------------------------------------------- /technologies/portkey/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Portkey" 3 | description: "Discover the power of Portkey for seamless data integration and transformation." 4 | --- 5 | 6 | # Portkey: Unlocking Next-Gen Data Solutions 7 | 8 | Portkey is a leading technology provider specializing in data integration and transformation solutions. Founded in 2018, Portkey offers innovative tools and services to streamline data workflows and unlock valuable insights. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Author | [Portkey](https://portkey.ai/) | 13 | | Repository | https://github.com/Portkey-Wallet| 14 | | Type | Control Panel for AI Apps | 15 | 16 | ## Key Features 17 | 18 | - Simplifies connecting different data sources, making it easy to extract, transform, and load data for analysis 19 | - Allows users to modify and enhance their data without complex coding 20 | - Provides a single platform for managing data workflows, offering better visibility and control over data pipelines 21 | - Adjusts to changing data needs, ensuring performance and reliability 22 | 23 | ### Start building with Portkey's products 24 | 25 | Portkey provides a comprehensive suite of products designed to simplify data integration and transformation processes. Dive into Portkey's offerings to accelerate your data projects and unlock new possibilities. Don't miss out on exploring the apps created with Portkey technology showcased during lablab.ai hackathons! 
26 | ### List of Portkey's products 27 | 28 | ## Portkey Data Hub 29 | Portkey Data Hub is a centralized platform for managing and orchestrating data workflows. With advanced features for data ingestion, transformation, and governance, Portkey Data Hub empowers organizations to streamline their data pipelines and drive innovation. 30 | 31 | ## Portkey Connect 32 | Portkey Connect is a powerful data integration tool that enables seamless connectivity between diverse data sources. From databases to cloud applications, Portkey Connect simplifies the process of extracting, transforming, and loading data, allowing organizations to leverage their data assets more effectively. 33 | 34 | ## Portkey Transform 35 | Portkey Transform is an intuitive data transformation tool that enables users to easily manipulate and enrich their data. With a user-friendly interface and a rich library of transformation functions, Portkey Transform empowers users to cleanse, reshape, and enrich their data to meet their specific business needs. 36 | 37 | ### System Requirements 38 | 39 | Portkey runs on Windows, macOS, and Linux systems, needing at least 4 GB RAM and sufficient storage for datasets. It operates on standard processors but performs better with multicore CPUs. Stable internet and modern browsers are necessary for cloud access and web-based interfaces, respectively. 40 | -------------------------------------------------------------------------------- /technologies/qdrant/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Qdrant" 3 | description: "Qdrant is a search engine and database designed for vector similarity. With its user-friendly API, it offers a production-ready service for storing, managing, and searching vectors, including an additional payload. Qdrant is particularly well-suited for neural-network or semantic-based matching, faceted search, and other applications that require extended filtering support."
4 | --- 5 | 6 | # Qdrant 7 | 8 | Qdrant is a high-performance search engine and database designed specifically for vector similarity. 9 | Written in Rust, Qdrant offers fast and reliable performance even under high load, making it an ideal 10 | choice for applications that demand speed and scalability. 11 | With Qdrant, you can turn your embeddings or neural network encoders into powerful, full-fledged 12 | applications for a wide range of use cases. Whether you need to match, search, recommend, or perform 13 | other complex operations on large datasets, Qdrant provides a convenient and user-friendly API that simplifies the process. 14 | Qdrant's extended filtering support also makes it well-suited for a variety of applications, including 15 | faceted search and semantic-based matching. Plus, with its managed solution available on the Qdrant Cloud, 16 | you can easily deploy and manage your applications with minimal setup and maintenance. 17 | 18 | | General | | 19 | | ----------- | ------------------------------ | 20 | | Release date | October 29, 2021 | 21 | | Author | [Qdrant](https://qdrant.tech/) | 22 | | Type | Search engine, database | 23 | 24 | --- 25 | 26 | ## Tutorials 27 | 28 | Great tutorials on how to build with Qdrant 29 | 30 | 31 | 32 | ### Qdrant - General sources 33 | 34 | Learn even more about Qdrant! 35 | 36 | - [Qdrant Cloud](https://cloud.qdrant.io/) Qdrant Cloud is a managed solution for Qdrant. It provides a production-ready service for storing, managing, and searching vectors 37 | - [Qdrant Repository](https://github.com/qdrant/qdrant) Qdrant is an open-source search engine and database designed for vector similarity 38 | - [Qdrant Documentation](https://qdrant.github.io/qdrant/redoc/index.html) Qdrant Documentation 39 | - [Qdrant Demo](https://github.com/qdrant/qdrant_demo) Qdrant Demo Project.
This project contains a set of examples of using Qdrant 40 | 41 | ### Qdrant x Cohere resources 42 | 43 | Use Qdrant and Cohere to make powerful applications! 44 | 45 | - [Basic example of the Qdrant and Cohere integration](https://qdrant.tech/documentation/integrations/#cohere) This example shows how to use Qdrant and Cohere to start with this stack 46 | - [Q&A application](https://qdrant.tech/articles/qa-with-cohere-and-qdrant/) How to build a Q&A application with Qdrant and Cohere 47 | 48 | --- 49 | -------------------------------------------------------------------------------- /technologies/reinforcement-learning/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Reinforcement Learning" 3 | description: "Reinforcement learning is a computational approach to machine learning where agents take actions in an environment to maximize some notion of cumulative reward." 4 | --- 5 | 6 | # Reinforcement Learning 7 | 8 | Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought 9 | to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement 10 | learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. 
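The agent-environment-reward loop described above can be made concrete with tabular Q-learning on a toy corridor environment. This is a minimal sketch of the idea, not tied to any particular RL library; the corridor, rewards, and hyperparameters are invented for illustration:

```python
import random

# Tabular Q-learning on a 5-state corridor. The agent starts at
# state 0 and earns reward 1 for reaching state 4; actions are
# 0 = left, 1 = right. A minimal sketch of maximizing cumulative
# reward, not tied to any RL library.
random.seed(0)
N, ACTIONS = 5, (0, 1)
alpha, gamma = 0.5, 0.9  # learning rate, discount factor
Q = [[0.0, 0.0] for _ in range(N)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    done = nxt == N - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(200):                   # episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)     # explore uniformly; Q-learning is off-policy
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy from the learned values: "right" in every non-terminal state.
policy = [q.index(max(q)) for q in Q[:-1]]
```

Even though the behavior policy is purely random, the learned Q-values converge toward the optimal policy, which is the defining "off-policy" property of Q-learning.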
11 | 12 | | General | | 13 | | ----------- | ---- | 14 | | Release date | 1960 | 15 | 16 | --- 17 | 18 | ### About Reinforcement Learning 19 | 20 | - [What is reinforcement learning?](https://en.wikipedia.org/wiki/Reinforcement_learning) Comprehensive article on Wikipedia about reinforcement learning 21 | 22 | --- 23 | -------------------------------------------------------------------------------- /technologies/stability-ai/stable-lm.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Stable LM" 3 | author: "stability-ai" 4 | description: "An experimental 3 billion parameter decoder-only language model designed for sustainable, high-performance language generation on smart devices." 5 | --- 6 | 7 | # Stable LM: Sustainable, High-Performance Language Model 8 | 9 | Stable LM is a groundbreaking language model introduced by Stability AI. With 3 billion parameters, this compact language model is specifically designed for high-performance language generation on smart devices, such as handhelds and laptops. 10 | 11 | Its efficiency and affordability make it an ideal choice for a wide range of applications, from creative writing assistance to specialized programming tasks. 12 | 13 | | General | | 14 | | --- | --- | 15 | | Release date | October 2, 2023 | 16 | | Author | Stability AI | 17 | | Repository | https://github.com/Stability-AI/StableLM | 18 | | Type | Language Model | 19 | 20 | 21 | ## Start building with Stable LM 22 | StableLM-3B-4E1T is a 3 billion parameter decoder-only language model pre-trained on 1 trillion tokens of diverse English and code datasets for 4 epochs. 23 | 24 | ### Stable LM Tutorials 25 | 26 | --- 27 | 28 | 29 | ### Stable LM Usage and Resources 30 | StableLM-3B-4E1T is intended to serve as a foundational base model for application-specific fine-tuning. Developers are encouraged to evaluate and fine-tune the model for safe performance in downstream applications.
31 | 32 | * [Model on HuggingFace](https://huggingface.co/stabilityai/stablelm-3b-4e1t) 33 | * [Intro article from Stability AI](https://stability.ai/blog/stable-lm-3b-sustainable-high-performance-language-models-smart-devices) 34 | * [GitHub Repository](https://github.com/Stability-AI/StableLM#stablelm-alpha-v2) 35 | 36 | #### Stable LM models 37 | 38 | * [StableLM-Base-Alpha-7B-v2](https://huggingface.co/stabilityai/stablelm-base-alpha-7b-v2) 39 | * [StableLM-Base-Alpha-3B-v2](https://huggingface.co/stabilityai/stablelm-base-alpha-3b-v2) 40 | * [StableLM-3B-4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t) 41 | 42 | --- 43 | -------------------------------------------------------------------------------- /technologies/stability-ai/stable-video.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Stable Video: Revolutionizing Video Generation with AI" 3 | author: "stability-ai" 4 | description: "Discover Stable Video by Stability AI, the first open generative AI video model transforming media and education." 5 | --- 6 | 7 | # Stability AI's Stable Video 8 | 9 | Stable Video is Stability AI’s pioneering venture into generative AI video models, designed for a broad spectrum of video applications in media, entertainment, education, and marketing. It enables users to convert text and image inputs into dynamic scenes, bringing concepts to life in a cinematic format. 10 | 11 | | General | | 12 | | --- | --- | 13 | | Release date | November 21, 2023 | 14 | | Author | [Stability AI](https://stability.ai) | 15 | | Repository | [Link](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt) | 16 | | Type | Generative image-to-video model | 17 | 18 | 19 | ## **Stable Video Specifications** 20 | 21 | Stable Video comes in two models, generating 14 and 25 frames respectively, at frame rates ranging from 3 to 30 FPS. These models have been shown to surpass leading closed models in user preference studies.
22 | 23 | * **Video duration:** 2-5 seconds 24 | * **Frame rate:** Up to 30 FPS 25 | * **Processing time:** 2 minutes or less 26 | 27 | ### **Stable Video License** 28 | 29 | Stable Video Diffusion is available under a non-commercial community license. Review the full License and Stability’s Acceptable Use Policy [here](https://stability.ai/license). 30 | 31 | ### **Model Sources** 32 | 33 | * Repository: [Stability AI GitHub](https://github.com/Stability-AI/generative-models) 34 | * Paper: [Stable Video Diffusion Research](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets) 35 | 36 | ### **Evaluation** 37 | 38 | A user preference study comparing SVD Image-to-Video with GEN-2 and PikaLabs shows a higher preference for SVD in terms of video quality. Details are available in the research paper. 39 | 40 | ### **Uses** 41 | 42 | Intended for research, Stable Video can be utilized in generative model research, model safety, understanding model limitations, artistic creation, and educational tools. It is not designed for factual or true representations and should adhere to Stability AI's Acceptable Use Policy. 43 | 44 | ### **Limitations and Recommendations** 45 | 46 | The model produces short videos with potential limitations in motion, photorealism, and text rendering. It is primarily for research purposes. 47 | -------------------------------------------------------------------------------- /technologies/stability-ai/stablecode.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "StableCode" 3 | author: "stability-ai" 4 | description: "StableCode-Instruct-Alpha-3B is a decoder-only model with 3 billion parameters, fine-tuned for instructions and pre-trained." 5 | --- 6 | 7 | # StableCode 8 | StableCode-Instruct-Alpha-3B is a decoder-only model with 3 billion parameters.
Pre-trained on top programming languages from the StackOverflow developer survey, it's a product by Stability AI intended for coding. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | August 8, 2023 | 13 | | Author | [Stability AI](https://lablab.ai/tech/stability-ai/stable-diffusion) | 14 | | Repository | https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b | 15 | 16 | 17 | ## StableCode: Features & Highlights 18 | 19 | - Trained on a diverse set of languages from the stack-dataset (v1.2) by BigCode, including Python, Go, Java, and more. 20 | - Fine-tuned with ~120,000 code instruction/response pairs for specific use cases. 21 | - Extended context window for precise autocomplete suggestions. It can handle 2-4X more code than earlier models. 22 | - Designed to assist both seasoned and novice developers. 23 | 24 | --- 25 | 26 | ### StableCode Resources 27 | A curated list of libraries and technologies to help you build great projects with StableCode.
28 | 29 | * [StableCode-Instruct-Alpha-3B Documentation](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b) 30 | * [StableCode-Completion-Alpha-3B-4K Documentation](https://huggingface.co/stabilityai/stablecode-completion-alpha-3b-4k) 31 | * [Model Usage](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b#usage) 32 | * [Model Architecture](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b#model-architecture) 33 | * [Use and Limitations](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b#use-and-limitations) 34 | 35 | --- 36 | -------------------------------------------------------------------------------- /technologies/stable-dreamfusion/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Stable Dreamfusion" 3 | description: "Stable-Dreamfusion is an innovative PyTorch implementation that combines text-to-2D modeling with Dreamfusion technology to create 3D visuals" 4 | --- 5 | 6 | # Stable Dreamfusion 7 | 8 | Stable-Dreamfusion is a PyTorch implementation of text-to-3D generation that combines the Stable Diffusion text-to-2D model with Dreamfusion technology. 9 | 10 | Translating textual prompts into rich 3D visuals, the model leverages a latent diffusion approach and a multi-resolution grid encoder to offer rapid rendering. While it offers promising potential in fields like 3D gaming, virtual reality, and film production, note that the current version differs from the original paper, resulting in varying generation quality. 11 | 12 | With ongoing development and recent support for Perp-Neg, Stable-Dreamfusion stands as a promising frontier in 3D AI technologies.
13 | 14 | | **General** | | 15 | |-------------|---| 16 | | Repository | [https://github.com/ashawkey/stable-dreamfusion](https://github.com/ashawkey/stable-dreamfusion) | 17 | | Type | 3D AI Models | 18 | 19 | ## Stable-Dreamfusion Resources: 20 | 21 | - [Installation Guide](https://github.com/ashawkey/stable-dreamfusion#install) 22 | - [The Usage of the Model](https://github.com/ashawkey/stable-dreamfusion#install) 23 | - [Hugging Face Implementation](https://huggingface.co/Webaverse/Stable-Dreamfusion) 24 | -------------------------------------------------------------------------------- /technologies/stanford-alpaca/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Alpaca" 3 | description: "Stanford Alpaca is an open-source project that demonstrates the capabilities of an instruction-following LLaMA model. Developed by a team of researchers at Stanford University, Alpaca is designed to understand and execute tasks based on user instructions." 4 | --- 5 | 6 | # Alpaca 7 | 8 | Stanford Alpaca is an open-source project that demonstrates the capabilities of an instruction-following LLaMA model. Developed by a team of researchers at Stanford University, Alpaca is designed to understand and execute tasks based on user instructions. The project provides a dataset, data generation process, and fine-tuning code for reproducibility. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Repository | https://github.com/tatsu-lab/stanford_alpaca | 13 | | Type | Instruction-Following LLaMA Model | 14 | 15 | ### Alpaca - Resources 16 | Resources to get started with Stanford Alpaca: 17 | 18 | * [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) GitHub Repository 19 | * [Release Blog Post](https://crfm.stanford.edu/2023/03/13/alpaca.html) Detailed information about the Alpaca model, its potential harm and limitations, and the team's thought process behind releasing a reproducible model. 
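Fine-tuning formats each of the 52K instruction-following examples with a fixed prompt template before training. A minimal sketch of that format (the template wording below is approximated from the repository; treat it as illustrative and check the `stanford_alpaca` repo for the canonical text):

```python
# Sketch of the Alpaca-style prompt template. Wording is approximate;
# see the stanford_alpaca repository for the canonical version.

TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

TEMPLATE_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format one instruction-following example into a prompt string."""
    if input_text:
        return TEMPLATE_WITH_INPUT.format(instruction=instruction, input=input_text)
    return TEMPLATE_NO_INPUT.format(instruction=instruction)

print(build_prompt("Summarize the text.", "Alpaca is a fine-tuned LLaMA model."))
```

During fine-tuning, the model learns to produce the text that follows `### Response:`, which is why inference-time prompts should use the same template.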
20 | 21 | --- 22 | 23 | ### Alpaca - General Information 24 | 25 | * Fine-tuned from a 7B LLaMA model on 52K instruction-following data 26 | * Built using techniques from the Self-Instruct paper with modifications 27 | * Capable of understanding and executing tasks based on user instructions 28 | * Provides a dataset, data generation process, and fine-tuning code for reproducibility 29 | 30 | --- 31 | 32 | ### Alpaca - Setup 33 | Setup instructions for Stanford Alpaca: 34 | 35 | * [Data Generation Process](https://github.com/tatsu-lab/stanford_alpaca#data-generation-process) Instructions on generating the instruction-following dataset used for fine-tuning the Alpaca model. 36 | * [Fine-tuning](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) A guide to fine-tune LLaMA and OPT models using the provided dataset and code. 37 | * [Recovering Alpaca Weights](https://github.com/tatsu-lab/stanford_alpaca#recovering-alpaca-weights) Instructions for recovering the Alpaca-7B weights from the released weight diff. 38 | --- -------------------------------------------------------------------------------- /technologies/superagi/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "SuperAGI" 3 | description: "SuperAGI is a dev-first, open-source autonomous AI agent framework - Enabling developers to build, manage & run useful autonomous agents quickly and reliably." 4 | --- 5 | 6 | # SuperAGI 7 | 8 | SuperAGI is a dev-first, open-source autonomous AI agent framework - Enabling developers to build, manage & run useful autonomous agents quickly and reliably. Users can extend Agent capabilities with tools such as GitHub, Jira, Slack, Twitter, etc., and also run multiple agents concurrently. 9 | 10 | SuperAGI also provides users with a GUI to easily configure and monitor their agents. 
Users can access organization, run, and user-level metrics via the Agent Performance Monitoring (APM) dashboard to get actionable insights about improving their agents. 11 | 12 | | General | | 13 | | --- | --- | 14 | | Repository | https://github.com/TransformerOptimus/SuperAGI | 15 | | Type | Large Language Model Framework | 16 | 17 | ### SuperAGI - Resources 18 | 19 | Resources to get started with SuperAGI 20 | 21 | - SuperAGI GitHub Repository: [https://github.com/TransformerOptimus/SuperAGI](https://github.com/TransformerOptimus/SuperAGI) 22 | - Documentation for SuperAGI: [https://superagi.com/docs/](https://superagi.com/docs/) 23 | - SuperAGI Blogs and Concepts: [https://superagi.com/blog/](https://superagi.com/blog/) 24 | 25 | ### SuperAGI - Video Walkthroughs 26 | 27 | - Setting up SuperAGI via DockerHub: [https://youtu.be/qjJneWjH5v4](https://youtu.be/qjJneWjH5v4) 28 | - SuperAGI Cloud version: [https://youtu.be/K3DMcgTd8t4](https://youtu.be/K3DMcgTd8t4) 29 | - Agent Performance Monitoring walkthrough: [https://youtu.be/D6NVJaOjizU](https://youtu.be/D6NVJaOjizU) 30 | - SuperAGI YouTube channel for more walkthroughs and resources: [https://www.youtube.com/@_SuperAGI/videos](https://www.youtube.com/@_SuperAGI/videos) 31 | 32 | ### SuperAGI - Use cases 33 | 34 | Use cases for SuperAGI 35 | 36 | - Autonomous Sales Engagement: [https://www.youtube.com/watch?v=WGqBC4ENVWE](https://www.youtube.com/watch?v=WGqBC4ENVWE) 37 | - Research Buddy: [https://youtu.be/LfZ6T8XP-Q0](https://youtu.be/LfZ6T8XP-Q0) 38 | - Automated Social Media posting: [https://www.youtube.com/watch?v=sdebm646_J8](https://www.youtube.com/watch?v=sdebm646_J8) 39 | - Awesome-repo for more resources: [https://github.com/TransformerOptimus/Awesome-SuperAGI](https://github.com/TransformerOptimus/Awesome-SuperAGI) 40 | --------------------------------------------------------------------------------
/technologies/tabbyml/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "TabbyML" 3 | description: "TabbyML is a company that specializes in building an AI-powered code assistant to help developers write code faster and more efficiently." 4 | --- 5 | 6 | # TabbyML 7 | 8 | TabbyML is a company that specializes in building an AI-powered code assistant to help developers write code faster and more efficiently. Their product, Tabby, draws context from comments and code to suggest lines and whole functions instantly, and is available as an extension for Visual Studio Code and VIM. TabbyML's approach to AI development is centered around openness and transparency. 9 | 10 | Visit the [TabbyML website](https://www.tabbyml.com/) 11 | -------------------------------------------------------------------------------- /technologies/tabbyml/tabby.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Tabby: AI Coding Assistant" 3 | description: "Tabby is an open-source AI coding assistant that helps developers write code faster with multi-line and full-function suggestions in real-time. Tabby is available as an extension for Visual Studio Code and VIM." 4 | --- 5 | 6 | # Tabby: AI Coding Assistant 7 | 8 | Tabby is an open-source AI coding assistant designed to help developers write code faster and more efficiently. It provides multi-line and full-function suggestions in real-time, allowing for quicker development and increased productivity. Tabby is fine-tuned for specific projects to ensure maximum quality and offers an easily configurable training process. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | 2023 | 13 | | Documentation | https://github.com/TabbyML/tabby | 14 | | Type | AI Coding Assistant | 15 | 16 | --- 17 | 18 | ## Features 19 | 20 | - Fine-tuned for your project: Tabby's model is tuned specifically to your project for maximum quality. 
21 | - Easily configurable: Control the training process with simple YAML config. 22 | - Open source: Audit the entire Tabby codebase for security or compliance on GitHub, or host your own deployment. 23 | - Supports self-hosting: Deploy Tabby with Docker on machines with GPU accelerator (e.g., RTX 3080). 24 | - Available as extensions: Tabby will be available as an extension for Visual Studio Code and VIM. 25 | 26 | ## Join the Waitlist 27 | 28 | Visit [TabbyML's website](https://www.tabbyml.com/) to join the waitlist and stay updated on Tabby's development and releases. -------------------------------------------------------------------------------- /technologies/template.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Technology Title" 3 | author: "Technology Author (provider)" 4 | description: "A brief (88-158 characters) description that includes one main keyword related to the provider (e.g., "OpenAI")." 5 | --- 6 | 7 | # 'Author + Technology' Heading 8 | Description of the technology. 9 | 10 | | General | | 11 | | --- | --- | 12 | | Release date | Month XX, 20XX | 13 | | Author | [Company](link) | 14 | | Repository | Link | 15 | | Type | The type of the AI model | 16 | 17 | 18 | ## Start building with 'author technology' 19 | We have collected the best 'technology' libraries and resources to help you get started to build with 'technology' today. To see what others are building with 'technology', check out the community built ['technology' Use Cases and Applications](/apps/tech/technology). 20 | 21 | 22 | ### 'author technology' Tutorials 23 | 24 | --- 25 | 26 | 27 | ### 'author technology' Boilerplates 28 | Kickstart your development with a 'technology' based boilerplate. Boilerplates are a great way to get a head start when building your next project with 'technology'. 29 | 30 | * [Awesome 'technology' boilerplate](link) This boilerplate will help you get started with the 'technology'.
31 | 32 | --- 33 | 34 | ### 'author technology' Libraries 35 | A curated list of libraries and technologies to help you build great projects with 'technology'. 36 | 37 | * ['technology' Documentation](link to the documentation) 38 | 39 | --- 40 | -------------------------------------------------------------------------------- /technologies/text-generation-webui/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Text Generation Web UI" 3 | description: "A Gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. It provides a user-friendly interface to interact with these models and generate text, with features such as model switching, notebook mode, chat mode, and more." 4 | --- 5 | 6 | # Text Generation Web UI 7 | The Text Generation Web UI is a Gradio-based interface for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. 8 | It provides a user-friendly interface to interact with these models and generate text, with features such as model switching, notebook mode, chat mode, and more. 9 | The project aims to become the go-to web UI for text generation and is similar to [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) in terms of functionality.
10 | 11 | ## Features 12 | * Dropdown menu for switching between models 13 | * Notebook mode that resembles OpenAI's playground 14 | * Chat mode for conversation and role-playing 15 | * Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, and MOSS 16 | * Nice HTML output for GPT-4chan 17 | * Markdown output for GALACTICA, including LaTeX rendering 18 | * Custom chat characters 19 | * Advanced chat features (send images, get audio responses with TTS) 20 | * Efficient text streaming 21 | * Parameter presets 22 | * Layers splitting across GPU(s), CPU, and disk 23 | * CPU mode 24 | * and much more! 25 | 26 | ## Installation 27 | There are different installation methods available, including one-click installers for Windows, Linux, and macOS, as well as manual installation using Conda. Detailed installation instructions can be found in the [Text Generation Web UI repository](https://github.com/oobabooga/text-generation-webui). 28 | 29 | ## Downloading Models 30 | Models should be placed inside the `models` folder. You can download models from [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads), such as Pythia, OPT, GALACTICA, and GPT-J 6B. Use the `download-model.py` script to automatically download a model from Hugging Face. 31 | 32 | ## Starting the Web UI 33 | After installing the necessary dependencies and downloading the models, you can start the web UI by running the `server.py` script. The web UI can be accessed at `http://localhost:7860/?__theme=dark`. You can customize the interface and behavior using various command-line flags. 34 | 35 | ## System Requirements 36 | Check the [wiki](https://github.com/oobabooga/text-generation-webui/wiki) for examples of VRAM and RAM usage in both GPU and CPU mode. 37 | 38 | ## Contributing 39 | Pull requests, suggestions, and issue reports are welcome. 
Before reporting a bug, make sure you have followed the installation instructions provided and searched for existing issues. 40 | -------------------------------------------------------------------------------- /technologies/tonic-ai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Tonic.ai" 3 | description: "Automating data synthesis, advancing data privacy." 4 | --- 5 | 6 | # Tonic.ai - Revolution in synthetic data generation 7 | 8 | In the world of data, there’s a fascinating player called Tonic.ai. Think of them as the “Fake Data Company” with a mission to unblock data access, supercharge development, and respect data privacy. Let’s dive into the details of this remarkable technology. 9 | 10 | Tonic.ai is like a magician’s wand for data. It takes your real-world production data and conjures up safe, useful datasets for testing, QA, and development. Its purpose is to provide developers with realistic datasets while ensuring compliance and security. 11 | 12 | | General | | 13 | | --- | --- | 14 | | Platform | https://tonic.ai | 15 | | Repository | https://github.com/TonicAI | 16 | | Type | Data Synthesis | 17 | 18 | 19 | ## Key Features and Benefits 20 | 21 | **Secure Synthetic Data:** Tonic.ai generates secure and scalable synthetic data precisely when and where you need it. Whether you’re fueling staging environments or unblocking local development, Tonic.ai ensures data privacy without compromising efficiency. 22 | 23 | **Staging Environment Fix:** Developers often struggle with staging environments that lack the complexity of production data. Tonic.ai bridges this gap by providing de-identified datasets that mirror real-world scenarios, helping identify critical bugs before they hit production. 24 | 25 | **Local Development Unblock:** Rapid access to realistic data during local development accelerates the development process. 
Tonic.ai ensures that developers work with data that closely resembles production, streamlining their workflows. 26 | 27 | **Fresh and Synced Environments:** Keeping all environments up-to-date with consistent, refreshed data is crucial. Tonic.ai simplifies this process, ensuring that your data remains relevant across the development lifecycle. 28 | 29 | **Compliance Without Dysfunction:** Regulatory compliance is essential, but it shouldn’t hinder developer productivity. Tonic.ai strikes the right balance, allowing organizations to meet compliance requirements seamlessly. 30 | 31 | **Trusted by Engineering Teams:** Tonic.ai has earned the trust of engineering teams worldwide. Companies like eBay and Everlywell have witnessed faster release cycles, saved development hours, and improved onboarding thanks to Tonic.ai. 32 | 33 | 34 | --- 35 | -------------------------------------------------------------------------------- /technologies/truera/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "TruEra" 3 | description: "Enabling trust in AI models with TruEra's quality solutions." 4 | --- 5 | 6 | # TruEra: Explaining, Debugging, and Monitoring Machine Learning Models 7 | 8 | TruEra fills a critical gap in your AI stack, explaining and testing model quality throughout the lifecycle. TruEra's AI Quality solutions explain, debug, and monitor machine learning models, leading to higher quality and trustworthiness, as well as faster deployment. Backed by years of pioneering research, TruEra works across the model lifecycle, is independent of model development platforms, and embeds easily into your existing AI stack. 9 | 10 | With major offices in the US, the UK, and Singapore, TruEra is ensuring that machine learning delivers value and benefit to organizations, customers, and citizens alike. 
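Conceptually, explaining and testing model quality comes down to scoring model inputs and outputs with feedback functions and aggregating the results. The stdlib-only sketch below illustrates that idea only; it is not TruEra's actual API (see the trulens repository for the real interface):

```python
# Conceptual sketch of feedback-function-style model evaluation.
# This is NOT TruEra's API; it only illustrates scoring (input, output)
# records with feedback functions and aggregating the scores.
from statistics import mean

def length_score(question: str, answer: str) -> float:
    """Toy feedback function: penalize empty or very short answers."""
    return min(len(answer.split()) / 10.0, 1.0)

def overlap_score(question: str, answer: str) -> float:
    """Toy feedback function: crude keyword overlap between Q and A."""
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    return len(q_words & a_words) / max(len(q_words), 1)

def evaluate(records, feedbacks):
    """Apply each feedback function to every record; report mean scores."""
    return {fb.__name__: mean(fb(q, a) for q, a in records) for fb in feedbacks}

records = [
    ("what does truera monitor", "truera monitors model quality in production"),
    ("define drift", ""),
]
print(evaluate(records, [length_score, overlap_score]))
```

Real feedback functions range from simple heuristics like these to LLM-based graders; the aggregation step is what powers dashboards and monitoring.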
11 | 12 | | General | | 13 | | --- | --- | 14 | | Repository | [TruEra on GitHub](https://github.com/truera) | 15 | | Type | AI Quality Solutions | 16 | 17 | ## Start building with TruEra 18 | 19 | TruEra provides a range of AI Quality solutions to enhance the trustworthiness and quality of machine learning models. Whether you need to explain, debug, or monitor your models, TruEra has the tools you need for success. 20 | 21 | 22 | ### TruEra TruLens Tutorials 23 | 24 | --- 25 | 26 | ## TruEra Products 27 | 28 | TruEra offers a suite of products and solutions for developers and AI professionals: 29 | 30 | - **[TruEra Diagnostics](https://truera.com/diagnostics/)**: Identify issues and enhance the quality of your machine learning models. 31 | - **[TruEra Monitoring](https://truera.com/monitoring/)**: Monitor model performance and gain insights into model behavior. 32 | - **[TruEra Platform](https://truera.com/platform/)**: An integrated platform for model explainability and quality control. 33 | - **[LLM Observability](https://truera.com/llm-testing/)**: Understand and control your model's performance in production. 34 | 35 | ### TruEra on GitHub 36 | 37 | Explore TruEra's open-source projects on GitHub, including examples and tools for machine learning practitioners: 38 | 39 | - **[truera-examples](https://github.com/truera/truera-examples)** 40 | - **[trulens](https://github.com/truera/trulens)**: A product for developers and AI professionals. 41 | 42 | Discover how TruEra can help you achieve better, more reliable machine learning models and gain insights into your AI stack's performance. 
43 | -------------------------------------------------------------------------------- /technologies/unstructuredio/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Unstructured" 3 | description: "Prepare data for LLM with Unstructured" 4 | --- 5 | 6 | # Unstructured: Transforming Data for LLM Success 7 | Unstructured flawlessly extracts and transforms data into clean, consistent JSON, tailored for integration into vector databases and LLM frameworks. Experience efficient data processing for optimal LLM performance. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [Unstructured.io](https://unstructured.io/) | 12 | | Repository | https://github.com/Unstructured-IO/unstructured | 13 | | Type | Data Transformation Tool | 14 | 15 | ## Key Features 16 | 17 | - **Document preprocessing:** Unstructured provides an API for document preprocessing without the need for custom code. 18 | - **Accurate data:** Unstructured focuses on delivering clean, LLM-ready data, ensuring efficient performance. 19 | - **Rapid integration:** Integrates into existing workflows with a smooth setup. 20 | - **High scalability:** Unstructured automatically retrieves, transforms, and stages large volumes of data for LLMs, ensuring scalability and efficiency. 21 | 22 | ### Start building with Unstructured's products 23 | Explore Unstructured's products tailored to your data transformation needs for LLMs. 24 | 25 | ### List of Unstructured's products 26 | 27 | ## API (SaaS & Marketplace) 28 | The API offers production-grade document preprocessing and doesn't require custom code. Ideal for getting started quickly with document processing tasks. 29 | 30 | ## Platform (Paid) 31 | The Platform serves enterprises and companies with large data volumes. It enables automatic retrieval, transformation, and staging of data for LLMs, ensuring efficiency.
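Whichever product you use, the end result has the same shape: a document reduced to a flat list of typed elements serialized as JSON. A small illustrative sketch of that output (the element types and field names here are hypothetical examples, not the library's exact schema):

```python
import json

# Illustrative sketch of Unstructured-style output: a parsed document
# becomes a flat list of typed elements, each carrying text and metadata.
# Element types and field names are hypothetical, not the exact schema.
elements = [
    {"type": "Title", "text": "Quarterly Report",
     "metadata": {"filename": "report.pdf", "page_number": 1}},
    {"type": "NarrativeText", "text": "Revenue grew 12% year over year.",
     "metadata": {"filename": "report.pdf", "page_number": 1}},
]

# Clean, consistent JSON like this is what gets embedded and loaded
# into a vector database for RAG.
payload = json.dumps(elements, indent=2)
print(payload)
```

Because every document type collapses to this one structure, downstream embedding and retrieval code never has to care whether the source was a PDF, an email, or an HTML page.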
32 | 33 | ## RAG Support (with LangChain) 34 | Unstructured collaborates with LangChain to provide RAG support, optimizing the transition of your RAG from prototype to production. Make the most of expert guidance and seamless integration with LangChain's support. 35 | 36 | 37 | ### System Requirements 38 | 39 | Unstructured is compatible with major operating systems, including Windows, macOS, and Linux. A minimum of 4 GB of RAM is recommended for optimal performance. For intensive data processing tasks, a multicore processor is recommended to ensure efficient processing. 40 | -------------------------------------------------------------------------------- /technologies/vectara/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Vectara" 3 | description: "RAG-as-a-service API-driven generative AI platform." 4 | --- 5 | 6 | # Vectara: The Future of GenAI 7 | 8 | Vectara is the trusted GenAI platform. 9 | It's designed to make it easy for you to build and deploy GenAI applications that can generate text-based answers using your particular data 10 | (this type of application flow is also known as RAG or retrieval-augmented-generation). 11 | You just ingest your data and then build apps using the Query and Summarization APIs. It's that simple. 12 | 13 | | General | | 14 | | --- | --- | 15 | | Platform | https://vectara.com | 16 | | Type | Generative AI | 17 | 18 | 19 | ## Start building with Vectara 20 | 21 | Vectara is a modern, API-first search platform. 22 | Developer-friendly and easily accessible, all Vectara APIs are designed for consumption by application developers and data engineers who want to embed powerful generative AI into their site or application. 23 | Vectara removes the barrier to entry by allowing users to operate its platform without deep technical knowledge of operating and hosting multiple LLMs.
Vectara APIs abstract away the underlying complexity of operating a GenAI solution. 24 | 25 | ### Vectara Resources 26 | 27 | * **[Vectara Guide for Hackers](https://lablab.ai/t/vectara-hackathon-guide)** 28 | A comprehensive guide to integrating and maximizing the capabilities of Vectara within your projects. 29 | * **[Join Vectara Platform](https://console.vectara.com/console/overview?utm_source=hackathon&utm_medium=lablabAI&utm_term=sign-up&utm_content=registration-page&utm_campaign=hackathon-lablabAI-sign-up-registration-page)** 30 | Get started with Vectara and experience the next level of search technology. 31 | 32 | ### Vectara Tutorials 33 | 34 | --- 35 | -------------------------------------------------------------------------------- /technologies/vectorboard/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'Vectorboard' 3 | description: 'Open Source Hyperparameter Optimization and Eval Framework for Embeddings 🚀' 4 | --- 5 | 6 | # 💛 Vectorboard 7 | 8 | Vectorboard is an open-source framework for optimizing and evaluating embedding and retrieval-based machine learning models, particularly those built around RAG (Retrieval-Augmented Generation). 9 | 10 | RAG is a methodology that enhances machine learning models by combining generative and retrieval-based aspects. 11 | 12 | **Importance of Good Embeddings** 13 | Good embeddings are vital for the successful execution of RAG applications. They serve as the basis for retrieving contextually relevant information. 14 | 15 | ## How Vectorboard Helps 16 | 17 | Vectorboard simplifies the complex task of optimizing these embeddings by trying different hyperparameters (chunk size, overlap, splitting function, embedding algorithm, etc.) in a structured framework. 19 | 20 | In the end, Vectorboard provides you with a Results Dashboard that allows you to compare the performance of different embeddings and hyperparameters, run on your own data.
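Two of the hyperparameters above, chunk size and overlap, are easy to picture as a sliding window over the source text. A minimal illustrative sketch (not Vectorboard's implementation):

```python
def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into overlapping character windows.

    chunk_size and overlap are exactly the kind of hyperparameters a
    framework like Vectorboard sweeps over; this sketch is illustrative
    only. The final window may be shorter than chunk_size.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(chunk_text("Good embeddings are vital for RAG applications.", 20, 5))
```

Small changes to these two numbers change what context each embedding captures, which is why sweeping them systematically, rather than guessing, pays off.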
20 | 21 | | General | | 22 | | ----------- | --------------------------------------------------- | 23 | | Release date | Sep 2023 | 24 | | Author | [Vectorboard](https://vectorboard.ai) | 25 | | Repository | https://github.com/VectorBoard/vectorboard/ | 26 | | Type | Eval and Hyperparameter Optimization for embeddings | 27 | 28 | ### Vectorboard - Resources 29 | 30 | Resources to get started with Vectorboard: 31 | 32 | - [Vectorboard](https://github.com/vectorboard/vectorboard) GitHub Repository 33 | - [Documentation](https://docs.vectorboard.ai) Documentation for Vectorboard 34 | 35 | --- 36 | 37 | ### Vectorboard Libraries 38 | 39 | A curated list of libraries and technologies to help you build great projects with Vectorboard. 40 | 41 | - [Vectorboard Python Library Documentation](https://docs.vectorboard.ai) - The official Python library for Vectorboard. 42 | 43 | --- 44 | -------------------------------------------------------------------------------- /technologies/vercel/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Vercel" 3 | description: "Vercel is a platform designed for developers, providing speed, reliability, and scalability to create and deploy web applications. With built-in CI/CD, zero configuration, and deep integrations with popular Git providers such as GitHub, GitLab, and Bitbucket, Vercel streamlines the development process, making it easy for teams to collaborate and iterate on their projects." 4 | --- 5 | 6 | # Vercel 7 | 8 | Vercel is a platform designed for developers, providing speed, reliability, and scalability 9 | to create and deploy web applications. With built-in CI/CD, zero configuration, and deep 10 | integrations with popular Git providers such as GitHub, GitLab, and Bitbucket, Vercel 11 | streamlines the development process, making it easy for teams to collaborate and iterate on their projects.
12 | 13 | | General | | 14 | | ------------ | ------------------------------- | 15 | | Release date | 2015 | 16 | | Author | [Vercel](https://vercel.com) | 17 | | Type | Deployment and hosting platform | 18 | 19 | --- 20 | 21 | ## Tutorials 22 | 23 | Great tutorials on how to build with Vercel 24 | 25 | 26 | 27 | ### Vercel - Helpful Resources 28 | 29 | Check it out to become a Vercel Master! 30 | 31 | - [Vercel Documentation](https://vercel.com/docs) Comprehensive documentation for Vercel 32 | - [Vercel Blog](https://vercel.com/blog) Stay updated with the latest news and updates from Vercel 33 | - [Vercel GitHub](https://github.com/vercel) Vercel's open-source repositories 34 | 35 | --- 36 | -------------------------------------------------------------------------------- /technologies/weaviate/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Weaviate" 3 | description: "Weaviate is an open-source vector database that enables you to store data objects and vector embeddings from your preferred ML models. Moreover, it offers smooth scalability for handling billions of data objects." 4 | --- 5 | 6 | # Weaviate 7 | Weaviate is an open-source vector database that enables you to store data objects and vector embeddings from your preferred ML models. Moreover, it offers smooth scalability for handling billions of data objects. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [Weaviate](https://weaviate.io/) | 12 | | Repository | https://github.com/weaviate | 13 | | Type | Vector Database | 14 | --- 15 | 16 | 17 | ### Weaviate - Helpful Resources 18 | 19 | Explore Weaviate resources to better understand and utilize Weaviate, the AI native 20 | vector database, effectively in your projects. 
21 | 22 | - [Weaviate Documentation](https://weaviate.io/developers/weaviate) Comprehensive documentation for Weaviate 23 | - [Weaviate Quickstart](https://weaviate.io/developers/weaviate/quickstart) In this quickstart guide, you'll discover how to build a vector database using Weaviate Cloud Services (WCS), import data, and conduct vector search. 24 | - [WCS - Weaviate SaaS](https://console.weaviate.cloud/) Connect to the Weaviate Cloud Services or to a local Weaviate (Keep in mind that Weaviate can also be utilized locally with Docker). 25 | 26 | --- 27 | 28 | ### Weaviate Similarity search 29 | 30 | Delve into Weaviate's similarity search to quickly and accurately identify closely related data within your vector database. 31 | 32 | - [Weaviate Similarity / Vector search Documentation](https://weaviate.io/developers/weaviate/search/similarity) Explore the documentation on executing similarity-based searches with Weaviate. 33 | - [Similarity Search Recipes](https://github.com/weaviate/recipes/tree/main/similarity-search/text2vec) Discover various similarity search recipes tailored for platforms like Cohere, OpenAI, HuggingFace, PaLM, and more. 34 | 35 | --- 36 | 37 | ### GraphQL API 38 | 39 | - [Weaviate GraphQL Documentation](https://weaviate.io/developers/weaviate/api/graphql) 40 | 41 | --- 42 | 43 | ### Weaviate Generative Search 44 | 45 | - [Generative Search - OpenAI Documentation](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/generative-openai) 46 | - [Generative Search - Cohere Documentation](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/generative-cohere) 47 | - [Generative Search - PaLM Documentation](https://weaviate.io/developers/weaviate/modules/reader-generator-modules/generative-palm) 48 | - [Generative Search Recipes](https://github.com/weaviate/recipes/tree/main/generative-search) Discover various generative search recipes tailored for platforms like Cohere, OpenAI, and PaLM. 
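Under the hood, the similarity search described above ranks stored embedding vectors by their closeness to a query vector. A minimal pure-Python sketch of cosine-similarity ranking (illustrative only — Weaviate's real engine uses approximate nearest-neighbor indexes, and these tiny vectors are made-up stand-ins for real embeddings):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_search(query_vec, stored, top_k=2):
    """Rank stored (id, vector) pairs by cosine similarity to the query."""
    scored = [(obj_id, cosine_similarity(query_vec, vec)) for obj_id, vec in stored]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "embeddings" (real embeddings have hundreds of dimensions)
stored = [
    ("doc-cat", [0.9, 0.1, 0.0]),
    ("doc-dog", [0.8, 0.2, 0.1]),
    ("doc-car", [0.0, 0.1, 0.9]),
]
results = similarity_search([1.0, 0.0, 0.0], stored)
```

A vector database does exactly this ranking — just at billion-object scale, with indexing instead of a linear scan.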
49 | 50 | --- -------------------------------------------------------------------------------- /technologies/x-ai/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "xAI" 3 | description: "xAI, founded by Elon Musk in July 2023, creates cutting-edge AI tools advancing science, understanding, and real-world problem-solving." 4 | --- 5 | 6 | # xAI 7 | 8 | xAI is an innovative AI technology company founded by Elon Musk in July 2023. The company is dedicated to pushing the boundaries of artificial intelligence by creating tools and platforms to advance scientific discovery, foster understanding, and simplify complex problem-solving. With cutting-edge technology and a user-focused approach, xAI provides developers and organizations with the tools they need to unlock the potential of AI in everyday applications. 9 | 10 | ## General Information 11 | 12 | | Attribute | Details | 13 | |---------------|-----------------------------------------------| 14 | | **Company** | [xAI](https://x.ai/about) | 15 | | **Founded** | 2023 | 16 | | **Documentation**| https://docs.x.ai/docs | 17 | | **API Reference** | https://docs.x.ai/api#introduction | 18 | 19 | 20 | ## Products 21 | 22 | ### Grok AI Models 23 | xAI's flagship product line consists of the Grok series of conversational AI and vision-enhanced models: 24 | 25 | - **Grok-1:** xAI's foundational model offering robust reasoning and conversational capabilities. 26 | 27 | - **Grok-1.5:** Improved reasoning and extended context for handling large datasets. 28 | 29 | - **Grok-1.5V:** Introduced vision capabilities for analyzing images and diagrams. 30 | 31 | - **Grok-2:** Enhanced performance with advanced image generation and multimodal abilities. 32 | 33 | ### xAI API 34 | An accessible platform for developers to integrate Grok models into their applications. Fully compatible with existing OpenAI and Anthropic SDKs. 
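Since the xAI API follows the familiar OpenAI chat-completions request shape, the body of a request can be sketched with the standard library alone. The model name `grok-2` below is an illustrative assumption taken from the product list above — check the [xAI docs](https://docs.x.ai/docs) for current model identifiers; nothing is sent over the network here:

```python
import json

def build_chat_request(model, user_message, max_tokens=256):
    """Build an OpenAI-style chat-completions request body as a JSON string."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Hypothetical model name -- consult the xAI documentation for the real ones.
body = build_chat_request("grok-2", "Explain vector databases in one sentence.")
parsed = json.loads(body)
```

Because the shape matches, existing OpenAI SDK clients can typically be pointed at the xAI endpoint with only a base-URL and API-key change.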
35 | 36 | ### PromptIDE 37 | A powerful integrated development environment for crafting, testing, and refining AI prompts. Designed to optimize prompt engineering workflows for large language models. 38 | 39 | ## Start Building with xAI 40 | 41 | 👉 Visit [x.ai](https://x.ai/) to create a developer account and get started. 42 | 43 | 👉 Sign in to the xAI Developer Portal to obtain API keys, access Grok models, and [explore documentation](https://docs.x.ai/docs). 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | -------------------------------------------------------------------------------- /technologies/yi-llms/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "01.AI's Yi Series Large Language Models" 3 | description: "Explore 01.AI's Yi series of LLMs, engineered for developers seeking robust language processing capabilities." 4 | --- 5 | 6 | # Yi Series by 01.AI: Pushing Language Understanding Boundaries 7 | Engineered from scratch by 01.AI developers, the Yi series models stand as a testament to cutting-edge language understanding and processing capabilities. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [01.AI](https://01.ai) | 12 | | Repository | https://github.com/01-ai/Yi | 13 | | Type | Large Language Models | 14 | 15 | 16 | ## Yi LLMs Overview 17 | Developers can leverage two standout models within the Yi series, each offering distinct capabilities in language processing: 18 | 19 | ### Yi-34B 20 | This model flaunts exceptional performance across various evaluations such as MMLU and comprehensive LLM assessments. It outperforms larger counterparts like LLaMA2-70B and Falcon-180B while remaining cost-effective for diverse applications. 21 | 22 | ### Yi-6B 23 | With robust language processing abilities, the Yi-6B model plays a crucial role in innovative projects and diverse applications. 
24 | 25 | #### Opportunities 26 | 27 | 01.AI's Yi series models are engineered to cater to developers' needs, offering advanced language processing capabilities for varied applications. 28 | 29 | --- 30 | 31 | 01.AI, founded by AI luminary Kai-Fu Lee, introduces the Yi-34B and Yi-6B models, surpassing performance benchmarks and setting new standards in language understanding. 32 | 33 | Reports highlight the Yi series' superior performance compared to larger pre-trained LLMs across benchmarks like common reasoning, reading comprehension, and MMLU. This exceptional performance, combined with cost-effectiveness, positions the Yi series as an optimal choice for diverse use cases. 34 | 35 | 01.AI aims to democratize AI innovation, providing open access for academic research while requiring permissions for free commercial use. As 01.AI continues to redefine language understanding, the Yi series models serve as a beacon of advancement in the AI landscape. 36 | 37 | ### Yi LLMs Tutorials 38 | 39 | --- 40 | 41 | ### Yi LLMs Resources 42 | 43 | - [01.AI Official Website](https://01.ai/) 44 | - [Yi-34B on HuggingFace with 200K context window](https://huggingface.co/01-ai/Yi-34B-200K) 45 | - [Smaller model, Yi-6B with 200K context window on HuggingFace](https://huggingface.co/01-ai/Yi-6B-200K) 46 | - [Application for Commercial License](https://www.lingyiwanwu.com/yi-license) 47 | 48 | --- 49 | -------------------------------------------------------------------------------- /technologies/yolo/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "YOLO" 3 | description: "YOLO is a state-of-the-art, real-time object detection algorithm that uses convolutional neural networks to identify and detect objects in real time." 4 | --- 5 | 6 | # YOLO 7 | 8 | YOLO (You Only Look Once) is a state-of-the-art, real-time object detection algorithm that can quickly detect and locate objects within an image or video.
The YOLO architecture works by dividing the input image into a grid of cells, with each cell responsible for detecting objects within its region. YOLO returns bounding boxes for the objects in the image, predicting both the probability that an object is present in each box and a class probability that identifies the type of object. YOLO is a highly effective object detection algorithm, and releasing it as an open-source project led the community to make several improvements in a short time. 9 | 10 | | General | | 11 | | ----------- | ------------------------------ | 12 | | Release date | 2015 | 13 | | Author | Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi| 14 | | Paper | https://arxiv.org/abs/1506.02640 | 15 | | Type | Object detection algorithm| 16 | 17 | --- 18 | 19 | 20 | ### YOLO - Resources 21 | 22 | Learn even more about YOLO! 23 | 24 | - [v7 Labs](https://www.v7labs.com/blog/yolo-object-detection) Blog "YOLO: Algorithm for Object Detection Explained". 25 | - [YOLOv5 Repository](https://github.com/ultralytics/yolov5) Object detection architectures and models pretrained on the COCO dataset. 26 | - [YOLOv6 Web demo](https://huggingface.co/spaces/nateraw/yolov6) Gradio demo for YOLOv6 for object detection on videos. 27 | - [Hugging Face Spaces](https://huggingface.co/spaces/akhaliq/yolov7) Test YOLOv7 in the browser with Hugging Face Spaces. 28 | 29 | 30 | 31 | 32 | --- 33 | -------------------------------------------------------------------------------- /technologies/yolo/yolov5.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "YOLOv5" 3 | author: "YOLO" 4 | description: "YOLOv5 is a family of object detection models pretrained on the COCO dataset. It was created by Ultralytics in 2020. This architecture contains 10 different models, each one with a different size and speed.
YOLOv5 is also a part of the YOLO family of object detection models." 5 | --- 6 | 7 | # YOLO v5 8 | [YOLOv5](https://github.com/ultralytics/yolov5) is a family of object detection architectures and 9 | models pretrained on the COCO dataset and created by the [Ultralytics](https://ultralytics.com/) team in 2020. This architecture uses a single neural network to process the entire image. It divides the image into regions 10 | and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities. 11 | YOLOv5 is available in the Hub as a PyTorch module. You can use it directly in your code or in the Hub. 12 | 13 | | General | | 14 | | --- | --- | 15 | | Release date | June 25, 2020 | 16 | | Author | [YOLO](https://ultralytics.com/) | 17 | | Repository | https://github.com/ultralytics/yolov5 | 18 | | Type | Real time object detection | 19 | 20 | --- 21 | 22 | ### Libraries 23 | Discover YOLOv5 24 | 25 | * [YOLOv5 Repository](https://github.com/ultralytics/yolov5) Object detection architectures and models pretrained on the COCO dataset 26 | * [YOLOv5 Documentation](https://docs.ultralytics.com) Official YOLOv5 documentation 27 | * [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5) 28 | -------------------------------------------------------------------------------- /technologies/yolo/yolov6.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "YOLOv6" 3 | author: "YOLO" 4 | description: "YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance." 5 | --- 6 | 7 | # YOLO v6 8 | YOLOv6 offers a series of models for various industrial scenarios, including N/T/S/M/L, 9 | whose architectures vary in model size for a better accuracy-speed trade-off.
10 | Some bag-of-freebies methods, such as self-distillation and more training epochs, 11 | are introduced to further improve performance. For industrial deployment, 12 | the authors adopt QAT with channel-wise distillation and graph optimization to pursue extreme performance. 13 | YOLOv6-N hits 35.9% AP on the COCO dataset with 1234 FPS on T4. YOLOv6-S strikes 43.5% AP with 495 FPS, 14 | and the quantized YOLOv6-S model achieves 43.3% AP at an accelerated speed of 869 FPS on T4. 15 | YOLOv6-T/M/L also perform excellently, showing higher accuracy than other detectors at a similar inference speed. 16 | 17 | | General | | 18 | | --- | --- | 19 | | Release date | June, 2022 | 20 | | Author | [Meituan](https://github.com/meituan) | 21 | | Repository | https://github.com/meituan/YOLOv6 | 22 | | Type | Real time object detection | 23 | 24 | --- 25 | 26 | ### Libraries 27 | Discover YOLOv6 28 | 29 | * [YOLOv6 Web demo](https://huggingface.co/spaces/nateraw/yolov6) Gradio demo for YOLOv6 for object detection on videos. To use it, simply upload your video or click one of the examples to load them. Read more at the links below. 30 | * [YOLOv6 NCNN Android app demo](https://github.com/FeiGeChuanShu/ncnn-android-yolov6) This is a sample ncnn Android project; it depends on the ncnn library and OpenCV. -------------------------------------------------------------------------------- /technologies/yolo/yolov7.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "YOLOv7" 3 | author: "YOLO" 4 | description: "YOLOv7 is the new state-of-the-art object detector in the YOLO family. According to the paper, it is the most accurate and fastest real-time object detector to date." 5 | --- 6 | 7 | # YOLO v7 8 | The YOLOv7 algorithm is a big advancement in the field of computer vision and machine learning. 9 | It is more accurate and faster than previous YOLO versions and many other object detection models.
It is also 10 | much cheaper to train on small datasets without any pre-trained weights. Hence, it's expected 11 | to become the industry standard for object detection in the near future. 12 | 13 | 14 | The official paper, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for 15 | real-time object detectors”, was released in July 2022 by Chien-Yao Wang, Alexey Bochkovskiy, and 16 | Hong-Yuan Mark Liao. The research paper became immensely popular within days. The source code was 17 | released as open source under the GPL-3.0 license, a free copyleft license. You can find the code 18 | in the official YOLOv7 GitHub repository. The repository earned over 4.3k stars in the first 19 | month after release. 20 | 21 | ## What's new in YOLOv7? 22 | 23 | Several architectural reforms improved speed and accuracy in YOLOv7. According to the paper, 24 | YOLOv7 achieves higher accuracy than previous real-time detectors, including YOLOv6, 25 | on the COCO dataset, with its largest variant reaching 56.8% AP. 26 | 27 | - Architectural Reforms 28 | - Model Scaling for Concatenation based Models 29 | - E-ELAN (Extended Efficient Layer Aggregation Network) 30 | - Trainable BoF 31 | - Planned re-parameterized convolution 32 | - Coarse for auxiliary and Fine for lead loss 33 | 34 | The paper discusses the YOLOv7 architecture in great detail and provides intuition into how the model works.
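One building block shared across these detectors is worth seeing in code: YOLO-family models emit many overlapping candidate boxes with confidence scores, and non-maximum suppression (NMS) keeps the best box per object. A minimal IoU-plus-greedy-NMS sketch (a generic illustration, not YOLOv7's actual post-processing code):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy NMS over (box, score) pairs: keep the highest-scoring
    boxes that do not heavily overlap an already-kept box."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, kept_box) < iou_threshold for kept_box, _ in kept):
            kept.append((box, score))
    return kept

detections = [
    ((10, 10, 50, 50), 0.9),      # strongest detection
    ((12, 12, 52, 52), 0.8),      # near-duplicate of the first
    ((100, 100, 140, 140), 0.7),  # separate object
]
kept = nms(detections)
```

The two near-duplicate boxes collapse into one, while the distant box survives — which is why a detector can propose thousands of boxes yet return a clean final set.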
35 | 36 | 37 | | General | | 38 | | --- | --- | 39 | | Release date | July, 2022 | 40 | | Repository | https://github.com/WongKinYiu/yolov7 | 41 | | Type | Real time object detection | 42 | 43 | --- 44 | 45 | ### Libraries 46 | Discover YOLOv7 47 | 48 | * [Research Paper](https://arxiv.org/abs/2207.02696) YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors, official research paper 49 | * [Hugging Face Spaces](https://huggingface.co/spaces/akhaliq/yolov7) Test YOLOv7 in the browser with Hugging Face Spaces 50 | * [GitHub Repository](https://github.com/WongKinYiu/yolov7) View the GitHub repository for YOLOv7 51 | -------------------------------------------------------------------------------- /technologies/yolo/yolov8.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "YOLOv8" 3 | author: "YOLO" 4 | description: "YOLOv8 is the new state-of-the-art object detector in the YOLO family. Real-time object detection, classification, and segmentation. Faster, more accurate, and simpler." 5 | --- 6 | 7 | # YOLO v8 8 | 9 | Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions 10 | and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, 11 | accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, 12 | image classification and pose estimation tasks. 13 | 14 | ## What's new in YOLOv8? 15 | 16 | YOLOv8 supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking, 17 | and classification. This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.
18 | 19 | 20 | 21 | | General | | 22 | | --- | --- | 23 | | Release date | May, 2023 | 24 | | Repository | https://github.com/ultralytics/ultralytics | 25 | | Type | Real time object detection | 26 | 27 | --- 28 | 29 | ### Libraries 30 | Discover YOLOv8 31 | 32 | * [Documentation](https://docs.ultralytics.com/) Official YOLOv8 documentation 33 | * [Quickstart](https://docs.ultralytics.com/quickstart/) Quickstart with YOLOv8 34 | * [GitHub Repository](https://github.com/ultralytics/ultralytics) View the GitHub repository for YOLOv8 35 | * [Hugging Face Spaces](https://huggingface.co/spaces/kadirnar/yolov8) Test YOLOv8 in the browser with Hugging Face Spaces 36 | -------------------------------------------------------------------------------- /technologies/zilliz/index.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Zilliz: Unleashing the Power of Vector Databases" 3 | description: "Empower your AI applications with Zilliz Cloud, a fully-managed Milvus service designed for efficient vector search and manipulation." 4 | --- 5 | 6 | # Zilliz Vector Database 7 | Zilliz Technologies, the creators of Milvus, presents Zilliz Cloud, the go-to solution for efficient vector search and manipulation. Trusted by over 5000 enterprises globally, Zilliz Cloud simplifies vector search applications, enabling developers to harness the full potential of advanced vector search capabilities. 8 | 9 | | General | | 10 | | --- | --- | 11 | | Author | [Zilliz](https://zilliz.com?utm_source=lablab) | 12 | | Platform | [Explore](https://zilliz.com?utm_source=lablab) | 13 | | Type | Vector Database | 14 | 15 | ## Zilliz Overview 16 | 17 | Zilliz Cloud is a fully-managed service built upon Milvus, the leading open-source vector database. With over 23,000 stars on GitHub and 3.8 million+ downloads, Milvus has solidified its position as the fastest-growing vector database.
Zilliz Cloud extends Milvus' capabilities, supporting billion-scale vector search effortlessly, earning trust from over 1,000 enterprise users worldwide. 18 | 19 | 20 | ### Zilliz Tutorials 21 | 22 | --- 23 | 24 | ### Technology Resources 25 | 26 | - [Zilliz Cloud Documentation](https://docs.zilliz.com) - Dive into detailed documentation for a comprehensive understanding of Zilliz Cloud's features and functionalities. 27 | 28 | Zilliz Cloud simplifies the deployment and scaling of vector search applications by eliminating the complexities of infrastructure management. Its fully-managed service enables developers to build and scale AI applications with confidence. 29 | 30 | Discover the endless possibilities with Zilliz Cloud's vector database capabilities. Sign up for a free Zilliz Cloud account, access our official SDKs, create your initial collection, and perform vector similarity searches to supercharge your AI applications within minutes. As you prepare to launch your application, seamlessly upgrade to a pay-as-you-go plan. 31 | 32 | For more information and to explore diverse use cases, visit [Zilliz Use Cases](https://zilliz.com/use-cases). 33 | 34 | - Zilliz Platform - [Check it out](https://zilliz.com?utm_source=lablab) 35 | - Quick Start Guide - [Zilliz Cloud Docs](https://docs.zilliz.com) 36 | - [Zilliz on GitHub](https://github.com/zilliztech/) 37 | --- 38 | -------------------------------------------------------------------------------- /topics/app/chatbot/index.json: -------------------------------------------------------------------------------- 1 | { 2 | "seoTitle": "AI chatbots apps created by lablab.ai community", 3 | "seoDescription": "Step into the world of AI chatbots and discover how they're revolutionizing every business. 
Explore all of the potential uses of chatbots in variety apps with us.", 4 | "title": "AI chatbots apps created by lablab.ai community", 5 | "description": "Step into the world of AI chatbots and discover how they're revolutionizing every business. Explore all of the potential uses of chatbots in variety apps with us." 6 | } 7 | -------------------------------------------------------------------------------- /topics/appTechnology/openai/gpt3.json: -------------------------------------------------------------------------------- 1 | { 2 | "seoTitle": "Embark into the world of GPT-3 technology with AI applications created during AI Hackathons", 3 | "seoDescription": "Uncover GPT-3 magic with lablab.ai's inspiring AI applications! Experience OpenAI's GPT-3 prowess & innovative creations from our captivating AI Hackathons.", 4 | "title": "GPT-3 Applications created during lablab AI Hackathons", 5 | "description": "Explore the capabilities of OpenAI's GPT-3 model and get inspired by amazing AI applications created by lablab.ai community during our AI Hackathons", 6 | "image": "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/9af6d89e-41b1-4f70-3a9d-976e3e45fb00/full" 7 | } 8 | -------------------------------------------------------------------------------- /topics/appTechnology/openai/index.json: -------------------------------------------------------------------------------- 1 | { 2 | "seoTitle": "Exploring OpenAI: Innovative AI Applications from Hackathons", 3 | "seoDescription": "Dive into lablab.ai's showcase of OpenAI innovations from AI Hackathons. Discover the future of AI with captivating applications and insights.", 4 | "title": "OpenAI Applications created during lablab AI Hackathons", 5 | "description": "Discover OpenAI's advanced models through lablab.ai's Hackathon highlights. 
Be inspired by groundbreaking AI applications crafted by our vibrant community.", 6 | "image": "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/9af6d89e-41b1-4f70-3a9d-976e3e45fb00/full" 7 | } 8 | -------------------------------------------------------------------------------- /topics/readme.md: -------------------------------------------------------------------------------- 1 | # Topics 2 | 3 | Repo to handle submitting & updating topics 4 | 5 | ## 👉 How to publish a new topic and seo on lablab.ai 6 | 7 | In this guide you will learn how to create SEO header and page title/description for `/apps` sub pages 8 | 9 | 1. You need to know for what page you are creating the SEO content. 10 | 11 | 1. If you are creating for a custom search on `/apps` page for example for chatbot `https://lablab.ai/apps/topic/chatbot`. You need to go into the `app` folder here in. 12 | 13 | 1. There you need to create a folder with the name of the search keyword and a file inside called `index.json` 14 | 2. There you can define `seoTitle` `seoDescription` `Title` `Description` 15 | 3. You can also add an image to that folder and we will check it and upload it to CDN if we find it OK. 16 | 17 | 2. If you are creating for a technology search on `/apps` for example `https://lablab.ai/apps/tech/openai/gpt3` you need to go into the `appTechnology` folder 18 | 3. 1. There you need to create a folder with the technology provider name in our case `openai` 19 | 2. Inside that folder you need to create the technoloy name file `gpt3.json` 20 | 3. If it is for the technology provider file name should be `index.json` 21 | 4. Easiest way to check the proper names by checking first the url and copying the path 22 | 5. There you can define `seoTitle` `seoDescription` `Title` `Description` 23 | 6. You can also add an image to that folder and we will check it and upload it to CDN if we find it OK. 
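The steps above amount to writing an `index.json` with four text fields. As a quick sketch, a small helper can generate a valid payload (the field values below are placeholders, not real topic content):

```python
import json

def make_topic_index(seo_title, seo_description, title, description):
    """Build the index.json payload described in the guide above."""
    return {
        "seoTitle": seo_title,
        "seoDescription": seo_description,
        "title": title,
        "description": description,
    }

# Placeholder values -- replace with your topic's real copy.
topic = make_topic_index(
    "AI chatbots apps created by lablab.ai community",
    "Step into the world of AI chatbots.",
    "AI chatbots apps created by lablab.ai community",
    "Explore the potential uses of chatbots.",
)
index_json = json.dumps(topic, indent=2)
```

Save the resulting string as `index.json` in the folder matching the URL path (for example `app/chatbot/index.json`).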
24 | -------------------------------------------------------------------------------- /tutorials/ar/README.md: -------------------------------------------------------------------------------- 1 | # This is the folder for tutorials in arabic languages 2 | -------------------------------------------------------------------------------- /tutorials/ar/create-a-simple-app-with-openai-gpt-4-and-streamlit.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "إنشاء تطبيق بسيط باستخدام OpenAI GPT-4 و Streamlit" 3 | description: "Streamlit هو أداة شهيرة لبناء تطبيقات الويب للتعلم الآلي وعلوم البيانات. في هذا الدرس، سنقوم ببناء تطبيق ويب بسيط حيث يمكن للمستخدمين إدخال النص والحصول على رد من نموذج GPT-4 التابع لـ OpenAI." 4 | image: "https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/0b885c46-90a2-49f5-fda0-9c4834875200/full" 5 | authorUsername: "zakrz303" 6 | --- 7 | 8 | ## الشروط المسبقة: 9 | 10 | يجب أن يكون لديك Python مثبتًا. 11 | يجب أن يكون لديك Streamlit مثبتًا: pip install streamlit 12 | تحتاج إلى مكتبة OpenAI Python: pip install openai 13 | 14 | ### الخطوات: 15 | 16 | إعداد مفتاح API الخاص بك: 17 | قبل استخدام GPT-4، تأكد من أن لديك مفتاح API لـ OpenAI. قم بإعداده كمتغير بيئي أو مباشرة في الكود (لا يُوصى به للإنتاج بسبب أسباب الأمان). 
18 | 19 | كتابة تطبيق Streamlit: 20 | 21 | قم بإنشاء ملف جديد باسم gpt4_app.py وأضف الكود التالي: 22 | 23 | ```python 24 | import streamlit as st 25 | import openai 26 | 27 | # قم بإعداد مفتاح API لـ OpenAI 28 | openai.api_key = 'YOUR_OPENAI_API_KEY' 29 | 30 | def get_gpt4_response(prompt): 31 |     """احصل على رد من GPT-4 استنادًا إلى الإدخال المعطى.""" 32 |     response = openai.ChatCompletion.create( 33 |         model="gpt-4", 34 |         messages=[{"role": "user", "content": prompt}], 35 |         max_tokens=150 36 |     ) 37 |     return response.choices[0].message.content.strip() 38 | 39 | # التطبيق الرئيسي لـ Streamlit 40 | def main(): 41 |     st.title("تطبيق GPT-4 Streamlit") 42 | 43 |     user_input = st.text_area("أدخل نصك هنا:", "اكتب شيئًا للحصول على رد من GPT-4...") 44 |     if st.button("احصل على الرد"): 45 |         response = get_gpt4_response(user_input) 46 |         st.write(response) 47 | 48 | if __name__ == "__main__": 49 |     main() 50 | ``` 51 | 52 | تشغيل تطبيق Streamlit: 53 | 54 | في النافذة الطرفية أو موجه الأوامر، انتقل إلى الدليل الذي يحتوي على gpt4_app.py وقم بتشغيل: 55 | 56 | ``` 57 | streamlit run gpt4_app.py 58 | ``` 59 | 60 | تفاعل مع GPT-4 من خلال Streamlit: 61 | 62 | بمجرد تشغيل تطبيق Streamlit، انتقل إلى الرابط المقدم (عادةً ما يكون http://localhost:8501/) في المتصفح. يمكنك بعد ذلك كتابة أي نص والنقر على زر "احصل على الرد" لرؤية رد GPT-4. 63 | 64 | ### ملاحظات: 65 | 66 | تذكر استبدال 'YOUR_OPENAI_API_KEY' بمفتاح API الحقيقي الخاص بك. 67 | بالنسبة للتطبيقات الإنتاجية، فكر في إعداد متغيرات البيئة أو استخدام أداة إدارة الأسرار للتعامل مع مفاتيح API بأمان. 68 | قم بضبط معلمة max_tokens في الوظيفة get_gpt4_response إذا كنت ترغب في ردود أقصر أو أطول. 69 | قد تواجه حدود معدلات أو تكاليف مرتبطة بواجهة API الخاصة بـ OpenAI حسب اشتراكك واستخدامك. 70 | هذا هو! لديك الآن تطبيق Streamlit بسيط يتفاعل مع OpenAI GPT-4.
71 | -------------------------------------------------------------------------------- /tutorials/en/Hackernoon_ Going Global.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/lablab-ai/community-content/edb93e6974c83be7f836785ba9d8f1e40098f333/tutorials/en/Hackernoon_ Going Global.docx -------------------------------------------------------------------------------- /tutorials/en/cohere-tutorial-entity-extraction.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Cohere tutorial: Entity extraction" 3 | description: "In this tutorial we will show you how to use Cohere AI to extract entities from text." 4 | image: "https://storage.googleapis.com/lablab-static-eu/images/tutorials/cohere_entity_extraction.jpg" 5 | authorUsername: "Flafi" 6 | --- 7 | 8 | Extracting information from text is a common task in language processing. 9 | LLMs can extract entities that are difficult to extract using other NLP methods (and where pre-training provides the model with some context on these entities). 10 | This is an overview of using generative LLMs to extract entities 11 | 12 | # Let's get started 13 | 14 | First of all, register to Cohere https://dashboard.cohere.ai/register 15 | 16 | After registration you need to head over to the Playground https://os.cohere.ai/playground 17 | 18 | Next, we can check out the Cohere Playground.The Cohere Classify Playground is a great tool for testing your ideas and getting started with a project. It has a clean UI and can export your code in multiple languages. 19 | 20 | We will choose generate endpoint with default language model. 21 | 22 | 23 | For this example I will use the Extract entities from Invoices example. And rewrite it just to show how great this model is. 24 | 25 | We will extract names from sentences. You can paste the following sentences in the Playground and see how the model extracts the names. 
26 | 27 | 28 | ```bash 29 | Extract names from sentences. 30 | 31 | Sentence: Bob went to the market to buy something for lunch 32 | Name: Bob 33 | -- 34 | Sentence: Peter Parker was Spiderman in real life 35 | Name: Peter Parker 36 | -- 37 | Sentence: Once I have seen Buffy the vampire slayer 38 | Name: 39 | ``` 40 | 41 | In my test it was correctly generating the answer even with two examples. 42 | 43 | And from the playground we can export the code in multiple languages. 44 | 45 | # Conclusion 46 | 47 | Cohere is giving a great solution to extract entities from text with endless possibilities. 48 | 49 | Join our AI Hackathons to test your knowledge and build with assistance of our mentors AI based tools to change the world! Check the upcoming ones here: [Cohere Thanksgiving Hackathon](https://lablab.ai/event/cohere-thanksgiving-hackathon) & 50 | [Cohere Holiday Hackathon](https://lablab.ai/event/cohere-holiday-hackathon) 51 | 52 | **Thank you!** If you enjoyed this tutorial you can find more and continue reading [on our tutorial page](https://lablab.ai/t) 53 | -------------------------------------------------------------------------------- /tutorials/en/deep-learning-introduction.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Deeplearning" 3 | description: "Deeplearning introduction" 4 | image: "https://storage.googleapis.com/lablab-static-eu/images/tutorials/SCR-20220706-pwi.png" 5 | --- 6 | 7 | ![An Interactive DALL-E 2 Stream](https://imagedelivery.net/K11gkZF3xaVyYzFESMdWIQ/ec71b5eb-ae0f-4c0a-d1fe-06f120ed9600/full) 8 | 9 | **Come and join us for a fun stream where we'll be using the DALL-E 2! DALL-E 2 is a deep learning algorithm that can generate images from textual descriptions 10 | by OpenAI.** It uses a 12-billion parameter training version of the GPT-3 transformer model to interpret the natural language inputs and generate corresponding 11 | images. 
It'll be interactive, and viewers will suggest prompts in the chat. And we'll be holding a lottery during the stream where three lucky people will be able 12 | to get the generated image for themselves! 13 | 14 | ```html 15 |
16 | 17 |
18 | ``` 19 | 20 | ```css 21 | @tailwind base; 22 | @tailwind components; 23 | @tailwind utilities; 24 | 25 | .my-custom-style { 26 | /* ... */ 27 | } 28 | ``` 29 | -------------------------------------------------------------------------------- /tutorials/template.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "SHORT title" 3 | description: "SHORT description" 4 | image: "url_to_cover_image" 5 | authorUsername: "Author's username (from lablab.ai profile!)" 6 | --- 7 | 8 | {/*To add image to your tutorial use component with params:*/} 9 | {/*