├── lfm
│   ├── __init__.py
│   └── model.py
├── requirements.txt
├── scripts
│   ├── tests.sh
│   ├── test_name.sh
│   ├── code_quality.sh
│   └── merge_all_prs.sh
├── .github
│   ├── workflows
│   │   ├── ruff.yml
│   │   ├── pull-request-links.yml
│   │   ├── docs.yml
│   │   ├── lints.yml
│   │   ├── testing.yml
│   │   ├── quality.yml
│   │   ├── pr_request_checks.yml
│   │   ├── label.yml
│   │   ├── run_test.yml
│   │   ├── welcome.yml
│   │   ├── pylint.yml
│   │   ├── docs_test.yml
│   │   ├── unit-test.yml
│   │   ├── python-publish.yml
│   │   ├── code_quality_control.yml
│   │   ├── stale.yml
│   │   ├── cos_integration.yml
│   │   └── test.yml
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── feature_request.md
│   │   └── bug_report.md
│   ├── FUNDING.yml
│   ├── PULL_REQUEST_TEMPLATE.yml
│   └── labeler.yml
├── .pre-commit-config.yaml
├── LICENSE
├── pyproject.toml
├── .gitignore
└── README.md

--------------------------------------------------------------------------------
/lfm/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
torch
zetascale
swarms

--------------------------------------------------------------------------------
/scripts/tests.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Run pytest on every test file under ./tests.
find ./tests -name '*.py' -exec pytest {} \;

--------------------------------------------------------------------------------
/.github/workflows/ruff.yml:
--------------------------------------------------------------------------------
name: Ruff
on: [ push, pull_request ]
jobs:
  ruff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: chartboost/ruff-action@v1

--------------------------------------------------------------------------------
/scripts/test_name.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Prefix every test file under ./tests with "test_" so pytest discovers it.
find ./tests -name "*.py" -type f | while read -r file
do
  filename=$(basename "$file")
  dir=$(dirname "$file")
  if [[ $filename != test_* ]]; then
    mv "$file" "$dir/test_$filename"
  fi
done

--------------------------------------------------------------------------------
/.github/workflows/pull-request-links.yml:
--------------------------------------------------------------------------------
name: readthedocs/actions
on:
  pull_request_target:
    types:
      - opened
    paths:
      - "docs/**"

permissions:
  pull-requests: write

jobs:
  pull-request-links:
    runs-on: ubuntu-latest
    steps:
      - uses: readthedocs/actions/preview@v1
        with:
          project-slug: swarms_torch

--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"

--------------------------------------------------------------------------------
/.github/workflows/docs.yml:
--------------------------------------------------------------------------------
name: Docs WorkFlow

on:
  push:
    branches:
      - master
      - main
      - develop
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - run: pip install mkdocs-material
      - run: pip install "mkdocstrings[python]"
      - run: mkdocs gh-deploy --force

--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: 'v0.0.255'
    hooks:
      - id: ruff
        args: [--fix]
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 1.6.3
    hooks:
      - id: nbqa-black
        additional_dependencies: [ipython==8.12, black]
      - id: nbqa-ruff
        args: ["--ignore=I001"]
        additional_dependencies: [ipython==8.12, ruff]

--------------------------------------------------------------------------------
/.github/workflows/lints.yml:
--------------------------------------------------------------------------------
name: Linting

on:
  push:
    branches:
      - master

jobs:
  lint:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir pylint -r requirements.txt

      - name: Run linters
        run: pylint lfm

--------------------------------------------------------------------------------
/.github/workflows/testing.yml:
--------------------------------------------------------------------------------
name: Unit Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir pytest -r requirements.txt

      - name: Run unit tests
        run: pytest tests/

--------------------------------------------------------------------------------
/.github/workflows/quality.yml:
--------------------------------------------------------------------------------
name: Quality

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  lint:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
    steps:
      - name: Checkout actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Init environment
        uses: ./.github/actions/init-environment
      - name: Run linter
        run: |
          files=$(git diff --name-only --diff-filter=d origin/main HEAD | grep -E '\.py$' | tr '\n' ' ')
          if [ -n "$files" ]; then pylint $files; fi

--------------------------------------------------------------------------------
/.github/workflows/pr_request_checks.yml:
--------------------------------------------------------------------------------
name: Pull Request Checks

on:
  pull_request:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir pytest pylint -r requirements.txt

      - name: Run tests and checks
        run: |
          pytest tests/
          pylint lfm

--------------------------------------------------------------------------------
/.github/workflows/label.yml:
--------------------------------------------------------------------------------
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler

name: Labeler
on: [pull_request_target]

jobs:
  label:

    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/labeler@v5.0.0
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"

--------------------------------------------------------------------------------
/.github/workflows/run_test.yml:
--------------------------------------------------------------------------------
name: Python application test

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.10
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install pytest
          if [ -f requirements.txt ]; then pip install --no-cache-dir -r requirements.txt; fi
      - name: Run tests with pytest
        run: |
          pytest tests/

--------------------------------------------------------------------------------
/.github/workflows/welcome.yml:
--------------------------------------------------------------------------------
name: Welcome WorkFlow

on:
  issues:
    types: [opened]
  pull_request_target:
    types: [opened]

jobs:
  build:
    name: 👋 Welcome
    permissions: write-all
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1.3.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Hello there, thank you for opening an issue! 🙏🏻 The team has been notified and will get back to you as soon as possible."
          pr-message: "Hello there, thank you for opening a PR! 🙏🏻 The team has been notified and will get back to you as soon as possible."
--------------------------------------------------------------------------------
/.github/workflows/pylint.yml:
--------------------------------------------------------------------------------
name: Pylint

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install pylint
      - name: Analysing the code with pylint
        run: |
          pylint $(git ls-files '*.py')

--------------------------------------------------------------------------------
/.github/workflows/docs_test.yml:
--------------------------------------------------------------------------------
name: Documentation Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir mkdocs-material "mkdocstrings[python]"

      - name: Build and validate documentation
        run: mkdocs build --strict

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: 'kyegomez'

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
# These are supported funding model platforms

github: [kyegomez]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Nothing

--------------------------------------------------------------------------------
/.github/workflows/unit-test.yml:
--------------------------------------------------------------------------------
name: build

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir -r requirements.txt

      - name: Run Python unit tests
        run: python3 -m unittest discover -s tests

      - name: Verify that the Docker image for the action builds
        run: docker build . --file Dockerfile

      - name: Verify integration test results
        run: python3 -m unittest discover -s tests

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
---
name: Bug report
about: Create a detailed report on the bug and its root cause. Conduct root-cause error analysis.
title: "[BUG] "
labels: bug
assignees: kyegomez

---

**Describe the bug**
A clear and concise description of what the bug is and of its root cause. Test thoroughly before submitting.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
--------------------------------------------------------------------------------
/.github/workflows/python-publish.yml:
--------------------------------------------------------------------------------

name: Upload Python Package

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --no-cache-dir --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Publish package
        uses: pypa/gh-action-pypi-publish@ec4db0b4ddc65acdf4bff5fa45ac92d78b56bdf0
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}

--------------------------------------------------------------------------------
/.github/workflows/code_quality_control.yml:
--------------------------------------------------------------------------------
name: Linting and Formatting

on:
  push:
    branches:
      - main

jobs:
  lint_and_format:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir autopep8 -r requirements.txt

      - name: Format Python files
        run: find lfm -name "*.py" -type f -exec autopep8 --in-place --aggressive --aggressive {} +

      - name: Commit changes
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git diff --cached --quiet || git commit -m "style: auto-format with autopep8"

      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}

--------------------------------------------------------------------------------
/.github/workflows/stale.yml:
--------------------------------------------------------------------------------
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests

on:
  schedule:
    - cron: '26 12 * * *'

jobs:
  stale:

    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write

    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'This issue has had no recent activity and will be closed soon if it remains inactive.'
          stale-pr-message: 'This pull request has had no recent activity and will be closed soon if it remains inactive.'
          stale-issue-label: 'no-issue-activity'
          stale-pr-label: 'no-pr-activity'

--------------------------------------------------------------------------------
/scripts/code_quality.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Run from the repository root so the 'lfm' package path below resolves.

# Run autopep8 with max aggressiveness and in-place modification
# on all Python files (*.py) under the 'lfm' directory.
autopep8 --in-place --aggressive --aggressive --recursive --experimental lfm/

# Run black with default settings, since black does not have an aggressiveness level.
# Black will format all Python files it finds in the 'lfm' directory.
black lfm/

# Run ruff with autofixes (including fixes ruff marks as unsafe)
# on the 'lfm' directory.
ruff check --fix --unsafe-fixes lfm/

# YAPF
yapf --recursive --in-place --verbose --style=google --parallel lfm

--------------------------------------------------------------------------------
/scripts/merge_all_prs.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Check if we are inside a Git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "Error: Must be run inside a Git repository."
  exit 1
fi

# Fetch all open pull requests
echo "Fetching open PRs..."
prs=$(gh pr list --state open --json number --jq '.[].number')

# Check if there are PRs to merge
if [ -z "$prs" ]; then
  echo "No open PRs to merge."
  exit 0
fi

echo "Found PRs: $prs"

# Loop through each pull request number and merge it
for pr in $prs; do
  echo "Attempting to merge PR #$pr"
  merge_output=$(gh pr merge "$pr" --auto --merge 2>&1)
  merge_status=$?
  if [ $merge_status -ne 0 ]; then
    echo "Failed to merge PR #$pr. Error: $merge_output"
  else
    echo "Successfully merged PR #$pr"
  fi
done

echo "Processing complete."

--------------------------------------------------------------------------------
/.github/workflows/cos_integration.yml:
--------------------------------------------------------------------------------
name: Continuous Integration

on:
  push:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install --no-cache-dir pytest pytest-cov pylint mkdocs-material "mkdocstrings[python]" -r requirements.txt

      - name: Run unit tests
        run: pytest tests/unit

      - name: Run integration tests
        run: pytest tests/integration

      - name: Run code coverage
        run: pytest --cov=lfm tests/

      - name: Run linters
        run: pylint lfm

      - name: Build and validate documentation
        run: mkdocs build --strict

      - name: Run performance tests
        run: pytest tests/performance

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------


# Liquid Foundation Models [LFMs]

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)

**Welcome to the open-source implementation of Liquid Foundation Models (LFMs)** — the next generation of multi-modal, adaptive AI systems. LFMs represent a breakthrough in model architecture design, tailored to the evolving demands of real-world applications across modalities such as text, audio, image, and video. The [Liquid AI announcement](https://www.liquid.ai/liquid-foundation-models) provides some of the implementation details this repository builds on.

## Overview

The Liquid Foundation Models are a series of generative AI models designed around an entirely new architecture, optimized for performance, scalability, and efficiency across a wide variety of hardware platforms and modalities. Unlike traditional GPT-based models, LFMs introduce a novel computational paradigm that rethinks token and channel mixing, weight sharing, feature sharing, and adaptive computation, making them suitable for a broader range of applications.

LFMs are built to support multi-modal inputs, allowing them to process text, audio, images, and other data types seamlessly. This flexibility makes them ideal for scenarios where combining various forms of data yields more powerful insights and predictions.

## Key Features of LFMs

### 1. **Multi-Modal Support**
LFMs natively support multiple data modalities such as text, audio, images, and video. This is achieved through a robust featurization process that converts raw data into structured feature representations. The architecture unifies different types of data under a single framework, enabling richer and more complex tasks like multi-modal reasoning, understanding, and generation.

### 2. **Token Mixing & Channel Mixing**
The architecture of LFMs incorporates specialized computational units that perform **token mixing** and **channel mixing** (a toy sketch of both operations follows the list below):

- **Token Mixing**: This operation governs how the model mixes embeddings within a sequence of tokens. By leveraging adaptive linear operations, LFMs dynamically modulate the way tokens interact, leading to more efficient and expressive token representations.
- **Channel Mixing**: Channel mixing refers to interactions between different layers or channels within the model. By introducing adaptive mechanisms at the channel level, LFMs can model complex relationships across layers, ensuring efficient information flow throughout the network.
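To ground these two operations, here is a minimal PyTorch sketch of a block that applies gated token mixing followed by gated channel mixing. It is an illustration of the ideas above, not the actual LFM implementation; all names (`AdaptiveMixerBlock`, `token_gate`, `channel_gate`) are invented for this example, and the sigmoid gating is just one simple way to make both linear operations input-dependent.

```python
import torch
import torch.nn as nn


class AdaptiveMixerBlock(nn.Module):
    """Toy gated token-mixing + channel-mixing block (illustrative only)."""

    def __init__(self, seq_len: int, dim: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.token_mix = nn.Linear(seq_len, seq_len)  # mixes along the sequence axis
        self.channel_mix = nn.Linear(dim, dim)        # mixes along the feature axis
        # Small networks that turn a summary of the input into gates,
        # making both linear operations input-dependent ("adaptive").
        self.token_gate = nn.Linear(dim, seq_len)
        self.channel_gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        ctx = x.mean(dim=1)  # (batch, dim) summary that conditions the gates

        # Token mixing: transpose so the linear layer acts across tokens.
        h = self.norm1(x).transpose(1, 2)  # (batch, dim, seq_len)
        h = self.token_mix(h) * torch.sigmoid(self.token_gate(ctx)).unsqueeze(1)
        x = x + h.transpose(1, 2)

        # Channel mixing: the linear layer acts across feature channels.
        h = self.channel_mix(self.norm2(x))
        return x + h * torch.sigmoid(self.channel_gate(ctx)).unsqueeze(1)


block = AdaptiveMixerBlock(seq_len=16, dim=64)
print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```

The token step mixes information along the sequence axis and the channel step along the feature axis; the gates, computed from a mean-pooled summary of the input, are what make otherwise static linear maps adaptive.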
### 3. **Adaptive Linear Operators**
At the heart of LFMs are **adaptive linear operators**. These operators adjust their computation based on the input they receive, allowing the model to remain flexible and contextually aware of the data it processes. This adaptiveness reduces the computational overhead typically seen in traditional linear operations while improving the model's performance on diverse data types. One way such an operator can be built is sketched below.
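A common way to realize an input-conditioned linear map without materializing a full weight matrix per sample is to let a small hypernetwork predict a low-rank update to a shared base weight. The sketch below shows that general technique under simplifying assumptions; it is not Liquid AI's operator, and `AdaptiveLinear` with its `rank` parameter is hypothetical.

```python
import torch
import torch.nn as nn


class AdaptiveLinear(nn.Module):
    """Linear layer plus an input-dependent low-rank correction (illustrative)."""

    def __init__(self, in_dim: int, out_dim: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # shared, input-independent weights
        # Hypernetwork heads that produce the low-rank factors from the input.
        self.to_u = nn.Linear(in_dim, out_dim * rank)
        self.to_v = nn.Linear(in_dim, in_dim * rank)
        self.rank = rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim)
        b, d = x.shape
        u = self.to_u(x).view(b, -1, self.rank)  # (batch, out_dim, rank)
        v = self.to_v(x).view(b, self.rank, d)   # (batch, rank, in_dim)
        # delta = U @ (V @ x): an input-dependent rank-`rank` linear action.
        delta = torch.bmm(u, torch.bmm(v, x.unsqueeze(-1)))  # (batch, out_dim, 1)
        return self.base(x) + delta.squeeze(-1)


layer = AdaptiveLinear(in_dim=32, out_dim=64)
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 64])
```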
### 4. **Weight & Feature Sharing**
LFMs incorporate **weight sharing** across depth groups to improve parameter efficiency and reduce memory consumption without sacrificing performance. In addition, **feature sharing** between featurizer interconnections lets the model transfer learned features between modalities, resulting in faster convergence and better generalization.

### 5. **Mixture of Experts (MoE)**
One of the key innovations in LFMs is the **Mixture of Experts (MoE)** layer. This mechanism dynamically selects a subset of experts (sub-models) to process each input, so only a fraction of the network runs for any given task, significantly reducing computational requirements while maintaining high performance. LFMs with MoE can scale to larger parameter counts while preserving compute efficiency, making them suitable for both cloud and edge deployments.
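As a rough illustration of why MoE keeps per-token compute flat as capacity grows, the sketch below implements a tiny top-1 router in PyTorch. Production MoE layers add top-k routing, load-balancing losses, and capacity limits; this toy version (`TinyMoE` is an invented name) only demonstrates that each token activates a single expert regardless of how many experts exist.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoE(nn.Module):
    """Minimal mixture-of-experts layer with top-1 routing (illustrative)."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim); each token is routed to its single best expert,
        # so only a fraction of the total parameters runs per token.
        gates = F.softmax(self.router(x), dim=-1)  # (tokens, num_experts)
        weight, index = gates.max(dim=-1)          # top-1 gate value and expert id
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = index == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out


moe = TinyMoE(dim=64)
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Adding experts grows total parameters, but each token still pays for exactly one expert forward pass, which is the efficiency property the section above describes.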
### 6. **Hardware Optimization**
LFMs are built with hardware efficiency in mind. They can be optimized for a range of platforms, including Qualcomm, Apple, Cerebras, and AMD, ensuring high throughput regardless of the deployment environment. This optimization not only boosts inference speed but also enables LFMs to run on cost-effective hardware, democratizing access to high-performance AI.

### 7. **Scalable Architecture**
With models ranging from 1 billion to over 40 billion parameters, LFMs offer scalability without compromise. The architecture is designed to maximize performance at every size, and the combination of adaptive computation and MoE allows LFMs to outperform significantly larger models, offering both efficiency and precision.

## Benefits of LFMs

### 1. **Multi-Modal Capabilities**
LFMs’ ability to seamlessly integrate and process multi-modal data makes them a powerful tool for applications requiring cross-modal reasoning, such as video analysis with audio context or multi-modal document understanding. Enterprises can use a single model architecture to handle diverse data types, reducing the need for separate models and increasing operational efficiency.

### 2. **Superior Performance at Reduced Costs**
By incorporating MoE and adaptive linear operations, LFMs maintain performance comparable to that of much larger models. This reduces the cost of inference and training, making LFMs an attractive choice for organizations looking to scale AI without incurring prohibitive hardware costs.

### 3. **Optimized for Real-World Use**
LFMs are designed with real-world applications in mind, with targeted optimizations for deployment on edge devices, cloud environments, and specialized hardware such as GPUs and AI accelerators. Their hardware adaptability ensures smooth operation across diverse deployment environments, from mobile devices to large-scale data centers.

### 4. **Efficient Training and Inference**
The LFM architecture reduces the number of computations required per task, speeding up both training and inference. Enterprises can expect faster model iterations, quicker deployment of new features, and a more efficient feedback loop for continuous improvement.

### 5. **Scalability and Flexibility**
With LFMs, enterprises can scale their AI capabilities flexibly. Whether working with smaller 1B-parameter models or larger 40B models, LFMs provide scalability driven by the needs of the task rather than by hardware constraints, allowing businesses to balance performance with cost-effectiveness for their specific application.

### 6. **Enterprise-Ready Design**
LFMs are architected with a focus on deployment at scale. Their ability to be optimized for different hardware platforms means enterprises can integrate LFMs into existing workflows without major infrastructure overhauls. Moreover, the Mixture of Experts layer provides higher throughput and lower latency, crucial for time-sensitive applications.

## Conclusion

Liquid Foundation Models (LFMs) mark the next step in the evolution of generative AI models. With their innovative architecture, LFMs provide strong performance across multi-modal tasks while remaining efficient and scalable. Their adaptive computational units, token and channel mixing, and Mixture of Experts (MoE) technology offer a new balance between model size, performance, and hardware compatibility.

LFMs are designed to be deployed in enterprise environments, across diverse hardware setups, and to handle real-world tasks that span multiple data types.

This open-source implementation offers a glimpse into the potential of LFMs, enabling businesses and researchers to experiment with these model architectures today.

For more information on the Liquid Foundation Models and their broader applications, visit [Liquid AI](https://www.liquid.ai/liquid-foundation-models).

---

--------------------------------------------------------------------------------