├── .gitignore ├── README.md ├── aws-ecs ├── README.md └── cloudformation.yaml ├── cerebrium ├── README.md └── cerebrium.toml ├── fly.io ├── README.md └── fly.toml ├── kubernetes ├── README.md └── agent-manifest.yaml ├── node-agent-example-docker ├── .dockerignore ├── Dockerfile └── README.md ├── python-agent-example-app ├── .dockerignore ├── Dockerfile ├── README.md ├── main.py └── requirements.txt └── render.com ├── README.md └── render.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control 110 | .pdm.toml 111 | .pdm-python 112 | .pdm-build/ 113 | 114 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 115 | __pypackages__/ 116 | 117 | # Celery stuff 118 | celerybeat-schedule 119 | celerybeat.pid 120 | 121 | # SageMath parsed files 122 | *.sage.py 123 | 124 | # Environments 125 | .env 126 | .venv 127 | env/ 128 | venv/ 129 | ENV/ 130 | env.bak/ 131 | venv.bak/ 132 | 133 | # Spyder project settings 134 | .spyderproject 135 | .spyproject 136 | 137 | # Rope project settings 138 | .ropeproject 139 | 140 | # mkdocs documentation 141 | /site 142 | 143 | # mypy 144 | .mypy_cache/ 145 | .dmypy.json 146 | dmypy.json 147 | 148 | # Pyre type checker 149 | .pyre/ 150 | 151 | # pytype static type analyzer 152 | .pytype/ 153 | 154 | # Cython debug symbols 155 | cython_debug/ 156 | 157 | # PyCharm 158 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 159 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 160 | # and can be added to the global gitignore or merged into this file. For a more nuclear 161 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 162 | #.idea/ 163 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LiveKit Agents deployment examples 2 | 3 | This repository contains a collection of examples to deploy [LiveKit Agents](https://github.com/livekit/agents) into a production environment for a variety of cloud providers. 4 | 5 | For more information about deployment, see the [documentation](https://docs.livekit.io/agents/deployment). 6 | 7 | ## Dockerfile examples 8 | 9 | The following examples include a bare-bones agent and a `Dockerfile` suitable for running in any containerized environment. 10 | 11 | | Platform | Description | 12 | |----------|-------------| 13 | | [Python](/python-agent-example-app) | `Dockerfile` example for Python | 14 | | [Node.js](/node-agent-example-docker) | `Dockerfile` example for Node.js | 15 | 16 | ## Provider templates 17 | 18 | The following examples include a template configuration or manifest file for each provider. You should use these files in conjunction with the Dockerfile examples above. 19 | 20 | | Provider | Description | 21 | |----------|-------------| 22 | | [AWS ECS](/aws-ecs) | `cloudformation.yaml` example for ECS | 23 | | [Cerebrium](/cerebrium) | `cerebrium.toml` example for [Cerebrium](https://cerebrium.ai) | 24 | | [Fly.io](/fly.io) | `fly.toml` example for [Fly.io](https://fly.io) | 25 | | [Kubernetes](/kubernetes) | Example manifest file for any Kubernetes environment | 26 | | [Render.com](/render.com) | `render.yaml` example for [Render](https://render.com) | 27 | 28 | ## Missing a provider? 29 | 30 | Feel free to open a PR or issue to add instructions for your favorite provider! -------------------------------------------------------------------------------- /aws-ecs/README.md: -------------------------------------------------------------------------------- 1 | # AWS ECS Deployment Example 2 | 3 | This directory demonstrates how to deploy the `agent-example` to AWS ECS. 4 | 5 | Deployment configuration lives mostly in the `cloudformation.yaml` file. 6 | 7 | ## Getting Started 8 | 9 | ### Copy Example App w/ Dockerfile 10 | 11 | This guide assumes the app and relevant files exist in this directory. 12 | We provide an example app in the `python-agent-example-app` directory at the top level of this repo.
13 | 14 | ```bash 15 | cp ../python-agent-example-app/* . 16 | cp ../python-agent-example-app/.dockerignore . 17 | ``` 18 | 19 | ### Install dependencies 20 | 21 | 22 | 1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) 23 | 2. [Docker](https://docs.docker.com/engine/install/) 24 | 25 | > [!NOTE] 26 | > Once the AWS CLI is installed, you'll need to configure it. 27 | > There are several ways to do this, so we defer to the 28 | > [AWS docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html). 29 | 30 | ### Create secrets 31 | ```bash 32 | aws secretsmanager create-secret \ 33 | --name ecs/agent-example/livekit-url \ 34 | --region us-east-1 \ 35 | --secret-string "wss://your-url-from-livekit-cloud-dashboard.livekit.cloud" 36 | 37 | aws secretsmanager create-secret \ 38 | --name ecs/agent-example/livekit-api-key \ 39 | --region us-east-1 \ 40 | --secret-string "api-key-from-livekit-cloud-dashboard" 41 | 42 | aws secretsmanager create-secret --region us-east-1 \ 43 | --name ecs/agent-example/livekit-api-secret \ 44 | --secret-string "api-secret-from-livekit-cloud-dashboard" 45 | ``` 46 | 47 | Update `cloudformation.yaml` with the ARNs of these created secrets. 48 | 49 | You will likely need to add additional secrets here as 50 | well depending on your agent, for example, `OPENAI_API_KEY`. 51 | 52 | ### Create CloudFormation Stack 53 | 54 | This example leverages CloudFormation to create resources in AWS: 55 | ```bash 56 | aws cloudformation create-stack \ 57 | --stack-name agents-stack \ 58 | --template-body file://cloudformation.yaml \ 59 | --capabilities CAPABILITY_NAMED_IAM 60 | ``` 61 | 62 | This will scaffold: 63 | - VPC + subnets 64 | - ECS Cluster 65 | - ECR (Docker Repository) 66 | - ECS Task Definition with configuration for the agent-example 67 | - IAM Role for ECS Task Definition execution 68 | - ECS Service for the agent-example 69 | 70 | The `DesiredCount` on the ECS Service is initially set to `0`. This is 71 | because there is a chicken-and-egg problem: 72 | - The Docker repository and `agent-example` Docker image don't exist yet 73 | - The CloudFormation stack creation won't succeed until the service starts successfully 74 | - The Service depends on the Docker image 75 | 76 | In the next steps, we will build and push the Docker image and scale the Service. 77 | 78 | ### Log in to your image repository 79 | 80 | Fetch the `repositoryUri` from AWS: 81 | ```bash 82 | aws ecr describe-repositories 83 | ``` 84 | 85 | ```bash 86 | aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin "<repositoryUri>" 87 | ``` 88 | 89 | ### Build and Push Docker Image 90 | 91 | ```bash 92 | docker buildx build --platform linux/amd64 -t "<repositoryUri>:<tag>" --push . 93 | ``` 94 | 95 | Update the image used in the `AgentExampleTaskDefinition` section in `cloudformation.yaml`. 96 | 97 | ### Scale the service 98 | 99 | Now that the image exists, we can scale the service. We'll start one instance 100 | by setting `DesiredCount: 1` in the `AgentExampleService` of `cloudformation.yaml`. 101 | 102 | Once you make this change, run: 103 | 104 | ```bash 105 | aws cloudformation update-stack \ 106 | --stack-name agents-stack \ 107 | --template-body file://cloudformation.yaml \ 108 | --capabilities CAPABILITY_NAMED_IAM 109 | ``` 110 | 111 | ### Updating a deployment 112 | 113 | ```bash 114 | docker buildx build --platform linux/amd64 -t "<repositoryUri>:<tag>" --push . 115 | ``` 116 | 117 | Then update the image in `cloudformation.yaml` and run the `update-stack` command again.
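118 | 119 | ### Verify the deployment 120 | 121 | Once the service is scaled up, one way to confirm the worker is running is to check the ECS service status and tail the CloudWatch log group created by this stack (`/ecs/agent-example`). CloudFormation generates the cluster and service names, so look them up first; `<cluster-arn>` and `<service-arn>` below are placeholders: 122 | 123 | ```bash 124 | # List the generated cluster and service, then confirm a task is running 125 | aws ecs list-clusters 126 | aws ecs list-services --cluster <cluster-arn> 127 | aws ecs describe-services --cluster <cluster-arn> --services <service-arn> 128 | 129 | # Tail the agent logs (log group name comes from AgentExampleLogGroup in cloudformation.yaml) 130 | aws logs tail "/ecs/agent-example" --follow 131 | ```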
-------------------------------------------------------------------------------- /aws-ecs/cloudformation.yaml: -------------------------------------------------------------------------------- 1 | Resources: 2 | # Create a Virtual Private Cloud (VPC) for your ECS cluster 3 | AgentVPC: 4 | Type: AWS::EC2::VPC 5 | Properties: 6 | CidrBlock: 10.0.0.0/16 7 | EnableDnsSupport: true 8 | EnableDnsHostnames: true 9 | 10 | # Create an Internet Gateway for internet access 11 | InternetGateway: 12 | Type: AWS::EC2::InternetGateway 13 | 14 | # Attach the Internet Gateway to the VPC 15 | AttachGateway: 16 | Type: AWS::EC2::VPCGatewayAttachment 17 | Properties: 18 | VpcId: !Ref AgentVPC 19 | InternetGatewayId: !Ref InternetGateway 20 | 21 | # Create a public subnet within the VPC 22 | PublicSubnet: 23 | Type: AWS::EC2::Subnet 24 | Properties: 25 | VpcId: !Ref AgentVPC 26 | CidrBlock: 10.0.0.0/24 27 | AvailabilityZone: "us-east-1a" 28 | MapPublicIpOnLaunch: true 29 | 30 | # Create a private subnet within the VPC 31 | PrivateSubnet: 32 | Type: AWS::EC2::Subnet 33 | Properties: 34 | VpcId: !Ref AgentVPC 35 | CidrBlock: 10.0.1.0/24 36 | AvailabilityZone: "us-east-1a" 37 | MapPublicIpOnLaunch: false 38 | 39 | # Allocate an Elastic IP address for the NAT Gateway 40 | NATIPAddress: 41 | Type: AWS::EC2::EIP 42 | Properties: 43 | Domain: vpc 44 | 45 | # Create a NAT Gateway for internet access from the private subnet 46 | NATGateway: 47 | Type: AWS::EC2::NatGateway 48 | Properties: 49 | AllocationId: !GetAtt NATIPAddress.AllocationId 50 | SubnetId: !Ref PublicSubnet # NAT Gateway in public subnet 51 | 52 | # Create a route table for the public subnet 53 | PublicRouteTable: 54 | Type: AWS::EC2::RouteTable 55 | Properties: 56 | VpcId: !Ref AgentVPC 57 | 58 | # Add a default route to the Internet Gateway in the public route table 59 | PublicRoute: 60 | Type: AWS::EC2::Route 61 | DependsOn: AttachGateway 62 | Properties: 63 | RouteTableId: !Ref PublicRouteTable 64 | DestinationCidrBlock: 0.0.0.0/0 65 | GatewayId: !Ref InternetGateway 66 | 67 | # Associate the public subnet with the public route table 68 | PublicSubnetRouteTableAssociation: 69 | Type: AWS::EC2::SubnetRouteTableAssociation 70 | Properties: 71 | SubnetId: !Ref PublicSubnet 72 | RouteTableId: !Ref PublicRouteTable 73 | 74 | # Create a route table for the private subnet 75 | PrivateRouteTable: 76 | Type: AWS::EC2::RouteTable 77 | Properties: 78 | VpcId: !Ref AgentVPC 79 | 80 | # Add a default route to the NAT Gateway in the private route table 81 | PrivateRoute: 82 | Type: AWS::EC2::Route 83 | Properties: 84 | RouteTableId: !Ref PrivateRouteTable 85 | DestinationCidrBlock: 0.0.0.0/0 86 | NatGatewayId: !Ref NATGateway 87 | 88 | # Associate the private subnet with the private route table 89 | PrivateSubnetRouteTableAssociation: 90 | Type: AWS::EC2::SubnetRouteTableAssociation 91 | Properties: 92 | SubnetId: !Ref PrivateSubnet 93 | RouteTableId: !Ref PrivateRouteTable 94 | 95 | # Define a security group for ECS tasks 96 | SecurityGroup: 97 | Type: AWS::EC2::SecurityGroup 98 | Properties: 99 | GroupDescription: Security group for ECS tasks 100 | VpcId: !Ref AgentVPC 101 | SecurityGroupEgress: 102 | - IpProtocol: -1 103 | CidrIp: '0.0.0.0/0' 104 | 105 | # Create an ECS cluster to run your tasks and services 106 | AgentCluster: 107 | Type: AWS::ECS::Cluster 108 | 109 | # Create an Amazon ECR repository to store Docker images 110 | DockerRepository: 111 | Type: AWS::ECR::Repository 112 | Properties: 113 | RepositoryName: agent-example 114 | 115 | # Define an IAM 
role for ECS task execution 116 | TaskExecutionRole: 117 | Type: AWS::IAM::Role 118 | Properties: 119 | RoleName: ecsTaskExecutionRole # You can name this role, or let CloudFormation name it 120 | AssumeRolePolicyDocument: 121 | Version: "2012-10-17" 122 | Statement: 123 | - Effect: Allow 124 | Principal: 125 | Service: ecs-tasks.amazonaws.com 126 | Action: "sts:AssumeRole" 127 | Policies: 128 | - PolicyName: ECSTaskExecutionAndSecretsPolicy 129 | PolicyDocument: 130 | Version: "2012-10-17" 131 | Statement: 132 | - Effect: Allow 133 | Action: 134 | # Permissions for AWS Secrets Manager and Systems Manager Parameter Store 135 | - secretsmanager:GetSecretValue 136 | - ssm:GetParameters 137 | - ssm:GetParameter 138 | - ssm:DescribeParameters 139 | - kms:Decrypt # Needed if secrets are encrypted with KMS 140 | 141 | # Permissions for pulling container images from Amazon ECR 142 | - ecr:GetAuthorizationToken 143 | - ecr:GetDownloadUrlForLayer 144 | - ecr:BatchCheckLayerAvailability 145 | - ecr:BatchGetImage 146 | 147 | # Permissions for CloudWatch Logs 148 | - logs:CreateLogStream 149 | - logs:PutLogEvents 150 | 151 | # Permissions for accessing S3 (if needed) 152 | - s3:GetObject 153 | Resource: "*" 154 | 155 | # Create a CloudWatch Logs log group for your ECS tasks 156 | AgentExampleLogGroup: 157 | Type: AWS::Logs::LogGroup 158 | Properties: 159 | LogGroupName: "/ecs/agent-example" 160 | RetentionInDays: 14 # Adjust retention as needed 161 | 162 | # Define an ECS task definition for your application 163 | AgentExampleTaskDefinition: 164 | Type: AWS::ECS::TaskDefinition 165 | Properties: 166 | RequiresCompatibilities: 167 | - FARGATE 168 | Cpu: "2048" 169 | Memory: "4096" 170 | NetworkMode: awsvpc 171 | ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn 172 | ContainerDefinitions: 173 | - Name: agent-example 174 | Image: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${DockerRepository}:" 175 | Essential: true 176 | Secrets: 177 | - Name: LIVEKIT_URL 178 | ValueFrom: arn:aws:secretsmanager:us-east-1:203125320322:secret:ecs/agent-example/livekit-url- 179 | - Name: LIVEKIT_API_KEY 180 | ValueFrom: arn:aws:secretsmanager:us-east-1:203125320322:secret:ecs/agent-example/livekit-api-key- 181 | - Name: LIVEKIT_API_SECRET 182 | ValueFrom: arn:aws:secretsmanager:us-east-1:203125320322:secret:ecs/agent-example/livekit-api-secret- 183 | LogConfiguration: 184 | LogDriver: awslogs 185 | Options: 186 | awslogs-group: !Ref AgentExampleLogGroup 187 | awslogs-region: !Ref AWS::Region 188 | awslogs-stream-prefix: agent-example 189 | 190 | # Create an ECS service to run and maintain your tasks 191 | AgentExampleService: 192 | Type: AWS::ECS::Service 193 | Properties: 194 | Cluster: !Ref AgentCluster 195 | DesiredCount: 0 # Initially set to 0 because the image doesn't exist yet 196 | TaskDefinition: !Ref AgentExampleTaskDefinition 197 | LaunchType: FARGATE 198 | NetworkConfiguration: 199 | AwsvpcConfiguration: 200 | Subnets: 201 | - !Ref PrivateSubnet 202 | SecurityGroups: 203 | - !Ref SecurityGroup 204 | -------------------------------------------------------------------------------- /cerebrium/README.md: -------------------------------------------------------------------------------- 1 | # Cerebrium.ai LiveKit Agents Deployment Example 2 | 3 | This directory demonstrates how to deploy LiveKit Agents to [Cerebrium](https://www.cerebrium.ai) using a sample `cerebrium.toml` file. 4 | 5 | You also need a working agents app and Dockerfile. 
See the examples for [Python](/python-agent-example-app) or [Node.js](/node-agent-example-docker) if necessary. 6 | 7 | ## Getting Started 8 | 9 | ### Create Cerebrium account 10 | 11 | If you don't have a Cerebrium account, you can easily sign up with the CLI: 12 | 13 | ```bash 14 | pip install cerebrium --upgrade 15 | cerebrium login 16 | ``` 17 | 18 | ### Add Secrets to Cerebrium 19 | 20 | In your [Cerebrium dashboard](https://dashboard.cerebrium.ai) you'll need to add the following LiveKit secrets. 21 | ```bash 22 | LIVEKIT_URL=wss://your-livekit-url.livekit.cloud 23 | LIVEKIT_API_KEY=your-livekit-api-key 24 | LIVEKIT_API_SECRET=your-livekit-api-secret 25 | ``` 26 | 27 | ### Add your `cerebrium.toml` file 28 | 29 | Copy the `cerebrium.toml` file to the root of your project (wherever your `Dockerfile` is located). 30 | 31 | ### Launch your service 32 | 33 | Run the following in your CLI to launch the service: 34 | 35 | ```bash 36 | cerebrium deploy 37 | ``` 38 | 39 | If you need further help extending functionality, you can look at the documentation [here](https://docs.cerebrium.ai/cerebrium/getting-started/introduction). -------------------------------------------------------------------------------- /cerebrium/cerebrium.toml: -------------------------------------------------------------------------------- 1 | [cerebrium.deployment] 2 | name = "cerebrium" 3 | python_version = "3.11" 4 | docker_base_image_url = "debian:bookworm-slim" 5 | disable_auth = false 6 | include = ['./*', 'main.py', 'cerebrium.toml'] 7 | exclude = ['.*'] 8 | 9 | [cerebrium.hardware] 10 | cpu = 2 11 | memory = 8.0 12 | compute = "CPU" 13 | 14 | [cerebrium.scaling] 15 | min_replicas = 1 16 | max_replicas = 5 17 | cooldown = 30 18 | replica_concurrency = 1 19 | response_grace_period = 900 20 | scaling_metric = "cpu_utilization" 21 | scaling_target = 60 22 | 23 | [cerebrium.runtime.custom] 24 | port = 8081 25 | dockerfile_path = "./Dockerfile" 26 | -------------------------------------------------------------------------------- /fly.io/README.md: -------------------------------------------------------------------------------- 1 | # Fly.io LiveKit Agents Deployment Example 2 | 3 | This directory demonstrates how to deploy LiveKit Agents to [fly.io](https://fly.io) using a sample `fly.toml` file. 4 | 5 | You also need a working agents app and Dockerfile. See the examples for [Python](/python-agent-example-app) or [Node.js](/node-agent-example-docker) if necessary. 6 | 7 | 8 | ## Getting Started 9 | 10 | ### Install the `fly` command-line interface 11 | 12 | https://fly.io/docs/flyctl/install/ 13 | 14 | ### Authenticate with Fly.io 15 | 16 | ```bash 17 | fly auth login 18 | ``` 19 | 20 | ### Copy the sample `fly.toml` file 21 | 22 | Copy the `fly.toml` file to the root of your project (wherever your `Dockerfile` is located). 23 | 24 | ### Create your app 25 | 26 | Create your app, and use the `fly.toml` file you already have. You can change the name if you'd like, both in the file and in the command below. 27 | 28 | ```bash 29 | fly apps create agent-example 30 | ``` 31 | 32 | ### Create secrets 33 | 34 | You will need to create secrets for your app. You can do this using the `fly secrets` command.
35 | 36 | ```bash 37 | fly secrets set --app agent-example \ 38 | LIVEKIT_URL="wss://your-url-from-livekit-cloud-dashboard.livekit.cloud" \ 39 | LIVEKIT_API_KEY="api-key-from-livekit-cloud-dashboard" \ 40 | LIVEKIT_API_SECRET="api-secret-from-livekit-cloud-dashboard" 41 | ``` 42 | 43 | These secrets will be available as environment variables in the worker process. You will likely need to add additional secrets here as well depending on your agent, for example, `OPENAI_API_KEY`. 44 | 45 | ### Deploy your app 46 | 47 | ```bash 48 | fly deploy -c fly.toml 49 | ``` 50 | 51 | ### Scaling 52 | 53 | Scaling can be done manually using `fly` commands: 54 | 55 | ```bash 56 | fly scale count <count> --app agent-example 57 | ``` 58 | 59 | For autoscaling on Fly.io, see their guide: https://fly.io/docs/launch/autoscale-by-metric/ 60 | -------------------------------------------------------------------------------- /fly.io/fly.toml: -------------------------------------------------------------------------------- 1 | app = 'agent-example' 2 | primary_region = 'iad' 3 | 4 | # As of now, the maximum kill_timeout on fly.io is 5 minutes. After 5 | # 5 minutes, in-progress AI voice conversations will be killed. This is a limitation of 6 | # fly.io, and we recommend that your app be able to gracefully handle this. 7 | # The easiest way to handle this is to create a new LiveKit room, which will receive 8 | # a new agent. 9 | kill_timeout = "300s" 10 | 11 | [build] 12 | dockerfile = "Dockerfile" 13 | ignorefile = ".dockerignore" 14 | 15 | [env] 16 | PYTHONUNBUFFERED = '1' 17 | 18 | [[vm]] 19 | memory = '4gb' 20 | cpu_kind = 'shared' 21 | cpus = 2 22 | 23 | [deploy] 24 | # We want to leave the previous deploy running to allow active conversations to drain, 25 | # so we use a bluegreen strategy. 26 | strategy = "bluegreen" 27 | 28 | # A check is required for the bluegreen deployment strategy. 29 | [[checks]] 30 | name = "healthcheck" 31 | type = "tcp" 32 | interval = "10s" 33 | timeout = "2s" 34 | grace_period = "5s" 35 | port = 8081 # Default health check port -------------------------------------------------------------------------------- /kubernetes/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Deployment for LiveKit Agents 2 | 3 | This directory provides an example Kubernetes deployment manifest for LiveKit Agents, making it easy to deploy and scale your agents on any Kubernetes cluster. 4 | 5 | You also need a working agents app and Dockerfile. See the examples for [Python](/python-agent-example-app) or [Node.js](/node-agent-example-docker) if necessary. 6 | 7 | ## Deployment Steps 8 | 9 | The basic steps are as follows: 10 | 11 | 1. Review and modify the `agent-manifest.yaml` file to suit your needs: 12 | - Update the environment variables with your LiveKit URL, API key, and secret. The manifest reads these from a Kubernetes Secret named `agent-example-livekit`; see the example at the end of this guide for creating it. 13 | - Adjust resource limits and requests as needed for your workload. 14 | 15 | 2. Apply the manifest to your Kubernetes cluster: 16 | ```bash 17 | kubectl apply -f agent-manifest.yaml 18 | ``` 19 | 20 | See the [Kubernetes documentation](https://kubernetes.io/docs/home/) for more information on managing your Kubernetes cluster.
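21 | 22 | ## Creating the LiveKit secret 23 | 24 | The example manifest reads `LIVEKIT_URL`, `LIVEKIT_API_KEY`, and `LIVEKIT_API_SECRET` from a Kubernetes Secret named `agent-example-livekit` in the `livekit` namespace. One way to create the namespace and secret with `kubectl` (the credential values below are placeholders): 25 | 26 | ```bash 27 | kubectl create namespace livekit 28 | kubectl -n livekit create secret generic agent-example-livekit \ 29 | --from-literal=LIVEKIT_URL="wss://your-livekit-url.livekit.cloud" \ 30 | --from-literal=LIVEKIT_API_KEY="your-livekit-api-key" \ 31 | --from-literal=LIVEKIT_API_SECRET="your-livekit-api-secret" 32 | ```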
-------------------------------------------------------------------------------- /kubernetes/agent-manifest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: agent-example 5 | namespace: livekit 6 | labels: 7 | app: agent-example 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: agent-example 13 | template: 14 | metadata: 15 | labels: 16 | app: agent-example 17 | spec: 18 | # Give the agent 10 minutes to finish up any ongoing conversations 19 | terminationGracePeriodSeconds: 600 20 | 21 | containers: 22 | - name: agent-example 23 | image: your-docker-repo.com/path-to-your-docker-build:latest 24 | 25 | # Good for ~30 concurrent 1:1 AI conversations 26 | resources: 27 | requests: 28 | cpu: "4" 29 | memory: "8Gi" 30 | limits: 31 | cpu: "4" 32 | memory: "8Gi" 33 | 34 | env: 35 | - name: LIVEKIT_URL 36 | valueFrom: 37 | secretKeyRef: 38 | name: agent-example-livekit 39 | key: LIVEKIT_URL 40 | - name: LIVEKIT_API_KEY 41 | valueFrom: 42 | secretKeyRef: 43 | name: agent-example-livekit 44 | key: LIVEKIT_API_KEY 45 | - name: LIVEKIT_API_SECRET 46 | valueFrom: 47 | secretKeyRef: 48 | name: agent-example-livekit 49 | key: LIVEKIT_API_SECRET -------------------------------------------------------------------------------- /node-agent-example-docker/.dockerignore: -------------------------------------------------------------------------------- 1 | # Node.js dependencies 2 | node_modules 3 | npm-debug.log 4 | yarn-error.log 5 | pnpm-debug.log 6 | 7 | # Build outputs 8 | dist 9 | build 10 | coverage 11 | 12 | # Local environment & config files 13 | .env 14 | .env.local 15 | .DS_Store 16 | 17 | # Logs & temp files 18 | *.log 19 | *.gz 20 | *.tgz 21 | .tmp 22 | .cache 23 | 24 | # Docker artifacts 25 | Dockerfile* 26 | .dockerignore 27 | 28 | # Git & Editor files 29 | .git 30 | .gitignore 31 | .idea 32 | .vscode 33 | -------------------------------------------------------------------------------- /node-agent-example-docker/Dockerfile: -------------------------------------------------------------------------------- 1 | # this is an example Dockerfile that builds a minimal container for running LK Agents 2 | # syntax=docker/dockerfile:1 3 | FROM node:20-slim AS base 4 | 5 | WORKDIR /app 6 | 7 | RUN npm install -g pnpm@9.7.0 8 | 9 | # throw away build stage to reduce size of final image 10 | FROM base AS build 11 | 12 | RUN apt-get update -qq && apt-get install --no-install-recommends -y ca-certificates 13 | COPY --link . . 14 | 15 | RUN pnpm install --frozen-lockfile 16 | RUN npm run build 17 | 18 | FROM base 19 | COPY --from=build /app /app 20 | COPY --from=build /etc/ssl/certs /etc/ssl/certs 21 | 22 | # start the server by default, this can be overwritten at runtime 23 | EXPOSE 8081 24 | 25 | CMD [ "node", "./dist/agent.js", "start" ] 26 | -------------------------------------------------------------------------------- /node-agent-example-docker/README.md: -------------------------------------------------------------------------------- 1 | # Node.js Agents Docker Example 2 | 3 | This directory contains an example production-ready Dockerfile for [LiveKit Agents for Node.js](https://github.com/livekit/agents-js). 
4 | 5 | -------------------------------------------------------------------------------- /python-agent-example-app/.dockerignore: -------------------------------------------------------------------------------- 1 | # Python artifacts 2 | venv/ 3 | __pycache__/ 4 | 5 | # Local environment & config files 6 | .env 7 | .env.local 8 | .DS_Store 9 | 10 | # Logs & temp files 11 | *.log 12 | *.gz 13 | *.tgz 14 | .tmp 15 | .cache 16 | 17 | # Docker artifacts 18 | Dockerfile* 19 | .dockerignore 20 | 21 | # Git & Editor files 22 | .git 23 | .gitignore 24 | .idea 25 | .vscode 26 | -------------------------------------------------------------------------------- /python-agent-example-app/Dockerfile: -------------------------------------------------------------------------------- 1 | # This is an example Dockerfile that builds a minimal container for running LK Agents 2 | # syntax=docker/dockerfile:1 3 | ARG PYTHON_VERSION=3.11.6 4 | FROM python:${PYTHON_VERSION}-slim 5 | 6 | # Keeps Python from buffering stdout and stderr to avoid situations where 7 | # the application crashes without emitting any logs due to buffering. 8 | ENV PYTHONUNBUFFERED=1 9 | 10 | # Create a non-privileged user that the app will run under. 11 | # See https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user 12 | ARG UID=10001 13 | RUN adduser \ 14 | --disabled-password \ 15 | --gecos "" \ 16 | --home "/home/appuser" \ 17 | --shell "/sbin/nologin" \ 18 | --uid "${UID}" \ 19 | appuser 20 | 21 | 22 | # Install gcc and other build dependencies. 23 | RUN apt-get update && \ 24 | apt-get install -y \ 25 | gcc \ 26 | python3-dev \ 27 | && rm -rf /var/lib/apt/lists/* 28 | 29 | USER appuser 30 | 31 | RUN mkdir -p /home/appuser/.cache 32 | RUN chown -R appuser /home/appuser/.cache 33 | 34 | WORKDIR /home/appuser 35 | 36 | COPY requirements.txt . 37 | RUN python -m pip install --user --no-cache-dir -r requirements.txt 38 | 39 | COPY . . 40 | 41 | # ensure that any dependent models are downloaded at build-time 42 | RUN python main.py download-files 43 | 44 | # expose healthcheck port 45 | EXPOSE 8081 46 | 47 | # Run the application. 48 | CMD ["python", "main.py", "start"] 49 | -------------------------------------------------------------------------------- /python-agent-example-app/README.md: -------------------------------------------------------------------------------- 1 | # Python Agents Example Application 2 | 3 | This directory contains an example production-ready `Dockerfile` and bare bones app for [LiveKit Agents](https://github.com/livekit/agents). 
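4 | 5 | To try the image locally, one approach is to build it with Docker and pass your LiveKit credentials as environment variables; the image tag and credential values below are placeholders: 6 | 7 | ```bash 8 | docker build -t python-agent-example . 9 | docker run --rm \ 10 | -e LIVEKIT_URL="wss://your-livekit-url.livekit.cloud" \ 11 | -e LIVEKIT_API_KEY="your-livekit-api-key" \ 12 | -e LIVEKIT_API_SECRET="your-livekit-api-secret" \ 13 | python-agent-example 14 | ```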
-------------------------------------------------------------------------------- /python-agent-example-app/main.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, WorkerType, cli 4 | 5 | logger = logging.getLogger("my-worker") 6 | logger.setLevel(logging.INFO) 7 | 8 | async def entrypoint(ctx: JobContext): 9 | logger.info("starting entrypoint") 10 | await ctx.connect(auto_subscribe=AutoSubscribe.SUBSCRIBE_ALL) 11 | logger.info("connected to the room") 12 | 13 | 14 | if __name__ == "__main__": 15 | cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, worker_type=WorkerType.ROOM)) -------------------------------------------------------------------------------- /python-agent-example-app/requirements.txt: -------------------------------------------------------------------------------- 1 | livekit-agents>=0.9.0 -------------------------------------------------------------------------------- /render.com/README.md: -------------------------------------------------------------------------------- 1 | # Render.com LiveKit Agents Deployment Example 2 | 3 | This directory demonstrates how to deploy LiveKit Agents to [Render.com](https://render.com). 4 | 5 | You also need a working agents app and Dockerfile. See the examples for [Python](/python-agent-example-app) or [Node.js](/node-agent-example-docker) if necessary. 6 | 7 | ## Getting Started 8 | 9 | ### Copy the `render.yaml` file 10 | 11 | Copy the `render.yaml` file to the root of your project (wherever your `Dockerfile` is located). 12 | 13 | ### Create environment group 14 | 15 | In your [Render.com dashboard](https://dashboard.render.com) you'll need to create an environment group to store LiveKit secrets. 16 | Name the group `agent-example-env` (the name referenced by `render.yaml`) and add the variables: 17 | ```bash 18 | LIVEKIT_URL=wss://your-livekit-url.livekit.cloud 19 | LIVEKIT_API_KEY=your-livekit-api-key 20 | LIVEKIT_API_SECRET=your-livekit-api-secret 21 | ``` 22 | 23 | ### Launch your service 24 | 25 | To launch your service, create a blueprint in your Render.com dashboard 26 | pointing to your repo. 27 | 28 | This will find the `render.yaml` file and apply its changes. You can use 29 | the provided `render.yaml` here as a reference. Comments have been placed where 30 | you're likely to need your own configuration. 31 | -------------------------------------------------------------------------------- /render.com/render.yaml: -------------------------------------------------------------------------------- 1 | services: 2 | - type: pserv 3 | repo: https://github.com/livekit-examples/agent-deployment 4 | 5 | # Change this to the path containing your Dockerfile 6 | rootDir: python-agent-example-app 7 | 8 | name: agent-example 9 | runtime: image 10 | 11 | # Free plan is not available for private services 12 | plan: starter 13 | 14 | # Environment variables are stored in a group 15 | # created in the Render dashboard. 16 | envVars: 17 | - fromGroup: agent-example-env 18 | region: virginia 19 | 20 | # 300s is the standard allowed maximum. Talk to render.com support if you need this increased. 21 | maxShutdownDelaySeconds: 300 22 | 23 | # Sane defaults 24 | scaling: 25 | minInstances: 1 26 | maxInstances: 4 27 | targetCPUPercent: 60 28 | targetMemoryPercent: 60 29 | version: "1" --------------------------------------------------------------------------------