├── .dockerignore ├── .github ├── stale.yml └── workflows │ └── nodejs.yml ├── .gitignore ├── CHANGELOG.md ├── CODE_OF_CONDUCT.md ├── Cargo.lock ├── Cargo.toml ├── Dockerfile ├── LICENSE ├── Modes.md ├── README.md ├── crates ├── backend │ ├── Cargo.toml │ └── src │ │ ├── api.rs │ │ ├── lib.rs │ │ ├── sse.rs │ │ └── ws.rs ├── chirp │ ├── Cargo.toml │ ├── README.md │ └── src │ │ ├── counter.rs │ │ ├── main.rs │ │ ├── processors │ │ ├── command.rs │ │ ├── docker.rs │ │ └── mod.rs │ │ ├── sources │ │ ├── cron.rs │ │ ├── files.rs │ │ ├── mqtt.rs │ │ ├── storage.rs │ │ └── ws.rs │ │ └── trace.rs ├── frontend │ ├── Cargo.toml │ ├── Trunk.toml │ ├── index.html │ └── src │ │ ├── home.rs │ │ └── main.rs └── shared │ ├── Cargo.toml │ └── src │ ├── lib.rs │ ├── mysql.rs │ ├── postgres.rs │ ├── redis.rs │ └── sqlite.rs ├── examples ├── rest-api │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── standalone │ └── chirpy.yml └── uptime-bird │ ├── Cargo.toml │ ├── src │ └── main.rs │ └── tasks │ └── github.hurl └── screenshots ├── logo.svg ├── overview.png ├── queues.png ├── shot.png └── workers.png /.dockerignore: -------------------------------------------------------------------------------- 1 | */node_modules 2 | *.log 3 | -------------------------------------------------------------------------------- /.github/stale.yml: -------------------------------------------------------------------------------- 1 | # Number of days of inactivity before an issue becomes stale 2 | daysUntilStale: 90 3 | # Number of days of inactivity before a stale issue is closed 4 | daysUntilClose: 7 5 | # Issues with these labels will never be considered stale 6 | exemptLabels: 7 | - pinned 8 | - security 9 | - bug 10 | - enhancement 11 | # Label to use when marking an issue as stale 12 | staleLabel: wontfix 13 | # Comment to post when marking an issue as stale. Set to `false` to disable 14 | markComment: > 15 | This issue has been automatically marked as stale because it has not had 16 | recent activity. It will be closed if no further activity occurs. Thank you 17 | for your contributions. 18 | # Comment to post when closing a stale issue. 
Set to `false` to disable 19 | closeComment: false 20 | -------------------------------------------------------------------------------- /.github/workflows/nodejs.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | on: 3 | push: 4 | branches: 5 | - master 6 | pull_request: 7 | branches: 8 | - master 9 | jobs: 10 | build: 11 | name: Build, lint, and test on Node ${{ matrix.node }} and ${{ matrix.os }} 12 | 13 | runs-on: ${{ matrix.os }} 14 | strategy: 15 | matrix: 16 | node: [ '14.x', '16.x' ] 17 | os: [ubuntu-latest] 18 | 19 | steps: 20 | - name: Checkout repo 21 | uses: actions/checkout@v2 22 | 23 | - name: Use Node ${{ matrix.node }} 24 | uses: actions/setup-node@v2 25 | with: 26 | node-version: ${{ matrix.node }} 27 | cache: 'yarn' 28 | 29 | - name: Install deps 30 | run: yarn install --frozen-lockfile --silent 31 | env: 32 | CI: true 33 | 34 | - name: Lint 35 | run: yarn lint 36 | 37 | - name: Build 38 | run: yarn build 39 | env: 40 | CI: true 41 | 42 | - name: Test 43 | run: yarn test 44 | env: 45 | CI: true 46 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | node_modules/ 2 | .DS_Store 3 | dist 4 | yarn-error.log 5 | *.rdb 6 | website/build 7 | docker-compose.dockest-generated.yml 8 | dockest-error.json 9 | 10 | 11 | # Added by cargo 12 | 13 | /target 14 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ### Changelog 2 | 3 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to make participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 
11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | - Using welcoming and inclusive language 18 | - Being respectful of differing viewpoints and experiences 19 | - Gracefully accepting constructive criticism 20 | - Focusing on what is best for the community 21 | - Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | - The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | - Trolling, insulting/derogatory comments, and personal or political attacks 28 | - Public or private harassment 29 | - Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | - Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies within all project spaces, and it also applies when 49 | an individual is representing the project or its community in public spaces. 50 | Examples of representing a project or community include using an official 51 | project e-mail address, posting via an official social media account, or acting 52 | as an appointed representative at an online or offline event. Representation of 53 | a project may be further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at x2cg60knn@relay.firefox.com. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 
67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | members = [ 3 | "crates/backend", 4 | "crates/chirp", 5 | "crates/frontend", 6 | "crates/shared", 7 | "examples/rest-api", 8 | "examples/uptime-bird", 9 | ] 10 | resolver = "2" 11 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:lts 2 | 3 | WORKDIR /app/website 4 | 5 | EXPOSE 3000 35729 6 | COPY ./docs /app/docs 7 | COPY ./website /app/website 8 | RUN yarn install 9 | 10 | CMD ["yarn", "start"] 11 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Vitor Capretz (capretzvitor@gmail.com) 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /Modes.md: -------------------------------------------------------------------------------- 1 | # Modes 2 | There are two ways to run this project: 3 | 4 | 1. Board mode 5 | - Run your apalis backends elsewhere, in your own services. 6 | - Use the board to monitor and add new jobs. 7 | - Simple config: just provide the DB URL and the job type for each job. 8 | 9 | 2. Full mode 10 | - No need to write Rust code (expects a config file and runs apalis for you). 11 | - It consumes all the jobs and uses hurl to push jobs via HTTP. 12 | - Still use the board to monitor and add new jobs. 13 | - More complex config: provide a worker, layers, a DB URL, and a hurl file. 14 | 15 | Both modes ship as a binary, with feature flags controlling what is compiled in.
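As a rough illustration of Board mode, the sketch below mounts the board's `ApiBuilder` (from `crates/backend`) over a Redis-backed queue, mirroring how `apalis-chirp` wires it up. The `Email` job type, the `send-email` namespace, and the Redis URL are placeholder assumptions, and the exact apalis `0.6.0-rc` constructor signatures may differ:

```rust
use actix_web::{web, App, HttpServer};
use apalis_redis::RedisStorage;
use backend::api::ApiBuilder; // package = "apalis-board-backend"

// Placeholder job type; the workers that actually consume this
// queue would live in your own service.
#[derive(serde::Serialize, serde::Deserialize)]
struct Email {
    to: String,
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let conn = apalis_redis::connect("redis://127.0.0.1/")
        .await
        .expect("could not connect to redis");
    let storage: RedisStorage<Email> = RedisStorage::new(conn);

    HttpServer::new(move || {
        // Expose the queue to the board UI at /api/v1/backend/send-email.
        let api = ApiBuilder::new().add_storage(&storage, "send-email").build();
        App::new().service(web::scope("/api/v1").service(api))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
```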
16 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # apalis-board 2 | 3 | Apalis board contains a number of crates useful for building UIs for [apalis](https://github.com/geofmureithi/apalis). 4 | 5 | It helps you visualize your queues and their jobs. 6 | You get a beautiful UI that shows what's happening with each job in your queues, its status, and the actions available to get the job done. 7 | 8 | ## Screenshots 9 | 10 | ### Workers 11 |  12 | 13 | ### Queues 14 |  15 | 16 | ### Jobs 17 |  18 | 19 | ## Crates 20 | 21 | ### Backend 22 | 23 | An extensible `actix` service. It handles job scheduling, storage, and task execution. 24 | 25 | ### Chirp 26 | 27 | The chirp crate is the main entry point for the `apalis-chirp` command runner. It configures the application, sets up the necessary components, and starts the server. 28 | 29 | ### Frontend 30 | 31 | Contains a reusable frontend built with `hirola`. 32 | 33 | ### Shared 34 | 35 | The shared crate contains common code and utilities that are used across the other crates. This includes data models, configuration handling, and utility functions. 36 | 37 | ## Examples 38 | 39 | ### Rest API 40 | 41 | The `rest-api` example demonstrates how to use `apalis` and `actix` to create an application that runs jobs via HTTP requests. 42 | 43 | 44 | ### Building the Workspace 45 | 46 | To build the entire workspace, run the following command: 47 | 48 | ```sh 49 | cargo build --release 50 | ``` 51 | -------------------------------------------------------------------------------- /crates/backend/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "apalis-board-backend" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | actix-web = "4.5.1" 10 | actix-web-actors = "4.3.0" 11 | actix = "0.13.3" 12 | uuid = { version = "1.8", features = ["v4", "serde"] } 13 | shared = { package = "apalis-board-shared", path = "../shared" } 14 | apalis-core = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0" } 15 | serde = { version = "1", features = ["derive"] } 16 | futures = "0.3" 17 | tokio = "1" 18 | unescape = "0.1.0" 19 | -------------------------------------------------------------------------------- /crates/backend/src/api.rs: -------------------------------------------------------------------------------- 1 | use std::{collections::HashSet, fmt::Display}; 2 | 3 | use actix_web::{web, HttpResponse, Scope}; 4 | use apalis_core::{storage::Storage, task::task_id::TaskId}; 5 | use serde::{de::DeserializeOwned, Serialize}; 6 | use shared::{BackendExt, Filter, GetJobsResult}; 7 | use tokio::sync::RwLock; 8 | 9 | pub struct ApiBuilder { 10 | scope: Scope, 11 | list: HashSet<String>, 12 | } 13 | 14 | impl ApiBuilder { 15 | pub fn add_storage<J, S>(mut self, storage: &S, namespace: &str) -> Self 16 | where 17 | J: Serialize + DeserializeOwned + 'static, 18 | S: BackendExt + Clone, 19 | S: Storage<Job = J>, 20 | S: 'static + Send, 21 | S::Context: Serialize, 22 | S::Request: Serialize, 23 | <S as Storage>::Error: Display, 24 | { 25 | self.list.insert(namespace.to_string()); 26 | 27 | Self { 28 | scope: self.scope.service( 29 | Scope::new(namespace) 30 | .app_data(web::Data::new(RwLock::new(storage.clone()))) 31 | .route("",
web::get().to(get_jobs::<J, S>)) // Fetch jobs in the queue 32 | .route("/workers", web::get().to(get_workers::<J, S>)) // Fetch workers for the queue 33 | .route("/job", web::put().to(push_job::<J, S>)) // Allow adding jobs via the API 34 | .route("/job/{job_id}", web::get().to(get_job::<J, S>)), // Allow fetching a specific job 35 | ), 36 | list: self.list, 37 | } 38 | } 39 | 40 | pub fn build(self) -> Scope { 41 | async fn fetch_queues(queues: web::Data<HashSet<String>>) -> HttpResponse { 42 | HttpResponse::Ok().json(queues) 43 | } 44 | 45 | self.scope 46 | .app_data(web::Data::new(self.list)) 47 | .route("", web::get().to(fetch_queues)) 48 | } 49 | 50 | pub fn new() -> Self { 51 | Self { 52 | scope: Scope::new("backend"), 53 | list: HashSet::new(), 54 | } 55 | } 56 | } 57 | 58 | impl Default for ApiBuilder { 59 | fn default() -> Self { 60 | Self::new() 61 | } 62 | } 63 | 64 | async fn push_job<J, S>(job: web::Json<J>, storage: web::Data<RwLock<S>>) -> HttpResponse 65 | where 66 | J: Serialize + DeserializeOwned + 'static, 67 | S: Storage<Job = J> + Clone, 68 | S::Error: Display, 69 | { 70 | let res = storage.write().await.push(job.into_inner()).await; 71 | match res { 72 | Ok(parts) => { 73 | HttpResponse::Ok().body(format!("Job with ID [{}] added to queue", parts.task_id)) 74 | } 75 | Err(e) => HttpResponse::InternalServerError().body(format!("{e}")), 76 | } 77 | } 78 | 79 | async fn get_jobs<J, S>(storage: web::Data<RwLock<S>>, filter: web::Query<Filter>) -> HttpResponse 80 | where 81 | J: Serialize + DeserializeOwned + 'static, 82 | S: Storage<Job = J> + BackendExt + Send, 83 | S::Request: Serialize, 84 | { 85 | dbg!(&filter); 86 | // TODO: fix unwrap 87 | let stats = storage.read().await.stats().await.unwrap_or_default(); 88 | let res = storage 89 | .read() 90 | .await 91 | .list_jobs(&filter.status, filter.page) 92 | .await; 93 | match res { 94 | Ok(jobs) => HttpResponse::Ok().json(GetJobsResult { stats, jobs }), 95 | Err(_) => HttpResponse::InternalServerError().json("get_jobs_failed"), //TODO 96 | } 97 | } 98 | 99 | async fn get_workers<J, S>(storage: web::Data<RwLock<S>>) -> HttpResponse 100 | where 101 | J: Serialize + DeserializeOwned + 'static, 102 | S: Storage<Job = J> + BackendExt + Clone, 103 | { 104 | let workers = storage.read().await.list_workers().await; 105 | match workers { 106 | Ok(workers) => HttpResponse::Ok().json(workers), 107 | Err(_) => HttpResponse::InternalServerError().body("get_workers_failed"), //TODO 108 | } 109 | } 110 | 111 | async fn get_job<J, S>(job_id: web::Path<TaskId>, storage: web::Data<RwLock<S>>) -> HttpResponse 112 | where 113 | J: Serialize + DeserializeOwned + 'static, 114 | S: Storage<Job = J> + 'static, 115 | S::Error: Display, 116 | S::Context: Serialize, 117 | { 118 | let res = storage.write().await.fetch_by_id(&job_id).await; 119 | match res { 120 | Ok(Some(job)) => HttpResponse::Ok().json(job), 121 | Ok(None) => HttpResponse::NotFound().finish(), 122 | Err(e) => HttpResponse::InternalServerError().body(format!("{e}")), 123 | } 124 | } 125 | -------------------------------------------------------------------------------- /crates/backend/src/lib.rs: -------------------------------------------------------------------------------- 1 | pub mod api; 2 | pub mod sse; 3 | pub mod ws; 4 | -------------------------------------------------------------------------------- /crates/backend/src/sse.rs: -------------------------------------------------------------------------------- 1 | use std::{ 2 | pin::Pin, 3 | sync::Mutex, 4 | task::{Context, Poll}, 5 | time::Duration, 6 | }; 7 | 8 | use actix_web::{ 9 | rt::time, 10 | web::{Bytes, Data}, 11 | }; 12 | use futures::{ 13 | channel::mpsc::{channel, Receiver, Sender}, 14 | Stream, StreamExt, 15
| }; 16 | 17 | #[derive(Debug)] 18 | pub struct Broadcaster { 19 | clients: Vec>, 20 | } 21 | 22 | impl Default for Broadcaster { 23 | fn default() -> Self { 24 | Self::new() 25 | } 26 | } 27 | 28 | impl Broadcaster { 29 | pub fn create() -> Data> { 30 | // Data ≃ Arc 31 | let me = Data::new(Mutex::new(Broadcaster::new())); 32 | 33 | // ping clients every 10 seconds to see if they are alive 34 | Broadcaster::spawn_ping(me.clone()); 35 | 36 | me 37 | } 38 | 39 | pub fn new() -> Self { 40 | Broadcaster { 41 | clients: Vec::new(), 42 | } 43 | } 44 | 45 | fn spawn_ping(me: Data>) { 46 | let mut interval = time::interval(Duration::from_millis(500)); 47 | let task = async move { 48 | loop { 49 | interval.tick().await; 50 | let _ = me.lock().map(|mut res| res.remove_stale_clients()); 51 | } 52 | }; 53 | tokio::spawn(task); 54 | } 55 | 56 | fn remove_stale_clients(&mut self) { 57 | let mut ok_clients = Vec::new(); 58 | for client in self.clients.iter() { 59 | let result = client.clone().try_send(Bytes::from("data: ping\n\n")); 60 | 61 | if let Ok(()) = result { 62 | ok_clients.push(client.clone()); 63 | } 64 | } 65 | self.clients = ok_clients; 66 | } 67 | 68 | pub fn new_client(&mut self) -> Client { 69 | let (tx, rx) = channel(100); 70 | 71 | tx.clone() 72 | .try_send(Bytes::from("data: connected\n\n")) 73 | .unwrap(); 74 | 75 | self.clients.push(tx); 76 | Client(rx) 77 | } 78 | 79 | pub fn send(&self, msg: &str) { 80 | let msg = unescape::unescape(msg).unwrap(); 81 | let msg = Bytes::from(["data: ", &msg, "\n\n"].concat()); 82 | 83 | for client in self.clients.iter().filter(|client| !client.is_closed()) { 84 | client.clone().try_send(msg.clone()).unwrap(); 85 | } 86 | } 87 | } 88 | 89 | // wrap Receiver in own type, with correct error type 90 | pub struct Client(Receiver); 91 | 92 | impl Stream for Client { 93 | type Item = Result; 94 | 95 | fn poll_next(mut self: Pin<&mut Client>, cx: &mut Context<'_>) -> Poll> { 96 | self.0.poll_next_unpin(cx).map(|c| Ok(c).transpose()) 97 | } 98 | } 99 | -------------------------------------------------------------------------------- /crates/backend/src/ws.rs: -------------------------------------------------------------------------------- 1 | use actix::prelude::{Message, Recipient}; 2 | use actix::{fut, ActorContext, ContextFutureSpawner, WrapFuture}; 3 | use actix::{Actor, Running, StreamHandler}; 4 | use actix::{ActorFutureExt, Addr}; 5 | use actix::{AsyncContext, Handler}; 6 | use actix_web_actors::ws; 7 | use actix_web_actors::ws::Message::Text; 8 | use std::time::{Duration, Instant}; 9 | use uuid::Uuid; 10 | 11 | use self::lobby::Lobby; 12 | 13 | #[derive(Message)] 14 | #[rtype(result = "()")] 15 | pub struct WsMessage(pub String); 16 | 17 | #[derive(Message)] 18 | #[rtype(result = "()")] 19 | pub struct Connect { 20 | pub addr: Recipient, 21 | pub task: String, 22 | pub self_id: Uuid, 23 | } 24 | 25 | #[derive(Message)] 26 | #[rtype(result = "()")] 27 | pub struct Disconnect { 28 | pub id: Uuid, 29 | pub task: String, 30 | } 31 | 32 | #[derive(Message)] 33 | #[rtype(result = "()")] 34 | pub struct ClientActorMessage { 35 | pub id: Uuid, 36 | pub msg: String, 37 | pub task: String, 38 | } 39 | 40 | pub mod lobby { 41 | use actix::prelude::{Actor, Context, Handler, Recipient}; 42 | use std::collections::{HashMap, HashSet}; 43 | use uuid::Uuid; 44 | 45 | use super::*; 46 | 47 | type Socket = Recipient; 48 | 49 | pub struct Lobby { 50 | sessions: HashMap, //self id to self 51 | tasks: HashMap>, //room id to list of users id 52 | } 53 | 54 | impl Default 
for Lobby { 55 | fn default() -> Lobby { 56 | Lobby { 57 | sessions: HashMap::new(), 58 | tasks: HashMap::new(), 59 | } 60 | } 61 | } 62 | 63 | impl Lobby { 64 | fn send_message(&self, message: &str, id_to: &Uuid) { 65 | if let Some(socket_recipient) = self.sessions.get(id_to) { 66 | let _ = socket_recipient.do_send(WsMessage(message.to_owned())); 67 | } else { 68 | println!("attempting to send message but couldn't find user id."); 69 | } 70 | } 71 | } 72 | 73 | impl Actor for Lobby { 74 | type Context = Context; 75 | } 76 | 77 | impl Handler for Lobby { 78 | type Result = (); 79 | 80 | fn handle(&mut self, msg: Disconnect, _: &mut Context) { 81 | if self.sessions.remove(&msg.id).is_some() { 82 | self.tasks 83 | .get(&msg.task) 84 | .unwrap() 85 | .iter() 86 | .filter(|conn_id| *conn_id.to_owned() != msg.id) 87 | .for_each(|user_id| { 88 | self.send_message(&format!("{} disconnected.", &msg.id), user_id) 89 | }); 90 | if let Some(lobby) = self.tasks.get_mut(&msg.task) { 91 | if lobby.len() > 1 { 92 | lobby.remove(&msg.id); 93 | } else { 94 | //only one in the lobby, remove it entirely 95 | self.tasks.remove(&msg.task); 96 | } 97 | } 98 | } 99 | } 100 | } 101 | 102 | impl Handler for Lobby { 103 | type Result = (); 104 | 105 | fn handle(&mut self, msg: Connect, _: &mut Context) -> Self::Result { 106 | self.tasks 107 | .entry(msg.task.clone()) 108 | .or_insert_with(HashSet::new) 109 | .insert(msg.self_id); 110 | 111 | self.tasks 112 | .get(&msg.task) 113 | .unwrap() 114 | .iter() 115 | .filter(|conn_id| *conn_id.to_owned() != msg.self_id) 116 | .for_each(|conn_id| { 117 | self.send_message(&format!("{} just joined!", msg.self_id), conn_id) 118 | }); 119 | 120 | self.sessions.insert(msg.self_id, msg.addr); 121 | 122 | self.send_message(&format!("your id is {}", msg.self_id), &msg.self_id); 123 | } 124 | } 125 | 126 | impl Handler for Lobby { 127 | type Result = (); 128 | 129 | fn handle(&mut self, msg: ClientActorMessage, _ctx: &mut Context) -> Self::Result { 130 | if msg.msg.starts_with("\\w") { 131 | if let Some(id_to) = msg.msg.split(' ').collect::>().get(1) { 132 | self.send_message(&msg.msg, &Uuid::parse_str(id_to).unwrap()); 133 | } 134 | } else { 135 | self.tasks 136 | .get(&msg.task) 137 | .unwrap() 138 | .iter() 139 | .for_each(|client| self.send_message(&msg.msg, client)); 140 | } 141 | } 142 | } 143 | } 144 | 145 | const HEARTBEAT_INTERVAL: Duration = Duration::from_secs(5); 146 | const CLIENT_TIMEOUT: Duration = Duration::from_secs(10); 147 | 148 | pub struct WsConn { 149 | task: String, 150 | lobby_addr: Addr, 151 | hb: Instant, 152 | id: Uuid, 153 | } 154 | 155 | impl WsConn { 156 | pub fn new(task: String, lobby: Addr) -> WsConn { 157 | WsConn { 158 | id: Uuid::new_v4(), 159 | task, 160 | hb: Instant::now(), 161 | lobby_addr: lobby, 162 | } 163 | } 164 | } 165 | 166 | impl Actor for WsConn { 167 | type Context = ws::WebsocketContext; 168 | 169 | fn started(&mut self, ctx: &mut Self::Context) { 170 | self.hb(ctx); 171 | 172 | let addr = ctx.address(); 173 | self.lobby_addr 174 | .send(Connect { 175 | addr: addr.recipient(), 176 | task: self.task.clone(), 177 | self_id: self.id, 178 | }) 179 | .into_actor(self) 180 | .then(|res, _, ctx| { 181 | match res { 182 | Ok(_res) => (), 183 | _ => ctx.stop(), 184 | } 185 | fut::ready(()) 186 | }) 187 | .wait(ctx); 188 | } 189 | 190 | fn stopping(&mut self, _: &mut Self::Context) -> Running { 191 | self.lobby_addr.do_send(Disconnect { 192 | id: self.id, 193 | task: self.task.clone(), 194 | }); 195 | Running::Stop 196 | } 197 | } 198 | 199 
| impl WsConn { 200 | fn hb(&self, ctx: &mut ws::WebsocketContext) { 201 | ctx.run_interval(HEARTBEAT_INTERVAL, |act, ctx| { 202 | if Instant::now().duration_since(act.hb) > CLIENT_TIMEOUT { 203 | println!("Disconnecting failed heartbeat"); 204 | ctx.stop(); 205 | return; 206 | } 207 | 208 | ctx.ping(b"hi"); 209 | }); 210 | } 211 | } 212 | 213 | impl StreamHandler> for WsConn { 214 | fn handle(&mut self, msg: Result, ctx: &mut Self::Context) { 215 | match msg { 216 | Ok(ws::Message::Ping(msg)) => { 217 | self.hb = Instant::now(); 218 | ctx.pong(&msg); 219 | } 220 | Ok(ws::Message::Pong(_)) => { 221 | self.hb = Instant::now(); 222 | } 223 | Ok(ws::Message::Binary(bin)) => ctx.binary(bin), 224 | Ok(ws::Message::Close(reason)) => { 225 | ctx.close(reason); 226 | ctx.stop(); 227 | } 228 | Ok(ws::Message::Continuation(_)) => { 229 | ctx.stop(); 230 | } 231 | Ok(ws::Message::Nop) => (), 232 | Ok(Text(s)) => self.lobby_addr.do_send(ClientActorMessage { 233 | id: self.id, 234 | msg: s.to_string(), 235 | task: self.task.clone(), 236 | }), 237 | Err(e) => std::panic::panic_any(e), 238 | } 239 | } 240 | } 241 | 242 | impl Handler for WsConn { 243 | type Result = (); 244 | 245 | fn handle(&mut self, msg: WsMessage, ctx: &mut Self::Context) { 246 | ctx.text(msg.0); 247 | } 248 | } 249 | -------------------------------------------------------------------------------- /crates/chirp/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "apalis-chirp" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | [dependencies] 7 | anyhow = "1" 8 | figment = { version = "0.10", features = ["yaml"] } 9 | serde = "1" 10 | serde_json = "1" 11 | cron = "0.11.0" 12 | apalis = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", features = [ 13 | "retry", 14 | "limit", 15 | "catch-panic" 16 | ] } 17 | apalis-redis = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0" } 18 | apalis-sql = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", features = [ 19 | "mysql", 20 | "postgres", 21 | "sqlite", 22 | ] } 23 | apalis-cron = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0" } 24 | tokio = { version = "1", features = ["full"] } 25 | tower = { version = "0.4" } 26 | shlex = "1.3.0" 27 | chrono = "0.4" 28 | clap = { version = "4.5.7", features = ["derive"] } 29 | backend = { path = "../backend", package = "apalis-board-backend" } 30 | actix-web = "4" 31 | futures = "0.3" 32 | actix-cors = "0.6.1" 33 | async-process = "2.2.3" 34 | bollard = "0.16" 35 | tracing-subscriber = { version = "0.3.11", features = ["json", "env-filter"] } 36 | 37 | 38 | [dependencies.tracing] 39 | default-features = false 40 | version = "0.1" 41 | -------------------------------------------------------------------------------- /crates/chirp/README.md: -------------------------------------------------------------------------------- 1 | 2 | # Apalis Chirp 3 | 4 | `apalis-chirp` is a command runner application that is designed to be invoked via HTTP or cron jobs. It leverages multiple storage backends and supports running tasks defined in a config file. 5 | 6 | ## Features 7 | 8 | - **HTTP and Cron Job Integration**: Run tasks through HTTP requests or scheduled cron jobs. 9 | - **Storage Support**: Supports Redis, MySQL, PostgreSQL, and SQLite for storing job data. 
10 | - **Docker Integration**: Run commands in Docker containers. 11 | - **Real-time Event Streaming**: SSE (Server-Sent Events) support for real-time updates. 12 | - **Structured Logging**: JSON-formatted logs for easy monitoring and debugging. 13 | 14 | ## Installation 15 | 16 | To build and run the `apalis-chirp` application, you need to have Rust and Cargo installed. Clone the repository and run the following command: 17 | 18 | ```sh 19 | cargo build --release 20 | ``` 21 | 22 | ## Configuration 23 | 24 | The application is configured via a YAML file that defines the jobs to be executed. Below is an example configuration: 25 | 26 | ```yaml 27 | jobs: 28 | example_job: 29 | source: 30 | Cron: "0 0 * * * *" 31 | task: 32 | Command: 33 | steps: 34 | step1: "echo 'Hello, World!'" 35 | docker: "alpine:latest" 36 | ``` 37 | 38 | ## Running the Application 39 | 40 | To run the application, use the following command, specifying your configuration file: 41 | 42 | ```sh 43 | ./target/release/apalis-chirp --config path/to/config.yaml 44 | ``` 45 | 46 | See [this example](../../examples/standalone/chirpy.yml). 47 | 48 | ## API Endpoints 49 | 50 | ### PUT /api/v1/backend/{namespace}/job 51 | 52 | Push a job into the named queue so that it runs immediately. The JSON body is the serde encoding of chirp's `LaunchJob` type (sketched after this README). 53 | 54 | Example: 55 | ```sh 56 | curl -X PUT \ 57 | http://127.0.0.1:8000/api/v1/backend/send-email/job \ 58 | -H 'cache-control: no-cache' \ 59 | -H 'content-type: application/json' \ 60 | -d '{ "Custom": { "value": "test" } }' 61 | ``` 62 | 63 | ### GET /api/v1/backend/events 64 | 65 | Listen to real-time updates via Server-Sent Events (SSE). 66 | 67 | ## Development 68 | 69 | ### Prerequisites 70 | 71 | - Rust 72 | - Cargo 73 | 74 | ### Building from Source 75 | 76 | Clone the repository and build the project: 77 | 78 | ```sh 79 | git clone  80 | cd crates/chirp 81 | cargo build --release 82 | ``` 83 | 84 | ### Running Tests 85 | 86 | To run tests, use the following command: 87 | 88 | ```sh 89 | cargo test 90 | ``` 91 | 92 | ## License 93 | 94 | This project is licensed under the MIT License.
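To make the `PUT` payload above concrete, here is a minimal round-trip sketch. The `LaunchJob` enum is copied from `crates/chirp/src/main.rs`, and the JSON string matches the curl example; serde's externally-tagged enum encoding is what produces the `{ "Custom": { ... } }` shape. This assumes `serde`, `serde_json`, and `chrono` (with its `serde` feature) as dependencies:

```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

// Copied from crates/chirp/src/main.rs: the payload accepted by
// PUT /api/v1/backend/{namespace}/job.
#[derive(Debug, Serialize, Deserialize)]
enum LaunchJob {
    Cron(DateTime<Utc>),
    Custom { value: serde_json::Value },
}

fn main() {
    // The same body the curl example sends.
    let body = r#"{ "Custom": { "value": "test" } }"#;
    let job: LaunchJob = serde_json::from_str(body).unwrap();
    println!("{job:?}"); // Custom { value: String("test") }
}
```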
95 | -------------------------------------------------------------------------------- /crates/chirp/src/counter.rs: -------------------------------------------------------------------------------- 1 | use std::sync::atomic::AtomicUsize; 2 | 3 | pub struct Overview { 4 | total_executions: AtomicUsize 5 | } 6 | -------------------------------------------------------------------------------- /crates/chirp/src/main.rs: -------------------------------------------------------------------------------- 1 | use actix_cors::Cors; 2 | use actix_web::{web, App, HttpResponse, HttpServer, Responder}; 3 | use apalis::layers::catch_panic::CatchPanicLayer; 4 | use apalis::layers::tracing::TraceLayer; 5 | use apalis::prelude::{Data, Monitor, WorkerBuilder, WorkerFactoryFn}; 6 | use apalis_cron::CronStream; 7 | use apalis_redis::RedisStorage; 8 | use apalis_sql::mysql::{MySqlPool, MysqlStorage}; 9 | use apalis_sql::postgres::{PgPool, PostgresStorage}; 10 | use apalis_sql::sqlite::{SqlitePool, SqliteStorage}; 11 | use backend::api::ApiBuilder; 12 | use backend::sse::Broadcaster; 13 | use chrono::{DateTime, Utc}; 14 | use clap::Parser; 15 | use figment::providers::{Format, Yaml}; 16 | use figment::Figment; 17 | use futures::io::BufReader; 18 | use futures::{future, AsyncBufReadExt, StreamExt}; 19 | use processors::docker::run_docker; 20 | use std::collections::HashMap; 21 | use std::sync::Mutex; 22 | use std::time::Duration; 23 | use std::{process::Stdio, str::FromStr}; 24 | use trace::{Subscriber, TaskSpan}; 25 | use tracing::info; 26 | use tracing_subscriber::layer::SubscriberExt; 27 | use tracing_subscriber::util::SubscriberInitExt; 28 | use tracing_subscriber::{EnvFilter, Layer}; 29 | 30 | mod processors; 31 | mod trace; 32 | 33 | use serde::{Deserialize, Serialize}; 34 | 35 | #[derive(Debug, Serialize, Deserialize)] 36 | enum LaunchJob { 37 | Cron(DateTime<Utc>), 38 | Custom { value: serde_json::Value }, 39 | } 40 | #[derive(Debug, Clone)] 41 | enum StorageType { 42 | Redis(RedisStorage<LaunchJob>), 43 | Mysql(MysqlStorage<LaunchJob>), 44 | Postgres(PostgresStorage<LaunchJob>), 45 | Sqlite(SqliteStorage<LaunchJob>), 46 | } 47 | #[derive(Deserialize, Clone, Debug)] 48 | enum Source { 49 | Http { 50 | backend: Option<String>, 51 | }, 52 | /// Cron 53 | Cron(String), 54 | } 55 | 56 | #[derive(Deserialize, Debug, Clone)] 57 | struct LaunchConfig { 58 | jobs: HashMap<String, Launch>, 59 | } 60 | 61 | #[derive(Deserialize, Clone, Debug)] 62 | struct Launch { 63 | // description: Option<String>, 64 | source: Source, 65 | task: Task, 66 | // layers: Vec<Layer>, 67 | } 68 | 69 | #[derive(Deserialize, Clone, Debug)] 70 | #[serde(untagged)] 71 | enum Task { 72 | Command { 73 | steps: HashMap<String, String>, 74 | docker: Option<String>, //image 75 | }, 76 | } 77 | 78 | impl From<DateTime<Utc>> for LaunchJob { 79 | fn from(value: DateTime<Utc>) -> Self { 80 | LaunchJob::Cron(value) 81 | } 82 | } 83 | 84 | async fn spawn_command(cmd: &str) -> anyhow::Result<()> { 85 | let mut words = shlex::split(cmd).ok_or(anyhow::anyhow!("parsing error"))?; 86 | let cmd = words.remove(0); 87 | let mut cmd = async_process::Command::new(cmd); 88 | for word in words { 89 | cmd.arg(word); 90 | } 91 | let mut child = cmd.stdout(Stdio::piped()).spawn()?; 92 | let mut lines = BufReader::new(child.stdout.take().unwrap()).lines(); 93 | 94 | while let Some(line) = lines.next().await { 95 | println!("{:?}", line); 96 | } 97 | Ok(()) 98 | } 99 | 100 | async fn run_task(command: &Launch) -> anyhow::Result<()> { 101 | match &command.task { 102 | Task::Command { steps, docker } => match docker { 103 | Some(image) => { 104 | run_docker(image, steps).await?; 105 | } 106 | None
=> { 107 | for cmd in steps.values() { 108 | spawn_command(cmd).await?; 109 | } 110 | } 111 | }, 112 | }; 113 | Ok(()) 114 | } 115 | 116 | async fn launch_job(job: LaunchJob, cur: Data) { 117 | info!("Job started {job:?}, {cur:?}"); 118 | let res = run_task(&cur).await; 119 | 120 | if let Err(e) = res { 121 | tracing::error!("job failed with error: {e}"); 122 | } else { 123 | info!("Job done"); 124 | }; 125 | } 126 | 127 | /// Simple program to greet a person 128 | #[derive(Parser, Debug)] 129 | #[command(version, about, long_about = None)] 130 | struct Args { 131 | #[arg(short, long)] 132 | config: String, 133 | } 134 | 135 | async fn new_client(broadcaster: actix_web::web::Data>) -> impl Responder { 136 | let rx = broadcaster.lock().unwrap().new_client(); 137 | 138 | HttpResponse::Ok() 139 | .append_header(("content-type", "text/event-stream")) 140 | .append_header(("Cache-Control", "no-cache")) 141 | .append_header(("Connection", "keep-alive")) 142 | .streaming(rx) 143 | } 144 | 145 | #[tokio::main] 146 | async fn main() -> std::io::Result<()> { 147 | std::env::set_var("RUST_LOG", "debug,bollard::docker=error,sqlx::query=error"); 148 | let broadcaster = Broadcaster::create(); 149 | let line_sub = Subscriber { 150 | tx: broadcaster.clone(), 151 | }; 152 | let tracer = tracing_subscriber::registry() 153 | .with( 154 | tracing_subscriber::fmt::layer().with_filter( 155 | EnvFilter::builder() 156 | .parse("debug,bollard::docker=error,sqlx::query=error") 157 | .unwrap(), 158 | ), 159 | ) 160 | .with( 161 | tracing_subscriber::fmt::layer() 162 | .with_ansi(false) 163 | .fmt_fields(tracing_subscriber::fmt::format::JsonFields::new()) 164 | .event_format(tracing_subscriber::fmt::format().with_ansi(false).json()) 165 | .with_writer(line_sub) 166 | .with_filter( 167 | EnvFilter::builder() 168 | .parse("debug,bollard::docker=error,sqlx::query=error") 169 | .unwrap(), 170 | ), 171 | ); 172 | tracer.try_init().unwrap(); 173 | let args = Args::parse(); 174 | let config: LaunchConfig = Figment::new() 175 | .merge(Yaml::file(args.config)) 176 | .extract() 177 | .unwrap(); 178 | 179 | let mut monitor = Monitor::new(); 180 | let mut exposed: Vec<(String, StorageType)> = vec![]; 181 | 182 | for (job, command) in config.jobs.iter() { 183 | match &command.source { 184 | Source::Http { backend } => match backend.as_ref().map(|s| s.as_str()) { 185 | None | Some("default") => { 186 | let cfg = apalis_sql::Config::default().set_namespace(job); 187 | let pool = SqlitePool::connect("sqlite::memory:").await.unwrap(); 188 | 189 | SqliteStorage::setup(&pool) 190 | .await 191 | .expect("unable to run migrations for sqlite"); 192 | let storage: SqliteStorage = 193 | SqliteStorage::new_with_config(pool, cfg); 194 | exposed.push((job.clone(), StorageType::Sqlite(storage.clone()))); 195 | monitor = monitor.register( 196 | WorkerBuilder::new(job) 197 | .layer(CatchPanicLayer::new()) 198 | .data(command.clone()) 199 | .data(config.clone()) 200 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 201 | .backend(storage) 202 | .build_fn(launch_job), 203 | ); 204 | } 205 | Some(url) if url.starts_with("redis://") => { 206 | let conn = apalis_redis::connect(url).await.unwrap(); 207 | let cfg = apalis_redis::Config::default().set_namespace(job); 208 | 209 | let redis: RedisStorage = 210 | RedisStorage::new_with_config(conn.clone(), cfg); 211 | exposed.push((job.clone(), StorageType::Redis(redis.clone()))); 212 | monitor = monitor.register( 213 | WorkerBuilder::new(job) 214 | .layer(CatchPanicLayer::new()) 215 | 
.data(command.clone()) 216 | .data(config.clone()) 217 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 218 | .backend(redis) 219 | .build_fn(launch_job), 220 | ); 221 | } 222 | Some(url) if url.starts_with("mysql://") => { 223 | let cfg = apalis_sql::Config::default().set_namespace(job); 224 | let pool = MySqlPool::connect(url).await.unwrap(); 225 | 226 | MysqlStorage::setup(&pool) 227 | .await 228 | .expect("unable to run migrations for mysql"); 229 | let storage: MysqlStorage = MysqlStorage::new_with_config(pool, cfg); 230 | exposed.push((job.clone(), StorageType::Mysql(storage.clone()))); 231 | monitor = monitor.register( 232 | WorkerBuilder::new(job) 233 | .layer(CatchPanicLayer::new()) 234 | .data(command.clone()) 235 | .data(config.clone()) 236 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 237 | .backend(storage) 238 | .build_fn(launch_job), 239 | ); 240 | } 241 | Some(url) if url.starts_with("postgresql://") => { 242 | let cfg = apalis_sql::Config::default().set_namespace(job); 243 | let pool = PgPool::connect(url).await.unwrap(); 244 | 245 | PostgresStorage::setup(&pool) 246 | .await 247 | .expect("unable to run migrations for postgres"); 248 | let storage: PostgresStorage = 249 | PostgresStorage::new_with_config(pool, cfg); 250 | exposed.push((job.clone(), StorageType::Postgres(storage.clone()))); 251 | monitor = monitor.register( 252 | WorkerBuilder::new(job) 253 | .layer(CatchPanicLayer::new()) 254 | .data(command.clone()) 255 | .data(config.clone()) 256 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 257 | .backend(storage) 258 | .build_fn(launch_job), 259 | ); 260 | } 261 | Some(url) if url.starts_with("sqlite://") => { 262 | let cfg = apalis_sql::Config::default().set_namespace(job); 263 | let pool = SqlitePool::connect(url).await.unwrap(); 264 | 265 | SqliteStorage::setup(&pool) 266 | .await 267 | .expect("unable to run migrations for sqlite"); 268 | let storage: SqliteStorage = 269 | SqliteStorage::new_with_config(pool, cfg); 270 | exposed.push((job.clone(), StorageType::Sqlite(storage.clone()))); 271 | monitor = monitor.register( 272 | WorkerBuilder::new(job) 273 | .layer(CatchPanicLayer::new()) 274 | .data(command.clone()) 275 | .data(config.clone()) 276 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 277 | .backend(storage) 278 | .build_fn(launch_job), 279 | ); 280 | } 281 | _ => unimplemented!(), 282 | }, 283 | Source::Cron(cron) => { 284 | let schedule = apalis_cron::Schedule::from_str(cron).unwrap(); 285 | let stream = CronStream::new(schedule); 286 | monitor = monitor.register( 287 | WorkerBuilder::new("cron") 288 | .layer(CatchPanicLayer::new()) 289 | .data(command.clone()) 290 | .data(config.clone()) 291 | .layer(TraceLayer::new().make_span_with(TaskSpan::new(job))) 292 | .backend(stream) 293 | .build_fn(launch_job), 294 | ); 295 | } 296 | } 297 | } 298 | let http = async { 299 | HttpServer::new(move || { 300 | let mut api = ApiBuilder::new(); 301 | for (namespace, storage) in exposed.clone() { 302 | match storage { 303 | StorageType::Redis(redis) => { 304 | api = api.add_storage(&redis, &namespace); 305 | } 306 | StorageType::Sqlite(sqlite) => api = api.add_storage(&sqlite, &namespace), 307 | StorageType::Mysql(mysql) => api = api.add_storage(&mysql, &namespace), 308 | StorageType::Postgres(pg) => api = api.add_storage(&pg, &namespace), 309 | } 310 | } 311 | let scope = api.build().route("/events", web::get().to(new_client)); 312 | App::new() 313 | .wrap(Cors::permissive()) 314 | 
.app_data(broadcaster.clone()) 315 | .service(web::scope("/api/v1").service(scope)) 316 | }) 317 | .bind("127.0.0.1:8000")? 318 | .run() 319 | .await?; 320 | Ok(()) 321 | }; 322 | 323 | future::try_join( 324 | http, 325 | monitor 326 | .shutdown_timeout(Duration::from_secs(3)) 327 | .run_with_signal(tokio::signal::ctrl_c()), 328 | ) 329 | .await?; 330 | 331 | Ok(()) 332 | } 333 | -------------------------------------------------------------------------------- /crates/chirp/src/processors/command.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/processors/command.rs -------------------------------------------------------------------------------- /crates/chirp/src/processors/docker.rs: -------------------------------------------------------------------------------- 1 | use std::collections::HashMap; 2 | 3 | use anyhow::bail; 4 | use bollard::{ 5 | container::{Config, RemoveContainerOptions}, 6 | exec::{CreateExecOptions, StartExecResults}, 7 | image::CreateImageOptions, 8 | Docker, API_DEFAULT_VERSION, 9 | }; 10 | use futures::{StreamExt, TryStreamExt}; 11 | use tracing::debug; 12 | 13 | pub async fn run_docker(image: &str, commands: &HashMap) -> anyhow::Result<()> { 14 | let docker = Docker::connect_with_unix( 15 | "/Users/geoffreymureithi/.colima/default/docker.sock", 16 | 120, 17 | API_DEFAULT_VERSION, 18 | ) 19 | .unwrap(); 20 | 21 | docker 22 | .create_image( 23 | Some(CreateImageOptions { 24 | from_image: image, 25 | ..Default::default() 26 | }), 27 | None, 28 | None, 29 | ) 30 | .try_collect::>() 31 | .await?; 32 | 33 | let alpine_config = Config { 34 | image: Some(image), 35 | tty: Some(true), 36 | ..Default::default() 37 | }; 38 | 39 | let id = docker 40 | .create_container::<&str, &str>(None, alpine_config) 41 | .await? 42 | .id; 43 | docker.start_container::(&id, None).await?; 44 | 45 | for command in commands.values() { 46 | // non interactive 47 | let exec = docker 48 | .create_exec( 49 | &id, 50 | CreateExecOptions { 51 | attach_stdout: Some(true), 52 | attach_stderr: Some(true), 53 | cmd: Some(shlex::split(command).unwrap()), 54 | ..Default::default() 55 | }, 56 | ) 57 | .await? 58 | .id; 59 | if let StartExecResults::Attached { mut output, .. } = 60 | docker.start_exec(&exec, None).await? 
61 | { 62 | while let Some(msg) = output.next().await { 63 | debug!("{msg:?}"); 64 | } 65 | } else { 66 | unreachable!(); 67 | } 68 | let info = docker.inspect_exec(&exec).await?; 69 | if info.exit_code.unwrap() != 0 { 70 | docker 71 | .remove_container( 72 | &id, 73 | Some(RemoveContainerOptions { 74 | force: true, 75 | ..Default::default() 76 | }), 77 | ) 78 | .await?; 79 | bail!("command failed with code: {}", info.exit_code.unwrap()) 80 | } 81 | } 82 | 83 | docker 84 | .remove_container( 85 | &id, 86 | Some(RemoveContainerOptions { 87 | force: true, 88 | ..Default::default() 89 | }), 90 | ) 91 | .await?; 92 | 93 | Ok(()) 94 | } 95 | -------------------------------------------------------------------------------- /crates/chirp/src/processors/mod.rs: -------------------------------------------------------------------------------- 1 | pub mod docker; 2 | -------------------------------------------------------------------------------- /crates/chirp/src/sources/cron.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/sources/cron.rs -------------------------------------------------------------------------------- /crates/chirp/src/sources/files.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/sources/files.rs -------------------------------------------------------------------------------- /crates/chirp/src/sources/mqtt.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/sources/mqtt.rs -------------------------------------------------------------------------------- /crates/chirp/src/sources/storage.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/sources/storage.rs -------------------------------------------------------------------------------- /crates/chirp/src/sources/ws.rs: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/crates/chirp/src/sources/ws.rs -------------------------------------------------------------------------------- /crates/chirp/src/trace.rs: -------------------------------------------------------------------------------- 1 | use std::{io::LineWriter, sync::Mutex}; 2 | 3 | use apalis::{layers::tracing::MakeSpan, prelude::*}; 4 | use backend::sse::Broadcaster; 5 | use tracing::{Level, Span}; 6 | use tracing_subscriber::fmt::MakeWriter; 7 | 8 | #[derive(Debug, Clone)] 9 | pub struct TaskSpan { 10 | level: Level, 11 | name: String, 12 | } 13 | 14 | impl TaskSpan { 15 | /// Create a new `TaskSpan`. 16 | pub fn new(name: &str) -> Self { 17 | Self { 18 | level: Level::DEBUG, 19 | name: name.to_string(), 20 | } 21 | } 22 | } 23 | 24 | impl MakeSpan for TaskSpan { 25 | fn make_span(&mut self, req: &Request) -> Span { 26 | let task_id: &TaskId = req.get().unwrap(); 27 | let attempts: Attempt = req.get().cloned().unwrap_or_default(); 28 | let span = Span::current(); 29 | let task = &self.name; 30 | macro_rules! 
make_span { 31 | ($level:expr) => { 32 | tracing::span!( 33 | parent: span, 34 | $level, 35 | "task", 36 | task_type = task.to_string(), 37 | task_id = task_id.to_string(), 38 | attempt = attempts.current().to_string(), 39 | ) 40 | }; 41 | } 42 | 43 | match self.level { 44 | Level::ERROR => { 45 | make_span!(Level::ERROR) 46 | } 47 | Level::WARN => { 48 | make_span!(Level::WARN) 49 | } 50 | Level::INFO => { 51 | make_span!(Level::INFO) 52 | } 53 | Level::DEBUG => { 54 | make_span!(Level::DEBUG) 55 | } 56 | Level::TRACE => { 57 | make_span!(Level::TRACE) 58 | } 59 | } 60 | } 61 | } 62 | 63 | #[derive(Debug, Clone)] 64 | pub struct Subscriber { 65 | pub tx: actix_web::web::Data>, 66 | } 67 | 68 | impl<'a> MakeWriter<'a> for Subscriber { 69 | type Writer = LineWriter; 70 | 71 | fn make_writer(&self) -> Self::Writer { 72 | LineWriter::new(Self { 73 | tx: self.tx.clone(), 74 | }) 75 | } 76 | } 77 | 78 | impl std::io::Write for Subscriber { 79 | fn write(&mut self, buf: &[u8]) -> std::io::Result { 80 | let len = buf.len(); 81 | let _ = self 82 | .tx 83 | .try_lock() 84 | .map(|b| b.send(std::str::from_utf8(buf).unwrap_or_default())); 85 | Ok(len) 86 | } 87 | 88 | fn flush(&mut self) -> std::io::Result<()> { 89 | Ok(()) 90 | } 91 | } 92 | -------------------------------------------------------------------------------- /crates/frontend/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "apalis-board-frontend" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | hirola = { version = "0.4", features = ["dom"] } 10 | web-sys = { version = "0.3", features = ["EventSource"] } 11 | gloo-net = { version = "0.5" } 12 | shared = { path = "../shared", package = "apalis-board-shared", default-features = false } 13 | serde_json = "1" 14 | console_log = { version = "1", features = ["color"] } 15 | log = "0.4" 16 | strum = "0.26" 17 | serde = "1" 18 | -------------------------------------------------------------------------------- /crates/frontend/Trunk.toml: -------------------------------------------------------------------------------- 1 | [build] 2 | target = "index.html" 3 | dist = "dist" 4 | 5 | [[proxy]] 6 | rewrite = "/api/v1/" 7 | backend = "http://127.0.0.1:8000/api/v1" 8 | -------------------------------------------------------------------------------- /crates/frontend/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | Apalis Board 8 | 9 | 10 | 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /crates/frontend/src/home.rs: -------------------------------------------------------------------------------- 1 | use hirola::dom::XEffect; 2 | use hirola::{ 3 | dom::{app::App, Dom}, 4 | prelude::*, 5 | }; 6 | use serde::Serialize; 7 | 8 | use crate::Link; 9 | 10 | use crate::State; 11 | 12 | pub fn page(app: &App) -> Dom { 13 | html! { 14 | <> 15 | 16 | > 17 | } 18 | } 19 | 20 | pub fn resolve_json(val: V) -> String { 21 | let mut buf = Vec::new(); 22 | let formatter = serde_json::ser::PrettyFormatter::with_indent(b" \n"); 23 | let mut ser = serde_json::Serializer::with_formatter(&mut buf, formatter); 24 | val.serialize(&mut ser).unwrap(); 25 | String::from_utf8(buf).unwrap() 26 | } 27 | 28 | #[component] 29 | fn Main(app: App) -> Dom { 30 | html! 
{ 31 | 32 | 33 | "Stats" 34 | "Your system instance stats" 35 | 36 | {stats_card("Max memory", "0G", None, None)} 37 | {stats_card("Used memory", "0M", Some("0.01%"), Some("text-green-500"))} 38 | {stats_card("Connections", "0", None, None)} 39 | {stats_card("Queues", "0", None, None)} 40 | 41 | 42 | 43 | "Overview" 44 | "An overview of execution stats" 45 | 46 | {overview_card("Total jobs in queues", "0")} 47 | {overview_card("Jobs per minute", "0")} 48 | {overview_card("Jobs past hour", "0")} 49 | {overview_card("Failed jobs past 7 days", "0")} 50 | 51 | 52 | 53 | "Queues" 54 | "These are all the queues you have created on your apalis instance." 55 | 56 | {app.state().namespaces 57 | .signal_vec_cloned() 58 | .map_render(|ns| { 59 | html! { 60 | 61 | {queue_card(&ns, "0 jobs")} 62 | 63 | } 64 | }) 65 | } 66 | 67 | 68 | 69 | } 70 | } 71 | 72 | fn stats_card( 73 | title: &str, 74 | value: &str, 75 | extra_info: Option<&str>, 76 | extra_class: Option<&str>, 77 | ) -> Dom { 78 | html! { 79 | 80 | 81 | {title} 82 | 94 | 95 | 96 | 97 | 98 | {value} 99 | {if let Some(info) = extra_info { 100 | html! { {info} } 101 | } else { 102 | html! { <>> } 103 | }} 104 | 105 | 106 | } 107 | } 108 | 109 | fn overview_card(title: &str, value: &str) -> Dom { 110 | html! { 111 | 112 | 113 | {title} 114 | 115 | 116 | {value} 117 | 118 | 119 | } 120 | } 121 | 122 | pub fn queue_card(title: &str, info: &str) -> Dom { 123 | html! { 124 | 125 | 126 | {title} 127 | 139 | 140 | 141 | 142 | 143 | 144 | {info} 145 | 146 | // 147 | // 151 | // // Add your recharts or other chart library code here 152 | // 153 | // 154 | 155 | 156 | 157 | } 158 | } 159 | 160 | #[component] 161 | fn Logo() -> Dom { 162 | html! { 163 | 166 | } 167 | } 168 | 169 | #[component] 170 | pub fn Header(app: &'static App) -> Dom { 171 | html! 
{ 172 | 173 | 174 | 175 | 176 | 177 | "Apalis Board"" · 0.3.0" 178 | 179 | 180 | "Overview" 181 | "Queues" 182 | "Settings" 183 | "Github" 184 | 185 | 186 | } 187 | } 188 | -------------------------------------------------------------------------------- /crates/frontend/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::str::FromStr; 2 | 3 | use gloo_net::http::Request; 4 | use hirola::dom::app::router::Router; 5 | use hirola::dom::app::App; 6 | use hirola::dom::effects::prelude::*; 7 | use hirola::dom::Dom; 8 | use hirola::prelude::{Suspend, *}; 9 | use home::{queue_card, resolve_json}; 10 | use log::Level; 11 | use shared::{Filter, GetJobsResult, JobState, Worker}; 12 | use strum::IntoEnumIterator; 13 | use web_sys::EventSource; 14 | mod home; 15 | 16 | #[derive(Debug, Clone)] 17 | pub struct State { 18 | event_source: EventSource, 19 | namespaces: MutableVec, 20 | } 21 | 22 | impl State { 23 | async fn list_namespaces() -> Result, gloo_net::Error> { 24 | let res = Request::get(API_PATH).send().await?; 25 | res.json().await 26 | } 27 | 28 | async fn list_jobs( 29 | namespace: String, 30 | filter: Filter, 31 | ) -> Result, gloo_net::Error> { 32 | let res = Request::get(&format!("{API_PATH}/{namespace}")) 33 | .query([ 34 | ("page", filter.page.to_string()), 35 | ("status", filter.status.to_string()), 36 | ]) 37 | .send() 38 | .await?; 39 | res.json().await 40 | } 41 | 42 | async fn list_workers(namespace: String) -> Result, gloo_net::Error> { 43 | let res = Request::get(&format!("{API_PATH}/{namespace}/workers")) 44 | .send() 45 | .await?; 46 | res.json().await 47 | } 48 | } 49 | 50 | const API_PATH: &str = "/api/v1/backend"; 51 | 52 | fn namespace_page(app: &App) -> Dom { 53 | html! { 54 | <> 55 | 56 | > 57 | } 58 | } 59 | 60 | fn namespace_status_page(app: &App) -> Dom { 61 | html! { 62 | <> 63 | 64 | > 65 | } 66 | } 67 | 68 | fn queues_page(app: &App) -> Dom { 69 | let namespaces = &app.state().namespaces; 70 | html! { 71 | 72 | "Queues" 73 | "These are all the queues you have created on this apalis-board instance." 74 | 75 | {namespaces 76 | .signal_vec_cloned() 77 | .map_render(|ns| { 78 | html! { 79 | 80 | {queue_card(&ns, "0 jobs")} 81 | 82 | } 83 | }) 84 | } 85 | 86 | 87 | 88 | } 89 | } 90 | 91 | #[component] 92 | fn SidebarInput() -> Dom { 93 | html! { 94 | 95 | 100 | 101 | } 102 | } 103 | 104 | impl EffectAttribute for Link { 105 | type Handler = XEffect; 106 | fn read_as_attr(&self) -> String { 107 | "link".to_owned() 108 | } 109 | } 110 | 111 | struct Link; 112 | 113 | #[component] 114 | fn SidebarItem( 115 | href: String, 116 | icon: &'static str, 117 | label: String, 118 | router: &'static Router, 119 | ) -> Dom { 120 | html! { 121 | 122 | 134 | 135 | 136 | {label} 137 | 138 | } 139 | } 140 | 141 | #[component] 142 | fn NamespaceContent(router: Router) -> Dom { 143 | let namespace = router.current_params().get("namespace").unwrap().clone(); 144 | html! { 145 | 146 | 147 | 148 | 149 | {match State::list_workers(namespace).suspend().await { 150 | Loading => html! { "Loading..." }, 151 | Ready(Ok(workers)) => { 152 | html! { 153 | 154 | {for worker in workers { 155 | html! { 156 | <> 157 | 158 | > 159 | } 160 | }} 161 | 162 | } 163 | }, 164 | Ready(Err(err)) => html! 
{ "An error occurred: " {err.to_string()} } 165 | } } 166 | 167 | 168 | 169 | } 170 | } 171 | 172 | #[component] 173 | fn QueueNav(router: Router) -> Dom { 174 | let params = router.current_params(); 175 | let namespace = params.get("namespace").unwrap().clone(); 176 | // let status = params.get("status").cloned(); 177 | 178 | html! { 179 | 180 | {format!("Queue: {namespace}")} 181 | "Active · 606 jobs · 500 failed · 60 pending · 5 dead" 182 | 183 | 184 | 185 | 186 | 187 | "Workers" 188 | 189 | 190 | "2" 191 | 192 | 193 | 194 | 195 | 196 | {for status in JobState::iter() { 197 | html! { 198 | <> 199 | 200 | > 201 | } 202 | }} 203 | 204 | 205 | } 206 | } 207 | 208 | #[component] 209 | fn NamespaceStatusContent(router: Router) -> Dom { 210 | let namespace = router.current_params().get("namespace").unwrap().clone(); 211 | let status = router 212 | .current_params() 213 | .get("status") 214 | .map(|s| JobState::from_str(s).unwrap()); 215 | html! { 216 | 217 | 218 | 219 | 220 | {match State::list_jobs(namespace, Filter { page: 1, status:status.unwrap()}).suspend().await { 221 | Loading => html! { "Loading..." }, 222 | Ready(Ok(res)) => { 223 | html! { 224 | <> 225 | 226 | {for task in res.jobs{ 227 | html! { 228 | 229 | 230 | {resolve_json(&task)} 231 | 232 | 233 | } 234 | }} 235 | 236 | 237 | 238 | 239 | 240 | "Previous" 241 | 242 | 243 | 244 | 245 | "Next" 246 | 247 | 248 | 249 | > 250 | } 251 | }, 252 | Ready(Err(err)) => html! { "An error occurred: " {err.to_string()} } 253 | } } 254 | 255 | 256 | 257 | } 258 | } 259 | 260 | #[component] 261 | fn NavItem>(label: L, router: Router) -> Dom { 262 | let label = label.as_ref(); 263 | let current_params = router.current_params(); 264 | let namespace = current_params.get("namespace").unwrap(); 265 | html! { 266 | 267 | {label} 268 | 269 | } 270 | } 271 | 272 | #[component] 273 | fn Card, S: AsRef>(title: T, status: S) -> Dom { 274 | html! { 275 | 276 | 277 | 289 | 290 | 291 | 292 | {title.as_ref()} 293 | 294 | "Last seen less than a minute ago" 295 | 296 | 297 | 298 | {status.as_ref()} 299 | 300 | 301 | 302 | } 303 | } 304 | 305 | fn main() { 306 | console_log::init_with_level(Level::Debug).unwrap(); 307 | let es = EventSource::new(&format!("{API_PATH}/events")).unwrap(); 308 | let api = State { 309 | event_source: es, 310 | namespaces: Default::default(), 311 | }; 312 | dbg!(&api.event_source); 313 | let mut app = App::new(api); 314 | app.route("/", home::page); 315 | app.route("/queue/:namespace", namespace_page); 316 | app.route("/queue/:namespace/:status", namespace_status_page); 317 | app.route("/queues", queues_page); 318 | 319 | let parent_node = web_sys::window() 320 | .unwrap() 321 | .document() 322 | .unwrap() 323 | .body() 324 | .unwrap(); 325 | 326 | app.mount_with(&parent_node, |app| { 327 | use crate::home::Header; 328 | let app = Box::leak(Box::new(app.clone())); 329 | let ns_poller = async { 330 | let values = State::list_namespaces().await.unwrap(); 331 | app.state().namespaces.lock_mut().replace_cloned(values); 332 | }; 333 | let wrapper = html! { 334 | 335 | //<--!App goes here --> 336 | 337 | }; 338 | let inner = app.router().render(app, &wrapper); 339 | html! 
{ 340 | 341 | 342 | 343 | {inner} 344 | 345 | 346 | } 347 | }); 348 | } 349 | -------------------------------------------------------------------------------- /crates/shared/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "apalis-board-shared" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | thiserror = "1" 10 | apalis-core = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", default-features = false } 11 | apalis-redis = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", optional = true } 12 | apalis-sql = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", optional = true } 13 | serde = "1" 14 | serde_json = "1" 15 | redis = { version = "0.27", optional = true } 16 | strum = { version = "0.26", features = ["derive"] } 17 | 18 | [dependencies.sqlx] 19 | version = "0.8.2" 20 | default-features = false 21 | optional = true 22 | 23 | 24 | [features] 25 | default = ["sqlite", "postgres", "mysql", "redis"] 26 | redis = ["apalis-redis", "dep:redis"] 27 | postgres = ["apalis-sql/postgres"] 28 | sqlite = ["apalis-sql/sqlite", "sqlx/sqlite", "sqlx/json"] 29 | mysql = ["apalis-sql/mysql"] 30 | -------------------------------------------------------------------------------- /crates/shared/src/lib.rs: -------------------------------------------------------------------------------- 1 | use std::{any::type_name, future::Future, num::TryFromIntError, time::Duration}; 2 | 3 | #[cfg(feature = "redis")] 4 | pub mod redis; 5 | 6 | #[cfg(feature = "sqlite")] 7 | pub mod sqlite; 8 | 9 | #[cfg(feature = "postgres")] 10 | pub mod postgres; 11 | 12 | #[cfg(feature = "mysql")] 13 | pub mod mysql; 14 | 15 | use apalis_core::worker::WorkerId; 16 | use serde::{Deserialize, Serialize}; 17 | 18 | /// A serializable version of a worker. 19 | #[derive(Debug, Serialize, Deserialize)] 20 | pub struct Worker { 21 | /// The Worker's Id 22 | pub worker_id: WorkerId, 23 | /// Type of task being consumed by the worker, useful for display and filtering 24 | pub r#type: String, 25 | /// The type of job stream 26 | pub source: String, 27 | /// The layers that were loaded for worker. 28 | pub layers: Vec, 29 | // / The last time the worker was seen. Some sources use keep alive. 
30 |     // last_seen: Timestamp,
31 | }
32 | impl Worker {
33 |     pub fn new<S>(worker_id: WorkerId, r#type: String) -> Self {
34 |         Self {
35 |             worker_id,
36 |             r#type,
37 |             source: type_name::<S>().to_string(),
38 |             layers: Vec::new(),
39 |         }
40 |     }
41 | }
42 | 
43 | #[derive(Debug, Deserialize, Serialize, Default)]
44 | pub struct Stat {
45 |     pub pending: usize,
46 |     pub running: usize,
47 |     pub dead: usize,
48 |     pub failed: usize,
49 |     pub success: usize,
50 | }
51 | 
52 | #[derive(
53 |     Debug, Deserialize, Serialize, Default, strum::Display, strum::EnumString, strum::EnumIter,
54 | )]
55 | pub enum JobState {
56 |     #[default]
57 |     Pending,
58 |     Scheduled,
59 |     Running,
60 |     Dead,
61 |     Failed,
62 |     Success,
63 | }
64 | 
65 | #[derive(Deserialize, Debug)]
66 | pub struct Filter {
67 |     #[serde(default)]
68 |     pub status: JobState,
69 |     #[serde(default = "default_page")]
70 |     pub page: i32,
71 | }
72 | 
73 | fn default_page() -> i32 {
74 |     1
75 | }
76 | 
77 | pub trait BackendExt
78 | where
79 |     Self: Sized,
80 | {
81 |     type Request;
82 |     type Error;
83 |     /// List all Workers that are working on a backend
84 |     fn list_workers(&self) -> impl Future<Output = Result<Vec<Worker>, Self::Error>> + Send;
85 | 
86 |     /// Returns the counts of jobs in different states
87 |     fn stats(&self) -> impl Future<Output = Result<Stat, Self::Error>> + Send;
88 | 
89 |     /// Fetch jobs persisted in a backend
90 |     fn list_jobs(
91 |         &self,
92 |         status: &JobState,
93 |         page: i32,
94 |     ) -> impl Future<Output = Result<Vec<Self::Request>, Self::Error>> + Send;
95 | }
96 | 
97 | #[derive(Debug, Deserialize)]
98 | pub enum Config {
99 |     Board(BoardConfig),
100 | }
101 | #[derive(Debug, Deserialize)]
102 | pub enum BoardConfig {
103 |     Redis(String),
104 |     Postgres(String),
105 |     Mysql(String),
106 |     Sqlite(String),
107 | }
108 | 
109 | #[derive(Debug, Serialize, Deserialize)]
110 | pub enum Layer {
111 |     Retry { retries: u64 },
112 |     Timeout { duration: Duration },
113 |     LoadShed,
114 |     RateLimit { num: u64, per: Duration },
115 |     ConcurrencyLimit { max: usize },
116 |     Buffer { bound: usize },
117 |     Sentry { dsn: usize },
118 |     Prometheus,
119 | }
120 | 
121 | #[derive(Debug, Serialize, Deserialize)]
122 | pub struct GetJobsResult<T> {
123 |     pub stats: Stat,
124 |     pub jobs: Vec<T>,
125 | }
126 | 
127 | #[derive(Debug, thiserror::Error)]
128 | pub enum SqlError {
129 |     #[error("sqlx::Error: {0}")]
130 |     Sqlx(#[from] sqlx::Error),
131 |     #[error("TryFromIntError: {0}")]
132 |     TryFromInt(#[from] TryFromIntError),
133 | }
134 | 
--------------------------------------------------------------------------------
/crates/shared/src/mysql.rs:
--------------------------------------------------------------------------------
1 | use apalis_core::{
2 |     codec::json::JsonCodec,
3 |     request::{Parts, Request},
4 |     worker::WorkerId,
5 |     Codec,
6 | };
7 | use apalis_sql::{context::SqlContext, from_row::SqlRequest, mysql::MysqlStorage};
8 | use serde::{de::DeserializeOwned, Serialize};
9 | use serde_json::Value;
10 | 
11 | use crate::{BackendExt, JobState, SqlError, Stat, Worker};
12 | 
13 | type MysqlCodec = JsonCodec<Value>;
14 | 
15 | impl<J: 'static + Serialize + DeserializeOwned + Send + Unpin + Sync> BackendExt
16 |     for MysqlStorage<J>
17 | {
18 |     type Request = Request<J, Parts<SqlContext>>;
19 |     type Error = SqlError;
20 |     async fn stats(&self) -> Result<Stat, Self::Error> {
21 |         let fetch_query = "SELECT
22 |             COUNT(CASE WHEN status = 'Pending' THEN 1 END) AS pending,
23 |             COUNT(CASE WHEN status = 'Running' THEN 1 END) AS running,
24 |             COUNT(CASE WHEN status = 'Done' THEN 1 END) AS done,
25 |             COUNT(CASE WHEN status = 'Retry' THEN 1 END) AS retry,
26 |             COUNT(CASE WHEN status = 'Failed' THEN 1 END) AS failed,
27 |             COUNT(CASE WHEN status = 'Killed' THEN 1 END) AS killed
28 |         FROM
jobs WHERE job_type = ?"; 29 | 30 | let res: (i64, i64, i64, i64, i64, i64) = sqlx::query_as(fetch_query) 31 | .bind(self.get_config().namespace()) 32 | .fetch_one(self.pool()) 33 | .await?; 34 | 35 | Ok(Stat { 36 | pending: res.0.try_into()?, 37 | running: res.1.try_into()?, 38 | dead: res.4.try_into()?, 39 | failed: res.3.try_into()?, 40 | success: res.2.try_into()?, 41 | }) 42 | } 43 | 44 | async fn list_jobs( 45 | &self, 46 | status: &JobState, 47 | page: i32, 48 | ) -> Result, Self::Error> { 49 | let status = status.to_string(); 50 | let fetch_query = "SELECT * FROM jobs WHERE status = ? AND job_type = ? ORDER BY done_at DESC, run_at DESC LIMIT 10 OFFSET ?"; 51 | let res: Vec> = sqlx::query_as(fetch_query) 52 | .bind(status) 53 | .bind(self.get_config().namespace()) 54 | .bind(((page - 1) * 10).to_string()) 55 | .fetch_all(self.pool()) 56 | .await?; 57 | Ok(res 58 | .into_iter() 59 | .map(|j| { 60 | let (req, ctx) = j.req.take_parts(); 61 | let req: J = MysqlCodec::decode(req).unwrap(); 62 | Request::new_with_ctx(req, ctx) 63 | }) 64 | .collect()) 65 | } 66 | 67 | async fn list_workers(&self) -> Result, Self::Error> { 68 | let fetch_query = 69 | "SELECT id, layers, last_seen FROM workers WHERE worker_type = ? ORDER BY last_seen DESC LIMIT 20 OFFSET ?"; 70 | let res: Vec<(String, String, i64)> = sqlx::query_as(fetch_query) 71 | .bind(self.get_config().namespace()) 72 | .bind(0) 73 | .fetch_all(self.pool()) 74 | .await?; 75 | Ok(res 76 | .into_iter() 77 | .map(|w| Worker::new::(WorkerId::new(w.0), w.1)) 78 | .collect()) 79 | } 80 | } 81 | -------------------------------------------------------------------------------- /crates/shared/src/postgres.rs: -------------------------------------------------------------------------------- 1 | use crate::{BackendExt, JobState, SqlError, Stat, Worker}; 2 | use apalis_core::request::Parts; 3 | use apalis_core::Codec; 4 | use apalis_core::{codec::json::JsonCodec, request::Request, worker::WorkerId}; 5 | use apalis_sql::{context::SqlContext, from_row::SqlRequest, postgres::PostgresStorage}; 6 | use serde::{de::DeserializeOwned, Serialize}; 7 | use serde_json::Value; 8 | 9 | impl BackendExt 10 | for PostgresStorage 11 | { 12 | type Request = Request>; 13 | type Error = SqlError; 14 | async fn stats(&self) -> Result { 15 | let fetch_query = "SELECT 16 | COUNT(1) FILTER (WHERE status = 'Pending') AS pending, 17 | COUNT(1) FILTER (WHERE status = 'Running') AS running, 18 | COUNT(1) FILTER (WHERE status = 'Done') AS done, 19 | COUNT(1) FILTER (WHERE status = 'Retry') AS retry, 20 | COUNT(1) FILTER (WHERE status = 'Failed') AS failed, 21 | COUNT(1) FILTER (WHERE status = 'Killed') AS killed 22 | FROM apalis.jobs WHERE job_type = $1"; 23 | 24 | let res: (i64, i64, i64, i64, i64, i64) = sqlx::query_as(fetch_query) 25 | .bind(self.config().namespace()) 26 | .fetch_one(self.pool()) 27 | .await?; 28 | 29 | Ok(Stat { 30 | pending: res.0.try_into()?, 31 | running: res.1.try_into()?, 32 | dead: res.4.try_into()?, 33 | failed: res.3.try_into()?, 34 | success: res.2.try_into()?, 35 | }) 36 | } 37 | 38 | async fn list_jobs( 39 | &self, 40 | status: &JobState, 41 | page: i32, 42 | ) -> Result, Self::Error> { 43 | let status = status.to_string(); 44 | let fetch_query = "SELECT * FROM apalis.jobs WHERE status = $1 AND job_type = $2 ORDER BY done_at DESC, run_at DESC LIMIT 10 OFFSET $3"; 45 | let res: Vec> = sqlx::query_as(fetch_query) 46 | .bind(status) 47 | .bind(self.config().namespace()) 48 | .bind(((page - 1) * 10).to_string()) 49 | .fetch_all(self.pool()) 50 | 
.await?; 51 | Ok(res 52 | .into_iter() 53 | .map(|j| { 54 | let (req, ctx) = j.req.take_parts(); 55 | let req = JsonCodec::::decode(req).unwrap(); 56 | Request::new_with_ctx(req, ctx) 57 | }) 58 | .collect()) 59 | } 60 | 61 | async fn list_workers(&self) -> Result, Self::Error> { 62 | let fetch_query = 63 | "SELECT id, layers, last_seen FROM apalis.workers WHERE worker_type = $1 ORDER BY last_seen DESC LIMIT 20 OFFSET $2"; 64 | let res: Vec<(String, String, i64)> = sqlx::query_as(fetch_query) 65 | .bind(self.config().namespace()) 66 | .bind(0) 67 | .fetch_all(self.pool()) 68 | .await?; 69 | Ok(res 70 | .into_iter() 71 | .map(|w| Worker::new::(WorkerId::new(w.0), w.1)) 72 | .collect()) 73 | } 74 | } 75 | -------------------------------------------------------------------------------- /crates/shared/src/redis.rs: -------------------------------------------------------------------------------- 1 | use crate::{BackendExt, JobState, Stat, Worker}; 2 | use apalis_core::codec::json::JsonCodec; 3 | use apalis_core::request::Request; 4 | use apalis_core::worker::WorkerId; 5 | use apalis_core::Codec; 6 | use apalis_redis::RedisContext; 7 | use apalis_redis::RedisStorage; 8 | use redis::{ErrorKind, Value}; 9 | use serde::{de::DeserializeOwned, Serialize}; 10 | 11 | type RedisCodec = JsonCodec>; 12 | 13 | impl BackendExt for RedisStorage 14 | where 15 | T: 'static + Serialize + DeserializeOwned + Send + Unpin + Sync, 16 | { 17 | type Request = Request; 18 | type Error = redis::RedisError; 19 | async fn stats(&self) -> Result { 20 | let mut conn = self.get_connection().clone(); 21 | let queue = self.get_config(); 22 | let script = r#" 23 | local pending_jobs_set = KEYS[1] 24 | local running_jobs_set = KEYS[2] 25 | local dead_jobs_set = KEYS[3] 26 | local failed_jobs_set = KEYS[4] 27 | local success_jobs_set = KEYS[5] 28 | 29 | local pending_count = redis.call('ZCARD', pending_jobs_set) 30 | local running_count = redis.call('ZCARD', running_jobs_set) 31 | local dead_count = redis.call('ZCARD', dead_jobs_set) 32 | local failed_count = redis.call('ZCARD', failed_jobs_set) 33 | local success_count = redis.call('ZCARD', success_jobs_set) 34 | 35 | return {pending_count, running_count, dead_count, failed_count, success_count} 36 | "#; 37 | 38 | let keys = vec![ 39 | queue.inflight_jobs_set().to_string(), 40 | queue.active_jobs_list().to_string(), 41 | queue.dead_jobs_set().to_string(), 42 | queue.failed_jobs_set().to_string(), 43 | queue.done_jobs_set().to_string(), 44 | ]; 45 | 46 | let results: Vec = redis::cmd("EVAL") 47 | .arg(script) 48 | .arg(keys.len().to_string()) 49 | .arg(keys) 50 | .query_async(&mut conn) 51 | .await?; 52 | 53 | Ok(Stat { 54 | pending: results[0], 55 | running: results[1], 56 | dead: results[2], 57 | failed: results[3], 58 | success: results[4], 59 | }) 60 | } 61 | async fn list_jobs( 62 | &self, 63 | status: &JobState, 64 | page: i32, 65 | ) -> Result, redis::RedisError> { 66 | let mut conn = self.get_connection().clone(); 67 | let queue = self.get_config(); 68 | match status { 69 | JobState::Pending | JobState::Scheduled => { 70 | let active_jobs_list = &queue.active_jobs_list(); 71 | let job_data_hash = &queue.job_data_hash(); 72 | let ids: Vec = redis::cmd("LRANGE") 73 | .arg(active_jobs_list) 74 | .arg(((page - 1) * 10).to_string()) 75 | .arg((page * 10).to_string()) 76 | .query_async(&mut conn) 77 | .await?; 78 | 79 | if ids.is_empty() { 80 | return Ok(Vec::new()); 81 | } 82 | let data: Option = redis::cmd("HMGET") 83 | .arg(job_data_hash) 84 | .arg(&ids) 85 | 
.query_async(&mut conn) 86 | .await?; 87 | 88 | let jobs: Vec> = 89 | deserialize_multiple_jobs::<_, RedisCodec>(data.as_ref()).unwrap(); 90 | Ok(jobs) 91 | } 92 | JobState::Running => { 93 | let consumers_set = &queue.consumers_set(); 94 | let job_data_hash = &queue.job_data_hash(); 95 | let workers: Vec = redis::cmd("ZRANGE") 96 | .arg(consumers_set) 97 | .arg("0") 98 | .arg("-1") 99 | .query_async(&mut conn) 100 | .await?; 101 | 102 | if workers.is_empty() { 103 | return Ok(Vec::new()); 104 | } 105 | let mut all_jobs = Vec::new(); 106 | for worker in workers { 107 | let ids: Vec = redis::cmd("SMEMBERS") 108 | .arg(&worker) 109 | .query_async(&mut conn) 110 | .await?; 111 | 112 | if ids.is_empty() { 113 | continue; 114 | }; 115 | let data: Option = redis::cmd("HMGET") 116 | .arg(job_data_hash.clone()) 117 | .arg(&ids) 118 | .query_async(&mut conn) 119 | .await?; 120 | 121 | let jobs: Vec> = 122 | deserialize_multiple_jobs::<_, RedisCodec>(data.as_ref()).unwrap(); 123 | all_jobs.extend(jobs); 124 | } 125 | 126 | Ok(all_jobs) 127 | } 128 | JobState::Success => { 129 | let done_jobs_set = &queue.done_jobs_set(); 130 | let job_data_hash = &queue.job_data_hash(); 131 | let ids: Vec = redis::cmd("ZRANGE") 132 | .arg(done_jobs_set) 133 | .arg(((page - 1) * 10).to_string()) 134 | .arg((page * 10).to_string()) 135 | .query_async(&mut conn) 136 | .await?; 137 | 138 | if ids.is_empty() { 139 | return Ok(Vec::new()); 140 | } 141 | let data: Option = redis::cmd("HMGET") 142 | .arg(job_data_hash) 143 | .arg(&ids) 144 | .query_async(&mut conn) 145 | .await?; 146 | 147 | let jobs: Vec> = 148 | deserialize_multiple_jobs::<_, RedisCodec>(data.as_ref()).unwrap(); 149 | Ok(jobs) 150 | } 151 | // JobState::Retry => Ok(Vec::new()), 152 | JobState::Failed => { 153 | let failed_jobs_set = &queue.failed_jobs_set(); 154 | let job_data_hash = &queue.job_data_hash(); 155 | let ids: Vec = redis::cmd("ZRANGE") 156 | .arg(failed_jobs_set) 157 | .arg(((page - 1) * 10).to_string()) 158 | .arg((page * 10).to_string()) 159 | .query_async(&mut conn) 160 | .await?; 161 | if ids.is_empty() { 162 | return Ok(Vec::new()); 163 | } 164 | let data: Option = redis::cmd("HMGET") 165 | .arg(job_data_hash) 166 | .arg(&ids) 167 | .query_async(&mut conn) 168 | .await?; 169 | let jobs: Vec> = 170 | deserialize_multiple_jobs::<_, RedisCodec>(data.as_ref()).unwrap(); 171 | 172 | Ok(jobs) 173 | } 174 | JobState::Dead => { 175 | let dead_jobs_set = &queue.dead_jobs_set(); 176 | let job_data_hash = &queue.job_data_hash(); 177 | let ids: Vec = redis::cmd("ZRANGE") 178 | .arg(dead_jobs_set) 179 | .arg(((page - 1) * 10).to_string()) 180 | .arg((page * 10).to_string()) 181 | .query_async(&mut conn) 182 | .await?; 183 | 184 | if ids.is_empty() { 185 | return Ok(Vec::new()); 186 | } 187 | let data: Option = redis::cmd("HMGET") 188 | .arg(job_data_hash) 189 | .arg(&ids) 190 | .query_async(&mut conn) 191 | .await?; 192 | 193 | let jobs: Vec> = 194 | deserialize_multiple_jobs::<_, RedisCodec>(data.as_ref()).unwrap(); 195 | 196 | Ok(jobs) 197 | } 198 | } 199 | } 200 | async fn list_workers(&self) -> Result, redis::RedisError> { 201 | let queue = self.get_config(); 202 | let consumers_set = &queue.consumers_set(); 203 | let mut conn = self.get_connection().clone(); 204 | let workers: Vec = redis::cmd("ZRANGE") 205 | .arg(consumers_set) 206 | .arg("0") 207 | .arg("-1") 208 | .query_async(&mut conn) 209 | .await?; 210 | Ok(workers 211 | .into_iter() 212 | .map(|w| { 213 | Worker::new::( 214 | WorkerId::new(w.replace(&format!("{}:", 
&queue.inflight_jobs_set()), "")), 215 | "".to_string(), 216 | ) 217 | }) 218 | .collect()) 219 | } 220 | } 221 | 222 | fn deserialize_multiple_jobs>>( 223 | jobs: Option<&Value>, 224 | ) -> Option>> 225 | where 226 | T: DeserializeOwned, 227 | { 228 | let jobs = match jobs { 229 | None => None, 230 | Some(Value::Array(val)) => Some(val), 231 | _ => { 232 | // error!( 233 | // "Decoding Message Failed: {:?}", 234 | // "unknown result type for next message" 235 | // ); 236 | None 237 | } 238 | }; 239 | 240 | jobs.map(|values| { 241 | values 242 | .iter() 243 | .filter_map(|v| match v { 244 | Value::BulkString(data) => { 245 | let inner = C::decode(data.to_vec()) 246 | .map_err(|e| (ErrorKind::IoError, "Decode error", e.into().to_string())) 247 | .unwrap(); 248 | Some(inner) 249 | } 250 | _ => None, 251 | }) 252 | .collect() 253 | }) 254 | } 255 | -------------------------------------------------------------------------------- /crates/shared/src/sqlite.rs: -------------------------------------------------------------------------------- 1 | use apalis_core::{ 2 | codec::json::JsonCodec, 3 | request::{Parts, Request}, 4 | worker::WorkerId, 5 | Codec, 6 | }; 7 | use apalis_sql::{context::SqlContext, from_row::SqlRequest, sqlite::SqliteStorage}; 8 | use serde::{de::DeserializeOwned, Serialize}; 9 | 10 | use crate::{BackendExt, JobState, SqlError, Stat, Worker}; 11 | 12 | impl BackendExt 13 | for SqliteStorage> 14 | { 15 | type Request = Request>; 16 | type Error = SqlError; 17 | async fn stats(&self) -> Result { 18 | let fetch_query = "SELECT 19 | COUNT(1) FILTER (WHERE status = 'Pending') AS pending, 20 | COUNT(1) FILTER (WHERE status = 'Running') AS running, 21 | COUNT(1) FILTER (WHERE status = 'Done') AS done, 22 | COUNT(1) FILTER (WHERE status = 'Failed') AS failed, 23 | COUNT(1) FILTER (WHERE status = 'Killed') AS killed 24 | FROM Jobs WHERE job_type = ?"; 25 | 26 | let res: (i64, i64, i64, i64, i64, i64) = sqlx::query_as(fetch_query) 27 | .bind(self.get_config().namespace()) 28 | .fetch_one(self.pool()) 29 | .await?; 30 | 31 | Ok(Stat { 32 | pending: res.0.try_into()?, 33 | running: res.1.try_into()?, 34 | dead: res.4.try_into()?, 35 | failed: res.3.try_into()?, 36 | success: res.2.try_into()?, 37 | }) 38 | } 39 | 40 | async fn list_jobs( 41 | &self, 42 | status: &JobState, 43 | page: i32, 44 | ) -> Result, Self::Error> { 45 | let status = status.to_string(); 46 | let fetch_query = "SELECT * FROM Jobs WHERE status = ? AND job_type = ? ORDER BY done_at DESC, run_at DESC LIMIT 10 OFFSET ?"; 47 | let res: Vec> = sqlx::query_as(fetch_query) 48 | .bind(status) 49 | .bind(self.get_config().namespace()) 50 | .bind(((page - 1) * 10).to_string()) 51 | .fetch_all(self.pool()) 52 | .await?; 53 | Ok(res 54 | .into_iter() 55 | .map(|j| { 56 | let (req, ctx) = j.req.take_parts(); 57 | let req = JsonCodec::::decode(req).unwrap(); 58 | Request::new_with_ctx(req, ctx) 59 | }) 60 | .collect()) 61 | } 62 | 63 | async fn list_workers(&self) -> Result, Self::Error> { 64 | let fetch_query = 65 | "SELECT id, layers, last_seen FROM Workers WHERE worker_type = ? 
ORDER BY last_seen DESC LIMIT 20 OFFSET ?"; 66 | let res: Vec<(String, String, i64)> = sqlx::query_as(fetch_query) 67 | .bind(self.get_config().namespace()) 68 | .bind(0) 69 | .fetch_all(self.pool()) 70 | .await?; 71 | Ok(res 72 | .into_iter() 73 | .map(|w| Worker::new::(WorkerId::new(w.0), w.1)) 74 | .collect()) 75 | } 76 | } 77 | -------------------------------------------------------------------------------- /examples/rest-api/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "rest-api" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | apalis = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", features = [ 10 | "limit", 11 | ] } 12 | apalis-redis = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0" } 13 | apalis-sql = { version = "0.6.0-rc.8", git = "https://github.com/geofmureithi/apalis", branch = "chore/v0.6.0", features = [ 14 | "sqlite", 15 | ] } 16 | actix-web = "4.5.1" 17 | actix-web-actors = "4.3.0" 18 | actix = "0.13.3" 19 | backend = { package = "apalis-board-backend", path = "../../crates/backend" } 20 | serde = "1" 21 | env_logger = "0.11" 22 | futures = "0.3" 23 | actix-cors = "0.7" 24 | tower = "0.4" 25 | -------------------------------------------------------------------------------- /examples/rest-api/src/main.rs: -------------------------------------------------------------------------------- 1 | use actix::clock::sleep; 2 | use actix_web::{rt::signal::ctrl_c, web, App, HttpServer}; 3 | use apalis::{layers::tracing::TraceLayer, prelude::*}; 4 | use apalis_redis::RedisStorage; 5 | use backend::{api::ApiBuilder, sse::Broadcaster}; 6 | use core::fmt; 7 | use futures::{ 8 | future::{self, BoxFuture}, 9 | FutureExt, 10 | }; 11 | use serde::{Deserialize, Serialize}; 12 | use std::{sync::Mutex, task::Context, task::Poll, time::Duration}; 13 | use tower::{Layer, Service}; 14 | 15 | mod sse { 16 | use actix_web::{web::*, HttpResponse}; 17 | use backend::sse::Broadcaster; 18 | use std::sync::Mutex; 19 | 20 | pub async fn new_client(broadcaster: Data>) -> HttpResponse { 21 | let rx = broadcaster.lock().unwrap().new_client(); 22 | 23 | HttpResponse::Ok() 24 | .append_header(("content-type", "text/event-stream")) 25 | .streaming(rx) 26 | } 27 | } 28 | 29 | #[derive(Debug, Deserialize, Serialize, Clone)] 30 | pub struct Email { 31 | pub to: String, 32 | pub subject: String, 33 | pub text: String, 34 | } 35 | 36 | pub async fn send_email(_job: Email) { 37 | sleep(Duration::from_secs(10)).await; 38 | // log::info!("Attempting to send email to {}", job.to); 39 | } 40 | 41 | #[actix_web::main] 42 | async fn main() -> std::io::Result<()> { 43 | std::env::set_var("RUST_LOG", "debug,sqlx::query=error"); 44 | env_logger::init(); 45 | let broadcaster = Broadcaster::create(); 46 | let mut redis = RedisStorage::new(apalis_redis::connect("redis://127.0.0.1/").await.unwrap()); 47 | 48 | produce_redis_jobs(&mut redis).await; 49 | let worker = Monitor::new() 50 | .register( 51 | WorkerBuilder::new("tasty-apple") 52 | .layer(TraceLayer::new()) 53 | .layer(SseLogLayer::new(broadcaster.clone())) 54 | .backend(redis.clone()) 55 | .build_fn(send_email), 56 | ) 57 | .run_with_signal(async { ctrl_c().await }); 58 | let http = async move { 59 | HttpServer::new(move || { 60 | App::new() 61 | .route("/events", 
web::get().to(sse::new_client)) 62 | .service( 63 | web::scope("/api").service( 64 | ApiBuilder::new() 65 | .add_storage(&redis, "apalis::redis") 66 | .build(), 67 | ), 68 | ) 69 | .app_data(broadcaster.clone()) 70 | }) 71 | .bind("127.0.0.1:8000")? 72 | .run() 73 | .await?; 74 | Ok(()) 75 | }; 76 | 77 | future::try_join(http, worker).await?; 78 | 79 | Ok(()) 80 | } 81 | 82 | async fn produce_redis_jobs(storage: &mut RedisStorage) { 83 | use apalis::prelude::Storage; 84 | for i in 0..10 { 85 | storage 86 | .push(Email { 87 | to: format!("test{i}@example.com"), 88 | text: "Test background job from apalis".to_string(), 89 | subject: "Background email job".to_string(), 90 | }) 91 | .await 92 | .unwrap(); 93 | } 94 | } 95 | 96 | #[derive(Debug, Clone)] 97 | pub struct SseLogLayer { 98 | target: actix_web::web::Data>, 99 | } 100 | 101 | impl SseLogLayer { 102 | pub fn new(target: actix_web::web::Data>) -> Self { 103 | Self { target } 104 | } 105 | } 106 | 107 | impl Layer for SseLogLayer { 108 | type Service = SseLogService; 109 | 110 | fn layer(&self, service: S) -> Self::Service { 111 | SseLogService { 112 | target: self.target.clone(), 113 | service, 114 | } 115 | } 116 | } 117 | 118 | // This service implements the Log behavior 119 | #[derive(Clone)] 120 | pub struct SseLogService { 121 | target: actix_web::web::Data>, 122 | service: S, 123 | } 124 | 125 | impl Service for SseLogService 126 | where 127 | S: Service, 128 | Request: fmt::Debug, 129 | S::Future: Send + 'static, 130 | S::Response: Send + 'static, 131 | S::Error: Send + 'static, 132 | { 133 | type Response = S::Response; 134 | type Error = S::Error; 135 | type Future = BoxFuture<'static, Result>; 136 | 137 | fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll> { 138 | self.service.poll_ready(cx) 139 | } 140 | 141 | fn call(&mut self, request: Request) -> Self::Future { 142 | let broadcaster = &self.target; 143 | broadcaster.lock().unwrap().send("Job started"); 144 | let broadcaster = broadcaster.clone(); 145 | 146 | self.service 147 | .call(request) 148 | .then(|res| async move { 149 | match res { 150 | Ok(r) => { 151 | broadcaster 152 | .lock() 153 | .unwrap() 154 | .send("Job completed successfully"); 155 | Ok(r) 156 | } 157 | Err(e) => { 158 | broadcaster.lock().unwrap().send("Job failed"); 159 | Err(e) 160 | } 161 | } 162 | }) 163 | .boxed() 164 | } 165 | } 166 | -------------------------------------------------------------------------------- /examples/standalone/chirpy.yml: -------------------------------------------------------------------------------- 1 | jobs: 2 | send-email: 3 | description: Sending emails with nodejs 4 | task: 5 | docker: "alpine:3" 6 | steps: 7 | echo: 'echo "HelloWorld"' 8 | source: 9 | Http: 10 | backend: default 11 | send-again: 12 | description: Sending again 13 | docker: "alpine:3" 14 | steps: 15 | echo: 'echo "HelloWorld"' 16 | source: 17 | Http: 18 | backend: default 19 | daily-reminder: 20 | description: Daily reminder 21 | task: 22 | docker: "alpine:3" 23 | steps: 24 | echo: 'echo "repository variable : $REPOSITORY_VAR"' 25 | source: 26 | Cron: 1/20 * * * * * 27 | -------------------------------------------------------------------------------- /examples/uptime-bird/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "uptime-bird" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | 
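# NOTE: the [dependencies] table above is empty; uptime-bird is still a stub.
# A hypothetical fill-in (not part of the repository) for running the
# tasks/github.hurl checks on a schedule might look like:
#
# hurl = "4"                  # hypothetical: execute the .hurl task files
# apalis-cron = "0.6.0-rc.8"  # hypothetical: cron-style scheduling of checks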
-------------------------------------------------------------------------------- /examples/uptime-bird/src/main.rs: -------------------------------------------------------------------------------- 1 | fn main() { 2 | println!("Hello, world!"); 3 | } 4 | -------------------------------------------------------------------------------- /examples/uptime-bird/tasks/github.hurl: -------------------------------------------------------------------------------- 1 | GET http://localhost:8000/assert-xpath 2 | HTTP 200 3 | [Asserts] 4 | xpath "normalize-space(//data)" == "café" 5 | xpath "normalize-space(//data)" == "caf\u{00e9}" 6 | xpath "normalize-space(//data)" > "CAFÉ" 7 | xpath "//toto" not exists 8 | 9 | café 10 | 11 | 12 | # Test XPath assert with XML namespace. 13 | GET http://localhost:8000/assert-xpath-simple-namespaces 14 | HTTP 200 15 | [Asserts] 16 | 17 | xpath "string(//bk:book/bk:title)" == "Cheaper by the Dozen" 18 | xpath "string(//*[name()='bk:book']/*[name()='bk:title'])" == "Cheaper by the Dozen" 19 | xpath "string(//*[local-name()='book']/*[local-name()='title'])" == "Cheaper by the Dozen" 20 | 21 | xpath "string(//bk:book/isbn:number)" == "1568491379" 22 | xpath "string(//*[name()='bk:book']/*[name()='isbn:number'])" == "1568491379" 23 | xpath "string(//*[local-name()='book']/*[local-name()='number'])" == "1568491379" 24 | 25 | 26 | # Test XPath assert with default XML namespace. 27 | # _ can be used to target a default namespace. 28 | GET http://localhost:8000/assert-xpath-svg 29 | HTTP 200 30 | [Asserts] 31 | xpath "//_:svg/_:g/_:circle" count == 3 32 | xpath "//*[local-name()='svg']/*[local-name()='g']/*[local-name()='circle']" count == 3 33 | xpath "//*[name()='svg']/*[name()='g']/*[name()='circle']" count == 3 34 | 35 | 36 | # Test XPath assert with default and prefixed XML namespace. 37 | # _ can be used to target a default namespace. 
38 | GET http://localhost:8000/assert-xpath-namespaces 39 | HTTP 200 40 | [Asserts] 41 | xpath "string(//_:book/_:title)" == "Cheaper by the Dozen" 42 | xpath "string(//_:book/_:title)" > "Cheaper" 43 | xpath "string(//_:book/isbn:number)" == "1568491379" 44 | xpath "//*[name()='book']/*[name()='notes']" count == 1 45 | xpath "//*[local-name()='book']/*[local-name()='notes']" count == 1 46 | xpath "//_:book/_:notes/*[local-name()='p']" count == 1 -------------------------------------------------------------------------------- /screenshots/logo.svg: -------------------------------------------------------------------------------- 1 | 2 | -------------------------------------------------------------------------------- /screenshots/overview.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/screenshots/overview.png -------------------------------------------------------------------------------- /screenshots/queues.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/screenshots/queues.png -------------------------------------------------------------------------------- /screenshots/shot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/screenshots/shot.png -------------------------------------------------------------------------------- /screenshots/workers.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/apalis-dev/apalis-board/42907b7bf2099d32d4e33d376e3a6f86fab503d8/screenshots/workers.png --------------------------------------------------------------------------------
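
A minimal consumption sketch for the `BackendExt` trait from crates/shared/src/lib.rs; this is an illustration, not part of the repository. It imports the crate as `shared`, matching the frontend's usage, and the `print_overview` helper together with its bounds is an assumption:

use shared::{BackendExt, JobState};

// Hypothetical helper: print the job counts, the first page of pending jobs,
// and the registered workers for any backend implementing `BackendExt`.
async fn print_overview<B>(backend: &B) -> Result<(), B::Error>
where
    B: BackendExt,
    B::Request: std::fmt::Debug,
{
    let stats = backend.stats().await?;
    println!(
        "pending={} running={} failed={} dead={} success={}",
        stats.pending, stats.running, stats.failed, stats.dead, stats.success
    );
    for job in backend.list_jobs(&JobState::Pending, 1).await? {
        println!("{job:?}");
    }
    for worker in backend.list_workers().await? {
        println!("{worker:?}");
    }
    Ok(())
}

Constructing a backend to pass in would mirror examples/rest-api/src/main.rs, e.g. RedisStorage::new(apalis_redis::connect("redis://127.0.0.1/").await.unwrap()).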
"Your system instance stats"
"An overview of execution stats"
"These are all the queues you have created on your apalis instance."
{info}
"These are all the queues you have created on this apalis-board instance."
"Active · 606 jobs · 500 failed · 60 pending · 5 dead"