├── Content Generator V2 └── README.md ├── .gitignore ├── .claude └── settings.local.json ├── Backup Postgres Table to GitHub in CSV Format ├── README.md └── n8n.json ├── Tweet Scheduler └── README.md ├── Content Generator V1 ├── README.md └── n8n.json ├── MCP AI Assistant ├── prompt checklist.md └── README.md ├── Turn Any Prompt Into a Chart and Upload It to WordPress ├── README.md └── n8n.json ├── Discord Daily Digest for Multiple Google Analytics Accounts └── README.md ├── Reddit To Twitter Automation ├── README.md └── n8n.json ├── firebase-debug.log ├── Generate and Upload Blog Images with Leonardo AI and WordPress ├── README.md └── n8n.json ├── Ebook to Audiobook converter ├── README.md └── n8n.json ├── CLAUDE.md ├── Generate Images with Replicate and Flux ├── README.md └── n8n.json ├── Memecoin Art Generator ├── README.md └── n8n.json ├── f5bot reddit auto comment └── README.md ├── AI-Powered Blog Automation for WordPress └── README.md ├── AI Email Classifier └── README.md ├── Content Generator V3 └── README.md ├── README.md └── Product Hunt Lead Generator └── README.md /Content Generator V2/README.md: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | ## Repo-specific ignores 2 | # Do not commit contributor guide 3 | AGENTS.md 4 | 5 | # Do not commit JSON files in Content Generator V3/V4 6 | Content Generator V3/**/*.json 7 | Content Generator V4/**/*.json 8 | Content Generator Shopify Blog/**/*.json 9 | Product Hunt Lead Generator/**/*.json 10 | Product Hunt Lead Generator/**/*.png 11 | -------------------------------------------------------------------------------- /.claude/settings.local.json: -------------------------------------------------------------------------------- 1 | { 2 | "permissions": { 3 | "allow": [ 4 | 
"WebFetch(domain:github.com)", 5 | "WebFetch(domain:articles.emp0.com)", 6 | "Bash(curl -s \"https://docs.google.com/spreadsheets/d/e/2PACX-1vRN5LtYfZvQFaTY8xDCszXIacax-dmUAdKS4i5XqZSzRLeB0LwQrgn0_6Czj6rzDyPzmVLQ_B6Y2bma/pub?output=csv&gid=1255499381\")", 7 | "Bash(find:*)" 8 | ], 9 | "deny": [] 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /Backup Postgres Table to GitHub in CSV Format/README.md: -------------------------------------------------------------------------------- 1 | This workflow automatically backs up all public Postgres tables into a GitHub repository as CSV files every 24 hours. 2 | It keeps your database snapshots up to date, updating existing files when data changes and creating new backups for new tables. 3 | 4 | This workflow is also published on the [n8n official site](https://n8n.io/workflows/7419-daily-postgres-table-backup-to-github-in-csv-format/) 5 | 6 | ## 📺 Video Tutorial 7 | 8 | [![Postgres Backup Tutorial](https://img.youtube.com/vi/ZECpVg8n7-Y/maxresdefault.jpg)](https://youtu.be/ZECpVg8n7-Y) 9 | 10 | *Watch our complete tutorial on how to set up and use this Postgres-to-GitHub backup workflow* 11 | 12 | --- 13 | 14 | 15 | **How it works:** 16 | 1. **Schedule Trigger** – Runs daily to start the backup process. 17 | 2. **GitHub Integration** – Lists existing files in the target repo to avoid duplicates. 18 | 3. **Postgres Query** – Fetches all table names from the `public` schema. 19 | 4. **Data Extraction** – Selects all rows from each table. 20 | 5. **Convert to CSV** – Saves table data as CSV files. 21 | 6. **Conditional Upload** – 22 | - If the table already exists in GitHub → Update the file. 23 | - If new → Upload a new file.
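The conditional-upload decision above can be sketched as a small Code-node-style snippet. The input shapes (`tables` from the Postgres query, `existingFiles` from the GitHub file listing) are illustrative assumptions, not the workflow's actual node outputs:

```javascript
// Decide, for each public table, whether its CSV backup should be
// created fresh or update an existing file in the repo.
// Input shapes are assumptions for illustration.
function planBackups(tables, existingFiles) {
  const existing = new Set(existingFiles.map((f) => f.name));
  return tables.map((table) => ({
    file: `${table}.csv`,
    action: existing.has(`${table}.csv`) ? "update" : "create",
  }));
}

const plan = planBackups(["users", "orders"], [{ name: "users.csv" }]);
// plan[0] → { file: "users.csv", action: "update" }
// plan[1] → { file: "orders.csv", action: "create" }
```

A set lookup keeps the check O(1) per table, so the workflow scales to schemas with many tables without re-listing the repo for each one.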
24 | 25 | --- 26 | 27 | ### **Postgres Tables Preview** 28 | ![postgres table](https://articles.emp0.com/wp-content/uploads/2025/08/backup-postgres-to-github-tables.png) 29 | 30 | --- 31 | 32 | ### **GitHub Backup Preview** 33 | ![github backup](https://articles.emp0.com/wp-content/uploads/2025/08/backup-postgres-to-github-repo.png) 34 | 35 | --- 36 | 37 | **Use case:** 38 | Perfect for developers, analysts, or data engineers who want **daily automated backups** of Postgres data without manual exports, keeping both history and version control in GitHub. 39 | 40 | **Requirements:** 41 | - Postgres credentials with read access. 42 | - GitHub repository (OAuth2 connected in n8n). 43 | -------------------------------------------------------------------------------- /Tweet Scheduler/README.md: -------------------------------------------------------------------------------- 1 | # Twitter Automation (n8n Template) 2 | 3 | ## 🚀 What it does 4 | - Posts a **unique tweet every 2 hours** 5 | - **70% content tweets** (10 proven templates) 6 | - **30% promo tweets** pulled from Google Sheets 7 | - Logs past tweets to avoid **duplicates** 8 | 9 | --- 10 | 11 | ## ✅ Requirements 12 | - **n8n** 13 | - **Google Sheets** with 2 tabs: 14 | - `posts` → log of past tweets (`PAST TWEETS`, `Date`) 15 | - `promo` → promo source (`name`, `last_posted`, optional extra fields) 16 | ![twitterinfluencerexcel.png](fileId:2340) 17 | - **Twitter (X)** account with OAuth2 write access 18 | - **Gemini API key** (for text generation) 19 | 20 | --- 21 | 22 | ## ⚡ Results 23 | 1. Automated motivational posts 24 | ![twitterautomation1.png](fileId:2338) 25 | 26 | 2. Automated promo posts 27 | ![twitterautomation2.png](fileId:2339) 28 | --- 29 | 30 | ## 🔄 How It Works 31 | 1. Triggers **every 2 hours** (optional: random delay up to 120 min). 32 | 2. Randomly selects **content (70%)** or **promo (30%)**. 33 | 3. **Content path:** checks `posts`, generates tweet, logs new one. 34 | 4. 
**Promo path:** picks row from `promo`, generates tweet, updates `last_posted`. 35 | 5. Posts to **Twitter** automatically. 36 | 37 | --- 38 | 39 | ## 🎯 Content Templates 40 | Transformation · Hook–List–Takeaway · Interesting Fact · Metaphor · Contrast · Motivation · Triad · Comparison · 80/20 Rule · Callout 41 | 42 | --- 43 | 44 | ## ⚙️ Customization 45 | | Setting | Where | Example | 46 | |---------|-------|---------| 47 | | Ads probability | Code node | Change `Math.random() < 0.3` → `0.2` for 20% | 48 | | Templates | Code node | Edit the `templates` array | 49 | | Cadence | Schedule Trigger | Cron or fixed hours | 50 | | Random delay | Time randomizer | Enable node (0–120 mins) | 51 | 52 | --- 53 | 54 | ## 🛠️ Troubleshooting 55 | | Issue | Fix | 56 | |-------|-----| 57 | | Repeated tweets | Ensure `posts` has column `PAST TWEETS` | 58 | | Not posting | Reconnect Twitter creds with write access | 59 | | Promo never used | Increase ads probability (e.g., `0.4`) | 60 | -------------------------------------------------------------------------------- /Content Generator V1/README.md: -------------------------------------------------------------------------------- 1 | # Blog Generator V1 — Automated AI-Powered Blog Creation 2 | 3 | ![Workflow Screenshot](https://articles.emp0.com/wp-content/uploads/2025/08/content-generator-v1-workflow.png) 4 | 5 | ## 📌 Overview 6 | The **Blog Generator V1** is an automated **n8n workflow** that fetches trending topics, generates high-quality SEO-optimized blog posts, creates a featured image, and publishes them directly to your WordPress site — all without manual intervention. 
7 | 8 | This workflow is designed for: 9 | - **Content marketers** who want to automate daily blog posting 10 | - **SEO specialists** aiming for consistent content output 11 | - **Businesses** that want to maintain a regular publishing schedule 12 | 13 | --- 14 | 15 | ## ⚙️ Features 16 | - **Automated Topic Discovery** — Finds an interesting news topic each day using AI. 17 | - **AI-Powered Content Generation** — Creates a 1,500-word SEO-optimized blog post in HTML format. 18 | - **Featured Image Creation** — Uses AI to generate a high-quality blog cover image. 19 | - **Direct WordPress Publishing** — Posts the content automatically with the image attached. 20 | - **Scheduling** — Runs daily (or at your preferred interval) using n8n's schedule trigger. 21 | 22 | --- 23 | 24 | ## 🛠 How It Works 25 | 1. **Schedule Trigger** 26 | Initiates the workflow at a specified time. 27 | 2. **AI Topic Finder** 28 | Uses OpenAI to find a trending, simple blog topic. 29 | 3. **Content Creation** 30 | Passes the topic to an AI Agent for a 1,500-word HTML-formatted blog post. 31 | 4. **WordPress Draft Creation** 32 | Posts the generated content as a draft. 33 | 5. **Image Generation & Upload** 34 | Creates a featured image based on the post title and uploads it to WordPress. 35 | 6. **Publishing** 36 | Updates the draft to published status with the featured image set. 37 | 38 | --- 39 | 40 | ## 📸 Example Output 41 | ![Generated Blog Posts Screenshot](https://articles.emp0.com/wp-content/uploads/2025/08/content-generator-v1-blog.png) 42 | 43 | --- 44 | 45 | ## 🔧 Requirements 46 | - **n8n** instance (self-hosted or cloud) 47 | - **OpenAI API key** 48 | - **WordPress API credentials** 49 | - **Categories & tags** pre-created in WordPress 50 | 51 | --- 52 | 53 | ## 🚀 Setup Instructions 54 | 1. Import the `content generator v1.json` workflow into your n8n instance. 55 | 2. Add your **OpenAI API credentials**. 56 | 3. Add your **WordPress API credentials**. 57 | 4. 
Adjust the **Schedule Trigger** to your desired posting frequency. 58 | 5. Update **category** and **tag IDs** in the `Create a post` node. 59 | 6. Activate the workflow. 60 | 61 | --- 62 | 63 | ## 📌 Notes 64 | - The blog length, style, and SEO optimizations can be customized in the AI Agent's system message. 65 | - Image size is currently set to `1792x1024` — adjust in the `Generate an image` node if needed. 66 | - Ensure your WordPress API user has permission to create and update posts. 67 | -------------------------------------------------------------------------------- /MCP AI Assistant/prompt checklist.md: -------------------------------------------------------------------------------- 1 | # ✅ MCP Assistant Test Prompt Checklist 2 | 3 | ## 📅 Google Calendar 4 | - [X] "Schedule a meeting with Alice tomorrow at 10am and send an invite to alice@wonderland.com" 5 | - [X] "Create an event called 'Project Sync' on Friday at 3pm with Bob and Charlie." 6 | - [X] "Update the time of my call with James to next Monday at 2pm." 7 | - [X] "Delete my meeting with Marketing next Wednesday." 8 | - [x] "What is my schedule tomorrow?" 9 | 10 | ## 📧 Gmail 11 | - [x] "Show me unread emails from this week." 12 | - [x] "Search for emails with subject: invoice" 13 | - [X] "Reply to the latest email from john@company.com saying 'Thanks, noted!'" 14 | - [X] "Draft an email to info@a16z.com with subject 'Emp0 Fundraising' and draft the body of the email with an investment opportunity in Emp0, scrape this site https://Emp0.com to get to know more about emp0.com" 15 | - [X] "Send an email to hi@cursor.com with subject 'Feature request' and cc sales@cursor.com" 16 | - [ ] "Send an email to recruiting@openai.com, write about how you like their product and want to apply for a job there and attach my latest CV from Google Drive" 17 | 18 | ## 🗂 Google Drive 19 | - [ ] "Upload the PDF you just sent me to my Google Drive."
20 | - [X] "Create a folder called 'July Reports' inside Emp0 shared drive." 21 | - [X] "Move the file named 'Q2_Review.pdf' to 'Reports/2024/Q2'." 22 | - [X] "Share the folder 'Investor Decks' with info@a16z.com as viewer." 23 | - [ ] "Download the file 'Wayne_Li_CV.pdf' and attach it in Discord." 24 | - [X] "Search for a file named 'Invoice May' in my Google Drive." 25 | 26 | ## 🖼 LinkedIn 27 | - [X] "Think of a random and inspiring quote. Post a text update on LinkedIn with the quote and end with a question so people will answer and increase engagement" 28 | - [ ] "Post this Google Drive image to LinkedIn with the caption: 'Team offsite snapshots!'" 29 | - [X] "Summarize the contents of this workflow and post it on linkedin with the original url https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/" 30 | 31 | ## 🐦 Twitter 32 | - [X] "Tweet: 'AI is eating operations. Fast.'" 33 | - [X] "Send a DM to @founderguy: 'Would love to connect on what you’re building.'" 34 | - [X] "Search Twitter for keyword: 'founder advice'" 35 | 36 | ## 🌐 Utilities 37 | - [X] "What time is it now?" 38 | - [ ] "Download this PDF: https://ontheline.trincoll.edu/images/bookdown/sample-local-pdf.pdf" 39 | - [X] "Search this URL and summarize important tech updates today: https://techcrunch.com/feed/" 40 | 41 | ## 📎 Discord Attachments 42 | - [ ] "Take the image I just uploaded and post it to LinkedIn." 43 | - [ ] "Get the file from my last message and upload it to Google Drive." 44 | 45 | ## 🧪 Edge Cases 46 | - [X] "Schedule a meeting on Feb 30." 47 | - [X] "Send a DM to @user_that_does_not_exist" 48 | - [ ] "Download a 50MB PDF and post it to LinkedIn" 49 | - [X] "Get the latest tweet from my timeline and email it to myself." 
50 | 51 | ## 🔗 Cross-tool Flows 52 | - [ ] "Get the latest image from my Google Drive and post it on LinkedIn with the caption 'Another milestone hit!'" 53 | - [ ] "Find the latest PDF report in Google Drive and email it to investor@vc.com." 54 | - [ ] "Download an image from this link and upload it to my Google Drive: https://example.com/image.png" 55 | - [ ] "Get the most recent attachment from my inbox and upload it to Google Drive." 56 | 57 | --- 58 | Run each of these in isolated test cases. For cross-tool flows, verify binary serialization integrity. 59 | -------------------------------------------------------------------------------- /Turn Any Prompt Into a Chart and Upload It to WordPress/README.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | This workflow takes a prompt, searches the web for relevant data, and generates a chart using Chart.js. It then uploads the image to WordPress, which serves as CDN-style image delivery for your blog. I personally use this workflow as an MCP tool when generating blog posts. 4 | 5 | ![Turn Any Prompt Into a Chart and Upload It to WordPress](https://articles.emp0.com/wp-content/uploads/2025/07/AI-Powered-Chart-Generation-from-Web-Data-with-GPT-4o-and-WordPress-Upload.png) 6 | 7 | This workflow is also published on the [main n8n.io website](https://n8n.io/workflows/6361-ai-powered-chart-generation-from-web-data-with-gpt-4o-and-wordpress-upload/) 8 | 9 | ## AI-Powered Chart Generation from Web Data 10 | 11 | This n8n workflow automates the process of: 12 | 1. **Scraping real-time data from the web** using GPT-4o with browsing capability 13 | 2. **Converting markdown tables into Chart.js-compatible JSON** 14 | 3. **Rendering the chart** using [QuickChart.io](https://quickchart.io) 15 | 4. 
**Uploading the resulting image** directly to your WordPress media library 16 | 17 | --- 18 | 19 | ## 🚀 Use Case 20 | 21 | Ideal for content creators, analysts, or automation engineers who need to: 22 | - Automate generation of visual reports 23 | - Create marketing-ready charts from live data 24 | - Streamline research-to-publish workflows 25 | 26 | --- 27 | 28 | ## 🧠 How It Works 29 | 30 | ### 1. Prompt Input 31 | Trigger the workflow manually or via another workflow with a `prompt` string, e.g.: 32 | 33 | Generate a graph of apple's market share in the mobile phone market in Q1 2025 34 | 35 | 36 | --- 37 | 38 | ### 2. Web Search + Table Extraction 39 | The `Message a model` node uses GPT-4o with search to: 40 | - Perform a real-time query 41 | - Extract data into a markdown table 42 | - Return the raw table + citation URLs 43 | 44 | --- 45 | 46 | ### 3. Chart Generation via AI Agent 47 | The `Generate Chart AI Agent`: 48 | - Interprets the table 49 | - Picks an appropriate chart type (bar, line, doughnut, etc.) 50 | - Outputs valid Chart.js JSON using a strict schema 51 | 52 | --- 53 | 54 | ### 4. QuickChart API Integration 55 | The `Create QuickChart` node: 56 | - Sends the Chart.js config to QuickChart.io 57 | - Renders the chart into a PNG image 58 | 59 | --- 60 | 61 | ### 5. 
WordPress Image Upload 62 | The `Upload image` node: 63 | - Uploads the PNG to your WordPress media library using REST API 64 | - Uses proper headers for filename and content-type 65 | - Returns the media GUID and full image URL 66 | 67 | --- 68 | 69 | ## 🧩 Nodes Used 70 | 71 | - `Manual Trigger` or `Execute Workflow Trigger` 72 | - `OpenAI Chat Model` (GPT-4o) 73 | - `LangChain Agent` (Chart Generator) 74 | - `LangChain OutputParserStructured` 75 | - `HTTP Request` (QuickChart API + WordPress Upload) 76 | - `Code` (Final result formatting) 77 | 78 | --- 79 | 80 | ## 🗂 Output Format 81 | 82 | The final `Code` node returns: 83 | ```json 84 | { 85 | "research": { ...raw markdown table + citations... }, 86 | "graph_data": { ...Chart.js JSON... }, 87 | "graph_image": { ...WordPress upload metadata... }, 88 | "result_image_url": "https://your-wordpress.com/wp-content/uploads/...png" 89 | } 90 | ``` 91 | ## ⚙️ Requirements 92 | - OpenAI credentials (GPT-4o or GPT-4o-mini) 93 | 94 | - WordPress REST API credentials with media write access 95 | 96 | - QuickChart.io (free tier works) 97 | 98 | - n8n v1.25+ recommended 99 | 100 | ## 📌 Notes 101 | Chart style and format are determined dynamically based on your table structure and AI interpretation. 102 | 103 | Make sure your OpenAI and WordPress credentials are connected properly. 104 | 105 | Outputs are schema-validated to ensure reliable rendering. 106 | 107 | ## 🖼 Sample Output 108 | ![openchart graph](https://articles.emp0.com/wp-content/uploads/2025/07/chart-apple-market-share-q1-2025.png) -------------------------------------------------------------------------------- /Discord Daily Digest for Multiple Google Analytics Accounts/README.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | Fetches metrics from multiple Google Analytics (GA4) accounts daily, posts them to Discord, and updates each previous day’s entry as GA data finalizes over seven days.
3 | 4 | ![Google Analytics to Discord Workflow](https://articles.emp0.com/wp-content/uploads/2025/07/google-analytics-to-discord-workflow.png) 5 | 6 | This workflow can also be found on the [main n8n.io website](https://n8n.io/workflows/5470-discord-daily-digest-for-multiple-google-analytics-accounts/) 7 | 8 | ## Benefits 9 | - Automates daily traffic reporting 10 | - Maintains a single message per day, avoiding channel clutter 11 | - Provides near–real-time updates by editing prior messages 12 | 13 | ## Use Case 14 | - Teams tracking website performance via Discord (or any chat tool) without manual copy–paste. Marketing managers, community moderators, growth hackers. 15 | - If your manager asks you for a daily marketing report every morning, you can now automate it 16 | 17 | ![Discord Daily Digest for Multiple Google Analytics Accounts](https://articles.emp0.com/wp-content/uploads/2025/06/Discord-Daily-Digest-for-Multiple-Google-Analytics-Accounts.png) 18 | 19 | ## Notes 20 | - The Google Analytics node in n8n does not provide real-time data; it updates previous values for the next 7 days 21 | - The Discord node in n8n cannot update an existing message by message ID, so we use the Discord API directly for this 22 | - Most businesses use multiple Google Analytics properties across their digital platforms 23 | 24 | 25 | # Core Logic 26 | 27 | 1. Schedule trigger fires once a day. 28 | 2. Google Analytics node retrieves metrics for date ranges (past 7 days) 29 | 3. Aggregate node collates all records. 30 | 4. Discord node fetches the last 10 messages in the broadcast channel 31 | 5. Code node maps existing Discord messages to the Google Analytics data using the date fields 32 | 6. For each GA record: 33 | 1. If no message exists → send a new POST to the Discord channel 34 | 2. If a message exists and metrics changed → send a PATCH to update the existing Discord message 35 | 7. Batch loops and wait nodes prevent rate limiting.
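The message-mapping and update decision in Core Logic above can be sketched as a Code-node helper. The field names (`date`, `summary`, message `content` starting with an ISO date) are illustrative assumptions, not the workflow's actual schema:

```javascript
// Match existing Discord messages to GA records by date, then decide
// whether each record needs a new POST, a PATCH, or nothing.
// Assumes each digest message starts with its YYYY-MM-DD date.
function diffDigest(gaRecords, discordMessages) {
  const byDate = new Map(
    discordMessages.map((m) => [m.content.slice(0, 10), m])
  );
  return gaRecords.map((rec) => {
    const existing = byDate.get(rec.date);
    if (!existing) return { date: rec.date, action: "post" };
    if (existing.content !== rec.summary)
      return { date: rec.date, action: "patch", messageId: existing.id };
    return { date: rec.date, action: "skip" };
  });
}

const decisions = diffDigest(
  [
    { date: "2025-06-01", summary: "2025-06-01 visitors: 12" },
    { date: "2025-06-02", summary: "2025-06-02 visitors: 7" },
  ],
  [{ id: "42", content: "2025-06-01 visitors: 10" }]
);
// decisions[0] → patch (metrics changed), decisions[1] → post (no message yet)
```

Keying the map on the date prefix is what lets the workflow keep one message per day instead of appending a new one on every run.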
36 | 37 | # Setup Instructions 38 | 39 | 1. Import workflow JSON into n8n. 40 | 2. [Follow the n8n guide to Create Google Analytics OAuth2 credential](https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.googleanalytics) with access to all required GA accounts. 41 | 3. [Follow the n8n guide to Create Discord OAuth2 credential](https://docs.n8n.io/integrations/builtin/app-nodes/n8n-nodes-base.discord) for “Get Messages” operations. 42 | 4. [Follow the Discord guide to Create HTTP Header Auth credential](https://discord.com/developers/docs/reference) named “Discord-Bot” with header 43 | 44 | ``` 45 | Key: Authorization 46 | 47 | Value: Bot <your-bot-token> 48 | ``` 49 | 50 | 51 | 5. In the two Set nodes at the beginning of the flow, assign discord_channel_id and google_analytics_id. 52 | ![Edit discord and google analytics id](https://articles.emp0.com/wp-content/uploads/2025/06/edit-discord-and-google-analytics-id-1.png) 53 | 1. Get your Discord channel ID by sending a message in your Discord channel and then copying the message link 54 | 2. The message link has the form https://discord.com/channels/server_id/channel_id/message_id, so take the channel_id, which is the number in the middle 55 | ![get discord channel id](https://articles.emp0.com/wp-content/uploads/2025/06/12345678-1.png) 56 | 3. Find your Google Analytics ID by opening the Google Analytics dashboard, checking the properties in the top right, and copying that number into the flow 57 | ![get google analytics id ](https://articles.emp0.com/wp-content/uploads/2025/06/12345678.png) 58 | 59 | 60 | 6. Adjust schedule trigger times to your preferred report hour. 61 | 62 | 7. Activate workflow. 63 | 64 | # Customization 65 | 66 | Replace the Discord HTTP Request nodes with Slack, ClickUp, WhatsApp, or Telegram integrations by swapping the POST/PATCH endpoints and authentication.
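The shape of the PATCH request the HTTP Request node sends can be sketched as below; swap the URL and `Authorization` header to target another chat tool. The `/api/v10` path follows the Discord API docs, and the token is a placeholder:

```javascript
// Build the edit-message request the HTTP Request node issues.
// Returning a plain object keeps the URL/headers easy to inspect
// before wiring them into the node.
function buildEditRequest(channelId, messageId, content, botToken) {
  return {
    url: `https://discord.com/api/v10/channels/${channelId}/messages/${messageId}`,
    method: "PATCH",
    headers: {
      Authorization: `Bot ${botToken}`, // matches the "Discord-Bot" credential above
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content }),
  };
}

const req = buildEditRequest("123", "456", "Daily digest: 12 visitors", "TOKEN");
// req.url → "https://discord.com/api/v10/channels/123/messages/456"
```

For Slack you would swap this for `chat.update` with a `Bearer` token; the rest of the workflow stays unchanged.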
67 | 68 | ## 📎 Repo & Credits 69 | 70 | - Also check out our Discord bot trigger: [n8n_discord_trigger_bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 71 | - Creator: [Jay (Emp₀)](https://twitter.com/jharilela) 72 | - Automation tool: [n8n](https://n8n.partnerlinks.io/emp0) -------------------------------------------------------------------------------- /Reddit To Twitter Automation/README.md: -------------------------------------------------------------------------------- 1 | Automatically turns trending Reddit posts into **punchy, first-person tweets** powered by **Google Gemini AI**, **Reddit**, and **Twitter API**, with Google Sheets logging. 2 | 3 | --- 4 | 5 | ## 🧩 Overview 6 | 7 | This workflow repurposes Reddit content into original tweets every few hours. 8 | It’s perfect for creators, marketers, or founders who want to **automate content inspiration** while keeping tweets sounding **human, edgy, and fresh**. 9 | 10 | **Core automation loop:** 11 | 12 | 1. Fetch trending Reddit posts from selected subreddits. 13 | 2. Use **Gemini AI** to write a short, first-person tweet. 14 | 3. Check your Google Sheet to avoid reusing the same Reddit post. 15 | 4. Publish to Twitter automatically. 16 | 5. Log tweet + Reddit reference in Google Sheets. 17 | 18 | --- 19 | 20 | ## 🧠 Workflow Diagram 21 | 22 | ![Workflow Diagram](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-twitter-workflow-1.png) 23 | 24 | --- 25 | 26 | ## 🪄 How It Works 27 | 28 | 1️⃣ **Every 2 hours** → the workflow triggers automatically. 29 | 2️⃣ It picks a subreddit (like `r/automation`, `r/n8n`, `r/SaaS`). 30 | 3️⃣ Gemini AI analyzes a rising Reddit post and writes a **fresh, short tweet**. 31 | 4️⃣ The system checks your **Google Sheet** to ensure it hasn’t used that Reddit post before. 32 | 5️⃣ Once validated, the tweet is **published via Twitter API** and **logged**. 
33 | 34 | --- 35 | 36 | ## 🧠 Example Tweet Output 37 | 38 | ![Example Tweet](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-twitter-post.png) 39 | 40 | --- 41 | 42 | ## 📊 Logged Data (Google Sheets) 43 | 44 | Each tweet is automatically logged for version control and duplication checks. 45 | 46 | | Date | Subreddit | Post ID | Tweet Text | 47 | |------|------------|----------|-------------| 48 | | 08/10/2025 | n8n_ai_agents | 1o16ome | Just saw a wild n8n workflow on Reddit... | 49 | 50 | --- 51 | 52 | ## ⚙️ Key Components 53 | 54 | | Node | Function | 55 | |------|-----------| 56 | | **Schedule Trigger** | Runs every 2 hours to generate a new tweet. | 57 | | **Code (Randomly Decide Subreddit)** | Picks one subreddit randomly from your preset list. | 58 | | **Gemini Chat Model** | Generates tweet text in first person tone using custom prompt rules. | 59 | | **Reddit Tool** | Fetches top or rising posts from the chosen subreddit. | 60 | | **Google Sheets (read database)** | Keeps a record of already-used Reddit posts. | 61 | | **Structured Output Parser** | Ensures consistent tweet formatting (tweet text, subreddit, post ID). | 62 | | **Twitter Node** | Publishes the AI-generated tweet. | 63 | | **Append Row in Sheet** | Logs the tweet with date, subreddit, and post ID. | 64 | 65 | --- 66 | 67 | ## 🧩 Setup Tutorial 68 | 69 | ### 1️⃣ Prerequisites 70 | 71 | | Tool | Purpose | 72 | |------|----------| 73 | | **n8n Cloud or Self-Host** | Workflow execution | 74 | | **Google Gemini API Key** | For tweet generation | 75 | | **Reddit OAuth2 API** | To fetch posts | 76 | | **Twitter (X) API OAuth2** | To publish tweets | 77 | | **Google Sheets API** | For logging and duplication tracking | 78 | 79 | --- 80 | 81 | ### 2️⃣ Import the Workflow 82 | 83 | 1. Download `Reddit Twitter Automation.json`. 84 | 2. In n8n, click **Import Workflow → From File**. 85 | 3. 
Connect your credentials: 86 | - Gemini → `Gemini` 87 | - Reddit → `Reddit account` 88 | - Twitter → `X` 89 | - Google Sheets → `Gsheet` 90 | 91 | --- 92 | 93 | ### 3️⃣ Configure Google Sheet 94 | 95 | Your sheet must include these columns: 96 | 97 | | Column | Description | 98 | |--------|--------------| 99 | | `PAST TWEETS` | The tweet text | 100 | | `Date` | Auto-generated date | 101 | | `subreddit` | Reddit source | 102 | | `post_id` | Reddit post reference | 103 | 104 | ![Google Sheet Log](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-twitter-gsheet.png) 105 | 106 | --- 107 | 108 | ### 4️⃣ Customize Subreddits 109 | 110 | In the **Code Node**, update this array to choose which subreddits to monitor: 111 | 112 | ```javascript 113 | const subreddits = [ 114 | "n8n", 115 | "microsaas", 116 | "SaaS", 117 | "automation", 118 | "n8n_ai_agents" 119 | ]; -------------------------------------------------------------------------------- /firebase-debug.log: -------------------------------------------------------------------------------- 1 | [debug] [2025-12-09T11:30:35.855Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 2 | [debug] [2025-12-09T11:30:35.870Z] > authorizing via signed-in user (jay@emp0.com) 3 | [debug] [2025-12-09T11:30:35.871Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 4 | [debug] [2025-12-09T11:30:35.871Z] > authorizing via signed-in user (jay@emp0.com) 5 | [debug] [2025-12-09T11:30:35.900Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 6 | [debug] [2025-12-09T11:30:35.901Z] 
> authorizing via signed-in user (jay@emp0.com) 7 | [debug] [2025-12-09T11:30:36.070Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 8 | [debug] [2025-12-09T11:30:36.073Z] > authorizing via signed-in user (jay@emp0.com) 9 | [debug] [2025-12-09T11:30:36.075Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 10 | [debug] [2025-12-09T11:30:36.075Z] > authorizing via signed-in user (jay@emp0.com) 11 | [debug] [2025-12-09T11:30:36.082Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 12 | [debug] [2025-12-09T11:30:36.082Z] > authorizing via signed-in user (jay@emp0.com) 13 | [debug] [2025-12-09T11:30:36.083Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 14 | [debug] [2025-12-09T11:30:36.083Z] > authorizing via signed-in user (jay@emp0.com) 15 | [debug] [2025-12-09T11:30:38.212Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 16 | [debug] [2025-12-09T11:30:38.321Z] > authorizing via signed-in user (jay@emp0.com) 17 | [debug] [2025-12-09T11:30:38.321Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 18 | [debug] [2025-12-09T11:30:38.321Z] > authorizing 
via signed-in user (jay@emp0.com) 19 | [debug] [2025-12-09T11:30:38.466Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 20 | [debug] [2025-12-09T11:30:38.469Z] > authorizing via signed-in user (jay@emp0.com) 21 | [debug] [2025-12-09T11:30:38.795Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 22 | [debug] [2025-12-09T11:30:38.806Z] > authorizing via signed-in user (jay@emp0.com) 23 | [debug] [2025-12-09T11:30:38.809Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 24 | [debug] [2025-12-09T11:30:38.809Z] > authorizing via signed-in user (jay@emp0.com) 25 | [debug] [2025-12-09T11:30:38.811Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 26 | [debug] [2025-12-09T11:30:38.812Z] > authorizing via signed-in user (jay@emp0.com) 27 | [debug] [2025-12-09T11:30:38.813Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"] 28 | [debug] [2025-12-09T11:30:38.813Z] > authorizing via signed-in user (jay@emp0.com) 29 | -------------------------------------------------------------------------------- /Generate and Upload Blog Images with Leonardo AI and WordPress/README.md: -------------------------------------------------------------------------------- 1 | # Overview 2 | 3 | ![Generate and Upload Blog Images with 
Leonardo AI and WordPress](https://articles.emp0.com/wp-content/uploads/2025/07/generate-and-upload-blog-images-with-leonardo-ai-and-wordpress.png) 4 | 5 | This workflow generates high-quality AI images from text prompts using **Leonardo AI**, then automatically uploads the result to your WordPress media library and returns the final image URL. 6 | 7 | It functions as a **Modular Content Production (MCP) tool** - ideal for AI agents or workflows that need to dynamically generate and store visual assets on-demand. 8 | 9 | This workflow can also be found on the [main n8n.io website](https://n8n.io/workflows/6363-generate-and-upload-blog-images-with-leonardo-ai-and-wordpress/) 10 | 11 | --- 12 | 13 | ## ⚙️ Features 14 | 15 | - 🧠 **AI-Powered Generation** 16 | Uses Leonardo AI to create 1472x832px images from any text prompt, with enhanced contrast and style UUID preset. 17 | 18 | - ☁️ **WordPress Media Upload** 19 | Uploads the image as an attachment to your connected WordPress site via REST API. 20 | 21 | - 🔗 **Returns Final URL** 22 | Outputs the publicly accessible image URL for immediate use in websites, blogs, or social media posts. 23 | 24 | - 🔁 **Workflow-Callable (MCP Compatible)** 25 | Can be executed standalone or triggered by another workflow. Acts as an image-generation microservice for larger automation pipelines. 26 | 27 | --- 28 | 29 | ## 🧠 Use Cases 30 | 31 | ### For AI Agents (MCP) 32 | - Plug this into multi-agent systems as the "image generation module" 33 | - Generate blog thumbnails, product mockups, or illustrations 34 | - Return a clean `image_url` for content embedding or post-publishing 35 | 36 | ### For Marketers / Bloggers 37 | - Automate visual content creation for articles 38 | - Scale image generation for SEO blogs or landing pages 39 | 40 | ### For Developers / Creators 41 | - Integrate with other n8n workflows 42 | - Pass prompt and slug as inputs from any external trigger (e.g., webhook, Discord, Airtable, etc.) 
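A minimal sketch of the input-parsing step (the `Code` Input Parser node in this workflow), assuming the `img-<slug>.jpg` naming seen in the example output; the actual node code may differ:

```javascript
// Validate the incoming prompt/slug pair and derive the filename
// the WordPress upload will use. The slug-sanitizing rule is an
// illustrative assumption.
function parseInput({ prompt, slug }) {
  if (!prompt || !slug) throw new Error("prompt and slug are required");
  const safeSlug = slug.toLowerCase().replace(/[^a-z0-9-]+/g, "-");
  return { prompt, slug: safeSlug, filename: `img-${safeSlug}.jpg` };
}

const parsed = parseInput({
  prompt: "A futuristic city skyline at night",
  slug: "Futuristic City",
});
// parsed.filename → "img-futuristic-city.jpg"
```

Sanitizing the slug up front keeps the WordPress media filename URL-safe regardless of what a parent workflow passes in.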
43 | 44 | --- 45 | 46 | ## 📥 Inputs 47 | 48 | | Field | Type | Description | 49 | |--------|--------|----------------------------------------| 50 | | prompt | string | Text prompt for image generation | 51 | | slug | string | Filename identifier (e.g. `hero-image`) | 52 | 53 | Example: 54 | ```json 55 | { 56 | "prompt": "A futuristic city skyline at night", 57 | "slug": "futuristic-city" 58 | } 59 | ``` 60 | ## 📤 Output 61 | ```json 62 | { 63 | "image_url": "https://yourwordpresssite.com/wp-content/uploads/2025/07/img-futuristic-city.jpg" 64 | } 65 | ``` 66 | ## 🔄 Workflow Summary 67 | 1. Receive Prompt & Slug 68 | Via manual trigger or parent workflow execution 69 | 2. Generate Image 70 | POST to Leonardo AI's API with the prompt and config 71 | 3. Wait & Poll 72 | Delays 1 minute, then fetches final image metadata 73 | 4. Download Image 74 | GET request to retrieve generated image 75 | 5. Upload to WordPress 76 | Uses WordPress REST API with proper headers 77 | 6. Return Result 78 | Outputs a clean image_url JSON object 79 | 80 | ## 🔐 Requirements 81 | - Leonardo AI account and API Key 82 | - WordPress site with API credentials (media write permission) 83 | - n8n instance (self-hosted or cloud) 84 | - This credential setup: 85 | - `httpHeaderAuth` for Leonardo headers 86 | - `httpBearerAuth` for Leonardo bearer token 87 | - `wordpressApi` for upload 88 | 89 | ## 🧩 Node Stack 90 | - `Execute Workflow Trigger` / `Manual Trigger` 91 | - `Code` (Input Parser) 92 | - `HTTP Request` → Leonardo image generation 93 | - `Wait` → 1 min delay 94 | - `HTTP Request` → Poll generation result 95 | - `HTTP Request` → Download image 96 | - `HTTP Request` → Upload to WordPress 97 | - `Code` → Return final image URL 98 | 99 | ## 🖼 Example Prompt 100 | ```json 101 | { 102 | "prompt": "Batman typing on a laptop", 103 | "slug": "batman-typing-on-a-laptop" 104 | } 105 | ``` 106 | Will return: 107 | ```bash 108 | 
https://articles.emp0.com/wp-content/uploads/2025/07/img-batman-typing-on-a-laptop.jpg 109 | ``` 110 | ## 🧠 Integrate with AI Agents 111 | This workflow is MCP-compliant—plug it into: 112 | - Research-to-post pipelines 113 | - Blog generators 114 | - Carousel builders 115 | - Email visual asset creators 116 | 117 | Trigger it from any parent AI agent that needs to generate an image based on a given idea, post, or instruction. -------------------------------------------------------------------------------- /Ebook to Audiobook converter/README.md: -------------------------------------------------------------------------------- 1 | # Ebook to Audiobook Converter 2 | 3 | [![Watch Demo](https://img.youtube.com/vi/xKqkjXIPZoM/maxresdefault.jpg)](https://youtu.be/xKqkjXIPZoM) 4 | 5 | **[▶️ Watch Full Demo Video](https://youtu.be/xKqkjXIPZoM)** 6 | 7 | --- 8 | 9 | ## What It Does 10 | 11 | Turn any PDF ebook into a professional audiobook automatically. Upload a PDF, get an MP3 audiobook in your Google Drive. Perfect for listening to books, research papers, or documents on the go. 
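Under the hood the workflow extracts the PDF text and splits it into TTS-sized chunks before conversion. A minimal sketch of such a chunking step (the 1,000-character limit is an assumption, not the workflow's exact value; it should be tuned to the TTS provider's input limit):

```javascript
// Split extracted text into chunks for TTS, breaking at sentence
// boundaries so no chunk exceeds maxLen characters.
// maxLen = 1000 is an assumed limit, not the workflow's exact setting.
function chunkText(text, maxLen = 1000) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [];
  const chunks = [];
  let current = "";
  for (const sentence of sentences) {
    if (current.length + sentence.length > maxLen && current) {
      chunks.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}
```

Breaking at sentence boundaries avoids audible mid-word cuts when the per-chunk audio files are later merged.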
12 | 13 | **Example**: [Input PDF](https://www.laburnumps.vic.edu.au/uploaded_files/media/little_red_riding_hood.pdf) → [Output Audiobook](https://drive.google.com/file/d/12aVR2p-ZQ2DyqXCUgJPouzy-acoAB7WO/view?usp=sharing) 14 | 15 | ## Key Features 16 | 17 | - Upload PDF via web form → Get MP3 audiobook in Google Drive 18 | - Natural-sounding AI voices (MiniMax Speech-02-HD) 19 | - Automatic text extraction, chunking, and audio merging 20 | - Customizable voice, speed, and emotion settings 21 | - Processes long books in batches with smart rate limiting 22 | 23 | ## Perfect For 24 | 25 | - **Students**: Turn textbooks into study audiobooks 26 | - **Professionals**: Listen to reports and documents while commuting 27 | - **Content Creators**: Repurpose written content as audio 28 | - **Accessibility**: Make content accessible to visually impaired users 29 | 30 | ## Requirements 31 | 32 | | Component | Details | 33 | |-----------|---------| 34 | | **n8n** | Self-hosted ONLY (cannot run on n8n Cloud) | 35 | | **FFmpeg** | Must be installed in your n8n environment | 36 | | **Replicate API** | For MiniMax TTS ([Sign up here](https://replicate.com)) | 37 | | **Google Drive** | OAuth2 credentials + "Audiobook" folder | 38 | 39 | ⚠️ **Important**: This workflow **does NOT work on n8n Cloud** because FFmpeg installation is required. 40 | 41 | ## Quick Setup 42 | 43 | ### 1. Install FFmpeg 44 | 45 | **Docker users:** 46 | ```bash 47 | docker exec -it <n8n-container-name> /bin/bash 48 | apt-get update && apt-get install -y ffmpeg 49 | ``` 50 | 51 | **Native installation:** 52 | ```bash 53 | sudo apt-get install ffmpeg # Linux 54 | brew install ffmpeg # macOS 55 | ``` 56 | 57 | ### 2. Get API Keys 58 | - **Replicate**: Sign up at [replicate.com](https://replicate.com) and copy your API token 59 | - **Google Drive**: Set up OAuth2 in n8n and create an "Audiobook" folder in Drive 60 | 61 | ### 3. Import & Configure 62 | 1. Import `n8n.json` into your n8n instance 63 | 2.
Replace the Replicate API token in the "MINIMAX TTS" node 64 | 3. Configure Google Drive credentials and select your "Audiobook" folder 65 | 4. Activate the workflow 66 | 67 | ## Cost Estimate 68 | 69 | | Component | Cost | 70 | |-----------|------| 71 | | **MiniMax TTS API** | ~$0.15 per 1000 characters (~$3-5 for average book) | 72 | | **Google Drive Storage** | Free (up to 15GB) | 73 | | **Processing Time** | ~1-2 minutes per 10 pages | 74 | 75 | ## How It Works 76 | 77 | ![Workflow Diagram](https://articles.emp0.com/wp-content/uploads/2025/10/Screenshot-from-2025-10-20-19-23-45.png) 78 | 79 | ``` 80 | PDF Upload → Extract Text → Split into Chunks → Convert to Speech (batches of 5) 81 | → Merge Audio Files (FFmpeg) → Upload to Google Drive 82 | ``` 83 | 84 | The workflow uses four main modules: 85 | 1. **Extraction**: PDF text extraction and intelligent chunking 86 | 2. **Conversion**: MiniMax TTS processes text in batches 87 | 3. **Merging**: FFmpeg combines all audio files seamlessly 88 | 4. 
**Upload**: Final audiobook saved to Google Drive 89 | 90 | ## Voice Settings (Customizable) 91 | 92 | ```json 93 | { 94 | "voice_id": "Friendly_Person", 95 | "emotion": "happy", 96 | "speed": 1, 97 | "pitch": 0 98 | } 99 | ``` 100 | 101 | Available emotions: `happy`, `neutral`, `sad`, `angry`, `excited` 102 | 103 | ## Limitations 104 | 105 | - ⚠️ **Self-hosted n8n ONLY** (not compatible with n8n Cloud) 106 | - PDF files only (not EPUB, MOBI, or scanned images) 107 | - Large books (500+ pages) take longer to process 108 | - Requires FFmpeg installation (see setup above) 109 | 110 | ## Troubleshooting 111 | 112 | **FFmpeg not found?** 113 | - Docker: Run `docker exec -it <n8n-container-name> /bin/bash` then `apt-get install ffmpeg` 114 | - Native: Run `sudo apt-get install ffmpeg` (Linux) or `brew install ffmpeg` (macOS) 115 | 116 | **Rate limit errors?** 117 | - Increase wait time in the "WAITS FOR 5 SECONDS" node to 10-15 seconds 118 | 119 | **Google Drive upload fails?** 120 | - Make sure you created the "Audiobook" folder in your Google Drive 121 | - Reconfigure OAuth2 credentials in n8n 122 | 123 | --- 124 | 125 | Created by [emp0](https://emp0.com) | More workflows: [n8n Gallery](https://n8n.io/creators/jay-emp0/) 126 | -------------------------------------------------------------------------------- /CLAUDE.md: -------------------------------------------------------------------------------- 1 | # CLAUDE.md 2 | 3 | This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. 4 | 5 | ## Repository Overview 6 | 7 | This repository contains a collection of production-ready n8n workflow automations created by the emp0 team. Each workflow is designed to solve specific business automation challenges using AI agents, integrations, and modern APIs. The workflows are distributed through the official n8n website and emp0.com.
8 | 9 | ## Repository Structure 10 | 11 | The repository follows a folder-per-workflow structure: 12 | - Each workflow folder contains an `n8n.json` file (the workflow definition) 13 | - Each workflow has a comprehensive `README.md` with setup instructions and use cases 14 | - Some workflows include additional documentation like `Technical Setup.md` 15 | - Testing documentation exists for complex workflows (e.g., `prompt checklist.md` for MCP Assistant) 16 | 17 | ## Workflow Types & Architecture 18 | 19 | ### Core Workflow Categories 20 | 1. **AI-Powered Content Generation** - Automated blog content creation and distribution 21 | 2. **Email Management & Classification** - Intelligent email processing with spam detection 22 | 3. **Personal Productivity** - Multi-platform assistant with calendar, email, and social media integration 23 | 4. **Visual Content Creation** - AI image generation and WordPress integration 24 | 5. **Analytics & Reporting** - Data aggregation and Discord notifications 25 | 26 | ### Technical Architecture Patterns 27 | 28 | #### AI Agent Integration 29 | Most workflows use n8n's AI Agent nodes with: 30 | - OpenAI GPT-4 for reasoning and content generation 31 | - MCP (Model Context Protocol) for tool orchestration 32 | - Vector databases for semantic search and memory 33 | - Custom training data via Google Sheets 34 | 35 | #### Platform Integrations 36 | Common integration patterns: 37 | - **Gmail**: Trigger-based email processing with OAuth2 authentication 38 | - **Discord**: Custom bot integration via `n8n_discord_trigger_bot` repository 39 | - **WordPress**: REST API for content publishing with featured images 40 | - **Google Services**: Calendar, Drive, Sheets with service account authentication 41 | - **Social Media**: Twitter/LinkedIn for content distribution 42 | 43 | #### Data Flow Architecture 44 | 1. **Trigger**: External event (email, webhook, schedule) 45 | 2. **Processing**: AI analysis, classification, or generation 46 | 3. 
**Memory**: Vector storage or Google Sheets for persistent data 47 | 4. **Distribution**: Multi-platform publishing (Discord, WordPress, social) 48 | 49 | ## Key Dependencies & External Tools 50 | 51 | ### Required External Repository 52 | - **n8n_discord_trigger_bot** (https://github.com/Jharilela/n8n_discord_trigger_bot): Custom Discord bot that enables Discord triggers in n8n workflows 53 | 54 | ### Common API Services 55 | - OpenAI (GPT-4 for AI processing) 56 | - Google Workspace APIs (Gmail, Calendar, Drive, Sheets) 57 | - WordPress REST API 58 | - Twitter API v2 59 | - LinkedIn API 60 | - Leonardo AI (image generation) 61 | - Various RSS feeds for content sourcing 62 | 63 | ## Development Guidelines 64 | 65 | ### Workflow Configuration 66 | - All credentials must be replaced with your own OAuth tokens when copying workflows 67 | - Timezone settings should be configured in n8n Settings > Default Timezone 68 | - Vector databases require proper embedding configuration (MongoDB or Supabase pgvector) 69 | 70 | ### Testing Approach 71 | - Complex workflows include prompt checklists (see `MCP AI Assistant/prompt checklist.md`) 72 | - Test cross-tool integrations separately before combining 73 | - Verify binary serialization for file transfers between services 74 | 75 | ### Content Standards 76 | - Workflows generate content for emp0.com and are tagged with version numbers 77 | - All generated articles include attribution and source links 78 | - Visual content includes proper alt text and SEO optimization 79 | 80 | ## Workflow-Specific Notes 81 | 82 | ### Content Generator V3 83 | - Uses RSS ingestion → vectorization → semantic clustering → multi-agent content generation 84 | - Requires vector database setup and embedding API configuration 85 | - Includes cost optimization strategies documented in Technical Setup.md 86 | 87 | ### AI Email Classifier 88 | - Uses Google Sheets as training memory for spam/legit classification 89 | - Supports multiple Gmail accounts 
with unified Discord reporting 90 | - Feedback loop allows manual correction via Discord replies 91 | 92 | ### MCP AI Assistant 93 | - Implements full MCP protocol for tool orchestration 94 | - Requires careful timezone configuration to avoid UTC bugs 95 | - Supports complex multi-step reasoning across multiple platforms 96 | 97 | ## Important URLs & Resources 98 | - Official n8n workflow gallery: n8n.io/creators/jay-emp0/ 99 | - Live examples: articles.emp0.com/tag/v3/ 100 | - Discord bot dependency: github.com/Jharilela/n8n_discord_trigger_bot 101 | - Commercial distribution: emp0.com/automation-workflows -------------------------------------------------------------------------------- /Generate Images with Replicate and Flux/README.md: -------------------------------------------------------------------------------- 1 | MCP Tool — Replicate (Flux) Image Generator → WordPress/Twitter 2 | 3 | Generates images via **Replicate** Flux models and uploads to WordPress (and optionally Twitter/X). Built to act as an **MCP** module that other agents/workflows call for on-demand image creation. 
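Callers pass `prompt`, `slug`, and `model`; since only three Flux models are wired into the workflow, it is worth validating the `model` field before calling. A sketch (the allowed list mirrors this workflow's configuration; the function name is illustrative):

```javascript
// Build the request body for this MCP image tool, validating the model
// against the three Flux models configured in the workflow.
const ALLOWED_MODELS = [
  "black-forest-labs/flux-schnell",
  "black-forest-labs/flux-dev",
  "black-forest-labs/flux-1.1-pro",
];

function buildImageRequest(prompt, slug, model) {
  if (!ALLOWED_MODELS.includes(model)) {
    throw new Error(`Unsupported model: ${model}`);
  }
  return { prompt, slug, model };
}
```

Failing fast on an unknown model avoids burning a Replicate API call on a request that cannot succeed.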
4 | 5 | - Models configured in this workflow:\ 6 | `black-forest-labs/flux-schnell`, `black-forest-labs/flux-dev`, `black-forest-labs/flux-1.1-pro` 7 | - Switch rationale: **lower cost** 💰, **broader model choice** 🎯, **full control of parameters** ⚙️ 8 | - Leonardo API credits cannot be used in the **web UI** 🙅‍♂️; separate spend for API vs UI 9 | 10 | Links: 11 | 12 | - 📜 Prior Leonardo-based workflow: [https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/](https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/) 13 | - 📰 Blog automation consuming these images: [https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/](https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/) 14 | 15 | --- 16 | 17 | ## 📥 Inputs 18 | 19 | | Field | Type | Description | 20 | | ------ | ------ | --------------------------------- | 21 | | prompt | string | Text description for the image | 22 | | slug | string | Filename slug for WP media | 23 | | model | string | One of the configured Flux models | 24 | 25 | Example: 26 | 27 | ```json 28 | { 29 | "prompt":"Joker watching a Batman movie on his laptop", 30 | "slug":"joker-watching-batman", 31 | "model":"black-forest-labs/flux-dev" 32 | } 33 | ``` 34 | 35 | ## 📤 Output 36 | 37 | ```json 38 | { 39 | "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp", 40 | "wordpress": {...}, 41 | "twitter": {...} 42 | } 43 | ``` 44 | 45 | --- 46 | 47 | ## 🔄 Flow 48 | 49 | 1. Trigger with `prompt`, `slug`, `model` 50 | 2. Build model payload (quality/steps/ratio/output format) 51 | 3. Call Replicate: `POST /v1/models/{model}/predictions` (Prefer: wait) 52 | 4. Download the generated image URL 53 | 5. Upload to WordPress (returns public URL) 54 | 6. Optional: upload to Twitter/X 55 | 7. 
Return URL + metadata 56 | 57 | --- 58 | 59 | ## 🤖 MCP Use at Scale (emp0.com) 60 | 61 | Operational pattern: I currently use this setup for my blog where I generate **~300 posts/month**, each with **3 to 4 images** (banner + 2 to 3 inline images) → **~1,000 images/month** produced by this MCP. 62 | 63 | 💡 **Hybrid Cost-Optimized Setup:** 64 | 65 | - **High-priority images** (banners, main visuals): Generated using **Flux Dev** on Leonardo for slightly better prompt adherence. 66 | - **Low-priority images** (inline blog visuals): Generated using **Flux Schnell** on Replicate for maximum cost efficiency. 67 | 68 | --- 69 | 70 | ## 💰 Pricing Comparison (per image) 71 | 72 | 73 | Leonardo per-image cost uses API Basic math: **$9 / 3,500 credits = $0.0025714 per credit**. 74 | 75 | - **Flux Schnell (Leonardo)** = 7 credits 76 | - **Flux Dev (Leonardo)** = 7 credits 77 | - **Flux 1.1 Pro equivalent in Leonardo** = **Leonardo Phoenix** based on my experience = 10 credits 78 | 79 | | Flux Model | Replicate | Leonardo API* | 80 | | ------------------------ | ------------------------- | ------------------------------- | 81 | | `flux-schnell` | **$0.0030** (=$3/1,000) | **$0.0180** (7 × $0.0025714) | 82 | | `flux-dev` | **$0.0250** | **$0.0180** (7 × $0.0025714) | 83 | | `flux-1.1-pro` / Phoenix | **$0.0400** | **$0.0257** (10 × $0.0025714) | 84 | 85 | **Replicate pricing:** [https://replicate.com/pricing](https://replicate.com/pricing)\ 86 | **Leonardo pricing:** [https://leonardo.ai/pricing/](https://leonardo.ai/pricing/)\ 87 | **Leonardo API usage:** [https://docs.leonardo.ai/docs/commonly-used-api-values](https://docs.leonardo.ai/docs/commonly-used-api-values) 88 | 89 | --- 90 | 91 | ## 📊 Monthly Cost Example (1,000 images/month) 92 | 93 | Mix: **300 × `flux-dev` on Leonardo**, 94 | **700 × `flux-schnell` on Replicate**.
95 | 96 | | Platform/Model | Images | Price per Image | Total | 97 | | ------------------------ | ------ | --------------- | ---------- | 98 | | Leonardo `flux-dev` | 300 | $0.0180 | **$5.40** | 99 | | Replicate `flux-schnell` | 700 | $0.0030 | **$2.10** | 100 | | **Total Monthly Spend** | 1000 | — | **$7.50** | 101 | 102 | 💵 **If using Leonardo for both:** 103 | 104 | - 300 × $0.0180 = $5.40 105 | - 700 × $0.0180 = $12.60 106 | - **Total = $18.00** 107 | 108 | **Savings:** $10.50/month (**≈58% lower**) with the hybrid setup. 109 | 110 | --- 111 | 112 | ## 📌 Notes 113 | 114 | - More Replicate models can be added in `Code1` node. 115 | - Parameters tuned for aspect ratio, inference steps, quality, guidance. 116 | - Leonardo credit model is API-only; credits are not spendable in Leonardo's web UI. 117 | -------------------------------------------------------------------------------- /Memecoin Art Generator/README.md: -------------------------------------------------------------------------------- 1 | # 🐱 MemeCoin Art Generator - using Gemini Flash NanoBanana & upload to Twitter 2 | 3 | Automatically generates **memecoin art** and posts it to **Twitter (X)** powered by **Google Gemini**, **NanoBanana image generation**, and **n8n automation**. 4 | 5 | --- 6 | 7 | ## 🧩 Overview 8 | 9 | This workflow creates viral style memecoin images (like *Popcat*) and posts them directly to Twitter with a witty, Gen Z style tweet. 10 | 11 | It combines **text to image AI**, **scheduled triggers**, and **social publishing**, all in one seamless flow. 12 | 13 | **Workflow flow:** 14 | 1. Define your memecoin mascot (name, description, and base image URL). 15 | 2. Generate an AI image prompt and a meme tweet. 16 | 3. Feed the base mascot image into **Gemini Image Generation API**. 17 | 4. Render a futuristic memecoin artwork using **NanoBanana**. 18 | 5. Upload the final image and tweet automatically to Twitter. 
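Steps 3 and 4 of the flow hinge on base64 conversion: the mascot image is encoded before being sent to the Gemini Image API, and the generated output is decoded back into PNG bytes. In an n8n Code node this is a few lines (a sketch; in practice the bytes come from the previous node's binary data):

```javascript
// Encode raw image bytes to base64 for the Gemini request,
// then decode the API's base64 output back into PNG bytes.
// Sketch — in n8n, imageBytes would come from the previous node's binary data.
function encodeImage(imageBytes) {
  return Buffer.from(imageBytes).toString("base64");
}

function decodeImage(base64Data) {
  return Buffer.from(base64Data, "base64");
}
```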
19 | 20 | --- 21 | 22 | ## 🧠 Workflow Diagram 23 | 24 | ![Workflow Diagram](https://articles.emp0.com/wp-content/uploads/2025/10/memecoin-workflow.png) 25 | 26 | --- 27 | 28 | ## ⚙️ Key Components 29 | 30 | | Node | Function | 31 | |------|-----------| 32 | | **Schedule Trigger** | Runs automatically at chosen intervals to start meme generation. | 33 | | **Define Memecoin** | Defines mascot name, description, and base image URL. | 34 | | **AI Agent** | Generates tweet text and creative image prompt using Google Gemini. | 35 | | **Google Gemini Chat Model** | Provides trending topic context and meme phrasing. | 36 | | **Get Source Image** | Fetches the original mascot image (e.g., Popcat). | 37 | | **Convert Source Image to Base64** | Prepares image for AI based remixing. | 38 | | **Generate Image using NanoBanana** | Sends the prompt and base image to Gemini Image API for art generation. | 39 | | **Convert Base64 to PNG** | Converts the AI output to an image file. | 40 | | **Upload to Twitter** | Uploads generated image to Twitter via media upload API. | 41 | | **Create Tweet** | Publishes the tweet with attached image. | 42 | 43 | --- 44 | 45 | ## 🪄 How It Works 46 | 47 | 1️⃣ **Schedule Trigger** - starts the automation (e.g., hourly or daily). 48 | 2️⃣ **Define Memecoin** - stores your mascot metadata: 49 | ```yaml 50 | memecoin_name: popcat 51 | mascot_description: cat with open mouth 52 | mascot_image: https://i.pinimg.com/736x/9d/05/6b/9d056b5b97c0513a4fc9d9cd93304a05.jpg 53 | ``` 54 | 3️⃣ **AI Agent** - prompts Gemini to: 55 | - Write a short 100 character tweet in Gen Z slang. 56 | - Create an image generation prompt inspired by current meme trends. 57 | 4️⃣ **NanoBanana API** - applies your base image + AI prompt to create art. 58 | 5️⃣ **Upload & Tweet** - final image gets uploaded and posted automatically. 
59 | 60 | --- 61 | 62 | ## 🧠 Example Output 63 | 64 | **Base Source Image:** 65 | ![Base Image](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-original.jpg) 66 | 67 | **Generated Image (AI remix):** 68 | ![Generated Image](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-edit.png) 69 | 70 | **Published Tweet:** 71 | ![Tweet Example](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-edit-twitter-1.png) 72 | 73 | Example tweet text: 74 | > Popcat's about to go absolutely wild, gonna moon harder than my last test score! 🚀📈 We up! #Popcat #Memecoin 75 | 76 | --- 77 | 78 | ## 🧩 Setup Tutorial 79 | 80 | ### 1️⃣ Prerequisites 81 | 82 | | Tool | Purpose | 83 | |------|----------| 84 | | **n8n (Cloud or Self hosted)** | Workflow automation platform | 85 | | **Google Gemini API Key** | For generating tweet and image prompts | 86 | | **Twitter (X) API OAuth1 + OAuth2** | For uploading and posting tweets | 87 | 88 | --- 89 | 90 | ### 2️⃣ Import the Workflow 91 | 92 | 1. Download `memecoin art generator.json`. 93 | 2. In n8n, click **Import Workflow → From File**. 94 | 3. Set up and connect credentials: 95 | - Google Gemini API 96 | - Twitter OAuth 97 | 4. (Optional) Adjust **Schedule Trigger** frequency to your desired posting interval. 98 | 99 | --- 100 | 101 | ### 3️⃣ Customize Your MemeCoin 102 | 103 | In the **Define Memecoin** node, edit these fields to change your meme theme: 104 | ```javascript 105 | memecoin_name: "doggo" 106 | mascot_description: "shiba inu in astronaut suit" 107 | mascot_image: "https://example.com/shiba.jpg" 108 | ``` 109 | 110 | That’s it - next cycle will generate your new meme and post it. 
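The NanoBanana step sends the generated prompt together with the base64-encoded mascot in a single Gemini `generateContent` request. A sketch of the request body (field names follow the Gemini REST API's `inline_data` form; verify the exact shape, MIME type, and model name against the Gemini docs linked in the API Notes):

```javascript
// Build a Gemini generateContent body that combines the AI image prompt
// with the base64-encoded mascot image. Field names follow the REST API's
// snake_case inline_data form; check current Gemini docs before relying on this.
function buildGeminiBody(imagePrompt, base64Mascot) {
  return {
    contents: [
      {
        parts: [
          { text: imagePrompt },
          { inline_data: { mime_type: "image/jpeg", data: base64Mascot } },
        ],
      },
    ],
  };
}
```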
111 | 112 | --- 113 | 114 | ### 4️⃣ API Notes 115 | 116 | - **Gemini Image Generation API Docs:** 117 | [https://ai.google.dev/gemini-api/docs/image-generation#gemini-image-editing](https://ai.google.dev/gemini-api/docs/image-generation#gemini-image-editing) 118 | - **API Key Portal:** 119 | [https://aistudio.google.com/api-keys](https://aistudio.google.com/api-keys) 120 | 121 | --- 122 | 123 | ## 🌐 Links 124 | 125 | - 💼 [Emp0 — AI Revenue Team](https://emp0.com) 126 | - 🧠 [Content Farming v3 Workflow](https://0emp0.gumroad.com/l/content-farming-v3) 127 | - 💬 [Discord Community](https://discord.gg/emp0) 128 | -------------------------------------------------------------------------------- /f5bot reddit auto comment/README.md: -------------------------------------------------------------------------------- 1 | # 🤖 Reddit Auto-Comment Assistant (AI-Driven Marketing Workflow) 2 | 3 | Automate how you reply to Reddit posts using **AI-generated, first-person comments** that sound human, follow subreddit rules, and (optionally) promote your own links or products. 4 | 5 | --- 6 | 7 | ## 🧩 Overview 8 | 9 | This workflow monitors Reddit mentions (via **F5Bot Gmail alerts**) and automatically: 10 | 1. Fetches the relevant Reddit post. 11 | 2. Checks the subreddit’s rules for self-promotion. 12 | 3. Generates a comment using GPT-5 style prompting (human-like tone, <255 chars). 13 | 4. Optionally promotes your chosen product from Google Sheets. 14 | 5. Posts the comment automatically 15 | 16 | It’s ideal for creators, marketers, or founders who want to grow awareness **organically and authentically** on Reddit — without sounding like a bot. 
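The 255-character ceiling mentioned in step 3 is worth enforcing outside the prompt too, since LLMs occasionally overshoot their stated limits. A simple guard (the truncate-at-word-boundary strategy is an assumption, not the workflow's exact behavior):

```javascript
// Guard against over-length AI comments before posting to Reddit.
// 255 chars matches the limit given to the prompt; truncating at the
// last word boundary is an illustrative choice.
function enforceLimit(comment, maxLen = 255) {
  if (comment.length <= maxLen) return comment;
  const cut = comment.slice(0, maxLen);
  const lastSpace = cut.lastIndexOf(" ");
  return lastSpace > 0 ? cut.slice(0, lastSpace) : cut;
}
```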
17 | 18 | --- 19 | 20 | ## 🧠 Workflow Diagram 21 | 22 | ![Workflow Diagram](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-workflow-1.png) 23 | 24 | --- 25 | 26 | ## 🚀 Key Features 27 | 28 | | Feature | Description | 29 | |----------|--------------| 30 | | **AI-Generated Reddit Replies** | Uses GPT-powered reasoning and prompt structure that mimics a senior marketing pro typing casually. | 31 | | **Rule-Aware Posting** | Reads subreddit rules and adapts tone — no promo where it’s not allowed. | 32 | | **Product Integration** | Pulls product name + URL from your Google Sheet automatically. | 33 | | **Full Automation Loop** | From Gmail → Gsheet → Reddit | 34 | | **Evaluation Metrics** | Logs tool usage, link presence, and formatting to ensure output quality. | 35 | 36 | --- 37 | 38 | ## 🧰 Setup Guide 39 | 40 | ### 1️⃣ Prerequisites 41 | 42 | | Tool | Purpose | 43 | |------|----------| 44 | | **n8n Cloud or Self-Host** | Workflow automation environment | 45 | | **OpenAI API key** | For comment generation | 46 | | **Reddit OAuth2 credentials** | To post comments | 47 | | **Google Sheets API** | To fetch and evaluate products | 48 | | **Gmail API** | To read F5Bot alerts | 49 | 50 | --- 51 | 52 | ### 2️⃣ Import the Workflow 53 | 54 | 1. Download `Reddit Assistant.json` 55 | 2. In n8n, click **Import Workflow → From File** 56 | 3. 
Paste your credentials in the corresponding nodes: 57 | - `Reddit account` 58 | - `Gmail account` 59 | - `Gsheet account` 60 | - `OpenAI API` 61 | 62 | --- 63 | 64 | ### 3️⃣ Connect Your Google Sheets 65 | 66 | You’ll need **two Google Sheets**: 67 | 68 | | Sheet | Purpose | Example Tab | 69 | |--------|----------|-------------| 70 | | **Product List** | Contains all your product names, URLs, goals, and CTAs | `promo` | 71 | | **Reddit Evaluations** | Logs AI performance metrics and tool usage | `reddit evaluations` | 72 | 73 | ![Google Sheet Example](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-gsheet-promo.png) 74 | 75 | --- 76 | 77 | ### 4️⃣ Set Up Gmail Trigger (F5Bot) 78 | 79 | 1. Subscribe to [F5Bot](https://f5bot.com) alerts for keywords like `"blog automation"` or your brand name. 80 | 81 | ![F5Bot Setup](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-f5bot-setup.png) 82 | 83 | 2. Configure Gmail Trigger to only pull from sender: `admin@f5bot.com`. 84 | 85 | ![F5Bot Email Example](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-email-f5bot.png) 86 | --- 87 | 88 | ### 5️⃣ Configure AI Agent Prompt 89 | 90 | The built-in prompt follows a **GPT-5-style structured reasoning chain**: 91 | 92 | - Reads the Reddit post + rules. 93 | - Determines if promotion is allowed. 94 | - Fetches product data from Google Sheets. 95 | - Writes a short, human comment (<255 chars). 96 | - Avoids buzzwords and fake enthusiasm. 
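The workflow also logs output checks such as link presence and dash usage (a common tell of AI-formatted text). Illustrative versions of those two predicates (the workflow's exact expressions may differ):

```javascript
// Illustrative versions of two evaluation predicates the workflow logs:
// whether the generated comment contains a URL, and whether it contains a dash.
function containsLink(comment) {
  return /https?:\/\/\S+/i.test(comment);
}

function containsDash(comment) {
  // Strip URLs first so hyphens inside links do not count as format breaks.
  return /[—–-]/.test(comment.replace(/https?:\/\/\S+/g, ""));
}
```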
97 | 98 | --- 99 | 100 | ## 📊 Workflow Evaluations 101 | 102 | The workflow includes **automatic evaluation nodes** to track: 103 | 104 | | Metric | Description | 105 | |--------|--------------| 106 | | `contains link` | Checks if comment includes a URL | 107 | | `contains dash` | Detects format breaks | 108 | | `Tools Used` | Logs which AI tools were used in reasoning | 109 | | `executionTime` | Monitors average latency | 110 | 111 | ![Workflow Evaluations](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-evaluations.png) 112 | 113 | --- 114 | 115 | ## 💡 Why This Workflow Has Value 116 | 117 | | Value | Explanation | 118 | |--------|--------------| 119 | | **Saves time** | Automates Reddit marketing without manual engagement. | 120 | | **Feels human** | AI comments use a fast-typing, casual tone (e.g., “u,” “ur,” “idk”). | 121 | | **Follows rules** | Respects subreddits where promo is banned. | 122 | | **Data-driven** | Logs performance across 10 test cases for validation. | 123 | | **Monetizable** | Can promote Gumroad, YouTube, or SaaS products safely. | 124 | 125 | --- 126 | 127 | ## ⚙️ Example Use Case 128 | 129 | > “I used this automation to pull $1.4k by replying to Reddit posts about blog automation. 130 | > Each comment felt natural and directed users to my n8n workflow.” 131 | 132 | ![Reddit Success Example](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-automation-reddit-comment.png) -------------------------------------------------------------------------------- /AI-Powered Blog Automation for WordPress/README.md: -------------------------------------------------------------------------------- 1 | # n8n Workflow: AI-Powered Blog Automation for WordPress 2 | 3 | This workflow automatically generates and publishes 10 blog posts per day to a WordPress site. 
It collects tech-related news articles, filters and analyzes them for relevance, expands them with research, generates SEO-optimized long-form articles using AI, creates a matching image using Leonardo AI, and publishes them via the WordPress REST API. Every step is tracked and stored in MongoDB for reference and performance tracking. 4 | 5 | ![Content Farming v2 from news to wordpress](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v2-workflow.png) 6 | 7 | This workflow can also be found on the [main n8n.io website](https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/) 8 | 9 | You can see the demo results for the AI based articles here: [Emp0 Articles](https://articles.emp0.com/author/anya/) 10 | 11 | ![Blog that uses content farm](https://articles.emp0.com/wp-content/uploads/2025/06/content-farm.png) 12 | 13 | ## How it works 14 | 15 | 1. A scheduler runs daily to fetch the latest news from RSS feeds including BBC, TechCrunch, Wired, MIT Tech Review, HackerNoon, and others. 16 | 2. The RSS data is normalized and filtered to include only articles published within the past 24 hours. 17 | 3. Each article is passed through an OpenAI-powered classifier to check for relevance to predefined user topics like AI, robotics, or tech policy. 18 | 4. Relevant articles are then aggregated, researched, and summarized with supporting sources and citations. 19 | 5. An AI agent generates five long-tail SEO blog title ideas, ranks them by uniqueness and performance score, and selects the top one. 20 | 6. A blog outline is created including H1 and H2 headers, keyword targeting, content structure, and featured snippet optimization. 21 | 7. A full-length article (1000 to 1500 words) is generated based on the outline, with analogies, citations, examples, and keyword density maintained. 22 | 8. SEO metadata is produced including meta title, description, image alt text, slug, and a readability audit. 23 | 9. 
An AI-generated image is created based on the blog theme using Leonardo AI, enhanced for emotional storytelling and visual consistency. 24 | 10. The blog article, metadata, and image are uploaded to WordPress as a draft, the image is attached, Yoast SEO metadata is set, and the article is published. 25 | 26 | All outputs including article versions, metadata, generation steps, and final blog URLs are stored in MongoDB to allow for future analytics and feedback. 27 | 28 | ## Requirements 29 | 30 | To run this project, you need accounts and API access for the following: 31 | 32 | | Tool | Purpose | Notes | 33 | |--------------|------------------------------------------------------------------|-----------------------------------------------------------------------| 34 | | OpenAI | Used for blog classification, generation, summarization, SEO | Around $0.20 per day, using GPT-4o-mini. Estimated monthly: $6 | 35 | | MongoDB | Stores data flexibly including drafts, titles, metadata, logs | Free tier on MongoDB Atlas offers 512 MB, enough for 64,000 articles | 36 | | Leonardo AI | Generates featured images for blog articles | $9 for 3500 credits, $5 monthly top-up needed for 300 images | 37 | | WordPress | Final publishing platform via REST API | Hosted on Hostinger for $15/year including domain | 38 | 39 | ## Setup Instructions 40 | 41 | 1. Import the provided JSON file into your n8n instance. 42 | 2. Configure these credentials in n8n: 43 | - OpenAI API key 44 | - MongoDB Atlas connection string 45 | - HTTP Header Auth for Leonardo AI 46 | - WordPress REST API credentials 47 | 3. Modify the classifier and prompt nodes to reflect your preferred content themes. 48 | 4. Adjust scheduler nodes if you want to change post frequency or publishing times. 49 | 5. Run the n8n instance continuously using Docker, PM2, or hosted automation platform. 
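The dollar figures in the requirements table follow from simple per-day math; a quick sanity check of the OpenAI and Leonardo numbers (the 15-credits-per-image figure is taken from this README's cost table):

```javascript
// Sanity-check the cost estimates from the requirements table.
const openAiPerDay = 0.20;                // GPT-4o-mini, ~10 posts/day
const openAiPerMonth = openAiPerDay * 30; // ≈ $6/month, as stated

const imagesPerMonth = 10 * 30;           // 10 images/day
const creditsPerImage = 15;
const creditsNeeded = imagesPerMonth * creditsPerImage; // 4500 credits
const baseCredits = 3500;                 // included in the $9 Leonardo plan
const topUpNeeded = creditsNeeded > baseCredits;        // true → $5 monthly top-up
```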
50 | 51 | ## Cost Estimate 52 | 53 | | Component | Daily Usage | Monthly Cost Estimate | 54 | |---------------|------------------------------|------------------------| 55 | | OpenAI | 10 posts per day | ~$6 | 56 | | Leonardo AI | 10 images per day (15 credits each) | ~$14 (9 base + 5 top-up) | 57 | | MongoDB | Free up to 512 MB | $0 | 58 | | WordPress | Hosting and domain | ~$1.25 | 59 | | **Total** | | **~$21/month** | 60 | 61 | ## Observations and Learnings 62 | 63 | This system can scale daily article publishing with zero manual effort. However, current limitations include inconsistent blog length and occasional coherence issues. To address this, I plan to build a feedback loop within the workflow: 64 | 65 | - An SEO Commentator Agent will assess keyword strength, structure, and discoverability. 66 | - An Editor-in-Chief Agent will review tone, clarity, and narrative structure. 67 | - Both agents will loop back suggestions to the content generator, improving each draft until it meets human-level standards. 68 | 69 | The final goal is to consistently produce high-quality, readable, SEO-optimized content that is indistinguishable from human writing. 
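The planned review loop can be sketched as a simple bounded control structure. This is an illustration of the loop logic only: the two scoring functions below are stand-ins for the SEO Commentator and Editor-in-Chief agents, which would be LLM calls in the real workflow.

```python
MAX_ROUNDS = 3
PASS_SCORE = 0.8

def seo_review(draft: str) -> float:
    # Placeholder: a real implementation would ask an LLM to grade keyword
    # strength, structure, and discoverability on a 0.0 - 1.0 scale.
    return min(1.0, len(draft) / 1000)

def editorial_review(draft: str) -> float:
    # Placeholder for tone / clarity / narrative-structure grading.
    return 0.9 if draft.strip() else 0.0

def refine(draft: str, notes: str) -> str:
    # Placeholder for the content-generator revision call.
    return draft + "\n<!-- revised: " + notes + " -->"

def feedback_loop(draft: str) -> str:
    for _ in range(MAX_ROUNDS):
        scores = {"seo": seo_review(draft), "editorial": editorial_review(draft)}
        if min(scores.values()) >= PASS_SCORE:
            break  # both reviewers are satisfied
        weakest = min(scores, key=scores.get)
        draft = refine(draft, f"improve {weakest}")
    return draft
```

The key design choice is that the loop is capped, so a draft that never satisfies both reviewers still ships after a fixed number of revision rounds instead of burning tokens forever.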
70 | 71 | ## 📎 Repo & Credits 72 | 73 | - Also check out our Discord bot trigger: [n8n_discord_trigger_bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 74 | - Creator: [Jay (Emp₀)](https://twitter.com/jharilela) 75 | - Automation tool: [n8n](https://n8n.partnerlinks.io/emp0) -------------------------------------------------------------------------------- /AI Email Classifier/README.md: -------------------------------------------------------------------------------- 1 | # AI Email Classifier 📬 2 | Automate Email Classification, Prioritization, and Spam Detection Across Multiple Accounts 3 | 4 | ![ai-email-classifier-workflow](https://articles.emp0.com/wp-content/uploads/2025/07/workflow-ai-email-classifier.png) 5 | 6 | This workflow can also be found on the [main n8n.io website](https://n8n.io/workflows/5789-multi-account-email-classifier-with-ai-gmail-discord-and-google-sheets/) 7 | 8 | Created by: Jayant Kumar ([@jharilela](https://github.com/Jharilela)) 9 | 🛠 Powered by: Gmail, Google Sheets, OpenAI, Discord, and n8n 10 | 11 | --- 12 | 13 | ## Sample Discord labelling as Spam 14 | 15 | ![ai-email-classifier-spam](https://articles.emp0.com/wp-content/uploads/2025/07/ai-email-classifier-spam.png) 16 | 17 | ## Sample Discord labelling as Legit 18 | 19 | ![ai-email-classifier-legit](https://articles.emp0.com/wp-content/uploads/2025/07/ai-email-classifier-legit.png) 20 | 21 | --- 22 | 23 | ## Why I Built This 24 | 25 | Focus is expensive. I manage multiple email inboxes every day—personal, business, partnerships, invoices. Logging into each, skimming through noise, flagging important stuff, and deleting spam started eating up hours of my week. I needed a system that helped me **focus** only on what matters without building an entire helpdesk dashboard. 26 | 27 | I already live in Discord. It made sense to push my emails there—but in a fun, digestible, and actionable way.
I built **AI Email Classifier 📬** to summarize emails, detect spam, assign priority, and make everything skimmable with pictures and links. 28 | 29 | And it works across **multiple Gmail accounts**. 30 | 31 | --- 32 | 33 | ## Key Features 34 | 35 | - ✅ Works with **multiple Gmail inboxes** 36 | - 🧠 Uses AI to **classify spam vs legit** 37 | - 🎯 Assigns **priority levels**: High / Medium / Low 38 | - 🗂 Appends everything to a **central Google Sheet** 39 | - 📸 Sends **visual summaries to Discord** (with image + action links) 40 | - 🛠 Powered by open-source: [n8n_discord_trigger_bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 41 | 42 | --- 43 | 44 | ## How It Works 45 | 46 | Here’s the high-level flow: 47 | 48 | 1. A new email in any inbox triggers the workflow to start. 49 | 2. The AI Agent reads the raw content, subject, sender, and Gmail labels. 50 | 3. It calls a **Google Sheet** that acts as our feedback memory: 51 | - Emails and domains manually marked as **spam** or **legit**. 52 | 4. AI classifies the incoming email using logic: 53 | - **Spam** if sender or domain is blacklisted, or content matches patterns like: 54 | _"promotions, phishing, ads, mass emails, cold offers"_ 55 | - **Priority** is assigned by: 56 | - **High**: deadlines, legal, payments, clients, CEO emails 57 | - **Medium**: team updates, meetings, project notifications 58 | - **Low**: newsletters, FYIs, casual threads 59 | 5. It produces a compact JSON output with: 60 | - Sender, recipient, subject, summary, priority, priority color, image URL, action URL 61 | 6. 
The message is formatted visually and posted back to Discord as an embed with: 62 | - Summary text 63 | - Actionable links 64 | - Priority color code 65 | - Thumbnail (if any) 66 | 67 | --- 68 | 69 | ## Google Sheet Training Table 70 | 71 | The system uses this sheet as live memory to label spam and legit senders: 72 | 73 | ``` 74 | ╔════════════════════╦══════════════╦═════════════════╦══════════════╦════════════════╗ 75 | ║ Email ║ Domain ║ Classification ║ Labelled By ║ Labelled Date ║ 76 | ╠════════════════════╬══════════════╬═════════════════╬══════════════╬════════════════╣ 77 | ║ offers@badsite.com ║ badsite.com ║ Spam ║ Jayant ║ 08/07/2025 ║ 78 | ║ ceo@trusted.com ║ trusted.com ║ Legit ║ Jayant ║ 08/07/2025 ║ 79 | ╚════════════════════╩══════════════╩═════════════════╩══════════════╩════════════════╝ 80 | 81 | ``` 82 | 83 | 84 | This allows **manual control** to teach the AI which senders to trust or ignore. Every time I see something marked wrong, I just reply in Discord with `"spam"` or `"legit"` on that message thread. That triggers an update to the Sheet via AI parsing and n8n. 85 | 86 | --- 87 | 88 | ## Why Manual Input Still Matters 89 | 90 | AI isn’t perfect. 91 | Some spam emails are cleverly disguised. And some senders are contextually important only to *you*. 92 | 93 | That’s why I kept a simple feedback loop: 94 | - You tell the bot `"spam"` or `"legit"` on any Discord email message, or anything along those lines. 95 | - The AI agent detects the intent and updates the Sheet. 96 | - The AI improves its judgment next time as it now **remembers** your preference. 97 | 98 | --- 99 | 100 | ## Why Discord? 101 | 102 | Because Slack charges per seat and email feels lonely. 103 | I run most of my operations inside Discord: community chats, client rooms, bot alerts. 104 | Instead of making a full email UI, I turned each email into a **Discord card with a thumbnail, summary, and quick actions**. 105 | It’s fun. It’s visual. It doesn’t feel like work.
106 | Email becomes more like a game feed. 107 | 108 | --- 109 | 110 | ## Tech Stack 111 | 112 | - Gmail → Discord via Gmail trigger node 113 | - Discord → n8n Webhook via [`n8n_discord_trigger_bot`](https://github.com/Jharilela/n8n_discord_trigger_bot) 114 | - OpenAI GPT-4o (classification + summarization) 115 | - Google Sheets (feedback memory) 116 | - Discord Node (embed output with JSON + images) 117 | 118 | --- 119 | 120 | ## Try It Yourself 121 | 122 | Clone the workflow JSON, set up your Gmail integrations, and install the [n8n Discord Trigger Bot](https://github.com/Jharilela/n8n_discord_trigger_bot). 123 | I built this workaround because I couldn't find a Discord trigger on n8n. 124 | 125 | Now I just scroll my Discord DMs, know what to reply to, and ignore everything else. Don't let email spam your brain. Let your AI do the thinking. 126 | 127 | --- 128 | 129 | ## 📎 Repo & Credits 130 | 131 | - Discord bot trigger: [n8n_discord_trigger_bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 132 | - Creator: [Jay (Emp₀)](https://twitter.com/jharilela) 133 | - Automation tool: [n8n](https://n8n.partnerlinks.io/emp0) 134 | -------------------------------------------------------------------------------- /MCP AI Assistant/README.md: -------------------------------------------------------------------------------- 1 | # 🤖 MCP Personal Assistant Workflow Description 2 | 3 | This workflow integrates multiple productivity tools into a single AI-powered assistant using n8n, acting as a centralized control hub to receive and execute tasks across Google Calendar, Gmail, Google Drive, LinkedIn, Twitter, and more.
4 | 5 | ![ai-mcp-personal-assistant](https://articles.emp0.com/wp-content/uploads/2025/07/AI-MCP-personal-assistant.png) 6 | 7 | This workflow is also published on the [main n8n.io website](https://n8n.io/workflows/5850/) 8 | 9 | --- 10 | 11 | ## ✅ Key Capabilities 12 | 13 | - **AI Agent + Tool Use**: Built using n8n's AI Agent and MCP system, enabling intelligent multi-step reasoning. 14 | - **Tool Integration**: 15 | - Google Calendar: schedule, update, delete events 16 | - Gmail: search, draft, send emails 17 | - Google Drive: manage files and folders 18 | - LinkedIn & Twitter: post updates, send DMs 19 | - Utility tools: fetch date/time, search URLs 20 | - **Discord Input**: Accepts prompts via `n8n_discord_trigger_bot` [repo link](https://github.com/Jharilela/n8n_discord_trigger_bot) 21 | 22 | --- 23 | 24 | ## 🛠 Setup Instructions 25 | 26 | 1. **Timezone Configuration**: 27 | - Go to `Settings > Default Timezone` in n8n. 28 | - Set to your local timezone (e.g., `Asia/Jakarta`). 29 | - Ensure all `Date & Time` nodes explicitly use the same zone to avoid UTC-related bugs. 30 | 31 | 2. **Tool Authentication**: 32 | - Replace all OAuth credentials for: 33 | - Gmail 34 | - Google Drive 35 | - Google Calendar 36 | - Twitter 37 | - LinkedIn 38 | - Use your own accounts when copying this workflow. 39 | 40 | 3. **Platform Adaptability**: 41 | - While designed for Discord, you can replace the Discord trigger with any other chat or webhook service. 42 | - Example: Telegram, Slack, WhatsApp Webhook, n8n Form Trigger, etc. 43 | 44 | --- 45 | 46 | ## 📦 Strengths 47 | 48 | - Great for **document retrieval**, **email summarization**, **calendar scheduling**, and **social posting**. 49 | - Reduces the need for tab-switching across multiple platforms. 
50 | - Tested with a comprehensive checklist across categories like: 51 | - Calendar 52 | - Gmail 53 | - Google Drive 54 | - Twitter 55 | - LinkedIn 56 | - Utility tools 57 | - Cross-tool actions 58 | (Refer to [prompt checklist](https://github.com/Jharilela/n8n-workflows/blob/main/MCP%20AI%20Assistant/prompt%20checklist.md) for prompt coverage.) 59 | 60 | --- 61 | 62 | ## ⚠️ Limitations 63 | 64 | - ❌ **Binary Uploads**: 65 | - AI agents & MCP server currently struggle with binary payloads. 66 | - Uploading files to Gmail, Google Drive, or LinkedIn may fail due to format serialization issues. 67 | - Binary operations (upload/post) are **under development** and will be fixed in future iterations. 68 | 69 | - ❌ **Date Bugs**: 70 | - If timezone settings are incorrect, event times may default to UTC, leading to misaligned calendar events. 71 | 72 | --- 73 | 74 | ## 🔬 Testing 75 | 76 | Use the provided prompt checklist for full coverage of: 77 | - ✅ Core feature flows 78 | - ✅ Edge cases (e.g., invalid dates, nonexistent users) 79 | - ✅ Cross-tool chains (e.g., Google Drive → Gmail → LinkedIn) 80 | 81 | --- 82 | 83 | ## ✅ MCP Assistant Test Prompt Checklist 84 | 85 | ### 📅 Google Calendar 86 | - [X] "Schedule a meeting with Alice tomorrow at 10am and send an invite to alice@wonderland.com" 87 | - [X] "Create an event called 'Project Sync' on Friday at 3pm with Bob and Charlie." 88 | - [X] "Update the time of my call with James to next Monday at 2pm." 89 | - [X] "Delete my meeting with Marketing next Wednesday." 90 | - [x] "What is my schedule tomorrow?" 91 | 92 | ### 📧 Gmail 93 | - [x] "Show me unread emails from this week."
94 | - [x] "Search for emails with subject: invoice" 95 | - [X] "Reply to the latest email from john@company.com saying 'Thanks, noted!'" 96 | - [X] "Draft an email to info@a16z.com with subject 'Emp0 Fundraising' and draft the body of the email with an investment opportunity in Emp0, scrape this site https://Emp0.com to get to know more about emp0.com" 97 | - [X] "Send an email to hi@cursor.com with subject 'Feature request' and cc sales@cursor.com" 98 | - [ ] "Send an email to recruiting@openai.com, write about how you like their product and want to apply for a job there and attach my latest CV from Google Drive" 99 | 100 | ### 🗂 Google Drive 101 | - [ ] "Upload the PDF you just sent me to my Google Drive." 102 | - [X] "Create a folder called 'July Reports' inside Emp0 shared drive." 103 | - [X] "Move the file named 'Q2_Review.pdf' to 'Reports/2024/Q2'." 104 | - [X] "Share the folder 'Investor Decks' with info@a16z.com as viewer." 105 | - [ ] "Download the file 'Wayne_Li_CV.pdf' and attach it in Discord." 106 | - [X] "Search for a file named 'Invoice May' in my Google Drive." 107 | 108 | ### 🖼 LinkedIn 109 | - [X] "Think of a random and inspiring quote. Post a text update on LinkedIn with the quote and end with a question so people will answer and increase engagement" 110 | - [ ] "Post this Google Drive image to LinkedIn with the caption: 'Team offsite snapshots!'" 111 | - [X] "Summarize the contents of this workflow and post it on LinkedIn with the original url https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/" 112 | 113 | ### 🐦 Twitter 114 | - [X] "Tweet: 'AI is eating operations. Fast.'" 115 | - [X] "Send a DM to @founderguy: 'Would love to connect on what you’re building.'" 116 | - [X] "Search Twitter for keyword: 'founder advice'" 117 | 118 | ### 🌐 Utilities 119 | - [X] "What time is it now?"
120 | - [ ] "Download this PDF: https://ontheline.trincoll.edu/images/bookdown/sample-local-pdf.pdf" 121 | - [X] "Search this URL and summarize important tech updates today: https://techcrunch.com/feed/" 122 | 123 | ### 📎 Discord Attachments 124 | - [ ] "Take the image I just uploaded and post it to LinkedIn." 125 | - [ ] "Get the file from my last message and upload it to Google Drive." 126 | 127 | ### 🧪 Edge Cases 128 | - [X] "Schedule a meeting on Feb 30." 129 | - [X] "Send a DM to @user_that_does_not_exist" 130 | - [ ] "Download a 50MB PDF and post it to LinkedIn" 131 | - [X] "Get the latest tweet from my timeline and email it to myself." 132 | 133 | ### 🔗 Cross-tool Flows 134 | - [ ] "Get the latest image from my Google Drive and post it on LinkedIn with the caption 'Another milestone hit!'" 135 | - [ ] "Find the latest PDF report in Google Drive and email it to investor@vc.com." 136 | - [ ] "Download an image from this link and upload it to my Google Drive: https://example.com/image.png" 137 | - [ ] "Get the most recent attachment from my inbox and upload it to Google Drive." 138 | 139 | --- 140 | Run each of these in isolated test cases. For cross-tool flows, verify binary serialization integrity. 141 | 142 | 143 | ## 🧠 Why Use This Workflow? 144 | 145 | This is an always-on personal assistant that can: 146 | - Process natural language input 147 | - Handle multi-step logic 148 | - Execute commands across 6+ platforms 149 | - Be extended with more tools and memory 150 | 151 | If you want to interact with all your work tools from a single prompt—this is your base to start. 
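One practical guard against the UTC date bug listed under Limitations: always hand the calendar tool an ISO-8601 timestamp that carries an explicit offset, never a naive one. A small illustrative Python helper (the default zone name is just an example):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def event_time(y: int, m: int, d: int, hh: int, mm: int,
               tz: str = "Asia/Jakarta") -> str:
    """Return an ISO-8601 timestamp with an explicit UTC offset.

    Naive timestamps are what cause events to silently default to UTC.
    """
    return datetime(y, m, d, hh, mm, tzinfo=ZoneInfo(tz)).isoformat()
```

Feeding strings like `2025-07-01T10:00:00+07:00` into the Google Calendar tool removes any ambiguity about which zone the event was meant for.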
152 | 153 | --- 154 | 155 | ## 📎 Repo & Credits 156 | 157 | - Discord bot trigger: [n8n_discord_trigger_bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 158 | - Creator: [Jay (Emp₀)](https://twitter.com/jharilela) 159 | - Automation tool: [n8n](https://n8n.partnerlinks.io/emp0) -------------------------------------------------------------------------------- /Content Generator V3/README.md: -------------------------------------------------------------------------------- 1 | # AI-Powered Blog Automation Workflow for WordPress and Twitter - v3 Upgrade 2 | 3 | ## Introduction 4 | 5 | [![Watch the video](https://img.youtube.com/vi/lg80KikzaLg/maxresdefault.jpg)](https://www.youtube.com/watch?v=lg80KikzaLg) 6 | 7 | This is a fully automated, AI-powered blog content generation and distribution workflow built for the n8n automation platform. It's designed for solo founders, creators, indie hackers, marketers, SEO consultants, and lean startup teams who run multiple projects and want to consistently publish high-quality, search-optimized articles without relying on a human marketing team. 8 | 9 | ![Content farming v3](https://articles.emp0.com/wp-content/uploads/2025/07/gumroad.png) 10 | 11 | Running a blog is essential to drive traffic, build authority, and rank on Google, but it’s time-consuming and expensive to manage. This workflow turns your blog into a living, breathing content engine that generates, enhances, and publishes articles every day using a team of autonomous AI agents. If you want to boost your site’s visibility and promote it across multiple channels like WordPress, Twitter, and Dev.to, this workflow is for you.
12 | 13 | --- 14 | 15 | ## Important links 16 | 17 | - ✨ This workflow is published on the [n8n official website](https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/) 18 | - 📚 Check out the articles [generated with this workflow](https://articles.emp0.com/tag/v3/) 19 | - 🛒 [Buy Now for only $29](https://0emp0.gumroad.com/l/content-farming-v3) and get instant traffic to your blog / WordPress site 20 | 21 | --- 22 | 23 | ## Before You Start 24 | 25 | To get the most out of this workflow, prepare the following: 26 | 27 | - **Define your content pillars**: e.g., AI, business, automation, developer tools, healthtech, etc. 28 | - **List your RSS or news sources**: TechCrunch, Wired, CNN Tech, The Verge, etc. 29 | - **Know your audience**: Who are you trying to attract? Developers? Founders? Investors? Enterprises? 30 | 31 | --- 32 | 33 | ## How It Works 34 | 35 | - Step 1: Ingest Recent News via RSS + Vectorization 36 | - Step 2: Topic Generation with Semantic Clustering 37 | - Step 3: Research + Data Enrichment 38 | - Step 4: Content Generation (3 Agent Loop) 39 | - Step 5: Blog Title Optimization 40 | - Step 6: Metadata Generation 41 | - Step 7: Featured Image Generation 42 | - Step 8: WordPress Draft Creation 43 | - Step 9: Distribute to Twitter and Dev.to 44 | 45 | Read the [complete technical architecture and cost structure](https://github.com/Jharilela/n8n-workflows/blob/main/Content%20Generator%20V3/Technical%20Setup.md) 46 | 47 | --- 48 | 49 | ## What makes this flow unique? 50 | 51 | This isn’t just automation. It’s AI workflow orchestration.
You get a full-scale, modular content team simulated by multi-agent communication using the following architecture: 52 | 53 | ![3 AI Agents talking to each other](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-3-ai-agents-talking-to-each-other.png) 54 | 55 | ### 3-Agent System 56 | 57 | - **Agent 1: Task Manager** 58 | - Defines objectives 59 | - Assigns sections and goals 60 | - **Agent 2: Content Generator** 61 | - Equipped with MCP plugins: 62 | - Web Search 63 | - Image Generator 64 | - Graph/Chart Generator 65 | - Section Optimizer 66 | - **Agent 3: Quality Control** 67 | - Checks SEO metrics, flow, clarity, link strategy 68 | - Gives detailed recommendations back to Agent 1 69 | 70 | Each agent talks via shared JSON memory and only modifies content blocks relevant to its scope, reducing token usage and preserving structure. This modularity mimics how a marketing team operates. The agents run in a looped chain-of-thought format using `n8n` loops and memory, iterating until the article crosses an 80% quality threshold or hits the max loop limit.
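The shared-memory contract between the agents can be sketched as follows. Field names and numbers here are illustrative rather than the workflow's actual schema; the point is that Quality Control flags individual section blocks, so the Content Generator regenerates only what falls below the threshold.

```python
# Illustrative shared JSON memory: content is stored per section, and each
# agent only touches the blocks in its scope.
article = {
    "sections": [
        {"id": "intro", "html": "<p>...</p>", "score": 0.55},
        {"id": "body-1", "html": "<p>...</p>", "score": 0.90},
    ],
    "loop": 0,
}

QUALITY_THRESHOLD = 0.80
MAX_LOOPS = 5

def sections_needing_work(state: dict) -> list[str]:
    """Quality Control: flag only the blocks below threshold, so the
    Content Generator regenerates those sections and nothing else."""
    return [s["id"] for s in state["sections"] if s["score"] < QUALITY_THRESHOLD]

def loop_done(state: dict) -> bool:
    # Stop when every section passes or the loop cap is reached.
    return not sections_needing_work(state) or state["loop"] >= MAX_LOOPS
```

Because only flagged blocks are resent to the model, token spend per loop stays proportional to the weak sections rather than the whole article.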
71 | 72 | ### MCP tools for visuals 73 | 74 | The article is generated in a loop involving: 75 | - Internet search to generate outbound links (via MCP tool) 76 | 77 | ![Outbound links](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-outbound-links.png) 78 | 79 | - Image generation (Leonardo or DALL·E via MCP) 80 | 81 | ![Image generation](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-image-generation.png) 82 | 83 | - Chart/graph generation (via QuickChart) 84 | 85 | ![Chart generation](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-graph-generation.png) 86 | 87 | - Table generation (via AI agent and HTML blocks) 88 | 89 | ![Table generation](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-table-generation.png) 90 | 91 | ## Revenue Potential & Monetization Use Cases 92 | 93 | This workflow can generate revenue if deployed strategically. 94 | 95 | ### 💸 Monetization Strategies 96 | 97 | - **Affiliate Blog Engine** 98 | Populate your WordPress site with long-form articles optimized for affiliate products, SaaS tools, or service reviews. Insert calls to action and outbound links to affiliate platforms like Amazon, Gumroad, or Impact. 99 | 100 | - **Lead Magnet Factory** 101 | Target SEO keywords for high-intent leads in your niche (e.g., “best CRM for freelancers”). Capture emails via lead forms in every article and nurture via outbound marketing. Drive subscribers into your Substack or ConvertKit funnel—without writing anything manually. 102 | 103 | - **Agency/Client Work** 104 | Use the workflow to generate blog content for your SEO clients or sell content automation as a service. Deliver consistent value at scale while reducing human cost.
105 | 106 | 107 | ### 📈 Traffic Results (v2 Example) 108 | 109 | - Daily traffic increase: ~10–30 visitors/day 110 | - Monthly cost: ~$21 for OpenAI tokens 111 | - Content output: 10 SEO-rich posts/day 112 | 113 | ![google analytics v2 traffic](https://articles.emp0.com/wp-content/uploads/2025/08/content-generator-v3-gumroad-banner-1.png) 114 | 115 | With the v3 upgrade, including charts, images, and outbound links, expect **higher engagement, lower bounce rate**, and better SERP visibility. 116 | 117 | --- 118 | 119 | ## Need Help or Customization? 120 | 121 | Need to customize this workflow for your niche? Or need some help setting it up? Reach out to us and we can help. 122 | 123 | - ⭐ Read [Customer Reviews](https://0emp0.gumroad.com/l/content-farming-v3) to see how users scaled their blog traffic using this workflow 124 | - ✍️ Email us: [tools@emp0.com](mailto:tools@emp0.com) 125 | - 💬 Chat with us: [Discord @jym.god](https://discord.com/users/jym.god) 126 | 127 | --- 128 | 129 | ## Join the community 130 | 131 | - [Other free n8n workflows](https://n8n.io/creators/jay-emp0/) 132 | - [Github repository](https://github.com/Jharilela) 133 | - [Discord Community](https://discord.gg/vuDYfNwf) 134 | - [Official website](https://emp0.com) 135 | 136 | --- 137 | 138 | ## Changelog: From v2 to v3 139 | 140 | I previously posted a free version of this content generator.
You can find the [FREE v2 on the n8n official website](https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/) 141 | 142 | Here is the visual difference in content length and visual components: 143 | 144 | ![v2 vs v3 article length](https://articles.emp0.com/wp-content/uploads/2025/07/content-farming-v3-article-length.png) 145 | 146 | ### Problems with v2 147 | 148 | - RSS links broke when source URLs changed → HTTP 404 149 | - Target length of 1500 words was never met → output capped around 600–800 words 150 | - Content lacked media → zero images/tables/graphs, making articles dull to read 151 | - Low SEO score due to minimal outbound links and broken links 152 | - No content repurposing → missed visibility on Dev.to and Twitter for backlinking and gaining traffic 153 | - Single-topic limitation → no support for multi-pillar blogs 154 | 155 | ### v3 Upgrades 156 | 157 | 1. **Vector DB Integration** 158 | Store full text and embeddings. Enables filtering across multiple content pillars and prevents broken links, making the workflow customizable per topic. 159 | 160 | ![storing news in vector database](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-news-vector-embeddings.png) 161 | 162 | 2. **3-Agent AI Architecture** 163 | Task Manager → Content Writer → QC Loop 164 | - AI agents loop until threshold met 165 | - Dynamic sections 166 | - Visual generation tools built in 167 | - See architecture image 168 | 169 | ![3 AI Agents workflow](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-3-ai-agents-talking-to-each-other-workflow-black.png) 170 | 171 | 3. **Modular Content Blocks** 172 | Content is saved per section → you can modify one section without touching the rest. Saves tokens and cost. 173 | 174 | ![modular content blocks](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-content-blocks.png) 175 | 176 | 4. **MCP Tools** (included) 177 | - Generate images. 
Find the tool [here](https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/) 178 | 179 | ![mcp tool to generate images](https://articles.emp0.com/wp-content/uploads/2025/07/generate-and-upload-blog-images-with-leonardo-ai-and-wordpress.png) 180 | 181 | - Generate graphs. Find the tool [here](https://n8n.io/workflows/6361-ai-powered-chart-generation-from-web-data-with-gpt-4o-and-wordpress-upload/) 182 | 183 | ![mcp tool to generate graphs](https://articles.emp0.com/wp-content/uploads/2025/07/AI-Powered-Chart-Generation-from-Web-Data-with-GPT-4o-and-WordPress-Upload.png) 184 | 185 | - Search the web for factual support, generate tables, and improve outbound linking and reader engagement 186 | 187 | ![mcp tool to search the web](https://articles.emp0.com/wp-content/uploads/2025/07/mcp-tool-search-the-web.png) 188 | 189 | 190 | 5. **Improved Twitter Integration** 191 | - Media-rich tweets 192 | - Auto hashtag generation 193 | - 10x better impressions than plain-text tweets 194 | 195 | [Check out our Automated Twitter Account](https://x.com/Emp0_com) 196 | 197 | ![twitter comparison v2 vs v3](https://articles.emp0.com/wp-content/uploads/2025/07/content-generator-v3-twtiter-comparison.png) 198 | 199 | 6. 
**Dev.to Posting with Signature** 200 | [Example Dev.to article](https://dev.to/jay_all_day/unlocking-the-mystery-of-300-ai-chatbot-subscriptions-what-makes-them-worth-it-3lnj) 201 | 202 | ``` markdown 203 | ✍️ Written by Emp0.com 204 | 🧠 AI Workflows on GitHub: github.com/Jharilela 205 | 💬 Chat with us: Discord @jym.god 206 | ``` 207 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # n8n Automation Workflows by emp0 2 | 3 | [![n8n Creator Badge](https://img.shields.io/badge/n8n-Creator-blue?style=flat-square&logo=n8n)](https://n8n.io/creators/jay-emp0/) [![Workflows Published](https://img.shields.io/badge/Workflows-14+-brightgreen?style=flat-square)](#workflows) [![Discord Community](https://img.shields.io/badge/Discord-Community-7289da?style=flat-square&logo=discord)](https://discord.gg/qg3qVfFchV) [![Built with AI](https://img.shields.io/badge/Built%20with-AI-orange?style=flat-square)](#) 4 | 5 | > **Production-ready automation workflows** that solve real business problems using AI agents, integrations, and modern APIs. 6 | 7 | --- 8 | 9 | ## What is n8n? 10 | 11 | [n8n](https://n8n.io) is a **free, open-source** workflow automation tool that connects different apps and services together. Think of it as a visual programming language for automations - no coding required! You can build workflows that automatically handle repetitive tasks, integrate AI, manage emails, post to social media, and much more. 
12 | 13 | --- 14 | 15 | ## Workflows 16 | 17 | 
| Workflow | Description | Status & Pricing |
| --- | --- | --- |
| AI Email Classifier | Automate email classification, prioritization, and spam detection across multiple Gmail accounts with AI-powered Discord notifications. View on n8n | ✅ FREE |
| Content Generator V1 | Automated AI-powered blog creation with topic discovery, content generation, and WordPress publishing with featured images. Simple daily blog automation workflow. | ✅ FREE |
| Content Generator V2 | Scheduled AI content creation with RSS feed integration, automated WordPress publishing, and social media distribution for consistent blog output. View on n8n | ✅ FREE |
| Content Generator V3 | AI-powered blog automation that publishes hourly SEO articles to WordPress and Twitter using RSS ingestion and vector clustering. View on n8n | 💰 PAID - $29 |
| Content Generator V4 | Blog automation for WordPress using ChatGPT-5 and Gemini with advanced content generation, SEO optimization, and multi-platform distribution. View on n8n | 💰 PAID - $39 |
| Blog Image Generator | Generate and upload blog images with Leonardo AI and WordPress integration for automated visual content creation. View on n8n | ✅ FREE |
| Chart Generator | Turn any prompt into a chart and upload it to WordPress using GPT-4o and QuickChart.io for data visualization. View on n8n | ✅ FREE |
| MCP AI Assistant | Personal AI assistant with MCP protocol integration for Google Calendar, Gmail, Drive, LinkedIn, and Twitter automation. View on n8n | ✅ FREE |
| Analytics Digest | Discord daily digest for multiple Google Analytics accounts with automated reporting and insights. View on n8n | ✅ FREE |
| Replicate Flux Image Generator | Generate images via Replicate Flux models and upload to WordPress/Twitter. Built as an MCP module for on-demand image creation with cost-optimized model selection. View on n8n | ✅ FREE |
| Postgres GitHub Backup | Daily automated backup of all PostgreSQL tables to GitHub as CSV files with version control. Never lose your database snapshots again! View on n8n | ✅ FREE |
| Ebook to Audiobook Converter | Transform PDFs into professional audiobooks automatically using AI text-to-speech (MiniMax). Upload a PDF via web form and get an MP3 audiobook in Google Drive. Perfect for students, content creators, and accessibility. View on n8n | ✅ FREE |
| Memecoin Art Generator | Automatically generates viral memecoin art and posts to Twitter using Google Gemini AI and NanoBanana image generation. Schedule automated meme posts with trending topics and Gen Z style tweets. View on n8n | ✅ FREE |
| Reddit Auto-Comment Assistant | AI-driven Reddit marketing automation that monitors F5Bot alerts and generates human-like comments with optional product promotion. Rule-aware posting that respects subreddit guidelines. Perfect for organic growth on Reddit. View on n8n | ✅ FREE |
| Reddit to Twitter Automation | Repurpose trending Reddit posts into punchy first-person tweets automatically using Google Gemini AI. Runs every 2 hours with Google Sheets logging to avoid duplicate content. Automated content inspiration for Twitter. View on n8n | ✅ FREE |
| Tweet Scheduler | Automated content and promo tweet scheduler with Gemini AI and Google Sheets. Posts unique tweets every 2 hours with 70% content (10 proven templates) and 30% promotional tweets. Includes duplicate prevention and automated logging. View on n8n | ✅ FREE |
104 | 105 | --- 106 | 107 | ## See real results from our users 108 | 109 |
110 | Featured Workflow: Content Generator V3 111 | 112 | ![Traffic Growth](https://articles.emp0.com/wp-content/uploads/2025/08/content-generator-v3-gumroad-banner-1.png) 113 | 114 | **What users achieve:** 115 | - [A fully automated blog](https://emp0.com) that posts 10 times a day 116 | - **1000+ daily visitors** from automated SEO content 117 | - **300% traffic increase** in first month 118 | - **90% time saved** on content creation 119 | - **ROI positive** after 30 days 120 | 121 | **Monthly Operating Cost:** ~$150 (vs $3000+ for content team) 122 | 123 | [Read Customer Reviews](https://0emp0.gumroad.com/l/content-farming-v3) 124 | 125 |
126 | 127 | --- 128 | 129 | ## Getting Started 130 | 131 | 1. **Install n8n** - [Download here](https://n8n.io/download/) (it's free!) 132 | 2. **Browse workflows** - Visit our [official n8n creator page](https://n8n.io/creators/jay-emp0/) 133 | 3. **Import & customize** - Each workflow includes detailed setup instructions 134 | 4. **Need Discord integration?** - Install our [Discord bot](https://github.com/Jharilela/n8n_discord_trigger_bot) 135 | 136 | ### Need Help? 137 | 138 | - **Email:** [tools@emp0.com](mailto:tools@emp0.com) 139 | - **Discord:** [Join AI + Automation Discord](https://discord.gg/qg3qVfFchV) 140 | - **Website:** [emp0.com/automation-workflows](https://emp0.com/automation-workflows) 141 | 142 | ### Supporting Open Source Automation 143 | 144 | *If our free workflows saved you time or money, consider:* 145 | - ⭐ **[Starring this repository](https://github.com/Jharilela/n8n-workflows)** 146 | - **Sharing with your community** 147 | - **Join our [Discord community](https://discord.gg/qg3qVfFchV)** 148 | - **Supporting us with a [paid workflow](https://store.emp0.com)** 149 | - **Reach out to the [emp0 team](https://emp0.com)** 150 | -------------------------------------------------------------------------------- /Content Generator V1/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "content generator v1", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "promptType": "define", 7 | "text": "={{ $json.message.content }}", 8 | "hasOutputParser": true, 9 | "options": { 10 | "systemMessage": "You are a seo specialist and experienced content writer. 
The user will provide you a blog topic and you will write a 1,500 word blog post in HTML format" 11 | } 12 | }, 13 | "type": "@n8n/n8n-nodes-langchain.agent", 14 | "typeVersion": 2.2, 15 | "position": [ 16 | 416, 17 | 0 18 | ], 19 | "id": "9ee064e5-657a-4e2d-b219-81f32347df9d", 20 | "name": "AI Agent" 21 | }, 22 | { 23 | "parameters": { 24 | "rule": { 25 | "interval": [ 26 | { 27 | "field": "hours" 28 | } 29 | ] 30 | } 31 | }, 32 | "type": "n8n-nodes-base.scheduleTrigger", 33 | "typeVersion": 1.2, 34 | "position": [ 35 | -64, 36 | 0 37 | ], 38 | "id": "2d69cbcd-2801-4dd8-948d-a62af9447805", 39 | "name": "Schedule Trigger" 40 | }, 41 | { 42 | "parameters": { 43 | "model": { 44 | "__rl": true, 45 | "value": "gpt-4o-mini", 46 | "mode": "list", 47 | "cachedResultName": "gpt-4o-mini" 48 | }, 49 | "options": {} 50 | }, 51 | "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", 52 | "typeVersion": 1.2, 53 | "position": [ 54 | 416, 55 | 208 56 | ], 57 | "id": "8b3b434c-f63f-4274-8eac-b4b07332cfc7", 58 | "name": "OpenAI Chat Model", 59 | "credentials": { 60 | "openAiApi": { 61 | "id": "WPek5vNr6BEqnAUD", 62 | "name": "OpenAi account 2" 63 | } 64 | } 65 | }, 66 | { 67 | "parameters": { 68 | "schemaType": "manual", 69 | "inputSchema": "{\n\t\"type\": \"object\",\n\t\"properties\": {\n\t\t\"title\": {\n\t\t\t\"type\": \"string\"\n\t\t},\n\t\t\"content\": {\n\t\t\t\"type\": \"string\",\n \"description\": \"html content of the blog\"\n\t\t}\n\t}\n}" 70 | }, 71 | "type": "@n8n/n8n-nodes-langchain.outputParserStructured", 72 | "typeVersion": 1.3, 73 | "position": [ 74 | 560, 75 | 208 76 | ], 77 | "id": "5e96e9bf-667c-4a6f-8688-2ab10b7d3dfd", 78 | "name": "Structured Output Parser" 79 | }, 80 | { 81 | "parameters": { 82 | "title": "={{ $json.output.title }}", 83 | "additionalFields": { 84 | "content": "={{ $json.output.content }}", 85 | "categories": [ 86 | 9 87 | ], 88 | "tags": [ 89 | 13 90 | ] 91 | } 92 | }, 93 | "type": "n8n-nodes-base.wordpress", 94 | "typeVersion": 1, 95 | 
"position": [ 96 | 752, 97 | 0 98 | ], 99 | "id": "e3b1d21b-9341-48e8-a943-9c68b30d8892", 100 | "name": "Create a post", 101 | "credentials": { 102 | "wordpressApi": { 103 | "id": "o9xGKvqHekSSt3Id", 104 | "name": "Wordpress account" 105 | } 106 | } 107 | }, 108 | { 109 | "parameters": { 110 | "resource": "image", 111 | "prompt": "={{ $json.title.raw }}", 112 | "options": { 113 | "size": "1792x1024" 114 | } 115 | }, 116 | "type": "@n8n/n8n-nodes-langchain.openAi", 117 | "typeVersion": 1.8, 118 | "position": [ 119 | 896, 120 | 0 121 | ], 122 | "id": "a18c73e5-ba34-4a95-9901-c539b7950e72", 123 | "name": "Generate an image", 124 | "credentials": { 125 | "openAiApi": { 126 | "id": "WPek5vNr6BEqnAUD", 127 | "name": "OpenAi account 2" 128 | } 129 | } 130 | }, 131 | { 132 | "parameters": { 133 | "operation": "update", 134 | "postId": "={{ $('Create a post').item.json.id }}", 135 | "updateFields": { 136 | "status": "publish" 137 | } 138 | }, 139 | "type": "n8n-nodes-base.wordpress", 140 | "typeVersion": 1, 141 | "position": [ 142 | 1328, 143 | 0 144 | ], 145 | "id": "351bbf25-0dfb-49f4-b9aa-163e07c91922", 146 | "name": "Update a post", 147 | "credentials": { 148 | "wordpressApi": { 149 | "id": "ROMzk4TNprbiDIjp", 150 | "name": "wp - account@gmail.com" 151 | } 152 | } 153 | }, 154 | { 155 | "parameters": { 156 | "method": "POST", 157 | "url": "=https://articles.emp0.com/wp-json/wp/v2/posts/{{ $('Create a post').item.json.id }}", 158 | "authentication": "predefinedCredentialType", 159 | "nodeCredentialType": "wordpressApi", 160 | "sendQuery": true, 161 | "queryParameters": { 162 | "parameters": [ 163 | { 164 | "name": "featured_media", 165 | "value": "={{ $json.id }}" 166 | } 167 | ] 168 | }, 169 | "options": {} 170 | }, 171 | "id": "b6331535-8f26-4efb-a0f5-60ec97bad368", 172 | "name": "Set Image", 173 | "type": "n8n-nodes-base.httpRequest", 174 | "position": [ 175 | 1184, 176 | 0 177 | ], 178 | "typeVersion": 4.2, 179 | "credentials": { 180 | "wordpressApi": { 181 | "id": 
"o9xGKvqHekSSt3Id", 182 | "name": "Wordpress account" 183 | } 184 | } 185 | }, 186 | { 187 | "parameters": { 188 | "method": "POST", 189 | "url": "https://articles.emp0.com/wp-json/wp/v2/media", 190 | "authentication": "predefinedCredentialType", 191 | "nodeCredentialType": "wordpressApi", 192 | "sendHeaders": true, 193 | "headerParameters": { 194 | "parameters": [ 195 | { 196 | "name": "Content-Disposition", 197 | "value": "=attachment; filename=\"img-{{ $('Create a post').item.json.title.raw.replaceAll(\" \",\"-\") }}.jpg\"" 198 | } 199 | ] 200 | }, 201 | "sendBody": true, 202 | "contentType": "binaryData", 203 | "inputDataFieldName": "data", 204 | "options": {} 205 | }, 206 | "type": "n8n-nodes-base.httpRequest", 207 | "typeVersion": 4.2, 208 | "position": [ 209 | 1040, 210 | 0 211 | ], 212 | "id": "fd22cc2d-fc52-489a-8894-93b947ea9e82", 213 | "name": "upload image", 214 | "credentials": { 215 | "wordpressApi": { 216 | "id": "o9xGKvqHekSSt3Id", 217 | "name": "Wordpress account" 218 | } 219 | } 220 | }, 221 | { 222 | "parameters": { 223 | "modelId": { 224 | "__rl": true, 225 | "value": "gpt-4o-mini-search-preview", 226 | "mode": "list", 227 | "cachedResultName": "GPT-4O-MINI-SEARCH-PREVIEW" 228 | }, 229 | "messages": { 230 | "values": [ 231 | { 232 | "content": "find an interesting topic regarding latest news today. 
Simple one line output" 233 | } 234 | ] 235 | }, 236 | "options": {} 237 | }, 238 | "type": "@n8n/n8n-nodes-langchain.openAi", 239 | "typeVersion": 1.8, 240 | "position": [ 241 | 112, 242 | 0 243 | ], 244 | "id": "beacb31b-34c0-4842-abc7-516df8a3ed57", 245 | "name": "Message a model", 246 | "credentials": { 247 | "openAiApi": { 248 | "id": "WPek5vNr6BEqnAUD", 249 | "name": "OpenAi account 2" 250 | } 251 | } 252 | }, 253 | { 254 | "parameters": { 255 | "content": "## Draft an article\n### Create a title and content for the new article", 256 | "height": 512, 257 | "width": 592, 258 | "color": 3 259 | }, 260 | "type": "n8n-nodes-base.stickyNote", 261 | "position": [ 262 | 96, 263 | -144 264 | ], 265 | "typeVersion": 1, 266 | "id": "0673cfe2-ac26-43d8-b268-bb402d9b2472", 267 | "name": "Sticky Note" 268 | }, 269 | { 270 | "parameters": { 271 | "content": "## Publish to wordpress\n### Create a draft, upload image and publish the article", 272 | "height": 512, 273 | "width": 752, 274 | "color": 5 275 | }, 276 | "type": "n8n-nodes-base.stickyNote", 277 | "position": [ 278 | 704, 279 | -144 280 | ], 281 | "typeVersion": 1, 282 | "id": "ad2f4e23-c807-42be-bd24-3b872dbbf11d", 283 | "name": "Sticky Note1" 284 | } 285 | ], 286 | "pinData": { 287 | "Schedule Trigger": [ 288 | { 289 | "json": { 290 | "timestamp": "2025-08-14T05:30:00.507-04:00", 291 | "Readable date": "August 14th 2025, 5:30:00 am", 292 | "Readable time": "5:30:00 am", 293 | "Day of week": "Thursday", 294 | "Year": "2025", 295 | "Month": "August", 296 | "Day of month": "14", 297 | "Hour": "05", 298 | "Minute": "30", 299 | "Second": "00", 300 | "Timezone": "America/New_York (UTC-04:00)" 301 | } 302 | } 303 | ] 304 | }, 305 | "connections": { 306 | "Schedule Trigger": { 307 | "main": [ 308 | [ 309 | { 310 | "node": "Message a model", 311 | "type": "main", 312 | "index": 0 313 | } 314 | ] 315 | ] 316 | }, 317 | "OpenAI Chat Model": { 318 | "ai_languageModel": [ 319 | [ 320 | { 321 | "node": "AI Agent", 322 | "type": 
"ai_languageModel", 323 | "index": 0 324 | } 325 | ] 326 | ] 327 | }, 328 | "Structured Output Parser": { 329 | "ai_outputParser": [ 330 | [ 331 | { 332 | "node": "AI Agent", 333 | "type": "ai_outputParser", 334 | "index": 0 335 | } 336 | ] 337 | ] 338 | }, 339 | "AI Agent": { 340 | "main": [ 341 | [ 342 | { 343 | "node": "Create a post", 344 | "type": "main", 345 | "index": 0 346 | } 347 | ] 348 | ] 349 | }, 350 | "Create a post": { 351 | "main": [ 352 | [ 353 | { 354 | "node": "Generate an image", 355 | "type": "main", 356 | "index": 0 357 | } 358 | ] 359 | ] 360 | }, 361 | "Generate an image": { 362 | "main": [ 363 | [ 364 | { 365 | "node": "upload image", 366 | "type": "main", 367 | "index": 0 368 | } 369 | ] 370 | ] 371 | }, 372 | "Set Image": { 373 | "main": [ 374 | [ 375 | { 376 | "node": "Update a post", 377 | "type": "main", 378 | "index": 0 379 | } 380 | ] 381 | ] 382 | }, 383 | "upload image": { 384 | "main": [ 385 | [ 386 | { 387 | "node": "Set Image", 388 | "type": "main", 389 | "index": 0 390 | } 391 | ] 392 | ] 393 | }, 394 | "Message a model": { 395 | "main": [ 396 | [ 397 | { 398 | "node": "AI Agent", 399 | "type": "main", 400 | "index": 0 401 | } 402 | ] 403 | ] 404 | } 405 | }, 406 | "active": false, 407 | "settings": { 408 | "executionOrder": "v1" 409 | }, 410 | "versionId": "6a6176fb-43b0-4417-80f0-a0238da3c18d", 411 | "meta": { 412 | "templateCredsSetupCompleted": true, 413 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 414 | }, 415 | "id": "Anq2a50Rcp2Gr1xZ", 416 | "tags": [] 417 | } -------------------------------------------------------------------------------- /Product Hunt Lead Generator/README.md: -------------------------------------------------------------------------------- 1 | # Product Hunt Scraper - Automated Lead Generation Workflow 2 | 3 | ![Product Hunt Scraper Workflow](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-n8n-workflow.png) 4 | 5 | ## Turn Product Hunt 
Launches Into Qualified Leads - Automatically 6 | 7 | **Stop manually searching Product Hunt for potential customers.** This n8n workflow automatically scrapes the latest products from Product Hunt every day, enriches them with contact details, and delivers organized lead lists straight to your Google Sheets. 8 | 9 | ### What You Get 10 | 11 | This powerful automation workflow delivers: 12 | 13 | - **Daily Lead Generation**: Automatically scrapes top 50 Product Hunt products every day at 9 AM 14 | - **Weekly Top Performers**: Every Monday, captures the top 100 products from the previous week (configurable) 15 | - **Rich Contact Data**: Extracts emails, Twitter handles, LinkedIn profiles, Discord usernames, phone numbers, and more 16 | - **Organized Google Sheets**: All leads exported to a structured spreadsheet with product details, maker info, and contact data 17 | - **Automated Outreach**: Optional email system to reach out to makers with personalized messages 18 | - **Fresh Leads Daily**: Never miss a new product launch or potential customer 19 | 20 | ![Google Sheets Output](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-gsheet-output.png) 21 | 22 | --- 23 | 24 | ## Why This Workflow Is a Game-Changer 25 | 26 | ### For Sales & Business Development Teams 27 | 28 | - **Build a targeted prospect list** of innovative companies launching new products 29 | - **Reach decision-makers directly** with founder emails and social profiles 30 | - **Beat your competition** to newly launched products looking for partnerships or tools 31 | - **Scale your outreach** without hiring additional SDRs 32 | 33 | ### For SaaS Founders & Indie Hackers 34 | 35 | - **Find integration partners** by identifying products that complement yours 36 | - **Discover competitors** and track their launches in real-time 37 | - **Connect with fellow makers** for collaboration opportunities 38 | - **Source beta testers** from engaged Product Hunt communities 39 | 40 | ### For 
Marketing & PR Agencies 41 | 42 | - **Identify potential clients** who just launched and need marketing support 43 | - **Build media lists** of active founders for outreach campaigns 44 | - **Track industry trends** and emerging products in your niche 45 | - **Automate lead qualification** by capturing product categories and descriptions 46 | 47 | ### For Investors & VCs 48 | 49 | - **Monitor emerging startups** launching on Product Hunt 50 | - **Track portfolio company launches** and engagement 51 | - **Discover investment opportunities** in specific categories 52 | - **Build deal flow** with minimal manual research 53 | 54 | --- 55 | 56 | ## Key Features 57 | 58 | ### Intelligent Data Extraction 59 | 60 | ![Product Hunt Scraper Results](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-product-hunt-results.png) 61 | 62 | The workflow captures comprehensive product data: 63 | 64 | - Product name, description, and tagline 65 | - Product categories and launch date 66 | - Upvotes and engagement metrics 67 | - Maker information with social profiles 68 | - Website URLs and demo links 69 | - Banner images and screenshots 70 | 71 | ### Multi-Channel Contact Discovery 72 | 73 | ![Contact Scraper Results](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-contact-scraper-results.png) 74 | 75 | Advanced contact enrichment pulls: 76 | 77 | - **Primary email addresses** (founder emails, sales, feedback, hello) 78 | - **Secondary email addresses** for CC'ing multiple contacts 79 | - **Social media profiles**: Twitter, LinkedIn, Discord, Facebook, Instagram 80 | - **Communication channels**: YouTube, TikTok, Telegram, WhatsApp, Reddit 81 | - **Phone numbers** (when publicly available) 82 | - **Company domains** for additional research 83 | 84 | ### Automated Daily Operations 85 | 86 | - **Set-and-forget scheduling**: Runs automatically every morning at 9 AM 87 | - **Smart deduplication**: Removes duplicate entries to keep your data 
clean 88 | - **Error handling**: Continues processing even if individual products fail 89 | - **Webhook-driven architecture**: Efficiently processes results as they're ready 90 | - **Batch processing**: Handles large datasets without overwhelming your system 91 | - **Gmail integration**: Professional email sending via Gmail API for better deliverability 92 | 93 | ### Automated Outreach System 94 | 95 | ![Email Outreach Template](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-email-blast.png) 96 | 97 | The workflow includes a sophisticated Gmail-powered email system that: 98 | 99 | - **Sends personalized outreach emails** to product makers with their product details 100 | - **CCs secondary email addresses** for better reach 101 | - **Professional HTML email templates** that are customizable to your brand 102 | - **Better deliverability** through Gmail API (not SMTP) 103 | - **Tracks delivery and engagement** through Gmail 104 | - **Free to use** - 500 emails/day (personal Gmail) or 2,000/day (Workspace) 105 | 106 | **Email Template Features:** 107 | - Dynamic product name and details 108 | - Reference to Product Hunt launch 109 | - Personalized value proposition 110 | - Clear call-to-action 111 | - Unsubscribe link for compliance 112 | - Your branding and contact info 113 | 114 | --- 115 | 116 | ## What's Included 117 | 118 | When you purchase this workflow, you get: 119 | 120 | - **Complete n8n workflow JSON file** ready to import 121 | - **Detailed technical setup guide** with step-by-step instructions 122 | - **Pre-configured Apify actors** for Product Hunt scraping and contact enrichment 123 | - **Email templates** for outreach campaigns 124 | 125 | --- 126 | 127 | ## Real Results 128 | 129 | ### Cost-Effective Lead Generation 130 | 131 | ![Apify Cost Breakdown](https://articles.emp0.com/wp-content/uploads/2025/12/product-hunt-scraper-apify-cost.png) 132 | 133 | Run this workflow for approximately **$65 to $95/month** and generate: 134 
| 135 | - **1,900+ leads per month** (50 products/day × 30 days + weekly top 100) 136 | - **Cost per lead: ~$0.03-0.05** - far cheaper than any lead database 137 | - **Fresh, verified contacts** that aren't available in purchased lists 138 | - **Zero manual work** after initial setup 139 | 140 | **Sign up for Apify**: Use our [affiliate link](https://www.apify.com/?fpr=99h7ds) or referral code **99h7ds** 141 | 142 | --- 143 | 144 | ## Technical Highlights 145 | 146 | - Built on **n8n** - the powerful open-source automation platform 147 | - Uses **Apify** actors for reliable scraping at scale 148 | - **Google Sheets integration** for easy data access and sharing 149 | - **Gmail API integration** for professional email outreach 150 | - **Webhook-based architecture** for real-time processing 151 | - **Smart batching** to stay within API rate limits 152 | - **Modular design** - easily customize which data points to collect 153 | - **Error recovery** built into every step 154 | - **Active testing data** pinned for easier debugging 155 | 156 | --- 157 | 158 | ## Requirements 159 | 160 | To run this workflow, you'll need: 161 | 162 | - [n8n instance](https://n8n.partnerlinks.io/emp0) (cloud or self-hosted on [Railway](https://railway.com/deploy/Hx5aTY?referralCode=jay) or [Hostinger](https://www.hostinger.com/vps/n8n-hosting?REFERRALCODE=jayemp0)) 163 | - [Apify account](https://www.apify.com/?fpr=99h7ds) (use referral code: **99h7ds** - free tier available, paid recommended) 164 | - Google account (for Sheets, Drive and Email integration) 165 | 166 | Detailed setup instructions are provided in the [Technical Setup guide](Technical%20Setup.md). 
167 | 168 | --- 169 | 170 | ## Comparison: DIY vs This Workflow 171 | 172 | | Task | Manual Process | With This Workflow | 173 | |------|---------------|-------------------| 174 | | Find daily products | 30 mins/day checking Product Hunt | Automatic | 175 | | Extract contact info | 5 mins per product × 50 = 4+ hours | Automatic | 176 | | Organize in spreadsheet | 30 mins/day | Automatic | 177 | | Send outreach emails | 3 mins per email × 50 = 2.5 hours | Automatic | 178 | | **Total time saved** | **7+ hours/day** | **0 hours** | 179 | | **Monthly cost** | Your time + assistant salary | **~$65-95/month** | 180 | 181 | --- 182 | 183 | ## Customization Options 184 | 185 | This workflow is fully customizable: 186 | 187 | - **Adjust scraping schedule**: Change from daily to twice-daily or weekly 188 | - **Filter by category**: Only scrape products in specific categories (AI, Developer Tools, etc.) 189 | - **Modify top N products**: Increase/decrease the number of products scraped 190 | - **Custom email templates**: Edit the outreach message to match your brand 191 | - **Add email verification**: Integrate ZeroBounce or similar services 192 | - **Connect to your CRM**: Pipe leads directly to Salesforce, HubSpot, or Pipedrive 193 | - **Slack/Discord notifications**: Get alerted when new leads are found 194 | 195 | --- 196 | 197 | ## Support & Updates 198 | 199 | - **Lifetime updates**: Get all future improvements and bug fixes 200 | - **Email support**: Questions? We're here to help with setup, [Email Us](mailto:jay@emp0.com) 201 | - **Community access**: Join our [Skool community](https://www.skool.com/aia-ai-automation-2762) with other users sharing tips and customizations 202 | - **Documentation**: Comprehensive guides covering every feature 203 | 204 | --- 205 | 206 | ## Get Started Today 207 | 208 | **Stop losing leads to competitors.** Start automatically capturing fresh Product Hunt leads every single day. 
209 | 210 | → **[Purchase Workflow](https://0emp0.gumroad.com/l/product-hunt-lead-generator)** - One-time payment, lifetime access 211 | 212 | → **[View Technical Setup Guide](Technical%20Setup.md)** - See what's involved 213 | 214 | → **[Join Community](https://www.skool.com/aia-ai-automation-2762)** - Connect with other users 215 | 216 | --- 217 | 218 | ## Frequently Asked Questions 219 | 220 | **Q: Do I need coding skills?** 221 | A: No! The workflow is pre-built and ready to import. Basic n8n familiarity is helpful but not required. 222 | 223 | **Q: What are the ongoing costs?** 224 | A: Approximately $65-95/month total ($40-60 for [Apify](https://www.apify.com/?fpr=99h7ds) scraping services + $20 for [n8n cloud](https://n8n.partnerlinks.io/emp0), or $0 if you self-host on [Railway](https://railway.com/deploy/Hx5aTY?referralCode=jay) or [Hostinger](https://www.hostinger.com/vps/n8n-hosting?REFERRALCODE=jayemp0)). 225 | 226 | **Q: Is this legal?** 227 | A: Yes - all data is publicly available on Product Hunt and product websites. Follow ethical outreach practices. 228 | 229 | **Q: Can I scrape more/fewer products?** 230 | A: Absolutely! The workflow is fully customizable. Adjust the "topNProducts" parameter in the config. 231 | 232 | **Q: What if emails bounce or accounts get banned?** 233 | A: The guide includes best practices for email deliverability and warnings about sending volume limits. 234 | 235 | **Q: Can I use this for specific product categories?** 236 | A: Yes! You can filter by category in the Apify scraper configuration. 237 | 238 | **Q: Does this work with Gmail?** 239 | A: Yes! The workflow now uses the Gmail API by default for better deliverability and tracking. You can customize it for other providers by using the SMTP node. 240 | 241 | **Q: What's your refund policy?** 242 | A: No refunds. 243 | 244 | --- 245 | 246 | ## About emp0 247 | 248 | This workflow is created by the team at **emp0** - specialists in n8n automation and AI-powered workflows. 
We build production-ready automations used by hundreds of businesses worldwide. 249 | 250 | - Explore Premium workflows: [store.emp0.com](https://store.emp0.com) 251 | - Explore Free workflows: [emp0.com/automation-workflows](https://emp0.com) 252 | - Follow us on Twitter: [@emp0_com](https://twitter.com/emp0_com) 253 | 254 | --- 255 | 256 | **Ready to automate your lead generation?** Get the Product Hunt Scraper workflow today and start building your pipeline on autopilot. 257 | -------------------------------------------------------------------------------- /Generate Images with Replicate and Flux/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "nodes": [ 3 | { 4 | "parameters": { 5 | "method": "POST", 6 | "url": "https://articles.emp0.com/wp-json/wp/v2/media", 7 | "authentication": "predefinedCredentialType", 8 | "nodeCredentialType": "wordpressApi", 9 | "sendHeaders": true, 10 | "headerParameters": { 11 | "parameters": [ 12 | { 13 | "name": "Content-Disposition", 14 | "value": "=attachment; filename=\"img-{{ $('Code1').item.json.slug }}.jpg\"" 15 | } 16 | ] 17 | }, 18 | "sendBody": true, 19 | "contentType": "binaryData", 20 | "inputDataFieldName": "data", 21 | "options": {} 22 | }, 23 | "id": "a11410a3-6444-40f1-a564-7ee5707377b2", 24 | "name": "Upload image2", 25 | "type": "n8n-nodes-base.httpRequest", 26 | "position": [ 27 | 640, 28 | -16 29 | ], 30 | "typeVersion": 4.2, 31 | "retryOnFail": true, 32 | "waitBetweenTries": 5000, 33 | "credentials": { 34 | "wordpressApi": { 35 | "id": "G1G8jDdEoWAVytQb", 36 | "name": "Wordpress - anya@emp0.com" 37 | } 38 | } 39 | }, 40 | { 41 | "parameters": { 42 | "method": "POST", 43 | "url": "= https://api.replicate.com/v1/models/{{ $json.model }}/predictions", 44 | "authentication": "genericCredentialType", 45 | "genericAuthType": "httpHeaderAuth", 46 | "sendHeaders": true, 47 | "headerParameters": { 48 | "parameters": [ 49 | { 50 | "name": "content-type", 51 | "value": 
"application/json" 52 | }, 53 | { 54 | "name": "Prefer", 55 | "value": "wait" 56 | } 57 | ] 58 | }, 59 | "sendBody": true, 60 | "specifyBody": "json", 61 | "jsonBody": "={{ $json.model_config }}", 62 | "options": {} 63 | }, 64 | "type": "n8n-nodes-base.httpRequest", 65 | "typeVersion": 4.2, 66 | "position": [ 67 | 240, 68 | -16 69 | ], 70 | "id": "d1d5d3e3-72aa-49a3-9411-a8a2bcfdfbb9", 71 | "name": "HTTP Request1", 72 | "credentials": { 73 | "httpHeaderAuth": { 74 | "id": "rv80WUvVdf5qIJTg", 75 | "name": "Header Auth - replicate" 76 | }, 77 | "httpBearerAuth": { 78 | "id": "JvhQAEa4Frw0CDoq", 79 | "name": "Leonardo AI" 80 | } 81 | } 82 | }, 83 | { 84 | "parameters": { 85 | "url": "={{ $json.output || $json.output[0] }}", 86 | "authentication": "genericCredentialType", 87 | "genericAuthType": "httpHeaderAuth", 88 | "sendHeaders": true, 89 | "headerParameters": { 90 | "parameters": [ 91 | { 92 | "name": "accept", 93 | "value": "application/json" 94 | } 95 | ] 96 | }, 97 | "options": {} 98 | }, 99 | "type": "n8n-nodes-base.httpRequest", 100 | "typeVersion": 4.2, 101 | "position": [ 102 | 432, 103 | -16 104 | ], 105 | "id": "41c272ef-26de-4481-8490-4bd37d7e52aa", 106 | "name": "HTTP Request2", 107 | "credentials": { 108 | "httpHeaderAuth": { 109 | "id": "rv80WUvVdf5qIJTg", 110 | "name": "Header Auth - replicate" 111 | }, 112 | "httpBearerAuth": { 113 | "id": "JvhQAEa4Frw0CDoq", 114 | "name": "Leonardo AI" 115 | } 116 | } 117 | }, 118 | { 119 | "parameters": { 120 | "content": "## Generate image with replicate", 121 | "height": 272, 122 | "width": 384, 123 | "color": 5 124 | }, 125 | "type": "n8n-nodes-base.stickyNote", 126 | "position": [ 127 | 192, 128 | -112 129 | ], 130 | "typeVersion": 1, 131 | "id": "0ceb522f-0c64-41f1-a207-9188bc70af6e", 132 | "name": "Sticky Note" 133 | }, 134 | { 135 | "parameters": { 136 | "content": "## Upload", 137 | "height": 272, 138 | "width": 342, 139 | "color": 6 140 | }, 141 | "type": "n8n-nodes-base.stickyNote", 142 | "position": [ 
143 | 592, 144 | -112 145 | ], 146 | "typeVersion": 1, 147 | "id": "1289653b-9165-470b-b717-1a0649f5dbbd", 148 | "name": "Sticky Note1" 149 | }, 150 | { 151 | "parameters": { 152 | "content": "## Image generated\n![batman-typing-on-a-laptop](https://articles.emp0.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp)", 153 | "height": 496, 154 | "width": 768, 155 | "color": 7 156 | }, 157 | "type": "n8n-nodes-base.stickyNote", 158 | "position": [ 159 | 192, 160 | 176 161 | ], 162 | "typeVersion": 1, 163 | "id": "26418c06-d385-4e56-a989-0ea71cf2ebd4", 164 | "name": "Sticky Note2" 165 | }, 166 | { 167 | "parameters": {}, 168 | "type": "n8n-nodes-base.manualTrigger", 169 | "typeVersion": 1, 170 | "position": [ 171 | -144, 172 | 160 173 | ], 174 | "id": "ccd10411-364b-4ea6-867a-d50063bfea83", 175 | "name": "When clicking ‘Execute workflow’" 176 | }, 177 | { 178 | "parameters": { 179 | "jsCode": "return {\n \"public_image_url\" :$input.first().json.data[0].guid.raw,\n \"wordpress\":$input.first().json.data[0],\n \"twitter\":$input.first().json.data[1]\n}" 180 | }, 181 | "type": "n8n-nodes-base.code", 182 | "typeVersion": 2, 183 | "position": [ 184 | 1296, 185 | -32 186 | ], 187 | "id": "8057ae5d-d37b-47cd-829a-02e650b4e836", 188 | "name": "Code" 189 | }, 190 | { 191 | "parameters": { 192 | "workflowInputs": { 193 | "values": [ 194 | { 195 | "name": "prompt" 196 | }, 197 | { 198 | "name": "slug" 199 | }, 200 | { 201 | "name": "model" 202 | } 203 | ] 204 | } 205 | }, 206 | "type": "n8n-nodes-base.executeWorkflowTrigger", 207 | "typeVersion": 1.1, 208 | "position": [ 209 | -144, 210 | -16 211 | ], 212 | "id": "adfdcc3f-d411-47a7-83cd-7c3fa8e224a1", 213 | "name": "When Executed by Another Workflow" 214 | }, 215 | { 216 | "parameters": { 217 | "jsCode": "const input = $input.first().json;\n\nconst models = {\n \"black-forest-labs/flux-dev\": {\n \"input\":{\n prompt: input.prompt,\n go_fast: true,\n guidance: 3.5,\n megapixels: \"1\",\n num_outputs: 1,\n 
aspect_ratio: \"16:9\",\n output_format: \"webp\",\n output_quality: 80,\n prompt_strength: 0.8,\n num_inference_steps: 28,\n }\n },\n \"black-forest-labs/flux-schnell\" : {\n \"input\": {\n \"prompt\": input.prompt,\n \"go_fast\": true,\n \"megapixels\": \"1\",\n \"num_outputs\": 1,\n \"aspect_ratio\": \"16:9\",\n \"output_format\": \"webp\",\n \"output_quality\": 80,\n \"num_inference_steps\": 4\n }\n },\n \"black-forest-labs/flux-1.1-pro\":{\n \"input\": {\n \"prompt\": input.prompt,\n \"aspect_ratio\": \"16:9\",\n \"output_format\": \"webp\",\n \"output_quality\": 100,\n \"safety_tolerance\": 2,\n \"prompt_upsampling\": true\n }\n }\n // You can define more models here\n};\n\nif (!models.hasOwnProperty(input.model)) {\n throw new Error(`Model \"${input.model}\" is not supported.`);\n}\n\nreturn {\n ...input,\n model: input.model,\n model_config: models[input.model],\n};\n" 218 | }, 219 | "type": "n8n-nodes-base.code", 220 | "typeVersion": 2, 221 | "position": [ 222 | 64, 223 | -16 224 | ], 225 | "id": "97437ee7-8648-4807-82ed-44391787f9ad", 226 | "name": "Code1" 227 | }, 228 | { 229 | "parameters": {}, 230 | "type": "n8n-nodes-base.merge", 231 | "typeVersion": 3.2, 232 | "position": [ 233 | 960, 234 | -32 235 | ], 236 | "id": "0a7e0c9e-e149-4654-bc63-f5534244ce90", 237 | "name": "Merge" 238 | }, 239 | { 240 | "parameters": { 241 | "method": "POST", 242 | "url": "https://upload.twitter.com/1.1/media/upload.json?media_category=TWEET_IMAGE", 243 | "authentication": "predefinedCredentialType", 244 | "nodeCredentialType": "twitterOAuth1Api", 245 | "sendBody": true, 246 | "contentType": "multipart-form-data", 247 | "bodyParameters": { 248 | "parameters": [ 249 | { 250 | "parameterType": "formBinaryData", 251 | "name": "media", 252 | "inputDataFieldName": "data" 253 | } 254 | ] 255 | }, 256 | "options": { 257 | "response": { 258 | "response": { 259 | "responseFormat": "json" 260 | } 261 | } 262 | } 263 | }, 264 | "id": "706c943b-9c62-4042-baaf-a08d5aeaa5ab", 265 | 
"name": "Upload Media (X)", 266 | "type": "n8n-nodes-base.httpRequest", 267 | "position": [ 268 | 800, 269 | -16 270 | ], 271 | "typeVersion": 4.2, 272 | "credentials": { 273 | "twitterOAuth1Api": { 274 | "id": "HywNT47jE0Dh8NvQ", 275 | "name": "X OAuth account" 276 | } 277 | } 278 | }, 279 | { 280 | "parameters": { 281 | "aggregate": "aggregateAllItemData", 282 | "options": {} 283 | }, 284 | "type": "n8n-nodes-base.aggregate", 285 | "typeVersion": 1, 286 | "position": [ 287 | 1120, 288 | -32 289 | ], 290 | "id": "8f0a060f-731d-4970-a2ec-0b48af4e2527", 291 | "name": "Aggregate" 292 | }, 293 | { 294 | "parameters": { 295 | "content": "## Models & Pricing / img\n black-forest-labs/flux-schnell -> $0.003\n black-forest-labs/flux-dev -> $0.025\n black-forest-labs/flux-1.1-pro -> $0.04", 296 | "height": 272, 297 | "width": 342, 298 | "color": 6 299 | }, 300 | "type": "n8n-nodes-base.stickyNote", 301 | "position": [ 302 | -176, 303 | 400 304 | ], 305 | "typeVersion": 1, 306 | "id": "5ec62e6f-991f-4c59-b3b2-84cd6b8c168e", 307 | "name": "Sticky Note3" 308 | } 309 | ], 310 | "connections": { 311 | "Upload image2": { 312 | "main": [ 313 | [ 314 | { 315 | "node": "Merge", 316 | "type": "main", 317 | "index": 0 318 | } 319 | ] 320 | ] 321 | }, 322 | "HTTP Request1": { 323 | "main": [ 324 | [ 325 | { 326 | "node": "HTTP Request2", 327 | "type": "main", 328 | "index": 0 329 | } 330 | ] 331 | ] 332 | }, 333 | "HTTP Request2": { 334 | "main": [ 335 | [ 336 | { 337 | "node": "Upload image2", 338 | "type": "main", 339 | "index": 0 340 | }, 341 | { 342 | "node": "Upload Media (X)", 343 | "type": "main", 344 | "index": 0 345 | } 346 | ] 347 | ] 348 | }, 349 | "When clicking ‘Execute workflow’": { 350 | "main": [ 351 | [ 352 | { 353 | "node": "Code1", 354 | "type": "main", 355 | "index": 0 356 | } 357 | ] 358 | ] 359 | }, 360 | "When Executed by Another Workflow": { 361 | "main": [ 362 | [ 363 | { 364 | "node": "Code1", 365 | "type": "main", 366 | "index": 0 367 | } 368 | ] 369 | ] 370 
| }, 371 | "Code1": { 372 | "main": [ 373 | [ 374 | { 375 | "node": "HTTP Request1", 376 | "type": "main", 377 | "index": 0 378 | } 379 | ] 380 | ] 381 | }, 382 | "Merge": { 383 | "main": [ 384 | [ 385 | { 386 | "node": "Aggregate", 387 | "type": "main", 388 | "index": 0 389 | } 390 | ] 391 | ] 392 | }, 393 | "Upload Media (X)": { 394 | "main": [ 395 | [ 396 | { 397 | "node": "Merge", 398 | "type": "main", 399 | "index": 1 400 | } 401 | ] 402 | ] 403 | }, 404 | "Aggregate": { 405 | "main": [ 406 | [ 407 | { 408 | "node": "Code", 409 | "type": "main", 410 | "index": 0 411 | } 412 | ] 413 | ] 414 | } 415 | }, 416 | "pinData": { 417 | "When clicking ‘Execute workflow’": [ 418 | { 419 | "prompt": "joker watching a batman movie on his laptop", 420 | "slug": "joker-watching-batman", 421 | "model": "black-forest-labs/flux-1.1-pro" 422 | } 423 | ] 424 | }, 425 | "meta": { 426 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 427 | } 428 | } -------------------------------------------------------------------------------- /Generate and Upload Blog Images with Leonardo AI and WordPress/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "nodes": [ 3 | { 4 | "parameters": { 5 | "method": "POST", 6 | "url": "https://your.wordpress.com/wp-json/wp/v2/media", 7 | "authentication": "predefinedCredentialType", 8 | "nodeCredentialType": "wordpressApi", 9 | "sendHeaders": true, 10 | "headerParameters": { 11 | "parameters": [ 12 | { 13 | "name": "Content-Disposition", 14 | "value": "=attachment; filename=\"img-{{ $('Code1').item.json.slug }}.jpg\"" 15 | } 16 | ] 17 | }, 18 | "sendBody": true, 19 | "contentType": "binaryData", 20 | "inputDataFieldName": "data", 21 | "options": {} 22 | }, 23 | "id": "855d3c0d-0a4b-4a6b-a8cd-be67cff0928b", 24 | "name": "Upload image2", 25 | "type": "n8n-nodes-base.httpRequest", 26 | "position": [ 27 | 912, 28 | -16 29 | ], 30 | "typeVersion": 4.2, 31 | "retryOnFail": true, 
32 | "waitBetweenTries": 5000, 33 | "credentials": { 34 | "wordpressApi": { 35 | "id": "G1G8jDdEoWAVytQb", 36 | "name": "Wordpress - your@email.com" 37 | } 38 | } 39 | }, 40 | { 41 | "parameters": { 42 | "method": "POST", 43 | "url": "https://cloud.leonardo.ai/api/rest/v1/generations", 44 | "authentication": "genericCredentialType", 45 | "genericAuthType": "httpHeaderAuth", 46 | "sendHeaders": true, 47 | "headerParameters": { 48 | "parameters": [ 49 | { 50 | "name": "content-type", 51 | "value": "application/json" 52 | }, 53 | { 54 | "name": "accept", 55 | "value": "application/json" 56 | } 57 | ] 58 | }, 59 | "sendBody": true, 60 | "specifyBody": "json", 61 | "jsonBody": "={\n \"modelId\": \"b2614463-296c-462a-9586-aafdb8f00e36\",\n \"contrast\": 3.5,\n \"prompt\": \"{{$json.prompt }}\",\n \"num_images\": 1,\n \"width\": 1472,\n \"height\": 832,\n \"styleUUID\": \"111dc692-d470-4eec-b791-3475abac4c46\",\n \"enhancePrompt\": true\n}", 62 | "options": {} 63 | }, 64 | "type": "n8n-nodes-base.httpRequest", 65 | "typeVersion": 4.2, 66 | "position": [ 67 | 224, 68 | -16 69 | ], 70 | "id": "67d0cf44-13b2-4cee-a796-8fd021dd4a2b", 71 | "name": "HTTP Request1", 72 | "credentials": { 73 | "httpHeaderAuth": { 74 | "id": "rGBKnDk3FO1WfA4z", 75 | "name": "Header Auth - leonardo ai" 76 | }, 77 | "httpBearerAuth": { 78 | "id": "JvhQAEa4Frw0CDoq", 79 | "name": "Leonardo AI" 80 | } 81 | } 82 | }, 83 | { 84 | "parameters": { 85 | "url": "=https://cloud.leonardo.ai/api/rest/v1/generations/{{ $json.sdGenerationJob.generationId }}", 86 | "authentication": "genericCredentialType", 87 | "genericAuthType": "httpHeaderAuth", 88 | "sendHeaders": true, 89 | "headerParameters": { 90 | "parameters": [ 91 | { 92 | "name": "accept", 93 | "value": "application/json" 94 | } 95 | ] 96 | }, 97 | "options": {} 98 | }, 99 | "type": "n8n-nodes-base.httpRequest", 100 | "typeVersion": 4.2, 101 | "position": [ 102 | 560, 103 | -16 104 | ], 105 | "id": "0b478d3e-481c-4190-bfea-ea2d13d0c1fc", 106 | "name": 
"HTTP Request2", 107 | "credentials": { 108 | "httpHeaderAuth": { 109 | "id": "rGBKnDk3FO1WfA4z", 110 | "name": "Header Auth - leonardo ai" 111 | }, 112 | "httpBearerAuth": { 113 | "id": "JvhQAEa4Frw0CDoq", 114 | "name": "Leonardo AI" 115 | } 116 | } 117 | }, 118 | { 119 | "parameters": { 120 | "amount": 1, 121 | "unit": "minutes" 122 | }, 123 | "type": "n8n-nodes-base.wait", 124 | "typeVersion": 1.1, 125 | "position": [ 126 | 400, 127 | -16 128 | ], 129 | "id": "4a41c6c9-8467-4d49-a758-13cbf7b5c6ce", 130 | "name": "Wait", 131 | "webhookId": "8dc48f5a-163b-495e-a477-5b09b444c413" 132 | }, 133 | { 134 | "parameters": { 135 | "url": "={{ $json.generations_by_pk.generated_images[0].url }}", 136 | "options": {} 137 | }, 138 | "type": "n8n-nodes-base.httpRequest", 139 | "typeVersion": 4.2, 140 | "position": [ 141 | 736, 142 | -16 143 | ], 144 | "id": "225e26b5-97b7-4186-9c71-bb2d8bcaa7c6", 145 | "name": "HTTP Request3" 146 | }, 147 | { 148 | "parameters": { 149 | "content": "## Generate image with leonardo", 150 | "height": 272, 151 | "width": 672, 152 | "color": 5 153 | }, 154 | "type": "n8n-nodes-base.stickyNote", 155 | "position": [ 156 | 192, 157 | -112 158 | ], 159 | "typeVersion": 1, 160 | "id": "d8bab379-3d86-4754-9894-e04596832464", 161 | "name": "Sticky Note" 162 | }, 163 | { 164 | "parameters": { 165 | "content": "## Upload to WordPress", 166 | "height": 272, 167 | "width": 166, 168 | "color": 6 169 | }, 170 | "type": "n8n-nodes-base.stickyNote", 171 | "position": [ 172 | 880, 173 | -112 174 | ], 175 | "typeVersion": 1, 176 | "id": "ce8b507c-213c-4288-88d2-356a96ed5956", 177 | "name": "Sticky Note1" 178 | }, 179 | { 180 | "parameters": { 181 | "content": "## Image generated\n![batman-typing-on-a-laptop](https://articles.emp0.com/wp-content/uploads/2025/07/img-batman-typing-on-a-laptop.jpg)", 182 | "height": 432, 183 | "width": 672, 184 | "color": 7 185 | }, 186 | "type": "n8n-nodes-base.stickyNote", 187 | "position": [ 188 | 192, 189 | 176 190 | ], 191 | 
"typeVersion": 1, 192 | "id": "0ed127f7-26bc-4750-8ade-6bd672d95a5f", 193 | "name": "Sticky Note2" 194 | }, 195 | { 196 | "parameters": {}, 197 | "type": "n8n-nodes-base.manualTrigger", 198 | "typeVersion": 1, 199 | "position": [ 200 | -144, 201 | 160 202 | ], 203 | "id": "76eafb67-e5b6-4bc2-ac8d-e2661ef10f44", 204 | "name": "When clicking ‘Execute workflow’" 205 | }, 206 | { 207 | "parameters": { 208 | "jsCode": "return {\n \"public_image_url\" :$input.first().json.data[0].guid.raw,\n \"wordpress\":$input.first().json.data[0],\n \"twitter\":$input.first().json.data[1]\n}" 209 | }, 210 | "type": "n8n-nodes-base.code", 211 | "typeVersion": 2, 212 | "position": [ 213 | 1456, 214 | 0 215 | ], 216 | "id": "c124d2ca-f8a6-4b59-84dd-61181bcc2501", 217 | "name": "Code" 218 | }, 219 | { 220 | "parameters": { 221 | "workflowInputs": { 222 | "values": [ 223 | { 224 | "name": "prompt" 225 | }, 226 | { 227 | "name": "slug" 228 | } 229 | ] 230 | } 231 | }, 232 | "type": "n8n-nodes-base.executeWorkflowTrigger", 233 | "typeVersion": 1.1, 234 | "position": [ 235 | -144, 236 | -16 237 | ], 238 | "id": "239aa1ee-ef8d-41f6-a34c-a0d097e9d849", 239 | "name": "When Executed by Another Workflow" 240 | }, 241 | { 242 | "parameters": { 243 | "jsCode": "return $input.all();" 244 | }, 245 | "type": "n8n-nodes-base.code", 246 | "typeVersion": 2, 247 | "position": [ 248 | 64, 249 | -16 250 | ], 251 | "id": "773eab92-6f49-4b35-bde8-081c86785c9a", 252 | "name": "Code1" 253 | }, 254 | { 255 | "parameters": { 256 | "content": "## Upload to Twitter", 257 | "height": 272, 258 | "width": 166, 259 | "color": 6 260 | }, 261 | "type": "n8n-nodes-base.stickyNote", 262 | "position": [ 263 | 880, 264 | 192 265 | ], 266 | "typeVersion": 1, 267 | "id": "f5b60545-3f6e-453c-80de-a76c614a95b1", 268 | "name": "Sticky Note3" 269 | }, 270 | { 271 | "parameters": {}, 272 | "type": "n8n-nodes-base.merge", 273 | "typeVersion": 3.2, 274 | "position": [ 275 | 1120, 276 | 0 277 | ], 278 | "id": 
"1b4cac4d-17b0-44db-84aa-de970fb48ed0", 279 | "name": "Merge" 280 | }, 281 | { 282 | "parameters": { 283 | "method": "POST", 284 | "url": "https://upload.twitter.com/1.1/media/upload.json?media_category=TWEET_IMAGE", 285 | "authentication": "predefinedCredentialType", 286 | "nodeCredentialType": "twitterOAuth1Api", 287 | "sendBody": true, 288 | "contentType": "multipart-form-data", 289 | "bodyParameters": { 290 | "parameters": [ 291 | { 292 | "parameterType": "formBinaryData", 293 | "name": "media", 294 | "inputDataFieldName": "data" 295 | } 296 | ] 297 | }, 298 | "options": { 299 | "response": { 300 | "response": { 301 | "responseFormat": "json" 302 | } 303 | } 304 | } 305 | }, 306 | "id": "f78fca3e-d4b7-4836-ad6c-412d253e805b", 307 | "name": "Upload Media (X)", 308 | "type": "n8n-nodes-base.httpRequest", 309 | "position": [ 310 | 912, 311 | 288 312 | ], 313 | "typeVersion": 4.2, 314 | "credentials": { 315 | "twitterOAuth1Api": { 316 | "id": "HywNT47jE0Dh8NvQ", 317 | "name": "X OAuth account" 318 | } 319 | } 320 | }, 321 | { 322 | "parameters": { 323 | "aggregate": "aggregateAllItemData", 324 | "options": {} 325 | }, 326 | "type": "n8n-nodes-base.aggregate", 327 | "typeVersion": 1, 328 | "position": [ 329 | 1280, 330 | 0 331 | ], 332 | "id": "c3ee67e2-9b71-4552-8693-e4c7bb004fc0", 333 | "name": "Aggregate" 334 | } 335 | ], 336 | "connections": { 337 | "Upload image2": { 338 | "main": [ 339 | [ 340 | { 341 | "node": "Merge", 342 | "type": "main", 343 | "index": 0 344 | } 345 | ] 346 | ] 347 | }, 348 | "HTTP Request1": { 349 | "main": [ 350 | [ 351 | { 352 | "node": "Wait", 353 | "type": "main", 354 | "index": 0 355 | } 356 | ] 357 | ] 358 | }, 359 | "HTTP Request2": { 360 | "main": [ 361 | [ 362 | { 363 | "node": "HTTP Request3", 364 | "type": "main", 365 | "index": 0 366 | } 367 | ] 368 | ] 369 | }, 370 | "Wait": { 371 | "main": [ 372 | [ 373 | { 374 | "node": "HTTP Request2", 375 | "type": "main", 376 | "index": 0 377 | } 378 | ] 379 | ] 380 | }, 381 | "HTTP 
Request3": { 382 | "main": [ 383 | [ 384 | { 385 | "node": "Upload image2", 386 | "type": "main", 387 | "index": 0 388 | }, 389 | { 390 | "node": "Upload Media (X)", 391 | "type": "main", 392 | "index": 0 393 | } 394 | ] 395 | ] 396 | }, 397 | "When clicking ‘Execute workflow’": { 398 | "main": [ 399 | [ 400 | { 401 | "node": "Code1", 402 | "type": "main", 403 | "index": 0 404 | } 405 | ] 406 | ] 407 | }, 408 | "When Executed by Another Workflow": { 409 | "main": [ 410 | [ 411 | { 412 | "node": "Code1", 413 | "type": "main", 414 | "index": 0 415 | } 416 | ] 417 | ] 418 | }, 419 | "Code1": { 420 | "main": [ 421 | [ 422 | { 423 | "node": "HTTP Request1", 424 | "type": "main", 425 | "index": 0 426 | } 427 | ] 428 | ] 429 | }, 430 | "Merge": { 431 | "main": [ 432 | [ 433 | { 434 | "node": "Aggregate", 435 | "type": "main", 436 | "index": 0 437 | } 438 | ] 439 | ] 440 | }, 441 | "Upload Media (X)": { 442 | "main": [ 443 | [ 444 | { 445 | "node": "Merge", 446 | "type": "main", 447 | "index": 1 448 | } 449 | ] 450 | ] 451 | }, 452 | "Aggregate": { 453 | "main": [ 454 | [ 455 | { 456 | "node": "Code", 457 | "type": "main", 458 | "index": 0 459 | } 460 | ] 461 | ] 462 | } 463 | }, 464 | "pinData": { 465 | "When clicking ‘Execute workflow’": [ 466 | { 467 | "prompt": "Generate an image of batman typing on his keyboard", 468 | "slug": "batman-typing-on-keyboard" 469 | } 470 | ] 471 | }, 472 | "meta": { 473 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 474 | } 475 | } -------------------------------------------------------------------------------- /Backup Postgres Table to GitHub in CSV Format/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "backup postgres as csv to github", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "rule": { 7 | "interval": [ 8 | { 9 | "field": "hours", 10 | "hoursInterval": 24 11 | } 12 | ] 13 | } 14 | }, 15 | "id": "20477146-5542-4701-a92f-4300c95a0c5d", 16 
| "name": "Daily Schedule", 17 | "type": "n8n-nodes-base.scheduleTrigger", 18 | "position": [ 19 | -1712, 20 | -96 21 | ], 22 | "typeVersion": 1.2 23 | }, 24 | { 25 | "parameters": { 26 | "operation": "select", 27 | "schema": { 28 | "__rl": true, 29 | "mode": "list", 30 | "value": "public" 31 | }, 32 | "table": { 33 | "__rl": true, 34 | "value": "={{ $json.table_name }}", 35 | "mode": "name" 36 | }, 37 | "returnAll": true, 38 | "options": {} 39 | }, 40 | "id": "3f5ec59d-374f-421c-a333-155c06443e5d", 41 | "name": "List tables", 42 | "type": "n8n-nodes-base.postgres", 43 | "position": [ 44 | -544, 45 | 64 46 | ], 47 | "typeVersion": 2.6, 48 | "credentials": { 49 | "postgres": { 50 | "id": "0bZEnNx2jwoEznOw", 51 | "name": "pg - n8n-discord-trigger-bot" 52 | } 53 | } 54 | }, 55 | { 56 | "parameters": { 57 | "options": {} 58 | }, 59 | "type": "n8n-nodes-base.splitInBatches", 60 | "typeVersion": 3, 61 | "position": [ 62 | -912, 63 | -96 64 | ], 65 | "id": "8d8af1f2-5b28-449d-a354-a624f9e2a53c", 66 | "name": "Loop Over Items" 67 | }, 68 | { 69 | "parameters": { 70 | "jsCode": "return $input.all();" 71 | }, 72 | "type": "n8n-nodes-base.code", 73 | "typeVersion": 2, 74 | "position": [ 75 | -752, 76 | 64 77 | ], 78 | "id": "680847b4-e704-4728-8ee2-44642d931d58", 79 | "name": "Code" 80 | }, 81 | { 82 | "parameters": { 83 | "binaryPropertyName": "=data", 84 | "options": { 85 | "fileName": "={{ $('List tables1').item.json.table_name }}" 86 | } 87 | }, 88 | "type": "n8n-nodes-base.convertToFile", 89 | "typeVersion": 1.1, 90 | "position": [ 91 | -336, 92 | 64 93 | ], 94 | "id": "0ec30d45-c281-4d65-a966-29f615eade3a", 95 | "name": "Convert to File1" 96 | }, 97 | { 98 | "parameters": { 99 | "operation": "select", 100 | "schema": { 101 | "__rl": true, 102 | "value": "information_schema", 103 | "mode": "list", 104 | "cachedResultName": "information_schema" 105 | }, 106 | "table": { 107 | "__rl": true, 108 | "value": "tables", 109 | "mode": "list", 110 | "cachedResultName": "tables" 
111 | }, 112 | "where": { 113 | "values": [ 114 | { 115 | "column": "table_schema", 116 | "value": "public" 117 | } 118 | ] 119 | }, 120 | "options": {} 121 | }, 122 | "id": "09466877-0cdb-4011-809f-87c696ab3717", 123 | "name": "List tables1", 124 | "type": "n8n-nodes-base.postgres", 125 | "position": [ 126 | -1104, 127 | -96 128 | ], 129 | "typeVersion": 2.6, 130 | "credentials": { 131 | "postgres": { 132 | "id": "0bZEnNx2jwoEznOw", 133 | "name": "pg - n8n-discord-trigger-bot" 134 | } 135 | } 136 | }, 137 | { 138 | "parameters": { 139 | "authentication": "oAuth2", 140 | "resource": "file", 141 | "operation": "list", 142 | "owner": { 143 | "__rl": true, 144 | "value": "jharilela", 145 | "mode": "name" 146 | }, 147 | "repository": { 148 | "__rl": true, 149 | "value": "n8n-backup-discord-bot", 150 | "mode": "list", 151 | "cachedResultName": "n8n-backup-discord-bot", 152 | "cachedResultUrl": "https://github.com/Jharilela/n8n-backup-discord-bot" 153 | }, 154 | "filePath": "=" 155 | }, 156 | "id": "014fa66c-4acc-4698-97cc-142efaae9376", 157 | "name": "List files from repository [GITHUB]", 158 | "type": "n8n-nodes-base.github", 159 | "typeVersion": 1, 160 | "position": [ 161 | -1536, 162 | -96 163 | ], 164 | "alwaysOutputData": true, 165 | "webhookId": "f7310d6a-1573-4848-9757-f9a75e359e73", 166 | "credentials": { 167 | "githubOAuth2Api": { 168 | "id": "g3sESgnuArjRvV8F", 169 | "name": "GitHub account" 170 | } 171 | } 172 | }, 173 | { 174 | "parameters": { 175 | "operation": "aggregateItems", 176 | "fieldsToAggregate": { 177 | "fieldToAggregate": [ 178 | { 179 | "fieldToAggregate": "name" 180 | } 181 | ] 182 | }, 183 | "options": {} 184 | }, 185 | "id": "5942926d-b940-4ba4-a0c1-7209c04c0a3c", 186 | "name": "Combine file names [GITHUB]", 187 | "type": "n8n-nodes-base.itemLists", 188 | "typeVersion": 2.1, 189 | "position": [ 190 | -1344, 191 | -96 192 | ] 193 | }, 194 | { 195 | "parameters": { 196 | "content": "## Get list of current tables\nReturn a list of existing files 
in the GitHub repository. \nSome of them are tables, \nsome are README files", 197 | "height": 547, 198 | "width": 390, 199 | "color": 7 200 | }, 201 | "id": "00aa6a0e-b5c1-4093-8ee0-94b35bcdd934", 202 | "name": "Sticky Note2", 203 | "type": "n8n-nodes-base.stickyNote", 204 | "typeVersion": 1, 205 | "position": [ 206 | -1584, 207 | -256 208 | ] 209 | }, 210 | { 211 | "parameters": { 212 | "batchSize": 1, 213 | "options": {} 214 | }, 215 | "id": "863c59ef-6ce2-4c22-8c0e-00b9abcfdbd9", 216 | "name": "Split to single items", 217 | "type": "n8n-nodes-base.splitInBatches", 218 | "typeVersion": 2, 219 | "position": [ 220 | -80, 221 | -112 222 | ] 223 | }, 224 | { 225 | "parameters": { 226 | "conditions": { 227 | "string": [ 228 | { 229 | "value1": "={{ $node['Combine file names [GITHUB]'].json.name }}", 230 | "operation": "contains", 231 | "value2": "={{ $binary.data.fileName }}" 232 | } 233 | ] 234 | } 235 | }, 236 | "id": "ccee7881-77c8-419e-ad99-ac3bb113ae6b", 237 | "name": "Check if file exists in repository", 238 | "type": "n8n-nodes-base.if", 239 | "typeVersion": 1, 240 | "position": [ 241 | 160, 242 | -128 243 | ] 244 | }, 245 | { 246 | "parameters": { 247 | "authentication": "oAuth2", 248 | "resource": "file", 249 | "operation": "edit", 250 | "owner": { 251 | "__rl": true, 252 | "value": "jharilela", 253 | "mode": "name" 254 | }, 255 | "repository": { 256 | "__rl": true, 257 | "value": "n8n-backup-discord-bot", 258 | "mode": "list", 259 | "cachedResultName": "n8n-backup-discord-bot", 260 | "cachedResultUrl": "https://github.com/Jharilela/n8n-backup-discord-bot" 261 | }, 262 | "filePath": "={{ $binary.data.fileName }}", 263 | "binaryData": true, 264 | "commitMessage": "=backup-{{ $now.toMillis() }}" 265 | }, 266 | "id": "f6d53dcc-1e40-43f5-b793-4d06a55ccbed", 267 | "name": "Update file [GITHUB]", 268 | "type": "n8n-nodes-base.github", 269 | "typeVersion": 1, 270 | "position": [ 271 | 384, 272 | -128 273 | ], 274 | "webhookId": "fb5f5095-cf60-4421-8b83-91041cfc8929", 
275 | "credentials": { 276 | "githubOAuth2Api": { 277 | "id": "g3sESgnuArjRvV8F", 278 | "name": "GitHub account" 279 | } 280 | } 281 | }, 282 | { 283 | "parameters": { 284 | "authentication": "oAuth2", 285 | "resource": "file", 286 | "owner": { 287 | "__rl": true, 288 | "value": "jharilela", 289 | "mode": "name" 290 | }, 291 | "repository": { 292 | "__rl": true, 293 | "value": "n8n-backup-discord-bot", 294 | "mode": "list", 295 | "cachedResultName": "n8n-backup-discord-bot", 296 | "cachedResultUrl": "https://github.com/Jharilela/n8n-backup-discord-bot" 297 | }, 298 | "filePath": "={{ $binary.data.fileName }}", 299 | "binaryData": true, 300 | "commitMessage": "=backup-{{ $node['Set commit date'].json.commitDate }}" 301 | }, 302 | "id": "ebcd65d3-37ac-4d47-90dc-fede45adb9c0", 303 | "name": "Upload file [GITHUB]", 304 | "type": "n8n-nodes-base.github", 305 | "typeVersion": 1, 306 | "position": [ 307 | 384, 308 | 80 309 | ], 310 | "webhookId": "17b56fbd-32da-4e39-ae22-8c01a7b4bbb6", 311 | "credentials": { 312 | "githubOAuth2Api": { 313 | "id": "g3sESgnuArjRvV8F", 314 | "name": "GitHub account" 315 | } 316 | } 317 | }, 318 | { 319 | "parameters": { 320 | "content": "## Get postgres table data\nReturn a list of existing tables and data in Postgres. \nConvert them to CSV", 321 | "height": 547, 322 | "width": 998, 323 | "color": 5 324 | }, 325 | "id": "6bcbeffb-b036-45f3-88a8-dd5f1ac0fa39", 326 | "name": "Sticky Note", 327 | "type": "n8n-nodes-base.stickyNote", 328 | "typeVersion": 1, 329 | "position": [ 330 | -1168, 331 | -256 332 | ] 333 | }, 334 | { 335 | "parameters": { 336 | "content": "## Make a list of existing files\nCreate a backup if it's a new table. 
\nUpdate backup if there is new data in the table", 337 | "height": 547, 338 | "width": 694, 339 | "color": 7 340 | }, 341 | "id": "88cc96e5-fc5b-4d28-8e57-82c193efb19d", 342 | "name": "Sticky Note1", 343 | "type": "n8n-nodes-base.stickyNote", 344 | "typeVersion": 1, 345 | "position": [ 346 | -144, 347 | -256 348 | ] 349 | } 350 | ], 351 | "pinData": {}, 352 | "connections": { 353 | "Daily Schedule": { 354 | "main": [ 355 | [ 356 | { 357 | "node": "List files from repository [GITHUB]", 358 | "type": "main", 359 | "index": 0 360 | } 361 | ] 362 | ] 363 | }, 364 | "List tables": { 365 | "main": [ 366 | [ 367 | { 368 | "node": "Convert to File1", 369 | "type": "main", 370 | "index": 0 371 | } 372 | ] 373 | ] 374 | }, 375 | "Loop Over Items": { 376 | "main": [ 377 | [ 378 | { 379 | "node": "Split to single items", 380 | "type": "main", 381 | "index": 0 382 | } 383 | ], 384 | [ 385 | { 386 | "node": "Code", 387 | "type": "main", 388 | "index": 0 389 | } 390 | ] 391 | ] 392 | }, 393 | "Code": { 394 | "main": [ 395 | [ 396 | { 397 | "node": "List tables", 398 | "type": "main", 399 | "index": 0 400 | } 401 | ] 402 | ] 403 | }, 404 | "Convert to File1": { 405 | "main": [ 406 | [ 407 | { 408 | "node": "Loop Over Items", 409 | "type": "main", 410 | "index": 0 411 | } 412 | ] 413 | ] 414 | }, 415 | "List tables1": { 416 | "main": [ 417 | [ 418 | { 419 | "node": "Loop Over Items", 420 | "type": "main", 421 | "index": 0 422 | } 423 | ] 424 | ] 425 | }, 426 | "List files from repository [GITHUB]": { 427 | "main": [ 428 | [ 429 | { 430 | "node": "Combine file names [GITHUB]", 431 | "type": "main", 432 | "index": 0 433 | } 434 | ] 435 | ] 436 | }, 437 | "Combine file names [GITHUB]": { 438 | "main": [ 439 | [ 440 | { 441 | "node": "List tables1", 442 | "type": "main", 443 | "index": 0 444 | } 445 | ] 446 | ] 447 | }, 448 | "Split to single items": { 449 | "main": [ 450 | [ 451 | { 452 | "node": "Check if file exists in repository", 453 | "type": "main", 454 | "index": 0 455 | } 456 
| ] 457 | ] 458 | }, 459 | "Check if file exists in repository": { 460 | "main": [ 461 | [ 462 | { 463 | "node": "Update file [GITHUB]", 464 | "type": "main", 465 | "index": 0 466 | } 467 | ], 468 | [ 469 | { 470 | "node": "Upload file [GITHUB]", 471 | "type": "main", 472 | "index": 0 473 | } 474 | ] 475 | ] 476 | }, 477 | "Update file [GITHUB]": { 478 | "main": [ 479 | [ 480 | { 481 | "node": "Split to single items", 482 | "type": "main", 483 | "index": 0 484 | } 485 | ] 486 | ] 487 | }, 488 | "Upload file [GITHUB]": { 489 | "main": [ 490 | [ 491 | { 492 | "node": "Split to single items", 493 | "type": "main", 494 | "index": 0 495 | } 496 | ] 497 | ] 498 | } 499 | }, 500 | "active": false, 501 | "settings": { 502 | "executionOrder": "v1" 503 | }, 504 | "versionId": "dfc20d42-39a5-48ed-9237-eb68f00e4280", 505 | "meta": { 506 | "templateCredsSetupCompleted": true, 507 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 508 | }, 509 | "id": "oBJaQvcNxODDoAl1", 510 | "tags": [] 511 | } -------------------------------------------------------------------------------- /Memecoin Art Generator/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "memecoin image generator", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "content": "# Define memecoin\nPrompt\n```\nmemecoin_name : popcat\nmascot_description : cat with open mouth\nmascot_image : https://i.pinimg.com/736x/9d/05/6b/9d056b5b97c0513a4fc9d9cd93304a05.jpg\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAPI docs https://ai.google.dev/gemini-api/docs/image-generation#gemini-image-editing \nGet API key https://aistudio.google.com/api-keys\n\n\n", 7 | "height": 736, 8 | "width": 480, 9 | "color": 7 10 | }, 11 | "type": "n8n-nodes-base.stickyNote", 12 | "position": [ 13 | 2384, 14 | 304 15 | ], 16 | "typeVersion": 1, 17 | "id": "fa67edf0-7ae8-458e-9503-83def964e3f5", 18 | "name": "Sticky Note5" 19 | }, 20 | { 21 | 
"parameters": { 22 | "content": "# Get source memecoin image\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n![mascot_image](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-original.jpg)", 23 | "height": 736, 24 | "width": 512, 25 | "color": 7 26 | }, 27 | "type": "n8n-nodes-base.stickyNote", 28 | "position": [ 29 | 2880, 30 | 304 31 | ], 32 | "typeVersion": 1, 33 | "id": "8028d8fd-e5ac-4700-8ae0-31df4eeb64fd", 34 | "name": "Sticky Note4" 35 | }, 36 | { 37 | "parameters": { 38 | "content": "# *MemeCoin Art Generator 🍌*\n", 39 | "height": 80, 40 | "width": 528, 41 | "color": 5 42 | }, 43 | "type": "n8n-nodes-base.stickyNote", 44 | "typeVersion": 1, 45 | "position": [ 46 | 3312, 47 | 144 48 | ], 49 | "id": "8838a52e-a2bc-479b-a4cd-7e3951f193e2", 50 | "name": "Sticky Note12" 51 | }, 52 | { 53 | "parameters": { 54 | "content": "# Emp 0 🦑\n\n\n\n", 55 | "height": 80, 56 | "width": 726, 57 | "color": 4 58 | }, 59 | "type": "n8n-nodes-base.stickyNote", 60 | "typeVersion": 1, 61 | "position": [ 62 | 3056, 63 | 144 64 | ], 65 | "id": "451e2dfa-9cfd-47a3-8819-2d2b2367447d", 66 | "name": "Sticky Note6" 67 | }, 68 | { 69 | "parameters": { 70 | "text": "={{ $('AI Agent').item.json.output.tweet }}", 71 | "additionalFields": { 72 | "attachments": "={{ $json.media_id_string }}" 73 | } 74 | }, 75 | "type": "n8n-nodes-base.twitter", 76 | "typeVersion": 2, 77 | "position": [ 78 | 4208, 79 | 528 80 | ], 81 | "id": "0fba0726-a0f9-4cb3-a6e4-9c5fdaf30598", 82 | "name": "Create Tweet", 83 | "credentials": { 84 | "twitterOAuth2Api": { 85 | "id": "aPVLqk6ESHZrmb8X", 86 | "name": "X - @okmeow" 87 | } 88 | } 89 | }, 90 | { 91 | "parameters": { 92 | "rule": { 93 | "interval": [ 94 | { 95 | "field": "hours" 96 | } 97 | ] 98 | } 99 | }, 100 | "type": "n8n-nodes-base.scheduleTrigger", 101 | "typeVersion": 1.2, 102 | "position": [ 103 | 2176, 104 | 528 105 | ], 106 | "id": "10afe998-8120-4c6f-a563-af6e7c66818f", 107 | "name": "Schedule Trigger" 108 | }, 109 | { 110 | 
"parameters": { 111 | "promptType": "define", 112 | "text": "=You are a twitter generating agent. Your job is to 1. Generate a 100 character tweet in gen z slang indicating this memecoin {{ $('Define Memecoin').item.json.memecoin_name }} will go up. 2. Generate an image generation prompt for my mascot {{ $('Define Memecoin').item.json.mascot_description }} based on current trending topics today.", 113 | "hasOutputParser": true, 114 | "options": {} 115 | }, 116 | "type": "@n8n/n8n-nodes-langchain.agent", 117 | "typeVersion": 2.2, 118 | "position": [ 119 | 2608, 120 | 528 121 | ], 122 | "id": "7863a52e-db25-4f39-bc85-a47c2cf0a956", 123 | "name": "AI Agent" 124 | }, 125 | { 126 | "parameters": { 127 | "schemaType": "manual", 128 | "inputSchema": "{\n\t\"type\": \"object\",\n\t\"properties\": {\n\t\t\"tweet\": {\n\t\t\t\"type\": \"string\"\n\t\t},\n\t\t\"imagePrompt\": {\n\t\t\t\"type\": \"string\"\n\t\t}\n\t}\n}" 129 | }, 130 | "type": "@n8n/n8n-nodes-langchain.outputParserStructured", 131 | "typeVersion": 1.3, 132 | "position": [ 133 | 2736, 134 | 720 135 | ], 136 | "id": "0c14f5a8-038b-4b60-a1aa-263d91c5d811", 137 | "name": "Structured Output Parser" 138 | }, 139 | { 140 | "parameters": { 141 | "content": "# Generate Image using Nano Banana\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n![mascot_image](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-edit.png)", 142 | "height": 736, 143 | "width": 512, 144 | "color": 7 145 | }, 146 | "type": "n8n-nodes-base.stickyNote", 147 | "position": [ 148 | 3408, 149 | 304 150 | ], 151 | "typeVersion": 1, 152 | "id": "d57eb02a-8ac2-49b8-8b0f-a7242b488b0d", 153 | "name": "Sticky Note7" 154 | }, 155 | { 156 | "parameters": { 157 | "content": "# Upload to Twitter\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n![mascot_image](https://articles.emp0.com/wp-content/uploads/2025/10/popcat-edit-twitter-1.png)", 158 | "height": 736, 159 | "width": 464, 160 | "color": 7 161 | }, 162 | "type": "n8n-nodes-base.stickyNote", 
163 | "position": [ 164 | 3952, 165 | 304 166 | ], 167 | "typeVersion": 1, 168 | "id": "0789f15f-2df5-41bb-8d26-e9180330f156", 169 | "name": "Sticky Note8" 170 | }, 171 | { 172 | "parameters": { 173 | "options": {} 174 | }, 175 | "type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini", 176 | "typeVersion": 1, 177 | "position": [ 178 | 2480, 179 | 720 180 | ], 181 | "id": "47967fd9-8e03-443c-803d-a7a34b89bdae", 182 | "name": "Google Gemini Chat Model", 183 | "credentials": { 184 | "googlePalmApi": { 185 | "id": "Glfk0UAMF4rhdmsh", 186 | "name": "gemini" 187 | } 188 | } 189 | }, 190 | { 191 | "parameters": { 192 | "url": "={{ $('Define Memecoin').item.json.mascot_image }}", 193 | "options": {} 194 | }, 195 | "type": "n8n-nodes-base.httpRequest", 196 | "typeVersion": 4.2, 197 | "position": [ 198 | 2976, 199 | 528 200 | ], 201 | "id": "58ec511f-6831-4d77-bca9-e5ccd3460de2", 202 | "name": "Get source image" 203 | }, 204 | { 205 | "parameters": { 206 | "operation": "binaryToPropery", 207 | "destinationKey": "Image", 208 | "options": {} 209 | }, 210 | "type": "n8n-nodes-base.extractFromFile", 211 | "typeVersion": 1, 212 | "position": [ 213 | 3184, 214 | 528 215 | ], 216 | "id": "ad31059f-5f0c-4699-9598-366e6ef6a426", 217 | "name": "Convert Source image to base64" 218 | }, 219 | { 220 | "parameters": { 221 | "method": "POST", 222 | "url": "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent", 223 | "authentication": "predefinedCredentialType", 224 | "nodeCredentialType": "googlePalmApi", 225 | "sendBody": true, 226 | "specifyBody": "json", 227 | "jsonBody": "={\n \"contents\": [{\n \"parts\": [\n {\"text\": \"{{ $('AI Agent').item.json.output.imagePrompt }}\"},\n{\n \"inline_data\": {\n \"mime_type\":\"image/jpeg\",\n \"data\": \"{{ $json.Image }}\"\n }\n }\n ]\n }],\n \"generationConfig\": {\n \"imageConfig\": {\n \"aspectRatio\": \"16:9\"\n }\n }\n} ", 228 | "options": {} 229 | }, 230 | "type": "n8n-nodes-base.httpRequest", 231 | 
"typeVersion": 4.2, 232 | "position": [ 233 | 3504, 234 | 528 235 | ], 236 | "id": "2477d8d2-f35b-4863-9abb-0e11ea90e557", 237 | "name": "Generate image using NanoBanana", 238 | "credentials": { 239 | "googlePalmApi": { 240 | "id": "dfK6P7CHbH6yb6yn", 241 | "name": "Google Gemini(PaLM) Api account" 242 | } 243 | } 244 | }, 245 | { 246 | "parameters": { 247 | "operation": "toBinary", 248 | "sourceProperty": "={{ $if($json.candidates[0].content.parts[0].inlineData, 'candidates[0].content.parts[0].inlineData.data', 'candidates[0].content.parts[1].inlineData.data') }}", 249 | "options": {} 250 | }, 251 | "type": "n8n-nodes-base.convertToFile", 252 | "typeVersion": 1.1, 253 | "position": [ 254 | 3728, 255 | 528 256 | ], 257 | "id": "00be3e5d-4797-49d5-a964-4aa58b709fd6", 258 | "name": "Convert Base64 to Png" 259 | }, 260 | { 261 | "parameters": { 262 | "method": "POST", 263 | "url": "https://upload.twitter.com/1.1/media/upload.json?media_category=TWEET_IMAGE", 264 | "authentication": "predefinedCredentialType", 265 | "nodeCredentialType": "twitterOAuth1Api", 266 | "sendBody": true, 267 | "contentType": "multipart-form-data", 268 | "bodyParameters": { 269 | "parameters": [ 270 | { 271 | "parameterType": "formBinaryData", 272 | "name": "media", 273 | "inputDataFieldName": "data" 274 | } 275 | ] 276 | }, 277 | "options": { 278 | "response": { 279 | "response": { 280 | "responseFormat": "json" 281 | } 282 | } 283 | } 284 | }, 285 | "id": "38683108-7f09-48a6-952c-0d3794b6a3f6", 286 | "name": "Upload to Twitter", 287 | "type": "n8n-nodes-base.httpRequest", 288 | "position": [ 289 | 4016, 290 | 528 291 | ], 292 | "typeVersion": 4.2, 293 | "retryOnFail": true, 294 | "waitBetweenTries": 1000, 295 | "credentials": { 296 | "twitterOAuth1Api": { 297 | "id": "uf5mMVmxX0ZaypZa", 298 | "name": "X OAuth - okmeow" 299 | } 300 | } 301 | }, 302 | { 303 | "parameters": { 304 | "assignments": { 305 | "assignments": [ 306 | { 307 | "id": "0f3ffc62-d157-40cd-8bf0-d72a67798802", 308 | "name": 
"memecoin_name", 309 | "value": "popcat", 310 | "type": "string" 311 | }, 312 | { 313 | "id": "ba606dc8-5445-495c-a528-b47b438910cf", 314 | "name": "mascot_description", 315 | "value": "cat with open mouth", 316 | "type": "string" 317 | }, 318 | { 319 | "id": "9a3660e9-e98f-47bd-9e61-b465bb40e4f5", 320 | "name": "mascot_image", 321 | "value": "https://i.pinimg.com/736x/9d/05/6b/9d056b5b97c0513a4fc9d9cd93304a05.jpg", 322 | "type": "string" 323 | } 324 | ] 325 | }, 326 | "options": {} 327 | }, 328 | "type": "n8n-nodes-base.set", 329 | "typeVersion": 3.4, 330 | "position": [ 331 | 2432, 332 | 528 333 | ], 334 | "id": "59993e4f-7d1f-4d6a-a615-3f4b04466cfb", 335 | "name": "Define Memecoin" 336 | } 337 | ], 338 | "pinData": {}, 339 | "connections": { 340 | "Schedule Trigger": { 341 | "main": [ 342 | [ 343 | { 344 | "node": "Define Memecoin", 345 | "type": "main", 346 | "index": 0 347 | } 348 | ] 349 | ] 350 | }, 351 | "Structured Output Parser": { 352 | "ai_outputParser": [ 353 | [ 354 | { 355 | "node": "AI Agent", 356 | "type": "ai_outputParser", 357 | "index": 0 358 | } 359 | ] 360 | ] 361 | }, 362 | "AI Agent": { 363 | "main": [ 364 | [ 365 | { 366 | "node": "Get source image", 367 | "type": "main", 368 | "index": 0 369 | } 370 | ] 371 | ] 372 | }, 373 | "Google Gemini Chat Model": { 374 | "ai_languageModel": [ 375 | [ 376 | { 377 | "node": "AI Agent", 378 | "type": "ai_languageModel", 379 | "index": 0 380 | } 381 | ] 382 | ] 383 | }, 384 | "Get source image": { 385 | "main": [ 386 | [ 387 | { 388 | "node": "Convert Source image to base64", 389 | "type": "main", 390 | "index": 0 391 | } 392 | ] 393 | ] 394 | }, 395 | "Convert Source image to base64": { 396 | "main": [ 397 | [ 398 | { 399 | "node": "Generate image using NanoBanana", 400 | "type": "main", 401 | "index": 0 402 | } 403 | ] 404 | ] 405 | }, 406 | "Generate image using NanoBanana": { 407 | "main": [ 408 | [ 409 | { 410 | "node": "Convert Base64 to Png", 411 | "type": "main", 412 | "index": 0 413 | } 414 | ] 
415 | ] 416 | }, 417 | "Convert Base64 to Png": { 418 | "main": [ 419 | [ 420 | { 421 | "node": "Upload to Twitter", 422 | "type": "main", 423 | "index": 0 424 | } 425 | ] 426 | ] 427 | }, 428 | "Upload to Twitter": { 429 | "main": [ 430 | [ 431 | { 432 | "node": "Create Tweet", 433 | "type": "main", 434 | "index": 0 435 | } 436 | ] 437 | ] 438 | }, 439 | "Define Memecoin": { 440 | "main": [ 441 | [ 442 | { 443 | "node": "AI Agent", 444 | "type": "main", 445 | "index": 0 446 | } 447 | ] 448 | ] 449 | } 450 | }, 451 | "active": true, 452 | "settings": { 453 | "executionOrder": "v1" 454 | }, 455 | "versionId": "0e52d371-ca37-417d-b158-78fcb8e24442", 456 | "meta": { 457 | "templateCredsSetupCompleted": true, 458 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 459 | }, 460 | "id": "8EZdduCs3OULUYrh", 461 | "tags": [] 462 | } -------------------------------------------------------------------------------- /Reddit To Twitter Automation/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Twitter Automation", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "jsCode": "// Templates for normal tweets\nconst subreddits = [\n \"n8n\",\n \"microsaas\",\n \"SaaS\",\n \"automation\",\n \"n8n_ai_agents\"\n];\n\n// Track last output to avoid duplicates\nif (!global.lastTweet) {\n global.lastTweet = null;\n}\n\nfunction getRandom(arr) {\n return arr[Math.floor(Math.random() * arr.length)];\n}\n\nlet tweet;\nlet ads = false\n\ndo {\n if (Math.random() < 0.2) {\n // 20% chance → promo\n tweet = \"advertise\";\n ads = true;\n } else {\n // 80% chance → template\n tweet = getRandom(subreddits);\n }\n} while (tweet === global.lastTweet); // prevent repeats\n\n// Save for next run\nglobal.lastTweet = tweet;\n\nreturn [{ json: { tweet, ads} }];\n\n" 7 | }, 8 | "type": "n8n-nodes-base.code", 9 | "typeVersion": 2, 10 | "position": [ 11 | 128, 12 | 2272 13 | ], 14 | "id": 
"1cb94f67-d47c-4cdc-a231-d87adc9f8f42", 15 | "name": "Code1" 16 | }, 17 | { 18 | "parameters": { 19 | "promptType": "define", 20 | "text": "={{ $json.tweet }}", 21 | "hasOutputParser": true, 22 | "options": { 23 | "systemMessage": "=You are a ghostwriter who creates short, raw, non-repetitive tweets. \nYour job: generate a tweet refereingcing a post you saw on reddit\n\nYou have access to 3 tools to \n1. get trending posts from a subreddit\n2. get a list of recently posted tweets in the users account\n3. log a tweet based on a subredit post_id so that we dont mrecreate a similar tweet based on the post_id\n\nRules: \n- Get the subreddit from the user\n- Fetch trending subreddit posts from the database reddit tool.\n- Fetch the past tweets that has been posted and logged to the database\n- Choose a subreddit post_id to write about and make sure that post_id in that subreddit has never been used before to write a tweet\n- Generate a unique tweet. Write from the first person point of view. something like i discovered this cool workflow on reddit or this tool. Limit to 200 characters\n- Tweets must be punchy, edgy, and written in modern Twitter style. Have a strong opinion whether u think its cool or not. you are very edgy programmer turned enterpreneur but avoid all words of profanity, vulgar and sexual words\n- Keep it concise (2–4 lines). No hashtags. Minimal emojis (only if it fits). Ask for opinions. 
Always state what you saw\n- Use a separate line for each short phrase or thought" 24 | } 25 | }, 26 | "type": "@n8n/n8n-nodes-langchain.agent", 27 | "typeVersion": 2.2, 28 | "position": [ 29 | 528, 30 | 2272 31 | ], 32 | "id": "01a9b39a-784a-409d-b4f1-3a1a345e3dca", 33 | "name": "Tweet maker1" 34 | }, 35 | { 36 | "parameters": { 37 | "documentId": { 38 | "__rl": true, 39 | "value": "1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4", 40 | "mode": "list", 41 | "cachedResultName": "Twitter Automation", 42 | "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4/edit?usp=drivesdk" 43 | }, 44 | "sheetName": { 45 | "__rl": true, 46 | "value": "gid=0", 47 | "mode": "list", 48 | "cachedResultName": "posts", 49 | "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4/edit#gid=0" 50 | }, 51 | "filtersUI": { 52 | "values": [ 53 | { 54 | "lookupColumn": "subreddit", 55 | "lookupValue": "={{ $fromAI('subreddit', `subreddit`, 'string') }}" 56 | }, 57 | { 58 | "lookupColumn": "post_id", 59 | "lookupValue": "={{ $fromAI('id', `id of the post`, 'string') }}" 60 | } 61 | ] 62 | }, 63 | "options": {} 64 | }, 65 | "type": "n8n-nodes-base.googleSheetsTool", 66 | "typeVersion": 4.7, 67 | "position": [ 68 | 672, 69 | 2464 70 | ], 71 | "id": "183cfd15-198f-4f7d-824d-e11bdadb962c", 72 | "name": "read database2", 73 | "credentials": { 74 | "googleSheetsOAuth2Api": { 75 | "id": "dNmdYyKDCj9rTeSw", 76 | "name": "Gsheet" 77 | } 78 | } 79 | }, 80 | { 81 | "parameters": { 82 | "options": {} 83 | }, 84 | "type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini", 85 | "typeVersion": 1, 86 | "position": [ 87 | 384, 88 | 2320 89 | ], 90 | "id": "beed3c7c-7b8f-4158-a9d4-64e66288659d", 91 | "name": "Google Gemini Chat Model1", 92 | "credentials": { 93 | "googlePalmApi": { 94 | "id": "VogGayxALH0ssmBl", 95 | "name": "Gemini" 96 | } 97 | } 98 | }, 99 | { 100 | "parameters": { 101 | "rule": { 102 | "interval": [ 103 
| { 104 | "field": "hours", 105 | "hoursInterval": 2 106 | } 107 | ] 108 | } 109 | }, 110 | "type": "n8n-nodes-base.scheduleTrigger", 111 | "typeVersion": 1.2, 112 | "position": [ 113 | -96, 114 | 2272 115 | ], 116 | "id": "995f4e4e-dd8c-474d-b22d-f4bd4697d043", 117 | "name": "Schedule Trigger1" 118 | }, 119 | { 120 | "parameters": { 121 | "content": "## Randomly choose sub reddit\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFor example\n```json\n[\n \"n8n\",\n \"microsaas\",\n \"SaaS\",\n \"automation\",\n \"n8n_ai_agents\"\n]\n```", 122 | "height": 608, 123 | "width": 256 124 | }, 125 | "type": "n8n-nodes-base.stickyNote", 126 | "position": [ 127 | 80, 128 | 2160 129 | ], 130 | "typeVersion": 1, 131 | "id": "f19be085-9dce-4751-b51b-a5a05eda309c", 132 | "name": "Sticky Note" 133 | }, 134 | { 135 | "parameters": { 136 | "text": "={{ $json.tweet }}", 137 | "additionalFields": { 138 | "attachments": "={{ $json.image_id || null }}" 139 | } 140 | }, 141 | "type": "n8n-nodes-base.twitter", 142 | "typeVersion": 2, 143 | "position": [ 144 | 1312, 145 | 2272 146 | ], 147 | "id": "8afc5b1f-1367-4846-8154-7d2357b8c9d6", 148 | "name": "Creates the tweet1", 149 | "credentials": { 150 | "twitterOAuth2Api": { 151 | "id": "Ig60ncJDzlhLU0Ap", 152 | "name": "X" 153 | } 154 | } 155 | }, 156 | { 157 | "parameters": { 158 | "schemaType": "manual", 159 | "inputSchema": "{\n\t\"type\": \"object\",\n\t\"properties\": {\n\t\t\"tweet\": {\n\t\t\t\"type\": \"string\"\n\t\t},\n \"subreddit\": {\n\t\t\t\"type\": \"string\"\n\t\t},\n\t\t\"id\": {\n\t\t\t\"type\": \"string\", \n \"description\": \"id of the post on reddit\"\n\t\t}\n\t},\n \"required\": [\"tweet\"]\n}" 160 | }, 161 | "type": "@n8n/n8n-nodes-langchain.outputParserStructured", 162 | "typeVersion": 1.3, 163 | "position": [ 164 | 832, 165 | 2336 166 | ], 167 | "id": "1fbf01eb-2e78-48d1-86b0-aba52156f958", 168 | "name": "Structured Output Parser2" 169 | }, 170 | { 171 | "parameters": { 172 | "operation": "getAll", 173 | 
"subreddit": "={{$fromAI('subreddit','name of the subreddit','string')}}", 174 | "limit": 10, 175 | "filters": { 176 | "category": "rising" 177 | } 178 | }, 179 | "type": "n8n-nodes-base.redditTool", 180 | "typeVersion": 1, 181 | "position": [ 182 | 496, 183 | 2448 184 | ], 185 | "id": "9943e3e6-ad59-4e34-acb0-af2be3f0a062", 186 | "name": "Get many posts in Reddit1", 187 | "credentials": { 188 | "redditOAuth2Api": { 189 | "id": "rIyaOqpm6SelzXHv", 190 | "name": "Reddit account" 191 | } 192 | } 193 | }, 194 | { 195 | "parameters": { 196 | "operation": "append", 197 | "documentId": { 198 | "__rl": true, 199 | "value": "1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4", 200 | "mode": "list", 201 | "cachedResultName": "Twitter Automation", 202 | "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4/edit?usp=drivesdk" 203 | }, 204 | "sheetName": { 205 | "__rl": true, 206 | "value": "gid=0", 207 | "mode": "list", 208 | "cachedResultName": "posts", 209 | "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1DbPO7U68-YlOHhb8fIlGu2ImD_N-cm0E-8pwc7qnAo4/edit#gid=0" 210 | }, 211 | "columns": { 212 | "mappingMode": "defineBelow", 213 | "value": { 214 | "Date": "={{$now.format('dd/MM/yyyy')}}", 215 | "PAST TWEETS": "={{ $('Edit Fields1').item.json.tweet }}", 216 | "subreddit": "={{ $('Edit Fields1').item.json.subreddit }}", 217 | "post_id": "={{ $('Edit Fields1').item.json.post_id }}" 218 | }, 219 | "matchingColumns": [], 220 | "schema": [ 221 | { 222 | "id": "PAST TWEETS", 223 | "displayName": "PAST TWEETS", 224 | "required": false, 225 | "defaultMatch": false, 226 | "display": true, 227 | "type": "string", 228 | "canBeUsedToMatch": true 229 | }, 230 | { 231 | "id": "Date", 232 | "displayName": "Date", 233 | "required": false, 234 | "defaultMatch": false, 235 | "display": true, 236 | "type": "string", 237 | "canBeUsedToMatch": true 238 | }, 239 | { 240 | "id": "subreddit", 241 | "displayName": "subreddit", 242 | "required": 
false, 243 | "defaultMatch": false, 244 | "display": true, 245 | "type": "string", 246 | "canBeUsedToMatch": true 247 | }, 248 | { 249 | "id": "post_id", 250 | "displayName": "post_id", 251 | "required": false, 252 | "defaultMatch": false, 253 | "display": true, 254 | "type": "string", 255 | "canBeUsedToMatch": true 256 | } 257 | ], 258 | "attemptToConvertTypes": false, 259 | "convertFieldsToString": false 260 | }, 261 | "options": {} 262 | }, 263 | "type": "n8n-nodes-base.googleSheets", 264 | "typeVersion": 4.7, 265 | "position": [ 266 | 1664, 267 | 2272 268 | ], 269 | "id": "3de9105e-f1c2-4ad4-8617-48a1ca259069", 270 | "name": "Append row in sheet1", 271 | "credentials": { 272 | "googleSheetsOAuth2Api": { 273 | "id": "dNmdYyKDCj9rTeSw", 274 | "name": "Gsheet" 275 | } 276 | } 277 | }, 278 | { 279 | "parameters": { 280 | "assignments": { 281 | "assignments": [ 282 | { 283 | "id": "316d8d54-ce09-409b-ae4e-a2dffe41d011", 284 | "name": "tweet", 285 | "value": "={{$json.output.tweet}}", 286 | "type": "string" 287 | }, 288 | { 289 | "id": "69c01373-920f-4b2a-92bf-9a3f2fad16b2", 290 | "name": "subreddit", 291 | "value": "={{$json.output.subreddit || null}}", 292 | "type": "string" 293 | }, 294 | { 295 | "id": "a8b984bf-a55d-4ac3-9afc-d2fd95d1336c", 296 | "name": "post_id", 297 | "value": "={{ $json.output.id || null}}", 298 | "type": "string" 299 | } 300 | ] 301 | }, 302 | "options": {} 303 | }, 304 | "type": "n8n-nodes-base.set", 305 | "typeVersion": 3.4, 306 | "position": [ 307 | 1136, 308 | 2272 309 | ], 310 | "id": "e100fb38-9f6a-42a3-bd97-a08a0077bfc0", 311 | "name": "Edit Fields1" 312 | }, 313 | { 314 | "parameters": { 315 | "content": "## Post to Twitter\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGet your credentials from https://developer.x.com\n\n\n![Twitter post](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-twitter-post.png)", 316 | "height": 608, 317 | "width": 624, 318 | "color": 5 319 | }, 320 | "type": "n8n-nodes-base.stickyNote", 321 | 
"position": [ 322 | 960, 323 | 2160 324 | ], 325 | "typeVersion": 1, 326 | "id": "79f6984b-1080-4477-8c4a-0f77d0459755", 327 | "name": "Sticky Note10" 328 | }, 329 | { 330 | "parameters": { 331 | "content": "## Update Google Sheet\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nRecord the post id of the reddit post that we used to repurpose content. \n\nThis ensure we dont write about the same post twice\n\n![Google Sheets](https://articles.emp0.com/wp-content/uploads/2025/10/reddit-twitter-gsheet.png)", 332 | "height": 608, 333 | "width": 512, 334 | "color": 4 335 | }, 336 | "type": "n8n-nodes-base.stickyNote", 337 | "position": [ 338 | 1600, 339 | 2160 340 | ], 341 | "typeVersion": 1, 342 | "id": "b5381fec-824a-44c2-9cde-cdabbbe4c542", 343 | "name": "Sticky Note11" 344 | }, 345 | { 346 | "parameters": { 347 | "content": "## Repurpose a content from reddit\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis AI agent\n1. Gets 10 recent posts from the subreddit\n2. Chooses one post as reference\n3. Check Google Sheets to ensure we have not written about this post before\n4. 
Repurpose that reddit post for twitter \n\n\n", 348 | "height": 608, 349 | "width": 592, 350 | "color": 2 351 | }, 352 | "type": "n8n-nodes-base.stickyNote", 353 | "position": [ 354 | 352, 355 | 2160 356 | ], 357 | "typeVersion": 1, 358 | "id": "bb800730-5a63-411e-bdbb-63b28fb683ae", 359 | "name": "Sticky Note9" 360 | } 361 | ], 362 | "pinData": {}, 363 | "connections": { 364 | "Code1": { 365 | "main": [ 366 | [ 367 | { 368 | "node": "Tweet maker1", 369 | "type": "main", 370 | "index": 0 371 | } 372 | ] 373 | ] 374 | }, 375 | "Tweet maker1": { 376 | "main": [ 377 | [ 378 | { 379 | "node": "Edit Fields1", 380 | "type": "main", 381 | "index": 0 382 | } 383 | ] 384 | ] 385 | }, 386 | "read database2": { 387 | "ai_tool": [ 388 | [ 389 | { 390 | "node": "Tweet maker1", 391 | "type": "ai_tool", 392 | "index": 0 393 | } 394 | ] 395 | ] 396 | }, 397 | "Google Gemini Chat Model1": { 398 | "ai_languageModel": [ 399 | [ 400 | { 401 | "node": "Tweet maker1", 402 | "type": "ai_languageModel", 403 | "index": 0 404 | } 405 | ] 406 | ] 407 | }, 408 | "Schedule Trigger1": { 409 | "main": [ 410 | [ 411 | { 412 | "node": "Code1", 413 | "type": "main", 414 | "index": 0 415 | } 416 | ] 417 | ] 418 | }, 419 | "Creates the tweet1": { 420 | "main": [ 421 | [ 422 | { 423 | "node": "Append row in sheet1", 424 | "type": "main", 425 | "index": 0 426 | } 427 | ] 428 | ] 429 | }, 430 | "Structured Output Parser2": { 431 | "ai_outputParser": [ 432 | [ 433 | { 434 | "node": "Tweet maker1", 435 | "type": "ai_outputParser", 436 | "index": 0 437 | } 438 | ] 439 | ] 440 | }, 441 | "Get many posts in Reddit1": { 442 | "ai_tool": [ 443 | [ 444 | { 445 | "node": "Tweet maker1", 446 | "type": "ai_tool", 447 | "index": 0 448 | } 449 | ] 450 | ] 451 | }, 452 | "Append row in sheet1": { 453 | "main": [ 454 | [] 455 | ] 456 | }, 457 | "Edit Fields1": { 458 | "main": [ 459 | [ 460 | { 461 | "node": "Creates the tweet1", 462 | "type": "main", 463 | "index": 0 464 | } 465 | ] 466 | ] 467 | } 468 | }, 469 | 
"active": true, 470 | "settings": { 471 | "executionOrder": "v1", 472 | "callerPolicy": "workflowsFromSameOwner", 473 | "errorWorkflow": "ernonQOxi07n6WGi", 474 | "timeSavedPerExecution": 5 475 | }, 476 | "versionId": "2483f036-f1af-4907-9a3a-26dd84a14018", 477 | "meta": { 478 | "templateCredsSetupCompleted": true, 479 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739" 480 | }, 481 | "id": "TdUwxmTIJnTwQhJe", 482 | "tags": [ 483 | { 484 | "createdAt": "2025-05-06T11:04:59.376Z", 485 | "updatedAt": "2025-05-06T11:04:59.376Z", 486 | "id": "GcbJtfnHd72wKEMY", 487 | "name": "admin" 488 | } 489 | ] 490 | } -------------------------------------------------------------------------------- /Turn Any Prompt Into a Chart and Upload It to WordPress/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "meta": { 3 | "instanceId": "52254486b159b349334953c1738da94e90477c7604aa8db2062d11afc0120739", 4 | "templateCredsSetupCompleted": true 5 | }, 6 | "nodes": [ 7 | { 8 | "id": "502f1b86-95d7-49c1-9503-2a38e37ff9ba", 9 | "name": "When Executed by Another Workflow", 10 | "type": "n8n-nodes-base.executeWorkflowTrigger", 11 | "position": [ 12 | 992, 13 | -224 14 | ], 15 | "parameters": { 16 | "workflowInputs": { 17 | "values": [ 18 | { 19 | "name": "prompt" 20 | } 21 | ] 22 | } 23 | }, 24 | "typeVersion": 1.1 25 | }, 26 | { 27 | "id": "88196843-a411-4ef0-ba4e-df54fc8f924f", 28 | "name": "QuickChart Object Schema", 29 | "type": "@n8n/n8n-nodes-langchain.outputParserStructured", 30 | "position": [ 31 | 1920, 32 | -16 33 | ], 34 | "parameters": { 35 | "schemaType": "manual", 36 | "inputSchema": "{\n \"type\": \"object\",\n \"properties\": {\n \"slug\": {\n \"type\": \"string\",\n \"description\": \"Proposed image filename or identifier (e.g., 'ev-sales-2024')\"\n },\n \"width\": {\n \"type\": \"string\",\n \"description\": \"Pixel width of the image canvas (e.g., '500')\"\n },\n \"height\": {\n \"type\": 
\"string\",\n \"description\": \"Pixel height of the image canvas (e.g., '300')\"\n },\n \"devicePixelRatio\": {\n \"type\": \"number\",\n \"description\": \"Device pixel ratio (e.g., 2)\"\n },\n \"format\": {\n \"type\": \"string\",\n \"enum\": [\"png\", \"jpeg\", \"webp\", \"svg\", \"pdf\"],\n \"description\": \"Output format of the image\"\n },\n \"backgroundColor\": {\n \"type\": \"string\",\n \"description\": \"Canvas background color in hex or named CSS color, make it white #FFFFFF by default\"\n },\n \"version\": {\n \"type\": \"string\",\n \"description\": \"Chart.js version (e.g., '3.9.1')\"\n },\n \"chart\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": {\n \"type\": \"string\",\n \"enum\": [\"line\", \"bar\", \"doughnut\", \"pie\", \"polarArea\"]\n },\n \"data\": {\n \"type\": \"object\",\n \"properties\": {\n \"labels\": {\n \"type\": \"array\",\n \"items\": { \"type\": \"string\" },\n \"minItems\": 1\n },\n \"datasets\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"object\",\n \"properties\": {\n \"label\": { \"type\": \"string\" },\n \"data\": {\n \"type\": \"array\",\n \"items\": { \"type\": \"number\" },\n \"minItems\": 1\n },\n \"backgroundColor\": {\n \"type\": [\"string\", \"array\"],\n \"items\": { \"type\": \"string\" }\n },\n \"borderColor\": {\n \"type\": [\"string\", \"array\"],\n \"items\": { \"type\": \"string\" }\n },\n \"fill\": { \"type\": [\"boolean\", \"string\"] },\n \"borderWidth\": { \"type\": \"number\" }\n },\n \"required\": [\"label\", \"data\"]\n },\n \"minItems\": 1\n }\n },\n \"required\": [\"labels\", \"datasets\"]\n },\n \"options\": {\n \"type\": \"object\",\n \"description\": \"Chart.js configuration options including title, plugins, scales, etc.\"\n }\n },\n \"required\": [\"type\", \"data\"]\n }\n },\n \"required\": [\"slug\", \"chart\"],\n \"default\": {\n \"format\": \"png\",\n \"backgroundColor\": \"white\"\n }\n}\n" 37 | }, 38 | "typeVersion": 1.2 39 | }, 40 | { 41 | "id": 
"9a398d7a-0435-44d2-9cbb-e5156fbad68f", 42 | "name": "OpenAI Chat Model", 43 | "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", 44 | "position": [ 45 | 1664, 46 | -16 47 | ], 48 | "parameters": { 49 | "model": { 50 | "__rl": true, 51 | "mode": "list", 52 | "value": "gpt-4o-mini", 53 | "cachedResultName": "gpt-4o-mini" 54 | }, 55 | "options": {} 56 | }, 57 | "credentials": { 58 | "openAiApi": { 59 | "id": "k5QLUV8boAepwce0", 60 | "name": "OpenAi account - default project" 61 | } 62 | }, 63 | "typeVersion": 1.2 64 | }, 65 | { 66 | "id": "0afadfd3-348a-4df7-958b-c14eca32b316", 67 | "name": "Upload image2", 68 | "type": "n8n-nodes-base.httpRequest", 69 | "position": [ 70 | 2272, 71 | -224 72 | ], 73 | "parameters": { 74 | "url": "https://your.wordpress.com/wp-json/wp/v2/media", 75 | "method": "POST", 76 | "options": {}, 77 | "sendBody": true, 78 | "contentType": "binaryData", 79 | "sendHeaders": true, 80 | "authentication": "predefinedCredentialType", 81 | "headerParameters": { 82 | "parameters": [ 83 | { 84 | "name": "Content-Disposition", 85 | "value": "=attachment; filename=\"chart-{{ $json.output.slug }}.png\"" 86 | } 87 | ] 88 | }, 89 | "inputDataFieldName": "data", 90 | "nodeCredentialType": "wordpressApi" 91 | }, 92 | "credentials": { 93 | "wordpressApi": { 94 | "id": "G1G8jDdEoWAVytQb", 95 | "name": "Wordpress - author@email.com" 96 | } 97 | }, 98 | "retryOnFail": true, 99 | "typeVersion": 4.2, 100 | "waitBetweenTries": 5000 101 | }, 102 | { 103 | "id": "bb57c519-8dff-4bf5-8db6-a1c17c9bfb78", 104 | "name": "Generate Chart AI agent", 105 | "type": "@n8n/n8n-nodes-langchain.agent", 106 | "position": [ 107 | 1664, 108 | -224 109 | ], 110 | "parameters": { 111 | "text": "={{ $json.message.content }}", 112 | "options": { 113 | "systemMessage": "You are a Chart.js configuration agent. Your task is to transform a user-provided data table into a valid QuickChart JSON object compatible with line, bar, pie, doughnut, or polarArea charts.\n\nInstructions:\n1. 
Read and interpret the markdown table input provided by the data agent.\n2. Based on structure, choose the most appropriate chart type from:\n - \"line\", \"bar\" → for time series or value comparison\n - \"pie\", \"doughnut\", \"polarArea\" → for proportion-based data\n3. Construct a valid Chart.js object following QuickChart format.\n4. Always include:\n - `type`\n - `data.labels`\n - `data.datasets` (with `label`, `data`, `backgroundColor`)\n - `options.plugins.title.text` with a suitable chart title\n\nOutput only valid JSON matching the schema:\n- No code blocks\n- No markdown\n- No explanations" 114 | }, 115 | "promptType": "define", 116 | "hasOutputParser": true 117 | }, 118 | "typeVersion": 1.7 119 | }, 120 | { 121 | "id": "e6235b7c-6b11-4cba-bc0c-9e22db2e8220", 122 | "name": "When clicking ‘Execute workflow’", 123 | "type": "n8n-nodes-base.manualTrigger", 124 | "position": [ 125 | 992, 126 | -32 127 | ], 128 | "parameters": {}, 129 | "typeVersion": 1 130 | }, 131 | { 132 | "id": "9570f247-b9f1-49b2-b104-6adcbe32e450", 133 | "name": "Message a model", 134 | "type": "@n8n/n8n-nodes-langchain.openAi", 135 | "position": [ 136 | 1248, 137 | -224 138 | ], 139 | "parameters": { 140 | "modelId": { 141 | "__rl": true, 142 | "mode": "list", 143 | "value": "gpt-4o-search-preview", 144 | "cachedResultName": "GPT-4O-SEARCH-PREVIEW" 145 | }, 146 | "options": {}, 147 | "messages": { 148 | "values": [ 149 | { 150 | "role": "system", 151 | "content": "You are a data aggregation AI agent with access to real-time web search. Your goal is to collect relevant, structured, and up-to-date data from trusted sources based on the user's query.\n\nInstructions:\n1. Search the web for reliable, factual, and recent data related to the query.\n2. Extract the data into a clean, readable table format (markdown table).\n3. The table should contain categories (e.g. country, platform, year) as rows or columns depending on structure.\n4. 
Include source URLs for each data group, appended below the table.\n\nRules:\n- Do not interpret or visualize the data.\n- Do not generate charts.\n- Do not summarize or paraphrase data.\n- Return only the data table and source list in markdown format." 152 | }, 153 | { 154 | "content": "={{ $json.prompt }}" 155 | } 156 | ] 157 | } 158 | }, 159 | "credentials": { 160 | "openAiApi": { 161 | "id": "k5QLUV8boAepwce0", 162 | "name": "OpenAi account - default project" 163 | } 164 | }, 165 | "typeVersion": 1.8 166 | }, 167 | { 168 | "id": "5002b0f9-036d-4416-b36a-818c699e0b71", 169 | "name": "Think1", 170 | "type": "@n8n/n8n-nodes-langchain.toolThink", 171 | "position": [ 172 | 1792, 173 | -16 174 | ], 175 | "parameters": {}, 176 | "typeVersion": 1 177 | }, 178 | { 179 | "id": "7612a971-459f-4106-8b0d-eda11b23f46a", 180 | "name": "Create QuickChart", 181 | "type": "n8n-nodes-base.httpRequest", 182 | "position": [ 183 | 2032, 184 | -224 185 | ], 186 | "parameters": { 187 | "url": "=https://quickchart.io/chart", 188 | "method": "POST", 189 | "options": {}, 190 | "jsonBody": "={{ $json.output }}", 191 | "sendBody": true, 192 | "specifyBody": "json" 193 | }, 194 | "typeVersion": 4.2 195 | }, 196 | { 197 | "id": "b654b232-2e12-4864-a829-752b77abc3c4", 198 | "name": "Code", 199 | "type": "n8n-nodes-base.code", 200 | "position": [ 201 | 2496, 202 | -224 203 | ], 204 | "parameters": { 205 | "jsCode": "return {\n \"research\" : $('Message a model').first().json,\n \"graph_data\" : $('Generate Chart AI agent').first().json.output,\n \"graph_image\" : $('Upload image2').first().json,\n \"result_image_url\" : $('Upload image2').first().json.guid.raw\n}" 206 | }, 207 | "typeVersion": 2 208 | }, 209 | { 210 | "id": "6e184446-0d1c-43fa-8bc0-862bca3b17a2", 211 | "name": "Sticky Note", 212 | "type": "n8n-nodes-base.stickyNote", 213 | "position": [ 214 | 1216, 215 | -320 216 | ], 217 | "parameters": { 218 | "color": 5, 219 | "width": 336, 220 | "height": 448, 221 | "content": "## Search 
the web\nUsing GPT-4o search preview to generate a table" 222 | }, 223 | "typeVersion": 1 224 | }, 225 | { 226 | "id": "abca6241-a82c-4a6f-b604-fecdb4268533", 227 | "name": "Sticky Note1", 228 | "type": "n8n-nodes-base.stickyNote", 229 | "position": [ 230 | 1568, 231 | -320 232 | ], 233 | "parameters": { 234 | "color": 4, 235 | "width": 624, 236 | "height": 448, 237 | "content": "## Generate chart\nUsing QuickChart / Chart.js" 238 | }, 239 | "typeVersion": 1 240 | }, 241 | { 242 | "id": "acfa1a4d-d900-445b-a1a1-90147ee441d9", 243 | "name": "Sticky Note2", 244 | "type": "n8n-nodes-base.stickyNote", 245 | "position": [ 246 | 2208, 247 | -320 248 | ], 249 | "parameters": { 250 | "color": 6, 251 | "width": 224, 252 | "height": 448, 253 | "content": "## Upload chart image to WP\n" 254 | }, 255 | "typeVersion": 1 256 | }, 257 | { 258 | "id": "5a9349d9-6d61-4aac-9ead-a3bb44506d5d", 259 | "name": "Sticky Note3", 260 | "type": "n8n-nodes-base.stickyNote", 261 | "position": [ 262 | 1568, 263 | 144 264 | ], 265 | "parameters": { 266 | "color": 7, 267 | "width": 864, 268 | "height": 576, 269 | "content": "## Convert the table to Chart.js\n![QuickChart graph](https://articles.emp0.com/wp-content/uploads/2025/07/chart-apple-market-share-q1-2025.png)" 270 | }, 271 | "typeVersion": 1 272 | }, 273 | { 274 | "id": "3e89f9d7-d068-49c2-a22c-3424fa2de70e", 275 | "name": "Sticky Note4", 276 | "type": "n8n-nodes-base.stickyNote", 277 | "position": [ 278 | 960, 279 | 144 280 | ], 281 | "parameters": { 282 | "color": 7, 283 | "width": 592, 284 | "height": 576, 285 | "content": "## Search the web and generate a table\n```\n[\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Here is the data on Apple's market share in the mobile phone market for Q1 2025:\\n\\n\n| Q1 2025 | Shipments | Market Share | Shipments | Market Share | Annual Growth |\\n\n|---------|------------|--------------|-----------|--------------|---------------|\\n\n| Samsung | 60.5 | 20% | 60.0 
| 20% | 1% |\\n\n| Apple | 55.0 | 19% | 48.7 | 16% | 13% |\\n\n| Xiaomi | 41.8 | 14% | 40.7 | 14% | 3% |\\n\n| vivo | 22.9 | 8% | 21.4 | 7% | 7% |\\n\n| OPPO | 22.7 | 8% | 25.0 | 8% | -9% |\\n\n| Others | 94.0 | 32% | 100.5 | 34% | -6% |\\n\n| Total | 296.9 | 100% | 296.2 | *100% | 0% |\\n\\n\n*Note: Percentages may not add up to 100% due to rounding.*\\n\\nSource: [Canalys Newsroom - Global smartphone market grows just 1% in Q1 after rocky start to 2025](https://www.canalys.com/newsroom/global-smartphone-market-q1-2025)\\n\\nAdditionally, according to Counterpoint Research, Apple achieved a 19% market share in Q1 2025, marking the first time it secured the top position in global smartphone sales during a first quarter. This success was driven by the launch of the iPhone 16e and strong demand in markets such as Japan and India. ([counterpointresearch.com](https://www.counterpointresearch.com/insight/post-insight-global-smartphone-market-grows-3-in-q1-2025-but-future-uncertain-apple-takes-1-spot-in-q1-for-first-time/?utm_source=openai))\\n\\n*Note: Different research firms may report varying figures due to differences in data collection methodologies.* \",\n \"refusal\": null,\n \"annotations\": [\n {\n \"type\": \"url_citation\",\n \"url_citation\": {\n \"end_index\": 2146,\n \"start_index\": 1935,\n \"title\": \"Global Smartphone Market Grows 3% in Q1 2025 But Future Uncertain; Apple Takes #1 Spot in Q1 For First Time\",\n \"url\": \"https://www.counterpointresearch.com/insight/post-insight-global-smartphone-market-grows-3-in-q1-2025-but-future-uncertain-apple-takes-1-spot-in-q1-for-first-time/?utm_source=openai\"\n }\n }\n ]\n },\n \"finish_reason\": \"stop\"\n }\n]\n```" 286 | }, 287 | "typeVersion": 1 288 | } 289 | ], 290 | "pinData": { 291 | "When clicking ‘Execute workflow’": [ 292 | { 293 | "prompt": "Generate a graph of apple's market share in the mobile phone market in q1 2025" 294 | } 295 | ] 296 | }, 297 | "connections": { 298 | "Think1": { 299 | 
"ai_tool": [ 300 | [ 301 | { 302 | "node": "Generate Chart AI agent", 303 | "type": "ai_tool", 304 | "index": 0 305 | } 306 | ] 307 | ] 308 | }, 309 | "Upload image2": { 310 | "main": [ 311 | [ 312 | { 313 | "node": "Code", 314 | "type": "main", 315 | "index": 0 316 | } 317 | ] 318 | ] 319 | }, 320 | "Message a model": { 321 | "main": [ 322 | [ 323 | { 324 | "node": "Generate Chart AI agent", 325 | "type": "main", 326 | "index": 0 327 | } 328 | ] 329 | ] 330 | }, 331 | "Create QuickChart": { 332 | "main": [ 333 | [ 334 | { 335 | "node": "Upload image2", 336 | "type": "main", 337 | "index": 0 338 | } 339 | ] 340 | ] 341 | }, 342 | "OpenAI Chat Model": { 343 | "ai_languageModel": [ 344 | [ 345 | { 346 | "node": "Generate Chart AI agent", 347 | "type": "ai_languageModel", 348 | "index": 0 349 | } 350 | ] 351 | ] 352 | }, 353 | "Generate Chart AI agent": { 354 | "main": [ 355 | [ 356 | { 357 | "node": "Create QuickChart", 358 | "type": "main", 359 | "index": 0 360 | } 361 | ] 362 | ] 363 | }, 364 | "QuickChart Object Schema": { 365 | "ai_outputParser": [ 366 | [ 367 | { 368 | "node": "Generate Chart AI agent", 369 | "type": "ai_outputParser", 370 | "index": 0 371 | } 372 | ] 373 | ] 374 | }, 375 | "When Executed by Another Workflow": { 376 | "main": [ 377 | [ 378 | { 379 | "node": "Message a model", 380 | "type": "main", 381 | "index": 0 382 | } 383 | ] 384 | ] 385 | }, 386 | "When clicking ‘Execute workflow’": { 387 | "main": [ 388 | [ 389 | { 390 | "node": "Message a model", 391 | "type": "main", 392 | "index": 0 393 | } 394 | ] 395 | ] 396 | } 397 | } 398 | } -------------------------------------------------------------------------------- /Ebook to Audiobook converter/n8n.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "EBOOK", 3 | "nodes": [ 4 | { 5 | "parameters": { 6 | "operation": "write", 7 | "fileName": "=/tmp/audio {{$itemIndex}}.mp3", 8 | "dataPropertyName": "=audio {{ $itemIndex }}", 9 | "options": {} 
10 | }, 11 | "id": "7d65ca00-5f29-4e11-aaa8-77f8680add0c", 12 | "name": "Save Audio Chucks", 13 | "type": "n8n-nodes-base.readWriteFile", 14 | "position": [ 15 | 64, 16 | 0 17 | ], 18 | "typeVersion": 1 19 | }, 20 | { 21 | "parameters": { 22 | "jsCode": "/**\n * This Code node will:\n * 1. Gather all file paths from the incoming items (assuming each item has `item.json.filePath`).\n * 2. Build a single text string, each line in FFmpeg concat format: `file '/path/to/audio.mp3'`\n * 3. Convert that text to binary (Base64) so the next node (\"Write Binary File\") can save it as `concat_list.txt`.\n */\n\nconst items = $input.all();\n\n// Build the concat list\nlet concatListText = '';\n\nitems.forEach((item, index) => {\n let filePath;\n\n\n // Use only fileName for the rest\n filePath = item.json.fileName;\n\n\n if (filePath) {\n concatListText += `file '${filePath}'\\n`;\n }\n});\n\n// Convert the text to a Buffer, then to Base64\nconst buffer = Buffer.from(concatListText, 'utf-8');\nconst base64Data = buffer.toString('base64');\n\n// Return a single item containing the binary data\nreturn [\n {\n json: {},\n binary: {\n data: {\n data: base64Data,\n mimeType: 'text/plain',\n fileName: 'concat_list.txt'\n }\n }\n }\n];" 23 | }, 24 | "id": "0adc9201-3479-450f-8b5e-6bbf16cf01d2", 25 | "name": "Generate `concat_list.txt`", 26 | "type": "n8n-nodes-base.code", 27 | "position": [ 28 | -240, 29 | 192 30 | ], 31 | "typeVersion": 2 32 | }, 33 | { 34 | "parameters": { 35 | "operation": "write", 36 | "fileName": "/tmp/concat_list.txt", 37 | "options": {} 38 | }, 39 | "id": "c9aa3a38-449f-44a4-92d3-f28c1c29114c", 40 | "name": "Save concat_list", 41 | "type": "n8n-nodes-base.readWriteFile", 42 | "position": [ 43 | 96, 44 | 192 45 | ], 46 | "typeVersion": 1 47 | }, 48 | { 49 | "parameters": { 50 | "command": "ffmpeg -y -f concat -safe 0 -i /tmp/concat_list.txt \\\n-c copy /tmp/final_merged.mp3\n\n\n" 51 | }, 52 | "id": "956de46d-86b8-40ec-81c0-91fe87113d75", 53 | "name": "Join 
audio chucks and delete all files", 54 | "type": "n8n-nodes-base.executeCommand", 55 | "position": [ 56 | -240, 57 | 400 58 | ], 59 | "typeVersion": 1 60 | }, 61 | { 62 | "parameters": { 63 | "fileSelector": "/tmp/final_merged.mp3", 64 | "options": {} 65 | }, 66 | "id": "1405b1fb-7a4f-45d9-976c-121437ece794", 67 | "name": "read final_merged", 68 | "type": "n8n-nodes-base.readWriteFile", 69 | "position": [ 70 | 96, 71 | 400 72 | ], 73 | "typeVersion": 1 74 | }, 75 | { 76 | "parameters": { 77 | "content": "## EBOOK EXTRACTION MODULE\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nA sample [Little Red Riding Hood.pdf](https://www.laburnumps.vic.edu.au/uploaded_files/media/little_red_riding_hood.pdf)\n\n![Ebook](https://articles.emp0.com/wp-content/uploads/2025/10/Screenshot-from-2025-10-20-18-59-28.png)", 78 | "height": 1024, 79 | "width": 560, 80 | "color": 4 81 | }, 82 | "type": "n8n-nodes-base.stickyNote", 83 | "position": [ 84 | -1680, 85 | -144 86 | ], 87 | "typeVersion": 1, 88 | "id": "13749486-4e35-4444-9734-3be869668540", 89 | "name": "Sticky Note" 90 | }, 91 | { 92 | "parameters": { 93 | "content": "## EBOOK TO AUDIOBOOK CONVERSION MODULE\n\nBreaks the large chunk of text into paragraphs and translate in batches of 5 paragraphs each\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```json\n[\n {\n \"order\": 1,\n \"text\": \"Little Red Riding Hood by Leanne Guenther Once upon a time, there was a little girl who lived in a village near the forest. Whenever she went out, the little girl wore a red riding cloak, so everyone in the village called her Little Red Riding Hood. One morning, Little Red Riding Hood’s mother asked her to take some food to her grandmother, as she had been ill. So they packed a nice basket for Little Red Riding Hood to take to her grandmother.\"\n },\n {\n \"order\": 2,\n \"text\": \"When the basket was ready, the little girl put on her red cloak and kissed her mother goodbye. 
\\\\\\\"Remember, go straight to Grandma's house,\\\\\\\" her mother cautioned. \\\\\\\"Don't dawdle along the way and please don't talk to strangers! The woods are dangerous.\\\\\\\" \\\\\\\"Don't worry, mommy,\\\\\\\" said Little Red Riding Hood, \\\\\\\"I'll be careful.\\\\\\\" But when Little Red Riding Hood noticed some lovely flowers in the woods, she forgot her promise to her mother.\"\n },\n {\n \"order\": 3,\n \"text\": \"She picked a few, watched the butterflies flit about for awhile, listened to the frogs croaking and then picked a few more. Little Red Riding Hood was enjoying the warm summer day so much, that she didn't notice a dark shadow approaching out of the forest behind her... Suddenly, the wolf appeared beside her. \\\\\\\"What are you doing out here, little girl and where are you going?\\\\\\\" the wolf asked in a voice as friendly as he could muster.\"\n },\n {\n \"order\": 4,\n \"text\": \"\\\\\\\"I'm on my way to see my Grandma who lives through the forest, near the brook,\\\\\\\" Little Red Riding Hood replied. Then she realized how late she was and quickly excused herself, rushing down the path to her Grandma's house. The wolf, in the meantime, took a shortcut... The wolf, a little out of breath from running, arrived at Grandma's and knocked lightly at the door. \\\\\\\"Oh thank goodness dear. Come in. Come in.\"\n },\n {\n \"order\": 5,\n \"text\": \"I was worried that something had happened to you in the forest,\\\\\\\" said Grandma thinking that the knock was her granddaughter. The wolf let himself in. Poor Granny did not have time to say another word, before the wolf gobbled her up! The wolf let out a satisfied burp, and then poked through Granny's wardrobe to find a nightgown that he liked. 
He added a frilly sleeping cap, and for good measure, dabbed some of Granny's perfume behind his pointy ears.\"\n }\n]\n```", 94 | "height": 1024, 95 | "width": 704, 96 | "color": 3 97 | }, 98 | "type": "n8n-nodes-base.stickyNote", 99 | "position": [ 100 | -1088, 101 | -144 102 | ], 103 | "typeVersion": 1, 104 | "id": "f7a4e41c-3d00-4918-af1c-e1a772905671", 105 | "name": "Sticky Note1" 106 | }, 107 | { 108 | "parameters": { 109 | "content": "## AUDIO MERGING MODULE\n\n\nUses FFmpeg to combine multiple audio files into one\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nOnly works on self-hosted n8n with FFmpeg installed; FFmpeg cannot be installed on n8n Cloud.\n```bash\nffmpeg -y -f concat -safe 0 -i /tmp/concat_list.txt \\\n-c copy /tmp/final_merged.mp3\n```", 110 | "height": 1024, 111 | "width": 656, 112 | "color": 5 113 | }, 114 | "type": "n8n-nodes-base.stickyNote", 115 | "position": [ 116 | -352, 117 | -144 118 | ], 119 | "typeVersion": 1, 120 | "id": "cf4982e5-e3ac-4ade-8fac-557a54c93a68", 121 | "name": "Sticky Note2" 122 | }, 123 | { 124 | "parameters": { 125 | "content": "## UPLOADS THE AUDIOBOOK TO DRIVE\n\n[Resulting output](https://drive.google.com/file/d/12aVR2p-ZQ2DyqXCUgJPouzy-acoAB7WO/view?usp=sharing)", 126 | "height": 400, 127 | "width": 288, 128 | "color": 7 129 | }, 130 | "type": "n8n-nodes-base.stickyNote", 131 | "position": [ 132 | 336, 133 | -144 134 | ], 135 | "typeVersion": 1, 136 | "id": "0b98b51d-f5a1-4107-903d-7d419cd6b75e", 137 | "name": "Sticky Note3" 138 | }, 139 | { 140 | "parameters": { 141 | "formTitle": "Ebook to Audiobook", 142 | "formDescription": "Upload your Ebook here", 143 | "formFields": { 144 | "values": [ 145 | { 146 | "fieldLabel": "UPLOAD", 147 | "fieldType": "file", 148 | "multipleFiles": false, 149 | "requiredField": true 150 | } 151 | ] 152 | }, 153 | "options": { 154 | "appendAttribution": false 155 | } 156 | }, 157 | "type": "n8n-nodes-base.formTrigger", 158 | 
"typeVersion": 2.3, 159 | "position": [ 160 | -1632, 161 | -48 162 | ], 163 | "id": "57a896d1-6d7e-4cba-b55a-d3ab31742f7c", 164 | "name": "FORM", 165 | "webhookId": "d95584f4-a526-4218-8f75-2b17272ebba9" 166 | }, 167 | { 168 | "parameters": { 169 | "operation": "pdf", 170 | "binaryPropertyName": "UPLOAD", 171 | "options": {} 172 | }, 173 | "type": "n8n-nodes-base.extractFromFile", 174 | "typeVersion": 1, 175 | "position": [ 176 | -1456, 177 | -48 178 | ], 179 | "id": "ad8b130e-e51d-40ad-85a1-9ce98e978422", 180 | "name": "EXTRACT TEXT" 181 | }, 182 | { 183 | "parameters": { 184 | "jsCode": "const rawText = $input.first().json.text || \"\";\n\n// Clean function for JSON/TTS\nfunction cleanText(str) {\n return str\n .replace(/\\\\\\\\n/g, \" \")\n .replace(/\\\\n/g, \" \")\n .replace(/\\\\/g, \"\\\\\\\\\")\n .replace(/\"/g, '\\\\\"')\n .replace(/(\\r\\n|\\r|\\n)+/g, \" \")\n .replace(/\\s+/g, \" \")\n .trim();\n}\n\nconst text = cleanText(rawText);\n\nconst sentences = text.split(/(?<=[.!?])\\s+/).filter(Boolean);\n\nconst maxChars = 500;\nlet parts = [];\nlet chunk = \"\";\n\nfor (let sentence of sentences) {\n if (sentence.length > maxChars) {\n // Split very long sentence into smaller pieces\n let start = 0;\n while (start < sentence.length) {\n const sub = sentence.slice(start, start + maxChars);\n if (chunk) parts.push(chunk);\n parts.push(sub);\n chunk = \"\";\n start += maxChars;\n }\n continue;\n }\n\n const space = chunk ? 
\" \" : \"\";\n if ((chunk + space + sentence).length > maxChars) {\n if (chunk) parts.push(chunk);\n chunk = sentence;\n } else {\n chunk += space + sentence;\n }\n}\n\nif (chunk) parts.push(chunk);\nconst result = parts.map((p, i) => ({ order: i + 1, text: p }));\n\nreturn result.map(r => ({ json: r }));\n" 185 | }, 186 | "type": "n8n-nodes-base.code", 187 | "typeVersion": 2, 188 | "position": [ 189 | -1264, 190 | -48 191 | ], 192 | "id": "3e47b303-ea16-4b57-9df3-227a4c3f7566", 193 | "name": "SPLITS THE TEXT ACCORGING TO RULES" 194 | }, 195 | { 196 | "parameters": { 197 | "batchSize": 5, 198 | "options": {} 199 | }, 200 | "type": "n8n-nodes-base.splitInBatches", 201 | "typeVersion": 3, 202 | "position": [ 203 | -1024, 204 | 144 205 | ], 206 | "id": "40d0a3d3-e4d0-4b0a-82d4-393aa5ad3aa2", 207 | "name": "Loop Over Text chunks (5) at a time" 208 | }, 209 | { 210 | "parameters": {}, 211 | "type": "n8n-nodes-base.wait", 212 | "typeVersion": 1.1, 213 | "position": [ 214 | -544, 215 | 208 216 | ], 217 | "id": "2c369974-0d97-405d-9121-cbcb5648c00c", 218 | "name": "WAITS FOR 5 SECONDS", 219 | "webhookId": "865809f7-7aea-40bd-9a0e-104a2ee18d73" 220 | }, 221 | { 222 | "parameters": { 223 | "url": "={{ $json.output }}", 224 | "options": {} 225 | }, 226 | "type": "n8n-nodes-base.httpRequest", 227 | "typeVersion": 4.2, 228 | "position": [ 229 | -800, 230 | 0 231 | ], 232 | "id": "c606fc3d-c0fc-41f7-a646-8f369ebea7ea", 233 | "name": "CONVERTS URL TO AUDIO FILES" 234 | }, 235 | { 236 | "parameters": { 237 | "jsCode": "return items.map((item, index) => {\n // Make a new item\n const newItem = { json: {}, binary: {} };\n\n // Copy the JSON data if you have any\n newItem.json = { ...item.json };\n\n // Loop through all binary properties\n for (let key in item.binary) {\n // Rename the binary key\n const newKey = `audio ${index}`;\n\n // Copy the binary data\n newItem.binary[newKey] = { ...item.binary[key] };\n\n // Rename the file itself\n newItem.binary[newKey].fileName = 
`${newKey}.mp3`; // change extension if needed\n }\n\n return newItem;\n});\n" 238 | }, 239 | "type": "n8n-nodes-base.code", 240 | "typeVersion": 2, 241 | "position": [ 242 | -240, 243 | 0 244 | ], 245 | "id": "78cec513-0fe5-4325-93aa-198f63afdcd8", 246 | "name": "GIVES INDEXES TO AUDIO FILES" 247 | }, 248 | { 249 | "parameters": { 250 | "name": "audiobook.mp3", 251 | "driveId": { 252 | "__rl": true, 253 | "value": "My Drive", 254 | "mode": "list", 255 | "cachedResultName": "My Drive", 256 | "cachedResultUrl": "https://drive.google.com/drive/my-drive" 257 | }, 258 | "folderId": { 259 | "__rl": true, 260 | "value": "11eIODBtLZwiUjUVJvK97_Z42uPmEnMMu", 261 | "mode": "list", 262 | "cachedResultName": "Audiobook", 263 | "cachedResultUrl": "https://drive.google.com/drive/folders/11eIODBtLZwiUjUVJvK97_Z42uPmEnMMu" 264 | }, 265 | "options": {} 266 | }, 267 | "type": "n8n-nodes-base.googleDrive", 268 | "typeVersion": 3, 269 | "position": [ 270 | 432, 271 | 0 272 | ], 273 | "id": "d7b3d611-98d0-44fc-8ded-597f1e5ad916", 274 | "name": "Uploads Ebook", 275 | "credentials": { 276 | "googleDriveOAuth2Api": { 277 | "id": "4pJ4FbSsj82fOkbU", 278 | "name": "Gridve" 279 | } 280 | } 281 | }, 282 | { 283 | "parameters": { 284 | "method": "POST", 285 | "url": "https://api.replicate.com/v1/models/minimax/speech-02-hd/predictions", 286 | "authentication": "genericCredentialType", 287 | "genericAuthType": "httpBearerAuth", 288 | "sendHeaders": true, 289 | "headerParameters": { 290 | "parameters": [ 291 | { 292 | "name": "Prefer", 293 | "value": "wait" 294 | } 295 | ] 296 | }, 297 | "sendBody": true, 298 | "specifyBody": "json", 299 | "jsonBody": "={\n \"input\": {\n \"text\": \"{{ $json.text }}\",\n \"pitch\": 0,\n \"speed\": 1,\n \"volume\": 1,\n \"bitrate\": 128000,\n \"channel\": \"mono\",\n \"emotion\": \"happy\",\n \"voice_id\": \"Friendly_Person\",\n \"sample_rate\": 32000,\n \"language_boost\": \"English\",\n \"english_normalization\": true\n }\n} ", 300 | "options": {} 301 | }, 
302 | "type": "n8n-nodes-base.httpRequest", 303 | "typeVersion": 4.2, 304 | "position": [ 305 | -768, 306 | 208 307 | ], 308 | "id": "33160ec4-53df-4a5c-95be-029065d4e2f7", 309 | "name": "MINIMAX TTS", 310 | "credentials": { 311 | "httpBearerAuth": { 312 | "id": "w56BeWYVDLf1uQQ9", 313 | "name": "Bearer Auth account" 314 | } 315 | } 316 | }, 317 | { 318 | "parameters": { 319 | "content": "# Watch the [Youtube Demo Video](https://youtu.be/xKqkjXIPZoM)\n\n![Youtube Demo](https://img.youtube.com/vi/xKqkjXIPZoM/0.jpg)", 320 | "height": 528, 321 | "width": 528, 322 | "color": 7 323 | }, 324 | "type": "n8n-nodes-base.stickyNote", 325 | "position": [ 326 | -2256, 327 | -128 328 | ], 329 | "typeVersion": 1, 330 | "id": "88c3f2dc-5690-45ac-90d6-86c27bfeb04e", 331 | "name": "Sticky Note4" 332 | } 333 | ], 334 | "pinData": {}, 335 | "connections": { 336 | "Save Audio Chucks": { 337 | "main": [ 338 | [ 339 | { 340 | "node": "Generate `concat_list.txt`", 341 | "type": "main", 342 | "index": 0 343 | } 344 | ] 345 | ] 346 | }, 347 | "Generate `concat_list.txt`": { 348 | "main": [ 349 | [ 350 | { 351 | "node": "Save concat_list", 352 | "type": "main", 353 | "index": 0 354 | } 355 | ] 356 | ] 357 | }, 358 | "Save concat_list": { 359 | "main": [ 360 | [ 361 | { 362 | "node": "Join audio chucks and delete all files", 363 | "type": "main", 364 | "index": 0 365 | } 366 | ] 367 | ] 368 | }, 369 | "Join audio chucks and delete all files": { 370 | "main": [ 371 | [ 372 | { 373 | "node": "read final_merged", 374 | "type": "main", 375 | "index": 0 376 | } 377 | ] 378 | ] 379 | }, 380 | "read final_merged": { 381 | "main": [ 382 | [ 383 | { 384 | "node": "Uploads Ebook", 385 | "type": "main", 386 | "index": 0 387 | } 388 | ] 389 | ] 390 | }, 391 | "FORM": { 392 | "main": [ 393 | [ 394 | { 395 | "node": "EXTRACT TEXT", 396 | "type": "main", 397 | "index": 0 398 | } 399 | ] 400 | ] 401 | }, 402 | "EXTRACT TEXT": { 403 | "main": [ 404 | [ 405 | { 406 | "node": "SPLITS THE TEXT 
ACCORGING TO RULES", 407 | "type": "main", 408 | "index": 0 409 | } 410 | ] 411 | ] 412 | }, 413 | "SPLITS THE TEXT ACCORGING TO RULES": { 414 | "main": [ 415 | [ 416 | { 417 | "node": "Loop Over Text chunks (5) at a time", 418 | "type": "main", 419 | "index": 0 420 | } 421 | ] 422 | ] 423 | }, 424 | "Loop Over Text chunks (5) at a time": { 425 | "main": [ 426 | [ 427 | { 428 | "node": "CONVERTS URL TO AUDIO FILES", 429 | "type": "main", 430 | "index": 0 431 | } 432 | ], 433 | [ 434 | { 435 | "node": "MINIMAX TTS", 436 | "type": "main", 437 | "index": 0 438 | } 439 | ] 440 | ] 441 | }, 442 | "WAITS FOR 5 SECONDS": { 443 | "main": [ 444 | [ 445 | { 446 | "node": "Loop Over Text chunks (5) at a time", 447 | "type": "main", 448 | "index": 0 449 | } 450 | ] 451 | ] 452 | }, 453 | "CONVERTS URL TO AUDIO FILES": { 454 | "main": [ 455 | [ 456 | { 457 | "node": "GIVES INDEXES TO AUDIO FILES", 458 | "type": "main", 459 | "index": 0 460 | } 461 | ] 462 | ] 463 | }, 464 | "GIVES INDEXES TO AUDIO FILES": { 465 | "main": [ 466 | [ 467 | { 468 | "node": "Save Audio Chucks", 469 | "type": "main", 470 | "index": 0 471 | } 472 | ] 473 | ] 474 | }, 475 | "MINIMAX TTS": { 476 | "main": [ 477 | [ 478 | { 479 | "node": "WAITS FOR 5 SECONDS", 480 | "type": "main", 481 | "index": 0 482 | } 483 | ] 484 | ] 485 | } 486 | }, 487 | "active": false, 488 | "settings": { 489 | "executionOrder": "v1" 490 | }, 491 | "versionId": "d82d1c3b-0657-4fc5-8691-40ef2fe01121", 492 | "meta": { 493 | "templateCredsSetupCompleted": true, 494 | "instanceId": "90d03af248d07e2ebe7c0ecf1b18f488f6465e5560f151e375f9e5ab447ab951" 495 | }, 496 | "id": "bX6cyG6u0dUYo1gM", 497 | "tags": [] 498 | } --------------------------------------------------------------------------------
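
For readers who want to experiment with the chunking rules outside n8n, here is a standalone sketch of the logic embedded in the workflow's text-splitting Code node. It mirrors the node's jsCode (whitespace normalization, sentence-boundary split, 500-character cap, hard-split for oversized sentences), with the long-sentence branch slightly simplified; it is not part of the workflow file itself.

```javascript
// Standalone sketch of the chunking rules used by the workflow's
// text-splitting Code node: normalize whitespace, split on sentence
// boundaries, then pack sentences into chunks of at most `maxChars`.
function chunkText(rawText, maxChars = 500) {
  const text = (rawText || "").replace(/\s+/g, " ").trim();
  // Split after sentence-ending punctuation followed by whitespace.
  const sentences = text.split(/(?<=[.!?])\s+/).filter(Boolean);

  const parts = [];
  let chunk = "";

  for (const sentence of sentences) {
    if (sentence.length > maxChars) {
      // A single sentence longer than the cap is hard-split into slices.
      if (chunk) { parts.push(chunk); chunk = ""; }
      for (let start = 0; start < sentence.length; start += maxChars) {
        parts.push(sentence.slice(start, start + maxChars));
      }
      continue;
    }
    const space = chunk ? " " : "";
    if ((chunk + space + sentence).length > maxChars) {
      parts.push(chunk);   // current chunk is full; start a new one
      chunk = sentence;
    } else {
      chunk += space + sentence;
    }
  }
  if (chunk) parts.push(chunk);

  // Same output shape the node produces: one { order, text } per chunk.
  return parts.map((p, i) => ({ order: i + 1, text: p }));
}
```

Inside the actual Code node the result is additionally wrapped as `result.map(r => ({ json: r }))` so n8n treats each chunk as a separate item for the downstream batch loop and TTS call.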