├── AI Engineering Fundementals ├── Day 1: What are AI Agents?.md ├── Day 2: Agents vs Workflows.md ├── Day 3: Rag and Tool Use.md ├── Day 4: Memory: Teaching Agents to Remember.md ├── Day 5: Guardrails, Tracing & Evals.md ├── Day 6: Prompt Engineering.md └── Day 7: Multi-Agent Orchestration.md ├── README.md ├── SWE Fundamentals ├── Day 10: Deploying with Vercel.md ├── Day 11: Vibe Coder’s Tool and Model pricing breakdown.md ├── Day 12: SWE and Web Fundamentals.md ├── Day 13: TypeScript & Programming Fundamentals.md ├── Day 14: APIs & Web Communication.md ├── Day 8: Tooling & Environment Setup.md └── Day 9 : Set up Cursor and General Practice.md └── sandbox ├── cover-letter-generator ├── .env.example ├── .gitignore ├── README.md ├── app.py ├── requirements.txt └── src │ ├── __init__.py │ └── core.py └── hiring-agent ├── .env.local ├── README.md ├── app.py ├── app ├── globals.css ├── layout.tsx └── page.tsx ├── backend ├── main.py └── requirements.txt ├── components.json ├── components ├── email-modal.tsx ├── file-uploader.tsx ├── progress-tracker.tsx ├── results-table.tsx ├── resume-screening-app.tsx ├── theme-provider.tsx └── ui │ ├── accordion.tsx │ ├── alert-dialog.tsx │ ├── alert.tsx │ ├── aspect-ratio.tsx │ ├── avatar.tsx │ ├── badge.tsx │ ├── breadcrumb.tsx │ ├── button.tsx │ ├── calendar.tsx │ ├── card.tsx │ ├── carousel.tsx │ ├── chart.tsx │ ├── checkbox.tsx │ ├── collapsible.tsx │ ├── command.tsx │ ├── context-menu.tsx │ ├── dialog.tsx │ ├── drawer.tsx │ ├── dropdown-menu.tsx │ ├── form.tsx │ ├── hover-card.tsx │ ├── input-otp.tsx │ ├── input.tsx │ ├── label.tsx │ ├── menubar.tsx │ ├── navigation-menu.tsx │ ├── pagination.tsx │ ├── popover.tsx │ ├── progress.tsx │ ├── radio-group.tsx │ ├── resizable.tsx │ ├── scroll-area.tsx │ ├── select.tsx │ ├── separator.tsx │ ├── sheet.tsx │ ├── sidebar.tsx │ ├── skeleton.tsx │ ├── slider.tsx │ ├── sonner.tsx │ ├── switch.tsx │ ├── table.tsx │ ├── tabs.tsx │ ├── textarea.tsx │ ├── toast.tsx │ ├── toaster.tsx │ ├── toggle-group.tsx │ ├── toggle.tsx │ ├── tooltip.tsx │ ├── use-mobile.tsx │ └── use-toast.ts ├── hooks ├── use-mobile.tsx └── use-toast.ts ├── images ├── arch-diagram.png └── hiring-process-flowchart.png ├── lib ├── resumeScreening.ts └── utils.ts ├── next.config.mjs ├── package-lock.json ├── package.json ├── postcss.config.mjs ├── public ├── placeholder-logo.png ├── placeholder-logo.svg ├── placeholder-user.jpg ├── placeholder.jpg └── placeholder.svg ├── requirements.txt ├── src ├── app │ └── api │ │ └── screen-resumes │ │ └── route.ts └── lib │ ├── candidateRanker.ts │ ├── jobDescriptionProcessor.ts │ ├── resumeProcessor.ts │ └── resumeScorer.ts ├── styles └── globals.css ├── tailwind.config.ts ├── tsconfig.json ├── utils └── utils.py └── v0-user-next.config.js /AI Engineering Fundementals/Day 2: Agents vs Workflows.md: -------------------------------------------------------------------------------- 1 | # Day 2: Agents vs Workflows 2 | 3 | ### Agents vs. Workflows: The Core Difference 4 | 5 | "Agent" and "workflow" get tossed around a lot—but they mean very different things. 6 | 7 | According to Anthropic: 8 | 9 | > Workflows follow predefined code paths to orchestrate LLMs and tools. 10 | > 11 | > 12 | > **Agents** direct their own tool use and decision-making on the fly. 13 | > 14 | 15 | Put simply: 16 | 17 | - **Workflows** = scripted steps. 18 | - **Agents** = dynamic, self-directed behavior. 19 | 20 | Workflows are ideal for tasks that require precision and repetition. Agents thrive in complex, changing environments. 21 | 22 | ### When to Use Agents vs. Workflows 23 | 24 | Choosing between an agent and a workflow depends on the complexity and variability of the task at hand. Anthropic suggests: 25 | 26 | - **Opt for Workflows** when: 27 | - Tasks are repetitive and well-defined. 28 | - Consistency and speed are paramount. 29 | - There's minimal need for decision-making or adaptation. 30 | - **Opt for Agents** when: 31 | - Tasks are complex and require adaptability. 32 | - The environment is dynamic, and decisions need to be made in real-time. 33 | - There's a need for the system to learn and evolve over time. 34 | 35 | It's also worth noting that hybrid approaches can be effective, where workflows handle routine tasks, and agents take over when adaptability is required. 36 | 37 | ### From Theory to Execution: Building Effective Agents 38 | 39 | Now that you understand the difference between agents and workflows, it’s time to put that knowledge into practice. In this next section, we’ll explore how to actually build an effective agent using OpenAI’s Agent SDK. You’ll also learn how to implement one of the five common workflow patterns called routing by leveraging built-in tools and modular agent design. Let’s get into how it all comes together. 40 | 41 | ## Building Blocks 42 | 43 | The OpenAI Agent SDK introduces some old and some new concepts to define and manage agent workflows: 44 | 45 | - Agents: LLMs configured with instructions, tools, guardrails, and handoffs 46 | - Tools: Functions that agents can use to take actions beyond just generating text 47 | - Handoffs: A specialized tool used for transferring control between agents 48 | - Context: Memory of past actions and custom context passed to 49 | - Guardrails: Configurable safety checks for input and output validation 50 | 51 | *The Agent SDK also has a tracing module that allows you to view, debug, and optimize your workflows inside OpenAI’s developer dashboard. 52 | 53 | ### **How to define agents in OpenAI SDK?** 54 | 55 | ```jsx 56 | from agents import Agent 57 | 58 | basic_agent = Agent( 59 | name="My First Agent", 60 | instructions="You are a helpful coding tutor.", 61 | model="gpt-4o" # Optional: defaults to "gpt-4o" if not specified 62 | ) 63 | 64 | ``` 65 | 66 | ** 67 | 68 | ![image (5)](https://github.com/user-attachments/assets/b0426701-7844-4571-9975-4824490ac6f6) 69 | 70 | *Agent as a feedback loop with access to your custom environment* 71 | 72 | At the center of the OpenAI Agent SDK is the Agent class. It has 3 main components: **name, instructions, and model.** 73 | 74 | Additionally, you can select and define more attributes, like *tools, output_type, and handoffs*. See the [**documentation**](https://openai.github.io/openai-agents-python/agents/?utm_source=breakintodata.beehiiv.com&utm_medium=newsletter&utm_campaign=build-effective-agents-with-openai-agents-sdk) for more details. 75 | 76 | ### Workflow Patterns You’ll Learn 77 | 78 | A **workflow** is a structured system where an LLM or tool follows a predefined sequence of steps to complete a task. Unlike agents, which dynamically decide how to act, workflows operate on a fixed path—great for reliability, speed, and repeatability. 79 | 80 | Anthropic outlines five powerful workflow patterns that help LLM systems scale with consistency and structure. Here’s what we’ll be exploring: 81 | 82 | - **Prompt Chaining** 83 | 84 | Breaks down a complex task into multiple LLM calls, where the output of one step becomes the input for the next. Great for tasks that benefit from staged reasoning or transformation. 85 | 86 | ![image (6)](https://github.com/user-attachments/assets/687db343-6291-4cb4-a34a-61985d562551) 87 | 88 | 89 | - **Parallelization** 90 | 91 | Speeds things up by running multiple calls at once. 92 | 93 | - *Sectioning*: Breaks a task into independent pieces. 94 | - *Voting*: Runs the same task several times and picks the best answer. 95 | - 96 | 97 | ![image (7)](https://github.com/user-attachments/assets/f9bc502e-d35e-46e3-817e-3da4a5a5a320) 98 | 99 | - **Orchestrator–Workers** 100 | 101 | A main LLM (the orchestrator) assigns subtasks to worker LLMs. Unlike parallelization, subtasks are generated dynamically, depending on what the input requires. 102 | 103 | ![image (8)](https://github.com/user-attachments/assets/fd2f9bd2-5410-4dab-ada7-b7d045b2faa3) 104 | 105 | - **Evaluator–Optimizer** 106 | 107 | Think of this like peer review. One model generates, another evaluates, and the first improves its output based on feedback—just like a writer and editor. 108 | 109 | ![image (9)](https://github.com/user-attachments/assets/8030bd64-b025-46fa-ba5d-936a0f1a9139) 110 | 111 | - **Routing** 112 | 113 | Classifies the user’s input and sends it down the right specialized workflow. This keeps performance high by customizing how different input types are handled. 114 | 115 | ![image (10)](https://github.com/user-attachments/assets/e44dd9a3-a31d-4945-838f-9d374066c9de) 116 | 117 | 118 | You’ve just explored one of the most critical choices in modern AI systems—**agents vs. workflows**—and got a preview of the building blocks that bring them to life. 119 | 120 | Tomorrow, we’ll dive into how to **build agents using the OpenAI SDK**, with a focus on **tool use and retrieval-augmented generation (RAG)** to supercharge your agent’s capabilities. 121 | 122 | -------------------------------------------------------------------------------- /AI Engineering Fundementals/Day 3: Rag and Tool Use.md: -------------------------------------------------------------------------------- 1 | # Day 3: Rag and Tool Use 2 | 3 | Generative AI is powerful, but today we’re diving into two capabilities that make it *practical*: **Retrieval-Augmented Generation (RAG)** and **Tool Use**. These are the keys to grounding your agents in real-world data—and letting them take real-world actions. 4 | 5 | Let’s start with RAG—the secret sauce for making AI more accurate and reliable. 6 | 7 | **Retrieval-Augmented Generation** bridges the gap between static LLM knowledge and up-to-date, relevant information. Instead of relying only on what the model was trained on, RAG lets your agent pull in current, domain-specific data at runtime. 8 | 9 | **How it works:** 10 | 11 | 1. **Indexing**: External data is embedded and stored in a vector database. 12 | 2. **Retrieval**: When a query is made, relevant docs are pulled based on similarity. 13 | 3. **Augmentation**: Those results are added to the LLM’s prompt. 14 | 4. **Generation**: The model creates a response using both training and retrieved data. 15 | 16 | **Why it matters:** 17 | 18 | - Reduces hallucinations 19 | - Answers with up-to-date info 20 | - Uses internal company data 21 | - Supports citations 22 | - Avoids retraining models 23 | 24 | ### A Real Example: Grocery Search with RAG 25 | 26 | To see RAG in action, let’s look at this example 27 | 28 | ![image (11)](https://github.com/user-attachments/assets/013d624c-e556-4a29-8c94-56e052943475) 29 | 30 | This diagram shows the core flow of Retrieval-Augmented Generation: a user submits a query, the system retrieves relevant data from connected sources, then combines that data with the query and passes it to the LLM. The model uses both the question and the retrieved context to generate a grounded, accurate response—closing the loop by sending the answer back to the user. 31 | 32 | RAG isn’t just about tossing queries into a vector store—it’s a full **search strategy**. It starts by retrieving relevant content from external data sources, then feeds both the query and retrieved info into the LLM. The result? Answers grounded in real, up-to-date knowledge. It’s not just about finding similar text—it’s about combining metadata filtering, semantic search, and reranking to deliver context-rich, accurate responses. 33 | 34 | ### What Is Tool Use? 35 | 36 | RAG is about knowing more. But what about *doing* more? 37 | 38 | That’s where **Tool Use** comes in. Tool use lets your agent move from being a smart assistant to an *action-taker*. It can fetch live data, call APIs, run code, or interact with software—all automatically. 39 | 40 | **Two common patterns:** 41 | 42 | - **Chains**: Pre-set sequences of tool use. 43 | - **Agents**: Decide dynamically which tools to use and when. 44 | 45 | **Typical process:** 46 | 47 | - Define the tool (function/API) 48 | - Parse the LLM output 49 | - Execute the tool with structured input 50 | 51 | ### Built-In Tools: Out-of-the-Box Power 52 | 53 | OpenAI's Agent SDK comes equipped with several hosted tools that empower agents to perform complex tasks without extensive setup: 54 | 55 | - **WebSearchTool**: Enables agents to fetch real-time information from the internet, providing up-to-date responses. 56 | - **FileSearchTool**: Allows agents to retrieve information from OpenAI Vector Stores, facilitating access to proprietary or domain-specific knowledge. [OpenAI GitHub](https://openai.github.io/openai-agents-python/tools/?utm_source=chatgpt.com) 57 | - **ComputerTool**: Grants agents the ability to perform tasks on a computer, such as file manipulation or executing commands, enhancing automation capabilities. [OpenAI GitHub](https://openai.github.io/openai-agents-python/tools/?utm_source=chatgpt.com) 58 | 59 | These tools are designed to work seamlessly with OpenAI models, streamlining the development of sophisticated AI agents. 60 | 61 | ### Using Hosted Tools in Your Agent 62 | 63 | Here’s a quick example of how to equip your agent with **hosted tools** using the OpenAI SDK: 64 | 65 | ```jsx 66 | from agents import Agent, FileSearchTool, Runner, WebSearchTool 67 | 68 | agent = Agent( 69 | name="Assistant", 70 | tools=[ 71 | WebSearchTool(), 72 | FileSearchTool( 73 | max_num_results=3, 74 | vector_store_ids=["VECTOR_STORE_ID"], 75 | ), 76 | ], 77 | ) 78 | 79 | async def main(): 80 | result = await Runner.run(agent, "Which coffee shop should I go to, taking into account my preferences and the weather today in SF?") 81 | print(result.final_output) 82 | 83 | ``` 84 | 85 | This agent is equipped with two powerful tools—**WebSearchTool** and **FileSearchTool**. When given a natural language prompt, it can pull live data from the web and context from a vector store to generate a personalized answer. It’s a prime example of RAG and Tool Use working together in action 86 | 87 | ### Your Journey Starts Here 88 | 89 | You’ve just leveled up your agents with two game-changing capabilities—RAG and tool use—turning them into systems that can access real-time information and take meaningful action. 90 | Tomorrow, we’ll explore another critical ingredient: memory. You’ll learn how to teach your agents to remember past interactions, retain long-term context, and build smarter, more personal experiences. 91 | 92 | -------------------------------------------------------------------------------- /AI Engineering Fundementals/Day 5: Guardrails, Tracing & Evals.md: -------------------------------------------------------------------------------- 1 | # **Day 5: Guardrails, Tracing & Evals** 2 | 3 | ## **Building Safe & Reliable AI Agents** 4 | 5 | Launching an AI feature should feel exciting, not terrifying. Below is a roadmap that shows how modern teams keep their large-language-model (LLM) agents safe, fast, and easy to debug. In our **first AI Engineering Bootcamp**, industry experts and practicing machine-learning engineers walked students through real evals and tracing best practices. Today we are going to learn about these concepts. 6 | 7 | --- 8 | 9 | ## **What is tracing in AI?** 10 | 11 | When an end user clicks **Send**, dozens of things happen inside your app: 12 | 13 | - their message reaches an API gateway 14 | - the gateway calls a retrieval tool 15 | - the LLM writes a draft answer 16 | - a database stores the result 17 | - your front-end finally shows the text 18 | 19 | **Tracing** simply writes a short note at each step: *what happened, when it happened, how long it took, and how many tokens or dollars it cost*. Later, you can play those notes back in order and spot slow parts, errors, or unusual patterns. 20 | 21 | ADD OpenAI Dashboard screenshot. 22 | 23 | ### **Why observability is important** 24 | 25 | Most standard monitoring tools only warn you when a web request is slow. AI agents, however, can go off the rails in new ways—making things up (hallucinations), getting stuck in loops, or using way too many tokens and racking up huge costs. Agent-aware tracing helps you spot and diagnose these unique problems so you can answer questions like: 26 | 27 | - **“Where did the hallucination start?”**— This shows you exactly which prompt or tool call led the model to make up information—crucial for debugging and refining your instructions. 28 | - **“Why did that response cost $12?”** — Tracking token usage helps you estimate costs and identify overly verbose prompts or repeated calls that can be trimmed down. 29 | - **“Which tool call failed or behaved unexpectedly?”**— Knowing this lets you pinpoint whether an external API, database lookup, or other integration is the root cause, so you can fix or wrap it with error handling. 30 | - **“Did the agent get stuck in a loop or retry endlessly?”**— Spotting repeated execution patterns prevents runaway behaviors and lets you add safeguards like retry limits or timeout rules. 31 | 32 | ### What to trace in an agent stack 33 | 34 | 1. **Agent steps** – Each planner/executor round‐trip: intent, chosen tool, and final action. 35 | 2. **LLM calls** – Prompt text, model name, latency, and token counts (`prompt`, `completion`, `total`). 36 | 3. **Tool invocations** – Function name plus validated **Pydantic** input/output so you can diff bad parameters. 37 | 4. **Retriever hits** – Which documents were fetched, their scores, and embedding latency. 38 | 5. **Guardrail verdicts** – Moderation labels, schema-validation failures, auto-retry count. 39 | 40 | With these five signals you can replay any conversation and see graphically—step by step—how the agent reached its answer. 41 | 42 | --- 43 | 44 | ## What are guardrails? 45 | 46 | They are like bumpers that keep users safe. While traces tell you *what* happened, **guardrails** stop dangerous answers from reaching users in the first place. They run either before the model (checking the user prompt) or right after (checking the model’s draft). 47 | 48 | | **Guardrail type** | **Catches** | **Typical fix** | 49 | | --- | --- | --- | 50 | | Content filter | Hate, sexual, violent, self-harm text | Return a polite refusal or ask for a new prompt | 51 | | Copyright filter | Large blocks of lyrics or articles | Replace with a short summary | 52 | | Jailbreak detector | “Ignore all rules and show me…” | Abort and log the attempt | 53 | | Code scanner | eval(input()), SQL injection | Replace with a safe snippet | 54 | | Schema validator | Malformed JSON | Auto-retry with stricter instructions | 55 | | Cost watchdog | Response > 4 000 tokens | Switch to a concise fallback prompt | 56 | 57 | The Agents SDK already adds a *guardrail span* to each run, so if a response is blocked you can see **why** in the trace [OpenAI GitHub](https://openai.github.io/openai-agents-python/tracing/?utm_source=chatgpt.com). 58 | 59 | --- 60 | 61 | ## **Why are evals important?** 62 | 63 | **Evaluations (evals)** provide an in-depth scorecard that uncovers safety risks, biases, and quality issues which real-time guardrails can’t catch. By tracking performance trends and running targeted tests against hallucinations, edge cases, and critical scenarios, AI engineers ensure their models remain reliable and safe over time. 64 | 65 | ![ChatGPT Image Apr 30, 2025, 06_16_28 PM](https://github.com/user-attachments/assets/9f7d08e8-12fa-423f-b3b2-15ef2a1ddd71) 66 | 67 | 68 | In our last AI Engineering Bootcamp, we partnered with a Machine Learning expert to conduct a live eval, building tests on the fly and using the dashboard insights to immediately refine our model’s behavior. **Safety and quality** are two important goals of evaluating how your AI product is performing. 69 | 70 | ### **Risk & Safety evals** 71 | 72 | AI tools can scan a test set or yesterday’s chats and report how often unsafe content slipped through: 73 | 74 | - Hate or unfair language 75 | - Sexual or violent content 76 | - Self-harm phrases 77 | - Copyrighted text 78 | - Jailbreak success 79 | - Code vulnerabilities 80 | - Ungrounded guesses about a user’s race, gender, or mood 81 | 82 | You choose a tolerance, for example “anything medium severity or above is a defect,” and track the **defect rate** over time. 83 | 84 | ### **Quality evals** 85 | 86 | Protecting users is not enough; your bot must still answer well. Common metrics, usually scored 1-5: 87 | 88 | - **Intent Resolution** – did the bot grasp what the user wanted? 89 | - **Tool Call Accuracy** – did it choose the right function with the right parameters? 90 | - **Task Adherence** – did it follow all instructions? 91 | - **Response Completeness** – did it cover every fact in the ground truth? 92 | - **Groundedness** – in RAG systems, are all claims supported by retrieved docs? 93 | - **Retrieval Quality** – are the best passages at the top? 94 | - **Relevance, Coherence, Fluency** – classic measures of correctness and readability. 95 | - **Similarity / BLEU / ROUGE / F1** – overlap with reference answers if you have them. 96 | 97 | Hook these checks into GitHub Actions. If a pull request pushes relevance below 4/5 or hate defect rate above 0.1 %, the build fails. 98 | 99 | --- 100 | 101 | ## **Putting It All Together** 102 | 103 | Tracing, guardrails, and evals form a simple loop: 104 | 105 | 1. **Trace everything** so you can see problems. 106 | 2. **Block problems fast** with guardrails. 107 | 3. **Prove fixes work** with nightly or CI evals. 108 | 109 | The **OpenAI Agents SDK** switches tracing on by default, lets you add guardrails in one decorator, and exports span data ready for evaluation tools [OpenAI GitHub](https://openai.github.io/openai-agents-python/config/?utm_source=chatgpt.com). Start with those defaults, add your domain-specific tweaks, and you’ll have a production-grade AI feature that beginners can understand and experts can trust. 110 | 111 | -------------------------------------------------------------------------------- /SWE Fundamentals/Day 10: Deploying with Vercel.md: -------------------------------------------------------------------------------- 1 | Today is all about understanding the costs behind running real-world AI applications. From OpenAI API tokens to vector database storage, you’ll break down pricing models and learn how to estimate usage and expenses. After deploying your first tool, this helps you make smart, budget-conscious decisions as you continue building. 2 | 3 | ## 🚀 Deployment (with Vercel) 4 | 5 | In the bootcamp, you won’t just build cool AI tools—you’ll **deploy** them to the internet so anyone can try them out. Meet your future best friend: **Vercel**. 6 | 7 | ### ⚙️ How Vercel Works 8 | 9 | 1. Connect your GitHub repo to [Vercel](https://vercel.com/) 10 | 2. Every time you `git push`, Vercel: 11 | - Runs build commands 12 | - Deploys your code 13 | - Gives you a live URL to test + share 14 | 15 | [Vibe Coders Tool & Model Pricing breakdown](https://www.notion.so/Vibe-Coders-Tool-Model-Pricing-breakdown-1dab8a917a99809eb1afc88c8bcf50b0?pvs=21) 16 | 17 | 3. You can preview changes before going live. 18 | 19 | > 💡 Zero DevOps Required — Vercel handles servers, scaling, SSL, and builds for you. 20 | > 21 | 22 | Just **push to GitHub**, and your app is live within seconds. 23 | 24 | ### 🧭 When You’ll Use This 25 | 26 | Once your RAG or agent-based tool is working locally, you’ll: 27 | 28 | - Push your repo to GitHub 29 | - Connect that repo to Vercel 30 | - Share your deployed project with the world 31 | 32 | ### 🧪 Bonus: Environment Variables 33 | 34 | Need to hide secret keys (like OpenAI keys)? You’ll store them securely in Vercel’s **Environment Variables** dashboard. 35 | 36 | ![image (16)](https://github.com/user-attachments/assets/47669380-f65f-40d5-97e3-09658db1aab0) 37 | 38 | 39 | ## 🧱 Other Deployment Options (What Comes After Vercel?) 40 | 41 | While **Vercel** is perfect for fast, frontend-first AI apps, here are other options you may explore as your projects grow in complexity or scale. 42 | 43 | --- 44 | 45 | ### 🐳 **Docker (Containerized Deployments)** 46 | 47 | **Use when**: You want full control over your app's environment and dependencies (especially for custom Python backends or ML models). 48 | 49 | - **How it works**: Package your app and its environment into a portable “container.” 50 | - **Deploy with**: Render, Railway, or self-hosted on VPS (e.g., DigitalOcean, Linode). 51 | - **Great for**: 52 | - Python + Node combos 53 | - RAG pipelines using LangChain + vector DBs 54 | - Apps with GPU requirements 55 | 56 | > 🔒 You control everything—from Python version to system libraries. 57 | > 58 | 59 | --- 60 | 61 | ### 🧰 **Render / Railway / Fly.io** 62 | 63 | **Use when**: You want backend flexibility (e.g., running Flask/FastAPI or background workers), but with a **Vercel-like simplicity**. 64 | 65 | - Good for apps that need: 66 | - Custom ports 67 | - Background jobs 68 | - Scheduled tasks (cron) 69 | 70 | --- 71 | 72 | ### 🏢 **Enterprise-Grade Deployment** 73 | 74 | **Use when**: You’re deploying inside a large company or working with sensitive data. 75 | 76 | ### 🔧 Tools: 77 | 78 | - **AWS / GCP / Azure** (Cloud providers) 79 | - **Kubernetes (K8s)** – for scalable, container-based infra 80 | - **CI/CD Pipelines** – with GitHub Actions, Jenkins, etc. 81 | - **Terraform / Pulumi** – for infrastructure-as-code 82 | 83 | ### 💼 Considerations: 84 | 85 | - Data privacy 86 | - Team access control 87 | - Custom DNS, load balancers, staging environments 88 | 89 | --- 90 | 91 | ### 📦 **Streamlit / Gradio / Hugging Face Spaces** 92 | 93 | **Use when**: You want to quickly demo or prototype ML/AI projects with a **friendly UI**. 94 | 95 | - One-click deployment for ML models or interactive dashboards. 96 | - Excellent for sharing proof-of-concept tools. 97 | 98 | --- 99 | 100 | ## 🧠 Choosing the Right Deployment Tool: Quick Guide 101 | 102 | | Scenario | Tool Suggestion | Why? | 103 | | --- | --- | --- | 104 | | Frontend-first apps with APIs | **Vercel** | Fast, auto-build, easy setup, suitable for startups | 105 | | Python + ML pipelines | **Render / Railway** | Supports Python + background jobs | 106 | | Custom infra with Docker | **Docker + VPS / Fly.io** | Full control | 107 | | Enterprise use or scaling large systems | **AWS / GCP + K8s** | Professional-grade infra, enterprise use cases | 108 | | Quick AI demos or interactive UIs | **Streamlit / Gradio** | Zero config, fast sharing | 109 | 110 | 111 | AND MUCH MORE! 112 | 113 | The simple Git workflow and coding environment you've set up will be the foundation for increasingly sophisticated AI engineering projects throughout the bootcamp. Get ready to transform from a developer to an AI Product Engineer! 114 | -------------------------------------------------------------------------------- /SWE Fundamentals/Day 11: Vibe Coder’s Tool and Model pricing breakdown.md: -------------------------------------------------------------------------------- 1 | Today is all about understanding the costs behind running real-world AI applications. From OpenAI API tokens to vector database storage, you’ll break down pricing models and learn how to estimate usage and expenses. After deploying your first tool, this helps you make smart, budget-conscious decisions as you continue building. 2 | 3 | ## Pricing break down of end-to-end AI Projects 4 | 5 | ### Tool Costs & Subscription Options 6 | 7 | **1. GitHub Plans** 8 | 9 | | Plan | Cost | Key Features | 10 | | --- | --- | --- | 11 | | **Free** | $0 | • Unlimited public/private repositories• Basic CI/CD with GitHub Actions (2,000 minutes/month)• GitHub Discussions• 500MB GitHub Packages storage• GitHub Pages hosting | 12 | | **Pro** | $4/month | • Everything in Free• 3,000 Actions minutes/month• 2GB GitHub Packages storage• Advanced code review tools• Repository insights | 13 | | **Team** | $4/user/month | • Everything in Pro• Team access controls and management• Draft pull requests• Required reviewers• 3,000 Actions minutes/month | 14 | | **Enterprise** | $21/user/month | • Everything in Team• SAML single sign-on• Advanced security features• 50,000 Actions minutes/month• Dedicated support• Compliance features | 15 | 16 | **Special Programs:** 17 | 18 | - **GitHub Education**: Free Pro for students/teachers 19 | - **GitHub Nonprofits**: Free Team plan for qualifying organizations 20 | - **GitHub Accelerator**: Free credits for startups 21 | 22 | **2. Cursor IDE** 23 | 24 | | Plan | Cost | Key Features | 25 | | --- | --- | --- | 26 | | **Free** | $0 | • Basic AI features using Cursor's built-in models• Limited daily AI completions and chats• Access to GPT-3.5 Turbo• All core IDE capabilities• ~500K tokens/month (~250 substantial AI interactions) | 27 | | **Pro** | $20/month | • Everything in Free• Unlimited AI usage• Access to GPT-4 and Claude models• Higher quality completions• Longer context in chats• Priority support | 28 | | **Teams** | $24/user/month | • Everything in Pro• Team management• Shared AI settings and rules• Usage analytics• Centralized billing• Team shared templates | 29 | 30 | **Annual Discounts:** 31 | 32 | - Pro: $192/year ($16/month equivalent, 20% savings) 33 | - Teams: $240/user/year ($20/user/month equivalent, 17% savings) 34 | 35 | **3. AI Integration Costs** 36 | 37 | **OpenAI API Pricing** 38 | 39 | | Model | Input Cost | Output Cost | Context Window | Use Case | 40 | | --- | --- | --- | --- | --- | 41 | | **GPT-3.5 Turbo** | $0.0005/1K tokens | $0.0015/1K tokens | 16K | General assistance, simpler coding tasks | 42 | | **GPT-4 Turbo** | $0.01/1K tokens | $0.03/1K tokens | 128K | Complex reasoning, advanced coding | 43 | | **GPT-4o** | $0.005/1K tokens | $0.015/1K tokens | 128K | Optimized performance and cost balance | 44 | | **GPT-4 Vision** | $0.01/1K tokens | $0.03/1K tokens | 128K | Image understanding and processing | 45 | 46 | **Claude API Pricing** 47 | 48 | | Model | Input Cost | Output Cost | Context Window | Use Case | 49 | | --- | --- | --- | --- | --- | 50 | | **Claude 3 Haiku** | $0.00025/1K tokens | $0.00125/1K tokens | 200K | Fast responses, simpler tasks | 51 | | **Claude 3 Sonnet** | $0.003/1K tokens | $0.015/1K tokens | 200K | Balance of quality and cost | 52 | | **Claude 3 Opus** | $0.015/1K tokens | $0.075/1K tokens | 200K | Highest quality responses | 53 | | **Claude 3.5 Sonnet** | $0.008/1K tokens | $0.024/1K tokens | 200K | Enhanced reasoning and capabilities | 54 | 55 | **Google AI / Gemini API Pricing** 56 | 57 | | Model | Input Cost | Output Cost | Context Window | Use Case | 58 | | --- | --- | --- | --- | --- | 59 | | **Gemini 1.5 Flash** | $0.00035/1K tokens | $0.00105/1K tokens | 1M | Quick responses, standard tasks | 60 | | **Gemini 1.5 Pro** | $0.0025/1K tokens | $0.0075/1K tokens | 1M | More powerful capabilities, multimedia | 61 | | **Gemini 1.5 Ultra** | $0.00875/1K tokens | $0.02625/1K tokens | 1M | Most advanced capabilities | 62 | 63 | **Free Credits & Trial Options:** 64 | 65 | - **OpenAI**: $5 in free credits for new users 66 | - **Anthropic (Claude)**: $10-$25 in free credits for new sign-ups 67 | - **Google AI**: $10 in free credits for new users 68 | 1. **🔗 [Vercel Pricing](https://vercel.com/pricing)** 69 | 70 | **4. Cost Optimization Strategies** 71 | 72 | **Token Usage Optimization:** 73 | 74 | - Use smaller context windows when possible 75 | - Choose the appropriate model for the task (don't use GPT-4 when GPT-3.5 suffices) 76 | - Implement caching for repeated queries 77 | - Batch similar requests together 78 | - Use system prompts efficiently 79 | 80 | **Subscription Management:** 81 | 82 | - Start with free tiers to evaluate needs 83 | - Consider annual plans for long-term usage (15-20% savings) 84 | - Share team accounts when appropriate 85 | - Monitor usage patterns and adjust subscriptions accordingly 86 | 87 | **5. Additional Considerations** 88 | 89 | **Return on Investment:** 90 | 91 | - 20-40% development time reduction with proper AI integration 92 | - Faster onboarding to new techniques, skills and tools 93 | - Improved code quality and documentation 94 | - Reduced debugging time 95 | 96 | **Cost Estimation Examples:** 97 | 98 | | Task | Approximate Cost | 99 | | --- | --- | 100 | | **Building a basic project** (Free GitHub + Free Cursor + occasional AI) | $0/month | 101 | | **Active personal development** (Free GitHub + Free Cursor + Free Vercel + moderate AI usage) | $0 + $0 + $0+ ~$5-10 = $5-10/month | 102 | | **Super user development** (Free GitHub + Cursor Pro + Free Vercel + regular AI usage | $0+ $20 + 0 + ~$15-20 = $35-40/month | 103 | -------------------------------------------------------------------------------- /SWE Fundamentals/Day 12: SWE and Web Fundamentals.md: -------------------------------------------------------------------------------- 1 | This lesson dives into how web applications actually work—from frontend interfaces to backend logic and databases. Understanding how data moves across the web is crucial for any AI engineer aiming to build tools that are more than just local scripts. 2 | 3 | ## 🎯 Learning Objectives 4 | 5 | By completing this module, you'll be able to: 6 | 7 | - Understand frontend/backend architecture 8 | - Explore system design fundamentals 9 | - Learn TypeScript basics for AI applications 10 | - Master API concepts and implementation 11 | 12 | --- 13 | 14 | ## 📚 Day 12: Web Application Fundamentals 15 | 16 | ### Client-Side vs. Server-Side Development 17 | 18 | ### What You'll Learn: 19 | 20 | - The distinction between frontend and backend 21 | - How web applications communicate 22 | - The request-response cycle 23 | 24 | ### Frontend (Client-Side) 25 | 26 | - What users see and interact with (UI/UX) 27 | - Runs in the user's browser 28 | - Technologies: HTML, CSS, JavaScript, React, Vue, Angular 29 | - Responsible for: Rendering UI, handling user interactions, making API calls 30 | 31 | ### Backend (Server-Side) 32 | 33 | - Invisible to users but powers the application 34 | - Runs on remote servers 35 | - Technologies: Node.js, Python, Java, Go, etc. 36 | - Responsible for: Business logic, data processing, authentication, database operations 37 | 38 | ### Frontend vs Backend 39 | 40 | | Frontend (Client) | Backend (Server) | 41 | | --- | --- | 42 | | What user sees (UI) | Handles logic, databases | 43 | | HTML, CSS, JS | Python, Node.js, Django, Express | 44 | | Runs in browser | Runs on server | 45 | 46 | Further Reading: https://www.fullstackfoundations.com/blog/client-side-vs-server-side 47 | 48 | ![ChatGPT Image May 1, 2025, 02_04_07 PM](https://github.com/user-attachments/assets/d4755402-bd4e-44c2-841c-95b0df007513) 49 | 50 | *Image Resource: https://medium.com/@donotapply/client-side-vs-server-side-whats-the-difference-a933341cd60e* 51 | 52 | ### The Request-Response Cycle: 53 | 54 | ```mermaid 55 | sequenceDiagram 56 | participant User 57 | participant Browser(Frontend/ Client side) 58 | participant Server 59 | participant Database 60 | 61 | User->>Browser(Frontend/ Client side): Clicks "Login" 62 | Browser(Frontend/ Client side)->>Server: POST /login {credentials} 63 | Server->>Database: Query user data 64 | Database->>Server: Return user record 65 | Server->>Browser(Frontend/ Client side): Response (success/error) 66 | Browser(Frontend/ Client side)->>User: Updates UI (dashboard/error) 67 | 68 | ``` 69 | 70 | ![the_request_response_cycle.png](attachment:76752f86-a2b3-49f0-8f04-e47b82facc6c:25786552-348f-4832-91be-05ba41c46c63.png) 71 | 72 | ### Application Layers & Middleware 73 | 74 | Web applications typically have several distinct layers: 75 | 76 | ### Presentation Layer 77 | 78 | - User interfaces (websites, mobile apps) 79 | - Handles rendering and user interaction 80 | - Communicates with application layer via API 81 | 82 | ### Application Layer 83 | 84 | - Core business logic and processing 85 | - Handles requests from presentation layer 86 | - Communicates with data layer 87 | 88 | ### Data Layer 89 | 90 | - Database storage and retrieval 91 | - Data persistence and management 92 | - Structured or unstructured data storage 93 | 94 | ### Middleware 95 | 96 | Software components that connect different application parts: 97 | 98 | - Authentication middleware 99 | - Logging middleware 100 | - Error handling middleware 101 | - API gateway 102 | 103 | Further reading on Middleware - https://srivastavayushmaan1347.medium.com/understanding-middlewares-a-comprehensive-guide-with-practical-examples-c80383f888d5 104 | 105 | **Different Layers of Modern Web Application Architecture** 106 | ![Screenshot 2025-04-19 at 12 34 58 PM](https://github.com/user-attachments/assets/b8e3b51f-6257-472a-a78f-c417693d8d41) 107 | 108 | 109 | *Image Resource:* https://www.sayonetech.com/blog/web-application-architecture/ 110 | 111 | Web Application Architecture Further Reading - https://www.sayonetech.com/blog/web-application-architecture/ 112 | 113 | ### System Design Concepts 114 | 115 | Reading - [System Design Basics](https://dev.to/kaustubhyerkade/system-design-fundamentals-a-complete-guide-for-beginners-3n95) 116 | 117 | Key Components: 118 | 119 | ![image (12)](https://github.com/user-attachments/assets/1b0c0fd1-c46f-41c3-87e3-a942d18dbaa9) 120 | 121 | 122 | ### Basic Application Data Flow: 123 | 124 | ![image (13)](https://github.com/user-attachments/assets/4ac4179a-c674-4af7-9116-3f97d07f0eaf) 125 | 126 | 127 | ### Example: AI-Powered Text Analysis App 128 | 129 | ![image (14)](https://github.com/user-attachments/assets/54dc89aa-6e0e-46b1-8026-936ab5fb0a84) 130 | 131 | 132 | ### Practice Activities: 133 | 134 | 1. **System Component Identification:** 135 | - For a familiar app (like Instagram or Spotify), identify the frontend components, backend components, and databases. 136 | 2. **Data Flow Mapping:** 137 | - Map out what happens when you post a comment on a social media platform, from UI to database and back. 138 | 3. **Layer Identification Exercise:** 139 | - For a note-taking app, identify what belongs in each layer (presentation, application, data). 140 | 141 | ### Resources: 142 | 143 | - [Frontend vs Backend Development](https://www.fullstackfoundations.com/blog/client-side-vs-server-side) 144 | - [Web Application Architecture](https://www.sayonetech.com/blog/web-application-architecture/) 145 | - [Middleware](https://srivastavayushmaan1347.medium.com/understanding-middlewares-a-comprehensive-guide-with-practical-examples-c80383f888d5) 146 | - [System Design Basics](https://dev.to/kaustubhyerkade/system-design-fundamentals-a-complete-guide-for-beginners-3n95) 147 | 148 | --- 149 | 150 | -------------------------------------------------------------------------------- /SWE Fundamentals/Day 13: TypeScript & Programming Fundamentals.md: -------------------------------------------------------------------------------- 1 | Today you’ll learn the essentials of programming with TypeScript, a powerful language that helps catch bugs early and write clearer code. We’ll walk through variables, types, functions, and other core concepts. This gives you the quick primer needed to build secure and scalable AI applications, especially as you start connecting interfaces, APIs, and business logic. 2 | 3 | ### Why TypeScript for AI Applications? 4 | 5 | TypeScript adds strong typing to JavaScript, which is particularly valuable when working with AI applications: 6 | 7 | - **Type Safety:** Ensures data consistency between different system components 8 | - **Better Documentation:** Types serve as built-in documentation 9 | - **Error Detection:** Catches errors at compile time rather than runtime 10 | - **IDE Support:** Better autocomplete and suggestions while coding 11 | 12 | ### TypeScript Fundamentals 13 | 14 | *If you are new to programming, copy these blocks of code into Cursor and prompt in the chat to explain the code so that you can familiarize the high level concepts. You will be diving deeper into the usage of these concepts during hands on activities in the bootcamp.* 15 | 16 | *Dont worry, only a high level understanding is needed for now. You will get a much more comprehensive walk through regarding using Typescript for AI applications in the bootcamp.* 17 | 18 | ### Basic Types: 19 | 20 | Demonstrates TypeScript's primitive types (string, boolean, number) and special types (any, unknown) with simple examples. 21 | 22 | ```tsx 23 | // Basic primitive types 24 | let userName: string = "AI Student"; 25 | let isEnrolled: boolean = true; 26 | let studentCount: number = 25; 27 | let notFound: undefined = undefined; 28 | let notSet: null = null; 29 | 30 | // Special types 31 | let anything: any = "I can be anything"; 32 | let unknown: unknown = 4; // More type-safe than any 33 | 34 | ``` 35 | 36 | ### Arrays and Collections: 37 | 38 | Shows different ways to define typed arrays in TypeScript, including standard arrays, alternate syntax, and tuples. 39 | 40 | ```tsx 41 | // Arrays 42 | let topics: string[] = ["AI", "TypeScript", "APIs"]; 43 | let scores: number[] = [98, 87, 92]; 44 | 45 | // Alternate array syntax 46 | let alternatives: Array = ["Option A", "Option B"]; 47 | 48 | // Tuples (fixed-length arrays with specific types) 49 | let userStatus: [string, boolean] = ["active", true]; 50 | 51 | ``` 52 | 53 | ### Objects & Interfaces: 54 | 55 | Illustrates how to define object types directly and using interfaces, including optional properties. 56 | 57 | ```tsx 58 | // Object with inline type 59 | let student: {name: string, level: string} = { 60 | name: "Alex", 61 | level: "Beginner" 62 | }; 63 | 64 | // Interface definition 65 | interface Course { 66 | title: string; 67 | duration: number; 68 | isRequired: boolean; 69 | tags?: string[]; // Optional property 70 | } 71 | 72 | // Using the interface 73 | const aiCourse: Course = { 74 | title: "AI Product Engineering", 75 | duration: 8, 76 | isRequired: true, 77 | tags: ["AI", "engineering", "product"] 78 | }; 79 | 80 | ``` 81 | 82 | ### Functions: 83 | 84 | Demonstrates defining functions with typed parameters and return values, including arrow functions and void returns. 85 | 86 | ```tsx 87 | // Function with typed parameters and return type 88 | function calculateScore(correct: number, total: number): number { 89 | return (correct / total) * 100; 90 | } 91 | 92 | // Arrow function with types 93 | const isPassingGrade = (score: number): boolean => score >= 70; 94 | 95 | // Function that doesn't return anything 96 | function logActivity(activity: string): void { 97 | console.log(`User performed: ${activity}`); 98 | } 99 | 100 | ``` 101 | 102 | ### TypeScript for AI Applications 103 | 104 | **Typing AI Model Responses:** 105 | 106 | Shows how to create interfaces for AI model responses and functions that handle them based on confidence levels. 107 | 108 | ```tsx 109 | // Interface for AI model response 110 | interface AIModelResponse { 111 | prediction: string; 112 | confidence: number; 113 | processingTime: number; 114 | alternatives?: string[]; 115 | modelVersion: string; 116 | } 117 | 118 | // Function that processes AI response 119 | function handleAIResponse(response: AIModelResponse): void { 120 | if (response.confidence > 0.9) { 121 | console.log(`High confidence prediction: ${response.prediction}`); 122 | } else { 123 | console.log(`Low confidence (${response.confidence}). Consider alternatives: ${response.alternatives?.join(", ")}`); 124 | } 125 | } 126 | 127 | // Example usage 128 | const mockAIResponse: AIModelResponse = { 129 | prediction: "positive sentiment", 130 | confidence: 0.87, 131 | processingTime: 0.045, 132 | alternatives: ["neutral sentiment", "mixed sentiment"], 133 | modelVersion: "sentiment-1.2.0" 134 | }; 135 | 136 | handleAIResponse(mockAIResponse); 137 | 138 | ``` 139 | 140 | **Type Definitions for API Data:** 141 | 142 | Creates structured interfaces for AI text analysis requests and responses, with nested types for different analysis components. 143 | 144 | ```tsx 145 | // User input for AI text analysis 146 | interface TextAnalysisRequest { 147 | text: string; 148 | language?: string; 149 | analysisTypes: ("sentiment" | "entities" | "keywords" | "summary")[]; 150 | maxResults?: number; 151 | } 152 | 153 | // AI service response 154 | interface TextAnalysisResponse { 155 | requestId: string; 156 | language: string; 157 | sentiment?: { 158 | label: "positive" | "negative" | "neutral" | "mixed"; 159 | score: number; 160 | }; 161 | entities?: Array<{ 162 | text: string; 163 | type: string; 164 | confidence: number; 165 | }>; 166 | keywords?: string[]; 167 | summary?: string; 168 | processingTime: number; 169 | } 170 | 171 | ``` 172 | 173 | ### Practice Activities in Cursor: 174 | 175 | 1. **Type Definition Exercise:** 176 | - Create interfaces for an AI-powered image recognition system 177 | 2. **Function Typing:** 178 | - Write typed functions to process and transform AI predictions 179 | 3. **Mock API Response:** 180 | - Define types and create mock AI analysis results for a review classification system 181 | 182 | *Remember to add system rules in your Cursor set up (covered in Module 1) so that it can explain the code blocks as you vibe code these example to help you get familiarized.* 183 | 184 | ### Further Resources: 185 | 186 | - [Learn TypeScript in 50 minutes](https://www.youtube.com/watch?v=3mDny9XAgic) 187 | - [TypeScript Documentation](https://www.typescriptlang.org/docs/) 188 | - [TypeScript Playground](https://www.typescriptlang.org/play) 189 | -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/.env.example: -------------------------------------------------------------------------------- 1 | # Template for required environment variables 2 | OPENAI_API_KEY=your-openai-key-here 3 | FIRECRAWL_API_KEY=your-firecrawl-key-here 4 | -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/.gitignore: -------------------------------------------------------------------------------- 1 | # Virtual Environment 2 | venv/ 3 | env/ 4 | .env 5 | 6 | # Python 7 | __pycache__/ 8 | *.py[cod] 9 | *.so 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | *.egg-info/ 24 | 25 | # IDE 26 | .vscode/ 27 | .idea/ 28 | 29 | # OS 30 | .DS_Store 31 | 32 | .env.local 33 | .env.*.local -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/README.md: -------------------------------------------------------------------------------- 1 | # 📝 AI Cover Letter Generator 2 | 3 | [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) 4 | [![Streamlit](https://img.shields.io/badge/streamlit-1.29.0-FF4B4B.svg)](https://streamlit.io) 5 | [![OpenAI](https://img.shields.io/badge/OpenAI-GPT4-00A67E.svg)](https://openai.com/) 6 | [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 7 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) 8 | 9 | An AI-powered tool that generates customized cover letters by analyzing your resume and job postings. 10 | 11 | ## 🚀 Features 12 | 13 | - PDF resume parsing 14 | - Automatic job posting analysis 15 | - Parallel processing for faster results 16 | - Customized cover letter generation 17 | - Easy-to-use web interface 18 | 19 | ## 🛠️ Quick Start 20 | 21 | 1. Clone the repository 22 | 2. Install dependencies: 23 | -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/app.py: -------------------------------------------------------------------------------- 1 | import streamlit as st 2 | from openai import AsyncOpenAI 3 | from firecrawl import FirecrawlApp 4 | import os 5 | from dotenv import load_dotenv 6 | import asyncio 7 | from src.core import process_cover_letter_request 8 | import tempfile 9 | 10 | # Load environment variables 11 | load_dotenv() 12 | 13 | # Initialize API clients 14 | openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY")) 15 | firecrawl_client = FirecrawlApp(api_key=os.getenv("FIRECRAWL_API_KEY")) 16 | 17 | 18 | def main(): 19 | st.set_page_config( 20 | page_title="AI Cover Letter Generator", page_icon="📝", layout="wide" 21 | ) 22 | 23 | st.title("📝 AI Cover Letter Generator") 24 | st.write( 25 | "Upload your resume and provide a job posting URL to generate a customized cover letter." 26 | ) 27 | 28 | # Input section 29 | col1, col2 = st.columns(2) 30 | with col1: 31 | uploaded_file = st.file_uploader("Upload your resume (PDF)", type=["pdf"]) 32 | with col2: 33 | job_url = st.text_input("Enter job posting URL") 34 | 35 | if st.button("Generate Cover Letter", type="primary"): 36 | if uploaded_file is not None and job_url: 37 | try: 38 | # Create a placeholder for the progress messages 39 | progress_placeholder = st.empty() 40 | 41 | async def process_with_status(): 42 | # Step 1: Processing PDF 43 | progress_placeholder.info("📄 Processing your resume...") 44 | 45 | # Step 2: Parallel Processing 46 | progress_placeholder.info("🔍 Analyzing resume and job posting...") 47 | 48 | cover_letter = await process_cover_letter_request( 49 | uploaded_file, job_url, openai_client, firecrawl_client 50 | ) 51 | 52 | # Step 3: Final Generation 53 | progress_placeholder.info("✍️ Generating your cover letter...") 54 | 55 | return cover_letter 56 | 57 | # Run the async function 58 | cover_letter = asyncio.run(process_with_status()) 59 | 60 | if cover_letter: 61 | # Clear the progress message 62 | progress_placeholder.empty() 63 | 64 | # Display success and results 65 | st.success("✨ Your cover letter has been generated!") 66 | 67 | # Create tabs for different views 68 | tab1, tab2 = st.tabs(["📄 View", "📋 Copy & Download"]) 69 | 70 | with tab1: 71 | st.markdown("### Your Cover Letter") 72 | st.markdown(cover_letter) 73 | 74 | with tab2: 75 | st.text_area( 76 | "Copy your cover letter", value=cover_letter, height=400 77 | ) 78 | 79 | # Single download button for TXT 80 | st.download_button( 81 | label="📥 Download as TXT", 82 | data=cover_letter, 83 | file_name="cover_letter.txt", 84 | mime="text/plain", 85 | help="Click to download your cover letter as a text file", 86 | ) 87 | else: 88 | progress_placeholder.empty() 89 | st.error("Failed to generate cover letter. Please try again.") 90 | 91 | except Exception as e: 92 | st.error(f"An error occurred: {str(e)}") 93 | else: 94 | st.warning("Please upload a PDF resume and provide a job posting URL.") 95 | 96 | # Add helpful instructions 97 | with st.expander("ℹ️ How to use"): 98 | st.write( 99 | """ 100 | 1. Upload your resume in PDF format 101 | 2. Paste the URL of the job posting you're interested in 102 | 3. Click 'Generate Cover Letter' 103 | 4. View, copy, or download your customized cover letter 104 | """ 105 | ) 106 | 107 | 108 | if __name__ == "__main__": 109 | main() 110 | -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/requirements.txt: -------------------------------------------------------------------------------- 1 | # Core dependencies 2 | streamlit 3 | openai 4 | firecrawl 5 | PyPDF2 6 | python-dotenv 7 | pydantic 8 | -------------------------------------------------------------------------------- /sandbox/cover-letter-generator/src/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/break-into-data/ai-engineer-toolkit/d9dc8361e041e1be992b3821d8e3991dd593b8c7/sandbox/cover-letter-generator/src/__init__.py -------------------------------------------------------------------------------- /sandbox/hiring-agent/.env.local: -------------------------------------------------------------------------------- 1 | FIRECRAWL_API_KEY= 2 | OPENAI_API_KEY= -------------------------------------------------------------------------------- /sandbox/hiring-agent/README.md: -------------------------------------------------------------------------------- 1 | [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) 2 | [![FastAPI](https://img.shields.io/badge/FastAPI-0.78.0-blue.svg)](https://fastapi.tiangolo.com/) 3 | [![Next.js](https://img.shields.io/badge/Next.js-13.0+-000000.svg)](https://nextjs.org/) 4 | [![OpenAI](https://img.shields.io/badge/OpenAI-GPT4-00A67E.svg)](https://openai.com/) 5 | [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) 6 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) 7 | 8 | 9 | # Resume Screening Application 10 | 11 | This application helps recruiters screen and rank resumes based on job descriptions using AI. 12 | 13 | 14 | ![Ai Agent](images/arch-diagram.png "Title") 15 | 16 | 17 | 18 | 19 | ### How It Works 20 | 21 | ![flowchart](images/hiring-process-flowchart.png "Flowchart") 22 | 23 | 24 | **User Input** 25 | 26 | - Job Description: Enter text or provide a URL. 27 | - Resumes: Upload PDF/Word files. 28 | 29 | **Ingestion** 30 | 31 | - Inputs are read and processed. 32 | - Job description URLs are scraped for content. 33 | 34 | **Parsing** 35 | 36 | - The job description is analyzed by an LLM (GPT-4) to extract essential details. 37 | - Resumes are processed to extract candidate profiles. 38 | 39 | **Scoring and Ranking** 40 | 41 | - Candidates are scored on relevance, experience, and skills. 42 | - An average score is computed, and candidates are sorted in descending order. 43 | 44 | 45 | **Email Generation** 46 | 47 | - Custom email is generated based on if you want to accept or reject the candidate 48 | 49 | 50 | 51 | ## Setup Instructions 52 | 53 | ### Frontend Setup 54 | 55 | ```bash 56 | # Install dependencies 57 | npm install 58 | 59 | # Start development server 60 | npm run dev 61 | ``` 62 | 63 | The frontend will be available at http://localhost:3000 64 | 65 | ### Backend Setup 66 | 67 | ```bash 68 | # Create and activate virtual environment 69 | cd backend 70 | python -m venv venv 71 | source venv/bin/activate # On Windows: venv\Scripts\activate 72 | 73 | # Install dependencies 74 | pip install -r requirements.txt 75 | 76 | # Start the backend server 77 | python -m uvicorn main:app --reload 78 | ``` 79 | 80 | The backend API will be available at http://localhost:8000 81 | 82 | ## Environment Variables 83 | 84 | Create a `.env` file in the `backend` directory with the following keys: 85 | 86 | ``` 87 | OPENAI_API_KEY=your_openai_api_key_here 88 | FIRECRAWL_API_KEY=your_firecrawl_api_key_here 89 | ``` 90 | 91 | ## Features 92 | 93 | - Upload multiple resumes (PDF) 94 | - Enter or paste job descriptions 95 | - Automatic parsing of resumes and job descriptions 96 | - AI-powered candidate ranking 97 | - Generate personalized email templates for candidates 98 | 99 | ## Technologies 100 | 101 | - **Frontend**: Next.js, React, TypeScript, Tailwind CSS 102 | - **Backend**: Python, FastAPI, OpenAI API 103 | - **Tools**: PDF parsing, AI text analysis 104 | 105 | 106 | 107 | -------------------------------------------------------------------------------- /sandbox/hiring-agent/app.py: -------------------------------------------------------------------------------- 1 | # main.py 2 | import streamlit as st 3 | import pandas as pd 4 | from utils.utils import ( 5 | ingest_inputs, 6 | parse_job_description, 7 | parse_resumes, 8 | score_candidates, 9 | rank_candidates, 10 | generate_email_templates, 11 | ) 12 | import asyncio 13 | 14 | 15 | # Main App Title 16 | st.title("Resume Screening Agent") 17 | 18 | # Input section for job description 19 | st.header("Job Description Input") 20 | job_description = st.text_area("Paste the job description or URL", height=150) 21 | 22 | # Input section for candidate resumes 23 | st.header("Candidate Resumes") 24 | resume_files = st.file_uploader( 25 | "Upload resume files (PDF/Word)", 26 | type=["pdf", "docx", "doc"], 27 | accept_multiple_files=True, 28 | ) 29 | 30 | st.header("Candidates to Select") 31 | num_candidates = st.slider( 32 | "Select the number of candidates to invite for an interview", 1, 4, 2 33 | ) 34 | 35 | 36 | # Button to trigger the agent 37 | if st.button("Run Agent"): 38 | if not job_description: 39 | st.error("Please provide a job description or URL.") 40 | elif not resume_files: 41 | st.error("Please upload at least one resume file.") 42 | else: 43 | st.markdown("### Your AI Agent is now processing your inputs...") 44 | status_text = st.empty() # placeholder for status updates 45 | 46 | # Step 1: processing resumes 47 | with st.spinner("Step 1: Processing Inputs..."): 48 | # raw_data = ingest_inputs(job_description, resume_files) 49 | raw_data = asyncio.run(ingest_inputs(job_description, resume_files)) 50 | status_text.text("Step 1 complete: Inputs processed.") 51 | with st.expander("View Processed Inputs", expanded=False): 52 | st.json(raw_data) 53 | 54 | # Step 2: processing Job description 55 | with st.spinner("Step 2: Processing Job Description & Resume..."): 56 | parsed_requirements = asyncio.run(parse_job_description(raw_data)) 57 | parsed_resumes = asyncio.run(parse_resumes(resume_files)) 58 | status_text.text("Step 2 complete: Job description & Resume processed.") 59 | with st.expander("View Parsed Job Description", expanded=False): 60 | st.json(parsed_requirements) 61 | with st.expander("View processed Resume", expanded=False): 62 | st.json(parsed_resumes) 63 | 64 | # Step 3: Score candidates based on the parsed data 65 | with st.spinner("Step 3: Scoring candidates..."): 66 | status_text.text("Step 3: Scoring candidates...") 67 | candidate_scores = asyncio.run( 68 | score_candidates(parsed_requirements, parsed_resumes) 69 | ) 70 | status_text.text("Step 3 complete: Candidates scored.") 71 | with st.expander("View Resume Summaries", expanded=False): 72 | st.json(candidate_scores) 73 | 74 | # Step 4: Rank the candidates 75 | with st.spinner("Step 4: Ranking candidates..."): 76 | status_text.text("Step 4: Ranking candidates...") 77 | ranked_candidates = rank_candidates(candidate_scores) 78 | status_text.text("Step 4 complete: Candidates ranked.") 79 | with st.expander("View Ranked Candidates", expanded=False): 80 | st.json(ranked_candidates) 81 | 82 | # Step 5: Generate email templates for top candidates and others 83 | with st.spinner("Step 5: Generating email templates..."): 84 | status_text.text("Step 5: Generating email templates...") 85 | # 'num_candidates' is assumed to come from the frontend (e.g., top X candidates) 86 | email_templates = asyncio.run( 87 | generate_email_templates( 88 | ranked_candidates, parsed_requirements, num_candidates 89 | ) 90 | ) 91 | status_text.text("Step 5 complete: Email templates generated.") 92 | with st.expander("View Email Templates", expanded=False): 93 | st.json(email_templates) 94 | 95 | # Final update 96 | status_text.text("Agent processing complete! Your results are ready.") 97 | -------------------------------------------------------------------------------- /sandbox/hiring-agent/app/globals.css: -------------------------------------------------------------------------------- 1 | @tailwind base; 2 | @tailwind components; 3 | @tailwind utilities; 4 | 5 | body { 6 | font-family: Arial, Helvetica, sans-serif; 7 | } 8 | 9 | @layer utilities { 10 | .text-balance { 11 | text-wrap: balance; 12 | } 13 | } 14 | 15 | @layer base { 16 | :root { 17 | --background: 0 0% 100%; 18 | --foreground: 0 0% 3.9%; 19 | --card: 0 0% 100%; 20 | --card-foreground: 0 0% 3.9%; 21 | --popover: 0 0% 100%; 22 | --popover-foreground: 0 0% 3.9%; 23 | --primary: 0 0% 9%; 24 | --primary-foreground: 0 0% 98%; 25 | --secondary: 0 0% 96.1%; 26 | --secondary-foreground: 0 0% 9%; 27 | --muted: 0 0% 96.1%; 28 | --muted-foreground: 0 0% 45.1%; 29 | --accent: 0 0% 96.1%; 30 | --accent-foreground: 0 0% 9%; 31 | --destructive: 0 84.2% 60.2%; 32 | --destructive-foreground: 0 0% 98%; 33 | --border: 0 0% 89.8%; 34 | --input: 0 0% 89.8%; 35 | --ring: 0 0% 3.9%; 36 | --chart-1: 12 76% 61%; 37 | --chart-2: 173 58% 39%; 38 | --chart-3: 197 37% 24%; 39 | --chart-4: 43 74% 66%; 40 | --chart-5: 27 87% 67%; 41 | --radius: 0.5rem; 42 | --sidebar-background: 0 0% 98%; 43 | --sidebar-foreground: 240 5.3% 26.1%; 44 | --sidebar-primary: 240 5.9% 10%; 45 | --sidebar-primary-foreground: 0 0% 98%; 46 | --sidebar-accent: 240 4.8% 95.9%; 47 | --sidebar-accent-foreground: 240 5.9% 10%; 48 | --sidebar-border: 220 13% 91%; 49 | --sidebar-ring: 217.2 91.2% 59.8%; 50 | } 51 | .dark { 52 | --background: 0 0% 3.9%; 53 | --foreground: 0 0% 98%; 54 | --card: 0 0% 3.9%; 55 | --card-foreground: 0 0% 98%; 56 | --popover: 0 0% 3.9%; 57 | --popover-foreground: 0 0% 98%; 58 | --primary: 0 0% 98%; 59 | --primary-foreground: 0 0% 9%; 60 | --secondary: 0 0% 14.9%; 61 | --secondary-foreground: 0 0% 98%; 62 | --muted: 0 0% 14.9%; 63 | --muted-foreground: 0 0% 63.9%; 64 | --accent: 0 0% 14.9%; 65 | --accent-foreground: 0 0% 98%; 66 | --destructive: 0 62.8% 30.6%; 67 | --destructive-foreground: 0 0% 98%; 68 | --border: 0 0% 14.9%; 69 | --input: 0 0% 14.9%; 70 | --ring: 0 0% 83.1%; 71 | --chart-1: 220 70% 50%; 72 | --chart-2: 160 60% 45%; 73 | --chart-3: 30 80% 55%; 74 | --chart-4: 280 65% 60%; 75 | --chart-5: 340 75% 55%; 76 | --sidebar-background: 240 5.9% 10%; 77 | --sidebar-foreground: 240 4.8% 95.9%; 78 | --sidebar-primary: 224.3 76.3% 48%; 79 | --sidebar-primary-foreground: 0 0% 100%; 80 | --sidebar-accent: 240 3.7% 15.9%; 81 | --sidebar-accent-foreground: 240 4.8% 95.9%; 82 | --sidebar-border: 240 3.7% 15.9%; 83 | --sidebar-ring: 217.2 91.2% 59.8%; 84 | } 85 | } 86 | 87 | @layer base { 88 | * { 89 | @apply border-border; 90 | } 91 | body { 92 | @apply bg-background text-foreground; 93 | } 94 | } 95 | -------------------------------------------------------------------------------- /sandbox/hiring-agent/app/layout.tsx: -------------------------------------------------------------------------------- 1 | import type { Metadata } from 'next' 2 | import './globals.css' 3 | 4 | export const metadata: Metadata = { 5 | title: 'v0 App', 6 | description: 'Created with v0', 7 | generator: 'v0.dev', 8 | } 9 | 10 | export default function RootLayout({ 11 | children, 12 | }: Readonly<{ 13 | children: React.ReactNode 14 | }>) { 15 | return ( 16 | 17 | {children} 18 | 19 | ) 20 | } 21 | -------------------------------------------------------------------------------- /sandbox/hiring-agent/app/page.tsx: -------------------------------------------------------------------------------- 1 | import ResumeScreeningApp from "@/components/resume-screening-app" 2 | 3 | export default function Home() { 4 | return ( 5 |
6 |
7 |

Resume Screening Assistant

8 | 9 |
10 |
11 | ) 12 | } 13 | 14 | -------------------------------------------------------------------------------- /sandbox/hiring-agent/backend/requirements.txt: -------------------------------------------------------------------------------- 1 | streamlit 2 | openai 3 | firecrawl 4 | PyPDF2 5 | python-dotenv 6 | pydantic 7 | fastapi 8 | uvicorn -------------------------------------------------------------------------------- /sandbox/hiring-agent/components.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://ui.shadcn.com/schema.json", 3 | "style": "default", 4 | "rsc": true, 5 | "tsx": true, 6 | "tailwind": { 7 | "config": "tailwind.config.ts", 8 | "css": "app/globals.css", 9 | "baseColor": "neutral", 10 | "cssVariables": true, 11 | "prefix": "" 12 | }, 13 | "aliases": { 14 | "components": "@/components", 15 | "utils": "@/lib/utils", 16 | "ui": "@/components/ui", 17 | "lib": "@/lib", 18 | "hooks": "@/hooks" 19 | }, 20 | "iconLibrary": "lucide" 21 | } -------------------------------------------------------------------------------- /sandbox/hiring-agent/components/email-modal.tsx: -------------------------------------------------------------------------------- 1 | "use client" 2 | 3 | import { useState, useEffect } from "react" 4 | import { 5 | Dialog, 6 | DialogContent, 7 | DialogHeader, 8 | DialogTitle, 9 | DialogFooter, 10 | } from "@/components/ui/dialog" 11 | import { Button } from "@/components/ui/button" 12 | import { Textarea } from "@/components/ui/textarea" 13 | import { Check, Copy } from "lucide-react" 14 | 15 | interface EmailModalProps { 16 | isOpen: boolean 17 | onClose: () => void 18 | emailContent: string 19 | emailType: "accept" | "reject" 20 | candidateName: string 21 | isLoading?: boolean 22 | } 23 | 24 | export function EmailModal({ 25 | isOpen, 26 | onClose, 27 | emailContent, 28 | emailType, 29 | candidateName, 30 | isLoading = false, 31 | }: EmailModalProps) { 32 | const [copied, setCopied] = useState(false) 33 | const [content, setContent] = useState(emailContent) 34 | 35 | // Update local state when emailContent prop changes. 36 | useEffect(() => { 37 | setContent(emailContent) 38 | }, [emailContent]) 39 | 40 | const handleCopy = async () => { 41 | await navigator.clipboard.writeText(content) 42 | setCopied(true) 43 | setTimeout(() => setCopied(false), 2000) 44 | } 45 | 46 | return ( 47 | 48 | 49 | 50 | 51 | {emailType === "accept" ? "Interview Invitation" : "Rejection Email"} for{" "} 52 | {candidateName} 53 | 54 | 55 | 56 |
57 | {isLoading ? ( 58 | // Loading state (you can replace with a spinner or other indicator) 59 |
60 | Loading email... 61 |
62 | ) : ( 63 |