Agentic AI

A practical series on building intelligent AI systems — from prompt chaining to routing, orchestration, and beyond.

19 posts


  1. Chapter 1: Prompt Chaining

    LLMs choke on complex tasks. Prompt chaining fixes that — by breaking one hard problem into a sequence of simple ones, each feeding the next.

  2. Chapter 2: Routing

    Prompt chains are predictable. The real world isn't. Routing gives agents the ability to make decisions — picking the right tool, sub-agent, or workflow based on what's actually in front of them.

  3. Chapter 3: Parallelization

    Sequential is clean. Parallel is fast. The art is knowing which tasks can run at the same time — and wiring the plumbing to make it happen.

  4. Chapter 4: Reflection

    First drafts are rarely final. Reflection gives agents the ability to critique their own outputs, find what's wrong, and iterate toward better results — before returning anything to you.

  5. Chapter 5: Tool Use (Function Calling)

    LLMs are frozen in time and disconnected from the world. Tool use breaks those walls — letting agents call APIs, run code, query databases, and trigger real actions.

  6. Chapter 6: Planning

    Reacting to a single input is easy. Achieving a complex goal across many unknown steps is not. Planning is how agents develop foresight — decomposing a destination into a route before they move.

  7. Chapter 7: Multi-Agent Collaboration

    One agent hits walls. A team of specialized agents doesn't. Multi-agent collaboration lets you decompose complex problems into coordinated workstreams — each agent doing what it does best.

  8. Chapter 8: Memory Management

    Without memory, every conversation starts from zero. Memory management gives agents the ability to remember — short-term context within a session, long-term knowledge across sessions.

  9. Chapter 9: Learning and Adaptation

    Every pattern so far assumes the agent stays the same. Learning and adaptation break that assumption — agents that improve through experience, rewrite their own code, and discover better algorithms than humans designed.

  10. Chapter 10: Model Context Protocol (MCP)

    Every tool integration so far required custom code. MCP changes that — a universal standard that lets any LLM connect to any tool, database, or service without bespoke glue code for each pair.

  11. Chapter 11: Goal Setting and Monitoring

    Without goals, agents react. With goals, agents pursue. This chapter shows how to give AI agents specific objectives, measurable success criteria, and the feedback loops that keep them on track.

  12. Chapter 12: Exception Handling and Recovery

    Real-world agents fail. Networks time out, APIs return errors, databases go down, data arrives malformed. This chapter is about building agents that handle failure gracefully — detecting it, recovering from it, and keeping the user informed throughout.

  13. Chapter 13: Human-in-the-Loop

    Full autonomy sounds ideal — but in high-stakes domains, the cost of a single AI error is too high. Human-in-the-Loop is the pattern that keeps humans in control of the decisions that matter most.

  14. Chapter 14: Knowledge Retrieval (RAG)

    LLMs know a lot — but their knowledge stopped the day training ended. RAG is the bridge between static model weights and the live, private, specific knowledge that makes agents actually useful.

  15. Chapter 15: Inter-Agent Communication (A2A)

    MCP connects agents to tools. A2A connects agents to agents. The Agent2Agent protocol is the missing standard that lets any AI agent — regardless of framework — collaborate with any other.

  16. Chapter 16: Resource-Aware Optimization

    Not every question needs a supercomputer. Resource-aware optimization routes simple queries to cheap, fast models and reserves expensive, powerful ones for genuinely hard problems — saving cost without sacrificing quality.

  17. Chapter 17: Reasoning Techniques

    How does an AI agent actually think? This chapter reveals the techniques that transform LLMs from pattern-matchers into deliberate problem-solvers: CoT, ToT, ReAct, self-correction, RLVR, and more.

  18. Chapter 18: Guardrails and Safety Patterns

    Capable agents without guardrails are dangerous agents. This chapter shows how to build the multi-layered defense systems that keep AI behavior safe, predictable, and aligned — from input validation to jailbreak detection to human oversight.

  19. Chapter 19: Evaluation and Monitoring

    Building an agent is the beginning. Knowing whether it's actually working — accurately, efficiently, safely, and reliably — is the ongoing challenge. This chapter covers the complete framework for measuring and maintaining agent performance in production.