Ever wanted to build an advanced project to level up in building production-grade systems powered by AI agents, and didn’t know where to start?
If so, Decoding ML’s philosophy is to learn by doing. No procrastination, no useless “research,” just jump straight into it and learn along the way. If that’s how you like to learn, then we have the perfect free learning roadmap to get into building real-world AI agents: from theory to APIs and LLMOps.
Packaged as a fun project. So…
For the builders from the AI community, we created the PhiloAgents open-source course, where you will learn to architect and build an AI-powered game simulation engine to impersonate popular philosophers.
→ An end-to-end AI product, powered by AI agents, from ReAct agents, to agentic RAG, to real-time deployments and LLMOps.
A collaboration between Decoding ML and Miguel Pedrido (from The Neural Maze).
This course is a gift to the AI community. Thus, it’s 100% free, with no hidden costs or registration required. You just need our GitHub, Substack, and YouTube lessons.
(and some sweat and perseverance to make the whole AI system work)
This course will teach you the core concepts required to build industry-level AI applications while implementing a fun project: a game where you can talk with historical geniuses, such as Aristotle, Turing or Socrates.
While learning how to impersonate these characters and expose them in a fun 2D game, you will learn how to:
Build AI agents with LangGraph using Python best practices.
Create production-grade RAG systems to feed facts into the philosophers.
Implement short-term and long-term memory layers.
Engineer the system architecture (UI → Backend → Agent → Monitoring).
Expose the agent as a real-time API with FastAPI, Docker, and WebSockets.
Master industry tools: Groq, MongoDB, Opik, and Python tooling (uv, ruff).
Apply LLMOps best practices: prompt monitoring, versioning, and evaluation.
🥷 By the end, you'll be a ninja in production-ready AI agent development!
More details, such as who should join this course and the technical and hardware prerequisites, can be found in the GitHub repository.
Here is a quick demo of the PhiloAgents game you will learn to build by the end of the free series, from agents in Notebooks to real-world infrastructure:
Now, let’s move on to the fun part, where we zoom in on each lesson to understand better what it takes to build our PhiloAgent simulation ↓↓↓
Pre-AI Agents Learning Roadmap (Affiliate)
If you need to fill in some gaps before starting the PhiloAgents course, such as leveling up your Python, Deep Learning, MLOps or LLM skills, we recommend DataCamp as your go-to learning platform.
We collaborated with them on a few projects and can guarantee their professionalism, expertise and product quality. We also used their platform and found their balance between theory and practice perfect for an engaging learning experience.
They respect your time by providing targeted learning roadmaps at multiple levels, from Python and Deep Learning to MLOps and LLMs.
If you are not sure, the first chapter of any course is free to explore.
If any of their resources are a good fit for you, consider getting a DataCamp subscription ↓
1. Build your gaming simulation AI agent
First, we will explore the system architecture of the PhiloAgents philosophers’ simulation, illustrated in the figure below. We will explain what each component is, the role it plays, which algorithms and tools we used, and, most importantly, why we used them.
By the end of this lesson, you will have a strong intuition of what it takes to architect a production-ready backend-frontend architecture, coupled with an RAG layer, that serves AI agents as real-time APIs. Plus, all the LLMOps goodies on top of the system that make it robust, observable and traceable.
All skills required in most AI Engineering roles.
2. Your first production-ready RAG Agent
Building AI agents can become complex relatively quickly. When implementing more advanced applications, you must orchestrate multiple LLM API calls, prompt templates, states, and tools within a single request.
That’s why you need specialized tools, such as LangGraph, which can help you orchestrate all these components, making the agent easier to implement, deploy, monitor, and debug.
In this lesson, we will start by going through the fundamentals of how agents work (the ReAct pattern) and how to build a simple chatbot using LangGraph. Then, we will add more complexity and learn how to develop our PhiloAgent by:
Building an advanced agentic RAG system using LangGraph (see the minimal sketch after this list).
Leveraging the Groq API as our LLM provider for low-latency inference.
Prompt engineering a character card used to impersonate our philosophers (which can be adapted to other characters).
Managing multiple personas and injecting them into our character card prompt template.
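To make this concrete, here is a minimal sketch of a single-node LangGraph agent for one philosopher. The model name, the character card text, and the node naming are illustrative assumptions, not the course’s exact implementation; the full PhiloAgent adds retrieval, tools, and memory on top of this skeleton:

```python
from typing import Annotated, TypedDict

from langchain_groq import ChatGroq  # assumes the langchain-groq package is installed
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class PhilosopherState(TypedDict):
    # `add_messages` appends new messages to the state instead of overwriting them.
    messages: Annotated[list, add_messages]


# Hypothetical character card; the course builds richer, versioned prompt templates.
CHARACTER_CARD = (
    "You are Aristotle. Answer as the philosopher would, staying in character "
    "and grounding your answers in the retrieved context when it is provided."
)

llm = ChatGroq(model="llama-3.3-70b-versatile")  # model name is an assumption


def converse(state: PhilosopherState) -> dict:
    """Single node that prepends the character card and calls the LLM."""
    messages = [("system", CHARACTER_CARD), *state["messages"]]
    return {"messages": [llm.invoke(messages)]}


builder = StateGraph(PhilosopherState)
builder.add_node("converse", converse)
builder.add_edge(START, "converse")
builder.add_edge("converse", END)
graph = builder.compile()

if __name__ == "__main__":
    result = graph.invoke({"messages": [("user", "What is virtue?")]})
    print(result["messages"][-1].content)
```

Swapping the hardcoded character card for a per-philosopher template is exactly where the multi-persona management from the last bullet comes in.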
3. Memory: The secret sauce of AI Agents
Designing a robust memory layer for your agents is one of the most underrated aspects of building AI applications. Memory sits at the core of any AI project, guiding how you implement your RAG (or agentic RAG) algorithm, how you access external information used as context, how you manage multiple conversation threads, and how you handle multiple users. All critical aspects of any successful agentic application.
Every agent has short-term memory and some level of long-term memory. Understanding the difference between the two and what types of long-term memory exist is essential to knowing what to adopt in your toolbelt and how to design your AI application system and business logic.
With that in mind, in this lesson, we will explore short-term and long-term memory, what subtypes of long-term memory we can adopt, and how to implement them in our PhiloAgent use cases using MongoDB as the backbone of our memory layer.
As a minor spoiler, long-term memory implies building an agentic RAG system!
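As a rough illustration of the short-term side, the sketch below uses LangGraph’s thread-scoped checkpointing. The in-memory checkpointer and the echo node are stand-ins we picked to keep the example self-contained; the course wires the same mechanism to MongoDB and a real LLM-backed node:

```python
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


def respond(state: State) -> dict:
    # Placeholder node; in PhiloAgents this is the LLM-backed philosopher node.
    last_user_message = state["messages"][-1].content
    return {"messages": [AIMessage(content=f"You asked: {last_user_message}")]}


builder = StateGraph(State)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)

# The checkpointer persists conversation state per thread (short-term memory).
# The course swaps this in-memory saver for a MongoDB-backed checkpointer.
graph = builder.compile(checkpointer=MemorySaver())

# Each user-philosopher conversation gets its own thread_id.
config = {"configurable": {"thread_id": "user-42-socrates"}}

graph.invoke({"messages": [("user", "What is justice?")]}, config)
graph.invoke({"messages": [("user", "And how does it relate to virtue?")]}, config)

# The thread's history accumulates across calls: 4 messages (2 user + 2 AI).
print(len(graph.get_state(config).values["messages"]))
```

Because the checkpointer keys state by `thread_id`, each user-philosopher conversation keeps its own history, which is exactly the multi-thread, multi-user requirement described above.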
4. Deploying agents as real-time APIs 101
Until now, we’ve been focused on making our agents think—designing personalities, reasoning systems, and behaviors rooted in philosophical worldviews.
But what good is a brilliant philosopher if they’re locked in a basement with no way to speak to the world?
In real-world applications, intelligence alone isn’t enough. If we want our agents to be more than local experiments, they need to be accessible, interactive, and production-ready. It’s time to give your agents a voice—and more importantly, an interface.
In this lesson, we’ll take your PhiloAgent from a local prototype to a live, interactive character on the web. You’ll learn how to build a web API using FastAPI and add WebSocket support so your agent can respond in real time.
Here’s what we’ll dive into:
Understand the difference between REST APIs and WebSockets.
Build and test a REST API to serve your agent.
Stream live, token-by-token responses using WebSockets (see the sketch after this list).
Design a clean backend–frontend architecture with FastAPI and Phaser.
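As a taste of the WebSocket side, here is a minimal FastAPI sketch that streams a reply token by token. The endpoint path, the end-of-message marker, and the `stream_agent_reply` generator are hypothetical placeholders for the real PhiloAgent streaming logic:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()


async def stream_agent_reply(user_message: str):
    """Hypothetical stand-in for the PhiloAgent; yields the reply token by token."""
    for token in ["Virtue ", "is ", "a ", "habit, ", "not ", "an ", "act."]:
        yield token


@app.websocket("/ws/chat")  # endpoint path is an assumption
async def chat(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            user_message = await websocket.receive_text()
            # Push tokens as soon as they are produced instead of waiting for
            # the full answer, which keeps the game dialogue responsive.
            async for token in stream_agent_reply(user_message):
                await websocket.send_text(token)
            await websocket.send_text("<END>")  # assumed end-of-message marker
    except WebSocketDisconnect:
        # The client closed the connection (e.g., the player left the game).
        pass
```

You could exercise this locally with uvicorn and any WebSocket client; in the course, the Phaser frontend plays that role, consuming the token stream and rendering it as in-game dialogue.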
5. Observability for RAG Agents: Monitoring & Evaluation
Until now, we’ve focused on making our agents intelligent and interactive—shaping their personalities, wiring them to tools, and deploying them through real-time APIs.
But being smart doesn’t guarantee being reliable—especially in production.
Once your agents are live, the real questions begin: Are they reasoning effectively? Are their responses actually helpful—or drifting off course? Are your prompt changes improving performance or breaking things silently? And most importantly—how would you even know?
That’s where observability steps in. In Lesson 5, we shift from building agents to measuring them.
In this lesson, we’ll take your PhiloAgent from a black-box experiment to a transparent, measurable system. You’ll learn how to monitor your agent’s behavior, track prompt versions, and evaluate performance—core skills for deploying and improving agents in the real world.
Here are the LLMOps concepts we’ll dive into:
Understand what observability means in the context of LLMs and agents.
Learn to monitor complex prompt traces using open-source tools like Opik (a minimal tracing sketch follows this list).
Implement prompt versioning to track changes and ensure reproducibility.
Generate evaluation datasets and run structured assessments on your agents.
Explore how offline and online LLM evaluation pipelines fit into your architecture.
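As a small taste of the monitoring part, the sketch below assumes Opik’s Python SDK and its `track` decorator to log a trace per call. The helper function and the stubbed LLM call are ours, not the course’s code:

```python
from opik import track  # assumes the opik SDK is installed and configured


def call_llm(prompt: str) -> str:
    """Stubbed LLM call so the sketch runs without an API key."""
    return f"(philosopher reply to: {prompt[:40]}...)"


@track  # logs inputs, outputs, and latency of each call as a trace in Opik
def generate_philosopher_reply(question: str, philosopher: str) -> str:
    # Hypothetical helper; in PhiloAgents the traced unit is the LangGraph agent call.
    prompt = f"You are {philosopher}. Answer the question below.\n{question}"
    return call_llm(prompt)


if __name__ == "__main__":
    print(generate_philosopher_reply("What is virtue?", "Aristotle"))
```

In the lesson itself, tracing extends to the full agent run, alongside the prompt versioning and evaluation workflows listed above.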
6. Engineer Python projects like a PRO
During the first five lessons, we discussed in depth what it takes to build a production-ready AI agent, from creating the agent itself, to wrapping it in a backend and frontend architecture that serves it as a game, to implementing the LLMOps layer.
Still, in the world of AI, we are bombarded with the latest models, tools, and algorithms, but often forget what matters: building software that works.
Thus, we aim to conclude this series by returning to the fundamentals: how to structure a Python project and use the right development tools (e.g., uv, ruff, Make) like a senior software engineer would. Additionally, we will explore how to containerize the project using Docker, as a senior DevOps engineer would.
These are all essential skills you will need in any software project, whether you're building AI applications around LLMs, agents, RAG, or any other type of AI model. Thus, they are crucial for:
development speed and experience;
ease of deploying your app to the cloud;
making your project future-proof;
moving away from “it works on my machine” to “it works everywhere”.
Do you prefer video over written lessons?
🍾 We’ve got you covered.
As a collaboration between The Neural Maze and Decoding ML, we completed this series with lessons 1 through 5 in video format.
How to take the course?
As an open-source course, it’s 100% free, with no hidden costs and no registration required.
The course is based on an open-source GitHub repository and articles that walk you through the fundamentals and the repository.
Thus, taking the course is super easy. You have to:
Navigate to the PhiloAgents course GitHub repository and clone it.
Open the Substack and YouTube lessons found in the repository’s GitHub docs.
Set up the code using the documentation from the repository.
Start going through the lessons and running the code.
The best part? We encourage you to reuse our code for your open-source projects! If you do, DM us on Substack, and we’ll share your project on our socials.
Enjoy!
Whenever you’re ready, there are 3 ways we can help you:
Perks: Exclusive discounts on our recommended learning resources (books, live courses, self-paced courses and learning platforms).
The LLM Engineer’s Handbook: Our bestseller book on teaching you an end-to-end framework for building production-ready LLM and RAG applications, from data collection to deployment (get up to 20% off using our discount code).
Free open-source courses: Master production AI with our end-to-end open-source courses, which reflect real-world AI projects and cover everything from system architecture to data collection, training and deployment.
Images
If not otherwise stated, all images are created by the author.