This week’s topics:
Structure Python projects like a PRO
From Beginner to Advanced LLM Developer
Prompt engineering vs. RAG vs. Fine-tuning: When to fine-tune?
Structure Python projects like a PRO
10+ years of working with Python has shown me one thing:
Most people don't know how to structure Python projects - especially in AI.
And it's a silent killer.
It turns promising AI code into unmanageable, fragile, and hard-to-scale messes.
But we tackle this head-on in Lesson 6 of our PhiloAgents course.
Here’s a glimpse of the approach we recommend:
Modular monolith: One repo with clean separation of backend (philoagents-api) and frontend (philoagents-ui), giving you flexibility without chaos.
Core logic in Python modules: Organized under src/philoagents/, this is where your reusable, testable business logic lives.
Lightweight entry points: Scripts in tools/ and notebooks in notebooks/ that orchestrate your core modules without cluttering them.
Notebooks for exploration only: Use notebooks to experiment and visualize, but keep production code separate and clean.
Smart data handling: Store local data, like fine-tuning sets, in data/, but design for scalable cloud integrations (you don't want your data in Git).
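Put together, the layout above can be sketched as a directory tree. This is a hedged sketch based on the folder names mentioned in the lesson; the exact nesting in the PhiloAgents repo may differ:

```
philoagents/
├── philoagents-api/           # backend (modular monolith, one repo)
│   ├── src/philoagents/       # core, reusable, testable business logic
│   ├── tools/                 # lightweight entry-point scripts
│   ├── notebooks/             # exploration and visualization only
│   └── data/                  # local datasets (git-ignored; sync to cloud storage)
└── philoagents-ui/            # frontend
```

The key property is the one-way dependency: tools/ and notebooks/ import from src/philoagents/, never the other way around.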
This structure lays the foundation for scalable, maintainable, and production-ready AI systems.
No matter how advanced your AI models are, if your codebase is a mess, you won’t ship reliable products.
Ready to level up your AI engineering?
Learn how to structure your Python projects like a PRO with Lesson 6 of the PhiloAgents course ↓
From Beginner to Advanced LLM Developer (Affiliate)
If you've never built an advanced MVP with LLMs, here's your chance to do so...
Louis-François Bouchard developed a self-paced course to help you go From Beginner to Advanced LLM Developer
(And that's the name of the course)
For context, Louis-François is:
The author of Building LLMs for Production (sold thousands of copies)
CTO of Towards AI
An experienced (5+ years) AI educator on YouTube
The course is geared toward helping developers and engineers take their careers to the next level.
Thus, it predominantly focuses on the new skills top companies need.
And the best bit of it all?
You'll receive a certificate upon completion - and you'll have your own working LLM RAG project.

I’ve seen Louis's brilliance as an AI educator firsthand, through his bestselling book “Building LLMs for Production” and his YouTube channel with 60k+ subscribers.
Now, everything comes together in this battle-tested course that will teach you how to:
Create, deploy, and manage advanced AI solutions
Build an impressive portfolio
Develop your own advanced LLM product
With these skills, you can:
Seamlessly transition into high-demand LLM and GenAI roles
Drive innovation in your current job
Turn your product ideas into reality
All within 50+ hours of focused learning
Ready to go from a beginner to an advanced LLM developer?
Support my work using the affiliate link and get 15% off with code: Paul_15
100% company reimbursement eligible
Prompt engineering vs. RAG vs. Fine-tuning: When to fine-tune?
Fine-tuning should NEVER be the first step when building an AI system.
Here’s the only time you should do it:
When nothing else works.
But let's face it... most teams jump straight into fine-tuning.
Why?
Because it feels technical. Custom. Smart.
In reality, it’s often just unnecessary complexity.
Before you spend hours generating synthetic data and burning through GPUs, you must ask yourself three questions:
Can I solve this with smart prompt engineering?
Can I improve it further by adding RAG?
Have I even built an evaluatable system yet?
If the answer to those isn’t a solid "YES," you have no business fine-tuning anything.
I say this all the time -
"You don’t need your own model; you need better system design."
Prompt engineering handles ~30–50% of cases
RAG handles another ~30–40%
Fine-tuning? Reserve it for the last 10% (when the problem demands it)
For example, in our work at Decoding ML, we only fine-tune when:
The context window is too small for RAG to help
The task requires domain-specific tone, behavior, or reasoning
The system is mature enough to warrant the extra complexity
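The decision order above can be sketched as a small Python helper. The function name and flags are hypothetical, for illustration only; the point is the priority ordering, not the API:

```python
def choose_approach(prompting_works: bool,
                    rag_helps: bool,
                    has_evals: bool,
                    context_too_small: bool = False,
                    needs_domain_behavior: bool = False) -> str:
    """Pick the cheapest technique that solves the problem.

    Mirrors the order argued above: prompt engineering first,
    then RAG, and fine-tuning only when the problem demands it.
    """
    if not has_evals:
        # Without an evaluatable system, you can't justify any of this.
        return "build evaluation first"
    if prompting_works:
        return "prompt engineering"
    if rag_helps and not context_too_small:
        return "RAG"
    if context_too_small or needs_domain_behavior:
        return "fine-tuning"
    return "revisit system design"


print(choose_approach(prompting_works=False, rag_helps=True, has_evals=True))
```

Note that fine-tuning is only reachable after the cheaper options have been ruled out, which is exactly the discipline the section argues for.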
Anything sooner is overkill.
Thanks to my co-author on The LLM Engineer’s Handbook for helping sharpen this thinking, especially when mapping the tradeoffs between fine-tuning, prompting, and RAG.
Want to learn more? Check out Lesson 4 of the Second Brain AI Assistant course ↓
Whenever you’re ready, there are 3 ways we can help you:
Perks: Exclusive discounts on our recommended learning resources
(books, live courses, self-paced courses and learning platforms).
The LLM Engineer’s Handbook: Our bestselling book teaching an end-to-end framework for building production-ready LLM and RAG applications, from data collection to deployment (get up to 20% off using our discount code).
Free open-source courses: Master production AI with our end-to-end open-source courses, which reflect real-world AI projects and cover everything from system architecture to data collection, training and deployment.
Images
If not otherwise stated, all images are created by the author.