I read Claude’s prompt. Here are 5 tips to master prompt engineering.
The simplest way to find the right LLMs. Copying code from ChatGPT won't make you an AI Engineer.
This week’s topics:
I read Claude’s prompt. Here are 5 tips to master prompt engineering.
Copying code from ChatGPT won't make you an AI Engineer
The simplest way to find the right LLMs
I read Claude’s prompt. Here are 5 tips to master prompt engineering.
Claude’s leaked system prompt just confirmed what we all suspected:
Vertical > General (No AGI).
The best LLMs won’t do everything.
They’ll do one thing extremely well.
I read all 22,000 words of Claude's leaked system prompt…
It wasn’t some vague, high-level “you are a helpful assistant” instruction set.
It was a deeply engineered blueprint custom-built for one job.
→ Code-heavy tasks in JavaScript and Python
Here’s what stood out (and what it signals about where LLMs are heading):
1. It uses XML to structure its thinking
Not your typical “You are a helpful assistant.” This is industrial-grade logic.
It segments instructions into reusable XML tags:
<search_reminders>
<tool_use_policy>
<automated_reminder_from_anthropic>
Each one acts like a callable function in a reasoning engine.
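As a rough sketch of this idea, here is how a system prompt might be composed from reusable XML-tagged sections. The tag names mirror the ones quoted above; the helper function and section bodies are illustrative, not Anthropic's actual implementation:

```python
def xml_section(tag: str, body: str) -> str:
    """Wrap an instruction block in an XML tag so the model can
    treat it as a named, reusable unit of instructions."""
    return f"<{tag}>\n{body.strip()}\n</{tag}>"

# Illustrative section bodies; only the tag names come from the leaked prompt.
sections = {
    "search_reminders": "Prefer internal knowledge; search only when necessary.",
    "tool_use_policy": "Limit yourself to 1-2 tool calls for simple queries.",
    "automated_reminder_from_anthropic": "Avoid content about real, named public figures.",
}

system_prompt = "\n\n".join(xml_section(tag, body) for tag, body in sections.items())
print(system_prompt)
```

The payoff of this structure is that later instructions can reference a section by name ("follow <tool_use_policy>"), much like calling a function.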
2. Tool use isn’t just allowed (it’s engineered)
Claude is taught how to use tools like a software engineer:
When to call
When not to
Use memory first
Limit to 1–2 calls
Over 5? Follow a strict workflow
Not “call a tool,” but design a workflow.
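The escalation logic described above can be sketched as a simple decision function. The thresholds (memory first, 1–2 calls, over 5 means a strict workflow) come from the prompt; the function name, return labels, and the 3–5 "batched" middle tier are my own illustrative assumptions:

```python
def plan_tool_use(known_from_memory: bool, estimated_calls: int) -> str:
    """Decide how to approach a query, following a tool-budget policy
    like the one described in Claude's system prompt."""
    if known_from_memory:
        return "answer_from_memory"        # use memory first, no tools
    if estimated_calls <= 2:
        return "single_pass_tools"         # 1-2 calls: call, then answer
    if estimated_calls <= 5:
        return "batched_tools"             # assumed middle tier: batch and verify
    return "strict_research_workflow"      # over 5 calls: structured workflow

print(plan_tool_use(known_from_memory=True, estimated_calls=0))
print(plan_tool_use(known_from_memory=False, estimated_calls=7))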
3. Moderation and legal safety are hardcoded
🤖 “Claude is happy to write creative content involving fictional characters, but avoids writing content involving real, named public figures.”
Even moderation is framed as a behavior, not a filter.
4. It teaches Claude how to reason step-by-step
Want Claude to count words or characters?
🤖 “It explicitly counts... assigning a number to each. It only answers once it has performed this step.”
Want to analyze books or code?
🤖 “Claude should provide a summary from its internal knowledge, and only search when necessary.”
This is instruction tuning in the wild.
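The counting instruction quoted above can be mimicked deterministically in plain code, which makes it clear why the prompt demands it: number every item first, answer only after the explicit pass. This is my own minimal illustration, not code from the prompt:

```python
def count_words_explicitly(text: str) -> int:
    """Count words the way the prompt instructs Claude to: assign a
    number to each word, and only answer after that explicit step."""
    words = text.split()
    for i, word in enumerate(words, start=1):
        print(f"{i}: {word}")  # the explicit numbering pass
    return len(words)          # answer only after counting

count_words_explicitly("Claude counts before it answers")
```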
5. It includes usage guides for specific tech stacks
Yes, inside the system prompt.
How to use TailwindCSS
When to reach for lodash vs. vanilla JS
What Claude should do when reading .env files
How to parse messy CSVs
Which React libraries to use for graphs…
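To make this concrete, here is a hypothetical sketch of what stack-specific guidance might look like as prompt sections, modeled on the bullets above. The tag names and wording are mine, not quotes from the leaked prompt:

```python
# Hypothetical stack-guidance sections, in the XML-tagged style the
# prompt uses elsewhere. Every tag name and rule below is illustrative.
STACK_GUIDANCE = """
<frontend_guidance>
Use TailwindCSS utility classes for styling; avoid inline styles.
Prefer vanilla JS; reach for lodash only when it clearly saves code.
</frontend_guidance>

<data_guidance>
Never echo the contents of .env files back to the user.
Parse messy CSVs defensively: sniff the delimiter, skip malformed rows.
</data_guidance>
""".strip()

print(STACK_GUIDANCE)
```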
This is a fine-tuned developer assistant pretending to be general-purpose.
So what’s the big takeaway?
Unlike GPT or Gemini’s system prompts, which are short, abstract, and vague, Claude’s is specific, opinionated, and operational.
It’s not trying to be everything.
It’s trying to do certain things very well.
Code in JS and Python
Use tools with precision
Write with context and restraint
Reason step-by-step
Stay within legal and ethical boundaries
And that explains why Claude is so good at what it does (and not great at everything else).
If you’re building agentic systems or advanced assistants, go read the prompt.
It’s a masterclass in instruction design.
Here’s a repository that aggregates system prompts from Anthropic, Google, OpenAI, and X.ai. Reading them is a fantastic way to learn prompt engineering from the best:
Copying code from ChatGPT won't make you an AI Engineer (Affiliate)
Relying on code snippets you don't understand is a dead end. To build real-world AI applications, you need to move beyond copy-pasting and start thinking like a developer.
The problem? Most Python courses weren't designed for the age of LLMs.
That's why we're recommending the first LLM-native Python course for complete beginners: Python Primer for Generative AI, a project-based, hands-on course with 13 projects, from beginner to advanced.
The course is designed for absolute beginners with no prior coding experience.
What we love about it is that it takes a new mindset fit for a world dominated by AI. It teaches you how to use Python as a tool, along with LLMs, to build your dream app.

We recommend this course for individuals in a hurry who:
Want to learn the key skills of the Python ecosystem in weeks, not months.
Want to learn by example through 13 hands-on projects.
Don’t want to get into the nitty-gritty math, but want to use Python and LLMs to build AI-powered software.
Want to use LLMs the right way, as an assistant to write better code faster.
💡 Long story short, you'll learn Python the LLM-native way.
If you're considering buying it, use code Paul_15 to support us and receive 15% off.
(The code is available to all courses from the Towards AI Academy.)
P.S. In case you are undecided, you can have a free preview of the course here.
The simplest way to find the right LLMs
Finding the right open-source LLMs to work with is a pain in the backside.
98% of LLM leaderboards are bloated.
Too many closed models.
Too many broken repos.
Too little clarity on what actually works in production.
It's frustrating.
Fortunately, I found something to help mitigate this issue...
If you’re looking for open-source LLMs that just run, whether for fine-tuning, quantization, or deployment, Unsloth has done the hard work for you.
They’ve compiled a list of all the popular, supported, and production-viable models that:
Fine-tune easily (with Unsloth + QLoRA)
Quantize to GGUFs for local inference (Ollama, llama.cpp, OpenWebUI)
Play well with Hugging Face and Python
Come with working code and notebook examples
Deploy easily to Hugging Face Inference Endpoints, AWS, GCP, Modal, and more
No more jumping between broken GitHub repos or guessing which models will survive a production pipeline.
It’s the fastest way to stay current without losing your mind.
If you’re working with open-source LLMs, just bookmark this list.
Whenever you’re ready, there are 3 ways we can help you:
Perks: Exclusive discounts on our recommended learning resources (books, live courses, self-paced courses, and learning platforms).
The LLM Engineer’s Handbook: Our bestseller book on teaching you an end-to-end framework for building production-ready LLM and RAG applications, from data collection to deployment (get up to 20% off using our discount code).
Free open-source courses: Master production AI with our end-to-end open-source courses, which reflect real-world AI projects and cover everything from system architecture to data collection, training, and deployment.
Images
If not otherwise stated, all images are created by the author.
There are over one million models on Hugging Face, and I’ve always found it overwhelming! The Unsloth list is super helpful; thanks for surfacing it!
Claude's system prompt does not contain specific guidance for particular tech stacks. What you're seeing is the part of the system prompt pertaining to the "Artifacts" feature. Claude has its own environment in which it can execute code, but that environment has limitations with regard to what it can access and import. The system prompt includes guidance and constraints to ensure code it generates for Artifacts will work within that environment, part of which involves instructing Claude to use certain libraries and approaches. But outside of this, it is not at all biased toward a particular tech stack in its responses; that would be against Anthropic's goals for the model.