Learn an end-to-end framework for production-ready LLM systems by building your LLM twin
Why you should take our new production-ready LLMs course
Decoding ML Notes
Want to learn an end-to-end framework for production-ready LLM systems by building your LLM twin?
Then you are in luck.
↓↓↓
The Decoding ML team and I will release (in a few days) a FREE course called the LLM Twin: Building Your Production-Ready AI Replica.
What is an LLM Twin? It is an AI character that learns to write like somebody by incorporating their style and personality into an LLM.
Within the course, you will learn how to:
architect
train
deploy
...a production-ready LLM twin of yourself powered by LLMs, vector DBs, and LLMOps good practices, such as:
experiment trackers
model registries
prompt monitoring
versioning
deploying LLMs
...and more!
It is an end-to-end LLM course where you will build a real-world LLM system:
→ from start to finish
→ from data collection to deployment
→ production-ready
→ from NO MLOps to experiment trackers, model registries, prompt monitoring, and versioning
Who is this for?
Audience: MLEs, DEs, DSs, or SWEs who want to learn to engineer production-ready LLM systems using good LLMOps principles.
Level: intermediate
Prerequisites: basic knowledge of Python, ML, and the cloud
How will you learn?
The course contains 11 hands-on written lessons and the open-source code you can access on GitHub (WIP).
You can read everything at your own pace.
Costs?
The articles and code are completely free. They will always remain free.
This time, the Medium articles won't be behind any paywall. I want to make them entirely available to everyone.
Meet your teachers!
The course is created under the Decoding ML umbrella by:
Paul Iusztin | Senior ML & MLOps Engineer
Alex Vesa | Senior AI Engineer
Alex Razvant | Senior ML & MLOps Engineer
What will you learn to build?
The LLM architecture of the course is split into 4 Python microservices:
The data collection pipeline
- Crawl your digital data from various social media platforms.
- Clean, normalize, and load the data into a NoSQL DB through a series of ETL pipelines.
- Send database changes to a queue using the CDC pattern.
→ Deployed on AWS.
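To make the CDC (Change Data Capture) step above concrete, here is a minimal, self-contained sketch of the idea: every write to the document store also emits a change event onto a queue, so downstream pipelines react to changes instead of polling the database. All names are illustrative (the in-memory `queue.Queue` stands in for a real message broker), not the course's actual code.

```python
import json
import queue

message_queue = queue.Queue()  # stand-in for a real broker (e.g. RabbitMQ)

class DocumentStore:
    """Tiny in-memory stand-in for a NoSQL collection with CDC enabled."""

    def __init__(self):
        self._docs = {}

    def insert(self, doc_id: str, doc: dict) -> None:
        self._docs[doc_id] = doc
        # CDC: publish the change as an event instead of letting it stay
        # invisible inside the database.
        event = {"op": "insert", "id": doc_id, "doc": doc}
        message_queue.put(json.dumps(event))

store = DocumentStore()
store.insert("post-1", {"platform": "linkedin", "text": "Hello, LLM twin!"})

event = json.loads(message_queue.get())
print(event["op"], event["id"])  # insert post-1
```

The key design point is decoupling: the collection pipeline only writes; consumers of the queue decide what to do with each change.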
The feature pipeline
- Consume messages from a queue through a Bytewax streaming pipeline.
- Every message is cleaned, chunked, embedded (using Superlinked), and loaded into a Qdrant vector DB in real time.
→ Deployed on AWS.
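The clean → chunk → embed steps can be sketched in a few lines of plain Python. This is a toy illustration only: the hash-based "embedding" is a deterministic placeholder, whereas the real pipeline would call an embedding model, and the chunk sizes here are tiny for readability.

```python
import hashlib

def clean(text: str) -> str:
    # Normalize whitespace; a real pipeline also strips markup, emojis, etc.
    return " ".join(text.split())

def chunk(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    # Fixed-size character chunks with overlap so context isn't cut mid-idea.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk_text: str, dim: int = 8) -> list[float]:
    # Toy deterministic "embedding" derived from a hash; the course uses a
    # real embedding model instead.
    digest = hashlib.sha256(chunk_text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

points = [
    {"text": c, "vector": embed(c)}
    for c in chunk(clean("  A   raw  social-media post about LLM systems.  "))
]
print(len(points), len(points[0]["vector"]))  # 3 8
```

Each `point` (text plus vector) is the shape of record a vector DB like Qdrant stores for later similarity search.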
The training pipeline
- Create a custom dataset based on your digital data.
- Fine-tune an LLM using QLoRA.
- Use Comet ML's experiment tracker to monitor the experiments.
- Evaluate and save the best model to Comet's model registry.
→ Deployed on Qwak.
The inference pipeline
- Load and quantize the fine-tuned LLM from Comet's model registry.
- Deploy it as a REST API.
- Enhance the prompts using RAG.
- Generate content using your LLM twin.
- Monitor the LLM using Comet's prompt monitoring dashboard.
→ Deployed on Qwak.
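The "enhance the prompts using RAG" step boils down to: embed the query, retrieve the closest chunks from the vector store, and prepend them to the prompt. A toy sketch, where the hand-made 3-dimensional vectors stand in for real embeddings stored in Qdrant:

```python
import math

store = [
    {"text": "Use a model registry to version LLMs.", "vector": [1.0, 0.1, 0.0]},
    {"text": "Chunk and embed documents for retrieval.", "vector": [0.0, 1.0, 0.2]},
]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def rag_prompt(query: str, query_vector: list[float], top_k: int = 1) -> str:
    # Rank stored chunks by similarity to the query and keep the top_k.
    ranked = sorted(store, key=lambda p: cosine(query_vector, p["vector"]), reverse=True)
    context = "\n".join(p["text"] for p in ranked[:top_k])
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = rag_prompt("How do I version my LLM?", [0.9, 0.2, 0.0])
print("model registry" in prompt)  # True
```

The enriched prompt then goes to the fine-tuned LLM behind the REST API; grounding the answer in retrieved context is what reduces hallucination.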
Along the 4 microservices, you will learn to integrate 3 serverless tools:
- Comet ML as your ML Platform
- Qdrant as your vector DB
- Qwak as your ML infrastructure
Soon, we will release the first lesson from the LLM Twin: Building Your Production-Ready AI Replica course.
To stay updated...
Check out our GitHub and support us with a ⭐️
↓↓↓
🔗 LLM Twin: Building Your Production-Ready AI Replica Course GitHub Repository