Fine-tuning
Pretraining teaches the model how to use language.
Fine-tuning teaches the model how to be useful with language.
Think of it like being a chef.
When you’re training as a chef, you learn about all kinds of foods and many different ways to cook them. You have to learn a lot.
Now, imagine you want to start your own restaurant.
It would be challenging to open a restaurant that focuses on “all kinds of foods”.
It would be far easier to open, say, an Italian, French, or Indian restaurant. Something with a narrower focus.
To do that, you’d have to specialize in that type of food.
That’s what fine-tuning is.
Going back to our school example, it’s like getting a degree in a specific field of study.
With LLMs, fine-tuning is also about shaping how the model “behaves”.
You can tune models to be polite, funny, or even mean.
You can tune them to follow specific rules, come up with their own ideas, or take on a specific role.
Fine-tuning shapes how models learn, and how they apply what they’ve learned to deliver the best results.
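Here’s a rough sketch of what that looks like in practice. Fine-tuning data is just a set of example prompts and responses that show the model the behavior you want it to imitate. The file name and field names below are made up for illustration; each fine-tuning service defines its own exact format.

```python
import json

# A minimal, illustrative example of fine-tuning data.
# Each pair shows the model the behavior we want it to copy,
# in this case a polite customer-support assistant.
examples = [
    {
        "prompt": "My order hasn't arrived yet.",
        "response": "I'm sorry about the delay! Could you share your order number so I can look into it?",
    },
    {
        "prompt": "Cancel my subscription.",
        "response": "Of course. I've started the cancellation, and you'll get a confirmation email shortly. Anything else I can help with?",
    },
]

# JSON Lines (one JSON object per line) is a common format for fine-tuning datasets.
with open("polite_support_bot.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

print(f"Wrote {len(examples)} training examples.")
```

Train on enough examples like these, and the model starts answering new prompts in the same polite style.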
Now that you know how LLMs work and how you can work with them, you can move on to learning how to tune them.
Check out Prompt Techniques or Prompt Frameworks for more.