
Fine-tuning

Adjusting a pre-trained AI model's weights on a smaller, specialised dataset so it performs better on a specific task.

In detail

Fine-tuning takes a foundation model that has been pre-trained on a vast general dataset and continues training it on a smaller dataset specific to your domain or task. The result is a model that retains its general knowledge but is better at, for example, classifying support tickets, drafting in your house style, or generating outputs in a specific format. Modern parameter-efficient fine-tuning techniques such as LoRA and QLoRA make this affordable for small businesses: instead of updating every weight in the model, they freeze the original weights and train a small set of added low-rank matrices, cutting the trainable parameter count dramatically. Full fine-tuning was previously cost-prohibitive at this scale.
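To make the parameter savings concrete, here is a minimal NumPy sketch of the LoRA idea for a single linear layer. All names and dimensions are illustrative assumptions, not taken from any particular library: the pre-trained weight matrix `W` stays frozen, and only two small matrices `A` and `B` are trained, so the layer's effective weight becomes `W + (alpha / r) * B @ A`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix (stands in for one linear layer).
d_out, d_in = 512, 512
W = rng.standard_normal((d_out, d_in))

# LoRA: learn a low-rank update W + (alpha / r) * B @ A instead of touching W.
r, alpha = 8, 16
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init: no change at start

def lora_forward(x):
    """Forward pass through the adapted layer."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, d_in))
# Because B starts at zero, the adapted layer initially matches the base model.
assert np.allclose(lora_forward(x), x @ W.T)

# Parameter comparison: full fine-tuning would update all of W.
full_params = W.size              # 512 * 512 = 262144
lora_params = A.size + B.size     # 2 * 8 * 512 = 8192, a 32x reduction
print(full_params, lora_params)
```

In real models the base layers are far larger, so the reduction is typically even more pronounced; QLoRA pushes costs down further by also quantising the frozen weights.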

Why it matters for Australian business

Most Australian SMBs do not need to fine-tune. Retrieval-augmented generation (RAG) plus careful prompting handles the majority of use cases at lower cost and complexity, and lets you change the underlying knowledge without retraining. Fine-tuning becomes appropriate when the task requires a specific output format the model struggles with, when you need consistent tone across thousands of generations, or when a smaller fine-tuned model is cheaper to run at scale than a larger general model.

Want to talk through how this applies to your business? Book a free consult.