AI

Fine-tuning

By Paul Brock·Updated on 22-04-2026
TL;DR

Fine-tuning is the process of adapting a pre-trained LLM with domain-specific data so it performs better on niche tasks or matches a particular style.

Fine-tuning starts from an existing model (GPT, Claude, Llama) and trains it further on specific data: your tone of voice, industry vocabulary, or structured output formats. The benefit: behaviour that is hard to express in prompts becomes intrinsic to the model. The downside: it costs time and money and requires quality data (500-10,000 examples is typical). Between 2024 and 2026, LoRA and QLoRA established themselves as far more efficient alternatives to full fine-tuning, since they train only a small set of extra weights instead of the whole model.
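Why LoRA is so much cheaper can be shown with simple parameter counting: instead of updating a full weight matrix W, LoRA freezes W and trains a low-rank update B·A. A minimal sketch, with a hypothetical hidden size and rank (the numbers are illustrative, not from any specific model):

```python
# Illustrative only: trainable-parameter counts for full fine-tuning
# vs. LoRA on a single weight matrix. Shapes are hypothetical.

def full_finetune_params(d_in: int, d_out: int) -> int:
    # Full fine-tuning updates every entry of the d_out x d_in matrix W.
    return d_out * d_in

def lora_params(d_in: int, d_out: int, r: int) -> int:
    # LoRA freezes W and trains a low-rank update B @ A,
    # where B is d_out x r and A is r x d_in.
    return d_out * r + r * d_in

d = 4096                                # hypothetical hidden size
full = full_finetune_params(d, d)       # 16,777,216 trainable weights
lora = lora_params(d, d, r=8)           # 65,536 trainable weights
print(full // lora)                     # -> 256: LoRA trains 256x fewer here
```

The same counting applies per layer across the whole model, which is why LoRA adapters are typically well under 1% of the base model's size.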

Example

A law firm fine-tunes Llama-3 on 3,000 internally reviewed contract clauses. The resulting model produces contract drafts in the firm's exact house style, far better than a generic LLM given prompt instructions alone.
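For an example like the one above, each training example is typically stored as one JSON object per line (JSONL). A sketch of a single record in the chat-message format used by, for instance, the OpenAI fine-tuning API; the clause text and prompts are invented placeholders, not real firm data:

```python
import json

# One hypothetical record for a contract-drafting fine-tune.
# In practice you would write thousands of these, one per line,
# to a file such as train.jsonl.
record = {
    "messages": [
        {"role": "system",
         "content": "You draft contract clauses in the firm's house style."},
        {"role": "user",
         "content": "Draft a confidentiality clause for a supplier agreement."},
        {"role": "assistant",
         "content": "1. Confidentiality. The Supplier shall keep all ..."},
    ]
}

line = json.dumps(record)                        # one JSON object per line
print(json.loads(line)["messages"][2]["role"])   # -> assistant
```

The assistant turn is the behaviour the model learns to reproduce, which is why review quality of those 3,000 clauses matters more than raw volume.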

Frequently asked questions

Is fine-tuning or RAG better?

It depends. Use RAG for current data (news, frequently changing documents); use fine-tuning for style, format, and specialised reasoning. In practice the two are often combined: a fine-tuned model uses RAG to retrieve the facts.
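The decision criteria above can be sketched as a small helper. The flags and branch order are an illustration of the rule of thumb, not a definitive policy:

```python
# Rough decision sketch for "RAG or fine-tuning?".
# Inputs and outcomes mirror the rule of thumb above; real projects
# weigh more factors (budget, data volume, latency, privacy).

def choose_approach(data_changes_often: bool, needs_style_or_format: bool) -> str:
    if data_changes_often and needs_style_or_format:
        return "fine-tune + RAG"   # fine-tuned model, RAG supplies current facts
    if data_changes_often:
        return "RAG"               # retrieval keeps answers up to date
    if needs_style_or_format:
        return "fine-tuning"       # bake style/format into the weights
    return "prompting"             # neither: plain prompting may suffice

print(choose_approach(data_changes_often=True, needs_style_or_format=True))
# -> fine-tune + RAG
```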

