Zero-shot learning

By Paul Brock · Updated on 24-04-2026
TL;DR

Zero-shot learning is an LLM's ability to perform a task correctly without a single example in the prompt, relying entirely on what it learned during pre-training.

In a zero-shot prompt you ask the model to perform the task directly, with no examples. Modern LLMs (GPT-4, Claude 3.7) perform surprisingly well zero-shot on many tasks because they have seen millions of similar patterns during training. This makes zero-shot prompting efficient for simple, unambiguous tasks; for nuanced or company-specific output, few-shot prompting or fine-tuning is usually needed.

Example

Zero-shot: 'Classify this review as positive or negative: [review]'. No examples, no explanation: the model already knows what sentiment classification is and does it immediately.
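As a minimal sketch (not prescribed by this glossary), the snippet below sends exactly that prompt through the OpenAI Python SDK. The model name and the review text are placeholder assumptions; any capable chat model and any client library would do.

# Zero-shot sentiment classification: one direct instruction, zero examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review = "The battery died after two days and support never replied."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[{
        "role": "user",
        # The entire prompt: a direct instruction, no examples.
        "content": f"Classify this review as positive or negative: {review}",
    }],
)

print(response.choices[0].message.content)  # e.g. "negative"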

Frequently asked questions

Zero-shot or few-shot?

Zero-shot for standard tasks (summarise, translate, classify). Few-shot once you need a specific format, tone or domain context, as in the sketch below.
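A rough illustration of the difference, with invented labels and example reviews: the few-shot prompt pins an exact output format (POS/NEG) that the zero-shot prompt leaves to the model's default.

# Zero-shot: the model chooses its own wording for the answer.
zero_shot = "Classify this review as positive or negative: {review}"

# Few-shot: two examples fix the exact labels and layout of the output.
few_shot = """Classify each review. Answer with exactly POS or NEG.

Review: "Arrived on time, works perfectly."
Label: POS

Review: "Cheap plastic, broke in a week."
Label: NEG

Review: "{review}"
Label:"""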

When does zero-shot fail?

With niche jargon, company-specific formats, rare languages, or whenever the model must deviate from its default behaviour. In those cases, switch to few-shot prompting or fine-tuning.

Further reading

  • Our service: GEO

Need help with SEO or GEO?

We help Bitcoin, AI and fintech companies get found in Google and in AI search engines.

Book a call