Few-shot learning
Few-shot learning is the technique of providing an LLM with a few examples in the prompt (in-context learning) to teach it the desired style, format, or reasoning for a new input.
Few-shot prompting provides 2–5 input/output examples in the prompt, followed by a new input. The model learns the pattern without being trained: pure in-context learning. For consistent formats, a specific tone of voice, or company jargon, few-shot is often cheaper and more flexible than fine-tuning. The downside is that every example is resent with each request and costs tokens.
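To make this concrete, here is a minimal sketch of few-shot prompting with the OpenAI Python SDK. The system instruction, the example product/slogan pairs, and the model name are illustrative assumptions, not prescribed values; each example is encoded as a user/assistant message pair so the model can infer the pattern purely from context.

```python
# Minimal few-shot prompting sketch (OpenAI Python SDK).
# Example pairs, system prompt, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input/output examples demonstrating the desired pattern.
examples = [
    ("Solar panel X200", "Power your home, empower your future."),
    ("E-bike CityGlide", "Glide through the city, skip the traffic."),
    ("Smartwatch Pulse 5", "Your health, one glance away."),
]

# Each example becomes a user/assistant pair, so the model picks up
# the format from context alone; no training step is involved.
messages = [{"role": "system", "content": "You write short marketing slogans."}]
for product, slogan in examples:
    messages.append({"role": "user", "content": f"Product: {product}. Slogan:"})
    messages.append({"role": "assistant", "content": slogan})

# The new input follows exactly the same format as the examples.
messages.append({"role": "user", "content": "Product: Aura headphones. Slogan:"})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```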
Example
Input: three examples of product name → marketing slogan (webrock house style), followed by: 'Product: Antminer S21. Slogan:'. The output automatically matches the style of the examples, with no training required.
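Flattened into a single completion-style prompt, the example above might look like the sketch below; the slogan texts are hypothetical placeholders, not actual webrock copy.

```python
# The few-shot pattern from the example above as one flat prompt string.
# Slogan texts are hypothetical placeholders for the house style.
prompt = """Product: Solar panel X200. Slogan: Power your home, empower your future.
Product: E-bike CityGlide. Slogan: Glide through the city, skip the traffic.
Product: Smartwatch Pulse 5. Slogan: Your health, one glance away.
Product: Antminer S21. Slogan:"""
```

The model completes the final line in the style of the three preceding pairs.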
Frequently asked questions
How many examples are optimal?
Usually 2–5. Adding more yields diminishing returns and costs extra tokens. Newer models (Claude 3.7, GPT-4o) often need only 1–2.
Few-shot or fine-tuning?
Few-shot: fast, flexible, and requires no training. Fine-tuning: better performance at high volumes (>1000 queries), lower per-query cost, and privacy (your own model).
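The volume threshold can be checked with a quick break-even calculation; all figures below are placeholder assumptions, not real prices, and the sketch simplifies by assuming identical per-token rates for both setups.

```python
# Illustrative break-even sketch: few-shot vs. fine-tuning.
# All figures are placeholder assumptions, not real rates.
extra_prompt_tokens = 400    # tokens the few-shot examples add to every query
price_per_1k_tokens = 0.005  # assumed input price per 1,000 tokens (USD)
finetune_fixed_cost = 25.0   # assumed one-off fine-tuning cost (USD)

# Extra cost per query from resending the examples every time.
extra_cost_per_query = extra_prompt_tokens / 1000 * price_per_1k_tokens

# Queries after which the one-off fine-tuning cost is amortized.
break_even = finetune_fixed_cost / extra_cost_per_query
print(f"Fine-tuning pays off after ~{break_even:,.0f} queries")
```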
Further reading
- Our service: GEO