
Chain of Thought (CoT)

By Paul Brock·Updated on 24-04-2026
TL;DR

Chain of Thought is a prompting technique that has the LLM reason step by step before giving its final answer, which can dramatically improve accuracy on complex tasks.

Chain of Thought prompting (introduced by Google researchers in 2022) showed that LLMs become markedly better at arithmetic, logic, and multi-step problems when asked to 'think aloud'. Simply adding 'Let's think step by step' before the answer lifted GPT's maths score from 18% to 57%. Modern reasoning models (OpenAI's o1 and o3, Claude with Extended Thinking) build this step-by-step reasoning into the model itself.
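The trigger-phrase idea above can be sketched in a few lines. This is a minimal illustration, not any specific library's API: the helper names and the sample question are made up for the example.

```python
# Zero-shot Chain of Thought: append a trigger phrase that invites the
# model to reason aloud before committing to a final answer.

def direct_prompt(question: str) -> str:
    """Plain prompt: the model is asked to answer immediately."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """CoT prompt: the trigger phrase elicits step-by-step reasoning first."""
    return f"Q: {question}\nA: Let's think step by step."

# Example question (made up for illustration):
question = "A train travels 120 km in 1.5 hours. What is its average speed?"
print(cot_prompt(question))
```

The only difference between the two prompts is the trailing trigger phrase; the accuracy gains reported in the research come from that change alone.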

Example

Question: 'A product line costs €480 per 12 units and sells at €65 each. With a 10% per-unit discount, what is the margin on 36 units?' With a CoT instruction, the model works it out transparently: unit cost, sale price after discount, quantity, total margin — with a far lower error rate.
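The reasoning chain a CoT-prompted model should produce for this question can be checked in plain Python. One assumption is baked in: 'margin' here means sale price minus cost, in euros, not a percentage.

```python
# Step-by-step calculation for the margin example from the text.

units_per_batch = 12
batch_cost = 480.0      # EUR per batch of 12 units
list_price = 65.0       # EUR per unit, before discount
discount = 0.10         # 10% per-unit discount
quantity = 36

cost_per_unit = batch_cost / units_per_batch    # 480 / 12 = 40.00
sale_price = list_price * (1 - discount)        # 65 * 0.90 = 58.50
margin_per_unit = sale_price - cost_per_unit    # 58.50 - 40.00 = 18.50
total_margin = margin_per_unit * quantity       # 18.50 * 36 = 666.00

print(f"Total margin on {quantity} units: EUR {total_margin:.2f}")
```

Each intermediate line mirrors one step the model would verbalise under CoT, which is exactly what makes its arithmetic easy to audit.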

Frequently asked questions

Does CoT work for every model?

CoT works best for large models (roughly 70B parameters and up). Smaller models benefit less, since they have too little capacity to reason reliably. Dedicated reasoning models apply it automatically.

Must I trigger CoT explicitly?

For non-reasoning models, yes ('Think step by step'). For o1, o3, or Claude with Extended Thinking, it happens internally — adding explicit CoT prompts can even hurt performance.


Further reading

  • Our service: GEO

Need help with SEO or GEO?

We help Bitcoin, AI and fintech companies get found in Google and in AI search engines.

Book a call