Anthropic’s Claude AI model can now handle longer prompts
Anthropic just supercharged its Claude Sonnet 4 AI coding model with a one-million-token context window: roughly 750,000 words, or more than the entire Lord of the Rings trilogy. That's five times its previous 200,000-token limit and beats GPT-5's 400,000 tokens. Available via Anthropic's API, Amazon Bedrock, and Google Cloud, the upgrade lets developers feed entire codebases or document sets in a single request, boosting performance on long, multi-step tasks. Prompts that run past 200,000 tokens incur higher per-token fees, but Anthropic hopes the extra context cements Claude's lead in AI coding as competition from GPT-5 intensifies.
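For developers hitting the API directly, the workflow looks roughly like the sketch below: read in a large codebase or document dump and send it as a single message, opting into the long-context beta. The model ID, beta flag, file name, and prompt here are assumptions for illustration, not values confirmed by this article; check Anthropic's API documentation for the current identifiers.

```python
# Minimal sketch: sending a very large prompt through the Anthropic Python SDK.
# The model ID and beta flag below are assumptions based on the announcement;
# consult Anthropic's docs for the current values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical dump of a codebase that would have blown past the old 200,000-token limit.
with open("whole_codebase.txt", "r", encoding="utf-8") as f:
    big_context = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model ID
    betas=["context-1m-2025-08-07"],    # assumed beta flag enabling the 1M-token window
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": f"{big_context}\n\nSummarize the architecture of this codebase.",
        }
    ],
)

print(response.content[0].text)
```

Note that sending near the full window on every request adds up quickly under the higher long-context pricing, so batching questions against one large prompt, or using prompt caching where available, is the more economical pattern.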