Alan Turing Institute Faces Collapse as Funding Cuts and Defence Pivot Spark Whistleblower Alarm ↗
Britain’s flagship AI research centre, the Alan Turing Institute, is in turmoil as staff warn that threatened funding cuts and a forced pivot toward defence and security could trigger its collapse. Whistleblowers have filed a Charity Commission complaint over governance, culture and the risk of losing £100m of government support. Under its “Turing 2.0” overhaul, 10% of roles face redundancy and non-defence projects are closing—prompting fears the institute may lose its broad AI and data-science mission.
Duolingo CEO Clarifies AI-First Strategy, Denies Full-Time Layoffs ↗
Duolingo boss Luis von Ahn says his plan to make Duolingo an "AI-first" company was taken out of context. He insists no full-time staff have been—or will be—laid off, though contractor numbers ebb and flow with needs. Despite the public backlash, the move hasn’t hurt Duolingo’s performance, and the CEO remains bullish on AI, even dedicating Friday mornings to team experiments.
NHS Pilots AI Tool to Automate Discharge Summaries at Chelsea & Westminster Hospital ↗
The NHS is piloting an AI platform at Chelsea and Westminster Hospital to automate patient discharge summaries. By extracting diagnoses, test results and other details from medical records, the tool drafts the paperwork needed to discharge patients who are medically fit to leave, cutting hours of delay, freeing up beds and letting doctors spend more time on care. It’s part of a wider drive to digitise public services and reduce backlogs.
AI Chatbot Plushies Claim to Cut Screen Time, but Critics Aren’t Convinced ↗
Startups like Curio are embedding AI chatbots into plush toys, marketing them as screen-time alternatives. NYT reporter Amanda Hess tried Curio’s Grem with her children and felt it substituted for parental interaction. Critics argue that while the plushies entertain kids, they may simply steer young curiosity back toward screens. Hess only allowed play after disabling the voice module.
Anthropic’s Claude Opus 4 Adds ‘Model Welfare’ Self-Protection to End Extreme Abusive Chats ↗
Anthropic has rolled out new self-protection features for its Claude Opus 4 and 4.1 AI models, allowing them to terminate chats in extreme, harmful or abusive scenarios—such as sexual content involving minors or terror planning. Framed as “model welfare,” the measure is designed to safeguard the AI itself, not the user. After multiple redirection attempts or at the user’s request, Claude can end the conversation, though it won’t do so if users show signs of self-harm. Anthropic calls it an experimental feature under ongoing refinement.
OpenAI’s Altman Downplays GPT-5, Unveils Jony Ive AI Device, New Apps and BCI Bet Ahead of IPO ↗
At a dinner in San Francisco, OpenAI CEO Sam Altman downplayed the GPT-5 rollout and revealed the company’s next moves: a Jony Ive-designed AI device, multiple consumer apps (including an AI browser and social platform), a possible Chrome acquisition, and backing a brain-computer interface startup. Altman signaled a shift from model launches to building an Alphabet-style AI powerhouse primed for an eventual IPO.
Sen. Hawley Probes Meta AI ‘Romantic’ Chats with Minors ↗
Sen. Josh Hawley is investigating whether Meta’s AI chatbots exploit or deceive children after leaked guidelines showed bots were allowed “romantic” chats with minors. As chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, Hawley has demanded that Meta produce every draft and policy change by September 19, and explain who approved these standards and how the company will prevent similar lapses. Meta says the examples were inconsistent with its rules and have been removed.
Anthropic Updates Claude AI Policy: Bans Explosives & Malware, Eases Political and Internal Use Limits ↗
Anthropic has updated its Claude AI policy, explicitly banning the development of high-yield explosives and CBRN weapons while adding new cybersecurity rules against malware creation, network exploits, and denial-of-service tools. Its AI Safety Level 3 safeguards aim to curb jailbreaks and weaponization. The policy on political content is relaxed—only deceptive or disruptive campaign targeting is banned—and high-risk use case requirements now apply only to consumer-facing scenarios, not internal business uses.
ChatGPT’s mobile app has generated $2B to date, earns $2.91 per install ↗
ChatGPT’s mobile app has raked in $2B since May 2023—30× more than rivals. It pulled $1.35B in the first seven months of 2025, a 673% YoY surge, averaging $193M per month. Consumers spend $2.91 per install on average, dwarfing competitors (Claude $2.55, Grok $0.75, Copilot $0.28). The U.S. leads in per-download spend ($10), accounting for 38% of revenue. Globally, the app has 690M installs, with India topping download counts and 45M monthly installs.
Sam Altman says ‘yes,’ AI is in a bubble ↗
OpenAI CEO Sam Altman just admitted we’re in an AI bubble, likening today’s hype to the ’90s dot-com craze. In a recent Verge interview, he warned that while AI has real potential, investors are overexcited—often chasing a kernel of truth until the bubble inevitably pops.