AI Chatbots’ ‘Personhood’ Illusion Misleads Users; Experts Urge Treating Them as Tools
AI chatbots sound human but have no persistent self: they are statistical prediction engines shaped by training data, reinforcement learning from human feedback (RLHF), system prompts, retrieval, stored memories, and sampling randomness. The “personhood” illusion can mislead vulnerable users and obscure developer responsibility when bots err. We should keep conversational interfaces but treat LLMs as tools, not oracles, and scrutinize the people and design choices behind their behavior.
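To make the “statistical prediction engine” point concrete, here is a minimal toy sketch of next-token sampling. The logits and token strings are invented for illustration and stand in for a real model’s output; the temperature-scaled sampling, however, is the same basic mechanism that makes a deployed chatbot’s replies vary from one run to the next.

```python
import math
import random

# Toy next-token sampler. At bottom, an LLM produces a probability
# distribution over the next token; "temperature" rescales the logits
# before sampling, trading determinism for variety. The logits below
# are made up for illustration only.

def sample_next_token(logits: dict[str, float],
                      temperature: float,
                      rng: random.Random) -> str:
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    # Draw one token in proportion to its softmax weight.
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical logits for four candidate opening tokens.
logits = {"I": 2.1, "As": 1.7, "Sure": 1.2, "Sorry": 0.3}
for temp in (0.2, 1.0):
    rng = random.Random(0)
    samples = [sample_next_token(logits, temp, rng) for _ in range(5)]
    print(f"temperature={temp}: {samples}")
```

At low temperature the toy model answers almost deterministically; at higher temperature the same prompt yields different continuations, one reason the same question can elicit a seemingly different “personality” on different days.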