NextFin News - Grammarly, the ubiquitous writing assistant whose parent company recently rebranded as Superhuman, is facing mounting backlash over a new "Expert Review" feature that critics say is neither expert nor a review. Launched as part of a suite of AI-powered agents in late 2025, the tool promises to refine user drafts by channeling the perspectives of world-renowned thinkers and journalists. But a wave of reports from TechCrunch, Wired, and The Verge has revealed that the "experts" in question, ranging from living tech critics to deceased historians, never consented to being modeled, and that in many cases the AI's advice bears little resemblance to their actual work.
The controversy reached a boiling point this week when users discovered that the feature was "summoning" the digital ghosts of recently deceased scholars to critique modern prose. Vanessa Heggie, an associate professor at the University of Birmingham, flagged a particularly grim instance in which the platform offered analysis from an AI agent modeled on David Abulafia, a prominent historian who passed away in January. This "necromantic" approach to product development has sparked outrage among academics, with Yale historian C.E. Aubin noting that the system validates the profound mistrust scholars feel toward AI companies that scrape intellectual property to build "unethical" personality clones.
Beyond the ethical concerns surrounding the deceased, living journalists are finding their professional identities co-opted without compensation or consultation. Writers at The Verge and Bloomberg reported seeing their names used as "perspectives" within the Grammarly interface, often accompanied by outdated job titles or reductive summaries of their editorial styles. When TechCrunch tested the feature, it suggested "leveraging anecdotes" in the manner of Kara Swisher and "posing accountability questions" in the style of Timnit Gebru. The irony is sharp: a tool designed to enhance clarity and authority is built on a foundation of unauthorized imitation that many of the imitated find professionally insulting.
Alex Gay, vice president of product at Superhuman, defended the feature by stating that the experts are referenced because their works are "publicly available and widely cited." The company maintains that the tool is intended for "informational purposes" and does not claim official endorsement. Yet, this legalistic defense does little to address the core tension of the generative AI era: the gap between "publicly available" and "free for commercial exploitation." By packaging the stylistic DNA of specific humans into a paid subscription service, Grammarly has moved from being a utility that helps people write better to a platform that sells a simulated version of other people's brains.
The fallout for Grammarly could be more than a public relations headache. As U.S. President Trump's administration continues to navigate the complexities of intellectual property in the age of automation, the "Expert Review" debacle serves as a textbook case for potential regulatory intervention on "digital twins" and personality rights. For a company that once defined the gold standard for digital proofreading, the pivot to "expert" simulation feels like a desperate attempt to stay relevant in a market saturated with LLMs. Instead of providing genuine expertise, the feature offers a hall of mirrors: a series of prompts dressed up in the stolen robes of the world's most respected voices.
Explore more exclusive insights at nextfin.ai.
