NextFin

Grammarly Faces Backlash as 'Expert Review' Feature Mimics Writers Without Consent

Summarized by NextFin AI
  • Grammarly's new 'Expert Review' feature has faced backlash for using the likeness of deceased scholars without consent, raising ethical concerns within the academic community.
  • Living journalists have reported unauthorized use of their names and identities, prompting complaints about professional misrepresentation and raising intellectual property concerns.
  • Despite defense from Superhuman's VP, the feature's reliance on publicly available works raises questions about the commercialization of intellectual property and the ethics of AI.
  • The controversy may prompt regulatory intervention regarding digital twins and personality rights, reflecting broader implications for AI in the market.

NextFin News - Grammarly, the ubiquitous writing assistant that recently rebranded its parent entity to Superhuman, is facing a mounting backlash over a new "Expert Review" feature that critics say is neither expert nor a review. Launched as part of a suite of AI-powered agents in late 2025, the tool promises to refine user drafts by channeling the perspectives of world-renowned thinkers and journalists. However, a wave of reports from TechCrunch, Wired, and The Verge has revealed that the "experts" in question—ranging from living tech critics to deceased historians—never gave their consent to be modeled, and in many cases, the AI’s advice bears little resemblance to their actual work.

The controversy reached a boiling point this week when users discovered that the feature was "summoning" the digital ghosts of recently deceased scholars to critique modern prose. Vanessa Heggie, an associate professor at the University of Birmingham, flagged a particularly grim instance in which the platform offered analysis from an AI agent modeled on David Abulafia, a prominent historian who died in January. This "necromantic" approach to product development has sparked outrage in the academic community, with Yale historian C.E. Aubin noting that the system validates the profound mistrust scholars feel toward AI companies that scrape intellectual property to build "unethical" personality clones.

Beyond the ethical concerns surrounding the deceased, living journalists are finding their professional identities co-opted without compensation or consultation. Writers at The Verge and Bloomberg reported seeing their names used as "perspectives" within the Grammarly interface, often accompanied by outdated job titles or reductive summaries of their editorial styles. When TechCrunch tested the feature, it suggested "leveraging anecdotes" in the manner of Kara Swisher or "posing accountability questions" in the style of Timnit Gebru. The irony is sharp: a tool designed to enhance clarity and authority is built on a foundation of unauthorized imitation that many of the imitated individuals find professionally insulting.

Alex Gay, vice president of product at Superhuman, defended the feature by stating that the experts are referenced because their works are "publicly available and widely cited." The company maintains that the tool is intended for "informational purposes" and does not claim official endorsement. Yet, this legalistic defense does little to address the core tension of the generative AI era: the gap between "publicly available" and "free for commercial exploitation." By packaging the stylistic DNA of specific humans into a paid subscription service, Grammarly has moved from being a utility that helps people write better to a platform that sells a simulated version of other people's brains.

The fallout for Grammarly could be more than just a public relations headache. As U.S. President Trump’s administration continues to navigate the complexities of intellectual property in the age of automation, the "Expert Review" debacle serves as a textbook case for potential regulatory intervention regarding "digital twins" and personality rights. For a company that once defined the gold standard for digital proofreading, the pivot to "expert" simulation feels like a desperate attempt to maintain relevance in a market saturated by LLMs. Instead of providing genuine expertise, the feature offers a hall of mirrors—a series of prompts dressed up in the stolen robes of the world’s most respected voices.

Explore more exclusive insights at nextfin.ai.

Insights

What ethical issues arise from Grammarly's 'Expert Review' feature?

How does the 'Expert Review' feature challenge intellectual property rights?

What has been the response from the academic community regarding this feature?

What are the implications of using deceased scholars' perspectives without consent?

How do users perceive the accuracy of advice from the 'Expert Review' feature?

What regulatory challenges could Grammarly face due to this controversy?

What are the core differences between Grammarly's previous offerings and the new feature?

In what ways has Grammarly's rebranding to Superhuman impacted its market position?

What trends in AI are reflected in the backlash against the 'Expert Review'?

How does the use of living journalists' names in the tool affect their professional identities?

What are the arguments made by Grammarly in defense of the 'Expert Review' feature?

What potential long-term effects could this controversy have on AI writing tools?

How do competitors in the writing assistant market respond to ethical concerns?

What historical examples exist of similar controversies in technology?

What are the limitations inherent in AI-generated content like Grammarly's 'Expert Review'?

How might user feedback influence future iterations of Grammarly's features?
