Grammarly faced immediate backlash on March 5, 2026, after medieval historian Verena Krebs discovered the company's "Expert Review" tool listing deceased historian David Abulafia, who died in January 2026, as an available expert to review academic papers. The revelation sparked widespread condemnation from academics and writers, who described the practice as "digital necromancy" and "obscene."
Expert Review Simulates Feedback from Famous Writers Without Consent
Launched in August 2025 as part of Grammarly's broader AI feature set (the company has since rebranded as Superhuman), Expert Review lets users receive revision suggestions "from the perspective" of subject-matter experts. The tool simulates feedback from famous writers, including Stephen King, Carl Sagan, and Neil deGrasse Tyson, by training AI models on their published works to approximate their writing styles and critical approaches.
Vanessa Heggie, Associate Professor of History of Science and Medicine at the University of Birmingham, wrote on LinkedIn: "Without anyone's explicit permission it's creating little LLMs based on their scraped work and using their names and reputation." Historian C.E. Aubin told Wired: "These are not expert reviews, because there are no 'experts' involved in producing them."
No Evidence Living Authors Consented to Inclusion
While living authors like Neil deGrasse Tyson theoretically "have the opportunity to say whether they'd like to be turned into a chatbot," no evidence suggests Grammarly sought their permission. Deceased authors like Carl Sagan "cannot because they are dead," critics noted. The controversy reached Hacker News, where it drew 101 points and 118 comments, indicating strong concern among developers and the wider tech community.
Grammarly's response, published on their support page, states: "References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities." This disclaimer-based defense has done little to assuage critics who argue the company is exploiting established names for commercial gain.
Ethical Concerns Span Consent, Copyright, and Reputation
The controversy encompasses multiple ethical dimensions. First, consent issues arise from using living authors' work without permission and deceased authors whose estates were not consulted. Second, copyright concerns stem from scraping published works to train author-specific models. Third, reputation exploitation occurs when companies leverage established names for commercial products. Fourth, accuracy problems emerge because AI approximations may misrepresent authors' actual views.
The story gained coverage across tech media including Wired, The Verge, TechCrunch, Futurism, Cybernews, eWEEK, Decrypt, and Boing Boing, with headlines using terms like "obscene," "necromancy," and "digital grave-robbing."
Broader Questions About AI Training and Identity Rights
The Grammarly incident highlights unresolved questions about AI training on public works and the use of real people's identities in AI products. While companies often cite the public availability of training data, critics argue that availability for reading differs fundamentally from permission to simulate someone's expertise or persona in a commercial product.
The controversy also underscores tensions between technological capability and ethical practice. While creating author-specific language models is technically feasible, the Grammarly backlash demonstrates that many view such applications as crossing ethical boundaries—particularly when they involve deceased individuals who cannot consent or object to their digital resurrection.
Key Takeaways
- On March 5, 2026, medieval historian Verena Krebs discovered Grammarly listing deceased historian David Abulafia (died January 2026) as an available expert
- Expert Review simulates feedback from famous writers, including Stephen King, Carl Sagan, and Neil deGrasse Tyson, with no evidence that permission was sought
- The controversy reached 101 points and 118 comments on Hacker News, gaining coverage across major tech media outlets
- Grammarly's disclaimer states experts are "for informational purposes only" with no affiliation or endorsement from listed individuals
- Critics describe the practice as "digital necromancy," raising concerns about consent, copyright, reputation exploitation, and the accuracy of AI approximations