The concern is that such wording can make software seem autonomous, aware or responsible for decisions, when those decisions remain tied to people, companies and institutions.
The study, published in Technical Communication Quarterly, examined how news writers pair mental verbs with terms such as “AI” and “ChatGPT.” The researchers found that anthropomorphic wording is less common in news coverage than expected, but that context determines whether a phrase simply describes a technical requirement or subtly suggests human-like understanding.
Why AI language can mislead readers
Words such as “knows,” “learns,” “wants” and “understands” are ordinary in human conversation. In AI coverage, however, they can imply that a system has beliefs, intentions or awareness. That can distort how readers judge reliability, risk and accountability.
Jo Mackiewicz, an Iowa State professor of English and one of the study’s authors, said the risk is “blurring the line between what humans and AI can do.” Jeanine Aune, an Iowa State teaching professor and co-author, said some anthropomorphic phrases may “stick in readers’ minds.”
The distinction matters because artificial intelligence systems do not form intentions in the human sense. They generate, rank, classify or recommend outputs based on statistical models, training data and design choices made by people. When coverage says a chatbot “knows” something, readers may miss the more important question: Who built the system, trained it, tested it, deployed it and benefits from its use?
What the study found in news coverage
The research team used the News on the Web corpus, a large and continually updated collection of English-language online news and magazine text from 20 countries. The study tracked how often AI-related terms appeared near mental verbs, then examined whether those uses actually made the technology sound human.
The most frequent mental verb paired with AI was “needs,” which appeared 661 times in the researchers’ analysis. For ChatGPT, the most frequent pairing was “knows,” which appeared 32 times. The numbers suggest that news writers are not casually flooding coverage with human-like descriptions, but the examples still show why precision matters.
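To make that kind of counting concrete, here is a minimal, hypothetical sketch of how a verb-near-term tally might work. The verb list, the target word “AI,” the window size and the tokenization are illustrative assumptions; the sketch does not reproduce the researchers’ method or the News on the Web corpus tools.

```python
# Illustrative sketch only: count mental verbs appearing near "AI" in sentences.
# The verb list, window size and tokenization are assumptions for demonstration,
# not the study's actual methodology.
import re
from collections import Counter

MENTAL_VERBS = {"needs", "knows", "learns", "wants", "understands", "thinks"}
WINDOW = 3  # tokens to inspect on either side of "AI"

def count_mental_verbs(sentences):
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[\w']+", sentence.lower())
        for i, token in enumerate(tokens):
            if token == "ai":
                nearby = tokens[max(0, i - WINDOW): i + WINDOW + 1]
                counts.update(v for v in nearby if v in MENTAL_VERBS)
    return counts

sample = [
    "The AI needs data before it can classify images.",
    "Critics say the AI knows nothing about the world.",
]
print(count_mental_verbs(sample))  # Counter({'needs': 1, 'knows': 1})
```

A raw tally like this only shows proximity; as the study emphasizes, deciding whether a given pairing is anthropomorphic still requires reading the sentence in context.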
Some uses were not especially misleading. A sentence saying an AI system “needs data,” for example, can function much like saying a car needs fuel. Other uses move closer to anthropomorphism, especially when AI is described as needing to “understand” the real world, make judgments or act with human-like awareness.
That is why the study does not call for a blanket ban on every mental verb. Instead, it argues for attention to context. The same word can be harmless in one sentence and misleading in another.
Older warnings show the AI language issue has been building
The new findings fit into a longer debate over how journalists and the public talk about artificial intelligence. In 2023, the AP Stylebook added AI guidance advising journalists to avoid language that gives human characteristics to AI systems and to remember that these tools are built by people with biases and aims.
The same year, public language around AI shifted so quickly that Cambridge Dictionary named “hallucinate” its word of the year after adding an AI-related meaning: when a system produces false information. The announcement showed how quickly a verb describing human perception had become part of everyday discussion about machine output.
In 2024, researchers studying media narratives after ChatGPT’s release found that AI coverage became more closely associated with danger and risk, and that anthropomorphic descriptions shifted. Their analysis of ChatGPT’s effect on AI narratives helps explain why the latest study’s conclusion feels less like a sudden warning and more like the next step in an ongoing discussion.
What writers should do next
For journalists, editors and communicators, the practical lesson is straightforward: describe what the system does, not what a person might feel or intend. “The model generated an answer” is clearer than “the AI knew the answer.” “The system flagged an image” is more precise than “the AI thought the image was suspicious.”
Clearer language also keeps accountability in view. If a tool recommends a loan denial, ranks job applicants or summarizes medical information, the story should identify the people and organizations behind the tool rather than making the software sound like an independent actor.
The study ultimately offers a measured warning. News coverage is not as anthropomorphic as critics might assume, but even rare human-like phrasing can shape public expectations. As AI systems become more common in search, education, health care, business and journalism, the words used to describe them will help determine whether readers see them as tools made by people or as minds of their own.
