Scientists at the Oxford Internet Institute have issued a warning about the tendency of large language models (LLMs) used in chatbots to hallucinate. These sophisticated AI models can generate false information and present it as accurate, posing a direct threat to scientific truth and integrity.
In a new paper published in Nature Human Behaviour, the researchers highlight that LLMs are designed to produce persuasive answers without any guarantee of their accuracy or consistency with the facts. Although LLMs are often treated as knowledge sources and used to generate information, the data they are trained on may itself be inaccurate or biased.
One reason for this is that LLMs often rely on online sources that may contain false statements, opinions and misinformation. Because they are designed as helpful, human-sounding agents, users tend to trust LLMs as they would a human source of information and to believe their answers are correct even when those answers have no factual basis or present a biased or partial view of the truth.
The researchers emphasize the importance of accuracy in science and education and call on the scientific community to use LLMs responsibly. They suggest treating LLMs as “zero-shot translators”: users supply the model with appropriate data and ask it to transform that data into a conclusion or code. Framed this way, the output can be checked against the given input, making it far easier to verify that it is factually correct.
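To make the idea concrete, the sketch below (in Python, with a hypothetical ask_llm function standing in for whatever chatbot API is actually used) illustrates the workflow the researchers describe: the user supplies the data, the model only translates it into another form, and the result is then verified programmatically against that same input rather than trusted on its own.

```python
import ast

# A minimal sketch of the "zero-shot translator" pattern, under the assumption
# that `ask_llm` wraps some real chat/completion API. The model is only asked
# to reformat data the user supplies, so its output can be checked against
# that same input.

measurements_csv = """sample,concentration_mg_l
A,12.4
B,11.9
C,13.1
"""

prompt = (
    "Translate the following CSV into a Python dict mapping sample name to "
    "concentration in mg/L. Use only the values given; do not add or infer data.\n\n"
    + measurements_csv
)

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with the API of your chosen provider."""
    raise NotImplementedError

def verify(output: dict, csv_text: str) -> bool:
    """Check the model's output strictly against the supplied input."""
    rows = [line.split(",") for line in csv_text.strip().splitlines()[1:]]
    expected = {name: float(value) for name, value in rows}
    return output == expected

# Intended workflow (not executed here, since ask_llm is a placeholder):
#   raw = ask_llm(prompt)
#   result = ast.literal_eval(raw)          # parse the model's answer
#   assert verify(result, measurements_csv) # reject anything not in the input
```

The point of the design is that verification does not depend on trusting the model: because the user already holds the source data, any hallucinated or altered value fails the check.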
While LLMs can undoubtedly assist scientific workflows, it is crucial that scientists use them responsibly and maintain clear expectations of what they can contribute. By treating LLMs as tools rather than as sources of knowledge, scientists can help ensure that their research remains accurate and reliable.