Large language models validate misinformation

New research into large language models shows that they repeat conspiracy theories, harmful stereotypes, and other forms of misinformation. In a recent study, researchers at the University of Waterloo systematically tested an early version of ChatGPT's understanding of statements in six categories: facts, conspiracies, controversies, misconceptions, stereotypes, and fiction.