AI search tools confidently spit out wrong answers at a high clip, a new study found. Columbia Journalism Review (CJR) fed eight AI tools an excerpt of an article and asked the chatbots to identify the "corresponding article's headline, original publisher, publication date, and URL." Collectively, the study noted, the chatbots "provided incorrect answers to more than 60 percent of queries."

The mistakes varied. Sometimes the tools reportedly speculated or offered wrong answers to questions they couldn't answer. Sometimes they invented links or sources. Sometimes, it…