What responses are systems like ChatGPT giving to users that the systems themselves could verify are rooted in bad information?
It was interesting to be a student in the Master's program in Learning, Design, and Technology at Georgetown University at the same time that generative artificial intelligence exploded out of the gates. As a technology enthusiast and early adopter, I have been eager to explore the possibilities of tools like ChatGPT. At the same time, the need for academic rigor and my underlying interest in cognition and the philosophy of education have required turning a critical eye toward this fast-evolving technology.
Early in 2023, in the context of academic research, I began to experience “hallucination” by Large Language Models (LLMs)—namely ChatGPT. I wrote about this effect before it became extensively documented and explored in popular media. By late 2023, these effects were of such consequence that “hallucinate” became the Dictionary.com word of the year.
I continue to view AI hallucination not simply as a technological shortcoming being addressed through incremental software updates, but rather as a glaring reminder of the need for widespread education in robust theories of epistemology and critical thinking. We must continue to match improvements in artificial intelligence with investment in native human intelligence.
Illustration by ChatGPT/DALL-E