It’s been interesting being in the Master’s program in Learning, Design, and Technology at Georgetown University at the same time that generative artificial intelligence has exploded onto the scene. As a technology enthusiast and early adopter, I have been eager to explore the possibilities of tools like ChatGPT. At the same time, the need for academic rigor and my underlying interest in cognition and the philosophy of education have required turning a critical eye toward this fast-evolving technology.
Early in 2023, in the course of academic research, I began to encounter "hallucination" by large language models (LLMs), namely ChatGPT, and to write about its effects before they became well documented and explored in popular media.
I continue to view AI hallucination not simply as a technological shortcoming that is gradually being addressed through engineering fixes, but rather as a bellwether of the need for widespread education in robust theories of epistemology and critical thinking. We must continue to match improvements in artificial intelligence with investment in native human intelligence.
Illustration by ChatGPT/DALL-E