On the need for a better framework for learning with AI

What responses are systems like ChatGPT giving users that the systems themselves could verify are rooted in bad information?

It’s been interesting to be in the master’s program in Learning, Design, and Technology at Georgetown University just as generative artificial intelligence has exploded onto the scene. As a technology enthusiast and early adopter, I have been eager to explore the possibilities of tools like ChatGPT. At the same time, the demands of academic rigor and my underlying interest in cognition and the philosophy of education have required turning a critical eye toward this fast-evolving technology.

Early in 2023, in the context of academic research, I began to encounter “hallucination” by large language models (LLMs), ChatGPT in particular, and to write about its effects before they were extensively documented and explored in the popular media. By late 2023, these effects were of such consequence that “hallucinate” became Dictionary.com’s word of the year.

I continue to view AI hallucination not simply as a technological shortcoming being addressed through incremental technical fixes, but rather as a bellwether of the need for widespread education in robust theories of epistemology and critical thinking. We must continue to match improvements in artificial intelligence with investment in native human intelligence.


Illustration by ChatGPT/DALL-E
