A recurrent criticism of AI, especially of large language models (LLMs) like GPT, is their supposed lack of deep thinking or true understanding. Critics claim these models lack a fundamental conceptual grasp of the subjects they “discuss”.
However, I contend that this criticism misses a vital point. Realistically, most humans don’t deeply comprehend the underlying concepts of much of what they discuss. In many areas of life, we’re no better than parrots, simply echoing talking points absorbed from our social networks or trusted media sources.
Cognitive science research lends weight to this perspective. It’s widely recognized in learning psychology that the journey to knowledge typically passes through two stages:
- The “surface structure” stage
- The “deep structure” stage
The surface structure stage represents a preliminary, basic understanding of an idea. It’s characterized by the ability to recall facts or apply procedures without necessarily comprehending the core principles. For instance, a student might be able to apply a mathematical formula to solve a problem, without understanding why or how that formula works.
The deep structure stage, conversely, represents a more profound comprehension of concepts. It encompasses the ability to grasp underpinning principles or theories, understand the relationships between different pieces of information, and apply this knowledge in new and unfamiliar scenarios. A deep understanding of a mathematical formula, for instance, would involve knowing why it works and being able to derive it from foundational principles.
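To make the distinction concrete, here is a brief sketch using the quadratic formula as the example (my illustrative choice, not one drawn from the research): surface structure is plugging values into the formula; deep structure is being able to derive it, for instance by completing the square.

```latex
% Surface structure: apply the formula
%   x = (-b ± sqrt(b^2 - 4ac)) / (2a)
% Deep structure: derive it by completing the square
\begin{align*}
ax^2 + bx + c &= 0 \\
x^2 + \tfrac{b}{a}x &= -\tfrac{c}{a} \\
\left(x + \tfrac{b}{2a}\right)^2 &= \tfrac{b^2 - 4ac}{4a^2} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```

A student at the surface stage can execute the last line; a student at the deep stage can reproduce every step in between and explain why each one is valid.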
Yet it’s crucial to acknowledge that a significant portion of what most people discuss never surpasses the surface structure stage. Thus, in many domains, human understanding is comparable to the output of an LLM. In areas of true expertise, by contrast, humans can indeed achieve a deep structure understanding and are capable of conceptual innovation that LLMs can’t currently emulate.
But let’s be realistic. Most people are true experts in only a few areas. We master our careers, spending eight or more hours a day on them, and perhaps a couple of hobbies. The notion that humans are conceptual, deep structure experts across a multitude of areas is simply implausible. In reality, we function like LLMs for roughly 80-95% of our lives, reaching a deeper, conceptually advanced understanding in just the remaining 5-20%.
I call this the Parrot Theory: the idea that we predominantly parrot what we hear or read, understanding and thinking deeply about only a small fraction of our knowledge landscape.