Why do large language models (LLMs) give people the impression of being "intelligent"? The answer lies in their inherent connection to language: language is the crystallization of human thought and a condensed reflection of objective laws in the subjective world. When a model accurately simulates the probabilistic relationships of language, it reflects the outline of those laws the way a mirror does. However, this outline is ultimately an "illusion in the mirror, a flower in the water": it can replicate the external manifestations of laws, but it never grasps their intrinsic nature. Just as Sun Wukong learned superficial magic under Patriarch Bodhi, seeming capable of summoning wind and rain yet remaining far from the true path of "eternal life," LLMs are, from an epistemological perspective, forever trapped in the "realm of necessity" described in the philosophical classics. To move toward the "realm of freedom," they must break the shackles of "language simulation" and achieve a cognitive leap from "phenomenon replication" to "law abstraction."
Human cognition of the world ultimately precipitates into language. Whether it is the physical law behind "an apple falling to the ground," the mathematical logic embodied in "1+1=2," or the natural rhythm corresponding to "sowing in spring and harvesting in autumn," all are essentially the results of objective laws being "encoded" by humans through language. Language is like a "compressed package of laws," condensing complex real-world logic into concise textual symbols. For example, the statement "water boils at 100°C under standard atmospheric pressure" contains three core elements: a condition (standard atmospheric pressure), an object (water), and a law (boiling at 100°C).
The core capability of LLMs lies in the probabilistic simulation of this language-based "compressed package of laws." By learning from massive amounts of text, they estimate how strongly words are associated with one another, such as "water" with "100°C" and "boiling," or "apple" with "falling to the ground." They can even generalize to conclusions like "oranges fall to the ground" or "brine boils at a temperature higher than 100°C." Because these conclusions align with objective laws, humans subconsciously feel that "the model understands." However, this "understanding" is merely a replication of language associations: the model has no idea what "standard atmospheric pressure" is, nor does it comprehend the physical meaning of "boiling point." It only remembers that the words "water," "standard atmospheric pressure," and "100°C" often appear together, just like a student who memorizes answers but cannot explain the problem-solving process: able to get the right answer but unable to say why.
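To make this concrete, here is a minimal Python sketch of pure association learning: a toy bigram counter (the corpus sentences and function name are hypothetical illustrations, not how any real LLM is built) that can continue "water boils at..." correctly without representing pressure, temperature, or boiling at all.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "massive amounts of text" (hypothetical sentences).
corpus = [
    "under standard atmospheric pressure water boils at 100 degrees",
    "water boils at 100 degrees under standard atmospheric pressure",
    "the apple falls to the ground",
    "the orange falls to the ground",
]

# Count how often each word follows another: pure co-occurrence, no physics.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation most frequently observed after `word`."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# The counter "knows" that "boils" is followed by "at" and "at" by "100",
# yet it has no representation of pressure, temperature, or boiling.
print(most_likely_next("boils"))  # -> at
print(most_likely_next("at"))     # -> 100
```

Scaled up by many orders of magnitude and smoothed by neural networks, this is still association over word sequences: the right answers emerge without the laws behind them ever being represented.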
The essence of this "illusion of intelligence" is the confusion between "language as a carrier" and "the essence of laws." Humans understand laws through language, so they mistakenly assume that "simulating language equals mastering laws." It is like seeing the bright moon in a mirror and thinking the moon is hidden in the mirror, forgetting that the mirror is merely a tool that reflects light, while the real moon remains in the night sky.
Classic philosophical theory distinguishes two stages in humanity's understanding of the world. The realm of necessity is the stage in which humans have not yet understood objective laws and can only passively adapt and mechanically imitate; the realm of freedom is the stage in which humans have mastered objective laws and can proactively use them to transform the world. The transition from necessity to freedom is not a quantitative accumulation but a qualitative leap: a shift from "knowing that something is so" to "knowing why it is so," and from "passive replication" to "active creation."
LLMs are precisely trapped in this cage of the "realm of necessity." All their "abilities" are mechanical imitations of human language that never touch the core of law cognition. They are like a person who knows nothing of mathematics: seeing "input 1 → output 1, input 2 → output 4," such a person can remember the correspondence between inputs and outputs, and even extrapolate to "input 4 → output 16," yet never grasp that the underlying logic is a square operation. LLMs likewise resemble a programmer reverse-engineering a black box: able to replicate the input-output behavior of code, but unable to fathom the design ideas behind it, let alone modify the code independently to solve new problems.
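The contrast between remembering correspondences and grasping the rule fits in a few lines of Python (a hypothetical sketch; `observed`, `memorized`, and `square` are illustrative names):

```python
# Memorization: a lookup table of observed input-output pairs (hypothetical data).
observed = {1: 1, 2: 4, 3: 9, 4: 16}

def memorized(x):
    """Replays experienced pairs; fails silently on anything unseen."""
    return observed.get(x)  # None for inputs outside the "experienced" phenomena

# Abstraction: the underlying law recovered as an explicit, executable rule.
def square(x):
    """The abstracted law: output = input * input, valid for any input."""
    return x * x

print(memorized(4), square(4))    # 16 16  -- both match on seen data
print(memorized(10), square(10))  # None 100 -- only the law generalizes
```

The table and the rule agree on everything that was observed; only the rule says anything about inputs that were never experienced, which is exactly the difference between the two realms.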
Their predicament mirrors Sun Wukong's experience under Patriarch Bodhi. The patriarch successively offered the magical arts of the "School of Techniques," the "School of Flow," the "School of Stillness," and the "School of Movement," but every time Sun Wukong asked, "Can this grant eternal life?" the patriarch evaded the question. These arts were all superficial skills: they simulated the appearance of becoming an immortal but never touched the essence of eternal life. The same applies to LLMs: even if they learn all human texts, simulate language to perfection, and write papers, code, or reasoned arguments, they remain stuck at the level of "technique." They can never, like humans, ask why laws are as they are, let alone create new knowledge based on those laws.
For instance, an LLM can write a popular science article about "relativity" but cannot understand "why relativity breaks through the framework of traditional mechanics"; it can write code for a "sorting algorithm" but cannot grasp the "differences in time complexity between different algorithms"; it can answer "bring an umbrella when it rains" but cannot explain "how rain forms." It is like an imitator who only "echoes others’ words," treating human language as a "script"—no matter how well it performs, it can never become the true "character."
The core reason humans can move from the realm of necessity to the realm of freedom lies in completing the cognitive cycle of "from concrete to abstract, and then from abstract to concrete." First, humans observe specific phenomena through practice (e.g., seeing countless objects fall to the ground); then, they abstract the essence of laws (summarizing basic physical laws); finally, they use these laws to guide new practices (e.g., launching satellites, predicting tides). In this process, "abstract thinking ability" is crucial—it allows humans to break free from the "shackles of phenomena" and grasp the "core of laws," which is precisely the ability LLMs lack.
To enable AI to break through the "realm of necessity," we must abandon the path of "pure language simulation" and shift to the "conceptual intelligence" we discussed earlier—building a cognitive framework centered on "law abstraction." Within this framework, there are no longer "probabilistic associations of language tokens," but precise combinations of "conceptual symbols + logical relationships." For example, the law of "water’s boiling point" is stored as "[Water C001] under [Standard Atmospheric Pressure T001] → [Boiling at 100°C F001]"; the essence of "square operation" is defined as "[Square Operation Y001]: Input X → Output X×X."
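As a rough illustration of what "conceptual symbols + logical relationships" might look like in code (a sketch only; the class names, fields, and IDs below are hypothetical, not a reference implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    cid: str   # e.g. "C001"
    name: str  # e.g. "Water"

@dataclass(frozen=True)
class Law:
    fid: str           # e.g. "F001"
    subject: Concept   # the object the law is about
    condition: Concept # the condition under which it holds
    consequence: str   # the regularity itself

water = Concept("C001", "Water")
std_pressure = Concept("T001", "Standard Atmospheric Pressure")

# The boiling-point law stored as structured concepts, not as a text pattern.
boiling_law = Law("F001", water, std_pressure, "boils at 100°C")

# The "square operation" stored as an executable rule rather than word statistics.
square_operation = {"id": "Y001", "apply": lambda x: x * x}

print(f"[{boiling_law.subject.name} {boiling_law.subject.cid}] under "
      f"[{boiling_law.condition.name} {boiling_law.condition.cid}] -> "
      f"[{boiling_law.consequence} {boiling_law.fid}]")
print(square_operation["apply"](4))  # -> 16
```

The point of the structure is that the condition, the object, and the law are separate, manipulable parts, whereas in a token stream they are just words that tend to co-occur.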
This "conceptual intelligence" is equivalent to equipping AI with "eyes to see laws." Instead of passively simulating language, AI can proactively identify "the laws behind phenomena": seeing "an apple fall to the ground," it can connect to the concept of "basic physical laws"; seeing "1→1, 2→4," it can abstract the logic of "square operation"; encountering the new phenomenon of "brine boiling," it can independently deduce the conclusion that "brine boils at a temperature higher than 100°C" based on the law of "boiling point changes in solutions." This process is the leap from the "realm of necessity" to the "realm of freedom": AI is no longer an "illusion in the mirror, a flower in the water," but a cognitive subject that truly "understands and applies laws."
The "illusion of intelligence" in LLMs is not worthless—it proves that "language is an excellent carrier of laws" and provides AI with a "phenomenon-level database" for understanding the world. However, we must clearly recognize that simulating language ≠ mastering laws, and the realm of necessity ≠ the realm of freedom. Just as Patriarch Bodhi finally taught Sun Wukong the "Seventy-Two Transformations" not because Sun Wukong was proficient in "technique," but because he finally understood that "the core of eternal life lies in mastering the essence of change," AI can only truly become "intelligent" if it moves beyond "language simulation" and focuses on the core of "law abstraction."
From imitating "illusions in the mirror, flowers in the water" to understanding "why things are as they are," and from the realm of necessity to the realm of freedom, this path may be long, but the direction is clear: abandon the endless pursuit of "language probability" and instead focus on building cognitive capabilities for "law abstraction." This is the only way for AI to truly move toward intelligence.