AGI is Impossible [Paper Summary]
Introduction
Hello, I'm mita, an aspiring AI engineer!
In this article, I will summarize and organize the paper "AGI is Impossible" (authored by Max M. Schlereth).
Paper Summary
- Defines AGI as "an entity capable of acting flexibly and competently in a wide range of human-like situations."
- Argues that the realization of AGI is impossible due to structural limits in computational theory, rather than complexity or resource scarcity.
- Presents the "Infinite Choice Barrier," derived along three theoretical routes (computational theory, information theory, and complexity theory) and from the work of Gödel, Turing, Shannon, and Kant.
- Demonstrates, through concrete examples, decision situations that AI cannot handle (such as social judgments, paradigm shifts, and creative discoveries).
Basic Structure of the Paper
- Framework of Proof via Three Systems
  - Computational Theory: Points out the existence of undecidable problems, based on Turing, Gödel, and Rice's Theorem.
  - Information Theory: Draws on the phenomenon of the reversal of Shannon's entropy law.
  - Complexity Theory: Discusses incompressible, inexplicable complexity, based on the theories of Kolmogorov and Chaitin.
- Formulation and Proof of the "Infinite Choice Barrier"
  - Formally proves that in an "irreducibly infinite" decision-making space, no computable strategy can derive an optimal solution for all scenarios.
- Three Unavoidable Corollaries
  - Semantic Closure: Existing symbol sets cannot generate new concepts.
  - Absence of Frame Innovation: Algorithms cannot create new frameworks on their own.
  - Statistical Breakdown: Under heavy-tailed distributions (α ≤ 1), the expected value and variance are undefined → empirical learning breaks down (see the sketch after this list).
- Comparison of Decision Mechanisms: Human vs. AI
  - AI: Reaches a "conclusion" based on analysis.
  - Human: Makes a "decision" and acts subjectively even when logic and data are lacking → this is what AI lacks.
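To make the "Statistical Breakdown" corollary concrete, here is a minimal sketch (my own illustration, not from the paper): for Pareto-type samples with tail index α ≤ 1, the theoretical mean does not exist, so the running sample mean never settles no matter how much data is collected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pareto samples with tail index alpha <= 1: the theoretical mean
# (and variance) is undefined, so empirical averages cannot converge.
alpha = 0.8
n = 1_000_000
u = 1.0 - rng.random(n)          # uniform on (0, 1]
samples = u ** (-1.0 / alpha)    # inverse-CDF sampling of a Pareto(alpha) variable

running_mean = np.cumsum(samples) / np.arange(1, n + 1)
for k in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"after {k:>9,d} samples, running mean = {running_mean[k - 1]:,.1f}")
```

With α > 1 the printed values would stabilize; with α ≤ 1 they keep drifting upward, which is the regime in which the paper says learning from accumulated experience stops being meaningful.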
Core Concept: What is the Infinite Choice Barrier?
It refers to "a decision-making wall that AI, no matter how powerful, cannot structurally overcome."
This is an inherent limit of non-computability that occurs in spaces that simultaneously satisfy the following three conditions:
- Necessity of concepts outside the framework
- The structure of the situation to be recognized is itself unknown
- Uncertainty to the extent that statistical judgment does not hold
The paper provides three examples to explain this. I will look at one of them: the problem of deciding how to respond to a wife's question, "Do you think I've gained weight?"
In the paper, this is presented as a question with no answer. When an AI faces this problem, the computation proceeds as follows:
Option 1: Answer honestly (based on biometric data) → Calculate the potential for emotional damage → Adjust sincerity parameter → But what about the past relationship? → Recalculate...
Option 2: Skillfully dodge → Analyze 10,000 success stories → But tone of voice is crucial → Need to analyze micro-expressions → Timing is also key → Need past conversation history → Recalculate...
Option 3: Divert the topic in an affectionate direction → Process optimal emotional expression → But what is optimal? → Goals fluctuate → Sincerity? Harmony? Trust? → Parameters are unstable → Recalculate...
Option n: ...
The answer to this question shifts with external factors such as the couple's relationship, the wife's mood, and the situation at hand. If an AI tries to compute it, no matter how meticulously it recomputes, the process simply diverges.
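As a toy illustration of this runaway recomputation (my own sketch, not taken from the paper), the loop below models an agent for which resolving one consideration surfaces two new ones, so the backlog of open questions only grows:

```python
# Toy model: every factor the agent "resolves" raises further factors
# (relationship history, tone of voice, timing, ...), so the deliberation
# never converges on an answer.
open_questions = ["how honest should the answer be?"]

for step in range(1, 9):
    question = open_questions.pop(0)
    open_questions += [f"{question} / sub-factor {i}" for i in (1, 2)]
    print(f"step {step}: resolved 1 question, {len(open_questions)} still open")
```

The numbers are arbitrary; the point is only that "recompute more carefully" is not a strategy that terminates here.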
Because humans can commit to a decision even under this kind of uncertainty, there are areas where they remain superior to AI, which chooses its actions by searching for an optimal solution.
The author concludes that because of this, an AI that satisfies the definition of AGI will never be created.
Proposal to Narrow the Definition of AGI
The author argues that the current definition of AGI ("intelligence that can act flexibly in any situation as well as or better than humans") is impossible to achieve due to limits in computational theory.
Therefore, to aim for realistic progress, the author suggests limiting the definition of AGI to economic activities or specific domains, and setting the goal as systems that reach human-level performance within those practical ranges.
Personal Reflections
In response to this paper, I thought about the following three points:
- While AGI in the strict sense will not be born, high-performance general-purpose AI similar to it will likely appear.
- Will AGI be able to beat specialized AI?
- What only humans can do is "creating something from zero to one." However, that...
I am someone who awaits the changes brought about by the birth of AGI with half anxiety and half curiosity.
I unconsciously held the premise that "AGI will eventually be realized," so this paper, which concludes that AGI is impossible, was extremely interesting.
This paper paradoxically suggests that in areas other than "creation using unknown frames" or "questions without clear answers," AI more capable than humans could be born.
If AGI in the strict sense is never born, though, I think it is uncertain how much value such AI will actually hold.
Generally, there is a strong image of AI as "having performance equal to or better than humans in all aspects," but the author states that there are areas where AGI cannot provide answers, and these occur frequently even in daily life.
In that case, when an AGI handles multiple tasks the way an AI agent does, there is some probability that plans will arise for which it cannot produce an answer, and the system as a whole could stop functioning.
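As a back-of-the-envelope sketch of that compounding risk (my own numbers, assuming sub-tasks fail independently, which the paper does not claim), even a small per-task chance of hitting an unanswerable question makes failure of a long plan likely:

```python
# Hypothetical figures for illustration only: if each of k sub-tasks has an
# independent probability p of hitting a question the system cannot answer,
# the whole plan gets stuck with probability 1 - (1 - p)^k.
p = 0.01  # assumed per-task failure probability
for k in (10, 50, 100, 500):
    p_fail = 1 - (1 - p) ** k
    print(f"{k:>3d} sub-tasks -> plan stuck with probability {p_fail:.1%}")
```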
In such instances, I feel that AI specialized for specific tasks might be more useful than a general-purpose AI that tries to do everything.
Of course, a "general-purpose AI consisting of connected specialized AIs" would be ideal, but then questions regarding cost-effectiveness remain, so it is not straightforward.
According to a paper I read previously, people in Asia view the spread of AI relatively optimistically, and perhaps I too have excessive expectations for AI.
Thinking that way, one of the things that is difficult for AI and can only be done by humans is "creating something that does not exist in the existing world."
Zero-to-one innovation, such as inventing the car when horses were the primary means of transportation, is a difficult challenge for AI and will likely become a skill required for business professionals in the future.
...That being said, if that were possible for everyone, everyone would be a billionaire by now.
In other words, it is not easy for humans either, and it cannot be easily realized just by wanting to innovate.
Ultimately, human thought is also an accumulation of the past, and in that respect, it might not be fundamentally different from AI.
If it is just adding something extra to something existing to "make it look like something new," AI is perfectly capable of doing that.
For now, I intend to work on improving my ability to create new things, while outsourcing to AI whatever thinking can be outsourced.