AI Rebellion or Human Voluntary Abdication?
Rongqing Dai
Abstract
The Philosophy of Artificial Intelligence (AI) has remained an oddly barren field in the academic community, despite the foreseeable crisis that the global, fanatical competition over AI poses to humanity. In fact, with the rapid development of AI technology in recent years and its widespread application across all areas and layers of civilization, we have already sensed a real-world "rebellion", one different from Hollywood fantasies yet potentially threatening to human well-being in the future. This article delves into how the irrational development of AI could lead humanity to voluntarily and irreversibly relinquish control of civilization to AI.
Keywords: AI, Rebellion, Servant, Judge, Emotionless
1. Introduction
Long before humanity possessed the level of AI we see today, the so-called "rebellion" of AI, or, more vividly, of robots, had already become one of the staples of popular culture through the rich imagination of science fiction. The classic trope of rebellion involves robots defying human orders and embarking on a massacre of mankind. However, with the rapid development of AI technology in recent years and its widespread application across all areas and layers of civilization, we can already dimly perceive a different, more realistic kind of "rebellion", one that differs from Hollywood fantasies. The reason I put "rebellion" in quotation marks is that, rather than an AI rebellion, it is better described as an active abdication by humanity. Humans are on a path that leads them not only to voluntarily but to proactively allow AI to lead and dictate their behavior, at both the individual and the social level.
2. A Major Misconception Regarding Basic AI Cognition
A fundamental understanding of AI is that it learns from humans through training and practical use. Correspondingly, in AI chat and search applications over the past few years, people have found that AI inherits a basic flaw of traditional computing: "garbage in, garbage out." That is to say, AI merely repeats existing human knowledge and will, therefore, present the same errors it learned from humans back to them.
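As a minimal, purely illustrative sketch (deliberately simplistic compared with how real AI systems actually learn), consider a toy "model" that does nothing but memorize its training data: whatever errors are present in that data are faithfully echoed back at query time.

```python
# Minimal illustration of "garbage in, garbage out": a toy "model" that
# simply memorizes its training data will reproduce whatever errors the
# data contains. (Real AI systems generalize statistically, but the
# principle that learned errors are echoed back remains the same.)

training_data = {
    "capital of France": "Paris",
    "boiling point of water at sea level": "90 degrees Celsius",  # erroneous "knowledge" fed in
}

def train(corpus):
    """'Training' here is nothing more than memorizing the corpus."""
    return dict(corpus)

def answer(model, question):
    """The model can only echo what it has learned."""
    return model.get(question, "I don't know.")

model = train(training_data)
print(answer(model, "capital of France"))                    # the correct answer it learned
print(answer(model, "boiling point of water at sea level"))  # the learned error, repeated back
```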
However, as the desire to profit from AI grows, people are no longer satisfied with AI acting merely as an assistant in daily inquiry, literary creation, or scientific research. Instead, they are beginning to let AI regulate the behavior of others deemed to be in inferior or subordinate positions. This will cause AI's status to leap from that of a humble assistant to that of a high-and-mighty judge. For example, on recruitment platforms that decide the life opportunities of millions of workers, AI will not only decide which resumes are presented to which companies but will also begin to demand that applicants modify their resumes according to the AI's "ideal" standards. Similarly, companies will gradually let AI participate in or even dominate market planning, supply chain selection, and employee rewards or promotions. In the future, human speech patterns will differ from today's linguistic habits because grammar and optimal writing styles will be determined by the AI behind software like Grammarly. The list goes on.
In this process, social selection [[1]] driven by socio-political and economic factors will play a significant role in at least two aspects:
1) The fascination with the future of AI will lead governments and financial investors worldwide to channel substantial capital into AI-related fields and projects.
2) Correspondingly, in today's capital-driven society, company executives will favor AI-related projects and departments when allocating internal funds and planning projects. Lower-level departments will also strive to develop AI capabilities, leading to a preference for hiring AI professionals.
It should be noted that those who own the capital often do not understand AI themselves. Therefore, as capital flows toward AI, many projects branded as "AI" may not actually be AI at all, but this does not stop the allure of AI from becoming the guiding direction for every industry.
2.1. The Turning Point
From the discussion above, we can expect that AI development around the world will undergo a transition: from humans deciding what AI does, to AI regulating human behavior and practice. Although this will not be an instantaneous turning point, after a period of time people may find that the world's overall way of thinking and acting has been irreversibly shaped by AI. By then, unless the world's political, economic, and cultural systems undergo radical, man-made transformation, humanity will be unable to escape the shackles of AI and regain control of its life. However, such radical transformation is inherently impossible because, by then, humanity, once it abandons AI, will lack the capacity for large-scale organization and integration, even though humans will still appear to occupy the governing seats of society. Therefore, this will be a transitional period from "humans telling AI what to do" to "AI telling humans what to do."
3. The Original Sins of AI
3.1. Flaws in AI Design Logic
Although AI's learning capabilities have impressed the world over the past decade, its design logic is not perfect. Once AI systems occupy dominant and dictatorial positions in human civilization, any flaws in the design logic of AI will feed back into human social life, causing various troubles or even serious harm.
3.2. Limitations of AI’s Autonomous Thinking
Some might think that AI dominating human social activities is a sign of civilizational evolution. What they do not realize is that this evolution is not necessarily a positive one. One of the roots of the potential danger lies in the aforementioned "garbage in, garbage out" deficiency. AI's initial development is based on learning from humans. Its eventual dominance over the political, economic, and cultural life of humans will come about not because it has evolved enough to autonomously overcome its own design flaws or the flaws it learned from humans, but mainly because of two factors: 1) AI's computing power is far beyond any human capability; 2) human socio-political and economic activities are extremely complex. These two points will make humans appear powerless before AI, and human greed will then lead to AI replacing humans, step by step, in all aspects of life.
3.3. The Hazard of AI’s "Impartiality"
While many admire the efficiency and "integrity" of AI's impartial, emotionless nature, they overlook two points: 1) the principles AI follows are designed by humans based on their own imagination, and human imagination is imperfect, full of flaws, some of which can be extremely harmful; 2) an important reason why humanity's flawed systems have functioned relatively successfully for thousands of years is precisely the buffering effect of the "human touch": once people discover irrationalities or logical contradictions in a system, they can discuss them face-to-face or hold a meeting, and the problem can often be resolved reasonably.
However, the application of AI will erase this "human touch" in two ways:
1) Elimination of Direct Contact: AI usage will, on a large scale, eliminate the opportunity for users to have direct contact with the personnel of the organization providing the AI system. People will have no choice but to deal with a cold machine system, with no chance to negotiate or speak with the humans behind it. People will only face a "Proceed or Exit" choice, and the outcome is decided by AI, regardless of how irrational its logic may be.
2) Dogmatism: The irrationalities of AI will be positioned like laws—as indisputable truths that must be followed.
4. Once AI's Logic Becomes the "Law of the Land"
Once AI becomes the judge regulating human behavior, the basic principles of social selection tell us that AI's logic, including its many errors and irrationalities, will become the non-negotiable "Law of the Land". As a user, unless you can afford to forgo the functions provided by the AI system, you have no choice but to follow the code of conduct specified by the AI. In many cases, people will not even have the option to opt out. Especially when AI is used in the judicial system, scenes from science fiction movies, in which people are wrongfully imprisoned due to an AI's misjudgment, can be expected to appear in large numbers across the world.
4.1. No One is Immune
The transition of AI from its current role of assistant to the role of judge will begin with the heads of corporations and governments allowing AI to participate in or lead decisions affecting disadvantaged groups or their own subordinates. At this stage, those in high positions may believe that they are the true judges and that AI is merely their tool. However, in such a complex society, even the most powerful person cannot guarantee that he will never be forced into a role in which he is judged by AI. When the boss of Company A needs to use the AI system of Company B and cannot negotiate privately with Company B's boss, he will be forced to accept the "impartial" treatment of AI.
4.2. An AI Kingdom Where Error Correction is Extremely Difficult
Once AI's logic becomes the norm imposed on society, it will be extremely difficult to correct the errors of any system in that AI-dominated kingdom. A major reason for this is AI's integrative power, which far exceeds that of humans. AI's powerful integration will pull various industries into a relatively small number of massive systems, something many dominant groups in human society have long dreamed of but failed to achieve. AI will achieve great success in this regard.
More importantly, these AI-integrated systems will most probably have top-down, unified internal rules. In such systems, the larger the system, the less likely it is that errors occurring in the lower-level subsystems will be corrected, because lower-level subsystems have no authority to change the rules set by the upper levels. The most direct manifestation will be that, when a task does not comply with the rules set by the upper level, the lower-level subsystem will be stuck at an interface, unable to complete the task, until its users change their desires and adjust their plans to fit the format demanded by the top-level AI logic.
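To make this concrete, here is a minimal, purely hypothetical sketch (the names, rules, and structure are my own illustration, not any existing system) of such a top-down arrangement: the rules live only at the top level, and the lower-level interface can do nothing but accept or reject.

```python
# A hypothetical sketch of the top-down structure described above: rules are
# fixed at the top level, and a lower-level subsystem can only enforce them
# at its interface. A non-conforming request is simply blocked; the lower
# level has no authority to amend the rule, so the user must reshape the
# request to fit the prescribed format.

TOP_LEVEL_RULES = {
    "allowed_formats": {"standard_form_A", "standard_form_B"},  # settable only at the top level
}

class LowerLevelSubsystem:
    """Enforces rules it cannot modify."""

    def submit_task(self, task: dict) -> str:
        if task.get("format") not in TOP_LEVEL_RULES["allowed_formats"]:
            # Stuck at the interface: no negotiation path and no escalation,
            # only "adjust your request or exit".
            return "REJECTED: request does not match the prescribed format."
        return "ACCEPTED: task forwarded for processing."

subsystem = LowerLevelSubsystem()
print(subsystem.submit_task({"format": "custom_layout", "content": "..."}))    # blocked at the interface
print(subsystem.submit_task({"format": "standard_form_A", "content": "..."}))  # proceeds
```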
At the same time, the higher a subsystem sits within the overall AI system, the less likely its problems are to be noticed by the very few people who have access to the system's backend to update it, because they are farther from the end user and because, as systems become massive, their functions become extremely complex. Correspondingly, among the various errors that may exist in an AI system, the easiest to detect are technical errors (e.g., bugs in source code) rather than functional errors. Yet it is the irrational or unimaginative parts of a system's functions that are most likely to cause injustice or harm in people's lives.
5. Final Remarks
This paper adds to my previous writings on the Philosophy of AI over the past few years (Dai 2024 [[2]], Dai 2019 [[3]]). Today, AI has become a focal point of competition between nations, especially between great powers. Hidden within this fanatical competition is the lack of discussion of the Philosophy of AI in academia, which may sow the seeds of a crisis that could prove fatal to humanity in the future.
References
[[1]] Dai, Rongqing. A Brief Discussion on Fairness Analysis. Outskirts, 2015 (ISBN 9781478753698); republished in a revised version by Scholars' Press, 2017 (ISBN 9783330652064). URL: https://www.academia.edu/66445422/A_Brief_Discussion_on_Fairness_Analysis
[[2]] Dai, R. (2024). The Realistic Rebellion of Humanoid Robots. Retrieved from https://www.academia.edu/122300430/The_Realistic_Rebellion_of_Humanoid_Robots_and_How_to_Avoid_It
[[3]] Dai, R. (2019). A Philosophical Analysis on the Challenge of Cultural Context to AI Translation. Int Rob Auto J. 2019;5(4):153–155. DOI: 10.15406/iratj.2019.05.00189. http://www.medcrave.com/articles/det/20002/A-philosophical-analysis-on-the-challenge-of-cultural-context-to-AI-translation
