Don't Believe AI Hype (Bilingual 双语) 不要相信人工智能炒作
Don't Believe AI Hype, This is Where it's Actually Headed | Oxford's Michael Wooldridge | AI History (Bilingual 双语) 不要相信人工智能炒作,这才是它真正的发展方向 | 牛津大学的 Michael Wooldridge | 人工智能历史
https://www.youtube.com/watch?v=Zf-T3XdD9Z8
Johnathan B
Professor Wooldridge's book on the History of AI: https://amzn.to/41Fqlt6 (affiliate)
Transcripts
Transcript for Interview with Michael Wooldridge
Read the Full Interview Transcript.
Johnathan Bi
Mar 11, 2025
0. Introduction
Oxford's Michael Wooldridge is a veteran AI researcher and a pioneer of agent-based AI. In this interview, Wooldridge is going to take us through the entire 100-year history of AI development, from Turing all the way to contemporary LLMs. Now you might wonder, why should we bother studying the history of a technology? Technology should be about state-of-the-art techniques. It should be about the latest developments. What's the point of revisiting the old and outdated? It turns out there are at least two important reasons to study this history. The first is that it'll help you better anticipate its future. Wooldridge is extremely skeptical that this iteration of AI will lead to the singularity, partially because he's lived through so many similar cycles of AI hype. Whenever there's a breakthrough, you always get apocalyptic predictions pushed by religious zealots, which set up unreasonable expectations that eventually set the field back.
Studying this history of boom and bust will help you see through the hype and grasp where the technology is really going. The second and even more valuable reason to study this history is that it contains overlooked techniques that could inspire innovations today. Even for me, someone who started coding at 15, studied CS in college, and then went on to build a tech company, Wooldridge uncovered entire paradigms of AI that I hadn't even heard of. Wooldridge's claim is that these paths not taken in AI, the alleged failures, weren't wrong; they were just too early. And so this 100-year history is a treasure trove of ideas that we should look back to today for inspiration.
1. The Singularity Is Bullshit
Johnathan Bi: This is the provocative chapter title in part of your book, The Road to Conscious Machines. The singularity is bullshit. Why is that?
Michael Wooldridge: There is this narrative out there, and it's a very popular narrative and it's very compelling, which is that at some point machines are going to become as intelligent as human beings, and then they can apply their intelligence to making themselves even smarter. The story is that it all spirals out of our control. And of course this is the plot of quite a lot of science fiction movies, notably Terminator. I love those movies just as much as anybody does. But it's deeply implausible, and I became frustrated with that narrative for all sorts of reasons, one of which is that whenever it comes up in serious debate about where AI is going and what the risks are (and there are real risks associated with AI), it tends to suck all the oxygen out of the room, in the phrase that my colleague used. And it tends to dominate the conversation and distract us from things that we should really be talking about.
Johnathan Bi: Right. In fact, there is a discipline that's come out of this called existential risk. It's the worrying about the Terminator situation and figuring out how we can perhaps better align these super-intelligent agents with human interests. And if you look at not just the narrative, but actually the funding and what the smartest people are devoting their time to thinking about, in not only companies but policy groups, x-risk, existential risk, is the dominant share of the entire market, so to speak. Why do you think this narrative has gained such a big following?
Michael Wooldridge: I think it's the low-probability but very, very high-risk argument. I mean, I think most people accept that this is not tremendously plausible. But if it did happen, it would be the worst thing ever. And so very, very, very high risk. And when you multiply that probability by the risk, then the argument is that it's something that you should start to think about. But when the success of large language models became apparent and ChatGPT was released and everybody got very excited about this last year, the debate around this reached slightly hysterical levels, and it became slightly unhinged at some points. My sense is the debate has calmed down a little bit and is more focused on the actualities of where we are and what the risks are.
Johnathan Bi: Right. I think that's quite a charitable reading of the psychology: it's a rational calculus, there's a small probability but a very large cost. I study religious history, and when I talk to people in the x-risk world, the psychology reminds me of the Christian apocalyptics. There are these people throughout Christian history who say, now's the time. This happened most recently, probably, when we were going through the millennium, right, in 1999. And it's this psychological drive that wants to grab at something total and eschatological, in a way to orient the entire world. I guess what I'm trying to highlight is that maybe you can see some of the same psychology in climate risk as well. It's not to say that these things are true, right? It's not to say that the world is ending in Christianity, that the climate isn't changing, or that there is no x-risk. It's that the reason people seem attracted to this narrative is almost a religious phenomenon.
Michael Wooldridge: I think that's right. And I think it appeals to something almost primal and kind of human nature. I mean, its most fundamental level is the idea that you create something, you have a child and they turn on you. That kind of the ultimate nightmare for parents. You give birth, you nurture something, you create something and then it turns on you.
Johnathan Bi: Zeus and Cronus, right? It's an archetype.
Michael Wooldridge: Exactly. Or, and this, that narrative, that story is very, very resonant. And for example, you go back to the original science fiction text, Frankenstein. That literally is the plot of Frankenstein. You use science to create life, to give life to something, to create something, and then it turns on you and you've lost control of that thing. So it's a very, very resonant idea, I think. It's very easy for people to latch on to.
Johnathan Bi: Right. It's easy for us to critique the psychology here, but what do you think is wrong or what do you think people miss about the argument itself that once we have super intelligent or at least on par with human level machine intelligence that they can recursively improve upon themselves? What do you think people are missing when they give too much weight to that argument?
Timestamps
00:00 0. Introduction
02:45 1. The Singularity Is Bullshit
14:28 2. Alan Turing
25:55 2.1 Alan Turing: The Turing Test
32:22 3. The Golden Age
39:29 4. The First AI Winter
41:25 5. Expert Systems
51:35 6. Behavioral AI
57:08 7. Agent-Based AI & Multi-Agent Systems
1:05:45 8. Machine Learning
1:08:18 9. LLMs
Gödel's theorem debunks the most important AI myth. AI will not be conscious | Roger Penrose (Nobel) https://www.youtube.com/watch?v=biUfMZ2dts8
This Is World Feb 22, 2025
What differentiates us from the machines? | 5 perspectives | Penrose, Noble, Millar, Aaronson, Bach
https://www.youtube.com/watch?v=lvDIZM0hYXM
The Institute of Art and Ideas Jan 24, 2025
Five leading thinkers, from different disciplines, share their perspectives on a question that becomes more pressing with each passing day: What, if anything, differentiates us from the machines?
00:00 Introduction
00:53 Denis Noble: Stochasticity
05:13 Scott Aaronson on AI skeptics
07:44 Roger Penrose: The collapse of the wave function
15:31 Scott Aaronson on Roger Penrose
16:51 Scott Aaronson: Ephemerality
20:20 Joscha Bach on Roger Penrose
21:48 Roger Penrose on his critics
22:07 Isabel Millar: Embodiedness
22:57 Joscha Bach: Self-organisation
26:09 Isabel Millar on structures of meaning
27:50 Why AI won't be humanlike for very long
This speech accidentally exposed the truth about the US
https://www.youtube.com/watch?v=ywmpea6vvOE
Geopolitical Economy Report Mar 21, 2025
US Vice President JD Vance gave a speech about globalization that inadvertently revealed the truth about the US empire, the goal behind the new cold war on China, the economics of imperialism, and how the Trump administration is serving billionaire Big Tech oligarchs in Silicon Valley at the expense of the working class. Ben Norton explains.
US Big Tech CEOs admit they want AI monopoly: https://www.youtube.com/watch?v=4jniPAD2uoo
Topics
0:00 (CLIP) JD Vance excerpt
0:38 US vice president speech
1:03 Preparing for war on China
3:15 Summary of Vance's speech
3:56 (CLIP) Marco Rubio on China "threat"
4:39 Deindustrialization
5:02 (CLIP) JD Vance vows "industrial comeback"
5:41 Uniting billionaires and "populists"
6:26 Neoliberalism
7:01 JD Vance's patron Peter Thiel
8:22 Trump recruits Big Tech billionaires
9:34 For monopoly, against competition
10:21 (CLIP) Peter Thiel loves monopolies
10:35 Elon Musk and Trump
11:05 Billionaire Marc Andreessen
11:47 (CLIP) Trump admin loves Silicon Valley
12:08 Trump coalition: billionaires & workers?
13:37 (CLIP) Techno-optimists vs populists?
14:21 Big Tech manifesto
15:56 Scapegoating China
17:13 (CLIP) JD Vance scapegoats China
17:34 JD Vance calls China "biggest threat"
18:53 (CLIP) JD Vance scapegoats China
19:12 Neoliberal globalization
20:07 (CLIP) JD Vance on globalization
21:18 Neoliberal globalization
22:07 Imperialism & dependency theory
24:11 China's development
24:35 US bans Chinese competitors
26:15 (CLIP) JD Vance on China's AI
26:40 US Big Tech monopolies
27:23 (CLIP) "Competition is for losers"
27:38 Trump's tariffs
28:36 Jake Sullivan's industrial policy speech
29:16 (CLIP) Jake Sullivan on Washington Consensus
31:36 Industrial policy
32:45 Tech war on China
33:31 Trump's strategy
34:01 (CLIP) JD Vance on US shipbuilding
34:53 China's Shipbuilding
35:46 State-owned enterprises
38:07 US government-owned factories
40:32 Industrial policy
43:41 (CLIP) Tax cuts on rich & deregulation
44:07 Reaganism 2.0
44:28 Historical tax rates on rich
45:48 Oligarchs avoid taxes
46:52 Trump boosts deficit & debt
47:38 Fake industrial policy
49:09 (CLIP) JD Vance is "fan" of Big Tech
49:28 Andreessen Horowitz investments
50:30 S&P 500 stock buybacks & dividends
51:03 Reaganomics
51:59 Trumponomics
53:21 Tariffs & wealth transfer
53:55 Outro
*
My comment:
https://blog.creaders.net/user_blog_diary.php?did=NTExNjgz
https://www.facebook.com/andrew.colesville
https://x.com/mwsansculotte
Does the Darwinian amoeba have intelligence?
Single-celled amoebae are brainless and therefore lack feeling and thinking, but they can remember, make decisions, and anticipate change; thus they have possessed early intelligent behavior since diverging at least 750 million years ago. (The universe is about 13.8 billion years old.)
When comparing today's AI to ancient amoebae, we may say that the former has to speed up its efforts.
The key point of their fundamental discrepancy lies in the fact that animals' conscious power comes from their analog system of computation, whereas AI's comes from a digital system. The two systems, being a unity and struggle of opposites, cannot be reconciled, although the analog system can co-opt the digital system because it is the more powerful of the two systems of computation; namely, humans can manipulate the digital to their advantage. However, digitized AI, being at a disadvantage, will fail to acquire the consciousness that originated in the analog system, which has evolved over billions of years.
We should avoid the mistake of not seeing the wood for the trees - looking only at the details and ignoring the whole.
In a nutshell, AI can be called a smart encyclopedia based on machine evolution (s.e.b.o.m.e).
Another question that needs to be answered is whether AI-inspired humanoid and/or industrial robotics will replace human labor.
The correct answer hinges on whether or not one consigns Marxism to "the ash heap of history," as Anthony Dolan suggested and did.
Marxism taught us that all capitalist profit comes from the surplus value created by the unpaid surplus labor of workers, and not from machines. Thus, robotics does not produce profit for the capitalists. Marx called machines, shop buildings, and facilities constant capital, and wages variable capital. Only living labor can create value for capital, whereas the dead labor contained in machines, for example, cannot.
Machines contain dead surplus-labor value for the machine-making capitalists, for which the buyers have already paid; hence machines by themselves contribute no new labor value. Employing machines such as robots for the purpose of totally replacing labor will not be a profitable business strategy, unless the machines are manufactured in-house so that the manufacturer can capture the inherent living surplus value internally. This is why big capital invests huge amounts of funds in all automation-related constant capital.
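The value accounting behind the last two paragraphs can be sketched in Marx's conventional notation (a summary sketch using the standard symbols c, v, s, not a formula from the original text):

```latex
% W: total value of a commodity
% c: constant capital (machines, buildings), whose value is only transferred
% v: variable capital (wages); s: surplus value created by living labor
W = c + v + s, \qquad s = s' \cdot v \quad (s' = \text{rate of surplus value})

% Rate of profit:
p' = \frac{s}{c + v} = \frac{s' v}{c + v}

% Total automation drives v \to 0, and with it s and p':
\lim_{v \to 0} \frac{s' v}{c + v} = 0
```

On this accounting, replacing all living labor (v) with machines (c) leaves no source of new surplus value, which is the sense in which total automation would be unprofitable for the capitalist who buys the machines.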
It's true that machines can do a lot of things which laborers cannot, and they should be considered a perfect solution to the burdensome problems of either a lack of labor or the limitations of labor power. But this won't be universally realized under the world's last system of private ownership of the means of production, the capitalist system. The exception is that industrial robots tend to be implemented more often than humanoid robots, because the former relieve the burdens of heavy and repetitive human laboring conditions and hence save investment in variable capital, or wages. Nonetheless, robotics does not create private profit for the capitalists. Robotics and AI, in general, are most suitable to a system whose means of production belong to the state under the dictatorship of the working class, rather than to individuals. In other words, the internal contradiction of capitalism, between private ownership of the means of production and the socialization of production, has restrained progress more advanced than what has been realized.
It is reported that OpenAI's advanced team will charge customers $220,000 per month for its top-of-the-line product, in view of the heavy legal and financial costs. Monopolization of sebome will eventually take over the manufacturing business and lead to over-production together with mass unemployment, causing commercial, financial, and politico-economic crises.
It is reasonable to redefine sebome as a more realistic AI than usual. Instead of being a machine competing with human brain power, it should descend to a lower level than AI and content itself with simulating a brainless organism such as the amoeba. Hopefully, as the machine evolves, the smart encyclopedia will become smarter, able to simulate invertebrate organisms with primitive brain power such as flatworms, jellyfish, and so on.
[Mark Wain 2025-03-25]
*
(The speechwriter Anthony Dolan gave Ronald Reagan the phrase "evil empire" to describe the Soviet Union and in another address consigned Marxism to "the ash heap of history." Dolan died at 76.)
*
汉语译文
Johnathan B
Wooldridge 教授关于人工智能历史的书:https://amzn.to/41Fqlt6(联盟链接)
文字记录
Michael Wooldridge 访谈文字记录
阅读完整的访谈文字记录。
Johnathan Bi
2025 年 3 月 11 日
0. 介绍
牛津大学的 Michael Wooldridge 是一位资深的人工智能研究员,也是基于代理的人工智能的先驱。在这次采访中,Wooldridge 将带我们回顾人工智能发展的整个 100 年历史,从图灵一直到当代大语言模型(LLM)。现在你可能会想,我们为什么要费心研究一项技术的历史?技术应该是关于最先进的技术。它应该是关于最新的发展。重温旧的和过时的东西有什么意义?事实证明,研究这段历史至少有两个重要的原因。首先,它会帮助你更好地预测它的未来。Wooldridge 极度怀疑这一轮人工智能浪潮是否会导致奇点,部分原因是他经历过许多类似的 AI 炒作周期。每当有突破时,你总会听到宗教狂热分子所宣扬的末日预言,这会产生不合理的预期,最终导致该领域倒退。
研究这段兴衰史将帮助你看透炒作,了解技术真正的发展方向。研究这段历史的第二个更有价值的原因是,它包含了被忽视的技术,这些技术可能会激发当今的创新。即使对于我这样一个 15 岁就开始编程、在大学学习计算机科学、然后去创办科技公司的人来说,Wooldridge 也发现了我从未听说过的整个 AI 范式。Wooldridge 声称,AI 中未采取的这些路径,所谓的失败并没有错,只是为时过早。所以,这 100 年的历史是一座思想宝库,我们今天应该回顾它来寻找灵感。
1. 奇点是胡说八道
Johnathan Bi:这是您书中《通往意识机器之路》中一个颇具挑衅性的章节标题。奇点是胡说八道。为什么会这样?
Michael Wooldridge:有这样一种说法,它非常流行,而且非常引人注目,即在某个时候,机器会变得像人类一样聪明,然后它们可以运用自己的智慧让自己变得更聪明。故事是,这一切都超出了我们的控制范围。当然,这也是很多科幻电影的情节,尤其是《终结者》。我和任何人一样喜欢那些电影。但它完全不可信,我出于各种原因对这种说法感到沮丧,其中之一是:每当在关于人工智能发展方向和风险的严肃讨论中出现这种说法时(人工智能确实存在真实的风险),用我同事的话说,它往往会把房间里的氧气都吸走。它往往会主导谈话,分散我们对真正应该谈论的事情的注意力。
Johnathan Bi:是的。事实上,由此产生了一门学科,叫做生存风险(existential risk)。它有点像对终结者情况的担忧,并研究我们如何才能更好地使这些超级智能代理与人类利益保持一致。如果你不仅看这种说法,还看资金投入,以及最聪明的人在公司和政策团体中把时间花在思考什么上,可以说,生存风险研究占据了整个市场的主导地位。你认为为什么这种说法会获得如此多的追随者?
Michael Wooldridge:我认为这是一个概率低但风险极高的论点,我的意思是,我认为大多数人都接受这种说法,这种说法不太可信。但如果真的发生了,那将是最糟糕的事情。风险非常非常高。当你将这个概率乘以风险时,那么这个论点就是你应该开始考虑这件事了。但是,当大型语言模型的成功变得显而易见,ChatGPT 发布后,去年每个人都对此感到非常兴奋,围绕这种争论达到了一种有点歇斯底里的程度,在某个时候变得有点失控。我的感觉是,辩论已经平静了一些,更多地关注我们目前所处的现实以及风险是什么。
Johnathan Bi:是的。我认为这是对这种心理的一种相当宽容的解读:这是一种理性的计算,概率很小,但代价很大。我研究宗教史。当我与生存风险圈子里的人交谈时,这种心理让我想起了基督教的末世论者。基督教历史上总有人说,现在就是末日降临之时。最近的一次大概是在我们跨入千禧年的时候,对吧,1999 年。正是这种心理驱动力想要抓住某种总体性的、末世论的东西,以此来定位整个世界。所以,我想强调的是,也许在气候风险中你也能看到类似的心理。这并不是说这些事情都是真的,对吧?这并不是说基督教所说的世界末日真的要来了,也不是说气候没有变化,或者不存在生存风险。而是说,人们似乎被这种叙述所吸引的原因几乎是一种宗教现象。
迈克尔·伍尔德里奇:我认为这是对的。我认为它诉诸于某种几乎原始的、某种人性。我的意思是,它最根本的层次是你创造了一些东西,你有一个孩子,他们却背叛了你。这对父母来说是终极噩梦。你生下孩子,你养育孩子,你创造了一些东西,然后它却背叛了你。
约翰纳森·毕:宙斯和克洛诺斯,对吧?这是一个原型。
迈克尔·伍尔德里奇:没错。或者,这个,那个叙述,那个故事非常非常有共鸣。例如,回到最初的科幻小说文本《弗兰肯斯坦》。这确实是《弗兰肯斯坦》的情节。你用科学创造生命,赋予某物生命,创造某物,然后它反过来攻击你,你就失去了对那东西的控制。所以我认为这是一个非常非常有共鸣的想法。人们很容易接受它。
Johnathan Bi:是的。我们很容易批评这里的心理学,但你认为有什么问题,或者你认为人们忽略了这一论点本身,即一旦我们拥有超级智能或至少与人类水平的机器智能相当,它们就可以不断自我改进?当人们过分重视这一论点时,你认为人们忽略了什么?
时间戳
00:00 0. 简介
02:45 1. 奇点是胡说八道
14:28 2. 阿兰·图灵
25:55 2.1 阿兰·图灵:图灵测试
32:22 3. 黄金时代
39:29 4. 第一个人工智能寒冬
41:25 5. 专家系统
51:35 6. 行为人工智能
57:08 7. 基于代理的人工智能和多代理系统
1:05:45 8. 机器学习
1:08:18 9. 大语言模型
哥德尔定理揭穿了最重要的人工智能神话。人工智能不会有意识 | 罗杰·彭罗斯(诺贝尔奖得主) https://www.youtube.com/watch?v=biUfMZ2dts8
这就是世界 2025 年 2 月 22 日
五位来自不同学科的顶尖思想家就一个日益紧迫的问题分享了各自的观点:我们与机器之间到底有什么区别?
00:00 简介
00:53 Denis Noble:随机性
05:13 Scott Aaronson 谈 AI 怀疑论者
07:44 Roger Penrose:波函数的坍缩
15:31 Scott Aaronson 谈 Roger Penrose
16:51 Scott Aaronson:短暂性
20:20 Joscha Bach 谈 Roger Penrose
21:48 Roger Penrose 谈他的批评者
22:07 Isabel Millar:具身性
22:57 Joscha Bach:自组织
26:09 Isabel Millar 谈意义结构
27:50 为什么 AI 不会长期像人类一样
*
美国前劳工部长罗伯特赖希谈美国经济的真相:揭秘政治、道德与财富分配的秘密,为什么你努力工作却无法摆脱贫困?是谁在幕后操控着规则?|作为政商集团的一部分,他这是背叛自己的阶级?|准备好颠覆三观了吗?
https://www.youtube.com/watch?v=Qd29CoDw0Os
宏观洞察
Dec 25, 2024 #社会真相 #财富分配
你以为唐纳德·特朗普是美国危机的根源? 🤯 不!他只是一个结果!本期视频,我们将深入揭露美国经济和社会深层腐朽的真相,打破你对“自由市场”的所有幻想!
我们将揭示制度性缺陷和权力失衡如何加剧不平等,并打破 “勤劳致富” 的神话。
👉 你将收获:
了解经济、政治和道德之间的复杂联系
看清富人如何通过政治献金和游说操纵规则
理解全球化和自由贸易背后的阴暗面
辨识企业福利和政府补贴的本质
获得独立思考的能力,不再被“神话”蒙蔽
💡深度解析
特朗普现象的根源: 特朗普的崛起并非偶然,而是美国社会长期存在的不平等和对机构不信任的体现。
经济学与道德政治的交织: 经济学并非纯粹的科学,而是一门涉及价值观和政治选择的学科。
市场规则由权力决定: 自由市场的规则并非天然,而是由政治和权力博弈决定的。
富人通过政治影响力获利: 大公司、华尔街和富人通过政治捐款和游说等手段操纵规则,为自身牟利。
财富分配不均: 收入和财富分配不均并非完全基于个人努力,而是与权力、制度和出身密切相关。
“勤劳致富”的幻觉: 努力工作并不一定能带来财富,社会阶层固化,普通人上升通道受阻。
自由贸易的负面影响: 自由贸易并非对所有人有利,导致美国制造业岗位流失、工人利益受损。
“社会主义”的污名化: 美国并非真正的社会主义,富人享有大量福利,普通人却难以获得社会保障。
就业创造的真相: 真正的就业机会来自中下阶层的消费支出,而非富人的投资和企业减税。
通货膨胀并非简单问题: 垄断企业和寡头垄断推高价格,而并非简单的政府支出和工资上涨。
经济增长并非永远有益: 无限增长给环境造成严重破坏,需要转向可持续发展模式。
不平等是民主的威胁: 贫富差距加剧社会分裂,导致民粹主义和极端主义兴起。
需要系统性变革: 仅仅谴责特朗普的支持者并不能解决问题,需要从根本上改变经济和政治制度。
#社会真相 #权力运作 #财富分配
宏观洞察
*
面對AI新科技,沒有人是局外人《AI底層真相》|天下文化 Podcast 讀本郝書 EP32 https://www.youtube.com/watch?v=hsC5na_u2aY
Feb 20, 2025 天下文化 相信閱讀 Podcast
在本集的天下文化Podcast中,主持人郝旭烈介紹穆吉亞的《AI底層真相》一書。副標題「如何避免數位滲透的陰影」點出這本書的重點,不在於教導讀者如何使用AI,或展示AI帶來的正面效益,而是揭露AI背後的多重面貌,提醒我們不能只看到它光鮮亮麗的一面,更要正視其潛藏的負面影響。
書中提出AI帶來的四大隱憂:勞動剝削、深度偽造、全面監控與個資濫用。
AI的發展並非全然自動化,它依賴大量資料進行訓練,而這些資料需要人力標記。穆吉亞揭露了所謂的「隱形工人」——這些在AI產業中默默付出的人,經常遭到剝削。其中一個例子是肯亞的薩碼公司,這是一家專門提供AI訓練服務的公司,聘用了大量來自低收入國家的員工,負責數據標記和內容審查。
這些員工的工作條件極其惡劣,薪資微薄,且經常接觸暴力、虐待和色情等令人身心受創的內容。長期下來,這些工作對他們的心理健康造成巨大影響,許多人出現創傷後壓力症候群(PTSD),甚至需要依靠抗憂鬱藥物。有些人因心理創傷而無法與家人親近,影響正常生活。
穆吉亞指出,AI產業表面上看似先進和自動化,但實際上背後仍存在嚴重的剝削現象,尤其是在數據標記和內容審查等需要大量人力的環節。他認為這種現象與過去的殖民主義和傳統外包產業的剝削模式如出一轍。AI並沒有真正解決這些結構性的問題,反而加深了全球貧富差距和勞動剝削。
此外,過去我們常說「有圖有真相」,但在深度偽造技術普及的今天,「有圖不一定有真相」已成現實。因此,當我們面對影像或資訊時,一定要保持警惕和批判思維,不要輕信網路上的一切內容,尤其是在社交媒體上流傳的圖片和影片。
最後,郝哥提醒聽眾,AI技術的發展帶來便利和創新,但同時也伴隨著隱私侵害、勞動剝削和資訊偽造等問題。我們應該在享受AI帶來的好處的同時,保持對其潛在風險的認識,並思考如何在這個數位時代中保護自己和他人的權益。
*
我的评论:
https://blog.creaders.net/user_blog_diary.php?did=NTExNjgz
https://www.facebook.com/andrew.colesville
https://x.com/mwsansculotte
达尔文变形虫有智慧吗?
单细胞变形虫没有大脑,因此缺乏感觉或思考,但它们可以记忆、做出决定并预测变化,因此,它们在至少7.5亿年前分化后确实具有早期的智能行为。(宇宙已经有约 138 亿年的历史了。)
如果将当今的人工智能与古代变形虫进行比较时,我们可以说前者还有很长的路要走。
两者根本差异的关键在于,动物的意识能力来自其模拟计算系统,而人工智能建立在数字系统上。它们是两个统一的对立面,彼此斗争而无法调和,尽管模拟系统可以适应数字系统,因为它是两个计算系统中更强大的。因此,处于弱势的数字化的人工智能将无法获得来自经过数十亿年的进化过程的模拟系统的意识。
我们应避免只见树木不见林 - 只看局部而忽视整体 - 的错误。
简而言之,人工智能可以称为基于机器进化的聪明百科全书(s.e.b.o.m.e,也就是赛博梅或 a smart encyclopedia based on machine evolution.)
另一个需要回答的问题是,人工智能启发的类人机器人和/或工业机器人是否会取代人类劳动力。
正确的答案取决于人们是否像安东尼·多兰所建议和所做的那样,将马克思主义扔进了“历史的垃圾堆”。
马克思主义告诉我们,所有资本主义利润都来自工人无偿剩余劳动创造的剩余价值,而不是机器。因此,机器人不会为资本家创造利润。马克思称机器、车间建筑和设施为不变资本,工资为可变资本。只有活劳动才能为资本创造价值,而机器中包含的死劳动则不能。
机器为机器制造资本家提供了死的剩余劳动价值,买家已经为此付了钱,因此机器本身不再贡献新的劳动价值。使用机器人等机器来完全替代劳动力将不是一种有利可图的商业策略,除非机器是在内部制造的,以便制造商能够在内部获取固有的活剩余价值。这就是为什么大资本会在所有与自动化相关的不变资本上投入巨额资金。
机器确实可以做很多人工做不到的事情,应该被视为解决劳动力短缺或劳动力限制等沉重问题的完美解决方案。但这在世界上最后一个生产资料私有制或资本主义制度中不会普遍实现。例外是,工业机器人比人形机器人更常被采用,因为前者可以减轻人类繁重和重复的劳动条件的负担,从而节省可变资本或工资的投资。尽管如此,机器人技术不会为资本家创造私人利润。一般来说,机器人技术和人工智能最适合于工人阶级专政下生产资料属于国家而不是个人的制度。换句话说,资本主义的内在矛盾——生产资料私有制和生产社会化,已经限制了更先进的进步,而没有实现。
据悉,OpenAI高级团队将向客户收取每月22万美元的高档AI的使用费,因为要承担高昂的法律和财务成本。sebome的垄断最终将接管制造业,导致生产过剩和失业过剩,引发商业、金融和政治经济危机。
将sebome也就是赛博梅(基于机器进化的聪明百科全书)重新定义为比通常更现实的AI是合理的。它首先不应该是一台与人类脑力竞争的机器,而应该下降到比AI更低的水平,满足于模拟无脑生物,如变形虫。希望随着机器的进化,基于机器进化的聪明百科全书会变得更聪明,能够模拟具有原始脑力的无脊椎动物,如扁虫和水母等。[Mark Wain 2025-03-25]