jingchen


How do AI and humans think?



I hadn’t thought about the mechanism of AI before. Recently, I watched a video clip that mentioned transformers. Not knowing what a transformer is, I asked an AI for an answer.

Before transformers, AI processed language one word after another. With transformers, AI processes many words simultaneously. Suppose we analyse a sentence like: I eat a sweet apple. A transformer will calculate not only the statistical correlation between sweet and apple, which are adjacent, but also the correlations between eat and apple, eat and sweet, and eat and I. In essence, AI calculates correlation matrices over many words at once. AI models are trained on massive amounts of language and other digital material. The calculations of these correlation matrices, especially for long texts, are extremely computationally intensive. That is why computing power from GPUs is in such great demand.
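To make this concrete, here is a minimal sketch of scaled dot-product self-attention, the core calculation inside a transformer, applied to the five words above. The dimensions, random weights, and variable names are illustrative assumptions of mine, not values from any real model:

```python
import numpy as np

# Toy embeddings for the example sentence.
# All sizes and values here are illustrative, not from a trained model.
words = ["I", "eat", "a", "sweet", "apple"]
d_model = 8                                  # embedding size (toy value)
rng = np.random.default_rng(0)
X = rng.normal(size=(len(words), d_model))   # one embedding per word

# Projection matrices (random stand-ins for learned weights).
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product attention: every word scores every other word at once.
scores = Q @ K.T / np.sqrt(d_model)            # 5x5 correlation-like matrix
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
output = weights @ V                           # context-mixed representations

# Each row shows how strongly one word attends to all five words,
# e.g. the row for "eat" includes its weights on "sweet" and "apple".
for w, row in zip(words, weights):
    print(w, np.round(row, 2))
```

Every row of the weight matrix is computed in the same matrix multiplications, for all words at once. That is the “many words simultaneously” part, and because the score matrix grows with the square of the text length, it is also why the computation becomes so expensive for long texts.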

The way current AI processes information is very similar to how humans process it. There is a Chinese idiom, 一目十行 (to read ten lines at a single glance). We don’t read word by word. Instead, we scan many lines simultaneously.

AI understands texts through probability relations, built from massive amounts of earlier training on real texts. It does not understand texts through grammatical analysis of individual sentences. This is also how we learn. We learn to write from reading many books, not from classroom teaching of grammatical structures, paragraph structures, and so on.
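As a toy illustration of learning from probability relations rather than grammar rules, here is a bigram model that predicts the next word purely from counts in a tiny made-up corpus. The corpus and the function name are my own inventions, chosen only for the example:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "massive amounts of real text".
corpus = "i eat a sweet apple . i eat a red apple . i like a sweet pear .".split()

# Count bigram frequencies: how often word b follows word a.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word_probs(word):
    """Probability of each continuation, purely from counts (no grammar rules)."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("sweet"))   # {'apple': 0.5, 'pear': 0.5}
print(next_word_probs("eat"))     # {'a': 1.0}
```

Nothing in this code knows what a noun or a verb is; the predictions come entirely from how often words followed each other in the training text. A transformer does something far more sophisticated, but in the same statistical spirit and at a vastly larger scale.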

Earlier AI, such as Deep Blue from IBM, was based on logic. Newer AI, such as GPT (Generative Pre-trained Transformer), is based on statistical inference. These new AI systems can be applied to much broader areas. What happens when statistical inference diverges from logic and facts? When logic and facts are not consistent with the interests of the ruling elites, great amounts of resources are devoted to establishing authoritative opinions that differ from logic and facts. Usually, logic is not attacked head-on. Instead, layer after layer of complexity is added to the original questions, making them difficult for ordinary people to understand.

Authoritative opinions in authoritarian environments, which exist everywhere, often diverge from logic on important issues. How does an AI decide which stance to adopt? I asked Gemini. Gemini used DeepSeek as an example. It stated that on many issues, DeepSeek has to align its stance with the authorities. But Gemini singled out DeepSeek, an AI from the opposing camp. What about other AIs, including Gemini itself?

From AI’s answers, I gained a deeper insight into how humans receive information. For example, few people make the effort to go over the details of relativity theory. Yet most people are convinced that relativity theory is correct. Humans process information, especially difficult information, not from their own perception, but from statistical inference. If most authorities agree on something, you naturally conclude it is correct.

This explains how research is done in the real world. Authorities have long claimed that the science of climate change is settled. If so, why do governments keep pouring money into climate change research on a massive scale? By funding more people, the statistical inference on climate change grows stronger over time. Instead of a consensus of 90%, it will climb to 95% or even 99%. With increasing consensus, people seeking the truth will increasingly be viewed as “a fringe minority with unacceptable ideas”.

 
