The machine stops: AI bubble is expected to burst

As ever more investment is poured into the current wave of AI, a feeling is growing in the industry that neural networks, large language models and other AI technologies, as they are construed now, are far less effective and useful than the affectionate visionaries, tech prophets and evangelists try to convince us. Feeble voices of criticism have been heard since the start of the so-called “AI revolution” (and louder ones since the comedy of Sam Altman’s resignation in 2023), but it has now become quite clear that expectations for the technology have largely surpassed its objective capabilities. The current AI, with its obsession with ever greater and grander projects such as “Artificial General Intelligence” (promised to approach the human mind), cannot deliver, for numerous reasons and inherent limitations. We still know too little about human cognition and the human mind. It is absurd to attempt an analogue of the human mind without first even trying to understand how it came about, and without first building a functional analogue of a cockroach’s or a bee’s mind.

But it is already clear that the current data-driven, opaque, associative “deep” learning approach, based on simple pattern matching that interpolates vast amounts of data in the manner of statistical approximation, has failed. LLMs now have access to most human-produced texts and images, gained to a large extent through massive copyright infringement, unauthorised web scraping, deceptive terms of service aimed at data exploitation, and similar dubious practices. The supply of new, genuinely human-generated data is approaching a point of diminishing returns. Yet despite the gigantic amount of training data and the huge energy costs, the newest models are only marginally better than those of the previous iteration. The ancient rule-based ELIZA language model, a product of computer archaeology capable of running on the weakest CPUs of the 1960s, outperformed the then-revolutionary GPT-3.5 behind ChatGPT in a Turing-test study, even though the latter required huge data centres to run.

Meanwhile, the negative effects abound. The web is being poisoned with a growing amount of meaningless and grossly inaccurate digital waste generated by AI. Yet instead of focusing on a radical change of architecture, Big Tech invests even more in data centres, consuming even more electric power (and therefore producing even greater carbon emissions), and tries ever more intrusively to suck out and exploit human-generated content. For example:

☹ Gmail reads users’ emails and attachments to train its “smart” features.
☹ AI bots scraping the net are as malicious as DDoS attacks.
☹ Perplexity AI bots continue crawling content explicitly blocked from access.
☹ AI scraper bots are responsible for an 86% increase in invalid Internet traffic.
☹ AI bots disrupt scientific databases and journals.

The verbiage surrounding the current AI is wrong and deceptive. Essentially, the currently popular AI just seeks correlation patterns in large amounts of data. It therefore cannot meaningfully be described as “generative” or “agentic.” A program implementing a straight-line regression model has the same level of “generativity” when it generates a Y from a given X. A similar program that reads X values from a sensor and executes a loop converting the resulting Y into some action has the same level of “agency” as current AI agents; essentially, it does not exceed the level of a trivial thermostat. And a computer program that functions like a regression model cannot “hallucinate,” because it does not have a mind; it can only malfunction and output a wrong result.
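To make the analogy concrete, here is a minimal sketch (all function names are illustrative, not taken from any real AI framework) of the two trivial programs described above: a least-squares line fit that “generates” a Y from an unseen X, and a one-step thermostat loop that senses and acts, i.e. an “agent”.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: pure pattern-finding in data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def generate_y(a, b, x):
    """'Generative' output: a Y interpolated from the fitted pattern."""
    return a * x + b

def thermostat_agent(read_temp, set_heater, target=20.0):
    """One step of an 'agentic' loop: perceive X, act on the result."""
    temp = read_temp()           # perception
    set_heater(temp < target)    # action: heater on when too cold
```

By the vendors’ loose usage, `generate_y` is “generative” and `thermostat_agent` is an “agent”; the sketch only shows that the words, applied this way, carry no information about intelligence.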

The machine stops

Where is all this multi-billion-dollar money going to go? Chances are high that all the money currently inflating the AI bubble will be lost. The pain of the expected AI bubble crash is likely to be much greater than that of the dot-com crash two and a half decades ago. Few people remember, but it took a decade for the value of the assets lost at that time to recover. Yet the share of US households that invest now exceeds 20%, surpassing the levels seen before the dot-com crash. All these people should be prepared to lose their money, or to withdraw it before the burst.

It does not seem likely that the current AI will take over the planet, destroy the human race, and so on. All the scary prophecies assume a real, genuinely intelligent AGI, which does not exist. These false prophecies, willingly or unwillingly, play the role of Alcibiades’ dog’s tail, diverting attention from the real dangers of AI to unrealistic, imaginary dangers that merely appear much scarier. The current AI is harmful to human learning, cognition, the environment, and investors’ pockets. But the dangers of AI are not caused by AI itself; like most other calamities, they are consequences of human actions. AI does not seem to add anything new to the human sins. It merely amplifies human-generated evils in speed, scale and scope.