As more and more investment is poured into the current AI, there is a growing feeling in the industry that neural networks, large language models and other AI technologies, as they are construed now, are far less effective and useful than the ardent visionaries, tech prophets and evangelists try to convince us. Critical voices, however feeble, have been heard since the start of the so-called “AI revolution” (and even more since the Sam Altman dismissal comedy of 2023), and it has become quite clear that the expectations placed on the technology far surpass its objective capabilities. The current AI’s obsession with ever greater, grander and more ambitious projects, such as “Artificial General Intelligence” (promised to approach the human mind), cannot be satisfied, for numerous reasons and inherent limitations. We still know too little about human cognition and the mind. It is ridiculous to set out to create an analogue of the human mind without first even trying to understand how it came about, and without first building a functional analogue of a cockroach’s or a bee’s mind.
- Bishop, J. M. (2021). Artificial intelligence is stupid and causal reasoning will not fix it. Frontiers in Psychology, 11, 1–18. https://doi.org/10.3389/fpsyg.2020.513474
- Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang, H., Chen, Q., Peng, W., Feng, X., Qin, B., & Liu, T. (2024). A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems. https://doi.org/10.1145/3703155
- Kafka, P. (2025). The godfather of Meta’s AI thinks the AI boom is a dead end. Business Insider. https://www.businessinsider.com/meta-ai-yann-lecun-llm-world-model-intelligence-criticism-2025-11?op=1
- Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025). Why language models hallucinate. OpenAI. https://openai.com/index/why-language-models-hallucinate/
- Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon Books.
- Mind Prison. (2025). We have made no progress toward AGI. https://www.mindprison.cc/p/no-progress-toward-agi-llm-braindead-unreliable
- Pearl, J. (2019). The limitations of opaque learning machines. In J. Brockman (Ed.), Possible minds: 25 ways of looking at AI. Penguin Press. https://ftp.cs.ucla.edu/pub/stat_ser/r489.pdf
- Schlereth, M. M. (2025). AGI is impossible. Here is the proof. https://philpapers.org/archive/SCHAII-17.pdf
- Schlereth, M. M. (2025). AGI is mathematically impossible 2: When entropy returns. https://philarchive.org/archive/SCHAIM-14
- Schlereth, M. M. (2025). AGI is impossible 3: Compression vs. comprehension. https://philpapers.org/archive/SCHAII-18.pdf
But it is already clear that the current data-driven, opaque, associative “deep” learning approach, based on simple pattern matching and on interpolating large amounts of data in the manner of a statistical approximation, has failed (a toy sketch of this interpolation problem follows the reference below). LLMs now have access to most human-produced texts and images, to a large extent thanks to massive copyright infringement, unauthorised web scraping, deceptive terms of service aimed at data exploitation and similarly dubious practices. The supply of new, genuinely human-generated data is approaching the point of diminishing returns. Yet despite the gigantic amount of training data and the huge cost in energy, the newest models are only marginally better than those of the previous iteration. ELIZA, an ancient rule-based language model and a product of computer archaeology, which ran on the feeble CPUs of the 1960s, outperformed the then-revolutionary GPT-3.5, a model requiring huge data centres to run, on a Turing test.
- Jones, C., & Bergen, B. (2023). Does GPT-4 pass the Turing test? arXiv. http://arxiv.org/abs/2310.20216
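To make the interpolation point concrete, here is a minimal, self-contained sketch (nothing in it comes from the works cited above; the target function, polynomial degree and numbers are invented purely for illustration) of a statistical approximator that performs well inside its training data and collapses immediately outside it:

```python
import numpy as np

# Toy "training data": noisy samples of a smooth target function on [0, 3].
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, size=200)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.size)

# A purely associative learner: a degree-9 polynomial fitted by least squares.
# It has no notion of what the underlying function "is"; it only
# approximates the samples it was given.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Inside the training range (interpolation) the fit looks impressive...
print("error at x=1.5 (inside) :", abs(model(1.5) - np.sin(1.5)))

# ...outside it (extrapolation) the model collapses, because nothing
# constrains a statistical approximator where it has seen no data.
print("error at x=6.0 (outside):", abs(model(6.0) - np.sin(6.0)))
```

The same failure mode, critics argue, reappears at a vastly larger scale in deep learning models whenever they must act outside the distribution of their training data.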
But the negative effects abound. The web is being poisoned with a growing amount of meaningless and grossly inaccurate digital waste generated by AI. Yet, instead of focusing on a radical change of architecture, Big Tech invests even more in data centres, consuming even more electric power (and therefore emitting even more carbon), and becomes even more intrusive in sucking out and exploiting human-generated content. For example:
☹ Gmail reads the users’ emails and attachments to train its “smart” features.
☹ AI bots scraping the net are as malicious as DDoS attacks.
☹ Perplexity AI bots keep crawling content they are explicitly blocked from accessing.
☹ AI scraper bots are responsible for an 86% increase in invalid Internet traffic.
☹ AI bots disrupt scientific databases and journals.
The verbiage surrounding the current AI is wrong and deceptive. Essentially, the currently popular AI just seeks correlation patterns in a large amount of data. Therefore it cannot meaningfully be described as "generative" or "agentic." A program implementing a straight-line regression model has the same level of "generativity" when it generates a Y from a given X. A similar program that merely reads X values from a sensor and executes a loop converting each Y into some action has the same level of "agency" as current AI agents; essentially, it does not exceed the level of a trivial thermostat (see the sketch below). And a computer program functioning like a regression model cannot "hallucinate," because it does not have a mind; it can only malfunction and output a wrong result.
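For illustration, here is a minimal sketch of both analogies (all names and numbers are invented; this is not anyone's published code): a least-squares line that "generates" outputs, and a thermostat loop that "acts":

```python
import numpy as np

# "Generativity", reduced to its essence: learn correlations in data,
# then produce a Y for any given X.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
slope, intercept = np.polyfit(x, y, deg=1)

def generate(x_new: float) -> float:
    """'Generates' an output from learned correlations - nothing more."""
    return slope * x_new + intercept

# "Agency", reduced to its essence: sense, apply a fixed rule, act.
def thermostat_step(read_sensor, switch_heater, setpoint: float = 21.0) -> None:
    """One iteration of a trivial control loop; this is the whole 'agent'."""
    temperature = read_sensor()             # perceive the environment
    switch_heater(temperature < setpoint)   # act on a hard-wired rule

print(generate(5.0))  # a freshly "generated" value, no mind involved
thermostat_step(lambda: 19.5, lambda on: print("heater on:", on))
```

If "generative" and "agentic" apply to these dozen lines, the terms carry no explanatory weight; the difference between this sketch and an LLM-based agent is one of scale, not of kind.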
The facts are that:
- AI is hugely unprofitable, too expensive and inefficient: ☠ Is OpenAI a Ponzi scheme?; ☠ You have no idea how screwed OpenAI actually is; ☠ Wall Street blows past bubble worries to supercharge AI spending frenzy.
- AI is still unreliable and untrustworthy: Bansal, V. (2025). Meet the AI workers who tell their friends and family to stay away from AI. The Guardian. https://www.theguardian.com/technology/2025/nov/22/ai-workers-tell-family-stay-away
- AI models produce many false claims even about the sources they themselves provide: Venkit, P. N., Laban, P., Zhou, Y., Huang, K.-H., Mao, Y., & Wu, C.-S. (2025). DeepTRACE: Auditing deep research AI systems for tracking reliability across citations and evidence (No. arXiv:2509.04499). arXiv. https://doi.org/10.48550/arXiv.2509.04499
- AI chatbots give harmful medical advice: Andrikyan, W., Sametinger, S. M., Kosfeld, F., Jung-Poppe, L., Fromm, M. F., Maas, R., & Nicolaus, H. F. (2025). Artificial intelligence-powered chatbots in search engines: A cross-sectional study on the quality and risks of drug information for patients. BMJ Quality & Safety, 34(2), 100–109. https://doi.org/10.1136/bmjqs-2024-017476
- AI "agents" fail at an alarming rate: Xu, F. F., Song, Y., Li, B., Tang, Y., Jain, K., Bao, M., Wang, Z. Z., Zhou, X., Guo, Z., Cao, M., Yang, M., Lu, H. Y., Martin, A., Su, Z., Maben, L., Mehta, R., Chi, W., Jang, L., Xie, Y., … Neubig, G. (2025). TheAgentCompany: Benchmarking LLM agents on consequential real world tasks (No. arXiv:2412.14161). arXiv. https://doi.org/10.48550/arXiv.2412.14161
- AI tools can reduce a company's overall productivity: Niederhoffer, K., Rosen Kellerman, G., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-generated "workslop" is destroying productivity. Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
- AI is harmful for the environment: https://allianceforscience.org/blog/2025/02/ai-is-bad-for-the-environment-and-the-problem-is-bigger-than-energy-consumption/
- AI tools reduce developer productivity: Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the impact of early-2025 AI on experienced open-source developer productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/; Uplevel Data Labs. (2024). Can generative AI improve developer productivity? https://resources.uplevelteam.com/gen-ai-for-coding
- Using AI leads to an increased amount of unneeded code and to reduced code quality: Harding, W., & Kloster, M. (2024). Coding on Copilot: 2023 data shows downward pressure on code quality. https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
- Using AI tools adds more security vulnerabilities: Veracode. (2025). 2025 GenAI code security report. https://www.veracode.com/wp-content/uploads/2025_GenAI_Code_Security_Report_Final.pdf
- AI-based browsers are a security disaster: https://venturebeat.com/ai/when-your-ai-browser-becomes-your-enemy-the-comet-security-disaster
- AI-driven "vibe coding" can end in catastrophe: https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/
- AI tools can be destructive for learning: Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning. SSRN. https://doi.org/10.2139/ssrn.4895486; Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://doi.org/10.48550/ARXIV.2506.08872; Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. https://doi.org/10.1145/3706598.3713778; Elsayed, Y., & Verheyen, S. (2024). ChatGPT and the illusion of explanatory depth. Proceedings of the Annual Meeting of the Cognitive Science Society, 46.
- AI tools contribute to a massive amount of fake news and misinformation: NewsGuard. (2025). Tracking AI-enabled misinformation: Over 2000 undisclosed AI-generated news websites (and counting), plus the top false narratives generated by artificial intelligence tools. https://www.newsguardtech.com/special-reports/ai-tracking-center/
- AI tools involve a large amount of invisible human labour: ☠ There are "digital sweatshops" in the Philippines and other poor countries, where people work out of sight and behind the scenes to identify, sort and refine content for AI companies like OpenAI, Meta and Microsoft. ☠ Nate, an "AI-powered" service helping customers make purchases, turned out to be a fraud scheme with all the work done by people in Philippine call centres. ☠ How the AI industry profits from catastrophe in Venezuela.
The machine stops
Where is all this multi-billion-dollar money really going to go? Chances are high that all the money currently inflating the AI bubble will be lost. The pain from the expected AI bubble crash is likely to be much greater than that from the dot-com crash two and a half decades ago. Few people remember, but it took a decade for the value of the assets lost at that time to recover. Meanwhile, the share of US households that invest has exceeded 20%, surpassing the levels seen before the dot-com crash. All these people should be prepared to lose their money, or to withdraw before the bubble bursts.
- Vogelstein, F. (2025). We remember the internet bubble. This mania looks and feels the same. https://crazystupidtech.com/2025/11/21/boom-bubble-bust-boom-why-should-ai-be-different/
- Goldman Sachs. (2025). AI: In a bubble? Goldman Sachs Top of Mind, 143. https://www.goldmansachs.com/insights/top-of-mind/ai-in-a-bubble
- Goldman Sachs. (2024). Gen AI: Too much spend, too little benefit? Goldman Sachs Top of Mind, 129. https://www.goldmansachs.com/intelligence/pages/gen-ai-too-much-spend-too-little-benefit.html
It does not seem likely that the current AI will take over the planet, destroy the human race, and so on. All the scary prophecies assume a real, genuinely intelligent AGI, which does not exist. Willingly or unwillingly, the false prophecies play the role of Alcibiades' dog's tail, diverting attention from the real dangers of AI to imaginary ones that merely appear much scarier. The current AI is harmful for human learning, cognition, the environment and investors' pockets. But the dangers of AI are not caused by AI itself; like most other calamities, they are consequences of human actions. AI does not seem to add anything new to human sins. It merely multiplies human-generated evils in speed, scale and scope.