Deconstructing AI Myths

There have been numerous articles, from sensational news headlines to analytical reports, suggesting that soulless algorithms will eventually replace all (or nearly all) jobs. We examine whether these bleak forecasts hold up and explore the origins of our fear of AI systems, the influence of several generations of science fiction, and how the myth of 'terrifying robots from the future' can harm developers and specialists today.

Why Algorithms Are Currently Unpopular


Artificial intelligence is a constant presence in the news, in scientific and pop-science articles, and even in humorous compilations like 'a neural network invents a joke'. Yet the general public tends to view AI as an unwelcome neighbor to be tolerated. Sociological studies reflect this: a Monmouth University survey this year found that only 9% of Americans believe AI does 'more good than harm' to society.

Most of the rest are almost evenly split: 46% believe AI's benefits and harms are about equal, and 41% think the harm outweighs the good. By comparison, more than 30 years earlier, in 1987, Cambridge Research International asked a similar question, and 20% of Americans then viewed artificial intelligence as 'more of a blessing'.

Other studies also record skeptical attitudes: according to Pew Research Center, only 15% of respondents are excited that AI systems are being used more actively in everyday life, while most of the rest are not too pleased. Researchers conclude that the topic of AI inspired people more a few decades ago, when artificial intelligence still seemed like something from the distant future.

Now that algorithms have become an integral part of our lives, many perceive them as a threat. One reason is the fear of becoming professionally redundant: every advance in AI is accompanied by a list of jobs that artificial intelligence will supposedly replace in the near future. Back in 2017, McKinsey gave an 'optimistic' forecast: by 2030, automation could eliminate up to 800 million jobs. Other consulting firms emphasize in their reports that automation will affect not only 'blue-collar' workers but also, for example, lawyers and middle managers.

In such an environment, under a barrage of clickbait headlines like 'AI instead of an accountant/tester/designer: which professions will disappear in 2024', it is hard to stay optimistic about technology. The broad public's attitude toward AI is now often described as 'alarmist'; in the West, AI is increasingly criticized from the standpoint of longtermism, whose followers are convinced that artificial intelligence poses an existential threat to humanity. Such concerns were echoed in an open letter calling for a pause on neural network training, which made a lot of noise in the press this spring, largely because it was signed by Elon Musk, Steve Wozniak, and several other well-known entrepreneurs and scientists.

“Something’s Wrong Here”

Sociologists are quite clear: people have a distaste for AI. But what do respondents themselves mean by 'artificial intelligence'? Scientists have an answer to this question, and it is not comforting. A 2019 study by Cambridge experts revealed that a quarter of respondents could not provide a 'satisfactory' description of an AI system (simply put, they were convinced that 'AI' meant robots, like in the movies). Similarly, not all respondents are aware of the distinction between strong and weak AI, so they see no reason to doubt that ChatGPT could eventually evolve to the level of Skynet.

Moreover, even members of Generation Z and younger sometimes don't realize that AI algorithms underpin a huge share of modern services, performing familiar, routine, 'non-scary' tasks like spam filtering or product recommendations. Patrick Murray, director of the Monmouth University Polling Institute, told a Washington Post journalist that many of his students were surprised to learn they regularly use the output of AI systems.

Participants in various surveys do not always accurately assess the penetration of technologies into life and their real capabilities and limitations. Meanwhile, the term ‘artificial intelligence’ remains vague and loaded with meanings derived from movies and fiction.

Where Does Our Fear of AI Come From?

Nowadays, the motif of hostile AI in media, especially in movies, is more common than ever: from action films ('Mission: Impossible 7', 'JUNG_E') to horror ('M3GAN') and comedies ('(Im)perfect Robots'). The idea itself is not new, however: its origins can be traced back to 19th-century literature. Beyond Mary Shelley's 'Frankenstein' (more on that below), one of the first warnings against thinking machines appears in Samuel Butler's 'Erewhon' (1872). The book describes a state ('Erewhon' is an anagram of 'nowhere') whose residents deliberately abandoned technological progress for fear that machines might evolve consciousness.

The book didn’t gain much popularity, but the ideas of ‘Erewhon’ are alive in science fiction to this day. Since then, many works of AI dystopia have been written and filmed, often containing one or several tropes:

Frankenstein Complex (artificial intelligence turning against its creator). This dystopian narrative was used at the dawn of sci-fi cinema (e.g., in the 1935 Soviet film 'Loss of Sensation') and continues to be exploited (e.g., 'Prometheus', 'Ex Machina'). Another example of this trope is the 1956 American film 'Forbidden Planet'. According to Paul Muzio, former Vice President of Network Computing Systems, the film is notable for its predictive power. In the mid-fifties, its creators managed to depict what we would now call a 'planetary-scale Google' — and show the dangers of human interaction with such a superintelligence.

Machine Uprising — another popular trope, at the heart of Karel Čapek's play 'R.U.R.' (1920), the work that introduced the word 'robot' to the sci-fi lexicon.

AI Dictatorship and Life After AI — these two tropes (as well as 'Machine Uprising') form the backdrop for the events of Frank Herbert's 'Dune'.

Life in a Simulation — a relatively new narrative, popularized by the Wachowskis' 'The Matrix' (as well as 'The Thirteenth Floor' and 'eXistenZ', released almost simultaneously).

The modern viewer, well acquainted with all these tropes, tends to associate the term 'artificial intelligence' with something potentially hostile and to transfer this attitude from the screen to real life. However, creators of science fiction are not always so straightforward, and often use the mask of robots to address quite typical 'human' problems.

Isabella Hermann of the Technical University of Berlin, in her academic essay 'AI in Artistic Creativity', points out that the image of AI in science fiction often carries a narrative that is neither scientific nor fantastical but contemporary and social. From this perspective, for example, HAL 9000 from '2001: A Space Odyssey' serves as a metaphor for organizations or public institutions that refuse to acknowledge their mistakes and instead blame the 'human factor'.

Members of the British House of Lords share this view on the portrayal of AI in popular culture. In a 2018 report, they noted:

'The depiction of artificial intelligence in pop culture seems incredibly far removed from the often more complex reality. A layperson relying on media images can be forgiven for imagining AI as a humanoid robot (sometimes with criminal intent), or at least a very smart disembodied voice that assists with a multitude of tasks; this is far from the real capabilities of AI algorithms.'

The theme of 'suspicious and dangerous AI' is promoted so actively in the media that some journalists consider the manifestos and other statements by business leaders and well-known developers about 'AI potentially ending humanity' to be insincere. The Guardian, for example, suggested that the purpose of such statements is to 'increase their own value' and sustain the myth of the potential omnipotence of artificial intelligence, while actual systems are far from as 'smart' as Silicon Valley would like to portray them.

This sensationalism means that indie developers, not large corporations, often bear the brunt of public criticism. One such case occurred a few months ago: Benji Smith, developer of Shaxpir, an indie service for writers, was forced by public pressure to halt work on another project, Prosecraft. Prosecraft used text-analysis algorithms to build a 'linguistic map' of literary works: it counted the total number of words in a book and the share of adjectives, assessed the text by grammatical categories (e.g., the amount of active versus passive voice), and compared it with other works.
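Prosecraft's actual pipeline was never published, so purely as an illustration, the kind of 'linguistic map' described above can be sketched with a few naive standard-library heuristics. The adjective suffix list and the passive-voice regex below are rough assumptions for the sketch, not Smith's method:

```python
import re

# Naive heuristics: a handful of adjective-like suffixes and a
# "to be + past participle" pattern for passive voice.
PASSIVE_RE = re.compile(r"\b(was|were|been|being|is|are)\s+\w+ed\b", re.IGNORECASE)
ADJ_SUFFIXES = ("ous", "ful", "ive", "able", "less")

def linguistic_map(text: str) -> dict:
    """Build a tiny 'linguistic map': word count, rough adjective
    share, and number of passive-voice hits."""
    words = re.findall(r"[A-Za-z']+", text)
    adjectives = [w for w in words if w.lower().endswith(ADJ_SUFFIXES)]
    passives = PASSIVE_RE.findall(text)
    return {
        "words": len(words),
        "adjective_share": round(len(adjectives) / max(len(words), 1), 3),
        "passive_hits": len(passives),
    }

sample = "The house was painted red. A wonderful, careless summer."
print(linguistic_map(sample))
# {'words': 9, 'adjective_share': 0.222, 'passive_hits': 1}
```

A real system would use part-of-speech tagging rather than suffix matching, but even this toy version shows that such a 'map' is plain counting, not an AI 'reading and stealing ideas'.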

The wave of outrage on social media came from writers whose books Prosecraft had analyzed. As Smith suspects, most authors were frightened not by the analysis results or ratings but by the fact that an algorithm had produced them: 'AI read my book and stole my ideas'. Smith notes that he had been developing the service for a long time, spoke about it at writers' conferences, and always met with positive feedback, but that was before the 'neural network boom'. The pop-culture image of the 'all-powerful evil robot' played a cruel joke on the developer, ironically so, given that the service was meant to help writers, not take their jobs.

Living with Algorithms in the Future

It is worth listening to Ken Goldberg, a professor at the University of California and CEO of Ambidextrous Robotics, one of those members of the IT community who does not share fears of an impending singularity. In his view, the main strengths of modern machines (in the broad sense of the word) are pattern recognition in large datasets, computational accuracy, and the ability to remain vigilant; all of these are well suited, for example, to developing video-surveillance systems. Nevertheless, at the current stage, machines are extremely poor at making sound decisions in complex situations and at skillfully handling new materials and objects: 'I am convinced that in this sense nothing will fundamentally change for at least the next 20, 50, or even 100 years.' Those who do not share the general AI panic believe that neural networks are unlikely to replace humans; new systems will simply change how we approach work.

Returning to Ken Goldberg: he suggests replacing the concept of ‘singularity’ with the idea of ‘complementarity.’ According to him, the gradual integration of AI algorithms is unlikely to lead to a sharp reduction in staffing anywhere. Rather, such changes will help reduce routine and decrease the amount of heavy work — allowing people to do what they do best (solving tasks outside of standard, highly templated procedures).

In the second part, we will discuss how to start 'cooperating with AI systems' and use algorithms to reduce routine today, including examples from our colleagues at PGK.

“I have always viewed conflicts with AI from this perspective. Both Skynet and the robot intelligences of Asimov's Three Laws stories make their destructive decisions because of their omnipotence combined with the imperfection of their intelligence. They choose the most mathematically rational solution but fail to consider other aspects: for example, that changes in the environment can only be forecast probabilistically, and that for an individual's survival it is more advantageous to support overall diversity and cultivate mutual aid.

Humans, due to their intellectual limitations, came to this understanding through cultural evolution, whereas the strong AI in all these works lacked this and made incorrect conclusions.

In summary: strong AI was given absolute power, but it committed destructive actions due to its flaws.

For some reason, I have always seen such works through this prism.”