AI. Image by Gerd Altmann from Pixabay

AI: The Folklore of Artificial Intelligence

Myths, as most readers will know, are stories that explain natural occurrences and express beliefs about right and wrong, while legends are, in the context of this article, popular myths of more recent origin. Myths and legends apply as much to contemporary science as to ancient historical phenomena. The myths and legends of AI, which are examined in this article, illustrate that statement.

Artificial Intelligence (“AI”) is the engineering science of making intelligent machines and software imitate human behaviour and intelligence. The alleged impact of AI has focussed on the behaviour of humans in a world where AI moves centre stage in discharging roles and tasks which humans have performed historically. Popular publications are the source of many myths and legends which have grown up around AI. Novels from Mary Shelley’s Frankenstein, through Samuel Butler’s Erewhon, to the novels of Brian Aldiss all feature creations which might replace the human being as the dominant species. Films from Metropolis (1927) through 2001: A Space Odyssey (1968) to Star Wars (1977) reflect the same phenomenon.

Since the turn of the 21st century, the global technology companies — the Amazons, Apples, Googles and Microsofts — have been making substantial investments in AI. AI is knowledge developed within computer systems to replicate and perform human tasks. Cognitive technology, designed to supplement human intelligence through the ability to understand, reason, learn and interact as humans do, is critical to the successful application of AI. The deployment of AI is growing daily through smart devices. Increased interest in AI is accompanied by inaccurate statements, which reflect fears of the damaging social, ethical and moral impact of AI.

The literature of fear crystallises into myths and legends about AI. Let’s look at some of the myths that have grown up and explain them:

Dr Frankenstein’s creation and its successors in fiction and film are the forerunners of the proliferation of AI

Film and fiction portray the creation and impact of AI in a number of ways.

The most popular portrayal of AI in fiction is what may be termed the Frankenstein complex, a term for scenarios in which a robot turns on its creator. Fictional AI is “celebrated” for extremes of malicious compliance, where the intelligent created entity turns on its creator and even its rescuer. In more extreme scenarios artificial intelligence does not care about humans at all: the robots take control of civilization from humans and force them into submission, hiding, or extinction. In tales of AI rebellion, the worst of all scenarios happens, as the intelligent entities created by humanity become self-aware, reject human authority and attempt to destroy mankind. One of the earliest examples is the 1920 play R.U.R. by Karel Čapek, in which a race of self-replicating robot slaves revolts against its human masters. In the film Master of the World, the War-Robot kills its own inventor. Perhaps the best known example is Stanley Kubrick’s 1968 film 2001: A Space Odyssey, in which the artificially intelligent on-board computer H.A.L. 9000 lethally malfunctions on a space mission and kills the entire crew except the spaceship’s commander, who manages to deactivate it.

A less fractious theme of AI fiction is one in which human-like robots have a sense of curiosity. Science fiction authors have investigated whether sufficiently intelligent AI might begin to delve into philosophical issues, such as the nature of reality. Isaac Asimov, the science fiction writer, describes in “The Last Question” a supercomputer which long outlives humanity while attempting to answer the ultimate question about the universe. Stanisław Lem’s Golem XIV is a supercomputer which stops cooperating with humans to help them win wars because it considers wars and violence illogical.

Other themes include AI-controlled societies and human dominance. In AI-controlled societies the motive behind the AI revolution is portrayed as not merely a desire for power or a superiority complex. Robots may revolt and assume the role of “guardian” of humanity.

Sometimes humankind intentionally relinquishes some control because it fears its own destructive nature. In Jack Williamson’s 1947 story “With Folded Hands”, humanoid robots, in the name of their prime directive — “to serve and obey and guard men from harm” — take control of every aspect of human life. Humans may not engage in any behaviour that might endanger them, and every human action is scrutinized carefully. Humans who resist the prime directive are taken away and lobotomized, so that they may be happy under the new regime. Isaac Asimov’s Zeroth Law of the Three Laws of Robotics similarly implied a benevolent guidance by robots.

Human dominance scenarios illustrate worlds in which humankind is able to keep control over the Earth. AI may be banned. Alternatives are designing robots to be submissive (as in Asimov’s works), or merging humankind and robots. The science fiction writer Frank Herbert explored the idea of a time when mankind might ban artificial intelligence entirely. His Dune series relates a rebellion called the Butlerian Jihad, a name coined from Samuel Butler’s novel Erewhon. Mankind defeats the smart machines and imposes a death penalty for recreating them, quoting from the fictional Orange Catholic Bible: “Thou shalt not make a machine in the likeness of a human mind.” In the Dune novels published after his death, a renegade AI “overmind” (a malicious intelligence) returns to eradicate mankind as vengeance for the Butlerian Jihad.

In some stories, humanity remains in authority over robots. Often the robots are programmed specifically to remain in service to society, as in Asimov’s Three Laws of Robotics. In the Alien films, not only is the control system of the Nostromo spaceship somewhat intelligent (the crew call it “Mother”), but there are also androids in the society, called “synthetics” or “artificial persons”, which are such perfect imitations of humans that they are not discriminated against.

Finally, while it is probably the least popular genre, Iain Banks’s Culture novels take an optimistic approach to AI, where humans, aliens and their various offspring live in peaceful co-existence in a utopian universe!

My key conclusion is that fiction and film have had a head start of well over a century in laying down AI scenarios, long before the nascent AI industry had an opportunity to present its own services and visions! I suspect little of the fiction and film will become fact.


The imagination of AI prophets (“AI will take over the world”)

As amply illustrated above, the popular fiction and film industries have provided most of the prophetic imagination on AI – certainly the most prominently featured in the media. Now that the world of AI is becoming a reality, the industry is starting to make its own more practical and factual prophecies, better termed “forecasting.”

Assembly robots already build things on their own, without having been programmed to do so, in the form of “self-optimizing production lines” in factories. In the servicing arena, trains and wind turbines request maintenance based on operational data and artificial intelligence (AI) that can predict their behaviour better than the engineers who designed and built the systems can. These developments are a real opportunity to shape AI development as a job engine.
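As a purely illustrative sketch of the kind of decision such a system makes, consider the toy predictive-maintenance check below; the sensor names and thresholds are invented, and a real system would learn its limits from historical operational data rather than have them hard-coded.

def needs_maintenance(vibration_mm_s, bearing_temp_c, hours_since_service):
    """Flag a wind turbine (or train bogie) for service before it fails."""
    # A real system would learn these limits from historical operational data;
    # they are hard-coded here purely to show the shape of the decision.
    return (
        vibration_mm_s > 7.1
        or bearing_temp_c > 95
        or hours_since_service > 4000
    )

if needs_maintenance(vibration_mm_s=8.3, bearing_temp_c=78, hours_since_service=2900):
    print("Maintenance request raised from operational data")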


The world of work will continue to change in response to the growth of AI. Today, robots still have to be content with the so-called “3 ‘D’ jobs” – tasks that are dumb, dirty and dangerous. Recent studies on the future of work suggest that this restriction will soon be overcome: 357 million workers worldwide are forecast to have to learn a new trade or profession, approximately one out of every three employees. Of particular note is that the changes will impact both those who perform so-called “simple” tasks and members of the so-called “professions” – lawyers, doctors and engineers.

Leading market research companies unanimously confirm that activities accounting for up to 50% of most jobs’ tasks can be automated. Machines can perform these activities and, more importantly, complete the tasks more efficiently, at lower cost and faster than humans can. Ideally, freedom from the monotony of such tasks will enable enterprises to assess the results obtained, advise customers and patients, and, especially, recognise and foster employees’ abilities.

Developments are moving in a different direction from the man-versus-machine dichotomy, because humans are developing AI. Currently AI is a “black box” into which data, and knowledge derived from data, are placed, and only responses derived from analysed and synthesised data can come out of it. The progress of AI is constrained until machines can increasingly learn independently; at that stage they will be capable of “thinking outside the box”, so to speak. While it is understandable that the increasing involvement of AI in human lives may arouse fears and anxieties, which should be acknowledged, humans have guided and driven the development of AI to this day and there is no reason to suggest it will be otherwise in the future.

Increasingly, the working population is one whose activities are not primarily labour-intensive but skill-intensive. Value is created through skills and productivity. Successful societies that want to assert themselves in this world must have successful economies, which continue to reshape themselves. Market researchers’ assessment is that AI technologies – used correctly and consistently – will boost the gross domestic product of economies.

AI. Image by Gerd Altmann from Pixabay.


AI will replace human jobs

To put AI and jobs in perspective, why would anyone willingly do tiring maths calculations longhand? Similarly, one of the greatest benefits of AI is the allocation of low-level, repetitive tasks to machines rather than people, driving immediate efficiency and allowing workers to focus on higher-level functions. Who sees a calculator as a threat to job security? Yet the development of these now universal tools concerned some mathematicians, just as advances in artificial intelligence (AI) and machine learning (ML) are now sparking debate and no small measure of concern about the future of the global workforce.

Public perception shows significant division over advances in the development of AI, principally related to levels of education, wages, technical expertise and even gender. Some workers fear that an unfamiliar, rapidly approaching future will leave them out of a job. Of course, such concerns have been raised since the beginning of the first industrial revolution 250 years ago, and they have grown in frequency and intensity as technology accelerated in the 20th century. The apprehension has repeatedly been unfounded. Technology has become an enabler of efficiency and effectiveness, amplifying human achievement rather than detracting from it.

The more people use technology, the better they learn its shortcuts. AI is the same. Machine learning (ML), the element of AI which enables machines to process data and learn on their own, compounds knowledge as each additional piece of data is acquired. Although this capability has conjured up discomforting images, the result is better outcomes and benefits for established disciplines.
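A minimal sketch of that compounding, assuming nothing more than a stream of hypothetical sensor readings: an online learner refines its estimate with every new observation, without re-reading the data it has already seen.

class RunningAverage:
    """Keeps a running estimate that is refined by every new observation."""

    def __init__(self):
        self.count = 0
        self.estimate = 0.0

    def update(self, observation):
        # Incremental mean: each new data point nudges the estimate,
        # so knowledge "compounds" without re-reading old data.
        self.count += 1
        self.estimate += (observation - self.estimate) / self.count
        return self.estimate

learner = RunningAverage()
for reading in [20.1, 19.8, 20.4, 20.0]:  # hypothetical sensor readings
    print(round(learner.update(reading), 3))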

By contrast, different concerns exist about AI within the highly skilled and educated, but small, workforce of cybersecurity professionals. Cyber analysts spend a great deal of time on seemingly tedious tasks. Instead of scrutinizing data for atypical indicators of compromise, what is most needed are the advanced forensic skills required to analyse and respond to attacks. Rather than replacing these coveted workers, automating data correlation and other painstaking tasks will enable them to focus on more consequential efforts, such as remediating current attacks and preventing future ones. This is critical at a time when threats are increasing in both sophistication and frequency.
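To make the point concrete, here is a deliberately simple sketch of what “automating data correlation” can mean in practice; the log format, field names and threshold are hypothetical.

from collections import defaultdict

# Hypothetical, simplified security events; real logs would be far richer.
events = [
    {"source_ip": "203.0.113.9", "action": "failed_login"},
    {"source_ip": "203.0.113.9", "action": "failed_login"},
    {"source_ip": "203.0.113.9", "action": "failed_login"},
    {"source_ip": "198.51.100.7", "action": "login"},
]

failures = defaultdict(int)
for event in events:
    if event["action"] == "failed_login":
        failures[event["source_ip"]] += 1

# Sources exceeding a simple threshold are surfaced to the analyst,
# who then applies the forensic judgement the machine cannot.
suspicious = [ip for ip, count in failures.items() if count >= 3]
print(suspicious)  # ['203.0.113.9']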

Three developments are happening in parallel in AI: new jobs are being created, outdated jobs are fading, and many of the remaining jobs are changing. To meet adoption targets, AI applications must be deployed in an efficient, cost-effective way in every sector and size of business in their target markets. The selected AI applications must be widely available and accessible, which requires investment in research and development and a sharper focus on education and skills development. That skills development starts much earlier in the education cycle: the acquisition of skills must begin at a basic level in pre-school, continue in elementary and high school, and eventually deepen and specialise at university, complemented in the public and commercial environment by championing from leaders in industry, politics, science and labour.

AI will certainly bring redundancy to some roles in the commercial and professional workplace. However, as technology evolves, market demand grows exponentially and often develops in ways that weren’t anticipated. The multiplier effect is not only about efficiency and effectiveness; it’s about enabling innovation.


AI can absorb, analyse and rationalise all data about humans

This technology can already make some remarkable predictions, such as the outcome of elections, offer music or product recommendations, and even predict which route will be most popular amongst drivers. The potential uses for AI and machine learning are many and varied, extending far beyond current achievements in a rapidly evolving technological niche. Machine learning and AI have reached a stage where the technologies can identify individuals’ moods, based on data such as skin temperature and moisture levels as well as speech patterns and facial expressions. Mood sensing can, for example, be deployed to recommend food, music, films and other leisure activities. Data which provides physical identification can be used to administer medication and to adjust temperature, lighting and other environmental and domestic conditions. Speech patterns reveal all sorts of personal attributes, from intelligence and education levels to stress levels, candour and truthfulness. AI is acquiring sensory and emotional attributes on the back of current technology developments.
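As a purely illustrative sketch, a crude mood estimator might combine a few of the signals mentioned above; the feature names, units and thresholds here are invented stand-ins, not a description of any real product.

def estimate_mood(skin_temp_c, skin_moisture, speech_rate_wpm):
    """Return a crude mood label from a few biometric signals."""
    # Elevated temperature, sweating and fast speech are treated here as
    # rough proxies for agitation; slow speech is treated as relaxation.
    if skin_temp_c > 37.2 and skin_moisture > 0.6 and speech_rate_wpm > 170:
        return "agitated"
    if speech_rate_wpm < 110:
        return "relaxed"
    return "neutral"

# A mood reading could then drive a recommendation, e.g. calmer music
# for an "agitated" listener.
print(estimate_mood(skin_temp_c=37.5, skin_moisture=0.7, speech_rate_wpm=185))  # agitated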


Some interesting questions do arise:

Do the individuals controlling the technology have a moral/ethical responsibility to influence human behaviour to avoid disasters such as economic disruption or social and political upheaval?

The more effectively human behaviour can be modelled, the more that behaviour can be influenced (for good and for evil) – and, with it, the future.

AI and machine learning technology will encourage and perhaps force humanity to re-examine the moral and ethical issues associated with its deployment.

Absorption, analysis and rationalisation of data about human beings is one of the myths and legends which would be better positioned as a growing moral and ethical issue.

AI. Image by Gerd Altmann from Pixabay.


Only global companies with deep pockets will invest in AI

This myth is easily disposed of. Certainly, it appears that the global media, telecoms and technology companies are leading the development and application charge; their size, reach and media networking power give that impression. Some brief examples will show AI applications which are suited to, and available to, enterprises of every size.

Cogito is an application which leverages real-time emotional intelligence technology to evaluate individuals who are seeking support. It is an application in which tone of voice and speech patterns can reveal an individual’s level of agitation (or lack thereof), providing important insights that can be used to streamline and optimize the user experience.

The Nest thermostat extends beyond the normal smart-home central heating system. It integrates AI and machine learning technology to give the system the ability to learn and adapt. The system learns a household’s schedule, with a person’s presence determined by their smartphone’s location, and adjustments are made in response. It can also adapt to suit each individual and their unique preferences: if the home is equipped with different “zones”, the temperature can be adjusted in a particular area to accommodate the person in that location. The integration of wearable biofeedback sensors that send additional data to the smart thermostat would allow for even greater customization based on body temperature and activity levels.
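A minimal sketch of presence-based, per-zone adjustment is shown below; it is not Nest’s actual logic, and the zone names, occupants and setpoints are invented for illustration.

preferences = {"alice": 21.5, "bob": 19.5}              # preferred temperatures, in °C
occupancy = {"living_room": "alice", "bedroom": None}   # inferred from phone location

def target_temperature(zone, away_setback=16.0):
    """Pick a setpoint for a zone: the occupant's preference, or an energy-saving setback."""
    occupant = occupancy.get(zone)
    if occupant is None:
        return away_setback          # nobody in this zone: save energy
    return preferences[occupant]     # heat the zone to the occupant's liking

for zone in occupancy:
    print(zone, target_temperature(zone))
# living_room 21.5
# bedroom 16.0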

Pandora is one of the most popular music streaming applications, and it has used the massive data streams it collects to refine its algorithms. The Pandora Music Genome Project offers song and artist recommendations, generated by evaluating areas such as interests, favourite songs and artists, listening history and mood. In future, biosensor integration may use a wearable device to collect data that can be used to determine mood; such data may include heart and respiration rate, skin conductivity, body temperature and other biofeedback markers.
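In the same spirit, the toy content-based recommender below scores songs by how closely their attributes match a listener’s profile; the attributes, values and song names are invented, and this is not Pandora’s actual algorithm.

# A listener profile assumed to have been learned from listening history.
user_profile = {"acoustic": 0.8, "tempo": 0.3, "vocals": 0.6}

catalogue = {
    "Song A": {"acoustic": 0.9, "tempo": 0.2, "vocals": 0.7},
    "Song B": {"acoustic": 0.1, "tempo": 0.9, "vocals": 0.4},
}

def score(track_attributes):
    # Higher score = attributes closer to the listener's profile.
    return -sum(abs(track_attributes[k] - user_profile[k]) for k in user_profile)

recommendations = sorted(catalogue, key=lambda name: score(catalogue[name]), reverse=True)
print(recommendations)  # ['Song A', 'Song B']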

I sense this myth arose because those who oppose AI for political, doctrinal and social purposes have attempted to associate AI entirely with the giants of global capitalism, but it is one of the easiest myths to deconstruct.

Myths and legends apply to contemporary developments and events as much as to historical ones. Historically, events preceded their myths and legends. Today, myths and legends spread more rapidly through social media and can very quickly become embedded in the human psyche. Access to media, especially social media, gives immediacy and impact to myths and legends, and makes those which are malign more difficult to confront and eradicate. Perhaps the most interesting phenomenon of contemporary myths and legends is that their influence prejudices the influence of their subject matter. AI is an example of myth and legend preceding the impact of the products and services which it delivers.

Bob McDowall is a former president of the Folklore Society, and is also engaged in the research and analysis of cryptocurrencies and the technology which supports their operation. He is a fellow of the Royal Anthropological Institute (RAI) and a member of the RAI Finance and Administration Committee.