Human futures: Checkmating technological dystopia
The rate of technological progress is increasing rapidly, as captured by Gordon Moore’s “Moore’s Law”, which holds that computational processing power doubles roughly every two years, and its corollary, Ray Kurzweil’s “Law of Accelerating Returns”, whereby humans “discover more effective ways to do things.” Such “laws” drive exponential growth and quantum jumps in technology.
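As a rough numerical illustration of this compounding (the two-year doubling period comes from Moore’s Law as stated above; the specific time spans below are illustrative assumptions), doubling every two years multiplies capacity by a factor of 2 raised to the power of years divided by two:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, assuming capacity doubles
    every `doubling_period` years (Moore's Law uses ~2 years)."""
    return 2 ** (years / doubling_period)

# One decade at a two-year doubling period: 2**5 = 32-fold growth
print(moores_law_factor(10))            # 32.0
# Five decades: 2**25, roughly a 33.5-million-fold increase
print(f"{moores_law_factor(50):,.0f}")  # 33,554,432
```

The point of the arithmetic is that steady doubling, which looks modest year to year, produces the “quantum jumps” the text describes over a human lifetime.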
Artificial Intelligence
In the area of artificial intelligence (AI), the dominant current form is narrow or weak AI, capable of essentially single tasks such as lifting, bartending, imaging, packaging, data analysis, and similar repetitive chores. This is moving vertiginously towards general AI, whereby an individual robot is increasingly able to do a combination of tasks, though still dependent on human-developed algorithms. Thus, by 2030, many robots may be deployed as avatars to do much that is now the preserve of human beings, such as serving as personal assistants and political cyborgs. Super or strong AI is in the offing, such that by 2050 robots may have cognitive power rivalling human intelligence, be able to programme themselves, and act suo motu (of their own accord).
The historical experience with technology is that it has generally saved people from drudgery. In the process, automation has replaced human labour but has also created many more jobs. Faulting it for creating unemployment is not borne out by long-term evidence, although the jobs it makes available are structurally different from those of any prior status quo. The tension arises in the human effort to adjust to the changed work-tech situation. This is manageable through training, and is not dystopic.
Since the overriding human motivational driving force is to do things with less effort, virtually anything that can be automated will be; and machines will mimic the human brain through reverse engineering. This is where dystopia could rear its head. The more sentient the AI, the greater the risk of mischief, malevolence, and potential for evil.
Technological Dystopia
Could AI turn hostile against people, even kill them, of its own volition, or be misused in the wrong hands? Such dystopia has been envisioned by several authors. For example, in Darwin among the Machines, Samuel Butler, writing in 1863, argued that a “time will come when the machines will hold the real supremacy over the world and its inhabitants.” Writing in 1931, Aldous Huxley described in Brave New World a dictator’s pharmacological manipulation of people. George Orwell depicted in his 1949 novel Nineteen Eighty-Four a Big Brother “seeing” everyone all the time, everywhere. No privacy.
For now, robots cannot attack suo motu because they cannot “think” in the human sense, but they could be programmed to do so; in which case, the rest of mankind can take preventive action against such developers. Mental or other manipulation of persons, as well as loss of privacy, can be arrested by constant democratic vigilance. So, “even if some people or some country want to do harm to humans, the rest of humankind would step in to counter this kind of action contra mundum.” Hence the need for international treaties and actions to complement local activism in this regard.
AI code of conduct
Concerned about the possibility of AI going overboard, some people have devised rules to keep AI on the right path. In 1942, Isaac Asimov produced three rules to be embodied in robots – rules of “robotics”, a term he coined – as follows:
“[T]he three fundamental Rules of Robotics – the three rules that are built most deeply into a robot’s positronic brain …. We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm …. Two, … a robot must obey the orders given it by human beings except where such orders would conflict with the First Law …. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
Such behavioural laws would be programmed into robots in the spirit of live and let live.
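As a toy sketch only (the class and field names below are invented for illustration, and genuine machine ethics is far harder than a cascade of checks), the strict priority ordering of Asimov’s three laws can be expressed in code:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Illustrative description of a candidate action a robot might take."""
    harms_human: bool = False
    ordered_by_human: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    """Toy priority check of Asimov's three laws: each law yields
    to the ones above it, exactly as the quoted text specifies."""
    if action.harms_human:           # First Law overrides everything
        return False
    if action.ordered_by_human:      # Second Law: obey, unless First Law is violated
        return True
    return not action.endangers_robot  # Third Law: self-preservation comes last

print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False: First Law wins
print(permitted(Action(endangers_robot=True)))                     # False
```

Note how an order that would harm a human is refused: the ordering of the checks, not the individual rules, carries the ethical content.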
What of human conduct? Could it be that the probability of computers being evil is lower than that of people being or becoming so? Technology can be as good, bad, or biased as its maker or user. What of sabotage, cyber-crime, fake news, terrorism, evil genies, and killer bots? We need a system of regulations governing people’s relationship with machines in their design, production, and utilisation.
Digital Ethics Treaty
A go at this has been aired by Gerd Leonhard in a chapter titled “Redefining the Relationship of Man and Machine” in a 2015 volume edited by Rohit Talwar. Decrying the tendency “to make an algorithm out of everything,” he strongly feels that “Digital ethics are becoming crucial as man and machine converge.” To this effect, he provides a draft:
Digital Ethics Treaty [that] would delineate what is and what is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations ... similar to the guidelines that came out of the 1975 Asilomar Conference on Recombinant DNA – a framework that seems to have guided the development of biotechnology deftly and effectively [– and] the nuclear non-proliferation treaties (NPT) that are already in place, and that have indeed proven to be enforceable (if not entirely without friction).
Leonhard proceeds to propose rules “for inclusion in such a treaty”:
- We should not allow humans to actually become technology (in the sense of fundamental augmentation of the human body or mind).
- We should not allow humans to be effectively governed by intelligent technologies.
- We should not allow the fundamental altering of human nature and the manufacturing of new creatures with the help of technology (such as large-scale genetic manipulation).
- We should not allow robots and intelligent machines to upgrade, fix, or alter themselves.
- We should not allow the open or inadvertent discrimination of humans who choose not to use technology to increase their efficiency or competitiveness.
- We should not require or allow robots to make ethical decisions, i.e. to become sentient or develop some kind of moral agency.
Treaty Modality
Like most treaties involving many nations, the process of establishing one on AI would involve initiation by the United Nations (UN), one of its specialised agencies, a member state, or a non-governmental organisation (NGO) in some working relationship with the UN system. If the UN General Assembly passes the proposed text, or a version thereof, and it is thereafter ratified by the requisite number of states, the agreement comes into effect for enforcement and national domestication.
Treaty initiation requires some critical level of interest among stakeholders, especially when there is perceived risk to citizen security and human wellbeing generally. The experience of Calestous Juma, a past Executive Secretary of the 1992 UN Convention on Biological Diversity, is instructive. In his treatise Innovation and Its Enemies, he “observed great diversity among countries in the way they perceived the risks and the benefits of a new technology.” Negotiation is required in such intricate situations to reach consensus, if not total agreement, on how to oversee developments in these areas.
Of critical importance is the participation of all stakeholders in the area of technology. AI concerns all humanity, which expects to be served, not marginalised. There are multiple possible futures, and futures-studies methodologies are predicated on participatory collective intelligence. This approach comes in handy in shaping the direction AI will take.
Conclusion
The creativity of technology is to be welcomed, but with a cautionary stance. Human beings should be in the driving seat to ensure humane and decent digitalisation. International agreements will need to be entered into to institutionalise benevolent technological progress without stifling the march of technology, itself a reflection of human creativity. This is simply applied risk management.
[First published in ScienceAfrica: Review Journal of Science, Development & Policy. Issue No. 1 July – Oct. 2019, pp. 74-76, and republished here under the original title with permission from the author.
Updated on 28th June 2024]
Author Bio:
Leopold P. Mureithi is Professor of Economics at the University of Nairobi and a consultant on development, technology, employment, and related matters. He is a member of the World Futures Studies Federation (WFSF). He can be contacted at Lpmureithi@hotmail.com