Artificial intelligence has prospects and perils galore

IT is impossible to escape the ubiquitous impact of artificial intelligence (AI) as the ‘4th Digital Revolution’ sweeps across the planet. AI applications such as Apple’s Siri, Amazon’s Alexa, virtual nurses, voice and facial recognition, language translation and grammar auto-correction tools, and driverless automobiles are all around us. Deep learning algorithms have made it possible for machines to gauge even the mood and sentiments of their users. Developments in the field of AI have surpassed human imagination.

John McCarthy, one of the ‘founding fathers’ of AI, defined it as ‘the science and engineering of making intelligent machines, especially intelligent computer programmes’ that can think and act like humans, understand things rationally and ultimately start behaving like human beings. Strong artificial intelligence, though an entirely theoretical concept so far, aims at machines that would surpass human intelligence and ability.

AI systems have made many tasks easy and human life better than ever before. But this is only the tip of the iceberg, with most of it hidden beneath the surface, and civilisation looks like the Titanic on a collision course with it. Public response to AI reflects two extremes: some optimistically imbue it with an aura of seduction and magic, whereas others view it as the robotic villain of science fiction that takes over the world. Both viewpoints overlook the practical realities.

AI is data-driven and algorithm-monitored; it relies on the speed of computers in reading and processing data, recognising patterns and storing them in memory for comparison within a fraction of a second. At the same time, it can inherit biases from the training data, leading to unfair or discriminatory outcomes. Since algorithms are written by humans, they are susceptible to being corrupted by the personal prejudices, cultural biases and knowledge constraints of their designers.

AI systems often operate as ‘black boxes’; it is difficult to understand their decision-making processes, a serious shortcoming, especially in financial matters. They are also vulnerable to cyberattacks, and adversaries can manipulate them from remote locations, raising serious privacy and security concerns. International and domestic rules and regulations to address such issues have not yet been developed.

AI models do not understand gender and ethnic contexts, which limits their perspectives and creates biases in applications. Moreover, AI systems function in isolation from the surrounding environment, which often plays a crucial role in decision-making.

With the increasing use of AI systems, disruption of the precarious balance in the job market is inevitable; old jobs would be replaced by new ones concentrated in communities with high levels of education and availability of capital. AI has the potential to worsen the divide between the haves and the have-nots; it may trigger an unprecedented level of unemployment among the lower strata of society, resulting in social instability.

Privacy is no longer private in the world of AI. The electronic devices we use have surreptitiously stolen pieces of our individual selves; we also end up sharing personal information voluntarily on social media as well as while acquiring utilities and services in day-to-day life. This flow of data from consumers to machines promotes the transfer of human agency from individuals to machines. It is akin to a ‘bloodless coup’, resulting in a gap between ‘what we know and what is known to the world about us’.

AI has managed to hack human psychology, and it has the potential to incite riots, civil wars, communal tensions and political disruptions. Based on an individual’s data, AI systems can cater to his or her intellectual and psychological needs, gradually making people emotionally dependent on them.

As machine intelligence gets smarter, humans are moving towards a world of artificially induced sensitivities and gratification, which could lead to the moronisation of the masses. Harvard scholar Shoshana Zuboff has termed this phenomenon the ‘instrumentarian society’, in which individuals are herded like machines, operating collectively under the control of algorithms.

AI is also fast emerging as a strategic weapon for acquiring geopolitical dominance. The US and China have invested heavily in AI, and they control the majority of AI-related intellectual property, investments, market share and key resources. A scenario is emerging in which disruptive AI applications could weaken many sovereign states and destabilise fragile political equilibria. Some private AI companies may even become more powerful than many countries, just like the erstwhile British East India Company, ushering in an era of ‘digital colonisation’.

AI-based models implicitly incorporate the value systems, aesthetic sense, emotional quotient and ideologies of their developers, who so far are mainly American or Chinese, each developing and propagating specific ideologies hidden beneath the veneer of algorithms. Human civilisation’s core narratives are getting embedded in AI systems, explicitly or implicitly; India must develop its own AI systems to prevent such a covert ‘cultural invasion’.

The prospects of AI are enormous, but its perils are equally frightening. In May, hundreds of top AI scientists, researchers and others, including OpenAI Chief Executive Sam Altman and Google DeepMind Chief Executive Demis Hassabis, signed an open letter warning the world in unmistakable terms of the ‘extinction risk’ that rapidly advancing AI technology carries. Nick Bostrom, a Swedish philosopher, has expressed similar concerns: “Machine intelligence is the last invention that humanity will ever need to make.” Such apprehensions do not deserve to be dismissed summarily. The perils of AI need the attention of the international community before it is too late.

Notably, the G20 AI Principles (2019) call upon the international community to unlock the full potential of AI, share its benefits equitably and mitigate its risks.
