‘Code Dependent’ by Madhumita Murgia: Dark side of algorithmic life
Book Title: Code Dependent: Living in the Shadow of AI
Author: Madhumita Murgia
Dinesh C Sharma
In recent months, chatbots like ChatGPT, Gemini and Bing have made Artificial Intelligence (AI) almost a household name. AI now comes integrated with mobile phone cameras, browsers and other gizmos. It is finding applications in a range of sectors, from health and agriculture to governance and the judiciary. Many startups are working in this field, and governments are committing huge funds to promote AI. Amidst this brouhaha over the great promise of AI, are we forgetting the risks and dangers involved in its unbridled spread? This is the question that ‘Code Dependent’ seeks to raise, while exposing the reader to the dark side of AI through riveting field stories from around the world.
The term ‘AI’ is a misnomer. It gives the impression that AI tools are as good as, or even better than, human intelligence. Its proponents seem to suggest that it can solve everything — from the shortage of health personnel in rural areas to handling consumer complaints. In reality, however, AI is only as good as the training provided by humans. Through training, AI systems learn patterns, correlations and relationships in the data they are exposed to. Therefore, the quality, relevance and diversity of training data directly impact a system’s ability to generalise and make accurate predictions or decisions. For instance, to train the algorithm of the much-hyped driverless car, millions of data points relating to objects that might be seen on a road under different conditions are fed into it.
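The point about training data can be seen in miniature. The sketch below is a deliberately toy illustration (all data and labels are hypothetical, and a real driverless-car system is vastly more complex): a simple nearest-neighbour "model" trained only on bright daytime scenes misreads a low-light scene, while the same model trained on more diverse data does not.

```python
# Toy illustration: a model is only as good as its training data.
# A minimal 1-nearest-neighbour classifier over a single "brightness"
# feature; all examples here are hypothetical.

def nearest_label(point, training):
    """Return the label of the training example closest to `point`."""
    return min(training, key=lambda ex: abs(ex[0] - point))[1]

# Skewed training set: only bright daytime scenes, so the model has
# never "seen" a pedestrian in low light.
daytime_only = [(0.9, "pedestrian"), (0.8, "pedestrian"), (0.7, "empty road")]

# A dark scene containing a pedestrian (brightness 0.2) is matched to
# the nearest daytime example and misread as an empty road.
print(nearest_label(0.2, daytime_only))   # -> empty road

# Adding diverse low-light examples corrects the prediction.
diverse = daytime_only + [(0.2, "pedestrian"), (0.1, "pedestrian")]
print(nearest_label(0.2, diverse))        # -> pedestrian
```

The same mechanism, scaled up to millions of data points and features, is why gaps or skews in training data translate directly into wrong or biased predictions.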
Data training is laborious and time-consuming, so technology giants are outsourcing it to developing countries with cheap labour. Just like coding and call-centre work before it, data training (image labelling, object tagging, facial recognition, etc) is being shipped abroad. The so-called intelligent systems touted by tech companies are the result of the hard work of an army of workers in faraway lands, as the author details in her book. Some of this data-training work, like filtering extreme content on social media platforms, often proves traumatic for data workers. Algorithms like Generative Adversarial Networks (GANs), which power deepfakes, have been trained on millions of images gleaned from social media platforms, telecasts of sports events, rallies, etc. Deepfake technology has spawned an industry of deepfake pornography, and apps like Deep Nude are being weaponised to harass women.
Facial recognition software is another disturbing AI application that has found ready users among law enforcement agencies. Tech companies like Meta and TikTok have databases of billions of faces from around the world, and these are harnessed to ‘train’ facial recognition tools. Authorities have used such tools to identify people involved in the Black Lives Matter protests in the US and the farmers’ protests in India. The police in Hyderabad have been using an app called TS-Cops to log images for facial recognition. This technology is enabling what the author calls AI-enabled surveillance. “We have no agency as individual citizens when cameras scan our faces, and our images are used to train AI surveillance software,” she says. Predictive policing algorithms being tested in some European countries can be racially biased, unconsciously or by design, as Murgia found in the case of a migrant family in Amsterdam.
On one plane, technology companies are using the poor to make AI tools possible; on another, the same tools are being used to exploit the underprivileged in other parts of the world. Murgia demonstrates this with the example of UberEats, showing how AI is being used surreptitiously by tech companies to cheat gig workers. These AI systems are also designed to keep drivers apart, incentivise them to compete aggressively and gamify their lives by nudging them with reward points, badges, etc.
People must understand how we are shaping AI and how AI is shaping us. Given that AI is an unregulated technology, and given the obvious difficulties involved in regulating it, the author says the effort should be to make AI systems fairer, more inclusive and transparent to users. For the use of AI in important areas such as governance, policing and employment, we need to fix accountability for the decisions or outcomes of an AI tool. These are among the many moral, ethical and legal questions that users of AI tools must ask. Coming from a technology reporter, the book provides deep insights into the different ramifications of AI. Field reportage and case studies make it a gripping account. In a world full of ‘the promise and potential of AI’, Murgia’s reality check comes as a breath of fresh air and a warning signal.