Set ethical, constitutional boundaries for AI
Artificial Intelligence (AI), as the transformative technological revolution of our age, attests to humankind’s unrelenting pursuit to understand, control and remake the world in the image of its changing aspirations, an endeavour not without its challenges.
The apprehended evisceration of core human values and the dehumanisation of society itself have invited a raging global debate about the ethical challenges posed by AI and the possibility of an effective international regulatory regime to discipline its deployment. Several regional and global initiatives have, therefore, focused on limiting the use of AI within the larger moral framework of human rights, so as to temper technological exuberance with compelling moral restraints.
The Montreal Declaration on the ethics of intelligent systems; the proceedings of the ACM FAccT conferences; UNESCO’s global standard on AI ethics, the ‘Recommendation on the Ethics of Artificial Intelligence’, adopted by all 193 member states in November 2021; the Biden administration’s non-binding Blueprint for an AI Bill of Rights; European Union legislation on AI; congressional hearings in the US; and the deliberations of national regulatory bodies confirm the seriousness of ethical challenges within the AI ecosystem. Initiatives such as the Global Observatory on the Ethics of AI, the Global Forum on the Ethics of AI, AI Ethics Experts without Borders and Women for Ethical AI have contributed towards a sharper understanding of the ethical issues involved.
History proclaims that at every step on the stairway to progress, humankind has made choices that define the advance of civilisation as the deepening of humanity and the happiness of the people. AI’s positive contributions to global health and educational systems, and to combating climate change, terrorism, cybercrime, pandemics and natural disasters, are transformative in the progressive evolution of civilisation. Even so, there is an overwhelming consensus among stakeholders on the need to tame the excesses of AI. It is rightly argued that the celebration of our technological triumphs cannot be a ‘mourning amidst the ruins’ of humanity. Since all knowledge must be measured in terms of the values it advances, the use of AI demands evaluation within the framework of the prevailing ethical standards of humanity.
AI’s known encroachment on our inalienable rights to freedom, the privacy of emotions and intimate relationships, individual autonomy, and the sacrosanctity of mental processes and individual consciousness, now recognised as fundamental constitutional rights in democracies across the world, is viewed as ‘part of the crisis of humanity’. Grave concerns about AI-enabled manipulation of behavioural and electoral processes, the hacking of language to disrupt genuine democratic discourse, the challenge of deepfakes, intended and unintended biases for and against classes of people through the exploitation of vulnerabilities such as age, gender, disability and ethnicity, and the compounding of prevailing inequalities (such as the digital divide) are real. Serious apprehensions have been legitimately expressed about AI’s ability ‘to effectively reshape history’. Concerns abound about the ‘overgrazing of the Internet commons by rapacious technology companies’ to maximise profit at the cost of data privacy and transparency.
That the AI leviathan needs to be regulated across geographical boundaries by an internationally enforceable legal code is evident from the recent comments of leading AI entrepreneurs. Elon Musk is reported to have warned that “when we build AI without a kill switch, we are summoning the devil”. Mark Zuckerberg has cautioned that “…the world will become more digital than physical. And that is not necessarily the best thing for human society.” Apple co-founder Steve Wozniak has, in an open letter, called for accelerating the development of a ‘robust AI governance system’.
Evidently, such a regime cannot be limited to self-regulation; it must include effective mechanisms to fix accountability for transgressions of the mandated red lines. It is, indeed, necessary to navigate the shadows of science if we are to bask in the glory of its illumination.
Whether or not these concerns are exaggerated, the predominant opinion is that, in crafting our perspectives on the future of AI, we must accept that the elevating advance of civilisation lies in humankind’s unswerving commitment to the moral autonomy and rationality of the individual, which recognises the primacy of the ideals of human dignity, inclusion, equality, freedom and justice. In the ongoing global debate on the ethical and moral framework within which AI must operate, doubts have been expressed over whether AI machines can serve as moral agents in “translating human moral complexity into an algorithmic form” or are capable of making ethical judgements as ‘full ethical agents’ with a human understanding of ethics. It is imperative to erect unbreachable boundaries around AI to ensure that man remains the master of his universe and that the sacred private spaces of the head and the heart remain sacrosanct in an age ‘which levels everything and reveres nothing’.
Hopefully, the recently concluded Global IndiaAI Summit in New Delhi will generate ideas for the responsible development and deployment of ethical and inclusive AI. It is for us to reaffirm that civilisation at its peak is about the elevation of man in the fullness of human glory, not a tragic, even if unintended, denouement of humanist morality. Our ability to secure a convergence between the prescriptions of scientists and philosophers for a humane world order will be an enduring contribution to this mission. And how we deal with this epochal challenge will define the quality of leadership and the hierarchy of moral values in which human dignity must remain at the pinnacle as the ultimate civilisational aspiration.
Views are personal