
Artificial intelligence is evolving, can be sentient in future, say PEC experts


Naina Mishra

Chandigarh, June 15

The claim by a Google engineer that LaMDA, a language model created by Google's artificial intelligence (AI) division, had become sentient and begun reasoning like a human being has sparked considerable debate and discussion.


The Tribune spoke to experts to find out whether AI can be sentient, that is, capable of feeling. “We can’t rule out the possibility of an AI bot becoming sentient, since AI-based machines keep learning and improving from past experience. Considering this, there is a possibility that with time they can imitate human behavior too,” said Prof Manavjeet, Assistant Professor, Cyber Security Research Centre, Punjab Engineering College (PEC).

Prof Manish Kumar from the CSE Department of PEC said, “It is in the nature of AI to evolve as per the input we share with it. The dynamics will change in the coming years, and it can be used effectively in the medical field and against the pandemics hitting the world frequently.”


“There have been instances where two systems started talking in a language that was not known to the developers and the project had to be discontinued. AI has been known to evolve and match human intelligence. It is difficult to say that it can have a consciousness of its own yet, but it may well be possible in the future,” Kumar added.

“AI technology is created by humans. There is considerable evidence that developers, willingly or unwillingly, propagate human cognitive bias through these algorithms. There are no existing mechanisms to even know whether the technology has been manipulated at the backend, resulting in misclassifications and misidentifications, and thus automating discrimination through technology-enabled systems,” said Prof Divya Bansal, Professor, CSE Department and Cyber Security Research Centre, PEC.

“Research has shown that AI-based facial recognition algorithms are being used by industries to discriminate among humans, impacting decisions with respect to lifestyle, finances, and the liberties they should have,” Prof Bansal added.

“Intelligence without human ethics is not intelligence at all. Ethics must be placed on the highest pedestal for the peaceful and healthy existence of the human race. However, there is a dark side to big data and AI, as they are used to exploit people, distort truth and thus threaten democracy,” said Prof Bansal.

“AI is intelligent, but not sentient. The way humans can feel a needle prick, machines cannot; the way humans can enjoy an aroma, AI can’t,” said Prof Sanjeev Sofat, Professor, CSE Department and head, Data Science Centre, PEC.

According to the Google engineer, the AI chatbot he was working on had become sentient and was showing the ability to think and reason like humans.

What is LaMDA

LaMDA, or Language Model for Dialogue Applications, is a machine-learning language model created by Google as a chatbot designed to mimic humans in conversation.
