AI apocalypse on the horizon? Experts discuss as 'Godfather of AI' Geoffrey Hinton quits Google
Dr Geoffrey Hinton, widely considered the godfather of AI, has left Google, leaving behind a remarkable legacy. In 2012, he and two of his students at the University of Toronto built a groundbreaking neural net that paved the way for the modern AI revolution.
Over the past decade, Hinton has been instrumental in pushing the boundaries of AI, dividing his time between Google and the University of Toronto. This week, however, he bid farewell to the tech giant, citing worries over AI's potential to disrupt jobs, flood the market with misinformation, and pose an "existential threat" by creating true digital intelligence.
With his departure from Google, Hinton is now free to speak about the dangers of AI without constraint. He has also expressed regret for his role in the field, having pioneered the approach behind current advanced systems such as ChatGPT. With the Godfather of AI voicing concerns about the technology's possible dangers, People Matters reached out to industry experts to gather their perspectives on this groundbreaking technology and its potential risks.
While some are hopeful about AI's potential benefits, others share Hinton's worries about the lack of ethical considerations in AI development. Concerns around data privacy, deepfakes, unemployment, and the need for strong regulatory frameworks were also raised.
Responsible and ethical use of AI crucial
Artificial Intelligence has the potential to transform society and drive breakthroughs in numerous fields, from healthcare to transportation, finance, and more. However, the development of AI also poses significant risks if not used responsibly and ethically.
"The decision of Geoffrey Hinton, a pioneer in the field of AI, to speak out about the potential risks associated with AI is a significant moment in the industry. Hinton's concerns about generative AI and its potential misuse should be taken seriously,” said Vedansh Priyadarshi, Head of Engineering, NoCap Meta.
“While AI has the potential to drive breakthroughs in various fields, it is crucial to ensure that it is used responsibly and ethically for the benefit of society. The calls for a moratorium on developing new AI systems should be heeded, and the industry should take the necessary steps to address the risks associated with AI,” he added.
AI's potential danger lies in its application
AI's potential danger hinges on its application. It can serve as an excellent tool to process information and augment human abilities, but it must be used responsibly and ethically. Geoffrey Hinton's departure from Google is noteworthy in this light: raising concerns about AI while employed there would have posed ethical dilemmas, yet leaving may also limit his ability to help establish best practices and worldwide standards for AI use, and to share information about AI's potential hazards from a credible platform.
“How dangerous AI is depends on its utilisation. It can be perceived as a technology that facilitates regular activities and advanced information processing. From that perspective, it can be an efficient tool for performance. AI can be utilised to complement and enhance human abilities for performing tactical and strategic functions. This can free up time and provide knowledge to HR personnel to improve the functions, take key decisions and increase human touch in employee interactions,” stated Professor Smita Chaudhry, Department of Human Resources at FLAME University.
She further commented, “Raising concerns about AI while working at Google would have given rise to some ethical dilemmas for Mr Geoffrey Hinton. However, leaving Google closes the door on the opportunity to set best practices and globally recognised standards for AI use, and to regularly disseminate information about its dangers from a credible platform.”
AI regulation key to harnessing its positive potential
As AI becomes increasingly integrated into various aspects of our lives, it is essential to regulate its use to ensure it is harnessed for positive outcomes. Without proper regulation, there is a risk of AI being misused, leading to negative consequences such as job losses, privacy breaches, and even accidents resulting from biased decision-making.
“I do share Geoffrey Hinton’s concerns about the missing focus on ethics in the growth of AI. However, there is a consensus among most experts that the only part of AI witnessing the kind of growth that's triggering these discussions is ‘Supervised Learning’, more specifically Generative AI (the G in GPT). There are way too many domains and problems that fall outside the scope of supervised learning,” said Srinivas Vedantam, Director, OdinSchool.
“While there has been some advancement in the Unsupervised Learning space (take Cicero, for example), such systems are few and far between, and don’t attract the same attention as ChatGPT. The recent protests from the WGA are only adding fuel to the argument that AI is evil. However, we are nowhere near being threatened by AI. In fact, if regulated properly, AI can power humanity’s efforts towards fulfillment, sustainability, and seeking a larger purpose for our existence,” he added.
Calculators to AI, disruptive tech needs collective action
As technology continues to evolve and disrupt industries, it is essential to recognise that innovation brings grey areas. Mr Daksh Sharma, Director of Iffort, explained: “If you go back to the 1980s and look at the era of calculators, what's common is that any new piece of disruptive tech always needs collective action from all the stakeholders.”
“Generative AI is evolving, and what we see today is going to transform radically; it will only get better in the months and years to come. Data privacy, hallucinated output, and deepfakes are the biggest concerns, but the potential benefits far outweigh the risks, and that's what the whole ecosystem needs to focus on,” he added.
The widening impact of AI on jobs
One area where AI has had a profound impact is the job market. While it is true that AI has created new jobs in areas like data science, machine learning, and robotics, it has also displaced many traditional jobs. Director of HR at the Sharda Group, Col. Gaurav Dimri, said, “While AI opens hitherto unknown frontiers and is poised for widespread proliferation across various domains, it does come with proverbial risks. AI-based data algorithms can become manipulative and generate distorted information. There is also a likelihood of job losses, particularly in the data interpretation, IT, and ITES sectors.
“Even sub-domains in finance, marketing, consumer behavior, HR, tertiary workforce in medicine, legal, and edu-tech could face job cuts as AI-enabled systems take over certain roles. However, the greater concern is the risk of AI-based machines acquiring greater control, particularly in the absence of effective regulatory mechanisms. It is likely that these concerns, along with others, have led Dr. Geoffrey Hinton to call for a renewed focus on establishing effective procedures for ensuring the ethical application of AI to harness its optimal potential,” he stated.
Don't get caught in analysis-paralysis: Vaccinate yourself with upskilling
Much like we vaccinated ourselves against Covid, we need to learn and master AI tools to safeguard ourselves from the technology's potential negative consequences. It is important not to waste time pondering the "what-ifs" and instead act quickly to learn and adapt.
“I think an interesting analogy for the current state of AI, and what’s about to come, is the outbreak of the Covid pandemic. Once we know something as viral in nature as Covid is out there, it’s foolish to waste time over-analysing it. The way we should look at it is: the only thing in our control is to vaccinate. With Covid, we had to get vaccinated to protect ourselves. With AI, we have to keep learning and be our own antidote, using these technologies to upskill,” suggested Azaan Sait, the founder of The Hub Bengaluru, The Hubverse, and The HubCo.
"Also, AI will spread with the same velocity with which Covid took over the world. Whether we are an organisation or an individual, it is time to act, master the tools available, and vaccinate ourselves against the negative consequences that might come from this tech,” he added.
Fake News to killer robots: The palpable fears of AI
AI's rapid development has sparked growing concerns about its potential negative impacts. One of the most pressing issues is the widespread dissemination of fake news, images, videos, and text, which can cause people to lose their ability to differentiate between what's true and what's not. Yet, with the dawn of the AI age, a realignment between humans and machines is necessary. By delegating routine tasks to machines, people can devote their time and attention to more sophisticated endeavors.
“AI systems could eventually learn unexpected, dangerous behaviour; such systems could eventually power killer robots, and AI could cause harmful disruption to the labour market. However, as we enter the AI era, a human-machine realignment needs to happen, wherein daily, trivial chores and tasks are done by the machine, leaving humans to do more elevated work. That has happened at every stage of previous revolutions: agriculture, manufacturing, and software,” stated President of 3AI, Sameer Dhanrajani.
“Dr Hinton is far from the first artificial intelligence expert to sound the alarm on the dangers of the AI they have built. In recent months, two major open letters warned of the “profound risks to society and humanity” it poses, and were signed by many of the people who helped create it. However, in the future, the potential of solving large, complex, and unresolved problems at scale with AI far exceeds the dangers, which require regulation and governance,” he concluded.