OpenAI reinforces funding to safeguard ChatGPT AI against potential 'rogue' behaviour

OpenAI, the creator of ChatGPT, has revealed plans to dedicate significant resources to making AI safer for humans through a new research group, the 'Superalignment' team.

ChatGPT creator OpenAI plans to double down on its resources and establish a new research team to work on making AI safe for humans, with the ultimate aim of having AI supervise itself.

Addressing fears of AI replacing humans, OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a blog post, "The vast power of superintelligence could ... lead to the disempowerment of humanity or even human extinction. Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

Microsoft-backed OpenAI plans to dedicate 20% of its computing power over the next four years to tackling this challenge.

The company is also forming a dedicated team, the Superalignment team, to drive the initiative.

The Superalignment team's first objective is to develop a "human-level" AI alignment researcher, and then scale its capabilities using vast amounts of computing power.

The approach involves first training AI systems with human feedback, then training AI systems to assist in human evaluation, and ultimately training AI systems to conduct alignment research themselves.

Connor Leahy, an AI safety advocate, commented that this plan is fundamentally flawed, warning that developing human-level AI without solving alignment concerns first could result in uncontrolled and potentially catastrophic consequences.

The potential risks involved with AI are a major concern among AI researchers, as well as the general public.

Recently, a group of AI industry leaders and experts called for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. In addition, a Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the potential negative impact of AI, with 61% believing it could pose a threat to civilisation.


Topics: Technology, #HRTech, #ArtificialIntelligence, #HRCommunity
