We find ourselves in an era marked by a transformative shift in artificial intelligence. Until recently, machines had never achieved behaviour indistinguishable from that of humans. Emerging generative AI models, however, are not only capable of engaging in complex conversations with users; they can also create content that appears genuinely original. This has led to phenomena we have arguably never seen before.
Consider, for example, the success of Mistral AI, a startup based in Paris. Despite being founded only months ago, the company recently secured a remarkable 105 million euros in seed funding. With a valuation of 240 million euros, Mistral AI's round stands as the largest seed funding round ever witnessed in the European startup ecosystem.
Meanwhile, OpenAI, the artificial intelligence research laboratory, is on track to generate $200 million in revenue this year, a 150% increase from 2022. This trajectory is projected to continue, reaching $1 billion in 2024, a striking 400% growth rate. Such substantial backing is evidence of the potential and promise that generative AI holds, propelling us toward an era of limitless innovation.
But with huge success come even bigger risks, and generative AI is no exception. Generative AI can introduce security vulnerabilities because of its reliance on pre-existing code during training. Its fast-paced development also poses a challenge for companies, which must quickly determine whether it brings new cybersecurity complexities or worsens existing vulnerabilities. This leads us to a Big Question: How can we protect our data, companies, and employees amid the rapid growth of this advancing technology?
To deliberate on these issues, Pallavi Verma, Senior Editor-Brand Reachout, People Matters, sat down with CS Krishnakumar, Sr VP & CHRO, Essar Group, and Satish Rajarathnam, SVP HR & Global Head Talent Transformation, Mphasis, during a Big Questions session at TechHR India 2023.
Power and risks of generative AI
While many who grew up in the digital age laud the capabilities of ChatGPT, there exists a cohort that holds concerns about its impact, seeing it as potentially more detrimental than beneficial. These apprehensions are not unfounded, as they point to a range of risks associated with the widespread use of AI like ChatGPT. Among these concerns, some have raised alarms about the emergence of deepfakes, fabricated media that can convincingly portray individuals saying or doing things they never actually did, and about the weaponisation of disinformation, which can amplify the spread of false narratives and manipulate public opinion on a massive scale.
If this were not enough, fears extend to the advent of a new industrial revolution driven by AI and automation. This revolution brings with it the unsettling prospect of job displacement on a massive scale, as machines and algorithms take over tasks previously performed by humans. The transformation raises questions about the future of work, economic stability, and social inequality, casting a shadow of uncertainty over society's evolution in the AI era. To mitigate the adverse effects of these drawbacks, Satish Rajarathnam recommended exercising mindfulness.
“Open AI provides powerful data, but the fact is it's less controlled. Yes, it's useful but comes with major privacy concerns. If we shift our focus to the hosted model of Open AI, it is better, it has a controlled environment that complies with regulations. If we want to work with AI in the future, it can only happen with the right security and protocols. For example, in this session, we're following certain protocols and unsaid rules, without control, it will be chaotic. My perspective is, if you want to do something, do it responsibly. After all, it's important to be mindful of who we are,” he said.
Ethical frameworks for AI: A necessity in rapid evolution
Do you remember how the introduction of ChatGPT caught EU legislators off guard, necessitating urgent revisions to the AI Act? The incident demonstrates that, given the rapid evolution of AI, businesses cannot rely solely on regulatory measures for comprehensive guidance. The pace of AI advancement surpasses the law's ability to adapt. Consequently, it is imperative for businesses to proactively establish their own ethical frameworks, even if they currently have no plans to use generative AI.
Delaying this process will make it harder to cultivate an ethical decision-making culture within an organisation. Furthermore, careless use of these tools can compromise privacy, heightening the risk of data breaches and identity theft. From an ethical standpoint, there is also the potential for inaccuracies, misinformation, and even disinformation.
The decision of how much to embrace AI and how much data to share should be guided by ethical considerations and organisational values, suggested CS Krishnakumar, Sr VP & CHRO, Essar Group. He added, “In this era of technological advancement, we are faced with opportunities and dilemmas. AI's potential for revolutionising industries and increasing productivity is significant. However, it requires careful consideration and ethical navigation to ensure we reap its benefits without compromising privacy and security.”
AI progress: Guided by balance and training
As per an article by McKinsey, published in December, over a million users logged into OpenAI's ChatGPT platform within five days of its launch. Concurrently, the availability of chatbots continues to increase daily, with recent additions like Microsoft's Bing AI and Google's Bard. In the middle of March, Microsoft unveiled Copilot, intending to incorporate generative AI into its Microsoft 365 applications such as Word, Outlook, PowerPoint, and Excel.
The extensive implications for the workplace left even those well-versed in IT astounded. The objective, however, is to establish guidelines that help individuals understand the boundaries, enabling them to gauge whether they are veering too far in one direction or the other. More detailed policy guidance can then follow once a collective understanding is achieved. This process will take time, certainly more than the upcoming few weeks or months.
Hence, the SVP HR and Global Head Talent Transformation at Mphasis stated that while embarking on this journey, it becomes paramount to emphasise “the significance of training and guidance. Robust protocols and well-defined policies play a critical role in safeguarding data security and integrity. The crucial aspect lies in achieving a delicate equilibrium between the advancement of technology and the conscientious and ethical application of these innovations.”
The goal should be to provide individuals with sufficient knowledge to comprehend without overwhelming them with intricate technical details. Similar considerations apply to AI, said the senior VP and CHRO at Essar Group.
“Certain technical staff might require an in-depth understanding of AI operations, while others may not need to engage with it extensively. Non-technical personnel, on the other hand, must develop a heightened awareness of AI-powered tools: understanding their boundaries, identifying biases, and other related aspects,” he added.
To learn more from leaders about some of the burning questions in today’s world of work, stay tuned to People Matters' Big Question series on LinkedIn. This special session was conducted at the 10th edition of People Matters TechHR India, Asia’s Largest HR & WorkTech Conference, on the 4th of August 2023 at The Leela, Ambience Mall, Gurgaon.