AI language models are currently at the forefront of technology and generating significant excitement. Tech companies are in a race to integrate these models into various products, enabling users to perform tasks such as booking trips, managing calendars, taking meeting notes, and much more. However, a looming concern is the potential for these models to be misused, serving as potent tools for phishing and scams without requiring any programming expertise. What's even more alarming is the absence of a known solution to this problem.
AI technology could be harnessed for nefarious purposes, including aiding criminals in phishing, spamming, and scam activities. Experts have warned that we may be heading towards a severe security and privacy crisis. With AI dominating discussions and causing leaders to lose sleep over privacy concerns, People Matters had to ask the question: Do employees need training to safeguard against AI?
In our latest Big Questions session, we deliberated on this issue alongside Jaspreet Bindra, Founder of Tech Whisperer, and Dr Aloknath De, Chairman of AgriEnIcs at MeitY, Government of India. Here's what we found and learned.
AI security and governance: A comprehensive approach
Artificial intelligence is rapidly advancing, offering businesses the opportunity to enhance productivity through process optimisation and automation. However, it's crucial to recognise that AI isn't a quick-fix solution for all business challenges. Using AI prudently requires employee training if it is to yield fruitful results.
On the upside, “AI offers several advantages, including cost-effectiveness compared to hiring new staff. Given the ongoing labour shortage and high turnover rates, companies are keen to cut expenses. AI also enhances competitiveness, which is vital in today's fast-paced business landscape. Additionally, it can boost revenue by enabling businesses to better understand and cater to customer preferences,” stated Jaspreet Bindra.
Expanding on Mr Bindra's insight, Dr Aloknath De highlighted a comprehensive framework consisting of five layers that underscore the necessity for employee training to safeguard against the potential risks posed by AI:
1. Fundamental AI comprehension: Providing employees with a foundational understanding of AI, akin to a simplified AI for Dummies guide. This will ensure that everyone in the organisation grasps the basic principles and implications of AI technology.
2. Advanced technological proficiency: Moving beyond the basics, employees should receive more in-depth knowledge, including insights into machine learning and various other technological tools associated with AI.
3. Cross-cultural AI exploration: With AI's global reach, it's crucial for employees to explore AI applications across diverse cultures so they understand how AI operates and is perceived in different contexts.
4. AI security education: To fortify AI safeguards, organisations should provide employees with study materials and interactive sessions on AI security to empower individuals to recognise and mitigate potential security risks associated with AI.
5. Organisational AI governance: The final layer underscores the importance of organisational governance when implementing AI. This includes considerations like data anonymisation, responsible AI use, and the alignment of AI initiatives with broader organisational goals.
The perils of banning AI
Governments worldwide are becoming increasingly concerned about the far-reaching implications of advanced artificial intelligence, particularly following the launch of ChatGPT by OpenAI. The Canadian privacy commissioner initiated an investigation into ChatGPT, aligning with counterparts in various countries such as Germany, France, and Sweden, all of which have expressed unease regarding this widely used chatbot. Italy went a step further by imposing a complete ban on ChatGPT, a decision prompted by a March 20 incident where OpenAI acknowledged a system bug that exposed users' payment data and chat history.
Nevertheless, the question remains: Is it reasonable to ban software and artificial intelligence altogether? The Founder of Tech Whisperer thinks it would be the death of organisations.
“AI can be a great tool, but it's important not to rely on it entirely. But, if we delve into the security aspect, banning AI could lead to various security issues, and as an organisation, we need to adapt our responses. Work is evolving, and AI can be a powerful tool when used correctly. However, it's also crucial to consider potential risks, such as privacy breaches and misuse. By banning AI outright, we might miss out on valuable opportunities and innovative solutions. Instead of a complete ban, it's wiser to establish clear policies and guidelines for AI usage, ensuring that it benefits the organisation without compromising security,” he explained.
AI violations: Implications and responses
Whether it involves a privacy breach, the improper use of AI technology, or any other wrongdoing, the outcomes can have significant consequences. Similar to workplace harassment or data security breaches, AI violations possess the potential to damage an organisation's reputation, diminish trust among stakeholders, and invite legal repercussions. Moreover, AI violations frequently entail sensitive data and personal information, rendering them pivotal concerns that organisations must confront. Overlooking these infractions may result in severe consequences, encompassing regulatory non-compliance and negative public perception. Therefore, our panellists advocate treating AI violations by employees no differently than any other violation.
“Establish a clear set of guidelines similar to those addressing other violations, such as sexual harassment. This involves determining penalties based on the severity of the AI violation, treating it with the same seriousness as other breaches. These violations might occur due to factors like inadequate training or a misunderstanding of the AI system's functionality. Therefore, organisations should also focus on enhancing employee education and awareness to prevent such incidents from recurring,” said Jaspreet Bindra.
On the other hand, Dr Aloknath De shared key points to address AI violations:
1. Recognise the importance of AI: Understand that not embracing AI can put your organisation at a severe disadvantage, especially with a younger workforce entering the scene.
2. Leverage AI tools: Explore AI tools like language translation software, which can enhance communication and accessibility within your organisation.
3. Evaluate AI initiatives: Before diving into AI projects, assess what personal information will be managed and whether you'll be sharing sensitive data with external parties.
4. Data management challenges: Acknowledge that AI introduces a new paradigm where data is as crucial as code. Learn how to handle data effectively, including version control, as it's a relatively new aspect in AI development.
5. Complexity of deep learning: Understand that deep learning, a key component of AI, can be intricate and less transparent. Explore the concept of Explainable AI to gain insights into the inner workings of AI systems.
6. Learning from data: Realise that dealing with large datasets can be challenging, with potential for unintentional mistakes. Be prepared to learn from these experiences.
7. Collaboration is key: Encourage collaboration among your organisation's scientists, forming groups or compliance teams to work together on defining AI use cases, training, and data management protocols.
8. Training and onboarding: Establish clear guidelines for training new employees in AI-related roles, specifying what knowledge levels are required for different positions.
AI violations: Who bears the responsibility?
Now that we've explored the actions for leaders when confronted with AI violations in their organisations, the next pertinent question emerges: Who will assume the responsibility of detecting such wrongdoings? Will this duty fall under the purview of the Chief Technology Officer (CTO), or will organisations establish new roles and teams?
Jaspreet Bindra, Founder of Tech Whisperer, emphasised, "The relevance of these roles hinges upon the unique needs of each organisation. For example, we are witnessing the growing importance of positions like cybersecurity experts, which have become increasingly indispensable. Notably, organisations like the Tata group are placing a significant emphasis on roles related to ESG (Environmental, Social, and Governance) considerations as a strategic move towards long-term sustainability."
The Chairman of AgriEnIcs at MeitY, Government of India, added his perspective, stating, "In the forthcoming years, organisations may contemplate the appointment of specialised officers dedicated to roles focused on managing data, ethics, or compliance. The necessity for these roles will largely depend on the nature of the business. This evolving landscape could lead to the emergence of positions akin to those seen in sectors like finance, insurance, or education. These individuals, while not always bearing the 'officer' title, could hold designations such as Chief Data Scientist or Chief Data Officer, signifying their pivotal role and contribution to the organisation."
The decision regarding the assignment of responsibility for AI governance and ethics hinges on factors such as the organisation's size, industry, and the evolving challenges associated with AI implementation. Whether it involves integrating these responsibilities into existing leadership roles or creating dedicated positions, one thing remains clear: vigilance and proactive measures are essential to address AI violations effectively.
Equity through diligence: Tackling sociological bias in AI
Assessing how sociological bias in AI and ML training data and model predictions affects customer groups across different demographics is a pervasive challenge that demands ongoing diligence. Organisations will need to recognise that detecting bias is not a one-time effort; it requires a continuous commitment to ensuring fairness and equity. To address this, organisations will have to establish protocols for continuous monitoring and adjustment of AI models. This proactive approach will ensure that emerging biases are promptly identified and rectified, minimising their influence on decision-making processes.
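One way the continuous monitoring described above can be made concrete is a periodic disparate-impact check on model decisions across demographic segments. The sketch below, in plain Python, is illustrative only: the group names, the sample outcomes, and the 0.8 threshold (borrowed from the widely cited "four-fifths rule") are assumptions for the example, not a prescription from the panellists.

```python
# Minimal sketch of a demographic-parity check on model decisions.
# Group labels, outcomes, and the 0.8 threshold are illustrative
# assumptions (the threshold echoes the common "four-fifths rule").

def selection_rate(decisions):
    """Fraction of favourable (1) decisions in one group."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_by_group):
    """Ratio of the lowest group selection rate to the highest.

    A ratio well below ~0.8 is a common flag for potential bias.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical model outputs (1 = favourable decision) per segment:
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favourable
}

ratio = disparate_impact(outcomes)
if ratio < 0.8:
    print(f"Potential bias flagged: disparate impact ratio = {ratio:.2f}")
```

Run on a schedule against fresh decisions, a check like this turns "continuous monitoring" from a principle into a measurable gate that can trigger review and model adjustment.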
By embracing these techniques, organisations may gain a deeper understanding of how sociological bias shapes the way AI and ML systems affect customer groups across various demographics. This heightened awareness will serve as the foundation upon which organisations can construct technology solutions that are inherently fairer and more inclusive.
As the Chairman of AgriEnIcs, MeitY, Govt of India aptly stated, "Today, we're facing disappointment because we recognise the existence of bias. Despite the positive aspects mentioned and the progress made in D&I (Diversity and Inclusion) and other areas, these achievements can be overshadowed by the perception of bias. Some believe that the data is extensive enough to correct itself over time. However, the studies conducted indicate that the data itself might not naturally correct these disparities, especially concerning compensation. It's essential to have guidelines and measures in place to quantify how these biases affect different demographic customer segments. This is crucial as we continue to address bias-related issues and work towards fairer outcomes."