What not to share with AI chatbots like ChatGPT?

Inputting personal data into AI chatbots can pose privacy and consent risks, so it is crucial to be cautious about what you share. Here are seven types of information you should never disclose to an AI chatbot.

As AI-powered chatbots become more prevalent, it is important to acknowledge their capabilities while also recognising their imperfections. Using AI chatbots comes with inherent risks, such as potential cyberattacks and privacy concerns.

It may surprise you that seemingly friendly chatbots such as ChatGPT, Bard, and Bing AI could inadvertently expose your private information online. These chatbots rely on AI language models that learn from user data.

For instance, Google recently updated its privacy policy to state clearly that online posts may be used to train its AI tools and models. Likewise, ChatGPT's retention of chat logs for model improvement raises privacy concerns. The most reliable safeguard is to refrain from sharing certain information with AI chatbots in the first place.

1. Financial and banking details

In a report released on June 6, the Consumer Financial Protection Bureau (CFPB) cautioned that chatbot technology falls short as questions become more complex. The report highlights the risk of financial institutions violating federal consumer protection laws when deploying chatbots.

The CFPB notes an increase in consumer complaints related to difficulties in areas such as dispute resolution, obtaining accurate information, receiving satisfactory customer service, seeking assistance from human representatives, and ensuring the security of personal information. As a result, the CFPB advises financial institutions to avoid relying solely on chatbots. 

2. Personally Identifiable Information (PII)

To protect your privacy and reduce the risk of misuse, never share information that can personally identify you. This includes your full name, residential address, social security number, credit card details, or any other personally identifiable information. Safeguarding these details is paramount to preventing unauthorised use or potential harm.
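As a practical safeguard, some users strip obvious identifiers from a prompt before pasting it into a chatbot. The sketch below is illustrative only: the `redact` helper and its regex patterns are assumptions for this example, not a complete PII filter, and real redaction should use a dedicated tool.

```python
import re

# Illustrative, simplified patterns -- a sketch of the idea, not a
# complete safeguard. Real PII detection needs a dedicated library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "My SSN is 123-45-6789 and my email is jane@example.com"
print(redact(prompt))  # My SSN is [SSN] and my email is [EMAIL]
```

The redacted prompt still carries enough context for the chatbot to answer, while the identifiers never leave your machine.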

3. Confidential workplace information

Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented policies to prohibit the use of AI chatbots by their employees. A Bloomberg article highlighted an incident where a Samsung employee unintentionally uploaded confidential code to a generative AI platform while using ChatGPT for coding tasks. This breach led to the unauthorised disclosure of private information about Samsung and subsequently resulted in the company banning the use of AI chatbots. 

This serves as a reminder that developers seeking AI assistance for coding challenges should not trust AI chatbots like ChatGPT with sensitive data. Moreover, many employees use AI chatbots to summarise meeting minutes or automate mundane tasks, further increasing the risk of inadvertently disclosing sensitive information. By being mindful of the risks associated with sharing work-related data, users can safeguard their confidential information. 

4. Passwords and security codes

Never disclose your passwords, PINs, security codes, or any other confidential access credentials to AI chatbots. Even when a chatbot claims to be designed with privacy in mind, it is prudent to prioritise your safety and withhold such sensitive information. Safeguarding your credentials is essential to keep your accounts secure and protect your personal data from unauthorised access or misuse.
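One lightweight precaution is to scan text for credential-like strings before it ever reaches a chatbot. A minimal sketch, assuming a handful of illustrative patterns (the `contains_secret` helper and its regexes are hypothetical, not an exhaustive scanner):

```python
import re

# Hypothetical helper: flag credential-like strings before text is
# pasted into a chatbot. Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def contains_secret(text: str) -> bool:
    """Return True if any credential-like pattern appears in the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)

print(contains_secret("password = hunter2"))      # True
print(contains_secret("How do I centre a div?"))  # False
```

Dedicated secret-scanning tools cover far more credential formats; this sketch only illustrates the habit of checking before you paste.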

5. Confidential or proprietary information

Exercise caution and refrain from sharing any confidential or proprietary information pertaining to your workplace, employer, or other organisations when interacting with AI chatbots. Such sensitive information includes trade secrets, intellectual property, internal procedures, or any data that could potentially breach non-disclosure agreements or jeopardise business interests. Preserving the confidentiality of this type of information is essential to maintain the integrity and competitiveness of organisations, as well as to uphold professional ethics and legal obligations.

6. Health-related Information

It is vital to refrain from disclosing sensitive health information to AI chatbots. This includes medical conditions, diagnoses, treatment details, or medication regimens. Instead, such personal health matters should be discussed exclusively with qualified healthcare professionals in a secure and private setting. Protecting your health data is crucial to ensure proper medical care, maintain confidentiality, and safeguard against potential privacy breaches or misuse of sensitive medical information.

7. Intimate or explicit content

Exercise utmost caution and never share explicit or intimate images, videos, or engage in discussions of a sensitive nature with AI chatbots. AI chatbots are not designed or equipped to handle such content, and sharing such materials could potentially lead to unintended consequences, privacy breaches, or misuse of personal data. It is important to treat AI chatbots as automated systems and avoid involving them in any explicit or intimate content.
