Google says no sharing of sensitive info with AI chatbots like Bard, here's why



According to Google Bard's FAQ, the company collects various types of information during interactions with the chatbot, including conversation history, location data, feedback, and usage information.

Google is cautioning its employees against sharing confidential information with AI chatbots, including ChatGPT and the company's own chatbot, Bard. The warning aims to prevent sensitive data from being absorbed by the large language models that power these chatbots, which could lead to data leaks in the future.

Additionally, human reviewers, who serve as moderators, may have access to sensitive information shared in conversations. The report also notes that Google engineers are being advised against directly using code generated by AI chatbots.

Google Bard's FAQ states that during interactions with the chatbot, the company collects data points such as conversation history, location, feedback, and usage information. The stated purpose of this collection is to provide, improve, and develop Google's products, services, and machine-learning technologies.

In a somewhat contradictory stance, Reuters reported that Google employees can still utilise Bard for other tasks despite the company's warning regarding sharing confidential information. This recent cautionary approach from Google contrasts with its previous position when Bard was introduced earlier this year as a competitor to ChatGPT. At that time, employees were encouraged to extensively test the AI chatbot to assess its capabilities and limitations.

Google's cautionary message to its employees mirrors a security practice now common across the industry: many companies have prohibited the use of publicly available AI chatbots. Samsung, for example, reportedly banned ChatGPT after some employees were found to have shared sensitive information with it.

Google has emphasised transparency about Bard's limitations, acknowledging that while the chatbot may make undesired code suggestions, it still assists programmers in their work.

Additionally, Bard is capable of performing various tasks such as drafting emails, reviewing code, proofreading lengthy essays, solving mathematical problems, and even generating images within seconds.


