Google warns employees about use of AI chatbots, including Bard

Ten News Network

New Delhi (16/06/2023): Alphabet Inc., the parent company of Google, is warning staff about how they use chatbots, including its own Bard, even as it markets the programme globally.

According to sources cited by Reuters, and confirmed by the company, the Google parent has cautioned staff not to enter confidential material into AI chatbots, citing its long-standing policy on information security.

Chatbots such as Bard and ChatGPT are human-sounding programmes that use generative artificial intelligence to converse with users and respond to a wide variety of requests.

Human reviewers may read the conversations, and researchers have found that such AI models can reproduce the data they absorbed during training, posing a risk of leaks.

According to some of the sources, Alphabet has also warned its engineers against the direct use of computer code generated by chatbots.

According to the company, Bard can make undesirable code suggestions, yet it still assists programmers. Google has stated that it intends to be open about the limitations of its technology.

The concerns show how keen Google is to avoid business harm from software it has launched in competition with ChatGPT. Billions of dollars in investment, and still-untold advertising and cloud revenue from new AI programmes, are at stake in Google's race against ChatGPT's backers, OpenAI and Microsoft Corp.

Google's caution also reflects what is quickly becoming a corporate security standard: warning employees against using publicly available chat programmes.

According to Reuters, a growing number of corporations around the world, including Samsung, Amazon, and Deutsche Bank, have put safeguards in place around AI chatbots.
