Google Warns Employees: Be Careful When Using Bard

Google is reportedly warning its employees to take care when using artificial intelligence (AI) chatbots.

That warning, according to a Thursday (June 15) report by Reuters, extends to Bard, the AI chatbot Google announced earlier this year amid a frenzy around the technology.

According to the report, the company told workers not to enter confidential Google materials into AI chatbots, while also warning its engineers to avoid directly using computer code that chatbots can generate.

That information came from sources with knowledge of the matter and was later confirmed by the company, which told Reuters that while Bard can make undesired code suggestions, it still helps programmers. The company added that it wanted to be upfront about the technology's limitations.

PYMNTS has reached out to Google for comment but has not yet received a reply.

Google debuted Bard earlier this year as part of a series of AI-focused product launches that also included Wednesday's introduction of a virtual try-on tool, designed to give online shoppers the same assurance they get when shopping for clothing in stores: that the clothes they're buying will fit.

At the same time, the company insists it is approaching AI with caution as it integrates the technology into its flagship search function.

"People come to us and type queries like, 'What's the Tylenol dosage for my 3-year-old?'" CEO Sundar Pichai said in a recent Bloomberg News interview. "There's no room to get that wrong."

PYMNTS looked at the possible limitations of AI in a report earlier this month, noting humanity's long history of misplaced faith in next-big-thing technologies.

"This should give firms pause as they race to integrate next-generation generative artificial intelligence (AI) capabilities into their products and services," PYMNTS wrote.

Why? Because the wide use of relatively early-stage AI will usher in new ways of making mistakes. Generative AI can create new content such as text, images, music, video and code, but it can also fabricate information entirely, in what's known as "hallucination."

To combat this problem, Microsoft-backed OpenAI released a research paper last month on a new strategy for fighting hallucinations.

"Even state-of-the-art models still produce logical mistakes, often called hallucinations. Mitigating hallucinations is a critical step towards building aligned AGI (artificial general intelligence)," the paper says.

"These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution," the researchers added.
