Google Gemini live demo failed twice during the Pixel 9 event, reminding us why you can't always trust AI

Google launched the Pixel 9 series of devices yesterday at the Made by Google event. Before unveiling the new Pixel 9 Pro, Pixel 9, and Pixel 9 Pro Fold, however, Google demonstrated the new Gemini features in a live demo on stage. As luck would have it, the demo failed, not once but twice. The failure occurred while showcasing how well the new Gemini mobile AI features work with the calendar app by feeding it information from an image. After the prompt was submitted by voice, Gemini processed for several seconds and then failed to respond. This happened twice, and the demo only worked once the presenter switched to a different Samsung Galaxy S24 Ultra.

Why You Can't Always Trust AI

As it stands, that wasn't the only time Gemini malfunctioned. During a demonstration of the new Magic Editor feature (Google's attempt at expanding on the Magic Eraser), the AI produced an awkward, unnatural object in the edited photo after being asked to add a hot air balloon to an image. This shows that generative artificial intelligence, at least in its current form, cannot be trusted at all times.

Moreover, companies like Google itself acknowledge this fact. Back in February, Google stated that Gemini is a creativity and productivity tool and that it "may not always be reliable, especially when it comes to generating images or text about current events, evolving news or hot-button topics. It will make mistakes." Google attributes this to "hallucinations" in large language models, instances where the AI simply gets things wrong.

Google Gemini's History of Missteps

If you recall, earlier this year, the Google Gemini app (formerly Bard) introduced the ability to generate images, but Google had to withdraw the feature after it generated inaccurate and sometimes even offensive images.

Google's Prabhakar Raghavan later acknowledged the problem in a blog post, and the company temporarily disabled the feature to avoid causing further offense and damage.

This isn't limited to Google Gemini, either; other major AI players, such as OpenAI, also acknowledge that their models may "occasionally" provide inaccurate information. And truth be told, such issues are to be expected, considering how quickly these models are advancing in the race toward Artificial General Intelligence (AGI).
