from Hacker News

Google CEO Sundar Pichai says its malfunctioning Gemini AI is 'unacceptable'

by antimora on 2/28/24, 9:59 PM with 78 comments

  • by 0xbadcafebee on 2/28/24, 10:36 PM

    Is it malfunctioning? Or is it just being generative AI? Maybe it's time people accept that generative AI is good at making shit up, and so we should just use it for that. Maybe we should stop pretending that "probabilistically predicting the next in a series of words" counts as intelligence.

    I say, let the generative AI be racist. Let it hallucinate and come up with all kinds of crazy shit. And with that, stop using it for things that matter. Use it for art, entertainment, experiments, explorations. Use it to mine the depths of the human soul and reflect back at us what we are. Don't use it for law, health care, directions, or anything humans depend on. Acknowledge that it's just a stupid program with output that "looks impressive", and that we've all been fleeced into thinking it would be anything more. Regardless of how many billions we sink into it, it only appears intelligent when we conceal how batshit crazy it can be.

  • by somethoughts on 2/28/24, 10:50 PM

    I think the biggest strategic oversight that Google leadership has made is not learning from Microsoft's Tay fiasco [1].

    What Google should have done is set up an entirely separate startup (or startups), similar to the approach Microsoft has taken with OpenAI. Let a few young, ambitious startup CEOs lead the way and supply them with billions in funding that they must spend on Google Cloud AI compute - in exchange for a majority stake in their company and a commitment to integrate with Google properties only.

    Book the GCP AI revenue and get a stock bump from YoY GCP growth. If the startup(s) misstep, claim the tax write-off and move on to the next batch. If they are successful over a five-year stretch, then just buy them out fully and get some positive news coverage for the successful strategic acquisition.

    As a member of Mag7/FAANG, Google leadership must realize that any and all minor missteps are going to be blown out of proportion via clickbait articles.

    [1] https://en.wikipedia.org/wiki/Tay_(chatbot)

  • by glimshe on 2/28/24, 10:51 PM

    He talks about it as if it were someone else's fault.

    His internal policies of extreme virtue signaling and DEI focus could only have led to that outcome. And now that I think about it, this was probably not a problem of lack of testing - it was tested, and the results were as expected given Google's history of trying to mold society through subtle but deliberate biases in its products.

  • by elintknower on 2/28/24, 10:46 PM

    The issue is that anyone with an IQ over 85 can tell that this behavior is in no way a "malfunction" of the system. The system was intended to operate this way, and the general population found the output abhorrent and wrong.

    The way Gemini talks about certain events (and generates images) is clearly in line with the walled garden of compromised liberal thinking at Google. I say this as someone who votes blue and considers themselves progressive.

    What's unacceptable is Google clearly thinking it was okay to use their AI to erase history as it actually happened and just play it off as their org having the moral high ground.

  • by ra7 on 2/28/24, 10:49 PM

    If you read his full memo [1], it's the most Sundar Pichai response possible to a crisis. It's full of vague MBA speak ("red-teaming", really?) with the obligatory Google mission statement sprinkled in, along with self-congratulatory statements about their AI achievements. It does not inspire any confidence whatsoever.

    [1] https://twitter.com/TechEmails/status/1762849036363505996

  • by 0xy on 2/28/24, 10:25 PM

    The reason Google has about 100 DEI employees with their hands on the controls of this thing is because they believe anything else is unacceptable.

    Gemini wasn't shocking because it showed racist and historically inaccurate material; it was shocking because it showed the world through Google's lens.

    As others have pointed out, even Google search has the same issue. Image search will highlight minorities even when the search term suggests none should appear.

    It was 100% deliberate, and they spent countless hours refining, polishing and testing the model. They were happy with it.

  • by cloudking on 2/28/24, 10:33 PM

    I'm surprised their LLMs passed the strict Google launch review process. How do you test something that gives a random response every time you query it?

  • by ChrisArchitect on 2/28/24, 10:59 PM

    [dupe]

    Lots more discussion over here: https://news.ycombinator.com/item?id=39534608

  • by tiahura on 2/28/24, 10:31 PM

    Other than for entertainment purposes, who cares what a statistical analysis of English has to say about ethics?

  • by 1vuio0pswjnm7 on 2/29/24, 9:11 AM

    We really need something like The Onion with quotes from users, the way university newspapers will have a row of headshots and quotes from random students asked some question. The Onion, of course, was originally a student newspaper that had something like that.

    Instead all we get is an endless stream of BS from "Big Tech CEOs".

    I want to hear what the people say, not some "Big Tech" compulsive liar.

  • by 29athrowaway on 2/28/24, 10:48 PM

    The system was designed to do that. It is not an AI problem; it is a product problem.

  • by bitcharmer on 3/4/24, 3:30 PM

    And of course this was gang-flagged by the blue-haired fairies.

  • by alephnerd on 2/28/24, 10:30 PM

    While everyone on HN is hyperventilating about DEI crap, the malfunction Pichai is most likely referencing is Gemini AI calling Modi a fascist [0].

    A Union Minister has already given a veiled threat that Google broke Indian telecom laws [1] surrounding fake news.

    It's election season in India, and governments get very ban-happy during this time (look at what happened to Amazon India in 2019).

    Edit: Please don't make this an Indian politics flame war. The point is, a public company like Google is constrained by PR.

    This is why OpenAI has been innovating so fast - they are private and answer to almost nobody.

    Public companies need to protect their brand, otherwise they lose out to similarly sized competitors (e.g., Microsoft or Facebook).

    [0] - https://www.thehindubusinessline.com/info-tech/googles-gemin...

    [1] - https://twitter.com/Rajeev_GoI/status/1760910808773710038

  • by pachorizons on 2/28/24, 10:52 PM

    It is truly indicative of the shortcomings of the supposedly enlightened Hacker News hivemind that Google - a company that has killed influential projects and sacrificed / cannibalised the integrity of its core search product - should be condemned because its latest AI malfunction is too 'DEI'.

    Google's AI blunders are more systemic than the latest culture war. The cringe hate spewed by earlier models was just as incompetent as Gemini; these are nothing more than the flailings of a multinational that is incapable of fashioning the vision of the world it promises. These are bugs at the highest level that are clearly poisoning every layer of Google. They are the same bugs that led to Google search shortcuts that pushed COVID disinformation.

    This is a failure of rigour from the flailing of the world's self-appointed organiser of knowledge. Your least favourite DEI minority has nothing to do with it. For fuck's sake. Place your blame where it belongs.

  • by throwaway5959 on 2/28/24, 10:32 PM

    How badly does this guy have to fail before he gets the boot? Google is an embarrassment in areas where it should be excelling and leading the industry. They _invented_ transformers.

  • by behnamoh on 2/28/24, 10:39 PM

    Companies RLHF the models.

    The market RLHFs the companies.

    It was about time someone called out the evil incentives behind the push for alignment.

  • by robertwt7 on 2/28/24, 10:51 PM

    What about Google search? This is also happening with Google image search and needs to be fixed!

  • by emilfihlman on 2/28/24, 10:22 PM

    >Malfunctioning

    No, it's behaving exactly how the training data and training procedures instructed it to behave, i.e., what Google's employees instructed it to do.

    It's not malfunctioning, because you made it like this purposefully.

  • by sergiotapia on 2/28/24, 10:39 PM

    >“This wasn’t what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical—or any other—images,”

    The model seemed to respond exactly how it was trained to. The problem isn't technical. It's cultural rot, and the only fix is to get rid of the people rotting it. No amount of merge requests is going to fix this.

  • by curtisblaine on 2/28/24, 10:57 PM

    It's good that this happened, and I hope it stays in the news as much as possible. It's also good that Google entered an Indian culture war, and I hope many incidents of this kind keep happening until the maximum number of users see what happens when DEI takes control and has free rein over indoctrination. I'm not very hopeful for the future, but anything that exposes the ridiculousness of the people who were put in control of this tragic debacle is very welcome.