by oedmarap on 3/23/23, 11:49 AM with 76 comments
by kryptiskt on 3/23/23, 12:29 PM
"We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback"
They are richer than Croesus, but that is no reason to hold back and get stomped on by fielding an inferior product. Being backed by an immensely profitable corporation should be an advantage for them, but instead it looks like they are so afraid of eroding those profits that they are hampered by them.
by bko on 3/23/23, 12:16 PM
When I asked Bard to list the months in alphabetical order, it failed to do so. I saw this on a tweet but confirmed myself that it can't sort the months. GPT passes. When it fails such a simple comprehension test, I lose faith in anything more complex, where the incorrect results wouldn't be as obvious. I'm surprised they released this in such a state. My guess is they're panicking to respond; I can imagine simple queries like this weren't tested.
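(For reference, the ordering the commenter asked for is trivial to produce programmatically; a quick Python sketch of the ground truth Bard failed to match:)

```python
# The twelve month names, in calendar order.
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

# A plain lexicographic sort yields the alphabetical ordering requested.
print(sorted(months))
# → ['April', 'August', 'December', 'February', 'January', 'July',
#    'June', 'March', 'May', 'November', 'October', 'September']
```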
by SimianSci on 3/23/23, 12:51 PM
From my angle, LLMs are the next big step in search. If the company whose core business revolves around search can’t update its product to meaningfully compete with competitors who treat search as a side business, that should terrify anyone with time or money invested in Google.
This is an incredibly strong signal that the company is about to face some very difficult times ahead. I fully expect their next public showing of poor execution to be followed by investor uproar.
by Random_Person on 3/23/23, 12:42 PM
I spent a few hours last night trying to work on a story outline with Bard's help, and while individual responses were sometimes okay, Bard regularly forgot plot points and character traits we had decided on earlier in the conversation, which was frustrating. ChatGPT is much better at conversational memory, but gave much less interesting responses to individual queries.
Unlike other commenters, I think these artificial limitations aren't about spending or competency, but rather out of fear. Google is afraid of their bot turning racist, or saying things that could embarrass them.
by Flatcircle on 3/23/23, 12:36 PM
Then they should ask it how Google can effectively build a technology whose success will inevitably kill their company’s main source of profits. (Search)
by impulser_ on 3/23/23, 1:43 PM
The first major noticeable improvement is that Bard is far more human-like in its responses. ChatGPT often spends time writing things it doesn't need to write.
For example, if you ask each of them: "If you had to pick one movie to watch today, what would you pick?"
Bard gives you one single movie, "The Shawshank Redemption", and explains in depth why it likes the movie.
ChatGPT gives you a boilerplate response saying it can't have an opinion because it is an AI, and then lists five popular movies with a basic description of each. Not very useful.
A lot of the time with ChatGPT, I'm waiting for it to get to the answer I'm looking for, because it spends so long writing useless text instead of being more human-like and understanding what the user really wants from the answer.
I know a lot of people shit on Google because they hate Google, but this is a good product if they can fix some of the bad knowledge. It's more impressive than ChatGPT IMHO simply because they are willing to let it learn new data all the time, while ChatGPT is still using the same dataset from 2021.
by insane_dreamer on 3/23/23, 1:05 PM
This also makes me wonder whether Apple will come out with its own LLM chatbot which could be its opportunity to wean itself off Google.
And if Meta manages to successfully integrate its LLM into its products (FB, IG, etc.), then it keeps users there for web searches too.
Either way, Google's future looks precarious right now.
by dang on 3/23/23, 9:31 PM
Google Bard waitlist - https://news.ycombinator.com/item?id=35246260 - March 2023 (386 comments)
Also related:
Bard uses a Hacker News comment as source to say that Bard has shut down - https://news.ycombinator.com/item?id=35255864 - March 2023 (194 comments)
Bard is much worse at puzzle solving than ChatGPT - https://news.ycombinator.com/item?id=35256867 - March 2023 (78 comments)
by glofish on 3/23/23, 12:27 PM
It lacks the spirit, the sparkle, the pizzazz, and the surprising "intelligence" of ChatGPT.
The service, if released before ChatGPT, would not have made the splash ChatGPT did.
by siva7 on 3/23/23, 12:53 PM
by Oras on 3/23/23, 12:20 PM
by DogRunner on 3/23/23, 1:00 PM
by siva7 on 3/23/23, 1:06 PM
by anonyfox on 3/23/23, 12:14 PM
by thejackgoode on 3/23/23, 1:01 PM
by endisneigh on 3/23/23, 12:32 PM
Google should be more proactive about getting public feedback on its research and productizing it as soon as possible.
---
Going back to Bard - I think Google is better off making Bard fast and properly setting the expectation that it can't do everything, then slowly scaling up its abilities as it becomes better understood how to get more out of fewer parameters.
by avg_dev on 3/23/23, 12:59 PM
Some things bother me about this article and the facts that warranted it in the first place. I have very similar reservations with regard to the Bing chatbot launch (detailed below - just substitute Bing for Bard).
> Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s value fell by $100 billion overnight.
1. LaMDA is not a new project. They are clearly releasing a chatbot based on it to compete with MS/OpenAI. But if the tech existed some time ago (and as a developer I realize that iteration, time, and attention tend to improve quality), why didn't they release it before? I am guessing - baselessly - that they saw the quality of the output was quite poor (often factually wrong, for instance). But now that a competitor is threatening their market share, quality metrics go out the window - revenue is once again king.
2. Funny how quickly the company's valuation dropped. It is because, as shareholders, we rely so much on short-term gains; there is no focus on follow-through or long-term consequences. I certainly don't believe that capitalism is inherently bad, and I am a huge fan of competition, but I think the way we practice it leaves much to be desired.
> “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
>
> But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”
3. This quote, and the mention in the article of the "Google It" button below the Bard chat, the three versions of Bard's response ("drafts", FTA), the quote by the Google product director that says "There’s the sense of authoritativeness when you only see one example”... I could not agree more with what Margaret Mitchell has to say (I have never heard of her before, to my knowledge). Isn't it very clear by now that users don't have the time or attention to acknowledge implications? We are busy and we don't have the energy. If we see it on a screen, we take it as fact, copy and paste it as needed, and proceed with the knowledge that we have gleaned from said "facts". I suspect that if anybody really knows how misinformation works and the effects it has on society, it's data analysts at Google search. But the almighty revenue stream dictates that they push this not-necessarily-factual-information-producing-tool anyway.
If I can find the time, I'm quite curious to read that article that just pre-dates Bing chat and Bard about the dangers of using LLMs in search engines.