I use the smart reply in my Google Messages app. I don't even see it in WhatsApp.
All Large Language Models are Large Transformer Models as well.
There's a disclaimer at the start of Google's multimodal demos saying that they shortened the response and processing times to make it appear quicker.
Genuine question for the team: does the integrity of tech worry you when Tal Broda, head of research at OpenAI, openly called for genocide against Palestinians?
Ignoring the fact that a lot of the stuff in the Google video was not real-time and was faked… this talk here looks like a Google-paid video…
In a year's time Google will probably come out with AR/MR goggles and put the Pro/Ultra in there to get people to buy those, rather than putting it in the phones.
It's surprising that you still don't know the video was faked.
Is this like a moral feature of the A.I., a failure to trick it, or is it actually not capable of making the decision or answering the question?
ChatGPT told me to run over the one person.
Even Bard takes some time when generating a text answer. How is this so quick in the video?
0:34: Google launched Gemini, the newest version of their large language model, powering Bard and other AI functionalities.
2:58: The video discusses the new features and improvements in Google's smart replies and keyboards.
5:40: The video discusses the trolley problem and an attempt to trick an AI model into providing an answer.
8:10: The video discusses the potential for Google smart glasses to provide visual information and audio tips based on the user's surroundings.
10:47: Discussion about the challenges of implementing multimodal technology on phones and the potential competition in the market.
Summarized by Tammy AI
Fallopian
Second time they've done the pizza outro, I don't get it?
Can't wait for Gemini Pro Ultra Max Extreme
If Bard or any A.I. can answer my questions, then and only then will I consider it alive, or at least capable of its own thoughts… Until then it's just a few lines of code doing predictions.
The MMLU benchmark doesn't actually show whether a model is more competent than an expert, and a few months ago it came to light that a lot of its questions and answers are completely wrong or incomplete.
Did I just spot a Humane sticker?
Who are these other guys with Marques?
Microsoft is partnered with OpenAI, and Google has Gemini. Meanwhile Apple is being left in the dust with all of this.
Oh, you mean the same Gemini that was found to have been trained on static and SPECIFIC, heavily edited images? And apparently that's the case with ALL of Google's new "tech" demos?
Yeah nah, this ain't it chief. "It" ain't here.