About the Author: Waveform Clips

20 Comments

  1. There's a disclaimer at the start of Google's multimodal demos saying that they shortened the response and processing times to make the model appear quicker

  2. Genuine question for the team: does the integrity of tech worry you, given that Tal Broda, head of research at OpenAI, openly called for genocide against Palestinians?

  3. 0:34: 🚀 Google launched Gemini, the newest version of their large language model, powering Bard and other AI functionalities.
    2:58: 📱 The video discusses the new features and improvements in Google's smart replies and keyboards.
    5:40: 🚃 The video discusses the trolley problem and an attempt to trick an AI model into providing an answer.
    8:10: 👓 The video discusses the potential for Google smart glasses to provide visual information and audio tips based on the user's surroundings.
    10:47: 📱 Discussion about the challenges of implementing multimodal technology on phones and the potential competition in the market.
    Summarized by TammyAI

  4. Only if Bard or any A.I. can answer my questions will I consider it alive, or at least capable of its own thoughts… Until then it's just a few lines of code doing predictions.

  5. The MMLU AI test doesn't actually show whether a model is more competent than an expert, and a few months ago it came to light that many of its questions and answers are completely wrong or incomplete

  6. Oh, you mean the same Gemini that was found to have been trained on static, SPECIFIC, heavily edited images? And apparently that's the case with ALL of Google's new "tech" demos?

    Yeah nah, this ain't it chief. "It" ain't here.

Comments are closed.