this post was submitted on 30 Jul 2024
959 points (97.9% liked)

Technology

If you've watched any Olympics coverage this week, you've likely been confronted with an ad for Google's Gemini AI called "Dear Sydney." In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone.

"I'm pretty good with words, but this has to be just right," the father intones before asking Gemini to "Help my daughter write a letter telling Sydney how inspiring she is..." Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be "just like you."

I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, "Dear Sydney" presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.

Inserting Gemini into a child's heartfelt request for parental help makes it seem like the parent in question is offloading their responsibilities to a computer in the coldest, most sterile way possible. More than that, it comes across as an attempt to avoid an opportunity to bond with a child over a shared interest in a creative way.

(page 3) 50 comments
[–] dezmd@lemmy.world 6 points 3 months ago

Dear Sydney...

[–] jet@hackertalks.com 6 points 3 months ago

https://www.youtube.com/watch?v=wfEEAfjb8Ko

Now we need the machine to write a handwritten letter and sign it, to complete the effect of genuine human connection.

[–] DharkStare@lemmy.world 4 points 3 months ago

This and the Nike ad have been the worst ads during the Olympics.

[–] Lucidlethargy@sh.itjust.works 4 points 3 months ago* (last edited 3 months ago)

I saw a movie the other day, and all of the ads before the previews were about AI. It was awful, and I hated it. One of them was this one, and yes... Terrible.

[–] ealoe@ani.social 3 points 3 months ago

I saw a similar ad in theaters this week. It started by asking Gemini to write a breakup letter, and I thought my friend next to me was going to cry because she's going through a breakup, but then right at the end it goes "...to my old phone, because the Pixel 9 is just so cool!"

Gemini is awesome; I use it all the time for applied algebra and coding. But using it to replace human emotions is not awesome. Google can do better.

[–] gentooer@programming.dev 2 points 3 months ago

I've been watching quite a lot of Olympics coverage on TV, but I've never seen any ads. Is there an official Olympics TV channel with these ads?

[–] iAvicenna@lemmy.world 2 points 3 months ago* (last edited 3 months ago)

As a non-native English speaker, this is actually one of the better uses of LLMs for me. When I need to write in "fancier" English, I ask an LLM and use its output as a starting point (sometimes I end up making heavy modifications, sometimes light ones). This is one of the more logical uses of an LLM: it is good at language (unlike trying to get it to solve math problems).

And I don't agree with the view that just because you use LLM output as a starting point, it stops being personal.

[–] Emmie@lemm.ee 3 points 3 months ago* (last edited 3 months ago)

The problem with this is that, effectively, you aren't speaking anymore; the bot speaks for you. And if the person on the other side no longer reads (the bot reads for them), then we are in a very bizarre situation where all sorts of crazy things start to happen that never did before.
You will "say" something you didn't mean at all; they will "read" something that wasn't there. Language itself, communication, collapses.

If everyone relies on it, this will lead to a total paralysis of society, because the tool is flawed in a way that is not immediately apparent until it is everywhere, processing its own output and choking on the garbage it produces.

It wouldn't be so bad if the flaw were immediately apparent, but the tool seems so helpful and nice. What could go wrong?
