this post was submitted on 21 Jun 2024
50 points (80.5% liked)


A month after he left OpenAI amid disagreements regarding the safety of the company's products, Dr. Ilya Sutskever announced a new venture called Safe Superintelligence (SSI). “Building safe superintelligence (SSI) is the most important technical problem of our time,” read the new company's announcement, also signed by fellow co-founders Daniel Gross and Daniel Levy. “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence. SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”

The founders of SSI have deep ties to Israel. Sutskever (37) was born in the USSR before immigrating to Jerusalem at the age of 5. He began his academic studies at the Open University but completed all his degrees at the University of Toronto, where he earned a doctorate in machine learning under the guidance of Prof. Geoffrey Hinton, one of the early pioneers in the field of artificial intelligence (AI).

[–] NevermindNoMind@lemmy.world 23 points 4 months ago (2 children)

While I appreciate the focus and mission, kind of I guess, you're really going to set up shop in a country literally using AI to identify air strike targets and handing the AI the decision over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

And Israel is pretty authoritarian, given recent actions against their supreme court and their banning of journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh, and the offices of both have been targeted in Gaza). You really think the right-wing Israeli government isn't going to co-opt your "safe super AI" for its own purposes?

Oh, then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity, as determined by just about every NGO and international body that exists.

So Ilya is a shit head is my takeaway.

[–] Daqu@lemm.ee 7 points 4 months ago

safe /for us/ super AI
