this post was submitted on 18 Jan 2024
206 points (93.6% liked)

Technology

Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

[–] captainastronaut@seattlelunarsociety.org 3 points 8 months ago (3 children)

But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?

Too little too late, Sam. 

[–] pearsaltchocolatebar@discuss.online -1 points 8 months ago (2 children)

Yes on everything but drone strikes.

A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.

[–] Deceptichum@kbin.social 4 points 8 months ago (1 children)

So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?

[–] pearsaltchocolatebar@discuss.online 2 points 8 months ago (1 children)

I'm not sure why you think that's how they would work.

[–] Deceptichum@kbin.social 3 points 8 months ago (1 children)

Well, it's simple: who do you think should make the life-or-death decision?

[–] pearsaltchocolatebar@discuss.online -3 points 8 months ago* (last edited 8 months ago) (1 children)

The computer, of course.

A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds/thousands of times per second. A human's reaction speed is 0.2 seconds, which is a hell of a long time in a crash scenario.
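
Rough numbers, to make that concrete (the 100 Hz loop rate below is an assumed figure for illustration, not a spec for any real vehicle):

```python
# Back-of-the-envelope comparison of distance traveled before any braking
# begins, human driver vs. autonomous controller. The 0.2 s human reaction
# time is the figure quoted above; the 100 Hz control loop is an assumption.

speed_mps = 30.0          # ~108 km/h, an assumed highway speed
human_reaction_s = 0.2    # reaction time quoted in the comment
loop_period_s = 1 / 100   # hypothetical 100 Hz sense-plan-act loop

print(f"human:    {speed_mps * human_reaction_s:.1f} m before braking starts")
print(f"computer: {speed_mps * loop_period_s:.1f} m before braking starts")
# human:    6.0 m before braking starts
# computer: 0.3 m before braking starts
```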

It has a way better chance of a 'life' outcome than a human who's either unaware of the impending crash or stuck in fight-or-flight mode, reacting (likely wrongly) on instinct.

Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.

[–] Potatar@lemmy.world 6 points 8 months ago (1 children)

Are there any pedestrians in your perfectly flowing grid?

[–] pearsaltchocolatebar@discuss.online -4 points 8 months ago (1 children)

Again, a computer can react faster than a human, which means the car can detect a pedestrian and start reacting before a human driver even notices them.
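
To put the pedestrian case in numbers (the speed, braking force, and latencies below are assumptions for illustration, not measurements):

```python
# Hedged illustration of how reaction latency changes total stopping
# distance in a pedestrian scenario. All figures are assumed.

def stopping_distance(speed_mps: float, reaction_s: float, decel_mps2: float) -> float:
    """Distance covered during the reaction delay plus the braking phase."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

v = 13.9  # ~50 km/h urban speed (assumed)
a = 7.0   # hard braking in m/s^2 (assumed)

print(f"human (0.2 s latency):     {stopping_distance(v, 0.2, a):.1f} m")
print(f"computer (0.01 s latency): {stopping_distance(v, 0.01, a):.1f} m")
# human: 16.6 m, computer: 13.9 m -- a couple of metres that can decide
# whether the car stops short of the pedestrian or not.
```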

[–] Icalasari@kbin.social 2 points 8 months ago

Plus, there will be far fewer variables once humans aren't allowed to drive outside of race tracks and the like. The reason fully AI cars are a bad idea right now is all the chaotic human drivers who react in nonsensical ways. E.g., a pedestrian steps out; the sensible thing is for the AI to stop the car. But then the driver behind decides to swerve around while blaring the horn, sees the pedestrian, panics, turns into the AI car, and causes an accident. Without the human drivers, all the vehicles can communicate with each other and react in appropriate ways, adjusting how they drive from miles back.
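
A toy sketch of that coordination idea (the message format and the speed taper are invented for illustration; real V2V protocols are far richer):

```python
# One car broadcasts a hard-brake event; followers farther upstream ease
# off progressively instead of slamming their own brakes.

from dataclasses import dataclass

@dataclass
class BrakeEvent:
    position_m: float  # where on the road the braking happened

def adjusted_speed(car_position_m: float, cruise_mps: float, event: BrakeEvent) -> float:
    """Scale speed down smoothly the closer a follower is to the event."""
    gap_m = event.position_m - car_position_m
    if gap_m <= 0:
        return cruise_mps                 # event is behind this car
    factor = min(gap_m / 2000.0, 1.0)     # back to full speed ~2 km upstream (assumed)
    return cruise_mps * factor

event = BrakeEvent(position_m=5000.0)
for pos in (4900.0, 4000.0, 3000.0, 0.0):
    print(f"car at {pos:>6.0f} m -> {adjusted_speed(pos, 30.0, event):.1f} m/s")
# Cars closest to the event slow hardest; traffic miles back barely changes.
```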

[–] LWD@lemm.ee 3 points 8 months ago* (last edited 8 months ago) (1 children)
[–] pearsaltchocolatebar@discuss.online -1 points 8 months ago (1 children)

Teslas aren't self-driving cars.

[–] LWD@lemm.ee 3 points 8 months ago* (last edited 8 months ago) (1 children)
[–] pearsaltchocolatebar@discuss.online 0 points 8 months ago (1 children)

Well, yes. Elon Musk is a liar. Teslas are by no means fully autonomous vehicles.

[–] LWD@lemm.ee 2 points 8 months ago* (last edited 8 months ago) (1 children)
[–] wikibot@lemmy.world 1 points 8 months ago

Here's the summary for the Wikipedia article you mentioned in your comment:

No true Scotsman, or appeal to purity, is an informal fallacy in which one attempts to protect their generalized statement from a falsifying counterexample by excluding the counterexample improperly. Rather than abandoning the falsified universal generalization or providing evidence that would disqualify the falsifying counterexample, a slightly modified generalization is constructed ad-hoc to definitionally exclude the undesirable specific case and similar counterexamples by appeal to rhetoric. This rhetoric takes the form of emotionally charged but nonsubstantive purity platitudes such as "true", "pure", "genuine", "authentic", "real", etc. Philosophy professor Bradley Dowden explains the fallacy as an "ad hoc rescue" of a refuted generalization attempt.

to opt out, PM me 'optout'. article | about

[–] halva@discuss.tchncs.de -1 points 8 months ago

Drive cars? As advanced cruise control, yes. Strike drones? No, though in practice it changes little, since humans can bomb civilians just fine themselves. Power grids and tsunami forecasting? Yes and yes.

If we're not talking about LLMs, which are basically computer slop made from books and sites pretending to be a brain, then using a statistical-analysis tool to crunch a shitload of data (optical, acoustic, and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.
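
For instance, a classic seismology trigger like STA/LTA (short-term average over long-term average) is exactly this kind of statistical tool. A minimal sketch, with assumed window lengths and threshold (real early-warning pipelines are far richer):

```python
import numpy as np

def trailing_mean(x: np.ndarray, n: int) -> np.ndarray:
    """Causal moving average: value at index i averages the last n samples."""
    return np.convolve(x, np.ones(n) / n)[: len(x)]

def sta_lta(signal: np.ndarray, sta_len: int, lta_len: int) -> np.ndarray:
    """Ratio of short-term to long-term mean absolute amplitude."""
    energy = np.abs(signal)
    ratio = trailing_mean(energy, sta_len) / np.maximum(trailing_mean(energy, lta_len), 1e-9)
    ratio[:lta_len] = 0.0  # ignore warm-up before the long window fills
    return ratio

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2000)             # background noise
trace[1200:1400] += rng.normal(0.0, 8.0, 200)  # injected "event"

ratio = sta_lta(trace, sta_len=20, lta_len=400)
print("trigger at sample", int(np.argmax(ratio > 4.0)))  # threshold of 4 is assumed;
# fires shortly after the event onset at sample 1200
```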