this post was submitted on 05 Sep 2024
131 points (92.3% liked)

Technology

[–] Drusas@fedia.io 26 points 2 months ago (4 children)

Can someone explain like I'm five how Waymo has robotaxis with no driver behind the wheel, while automated driving such as Tesla's is not yet able to do the same?

Is it just that Waymo has mapped a small area really, really well? What's the difference? Why is Tesla so bad at it but Waymo is able to do it?

[–] techwithjake@lemm.ee 26 points 2 months ago (1 children)

Going off what fishpen0 said, Waymo vehicles actually carry extra sensors (lidar and radar) to detect things, so they can "understand" their surroundings much better than Tesla cars can with just cameras.

I've ridden in Waymos and the ride is smooth. After the initial "OMG! There's no driver!" you kind of forget about it. You get to your (limited-area) destination safely and without much hassle.

I can go more in depth if ya want.

[–] Wanderer@lemm.ee 1 points 2 months ago* (last edited 2 months ago) (4 children)

Humans can drive with just vision.

Tesla is doing it the hard way. Their approach is cars with vision only, driving the same way humans do. If humans can do it, why can't computers? And since the cars have more than two cameras, in theory they should be better than human drivers. Once it's solved, they could instantly drive anywhere humans can.

Waymo has taken an easier route: a lot of detailed mapping plus an assortment of additional sensors. Even doing it the easy way, Waymo has only recently achieved this. Turns out it's really hard, probably harder than everyone, including the experts, expected.
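As a rough illustration of why the extra sensors help (a toy sketch, not either company's actual stack): a lidar or radar return gives a direct, low-noise range measurement that can be fused with a camera's much noisier depth estimate. The numbers below are hypothetical.

```python
# Toy illustration: fusing a noisy camera depth estimate with a direct
# lidar range measurement using inverse-variance weighting. The fused
# estimate has lower uncertainty than either sensor alone, which is the
# basic argument for carrying more than just cameras.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Combine two independent measurements by inverse-variance weighting."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical numbers: camera depth is noisy (variance 4.0 m^2),
# lidar range is much tighter (variance 0.04 m^2).
camera_depth, camera_var = 31.0, 4.0   # metres
lidar_range, lidar_var = 29.8, 0.04    # metres

fused, fused_var = fuse(camera_depth, camera_var, lidar_range, lidar_var)
print(f"camera only: {camera_depth:.1f} m (variance {camera_var})")
print(f"fused:       {fused:.1f} m (variance {fused_var:.3f})")
```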

But with advances in computing and things like LLMs, Tesla is catching up. Who knows how long that will take, though? I always thought Waymo was doing the right thing, so I'm biased.

Edit: this fucking website, I swear. I answered the question and got downvoted for it. What more do you people want from me?

[–] IllNess@infosec.pub 9 points 2 months ago (1 children)

Human vision also comes with a brain that does a lot of automation, like judging distance and watching for danger with real-time reaction speed. Night vision is usually better for most people too. The brain also combines vision with sound, so it can detect things outside the field of view. Eyes already have a wide range of view, and the head can move around accurately on top of that. Above all, focus is what the human brain is best at. Cameras may see 360°, but years of data built into the subconscious have taught a human driver what to look out for.

[–] ContrarianTrail@lemm.ee 3 points 2 months ago (1 children)

Human vision also comes with a brain that does a lot of automation, like judging distance and watching for danger with real-time reaction speed.

To be fair, the reaction time of a self-driving vehicle is orders of magnitude shorter than that of even the best human driver.

This is what leads to many of the moral questions about autonomous vehicles: whereas a human may not have time to react when an accident is about to happen, a self-driving car does. The laws of physics may prevent it from stopping in time, but it may still have the ability to choose who to hit: the kid or the grandmother.
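A back-of-envelope sketch of that point: a shorter reaction time removes the distance travelled before braking starts, but the braking distance itself is fixed by physics. The reaction times and deceleration below are assumed illustrative values, not measurements.

```python
# Stopping distance = distance travelled during the reaction time plus
# braking distance. Assumptions for illustration: ~1.5 s reaction for an
# attentive human, ~0.1 s for an automated system, braking at 0.8 g.

G = 9.81  # m/s^2

def stopping_distance(speed_ms, reaction_s, decel_g=0.8):
    """Reaction distance plus braking distance, in metres."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_g * G)

speed = 50 * 1000 / 3600  # 50 km/h expressed in m/s (~13.9 m/s)
print(f"human (1.5 s reaction): {stopping_distance(speed, 1.5):.1f} m")
print(f"AV    (0.1 s reaction): {stopping_distance(speed, 0.1):.1f} m")
# Both still need roughly 12 m of braking once the brakes are applied; the
# faster reaction only removes the ~20 m travelled while a human reacts.
```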

[–] IllNess@infosec.pub 2 points 2 months ago (1 children)

Reports of the safety of AVs are overstated when you consider that they are limited to city driving, they rarely go on the highway, they follow city speed limits which are lower than highway speeds, people are more aware of AVs, and during their trial runs they had an actual human in the car to correct them.

On average, AVs are safer, especially when you consider that some bad drivers never get better, people drink, people get sleepy, people distract themselves, and young drivers lack experience. But the average driver with their full faculties would do better in tests based solely on reactions.

If you looked at the accident reports and took out drivers who were on a substance, younger than 25 or older than 70, distracted by something like their phones or other people in the car, not following the law, or driving while emotional, then the stats would be pretty close.

Overall I do believe AVs are better for the world, because peak performance from an average driver is rare.
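For what it's worth, the like-for-like comparison being asked for here boils down to normalising crashes per mile within the same driving context. A sketch with entirely made-up numbers:

```python
# Made-up numbers purely to illustrate the normalisation, not real data.

def per_million_miles(crashes, miles):
    return crashes / (miles / 1_000_000)

# Hypothetical: an AV fleet that drives only low-speed city miles, versus
# human drivers whose miles are a mix of city and highway driving.
av_city_rate    = per_million_miles(crashes=40, miles=10_000_000)
human_city_rate = per_million_miles(crashes=60, miles=10_000_000)
human_all_rate  = per_million_miles(crashes=90, miles=20_000_000)

print(f"AV, city only:       {av_city_rate:.1f} crashes per million miles")
print(f"Humans, city only:   {human_city_rate:.1f} crashes per million miles")
print(f"Humans, all driving: {human_all_rate:.1f} crashes per million miles")
# The honest comparison is city-vs-city; mixing contexts (and ignoring the
# confounders listed above) can shift the conclusion either way.
```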

[–] ContrarianTrail@lemm.ee 1 points 2 months ago (1 children)

But the average driver with their full faculties would do better in tests based solely on reactions.

React faster than a computer would? I cannot imagine how that would be the case.

[–] IllNess@infosec.pub 1 points 2 months ago (1 children)

If it were a simple flag, you would be correct that a computer will react faster than any human. But when you factor in everything else, like constant analysis of surroundings, decision making, and accounting for physical limitations, then yes. It's the reason why Waymo cars move so slowly.

If a person standing on the sidewalk, hidden behind an object, far from any crosswalk or traffic signal, jumped 2 feet in front of a car going 25 mph, the average driver with their full faculties would do better than a Waymo.
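To make the "not a simple flag" point concrete: an AV's effective reaction time is the sum of a whole pipeline. The stage latencies below are hypothetical, just to show how they add up.

```python
# Hypothetical latency budget (illustrative numbers, not measured from any
# real vehicle) showing that an AV's "reaction time" is a whole pipeline,
# not a single flag check.

pipeline_ms = {
    "sensor capture + transfer": 50,
    "perception (detect/track objects)": 60,
    "prediction (where will they go?)": 30,
    "planning (choose a trajectory)": 40,
    "actuation (brakes start to bite)": 150,
}

total = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"{stage:<38} {ms:>4} ms")
print(f"{'total':<38} {total:>4} ms")
# Roughly 0.3 s end to end in this sketch: still faster than a typical
# human's 1 to 1.5 s, but far from instantaneous, and every stage can err.
```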

[–] ContrarianTrail@lemm.ee 1 points 2 months ago

Well yeah, right now that may still be the case, but I was mostly thinking about the "true" self-driving cars of the future. It seems obvious to me that they would vastly outperform human drivers on literally everything. Just like a true AGI would.

[–] Strykker@programming.dev 2 points 2 months ago* (last edited 2 months ago) (2 children)

You apparently haven't seen the video of an FSD Tesla going full speed through fog toward a train crossing with an active train.

The car's display didn't even indicate that it thought something was in front of it, and it would have happily driven right into the side of the train if the driver hadn't taken over at the last moment. (The driver was an idiot for using FSD in fog to begin with.) But it shows the cameras currently can't handle reduced visibility well; they saw the fog and just decided it was open road or clear sky.

[–] Wanderer@lemm.ee 1 points 2 months ago

I don't see how that goes against anything I have said? That just supports what I said if anything.

[–] fruitycoder@sh.itjust.works 1 points 2 months ago

Makes me wonder if other human senses would be necessary for that tbh. Like if the train crossing has no lights, the horn and vibration of the train would be needed to replicate how people don't drive into trains.

[–] Redfugee@lemmy.world 1 points 2 months ago (1 children)

A human is not just a computer with a camera.

[–] Wanderer@lemm.ee 2 points 2 months ago
[–] ContrarianTrail@lemm.ee 1 points 2 months ago (1 children)

Not only that, but as far as I know, other companies are still relying on human-written code, whereas Tesla has gone with neural nets. If it turns out that manually coding how to handle every possible variation of traffic scenario is an impossible task, those companies would essentially have to start from scratch, giving Tesla a massive lead for adopting AI so much earlier. Of course, it's a gamble; things could go the other way too. But considering the leap FSD made from version 1.3 to 1.4, when they switched to neural nets, I'm rather confident they're on the right track.
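A toy contrast of the two approaches (not Tesla's or anyone else's actual code): hand-written rules need an explicit branch for every scenario someone anticipated, while a learned policy maps observations to controls and is changed by retraining rather than by adding branches.

```python
# Toy contrast, for illustration only.

def handwritten_policy(scene):
    # Every situation needs an explicit branch someone wrote and maintains.
    if scene["light"] == "red":
        return "brake"
    if scene["pedestrian_ahead"]:
        return "brake"
    if scene["lead_car_distance_m"] < 10:
        return "brake"
    return "cruise"  # anything not anticipated falls through to here

class LearnedPolicy:
    """Stand-in for a neural network: observations in, controls out."""

    def __init__(self, weights):
        self.weights = weights  # produced by training, not written by hand

    def __call__(self, camera_frames):
        # In a real system this would be a forward pass through the network;
        # behaviour changes by retraining on data rather than adding branches.
        return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}
```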

[–] ForgotAboutDre@lemmy.world 2 points 2 months ago

A non-deterministic system is dangerous. A deterministic system with flaws can be better: the flaws can be identified, understood, and corrected, and they are more likely to show up in testing.

Machine learning is nearly always going to be non-deterministic. If they then use continuous training, the situation only gets worse.

If you use machine learning because you can’t understand how to solve the problem, then you’ll never understand how the system works. You’ll never be able to pass a basic inspection test.
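A sketch of that inspection argument (hypothetical code, just to illustrate): a deterministic rule can be pinned down by a test that stays meaningful, while a continuously retrained model can change its answer for the same input between versions.

```python
# A fixed, deterministic rule can be verified once and stays verified; a
# continuously retrained model can return a different answer for the same
# input after every update, so yesterday's test says little about today's
# deployed behaviour.

def rule_based_gap_ok(gap_m, speed_ms):
    """Deterministic rule: keep at least a 2-second following gap."""
    return gap_m >= 2.0 * speed_ms

def test_rule_based_gap_ok():
    assert rule_based_gap_ok(gap_m=30.0, speed_ms=14.0) is True
    assert rule_based_gap_ok(gap_m=20.0, speed_ms=14.0) is False
    # Passes forever unless someone deliberately changes the rule.

class RetrainedModel:
    """Stand-in for a continuously trained model."""

    def __init__(self, version):
        self.version = version

    def gap_ok(self, gap_m, speed_ms):
        # The decision boundary depends on whatever data the current version
        # was trained on; it can shift between versions with no code change.
        threshold = 2.0 if self.version < 2 else 1.6
        return gap_m >= threshold * speed_ms
```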

[–] ContrarianTrail@lemm.ee -1 points 2 months ago (1 children)

I'm not sure what you mean by suggesting Tesla is bad at it. Have you looked at any recent videos of Tesla FSD driving in cities? It's not flawless, and neither is Waymo, but claiming it's bad is far from the truth. Most people seem to be basing their opinion of FSD on outdated information. It has come a long way. It will reliably take you from your home to the grocery store and back with zero driver interventions. Nowadays it's almost boring to watch videos about FSD because it is so good.

[–] masterspace@lemmy.ca 12 points 2 months ago (1 children)

Tesla FSD has killed multiple people.

[–] ContrarianTrail@lemm.ee -1 points 2 months ago (2 children)

And it will keep killing people even after it surpasses the most skilled human driver. What's your point?

If we replaced every single car in the US with a self-driving vehicle that was a 10x safer driver than the average human, there would still be 11 deaths every single day. Does that mean it's unsafe and we should go back to human drivers and 110 daily deaths?

[–] masterspace@lemmy.ca 1 points 2 months ago (1 children)

There is no evidence that Tesla's FSD is 10x safer than a human driver, nor particularly strong reason to believe that it will get there using just cameras that are worse than the human eye.

Waymo on the other hand, actually has the safety data to back up a 10x claim, if not higher.

[–] ContrarianTrail@lemm.ee 4 points 2 months ago* (last edited 2 months ago) (2 children)

So if we replaced every single car in the US with Waymo's vehicles, daily deaths from traffic accidents would drop from 110 to 11. That's 11 news articles every day to use as evidence of how self-driving cars are "not safe" because Waymo has killed multiple people.

That's the absurdity my comment tries to highlight. It's all relative. Pointing to individual accidents is not, in itself, proof that something is unsafe. This applies to Tesla FSD as well.

[–] masterspace@lemmy.ca 2 points 2 months ago* (last edited 2 months ago) (1 children)

Fair point in the abstract, but in this scenario Waymo has killed zero people while developing self-driving technology, whereas Tesla has already killed several. The deaths also weren't caused by random, unavoidable happenstance, but by driving full speed into trucks and medians.

It's entirely possible that by the time both are actually ready for prime time and are both 10x safer than the average human driver, Waymo's software will have killed zero people and Tesla's software will have killed several.

[–] ContrarianTrail@lemm.ee 4 points 2 months ago (1 children)

Both will lead to people getting killed eventually. That's a near-unavoidable fact of reality. Better not to let perfect be the enemy of good. The key is that fewer and fewer people are dying and getting injured.

[–] masterspace@lemmy.ca 2 points 2 months ago* (last edited 2 months ago) (1 children)

This is a bit of a false equivalency.

There is zero reason to think that Waymo's software, which has been rolled out slowly and incrementally to new areas and relies on cameras, radar, and lidar, will have the same fatality rate as Tesla's FSD software, which just got pushed out to anyone with a Tesla and relies on cameras alone.

More to the point, we still don't know if Tesla FSD can actually outperform a human. It is, again, based on cameras that are worse than the human eye.

[–] ContrarianTrail@lemm.ee 2 points 2 months ago* (last edited 2 months ago) (1 children)

More to the point

This whole conversation so far has entirely missed the point.

The fact that a self-driving car company has gotten people killed is a moot point. Even if we had a self-driving car that was a 100x safer driver than humans, it would still get people killed. Saying "this 100-times-safer-than-human car company has gotten multiple people killed" doesn't mean anything. Human drivers get 110 people killed every single day in the US alone. That's the starting point, not zero people getting killed. The only thing that matters here is being a better driver than a human. Not perfect - better.

[–] masterspace@lemmy.ca 1 points 2 months ago

More to the point, we still don't know if Tesla FSD can actually outperform a human. It is, again, based on cameras that are worse than the human eye.

This whole conversation so far has entirely missed the point.

The only thing that matters here is being a better driver than a human. Not perfect - better.

Not sure if you read the above?

[–] technocrit@lemmy.dbzer0.com -1 points 2 months ago (1 children)

Why not zero intentional murders?

[–] ContrarianTrail@lemm.ee 3 points 2 months ago

Let's hear your plan then.

[–] technocrit@lemmy.dbzer0.com -2 points 2 months ago* (last edited 2 months ago)

We shouldn't be consciously murdering people so that suburbanites can drive around in a huge metal cage with two sofas, a stereo system, HVAC, micro-plastic tires, slave-produced resources, exhaust/energy, etc.

Instead we should ban cars and replace them with readily available infrastructure for walkers, bikers, wheelchairs, and LEVs that's sustainable, healthy, affordable, pleasant, efficient, cheap, etc.