More Tesla investigations

Status
Not open for further replies.
No, and they are likely not going to choose the rhubarb in a situation where you could end up dead (pile-up) vs just going off the road. There are so many situations that require driver intuition to know what the right course of action is, even if it would be the "wrong" course of action for AI.
Not sure what you mean in the bolded sentence.
 
Autonomous driving cars do not exist.
The cost of development, exploration, and testing of any product is high. But the promise of autonomous cars is incredible.
And the idea that drivers are better is ludicrous (no offense to anyone).
Can any mind check a thousand things in an instant?

Tesla has billions of miles of real world data and continues to capture more. There is a long way to go; many, many obstacles.
The only constant is change.
Opinions certainly vary and I know I'm a better driver than any car doing it for me. AI will NEVER replace the human mindset regardless of what some wanna believe. IMBHO the more technology advances the more dependent/lazier humans are becoming. Yeah true that some technological advances are daily lifesavers for many, but society in general is just getting too lax. I still find it hilarious how a huge majority of people can't function daily without a **** phone in their hand nearly 24/7. Or those that just have to have smart switches to flip lights or whatever on/off so they don't have to get off that ass and do it...
 
OK. Try looking 360° at once.
The point is, once a computer and its peripherals are programmed, they will react in a known way. There are about 7B people on the earth; do any 2 react in exactly the same way?
Once a program is updated, it can be pushed out to every instance of that program.
How long does it take for, say 100M people to be updated?

Computers are better because they are predictable and accept the changes programmed into them.
Again, a computer will only do what it's programmed to do by the dee dee dee engineers designing them. A chain is only as strong as the weakest link... then when a computerized system leads to destruction who gets the blame?
 
If a person drives enough, in situations where there’s a lot of bad drivers around, he or she will develop almost a sixth sense when something bad/stupid is about to happen. Beat up POS with temporary tag, look out. Person coming up from behind at 25 over & there‘s no passing lane, look out. Light turns green for you in the ‘hood at a busy cross street full of speeding knuckleheads, look out. When Tesla (or any other “self driving car”) can develop that sixth sense, I’ll buy one. Not happening in my lifetime!
 
I’m not sure how I’ve survived to the ripe old age of 59 lol.
The sad part is these youngsters today will never get to experience some of the best things in life because according to those with the gold every single thing we do is gonna kill us. Nor will they ever have the desire and reward to drive halfway across the United States to bring stuff like this home.
(photo attachment)
 
The National Highway Traffic Safety Administration (NHTSA) found that somewhere between 94% and 96% of all motor vehicle accidents are caused by some type of human error.

Bettering human behavior seems a pretty low bar to beat...

Computers produce consistent, predictable results. People? Not so much.
@P10crew posted a pic of a tailgater, which is a dangerous situation. Our Model 3 has a setting on how many car lengths to follow.
 
No, and they are likely not going to choose the rhubarb in a situation where you could end up dead (pile-up) vs just going off the road. There are so many situations that require driver intuition to know what the right course of action is, even if it would be the "wrong" course of action for AI.
Good point. The goal should be to avoid creating these conditions. I would argue that the vast number of these conditions are caused by human error.
An ounce of prevention, as they say.
 
Good point. The goal should be to avoid creating these conditions. I would argue that the vast number of these conditions are caused by human error.
An ounce of prevention, as they say.
The problem is that nature creates the conditions. Whiteouts on the 401 happen, and in areas this can produce black ice. If you see an 18-wheeler beginning to jackknife up ahead and you have zero traction, hammering on the brakes is going to do absolutely nothing except reduce or eliminate control. In that situation, if there's a ditch or field you can head into, versus the pile-up unfolding in front of you (which you will not be able to stop for, and which you are almost guaranteed to be piled into from behind), picking the ditch and avoiding it is arguably the safest course. But AI will choose to just brake hard, on a surface where braking isn't going to stop you, and you will be in a situation where death or serious injury is likely.
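The physics backs this up. A back-of-envelope sketch of braking distance, using the standard formula d = v²/(2μg) and textbook friction coefficients (roughly 0.7 for dry asphalt, 0.1 for black ice — illustrative values, not measurements):

```python
# Braking distance from speed v on a surface with friction coefficient mu:
# d = v^2 / (2 * mu * g). Ignores reaction time and assumes a flat road.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    """Distance in metres to brake to a stop from speed_kmh on friction mu."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * mu * G)

# Assumed illustrative coefficients: dry asphalt ~0.7, black ice ~0.1.
print(round(stopping_distance_m(100, 0.7), 1))  # dry pavement: ~56 m
print(round(stopping_distance_m(100, 0.1), 1))  # black ice: ~393 m
```

On those assumptions, the same 100 km/h stop needs roughly seven times the distance on black ice — which is why "just brake hard" stops being an option once you're on it.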
 
The National Highway Traffic Safety Administration (NHTSA) found that somewhere between 94% and 96% of all motor vehicle accidents are caused by some type of human error.
I can believe that.

Only thing is that computers are going to produce a whole new set of causes of accidents. Because they'll do things that humans would never have done. Things like braking heavily when that half ton crossed in front of my Tesla, or braking when a plastic bag blew across the road (as described by another member).

Will we be safer? Maybe on average. But we may die in a car wreck where an experienced driver will be shouting - No, No, Don't do that!

Flying is overall much safer now than when the good old boys flew by the seat of their pants. But novel kinds of accidents also happen - think of the Air France passenger jet that crashed into the Atlantic ocean after a pitot tube iced over and the autopilot kicked out.
 
The problem is that nature creates the conditions. Whiteouts on the 401 happen, and in areas this can produce black ice. If you see an 18-wheeler beginning to jackknife up ahead and you have zero traction, hammering on the brakes is going to do absolutely nothing except reduce or eliminate control. In that situation, if there's a ditch or field you can head into, versus the pile-up unfolding in front of you (which you will not be able to stop for, and which you are almost guaranteed to be piled into from behind), picking the ditch and avoiding it is arguably the safest course. But AI will choose to just brake hard, on a surface where braking isn't going to stop you, and you will be in a situation where death or serious injury is likely.


Great example. A defensive driver does what he or she can do at the earliest possible moment to avoid calamity.

A computer will trigger the brakes at a specified distance while an experienced human will react sooner. A common situation I encounter is driving on the freeway and seeing brake lights come on well into the distance. At that point my foot is already coming off the gas. The computer or an inexperienced driver will continue at speed until they realize they have to stop.
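The value of reacting early is easy to put in numbers: distance covered before braking even begins scales linearly with the delay. A tiny sketch (the delay values are assumptions for illustration, not measurements of any real system):

```python
def reaction_distance_m(speed_kmh: float, delay_s: float) -> float:
    """Metres travelled at constant speed during the reaction delay,
    before any braking happens."""
    return speed_kmh / 3.6 * delay_s  # km/h -> m/s, then distance = v * t

# At 110 km/h, each second of delay costs ~31 m of closing distance.
early = reaction_distance_m(110, 0.5)  # driver already lifting off the gas
late = reaction_distance_m(110, 2.5)   # reacting only at a fixed trigger point
print(round(late - early, 1))          # extra metres the late reactor burns
```

Two extra seconds of delay at highway speed is roughly 61 m — more than the entire dry-pavement stopping distance.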
 
The National Highway Traffic Safety Administration (NHTSA) found that somewhere between 94% and 96% of all motor vehicle accidents are caused by some type of human error.

Bettering human behavior seems a pretty low bar to beat...

Computers produce consistent, predictable results. People? Not so much.
@P10crew posted a pic of a tailgater, which is a dangerous situation. Our Model 3 has a setting on how many car lengths to follow.
Pretty sure he was towing that pickup, unless GM was building self-driving trucks in the '50s? Clearly nobody behind the wheel…
 
I can believe that.

Only thing is that computers are going to produce a whole new set of causes of accidents. Because they'll do things that humans would never have done. Things like braking heavily when that half ton crossed in front of my Tesla, or braking when a plastic bag blew across the road (as described by another member).

Will we be safer? Maybe on average. But we may die in a car wreck where an experienced driver will be shouting - No, No, Don't do that!

Flying is overall much safer now than when the good old boys flew by the seat of their pants. But novel kinds of accidents also happen - think of the Air France passenger jet that crashed into the Atlantic ocean after a pitot tube iced over and the autopilot kicked out.
Yup, or the 777 debacle where planes flew themselves into the ground. And aircraft have much higher levels of redundancy and development money put into their autopilot systems.
 
The problem is that nature creates the conditions. Whiteouts on the 401 happen, and in areas this can produce black ice. If you see an 18-wheeler beginning to jackknife up ahead and you have zero traction, hammering on the brakes is going to do absolutely nothing except reduce or eliminate control. In that situation, if there's a ditch or field you can head into, versus the pile-up unfolding in front of you (which you will not be able to stop for, and which you are almost guaranteed to be piled into from behind), picking the ditch and avoiding it is arguably the safest course. But AI will choose to just brake hard, on a surface where braking isn't going to stop you, and you will be in a situation where death or serious injury is likely.
No offense, but you are not understanding my point. A computer is much better than a human in your scenario.
What is a computer? An information giving machine based on data and programming.
Imagine a network of real time data inputs (aka cars) connected to a system, which can include weather and road conditions. Inputs can be added as they arise, additional inputs are basically unlimited, algorithms can be modified.

The point is, a computer allows for predictable actions, conformity and information that no human or group of humans can begin to compare to.
I made a career of predictive analytics. Results from my system, honed over years of changes based on what we learned, were far more accurate than C-level executives who were the best in the world at what they do. Why? A critical reason was human error, oftentimes because people did not do what they said (planned, etc.) they would do. A computer can only do what it has been programmed to do.

As an aside, bad data is part of the game...
 
Yup, or the 777 debacle where planes flew themselves into the ground. And aircraft have much higher levels of redundancy and development money put into their autopilot systems.
777 or 737 Max? In the case of the 737 Max, the pilots didn't even know how to simply over-ride the auto-pilot by flicking a switch to "OFF" and just fly it manually - lack of training. This is kind of how humans and cars are being disconnected.
 
No offense, but you are not understanding my point. A computer is much better than a human in your scenario.
What is a computer? An information giving machine based on data and programming.
Imagine a network of real time data inputs (aka cars) connected to a system, which can include weather and road conditions. Inputs can be added as they arise, additional inputs are basically unlimited, algorithms can be modified.

The point is, a computer allows for predictable actions, conformity and information that no human or group of humans can begin to compare to.
I made a career of predictive analytics. Results from my system, honed over years of changes based on what we learned, were far more accurate than C-level executives who were the best in the world at what they do. Why? A critical reason was human error, oftentimes because people did not do what they said (planned, etc.) they would do. A computer can only do what it has been programmed to do.

As an aside, bad data is part of the game...
I'm completely understanding your point, and offence taken. You seem to be struggling with the idea that a human can ever be better in a given situation than a series of algorithms and feedback sensors (and GPS, weather data, etc.), because the human can make the decision before the event unfolds. Your personal anecdotes about execs making strategy aren't similar. It's far more like all the models for climate change: they've all been wrong, every single one, despite having essentially unlimited resources at their disposal, because there are just too many unknowns. Be creative; think of situations where there are simply too many variables for predictive computation to succeed, and where, instead of reacting, a driver intuitively makes the "wrong" choice on purpose because it was the smart one.

I work with computers, remember, you aren't surrounded by a gaggle of technologically illiterate luddites. GIGO doesn't even come into play here, this is intuition and making the "wrong" decision (going for the ditch/field) because it's the better decision than trying to stop in a situation where doing so is impossible.

For the sake of keeping this rooted in reality, take a look at my situation again:

You are driving down the highway and there's a patch of black ice coming into the corner. No weather data tells you that there's black ice on the road; the highway could be clear for miles leading up to it and leading away from it. The transport trailer stepping out in front of you is your only indication that something is about to unfold. At that point, in traffic, even if the truck were part of the same system and this situation could be communicated to others, the clutch of vehicles already within that area has no means of avoiding the collision, because of the surface they are on, and all of them, if automated, just "learned" about that fact as it is happening. Now consider that vehicles closer to the trailer, which weren't aware of the ice prior to applying the brakes, are already sliding; no amount of ABS is going to allow avoidance at this point. The person who saw the trailer stepping out and, instead of braking, already made the decision to head toward the shoulder can avoid the pile-up that's now happening.

Now, yes, vehicles further up the road, still on a tractive surface, can predictively avoid piling into the wreck, so statistically the pile-up could certainly be smaller than if it were all just people driving of their own accord and of varied ability. That seems to be the part of AI you are focusing on: the overall statistical reduction in accidents. Meanwhile you are, I must assume intentionally, dismissing the fact that a good driver would be able to avoid collisions that AI would not, and that there will be new types of collisions that emerge with AI, just as we've seen in aviation, where unintended consequences of automation create novel failure modes that, while addressable, resulted in avoidable fatalities during this "learning" process.
 
A computer will trigger the brakes at a specified distance while an experienced human will react sooner. A common situation I encounter is driving on the freeway and seeing brake lights come on well into the distance. At that point my foot is already coming off the gas. The computer or an inexperienced driver will continue at speed until they realize they have to stop.
Yep, an AI car will never have the level of "spidey sense" that very good and attentive human drivers have. A very good driver should constantly have defensive options and outs running through their head, and they should always be watching and reading what the boneheads around them are doing. Little signs of how people are driving are all around you. This is something I've honed over many years riding motorcycles on the roads.

AI cars have been developed because humans are losing the skill of driving, and people are slowly becoming just passengers behind the wheel. Their inattentiveness (the cell phone being the main culprit) has led to automakers coming up with all these nannies that save them when they don't pay attention to something they should be, namely driving. When the day comes that every car on the road is a 100% self-driving car without any human intervention (i.e., "Johnny Cabs"), and all are controlled as a system, then cars will get along better on the roads.
 
777 or 737 Max? In the case of the 737 Max, the pilots didn't even know how to simply over-ride the auto-pilot by flicking a switch to "OFF" and just fly it manually - lack of training. This is kind of how humans and cars are being disconnected.
You are right, it was the 737 MAX 8, but it wasn't the autopilot causing the issue. This was during takeoff, where autopilot isn't engaged; it was instead the MCAS system, an automated safety system designed to prevent an angle of attack severe enough to induce a stall. This system could be turned off, but would then automatically turn itself back on if the perceived angle issue persisted, so the pilots were in a constant battle with a system they couldn't disable based on the manufacturer's operating instructions.

Now, you are right that apparently there was a series of steps that could be followed to completely disable MCAS, but it would seem, based on the report that I've read, that this was not effective, and so they turned the computer back on, and it immediately tried to "correct" the craft again, causing the crash.

Boeing claimed, at the time, that they were working on an update to correct the issue.
 