No offense, but you are not understanding my point. A computer is much better than a human in your scenario.
What is a computer? An information-giving machine based on data and programming.
Imagine a network of real-time data inputs (i.e., the cars themselves) connected to a system that can also incorporate weather and road conditions. Inputs can be added as they arise, additional inputs are essentially unlimited, and the algorithms can be modified over time.
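To make that concrete, here's a minimal, hypothetical sketch of the kind of extensible input network I mean. The names are made up purely for illustration; the point is that new feeds bolt on without redesigning anything:

    # Hypothetical sketch of an extensible input network.
    # Names are invented for illustration; the point is that new
    # feeds (cars, weather, road sensors) can be added as they arise.
    from typing import Callable, Dict

    class RoadConditionSystem:
        def __init__(self) -> None:
            self.sources: Dict[str, Callable[[], dict]] = {}

        def add_source(self, name: str, poll: Callable[[], dict]) -> None:
            # Inputs can be added at any time, in unlimited number.
            self.sources[name] = poll

        def snapshot(self) -> dict:
            # One consolidated real-time view across every registered input.
            return {name: poll() for name, poll in self.sources.items()}

    system = RoadConditionSystem()
    system.add_source("car_417", lambda: {"speed_kph": 98, "wheel_slip": 0.02})
    system.add_source("weather", lambda: {"temp_c": -1.0, "precip": "none"})
    print(system.snapshot())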
The point is that a computer allows for predictable actions, consistency, and a breadth of information that no human or group of humans can begin to match.
I made a career of predictive analytics. Results from my system, honed over years of changes based on what we learned, were far more accurate than those of C-level executives who were the best in the world at what they do. Why? A critical reason was human error, often because people did not do what they said (planned, etc.) they would do. A computer can only do what it has been programmed to do.
As an aside, bad data is part of the game...
I understand your point completely, and offence taken. You seem to be struggling with the idea that a human can ever be better in a given situation than a series of algorithms and feedback sensors (plus GPS, weather data, etc.), because a human can make the decision before the event unfolds. Your personal anecdotes about execs making strategy aren't comparable. It's far more like the models for climate change: they've all been wrong, every single one, despite having essentially unlimited resources at their disposal, because there are just too many unknowns. Be creative: think of situations where there are simply too many variables for predictive computation to succeed, where a human, instead of being reactive, intuitively makes the "wrong" choice on purpose because it's the smart one.
I work with computers, remember; you aren't surrounded by a gaggle of technologically illiterate Luddites. GIGO doesn't even come into play here. This is about intuition and making the "wrong" decision (going for the ditch/field) because it's a better decision than trying to stop in a situation where stopping is impossible.
For the sake of keeping this rooted in reality, take a look at my situation again:
You are driving down the highway and there's a patch of black ice coming into a corner. No weather data tells you it's there; the highway could be clear for miles leading up to it and leading away from it. The transport trailer stepping out in front of you is your only indication that something is about to unfold. At that point, in traffic, even if the truck were part of the same system and the situation could be communicated to the other vehicles, the clutch of vehicles already in that area has no means of avoiding the collision because of the surface they are on, and all of them, if automated, just "learned" about the ice as it is happening. Vehicles closer to the trailer that weren't aware of the ice before applying the brakes are already sliding; no amount of ABS is going to allow avoidance at this point. The person who saw the trailer stepping out and, instead of braking, had already decided to head for the shoulder can avoid the pile-up that's now happening.
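Just to put rough numbers on why "no amount of ABS" can save the cars already on the ice, here's a back-of-envelope stopping-distance check using the standard formula d = v^2 / (2 * mu * g). The friction coefficients are ballpark textbook assumptions, not measurements:

    # Rough stopping-distance check: d = v^2 / (2 * mu * g).
    # mu values are ballpark assumptions (dry asphalt ~0.7, black ice ~0.1).
    G = 9.81  # gravitational acceleration, m/s^2

    def stopping_distance_m(speed_kph: float, mu: float) -> float:
        v = speed_kph / 3.6  # km/h -> m/s
        return v ** 2 / (2 * mu * G)

    print(f"Dry asphalt at 100 km/h: ~{stopping_distance_m(100, 0.7):.0f} m")  # ~56 m
    print(f"Black ice at 100 km/h:   ~{stopping_distance_m(100, 0.1):.0f} m")  # ~393 m

Roughly seven times the distance. By the time the system detects the slip, the geometry of the collision is already decided.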
Now, yes, vehicles further up the road, still on a tractive surface, can predictively avoid piling into the wreck, so statistically the pileup could certainly be smaller than if it were all just people driving on their own accord and of varied ability. That seems to be the part of AI you are focusing on: the overall statistical reduction in accidents. But you are, I must assume intentionally, dismissing the fact that a good driver would be able to avoid collisions that AI would not, and that there will be new types of collisions that emerge with AI, just as we've seen in aviation, where unintended consequences of automation created novel failure modes that, while addressable, resulted in avoidable fatalities during the "learning" process.