In the semi incident, they blamed the crash on the autopilot being able to see under the trailer and therefore missing it. This clearly refutes that.
Tesla claims that it's safer than a human, but that claim is based only on the conditions where they allow it to operate, which exclude twisty mountain roads (where, according to local signage, 9 in 10 rural accidents occur on curves).
Tesla claims that the driver is still ultimately in control, but as industrial automation has shown, the more removed the operator is from regular operation, the slower they are to parse the alarm screens, identify the issue, and take the correct corrective action.
I second (or third, or fourth) the statement that this junk shouldn't be tested on public roads (Google cars or otherwise).
And it should be either full autopilot or off.
And Tesla, or the workers who wrote the algorithms, should be charged as appropriate for every accident they have played a part in. (Oz law is that if I see my wife heading for an accident and grab the wheel, I own 50% of the outcome.) It should be exactly the same here.