Folks in this thread are trying to use human driving as some sort of expectation-setting threshold for Waymo. If humans can’t be perfect, why should we expect machines to be?
We don’t expect humans to be perfect. When a human breaks the law we punish them. When they are sued civilly and found liable, we take their money/property.
There’s also a sense of self-preservation that guides human decision making that doesn’t guide computers.
Until we account for the agency that comes along with accountability, and for the self-preservation instincts that keep humans from driving someone else onto a light-rail track, it’s a false equivalence to say we can’t expect machines to be as good as humans. We should expect exactly that, so long as we’re giving them human agency without human accountability, and while they still lack any sense of preserving themselves or others.
Oof, psychopath much?
There, fixed it.