

There is no human element to this implementation; it is the technology itself malfunctioning. There was no damage, but the system thinks there is.
Let’s make sure we’re building up from the same foundation. My assumptions are:
- Algorithms will make mistakes.
- There’s an acceptable level of error for all algorithms.
- If an algorithm is making too many mistakes, that can be mitigated with human supervision and overrides.
Let me know if you disagree with any of these assumptions.
In this case, the lack of human override discussed in assumption 3 is, itself, a human-made decision that I am claiming is an error in implementing this technology. That is the human element. As management, you can either go on a snipe hunt trying to find an algorithm that is perfect, or you can make sure that trained employees can verify and correct the algorithm when needed. Instead, Hertz management chose a third option - run an imperfect algorithm with absolutely zero employee oversight. THAT is where they fucked up. THAT is where the human element screwed up a potentially useful technology.
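The oversight pattern I'm describing is nothing exotic. Here's a minimal sketch of it in Python (everything here is hypothetical for illustration, not Hertz's actual system): every flag the algorithm raises goes through a trained employee before a customer is ever billed.

```python
def process_flags(flags, human_verify):
    """Route every algorithmic damage flag through a human reviewer.

    `flags` is whatever the algorithm produced; `human_verify` stands in
    for a trained employee's judgment call. Nothing gets billed without
    a confirmation, and wrong flags are overridden instead of charged.
    """
    billed, overridden = [], []
    for flag in flags:
        if human_verify(flag):
            billed.append(flag)       # employee confirmed real damage
        else:
            overridden.append(flag)   # algorithm was wrong; no charge
    return billed, overridden

# The algorithm flagged two cars; the employee confirms only the real one.
billed, overridden = process_flags(
    ["real_scratch", "shadow_on_bumper"],
    human_verify=lambda flag: flag == "real_scratch",
)
print(billed)      # ['real_scratch']
print(overridden)  # ['shadow_on_bumper']
```

The point is that the override step is cheap to build; choosing to skip it is a management decision, not a limitation of the technology.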
I work with machine learning algorithms. You will not, ever, find a practical machine learning algorithm that gets something right 100% of the time and is never wrong. But we don’t say “the technology is malfunctioning” every time it gets something wrong; otherwise, there’s a ton of invisible technology we all rely on in our day-to-day lives that is “malfunctioning”.
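To put rough numbers on why that matters at scale (these figures are hypothetical, just to show the shape of the problem): even a model that's right 99% of the time produces a steady stream of wrong answers at fleet volume.

```python
# Hypothetical numbers: a very good model, applied at fleet scale.
scans_per_month = 100_000   # assumed scan volume, for illustration only
accuracy = 0.99             # excellent by practical ML standards

wrong_calls = round(scans_per_month * (1 - accuracy))
print(wrong_calls)  # 1000 mistaken judgments a month if nobody is checking
```

A thousand mistakes a month is a disaster with zero oversight, and a non-story with a trained employee reviewing flags.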
You’re absolutely right. The technology isn’t perfect if it needs to be implemented with supervision, but it can be good enough to have a role in everyday society.
Great examples are self-checkout lanes, where there’s always an employee watching, and speed cameras, where an officer always reviews and signs off on tickets.
Traffic lights are meant to control traffic, yet you don’t expect them to prevent folks from running red lights. Folks don’t expect them to, because that’s not their role in their implementation - they are meant to be used alongside folks who enforce traffic laws and, in some cases, traffic controllers. This is arguably an example of an implementation done right.
This technology is meant to flag car damage. If the implementation were correct, I would be able to say “folks don’t expect it to be perfect, because that’s not its role in the implementation - it’s meant to be used alongside employees trained to verify damage exists, who can correct the algorithm when needed”, but the implementation in this case is, sadly, bad.
EDIT: hold on, just to make sure we’re on the same page, what is included when you refer to “the technology”? Is it just the hardware needed to produce the detections and the algorithm making them?