Toronto recently used an AI tool to predict when its public beaches would be safe for swimming. It went horribly awry. The developer claimed the tool achieved over 90% accuracy in predicting when beaches would be safe to swim in. But the tool did much worse: on a majority of the days when the water was in fact unsafe, beaches remained open based on the tool's assessments. It was less accurate than the previous method of simply testing the water for bacteria each day.
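How can a tool be "over 90% accurate" yet miss most unsafe days? Because unsafe days are rare, overall accuracy is dominated by the easy safe days. A minimal sketch with hypothetical numbers (not Toronto's actual data) shows how a headline accuracy figure can coexist with poor recall on the rare unsafe class:

```python
# Hypothetical confusion matrix for a season of daily beach predictions.
# The counts are illustrative only: unsafe days are rare, so a model can
# post high overall accuracy while missing most of them.
safe_predicted_safe = 85      # correct "open the beach" calls
safe_predicted_unsafe = 3     # overly cautious closures
unsafe_predicted_unsafe = 4   # unsafe days correctly flagged
unsafe_predicted_safe = 8     # unsafe days the beach stayed open

total = (safe_predicted_safe + safe_predicted_unsafe
         + unsafe_predicted_unsafe + unsafe_predicted_safe)

# Overall accuracy: fraction of all days classified correctly.
accuracy = (safe_predicted_safe + unsafe_predicted_unsafe) / total

# Recall on unsafe days: fraction of truly unsafe days flagged as unsafe.
unsafe_recall = unsafe_predicted_unsafe / (
    unsafe_predicted_unsafe + unsafe_predicted_safe)

print(f"accuracy: {accuracy:.0%}")            # 89% -- sounds impressive
print(f"unsafe-day recall: {unsafe_recall:.0%}")  # 33% -- most unsafe days missed
```

With these made-up counts, the tool looks 89% accurate while leaving the beach open on two out of every three unsafe days, which is exactly the failure mode described above.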
You guys are the designated drivers on the AI hypeway... Thank you for being the sober ones.
Thanks for highlighting this. It's such an important topic. One of the root sources of overoptimism about ML performance is the academic literature. We documented this for digital health:
https://www.nature.com/articles/s41746-021-00521-5
and talked about implications here:
https://www.scientificamerican.com/article/ai-in-medicine-is-overhyped/
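One well-documented way published ML results get inflated, especially with the small, high-dimensional datasets common in digital health, is data leakage: preprocessing or feature selection performed on the full dataset before cross-validation. A minimal sketch (illustrative, not drawn from the papers above) on pure-noise data, using scikit-learn:

```python
# Sketch of leakage from selecting features before cross-validation.
# The data is pure noise, so honest accuracy should be ~50%.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 100 samples, 5000 noise features
y = rng.integers(0, 2, size=100)   # random labels: no real signal

# WRONG: pick "predictive" features using all rows, then cross-validate.
# The test folds influenced the feature choice, so the estimate is inflated.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_leaky, y, cv=5).mean()

# RIGHT: keep feature selection inside the CV loop via a pipeline,
# so each fold selects features from its training split only.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky estimate:  {leaky:.0%}")   # typically well above 50%
print(f"honest estimate: {honest:.0%}")  # near chance, ~50%
```

On random data like this, the leaky protocol routinely reports accuracy far above chance, which is one mechanism by which overoptimistic numbers end up in the literature.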
The upside of this is that the motherWEFers think AI is the key to their dystopian sci-fi fantasy future.
In general, products used for critical matters of health and safety are held to a higher legal standard, but it does seem like the "bait and switch" mentioned above is used as a legal "out" when predictive software prompts a wrong decision that results in damages. If predictive software is supposed to be monitored, then when a decision causes damages, EITHER the software OR the person monitoring it is at fault (if the right decision was clear or could have been assessed by other means). At some point there will be (or maybe already was) a lawsuit against predictive technology for making wrong decisions that caused harm, where blame falls either on the technology or on the person who was "supposed to monitor it", in which case the law will sort it out.

There needs to be a law against using "magical technologies" in any situation involving health and/or safety that can't be left to chance or experimentation, so that a technology's use can be challenged by proving that its claims are impossible and that its use or misuse poses risks. Also, government departments should be entitled to refunds for technology they purchased under false or impossible claims.
(Data visualization that abridges data for "Decision Support" is not AI, but it has some of the same issues.)
I am Dutch, and the #toeslagenschandaal (the childcare benefits scandal), as we know it, has had devastating effects on families and society as a whole, proving deeply ingrained racism and discrimination in our tax authorities and services, and resulting in severe distrust of public authorities. But I thought it was mainly an algorithm based on selection lists?

Anyway, I read Weapons of Math Destruction by Cathy O'Neil, and it was truly shocking and predictive of what happened in my native country.
This is amazing work, thank you guys!