The problem goes beyond what can be solved with technology.
Take a look at this https://thehill.com/blogs/congress-blog/healthcare/477710-its-time-to-provide-needed-reform-to-the-organ-donation-system/
Of course, you can't solve a lack of Congressional action with AI algorithms while tens of thousands of organs just get lost or go bad. The algorithm isn't going to take an organ and put it back in the jar if someone's spilled it.
This post is very over the top; the reality of transplantation is far more nuanced.
You are asserting that one protected group is being harmed without trying to get at the causal factor behind the disparity. The cancer/age issue is tricky. In this response to the initial critique of the score, Raj et al. point out that young people with cancer have more stable livers but don’t fare well after early liver transplantation because of cancer recurrence:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01308-9/fulltext
Raj et al. also highlight that the new score reduced mortality on liver transplant waitlists. Medical decision-making is always multiobjective.
Note also that the TBS critics don’t propose throwing away risk scores; they propose a different risk score, MELD. Qu and Schmelzle point out that MELD has its own issues for cancer patients:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01309-0/fulltext
Prioritizing liver transplantation is a wicked problem. In a resource-constrained system, some sort of prioritization rule is inevitable. It’s worth emphasizing that the old rule for transplantation was also a statistical rule! You seem to be complaining about the new rule solely because it has more variables.
Or perhaps you are arguing here that all rules are bad. That’s fine, but that’s a call for anarchism, not an indictment of AI.
And I’d urge you all to consider that calling Cox proportional hazards regression “AI” is itself a kind of snake oil.
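For anyone who hasn’t fit one: a Cox proportional hazards model is 1972-vintage survival statistics, a few lines in any stats package. A minimal sketch using the lifelines library and its bundled example dataset (not the transplant data, which isn’t public):

```python
# Fitting a Cox proportional hazards model -- classic survival
# statistics, not deep learning. Uses lifelines' bundled example data.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # time-to-event data: follow-up weeks, event flag, covariates

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()  # a hazard ratio per covariate -- that's the whole model
```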
Wow! The 5-year truncation of the dependent variable explains the age bias so well. You have indelibly impressed upon me the need to check for target-construct mismatch!
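To make the target-construct mismatch concrete, here’s a toy calculation with entirely invented numbers: once the target is survival capped at 5 years, a young patient with decades of true benefit and an older patient with a few years of true benefit can receive identical scores.

```python
# Toy illustration of the 5-year truncation (all numbers invented):
# capping the survival target erases any benefit beyond the cap.

def truncated_benefit(years_with_tx: float, years_without_tx: float,
                      cap: float = 5.0) -> float:
    """Survival benefit as seen by a model whose target is capped at `cap`."""
    return min(years_with_tx, cap) - min(years_without_tx, cap)

# Hypothetical young patient: 39 years of true benefit
print(40.0 - 1.0, truncated_benefit(40.0, 1.0))  # 39.0 4.0

# Hypothetical older patient: 5 years of true benefit
print(6.0 - 1.0, truncated_benefit(6.0, 1.0))    # 5.0 4.0 -- same score!
```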
What does it have to do with utilitarian ethics? The algorithm is just bad. Any regular data scientist or ML engineer could have spotted the mistake. Unfortunately, mistakes do happen, so every algorithm's performance has to be observed and validated.
They had a serious study observing and validating that the algorithm was in practice reducing waiting-list times and increasing 5-year survival.
I know enough data scientists and ML engineers to say I wouldn't expect most of them to catch something like this. The number that was supposed to go down went down. The number that was supposed to go up went up.
There's no incentive to make a case against results like that -- not only are you not a team player, you are literally wishing death upon the very sick people being saved by this wonderful new algorithm.
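For what it's worth, the audit that would surface this isn't exotic either: break the headline metric down by subgroup instead of reporting only the aggregate. A sketch with invented numbers, where the overall rate improves while one age band quietly gets worse:

```python
# Sketch of a subgroup audit (all numbers invented): the aggregate metric
# moves the "right" way while one age band quietly gets worse.
import pandas as pd

outcomes = pd.DataFrame({
    "age_band":     ["<40", "<40", "40-60", "40-60", ">60", ">60"],
    "policy":       ["old", "new", "old",   "new",   "old", "new"],
    "survival_5yr": [0.72,  0.65,  0.68,    0.74,    0.55,  0.66],
    "n":            [120,   110,   400,     420,     480,   470],
})

# Headline metric: patient-weighted 5-year survival under each policy
totals = (
    outcomes.assign(weighted=outcomes.survival_5yr * outcomes.n)
    .groupby("policy")[["weighted", "n"]]
    .sum()
)
print(totals.weighted / totals.n)   # "new" beats "old" overall...

# ...but the per-subgroup table shows who paid for the improvement
print(outcomes.pivot(index="age_band", columns="policy", values="survival_5yr"))
```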
Important to draw attention to this. Article 22 of the GDPR protects against solely automated decision-making in the EU, which would prohibit this practice. Unfortunately, it's no longer relevant for the UK after Brexit.