Discussion about this post

Greg G

Getting models to avoid expressing opinions feels like the wrong strategy to me. The models still have the same weights internally, so their treatment of those topics will still carry the same underlying slant.

The problem becomes analogous to the challenges we're seeing with media organizations. Organizations don't explicitly state their point of view, but their articles still reflect it. Seeing a particular point of view in pieces that are claimed to be objective is part of what is eroding public confidence in these sources. Also, any point on the spectrum will seem "biased" to many people, so there's no way to avoid this issue other than to disclose the basis for a particular output.

On the flip side, I do not necessarily want a hamstrung, "both sides" presentation from a model. I want the actual conclusion. Given how many mundane topics are now political contests in our society, ruling out political opinions, if followed too far, results in a crippled model. Should a model be able to express an opinion on whether Covid vaccines are effective, and to what extent? That's now a political question. So is whether nuclear power helps mitigate climate change. So is whether we should build market-rate housing. To the extent any company succeeds in making its model avoid opining on topics like these, other models become more useful in comparison, which could have the unintended consequence of driving more people to the more opinionated models than would otherwise use them.

Nick Potkalitsky

Clearest analysis of Gen AI bias to date. Love the three-level breakdown. This should help disambiguate general claims of bias. I love your overall mission to de-escalate polarizing rhetoric. Part of me wonders if sound research will really have an impact, knowing the way things work in this country and our media. But I sure am rooting for you!

