
This is all correct, but it doesn't make it any less of a real issue, because adding an AI intermediate step to a biased process only makes things worse. It's already hard enough to try to prove or disprove bias in the current system without companies being able to "outsource" the bias to an AI tool and claim ignorance of it.

The reason research like this can still be useful is that the people who write labor laws (and most of the people who vote for them) aren't necessarily going to "understand that the results from any data-based modeling process are a concatenation of the cumulative input data topologies and nothing else"; an academic study that makes a specific claim about what results to expect from using ChatGPT to filter resumes helps people understand without needing domain knowledge.



Bingo. When suits tell us they plan to replace us with LLMs, that means they also plan to absolve themselves of any guilt for the mistakes those systems make, so we should know about those mistakes.



