There is a case to be made that factor analysis (which IQ testing is based on) was the AI of the 20th century. It was the most powerful statistical tool of its time (just like deep learning is today), it needed a lot of data, increasing the data increased the accuracy, and if you used biased data, you got biased outcomes. There is one big difference though: you kind of have to be targeted in your bias if you're using factor analysis, while with AI the bias reflects society. This is because you can effectively train your AI with terabytes of data, but with factor analysis, if you do that, the biased results you wanted will probably become statistically insignificant.
This leads me to believe that the scientific racism behind intelligence testing was quite deep. And the people behind this pseudo-science were as intent on making their racist "discoveries" as the people behind ChatGPT are on making sure their model doesn't reflect racist views.
> There is one big difference though: you kind of have to be targeted in your bias if you're using factor analysis, while with AI the bias reflects society
Good point. I agree, factor analysis is a great tool, but it can easily end up showing what the researcher is looking for instead of deeper truths. The problem is that often the factors used aren't causal, just correlated, which often seems to be the case for race-based stuff (from the little I've looked at).
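A minimal sketch of that correlated-vs-causal point, with entirely made-up data: a single hidden confounder drives several observed variables, and a one-factor analysis (approximated here by the top eigenvector of the correlation matrix) finds a strong common factor among them. Nothing in the math says the factor is causal, or even real; the variable names and numbers below are purely illustrative.

```python
import numpy as np

# Hypothetical data: one unobserved confounder drives four observed variables.
rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)  # unobserved common cause
observed = np.column_stack([
    confounder + rng.normal(scale=0.5, size=n)  # each variable = confounder + noise
    for _ in range(4)
])

# One-factor extraction via the top eigenpair of the correlation matrix
# (a crude stand-in for a proper factor-analysis fit).
corr = np.corrcoef(observed, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigh: ascending eigenvalues
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])

# All four variables load heavily on a single factor. A researcher is free
# to name that factor whatever fits their theory; the analysis itself never
# distinguishes "common cause" from "mere correlation".
print(np.round(np.abs(loadings), 2))
```

The point of the sketch: the factor structure here is an artifact of how the data was generated, and the same loadings would appear whether the shared variance came from a genuine causal mechanism or from a confounder the researcher never measured.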
> And the people behind this pseudo-science
I think it's probably pseudo-science to the extent that most social science is pseudo-science, in that the results may be based on scientific methodology, but are only useful in the context of whatever social theory they've made up.