
29 Jul 2020, 11:35 a.m.
On Tue, 28 Jul 2020 23:51:48 +1200, David McNab wrote:
> This is fueled by evidence that some algorithmic outcomes are inadvertently correlated with different treatment of different social groups.
Well, if the raw data used to train the “algorithms” are biased against those social groups, then naturally the decisions those “algorithms” make will be similarly biased. Nobody is seriously questioning the scientific validity of such a basic point, are they?
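To make the mechanism concrete, here is a minimal sketch (not from the original post; all names, thresholds, and numbers are hypothetical) showing how biased historical labels propagate into a trained model. Two groups are equally qualified on average, but the past decisions encoded in the labels held one group to a stricter standard; a classifier fit to those labels reproduces the disparity:

```python
# Hypothetical illustration: a model trained on biased historical labels
# inherits that bias. All thresholds and numbers below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical true-qualification distributions.
group = rng.integers(0, 2, size=n)          # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, size=n)        # same distribution for both groups

# Biased historical labels: group 1 needed a higher skill to be approved.
threshold = np.where(group == 1, 0.5, 0.0)  # hypothetical bias in past decisions
label = (skill > threshold).astype(int)

# Train on the biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The model reproduces the bias: equally skilled groups get different rates.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2%}")
```

Running this prints a noticeably lower approval rate for group 1, even though both groups were drawn from the same skill distribution. Note that simply dropping the group column does not necessarily fix this: any feature correlated with group membership can act as a proxy and leak the same bias back in.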