
22 Sep 2020, 1:36 p.m.
On Wed, 29 Jul 2020 11:35:20 +1200, I wrote:
Well, if the raw data being used to train the “algorithms” are biased against those social groups, then naturally the decisions made by those “algorithms” will be similarly biased.
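To make that point concrete, here is a toy sketch (pure illustration, not any real system; it assumes numpy and scikit-learn are available): bake a group bias into the training labels, and the fitted model faithfully reproduces that bias, even when comparing two applicants of identical ability.

    # Toy demonstration: label bias in training data propagates into
    # the trained model's decisions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)      # two social groups, 0 and 1
    skill = rng.normal(0, 1, n)        # the legitimately relevant feature

    # Historical labels: same underlying skill threshold, but group 1
    # was approved less often. The bias lives in the labels, not the features.
    approved = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

    model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

    # Score two applicants with identical skill, differing only in group:
    same_skill = np.array([[0, 0.5], [1, 0.5]])
    print(model.predict_proba(same_skill)[:, 1])  # group 1 scores lower

Nothing about the model is malicious; it has simply learned, quite accurately, to imitate the biased decisions it was shown.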
Another example comes from Twitter’s new autocropping algorithm <https://www.theregister.com/2020/09/21/twitter_image_cropping_ai/>. The article links to a rather dramatic, if NSFW, test: two versions of an image of a certain prominent US politician, one the original, the other with his anatomy distorted in a particular way. Two composites are then built from exactly the same component images, just stacked in opposite orders. In each case, Twitter’s algorithm unerringly zooms in on ... guess which version ...
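For anyone who wants to reproduce the shape of that experiment without the NSFW material, here is a minimal sketch, assuming Pillow and numpy. The file names are placeholders, and crude_saliency_crop() is my own crude stand-in: Twitter’s actual cropper is a trained neural saliency model, which is not public.

    from PIL import Image
    import numpy as np

    def stack_vertically(top_path, bottom_path, gap=600):
        """Build one tall composite from two images, separated by blank
        space, so an autocropper is forced to choose between them."""
        top, bottom = Image.open(top_path), Image.open(bottom_path)
        width = max(top.width, bottom.width)
        canvas = Image.new("RGB", (width, top.height + gap + bottom.height), "white")
        canvas.paste(top, ((width - top.width) // 2, 0))
        canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
        return canvas

    def crude_saliency_crop(img, crop_height=400):
        """Crude stand-in for a saliency model: keep the horizontal strip
        with the most pixel variance. (Twitter's real cropper is a trained
        neural saliency predictor, not this.)"""
        rows = np.asarray(img.convert("L"), dtype=float).var(axis=1)
        best = max(range(img.height - crop_height),
                   key=lambda y: rows[y:y + crop_height].sum())
        return img.crop((0, best, img.width, best + crop_height))

    # Same two component images, just stacked in opposite orders;
    # the file names are placeholders.
    a = stack_vertically("version_1.jpg", "version_2.jpg")
    b = stack_vertically("version_2.jpg", "version_1.jpg")
    crude_saliency_crop(a).save("crop_a.jpg")
    crude_saliency_crop(b).save("crop_b.jpg")

The point of swapping the order is to rule out a positional explanation: if the cropper picks the same component image in both orderings, its preference is driven by the content, not by where the image happens to sit in the composite.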