NZ Govt Unveils “Algorithm Charter”

The Charter <https://www.theregister.com/2020/07/28/new_zealand_algorithm_charter/> requires Government departments that sign on to it to “maintain transparency” about the algorithms they use, but does not make them reveal the actual details of these algorithms. Nor does it explain what it means by an “algorithm”.

This is just a bit of virtue signalling.

It's important to be aware that in common parlance, outside of programming and mathematics, the word “algorithm” has increasingly taken on a dark, conspiratorial meaning: the connotation is of deliberate mechanisms for inflicting all kinds of invisible, secret, oppressive controls on the masses. This is fueled by evidence that some algorithmic outcomes are, inadvertently, statistically correlated with different treatment of different social groups. Just look the other way when the tinfoil hatters weave in the correlation-causation fallacy, and you've got a real live “plot against humanity”.

Cheers,
David

On Tue, 28 Jul 2020 23:51:48 +1200, David McNab wrote:
This is fueled by evidence that some algorithmic outcomes are, inadvertently, statistically correlated with different treatment of different social groups.
Well, if the raw data being used to train the “algorithms” are biased against those social groups, then naturally the decisions made by those “algorithms” will be similarly biased. Nobody is seriously questioning the scientific validity of such a basic point, are they?
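To make the point concrete, here is a minimal sketch in Python (with entirely made-up data and a hypothetical “approval” decision): train a classifier on historical decisions that applied a harsher threshold to one group, and the model faithfully reproduces that penalty, even though nobody programmed the bias in explicitly.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B (made up)
    merit = rng.normal(0.0, 1.0, n)     # the attribute we *want* to decide on

    # Historical decisions applied a harsher effective threshold to group B.
    approved = (merit - 0.8 * group > 0).astype(int)

    X = np.column_stack([merit, group])
    model = LogisticRegression().fit(X, approved)

    # Identical merit, different group label -> very different approval rates.
    probe = np.full(1000, 0.4)
    for g in (0, 1):
        Xg = np.column_stack([probe, np.full(1000, g)])
        print(f"group {g}: predicted approval rate {model.predict(Xg).mean():.2f}")

Run it and group 0 is approved essentially always, group 1 essentially never, at the same merit score. The “algorithm” did exactly what it was trained to do; the bias came in with the data.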

On Wed, 29 Jul 2020 11:35:20 +1200, I wrote:
Well, if the raw data being used to train the “algorithms” are biased against those social groups, then naturally the decisions made by those “algorithms” will be similarly biased.
Another example comes from Twitter’s new autocropping algorithm <https://www.theregister.com/2020/09/21/twitter_image_cropping_ai/>. There is a link to a rather dramatic, if NSFW, test, where two versions of an image of a certain prominent US politician are presented, one original, the other with his anatomy distorted in a particular way. Two composites are created, with exactly the same component images, just ordered differently. In each case, Twitter’s algorithm unerringly zooms in on ... guess which version ...
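For anyone curious how this kind of cropping works in general: the usual technique is to compute a saliency map over the image and crop to the window with the highest total saliency. The sketch below illustrates that general approach using OpenCV's spectral-residual saliency estimator; Twitter's actual system uses a trained neural saliency model, so treat this as an illustration only, not their implementation.

    # Requires: pip install opencv-contrib-python
    import cv2

    def autocrop(image, crop_w, crop_h, stride=8):
        """Return the crop_w x crop_h window with the highest total saliency."""
        sal = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, sal_map = sal.computeSaliency(image)
        if not ok:
            raise RuntimeError("saliency computation failed")
        integral = cv2.integral(sal_map)  # summed-area table: O(1) window sums
        best_score, best_xy = -1.0, (0, 0)
        for y in range(0, image.shape[0] - crop_h + 1, stride):
            for x in range(0, image.shape[1] - crop_w + 1, stride):
                s = (integral[y + crop_h, x + crop_w] - integral[y, x + crop_w]
                     - integral[y + crop_h, x] + integral[y, x])
                if s > best_score:
                    best_score, best_xy = s, (x, y)
        x, y = best_xy
        return image[y:y + crop_h, x:x + crop_w]

    # Usage: cropped = autocrop(cv2.imread("photo.jpg"), 400, 225)

Whatever the saliency model happens to score highest wins the crop, so any bias baked into that model shows up directly in which part of the image gets shown.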