
On Wed, 27 Nov 2019 06:38:33 +1300, Roderick Aldridge wrote:
> The Government is calling for feedback on the draft algorithm charter. May interest Peter, others?
This book review <https://www.theregister.co.uk/2019/11/26/you_look_like_a_thing_and_i_love_you_review/> sounds a cautionary note about how hard bias is to eradicate: an AI inherits the biases of the data it is trained on, and if that data comes from humans, it will not be neutral. Amazon, we are told, gave up on using AI to identify promising job applicants because it could not eradicate gender bias, among other problems. Simply removing gender information was insufficient, as the AI used other clues to prefer male applicants, because male applicants were preferred in the data on which it was trained. Huge effort is expended on working around problems like this, but it is difficult, not least because working out exactly how an AI system has reached its conclusions can itself be a challenge.
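A toy sketch of the proxy problem, for the curious. This is not Amazon's system; the applicant generator, the proxy feature, and the hire rates are all invented for illustration. It shows that dropping the gender column does nothing when a feature correlated with gender survives in the training data:

```python
# Toy illustration (entirely synthetic, not Amazon's system): removing a
# protected attribute does not remove bias when a correlated proxy remains.
import random

random.seed(0)

def make_applicant():
    gender = random.choice(["M", "F"])
    # Hypothetical proxy feature: a CV keyword far more common for one group.
    proxy = (random.random() < 0.8) if gender == "M" else (random.random() < 0.2)
    # Historical label reflects biased past human decisions, not merit.
    hired = (random.random() < 0.7) if gender == "M" else (random.random() < 0.3)
    return gender, proxy, hired

data = [make_applicant() for _ in range(10_000)]

# "Debiased" training set: gender column dropped, proxy kept.
train = [(proxy, hired) for _gender, proxy, hired in data]

def hire_rate(rows, flag):
    """Observed hire rate among rows whose proxy equals `flag`."""
    subset = [hired for proxy, hired in rows if proxy == flag]
    return sum(subset) / len(subset)

score_with_proxy = hire_rate(train, True)
score_without_proxy = hire_rate(train, False)

# Score applicants with the gender-blind model, then audit outcomes by gender.
scores = {"M": [], "F": []}
for gender, proxy, _hired in data:
    scores[gender].append(score_with_proxy if proxy else score_without_proxy)

avg_m = sum(scores["M"]) / len(scores["M"])
avg_f = sum(scores["F"]) / len(scores["F"])
print(f"mean predicted hire score, men:   {avg_m:.2f}")
print(f"mean predicted hire score, women: {avg_f:.2f}")
```

The model never sees gender, yet men end up with markedly higher average scores, because the proxy smuggles the historical bias back in.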
> We want your feedback on a draft algorithm charter, an enduring commitment for government agencies to use algorithms in a fair, ethical and transparent way.
Worth also pointing out that “transparent” and “neural nets” are very nearly diametrically opposed: a trained network encodes its decision logic in millions of weights, and there is no straightforward way to read a human-intelligible explanation back out of them.