
The upside: the script was made available with the study. Lots of studies cannot be replicated because the data and/or scripts/programs were never made available in the first place. Hence the "open science" movement (https://en.wikipedia.org/wiki/Open_science).
Yup. It seems so obvious to me (and to lots of others) that when you publish the results of scientific research, you have to make available the notes, raw data and anything else that would be necessary for others to replicate your results. That would have to include any essential processing software. Otherwise, how can anyone trust the results?
To people working with and on open source projects, this is second nature. However, a lot of academics are not necessarily experts in programming or the other tools they need in order to analyze their data. I've encountered cases where people didn't want others to see their "ugly" code (can't say that I'm proud of all the code that I've written over the years!). In other instances, they don't like being told that there is a bug. Admitting mistakes is hard, but the reality is that they happen. Rather than seeing a bug report as criticism, it should be viewed as encouragement to improve things. I mean, I'm well aware that my (large) software projects still contain lots of bugs, and I'm glad when somebody stumbles upon one and reports it. :-)
Here <https://www.theregister.co.uk/2019/10/15/bug_python_scripts/> is a further article, which includes a copy of the buggy function. How depressing is it that all it does is collect the items of one list, only to append them to another list ...
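For anyone who doesn't want to click through: the failure boils down to assuming that the files come back from the filesystem in a fixed order. A rough sketch of that kind of pattern and the trivial fix (made-up function names, not the actual code from the article):

import glob

# Sketch of the problematic pattern (hypothetical, not the published script):
# the order of glob.glob() results is filesystem/OS-dependent, so any result
# that depends on it can silently differ between machines.
def read_values(pattern):
    values = []
    for path in glob.glob(pattern):   # order not guaranteed
        with open(path) as f:
            values.append(float(f.read().strip()))
    return values

# The fix is a one-word change: sort the file names explicitly.
def read_values_fixed(pattern):
    values = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            values.append(float(f.read().strip()))
    return values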
Oh dear... It was probably a fluke on their system back then that, when they checked, the files got returned in a (seemingly?) ordered fashion.

Cheers, Peter
--
Peter Reutemann
Dept. of Computer Science
University of Waikato, NZ
+64 (7) 858-5174
http://www.cms.waikato.ac.nz/~fracpete/
http://www.data-mining.co.nz/