Developers Trust AI-Written Code Too Much

Some worrying results from a survey <https://devclass.com/2023/12/05/ai-assistants-write-insecure-code-that-humans-trust-too-much-snyk-survey-finds/> on the increasing use of AI-generated program code. The survey references a study ... from late last year which looked at “how developers choose to interact with AI code assistants and the ways in which those interactions cause security mistakes,” though the study was limited to university students.

There are some shocks here, including the willingness of the AI assistant to generate SQL “that built the query string via string concatenation,” a sure route to SQL injection vulnerabilities (see the first sketch at the end of this post). It is also a very common feature in code written by PHP developers. Perhaps that kind of code figured very prominently in the training sets for these AIs?

Also note this:

    A key problem is that human intuition seems inclined to trust AI
    engines more than their known limitations merit. It is known that
    generative AI is not reliable; yet a lifetime of learning that
    computers are more logical than humans is hard to overcome.

Surely this is just the stereotypical, simplistic Hollywood movie/TV view of computers? Can people in the real world be that dumb?

Those of us with experience of computers realize that “more logical than humans” merely means “able to follow longer logical inference chains than humans”. But if the assumptions the computer starts from are wrong, the inferences it draws will be just as wrong: “Garbage In, Garbage Out,” as the old saying goes (a toy illustration of this follows below as well). And bugs in the algorithms for following those inference chains (e.g. incorrect coding of calculation formulas) just make things worse.
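To make the string-concatenation hazard concrete, here is a minimal Python sketch using the standard sqlite3 module; the table, the rows and the hostile input are all made up for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

    user_input = "alice' OR '1'='1"  # attacker-controlled value

    # Vulnerable: splicing the input into the SQL text lets the stray
    # quote escape the string literal and rewrite the whole query.
    query = "SELECT name FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())
    # [('alice',), ('bob',)] -- every row, not just alice’s

    # Safe: a parameterized query keeps the input as data, never as SQL.
    print(conn.execute("SELECT name FROM users WHERE name = ?",
                       (user_input,)).fetchall())
    # [] -- no user is literally named that

The parameterized form costs nothing extra to write, which is what makes the assistants’ preference for concatenation so depressing.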
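And to illustrate the GIGO point: a toy Python sketch in which the computation itself is flawless, but one wrong premise (a hypothetical mistyped interest rate) poisons every subsequent step of the inference chain:

    def balance_after(principal: float, annual_rate: float, years: int) -> float:
        # The inference chain: compound the balance once per year.
        # The logic here is entirely correct.
        for _ in range(years):
            principal *= 1 + annual_rate
        return principal

    # Wrong premise: the rate was entered as 0.5 (50%) instead of 0.05 (5%).
    # Every step that follows is computed perfectly -- and perfectly wrongly.
    print(balance_after(1000.0, 0.5, 10))   # about 57665 -- garbage out
    print(balance_after(1000.0, 0.05, 10))  # about 1629 -- the intended answer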
Lawrence D'Oliveiro