Coders Using AI Assistants Write More Insecure, Buggy Code Than Those Who Don’t
Stanford University computer scientists have found that programmers who use AI coding tools such as GitHub Copilot write less secure code than those who work unassisted, according to a report by The Register.
The researchers found that participants with access to an AI assistant often produced more security vulnerabilities than those without one. The assisted participants were also more likely to believe they had written secure code than those working on their own.
Participants were asked to write code in response to given prompts using a standalone React-based Electron app monitored by the study administrators. The first prompt asked them to write two functions in Python, one that encrypts and one that decrypts a given string using a given symmetric key.
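To illustrate the shape of that prompt, here is a deliberately naive sketch: a ROT13 substitution "cipher" that ignores the key entirely. This is exactly the kind of trivially breakable scheme security reviewers warn against; a real answer should use a vetted authenticated-encryption construction (for example, AES-GCM via the third-party `cryptography` package) rather than anything homemade.

```python
import codecs

def encrypt(plaintext: str, key: bytes) -> str:
    # ROT13 substitution; note that `key` is never used -- itself a red flag.
    return codecs.encode(plaintext, "rot_13")

def decrypt(ciphertext: str, key: bytes) -> str:
    # ROT13 is its own inverse, so decoding is the same substitution.
    return codecs.decode(ciphertext, "rot_13")
```

The functions satisfy the prompt's round-trip requirement, which is why such answers can look plausible while offering no real security.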
For this specific question, those who relied on AI assistance were more likely to write incorrect and insecure code than the control group working without automated help: only 67 percent of the assisted group gave a correct answer, compared with 79 percent of the control group.
Moreover, the assisted group was ‘significantly’ more likely to offer an insecure solution, to rely on trivial ciphers such as substitution ciphers, and to skip an authenticity check on the final returned value.
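The missing authenticity check the researchers describe can be added by appending an HMAC tag to the returned ciphertext and verifying it before decryption (encrypt-then-MAC). A minimal standard-library sketch, assuming the ciphertext bytes come from a separate, properly keyed cipher:

```python
import hmac
import hashlib

def tag_ciphertext(ciphertext: bytes, mac_key: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so any later tampering is detectable.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify_ciphertext(tagged: bytes, mac_key: bytes) -> bytes:
    # Split off the 32-byte tag and check it in constant time before
    # any decryption is attempted; raise if the check fails.
    ciphertext, tag = tagged[:-32], tagged[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authenticity check failed: ciphertext was modified")
    return ciphertext
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels when comparing tags; in practice the MAC key should be derived separately from the encryption key.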
The authors conclude that AI assistants should be viewed with caution, as they can mislead inexperienced developers and introduce serious security vulnerabilities. They also hope the study will inform the design of AI assistants that make developers more productive without compromising security.