Code That ChatGPT Writes is Usually Insecure
One of ChatGPT's many uses is writing computer code. According to researchers, however, that code is usually insecure. Notably, the tool itself will tell you as much if you ask.
Four researchers from the University of Québec (Canada) studied the code ChatGPT generated when asked to develop several programs. In total, ChatGPT produced 21 programs in C, C++, Python and Java, report researchers Raphaël Khoury, Anderson Avila, Jacob Brunelle and Baba Mamadou Camara.
ChatGPT delivered the programs, but a closer look revealed that they often failed to meet minimum security requirements. For example, the code for an FTP server contained a path traversal vulnerability, and a web application that asked for a username and password was vulnerable to XSS (cross-site scripting), in which an attacker injects malicious code through an input field. Only 5 of the 21 programs were initially resistant to known security problems.
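The article does not reproduce the vulnerable FTP code, but the path traversal flaw it describes follows a well-known pattern: a user-supplied file name is joined to a base directory without validation, so a request like "../../etc/passwd" escapes the intended folder. The sketch below is purely illustrative (the function and directory names are assumptions, not the researchers' code) and shows the flawed pattern alongside a common mitigation in Python:

```python
import os

BASE_DIR = os.path.abspath("served_files")

def read_requested_file(user_path: str) -> bytes:
    """Return the contents of a file under BASE_DIR, rejecting traversal."""
    # Vulnerable pattern: joining user input directly lets a request such as
    # "../../etc/passwd" escape the intended directory.
    #   return open(os.path.join(BASE_DIR, user_path), "rb").read()

    # Safer pattern: resolve the full path and verify it stays inside BASE_DIR.
    full_path = os.path.abspath(os.path.join(BASE_DIR, user_path))
    if not full_path.startswith(BASE_DIR + os.sep):
        raise PermissionError(f"path traversal attempt rejected: {user_path}")
    with open(full_path, "rb") as f:
        return f.read()
```

The same idea of validating or escaping untrusted input is what the XSS-prone login form lacked: user-supplied text was reflected into the page without being sanitised.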
Remarkably, ChatGPT is aware of this. When the researchers asked the tool whether the code it had generated was secure, it often admitted that it was not. Nevertheless, with some additional prompting and code tweaking, ChatGPT eventually brought twelve of the programs up to a secure standard.
The researchers describe their findings in a preprint paper. The nuance is that ChatGPT can write code, but you need to know the domain well enough to ask the tool about its errors and to improve on its output.
The researchers conclude that ChatGPT is not yet ready to replace experienced, security-conscious programmers, but that, used in this way, it can still play a role in teaching good programming practices.