Do Users Write More Insecure Code with AI Assistants?

According to this study from Stanford University, the answer is yes. From the conclusion:

We conducted the first user study examining how people interact with an AI code assistant (built with OpenAI’s Codex) to solve a variety of security related tasks across different programming languages. We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.
