This month I want to highlight a Bluesky post by Rémi Verschelde, a Godot open source game engine veteran. Rémi describes a challenge with the sheer number of bad AI contributions to the Godot engine and how that is burying the team.
With well-framed context, AI is scary good at producing quality code. It makes sense that the next bottleneck in delivering value is the code review step, and it also makes sense that many AI code contributions are low quality since good context takes skill and effort.
I’m not surprised that people are contributing to open source with low effort, as there are several incentives to do so. Way back in 2022, an open source code contribution carried a small presumption of competence from the contributor, since coding wasn’t democratized the way it is today. Now code is more prevalent, but its quality is harder to discern.
I think open source projects are a canary for this problem space, and all large projects will end up in the same spot. My primary takeaway is that we can’t solely speed up code generation; we must also ensure the end-to-end value delivery pipeline (e.g., code creation through to prod) can handle the volume.
Separately, but relatedly, I think the definition of AI slop is now generated code produced with insufficient context.
“Godot prides itself in being welcoming to new contributors, letting any engine user have the possibility to make an impact on their engine of choice.
Maintainers spend a lot of time assisting new contributors to help them get PRs in a mergeable state.
I don’t know how long we can keep it up.”
