Speed at the cost of quality: Study of use of Cursor AI in open source projects (2025)
TL;DR Highlight
An empirical study finding that while adopting Cursor AI dramatically boosts short-term development velocity, it steadily increases code complexity and static analysis warnings — gradually eating away at long-term velocity.
Who Should Read
Developers and engineering managers who have adopted AI coding tools such as Cursor or Claude Code on their team, or who are evaluating adoption — especially those designing code quality management processes.
Core Mechanics
- Teams using Cursor showed a 40-60% increase in feature delivery velocity in the first month, but code complexity metrics (cyclomatic complexity, coupling) increased steadily over the same period.
- Static analysis warning counts grew roughly 3x faster in Cursor-assisted teams compared to control groups using traditional development.
- The researchers hypothesize that AI tools optimize for 'working code fast' rather than 'maintainable code,' and without explicit quality constraints, they take shortcuts that accumulate as technical debt.
- After 3 months, the velocity advantage began shrinking as teams spent more time debugging and untangling complex AI-generated code.
- Teams that paired AI coding tools with mandatory code review and quality gates maintained velocity advantages longer — suggesting the problem is the workflow, not the tool itself.
Evidence
- The study tracked metrics across 8 teams over 6 months, comparing Cursor-assisted and traditional development teams on the same types of projects.
- Commenters noted this matches their anecdotal experience — initial productivity gains followed by a 'complexity hangover' as the codebase becomes harder to navigate.
- Several engineering managers shared that they've started requiring AI-generated code to pass stricter linting and complexity thresholds before merging.
- Some pushed back on the methodology, arguing teams using AI tools tackle more ambitious features, so comparing raw complexity metrics isn't apples-to-apples.
- The finding that quality gates preserve velocity longer resonated strongly — several teams shared their own gate configurations as a result.
How to Apply
- Set complexity thresholds in your CI pipeline (e.g., max cyclomatic complexity per function) and treat AI-generated code the same as human-written code — it must pass.
- Schedule monthly 'complexity audits': run static analysis across the codebase and track the trend. An upward trend is an early warning signal.
- When using AI coding tools, explicitly include quality constraints in your prompts: 'Write this function with cyclomatic complexity under 5' or 'avoid deeply nested conditionals.'
- Use the 3-month mark as a natural review point for AI tool adoption — assess whether velocity gains are still outpacing the complexity accumulation.
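The CI threshold idea above can be sketched as a small script. This is a minimal, hypothetical gate (not from the study): it approximates McCabe cyclomatic complexity by counting branch nodes in Python's `ast` and fails any function over a configurable limit. A production pipeline would more likely use an established tool such as radon or lizard; this just illustrates the mechanism.

```python
"""Minimal CI complexity gate sketch (simplified approximation,
not a replacement for dedicated tools like radon or lizard)."""
import ast

# AST node types treated as adding one independent path (simplified set;
# e.g. a BoolOp with several operands is counted once here).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.Assert, ast.comprehension)

def cyclomatic_complexity(func: ast.AST) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def check_source(source: str, max_complexity: int = 5) -> list[str]:
    """Return one violation string per function over the threshold."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = cyclomatic_complexity(node)
            if score > max_complexity:
                violations.append(
                    f"{node.name}: complexity {score} > {max_complexity}")
    return violations
```

To wire this into CI, run it over the changed files in a pull request and exit non-zero when `check_source` returns any violations — applied uniformly, so AI-generated code passes the same gate as human-written code.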
Terminology
Cyclomatic Complexity: A metric counting the number of independent paths through code; higher values indicate more complex, harder-to-test code.
Static Analysis: Automated code analysis that detects potential issues without running the code, including style violations, complexity warnings, and potential bugs.
Technical Debt: The accumulated cost of shortcuts and suboptimal code decisions, which must eventually be paid back through refactoring.
Velocity: In agile development, the rate at which a team delivers features, typically measured in story points per sprint.
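To make the cyclomatic complexity definition concrete, here is a toy (hypothetical) function annotated with its decision points; complexity is the number of decision points plus one:

```python
# Complexity = decision points + 1. This function has three decision
# points (two `if`s and one `and`), so its cyclomatic complexity is 4:
# four independent paths lead through it.
def classify(age: int, has_license: bool) -> str:
    if age < 16:                        # decision point 1
        return "too young"
    if age < 18 and not has_license:    # decision points 2 and 3
        return "needs license"
    return "ok"
```

Each path corresponds to a distinct test case, which is why higher complexity directly translates into harder-to-test code.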