Drawing on the findings of the two most recent volumes of the State of Platform Engineering report, this episode looks at the difference between shift-left and shift-down, portal traps, and abstract requirement satisfaction. There’s also a look at AI in platforms and platforms for AI.
There’s also this idea of platform pluralism. We used to chase the mythical single platform to rule them all, but the data says that’s the wrong goal.
You’ll also hear about the measurement crisis, which has been validated in multiple reports from different research teams.
Crucially, AI makes 1.7x more errors, increasing major and critical defects. The AI-authored/assisted changes cause problems with logic errors, security vulnerabilities, and performance regressions (whereas humans are more likely to make spelling mistakes or have testability issues).
That means the AI is introducing foundational risk that a competent developer should be catching very early. Is the training data just poisoning the model with old insecure patterns?
DORA’s State of AI-Assisted Software Development report
DORA has published their latest report and it goes deep into AI-assisted software development. It covers the extent of adoption, how it moves the needle on outcomes, and (crucially) what you need in place if you want to succeed.
As well as thoughtful and thorough analysis on software delivery with AI assistance, the report also looks at different team types and how throughput and stability happen together, not in conflict with each other.
Burnout, friction, and instability are on the list of things to watch out for, so find out how you can avoid amplifying the bad stuff, and boost the positive outcomes instead.
AI coding assistants and perceptions of productivity
A very deep exploration, conducted by METR with 16 open-source developers and 246 real issues, has looked at perceptions and reality of productivity when using AI coding assistants. Titled Measuring the impact of early-2025 AI on experienced open-source developer productivity, the report tackles something we’ve known for a while: our perception of productivity is no indicator of reality.
We had the same issue with multi-tasking, where people thought they were more productive, but the reality was they were less productive. So, how does this translate to software delivery with AI assistance? The TL;DR is a perceived 20% reduction in time to complete tasks, but in reality tasks took 19% longer. Less than half of AI suggestions were accepted by the developers.
A lot of earlier studies looked at artificial problems: things that were self-contained, that maybe didn’t reflect the messiness of real code, or that relied on metrics that, honestly, AI could game.
The report looks at benefits and problems at the individual and team levels, uncovering some surprises along the way like the vacuum hypothesis and the five key perspectives on AI.
Here’s another one of those head-scratching moments. Despite all these positive indicators in code and in process, the research surprisingly links AI adoption to negative impacts on overall software delivery performance.
Digging into the 2025 AI Copilot Code Quality report from GitClear and Alloy, which looked at 211 million lines of code and made projections for 2025.
Find out how AI is increasing the speed of change, and the knock-on effects of optimizing for short-term speed. Or, more poetically: “Oh, what a tangled web we weave when AI agents we use for speed.”
Good developers focus on building systems that are not just functional, but also elegant and efficient. They refactor their code, meaning they constantly look for ways to improve the structure and make it more reusable.
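To make that concrete, here’s a minimal, hypothetical sketch of the kind of refactoring meant here (all names are invented for illustration): duplicated branching logic is replaced with a small data table, so the function shrinks and becomes reusable.

```python
# Before: duplicated, hard-to-extend logic (hypothetical example).
def report_disk_usage(bytes_used):
    if bytes_used >= 1_000_000:
        return f"{bytes_used / 1_000_000:.1f} MB"
    if bytes_used >= 1_000:
        return f"{bytes_used / 1_000:.1f} KB"
    return f"{bytes_used} B"

# After: the unit thresholds become data, so adding GB later is a
# one-line change and the function can be reused anywhere.
UNITS = [(1_000_000, "MB"), (1_000, "KB")]

def format_size(bytes_used):
    for factor, unit in UNITS:
        if bytes_used >= factor:
            return f"{bytes_used / factor:.1f} {unit}"
    return f"{bytes_used} B"

print(format_size(2_500_000))  # → 2.5 MB
```

The behavior is unchanged; only the structure improves, which is exactly the kind of elegance-and-reuse work the quote describes.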