- If you ask two humans to explain a problem to you, and human 1 takes an hour to explain what human 2 explained in 5 minutes… everyone would consider human 1 LESS ‘productive’ than human 2.
But what if human 2 was wrong?
What if both were wrong and human 3 simply said ‘I don’t know’?
LoC is a measure ripe for ignorance driven managerial abuse.
We’ve all seen senior devs explain concepts to junior devs, increasing their understanding and productivity while they themselves ‘produced’ zero lines of code.
Yes, zero LoC may point to laziness, or to proper preparation.
All this is so obvious. LoC are easy to count but otherwise have hardly any value.
by dakshgupta
8 subcomments
- Hi, I'm Daksh, a co-founder of Greptile. We're an AI code review agent used by 2,000 companies from startups like PostHog, Brex, and Partiful, to F500s and F10s.
About a billion lines of code go through Greptile every month, and we're able to do a lot of interesting analysis on that data.
We decided to compile some of the most interesting findings into a report. This is the first time we've done this, so any feedback would be great, especially around what analytics we should include next time.
- Your graphs roughly marry up with my anecdotal experience. After a while, when you know when and how to utilize LLMs/agents, coding does become more productive. There is a discernible improvement in productivity at the same quality level.
Also, I notice it when the LLMs are offline. It feels a bit like when the internet connection fails. You remember the old days of lower productivity.
Of course, there are a lot of junk/silly ways to approach these tools, but all tools are just levers, and they need judgement/skill to be used well.
- I take these "code output" metrics with a pinch of salt. Of course, a machine can generate 1,000 times more lines of code, much as a power loom generates more cloth. However, the comparison with the power loom ends there.
How maintainable is this code output? I saw an SPA HTML file produced by a model that read almost like assembly code. If the code can only be maintained by a model, then the appropriate metric should be based on the long-term maintainability achieved, not on the instant generation of code.
by locusofself
1 subcomment
- This is definitely interesting information and I plan to take a deeper look at it.
What a lot of us must be wondering though is:
- how maintainable is the code being produced
- how much is this newfound productivity saving (costing) on compute, given that we are definitely seeing more code
- how many livesite/security incidents will be caused by AI generated code that hasn't been reviewed properly
- The site/visualisations look great. But having used AI tools in my programming, I still haven't been able to justify the cost (to the planet too) vs the benefit. I've noticed it's great for pattern recognition: if I've missed something small, like a variable name here and there, it's quite good at finding those problems. However, when I ask it to produce a complete piece of work, I've never gotten something without any bugs. Forget about getting it to design data pipelines with customer privacy and data security in mind!
For reference, I work in finance/econometrics and the code is often numerical analysis written in SQL and Python. More often than not I end up wasting a lot of time fixing issues with AI-generated code. None of these nuances ever get captured by metrics like these, and it makes me question the people (mostly sales and top execs) who push for "AI" at work.
by TuringNYC
2 subcomments
- Kudos to the designer, this site is beautiful.
- > Lines of code per developer grew from 4,450 to 7,839 as AI coding tools act as a force multiplier.
Is that a per-year number?
If a year has 200 working days that's still only about 40 lines of code a day.
When I'm in full-blown work mode with a decent coding agent (usually Claude Code) I'm genuinely producing 1,000+ lines of (good, tested, reviewed) code a day.
Maybe there is something to those absurd 10x multiplier claims after all!
(I still think there's plenty of work done by software engineers that isn't crunching out code, much of which isn't accelerated by AI assistance nearly as much. 40 lines of code per day felt about right for me a few years ago.)
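A quick sanity check of that arithmetic, as a sketch (the 200 working days and both per-year LoC figures are taken from this thread, not independently verified):

```python
# Rough lines-per-day math from the report's per-year figures.
loc_per_year = {"before AI tools": 4450, "after AI tools": 7839}
working_days = 200  # assumed working days per year

for label, loc in loc_per_year.items():
    print(f"{label}: {loc / working_days:.0f} lines/day")

# before AI tools: 22 lines/day
# after AI tools: 39 lines/day
```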
- I take it the Greptile folks know the LoC metric is in no way something that correlates with productivity in the LLM era. But putting that aside, just knowing how much code is going through their system makes the report interesting enough to read. Thanks for the dot-matrix report.
by lemonish97
0 subcomments
- Not sure if it's a TPU constraint, but according to this report it seems like the Gemini models have really poor time-to-first-token (TTFT) and tokens-per-second inference numbers.
- In the engineering team velocity section, the most important metric is missing: the change rate of new code, i.e. how many times it is changed before being fully consolidated.
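For reference, a crude file-level sketch of that churn metric (assuming git history is the data source; counting how often each file is touched is a stand-in for true line-level re-edit counts):

```python
# Count how often each file was touched in the last 90 days,
# a rough proxy for "how many times code is changed before it settles".
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(path for path in log.splitlines() if path)
for path, touches in churn.most_common(10):
    print(f"{touches:4d}  {path}")
```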
- loved this, thank you
- Why are we still measuring velocity in lines of code in 2025?
- > measuring productivity by lines of code
I wrote zero lines of code today. I read some code and some emails, and wrote a few lines of markdown and some short emails.
All of the code I've written in the past couple weeks was meant to be thrown away. I used it to make some notes, which ended up condensed into those few lines of markdown.
- i'm a designer and even i know not to measure 'lines of code' as meaningful output or impact. are we really doing this?
by heliumtera
0 subcomments
- Create an automated tool that inserts comments and line breaks wherever possible.
Productivity multiplied by 10^23.
With humans being this stupid, I'm not that impressed that they confused LLMs with human cognition. Maybe it truly is a replacement.
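Taken literally, that "tool" is only a few lines, which is rather the point about how easily LoC is gamed (a deliberately silly sketch; the filler comment text is made up):

```python
# Double any file's line count without adding any value:
# follow every line with a filler comment.
def inflate(source: str) -> str:
    padded = []
    for line in source.splitlines():
        padded.append(line)
        padded.append("# NOTE: carefully considered line above")
    return "\n".join(padded)

print(inflate("x = 1\ny = x + 1"))  # 4 "productive" lines instead of 2
```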
- So not only are we measuring lines of code as though it has any actual relation to productivity, but across the board they are boasting that lines of code are going up and PR density is getting bigger as well.
Those numbers should be seen as a giant red flag, not as any kind of positive.
by citizenpaul
0 subcomments
- Oh wow, this is the revolving door of dumb.
KLOCs, KLOCs, KLOCs.
Even Steve Ballmer was smart enough to realize LOC was a dumb metric.
To add some substance: many attribute a great deal of IBM's decline to management's near obsession with developer LOC metrics, which drove out skilled employees.
- Clearly selling the report to business people who don't code. Like most things in the AI arena today, the report is BS about a system that mostly creates technical debt and is sold as intelligence.
by superchris
0 subcomments
- This thing that can't be measured is up 76%. Eyeroll
- Sigh . . . once again I see "velocity" as something to be increased.
This makes me metaphorically stabby.