The study tracked around 800 developers, comparing their output with and without GitHub’s Copilot coding assistant over three-month periods. Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.
It’s a glorified autocorrect. Using it for anything else and expecting magic is an interesting idea. I’m not sure what folks are expecting there.
- It suggests the variable and function names I was already going to type, and does it more accurately than the IDE. Better still, it suggests names even when I'm stumped and can't remember them.
- It fills in basic control structures and functions more effectively than the IDE’s completion templates.
- It provides a decent starting point for both comments and documentation (the hard part), which means my code actually has comments and documentation. Regularly! (A rough sketch of the kind of completion I mean is below.)
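As an illustration only (not a real Copilot transcript, and the function name and parameters are made up), the signature and the first docstring line are the sort of thing I type by hand, and the rest of the docstring plus the basic retry loop is the sort of thing the assistant fills in:

```python
import time
import urllib.request


# Hypothetical sketch of assistant-style completion: everything after the
# first docstring line stands in for what the tool typically suggests.
def fetch_with_retry(url: str, attempts: int = 3, delay: float = 1.0) -> str:
    """Fetch a URL as text, retrying on transient failures.

    Args:
        url: Address to fetch.
        attempts: Number of tries before giving up.
        delay: Seconds to wait between tries.

    Returns:
        The response body decoded as UTF-8.
    """
    last_error: Exception | None = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
        except OSError as err:  # covers URLError and socket errors
            last_error = err
            time.sleep(delay)
    # All attempts failed; re-raise the last error seen.
    raise last_error
```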
But I don’t ask it to explain things or generate algorithms willy-nilly. I don’t expect it to do anything more than simple auto-completion, and I don’t try to make it.
I honestly like it, even if I strongly dislike the use of AI elsewhere. It’s working in this area for me.
Takumidesh@lemmy.world 3 days ago
I basically use LLMs exclusively to explain broad concepts I’m unfamiliar with. A contrived example would be ‘what is a component in Angular’ or ‘explain to a C# dev how x, y, and z work in Rust.’ The answers don’t need to be 100% accurate; they provide a nice general jumping-off point for finding specific information.
xavier_berthiaume@jlai.lu 3 days ago
Exactly. I’ve found them most useful for either summarizing a text you feed them or handling broad, Google-like queries. I don’t trust them with anything beyond that.