Comment on Microsoft Copilot has been banned for use by US House staff members, at least for now
Limeey@lemmy.world 8 months ago
I can’t imagine using any LLM for anything factual. It’s useful for generating boilerplate and that’s basically it. Any time I try to get it to find errors in what I’ve written (either communication or code) it’s basically worthless.
QuaternionsRock@lemmy.world 8 months ago
Really? It spotted a missing push_back like 600 lines deep for me a few days ago. I’ve also had good success at getting it to spot missing semicolons that C++ compilers can’t, because C++ is a stupid language.
BrikoX@lemmy.zip 8 months ago
You can thank all open source developers for that by supporting them.
QuaternionsRock@lemmy.world 8 months ago
Huh?
BrikoX@lemmy.zip 8 months ago
All LLMs are trained on open source code without any acknowledgment or compliance with the licenses. So their hard work is responsible for you being able to take advantage of it now. You can say thank you by supporting them.
AeroLemming@lemm.ee 8 months ago
[deleted]
ForgotAboutDre@lemmy.world 8 months ago
It’s probably just the novelty wearing off. People expected very little from it initially, then it got hyped up, which raised expectations. Combine those raised expectations with the memory of it once exceeding them, and you start seeing all the flaws.
Wizard_Pope@lemmy.world 8 months ago
I find it useful for quickly reformatting small tables and similar snippets for my reports. It’s often far simpler and quicker to just drop them in there and say what to do than to write a short Python script.
Eyck_of_denesle@lemmy.zip 8 months ago
My little brother was using GPT for homework and he asked it the probability of an extra Sunday in a leap year (52 weeks and 2 days), and it said 3/8. One of the possible outcomes it listed was fkng Sunday, Sunday. I asked how two Sundays can come consecutively and it made up a whole bunch of bs. The answer is so simple: 2/7. The sources it listed also had the correct answer.
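The 2/7 is easy to sanity-check by brute force: a leap year is 52 full weeks plus 2 consecutive extra days, and which pair of weekdays those land on depends only on the weekday the year starts on, so (assuming each starting weekday is equally likely, as the homework intends) there are just 7 possible consecutive pairs. A quick illustrative Python sketch:

```python
# A leap year = 52 full weeks + 2 consecutive extra days.
# The pair of extra weekdays is fixed by the year's starting weekday,
# so there are 7 equally likely consecutive pairs -- never (Sun, Sun).
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
favourable = [p for p in pairs if "Sun" in p]   # (Sat, Sun) and (Sun, Mon)
print(pairs)                                    # 7 pairs, no repeated day
print(f"{len(favourable)}/{len(pairs)}")        # 2/7
```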
ForgotAboutDre@lemmy.world 8 months ago
All it does is create answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect a conversation about probability and days in a year. All it does is combine the two; it can’t actually think about it.