2023? Like last year? Like when LLMs were more of a curiosity than anything useful?
They should be doing these studies continuously…
Edit: Oh no, I forgot Lemmy hates LLMs. Oh well, can’t blame you guys; hate is the basic reaction to what scares you, and it’s revealing.
I’m sure they will, here’s year one.
They do. Reality is not going to change though. You can enable a handicapped developer to code with LLMs, but you can’t win a foot race by using a wheelchair.
I’m just waiting for someone to lecture me on how the wheelchair sprint speed record beats feet’s ass…
Hmm. To me 2023 was the breakthrough year for them. Now we are already getting used to their flaws.
Hmmm, it’s almost like the study was testing people’s perception of the usefulness of AI vs. the actual usefulness and the results that came out of it.
While I agree with the “they should be doing these studies continuously” point of view, I think the bigger red flag here is that, given the pace of AI advancement, a study published in 2023 (meaning the experiment was run much earlier) is deeply irrelevant today in late 2024. It feels misleading and disingenuous to be sharing this today.
No. I would suggest you actually read the study.
The problem the study reveals is that people who rely on AI-generated code generally don’t understand it and aren’t capable of debugging it. So bigger LLMs will not change that.
I did in fact read the paper before my reply. I’d recommend considering the participant pool (a very common problem in most academic research, but especially relevant to the argument you’re making): the vast majority of the participants were students (over 60% if memory serves; I’m on mobile currently and can’t easily go back to check), most of them undergraduates with very limited exposure to actual dev work. They were then prompted, quite literally as the first question, to produce code for asymmetric encryption and decryption.
Seasoned developers know not to implement their own encryption because it is a very challenging space; this is like asking undergraduate students to conduct brain surgery and expecting them to know what to look for.
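For what it’s worth, here’s a minimal sketch of what “don’t roll your own crypto” means in practice, assuming Python and the third-party cryptography package (my own illustration, not anything from the study): the seasoned-developer move is to call a vetted library for asymmetric (RSA) encryption and decryption rather than implementing the math by hand.

```python
# Hypothetical illustration, not from the study: using the "cryptography"
# package's vetted RSA primitives instead of hand-rolling asymmetric crypto.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an RSA key pair (2048-bit, standard public exponent).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# OAEP padding with SHA-256; skipping proper padding ("textbook RSA") is
# exactly the kind of mistake an inexperienced participant can make
# without ever noticing that anything is wrong.
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

message = b"a short secret"
ciphertext = public_key.encrypt(message, oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == message
```

Even here the hard part is knowing which primitives and parameters to reach for, which is exactly where an undergrad-heavy sample is going to struggle.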