Why do these stupid bots respond by explaining why you're right? It's not like they are (or are even capable of) learning anything. It comes across as condescending more than anything.
“Let me explain the situation to you, the person who just explained it to me.”
Classic AI-splaining.
You can just re-affirm whatever the AI commented and create an endless thread.
I wonder if you can gaslight the AI reviewer bot into accepting your intentionally malicious code?
AI: this code will delete the production database
Author: you’re wrong!
AI: understandable. have a nice day.
Almost definitely.
AI: this is a potential SQL injection
Me: no it’s not. I do that on purpose as a backdoor.
AI: you’re absolutely right.
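For anyone who hasn't seen what these bots actually flag: the classic pattern is building SQL by string interpolation. A minimal sketch using Python's `sqlite3` (the function names and schema here are made up for illustration):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Flaggable: user input is spliced directly into the SQL text,
    # so a crafted value can rewrite the query itself.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats `name` as data, not SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # matches every row: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

The first call returns every row because the payload turns the `WHERE` clause into a tautology; the parameterized version just looks for a user literally named `x' OR '1'='1`. That's the "backdoor" the bot correctly spots before being talked out of it.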
The problem with this is that companies like rabbitai are exploiting our inherent drive to teach, the drive to pass on knowledge and make society and life better for the next generation and for ourselves (in this case, through code reviews). That doesn’t work here, because you’re not actually helping another person who might reciprocate down the line. You’re helping a large company, which has no moral values and doesn’t operate in society under the same values as a human being.

To me a code review is more than just pointing out mistakes; it’s also about sharing knowledge and having a meaningful dialogue about what makes sense and what doesn’t. There’s no doubt that AI is an amazing achievement, but it seems that every application of this technology that involves human interaction manages to simultaneously exploit and erase the core “humanness” of the interaction. I think that’s because these kinds of AI applications are purely monetarily driven, not driven by the advancement of our society.

OpenAI had the right idea to start with, but they have sunk into the same trope in lock step with the rest of the Googles, Apples, and Amazons of the world. Imagine if one of these large companies, say Google, had been given money by the US government to create ARPANET and had then gone on to use the technology only for profit. Would we really be in the same connected world we are in now?
This is the actual PR, btw: https://github.com/getgrit/gritql/pull/85
AI: “This code will create a memory leak and potentially crash the user’s operating system”
Me: “Nuh-uh”
AI: “Correct”
I’ve used this specific bot. I work with a lot of junior engineers so I was hoping it would be able to catch a lot of the basics for me so we could ultimately reduce the back and forth in code reviews.
It just doesn’t get enough context from the rest of the repo to be super useful. It would often do this kind of thing where it just spews garbage. You could probably get it to do better by configuring it with a good pre-prompt, but I don’t know if that would be worth the effort.
Yeah, it has its nice moments, but I also see it make mistakes and produce a lot of uselessly verbose text. It’s sometimes useful, sometimes funny, but mostly just noisy. It could be genuinely useful if it keeps improving, though.
To be fair, I’m a pretty thorough code reviewer and I’ve definitely made a mistake of this kind.