I’ve experimented a bit with ChatGPT, asking it to create some fairly simple code snippets to interact with a new API I was messing with, and it straight up confabulated methods for the API based on extant methods from similar APIs. It was all very convincing, but if there’s no way of knowing that it’s just making things up, it’s literally worse than useless.
ChatGPT has been helpful as an interactive rubber duck. I used it to help myself break down the technical problems I need to solve, and it cut the time to complete a difficult ticket that usually takes a couple of days of work down to a couple of hours.
I’ve had similar experiences with it telling me to call functions of third-party libs that don’t exist. When you tell it “That function X does not exist,” it says “I’m sorry, you’re right, function X does not exist in library A. Here is another example using function Y,” and then function Y doesn’t exist either.
I have found it useful in a limited scope, but I have found Copilot to be much more of a daily time saver.
So? Those mistakes will come up in testing, and you can easily fix them (either yourself, or ask it to do it for you, whichever is faster).
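A hallucinated function name is the kind of mistake that surfaces the moment you run anything, and it can even be caught before a full test run. A minimal sketch (the `parse_query_magic` name is a made-up stand-in for a hallucinated method; `parse_qs` is real):

```python
# Quick existence check: catches an LLM-suggested call to a method
# that the library doesn't actually provide, before the code ships.
import urllib.parse


def missing_names(module, names):
    """Return the subset of `names` that `module` does not provide."""
    return [n for n in names if not hasattr(module, n)]


# `parse_qs` exists in urllib.parse; `parse_query_magic` is hypothetical.
missing = missing_names(urllib.parse, ["parse_qs", "parse_query_magic"])
print(missing)  # -> ['parse_query_magic']
```

In practice you rarely need even this much: an `ImportError` or `AttributeError` on first execution points straight at the invented call.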
I regularly ask ChatGPT to write code against classes/functions that didn’t exist until earlier today when I wrote those APIs. Obviously the model doesn’t know those APIs… but it doesn’t matter, you can just paste the function list or whole class definitions in and now it does know they’re there and will use them.
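The "paste your API in" approach can be sketched roughly like this: include the class definition in the prompt text so the model writes code against methods that actually exist. The `TicketStore` class and the prompt wording are made up for illustration; you'd send the resulting `prompt` string to whatever chat client you use.

```python
# Sketch of priming a model with a freshly written API it has never seen.
# The API is embedded in the prompt as plain text, so the model's answer
# is constrained to the methods listed here (all names are hypothetical).
api_context = '''
class TicketStore:
    """A freshly written API the model has never seen."""
    def add(self, title: str) -> int: ...
    def close(self, ticket_id: int) -> None: ...
'''

prompt = (
    "Using only the methods defined below, write a snippet that adds a "
    "ticket and then closes it.\n"
    + api_context
)
print(prompt)
```

The same trick works with a bare list of function signatures when the full class is too long to paste.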
“just good enough to be dangerous”