I trialed GitHub Copilot and used ChatGPT for a bit, but recently I've found myself using them less and less.

I’ve found them valuable when doing something new (at least to me), but for most of my day-to-day work they seem to have lost their luster.

  • koreth@lemm.ee · 1 year ago

    I use ChatGPT with GPT-4 as a search engine when a Google search doesn’t immediately turn up the answer I’m looking for. ChatGPT won’t necessarily give me the right answer either (though sometimes it does), but reading its answers almost always causes me to think of a better search query to plug into Google. That doesn’t sound like much but it can save a lot of time compared to stumbling around trying to figure out the right keywords.

    Occasionally I ask ChatGPT to write code samples, and GPT-4’s are way better than GPT-3.5’s, but it still hallucinates a bit too much, e.g., inventing library functions that don’t exist or, worse, inventing plausible-sounding but wrong facts about the problem domain. For example, I recently asked it to write some sample code to work with geographic data where the coordinate system could be customized, and it invented a heuristic about coordinate system identifiers that is true most of the time but has a ton of exceptions. If I didn’t already know better, I might have tried it out, seen that it appeared to work on a simple happy-path example, and accepted it without knowing it was going to break on a bunch of real-world data.
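
    I don’t remember the exact heuristic it invented, but to give a flavor of why guessing from identifiers is fragile, here’s a rough sketch (in Python with pyproj, which isn’t necessarily what my project uses, and with made-up function names and example codes) of the safe version: ask the CRS object about itself instead of pattern-matching the identifier string.

    ```python
    # Rough illustrative sketch, not the actual project code: inspect a
    # user-supplied coordinate reference system via pyproj rather than
    # guessing units or axis order from its identifier.
    from pyproj import CRS, Transformer

    def describe_crs(user_input):
        """Report what a user-supplied CRS actually is."""
        crs = CRS.from_user_input(user_input)  # accepts "EPSG:4326", WKT, proj strings, ...
        return {
            "epsg": crs.to_epsg(),            # may be None for fully custom CRSs
            "geographic": crs.is_geographic,  # degrees vs. projected meters/feet
            "axes": [(axis.abbrev, axis.unit_name) for axis in crs.axis_info],
        }

    def to_wgs84(user_input, x, y):
        """Reproject a point to WGS84 lon/lat without assuming axis order."""
        transformer = Transformer.from_crs(
            CRS.from_user_input(user_input), "EPSG:4326", always_xy=True
        )
        return transformer.transform(x, y)

    if __name__ == "__main__":
        print(describe_crs("EPSG:2193"))  # NZGD2000 / NZTM: projected, meters
        print(to_wgs84("EPSG:2193", 1749000, 5427000))  # an example NZTM easting/northing
    ```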

    Every once in a while I give Copilot another shot, but so far, I’ve always ended up turning it off after realizing that I’m spending more time double-checking and fixing its code than it would have taken me to write the code by hand. To be fair, I’m usually working on backend code in a language that doesn’t have nearly as much training data as some other languages. Maybe if I were writing, say, Node.js code, it would do better.