https://archive.ph/hMZPi

Remember when tech workers dreamed of working for a big company for a few years, before striking out on their own to start their own company that would knock that tech giant over?

Then that dream shrank to: work for a giant for a few years, quit, do a fake startup, get acqui-hired by your old employer, as a complicated way of getting a bonus and a promotion.

Then the dream shrank further: work for a tech giant for your whole life, get free kombucha and massages on Wednesdays.

And now, the dream is over. All that’s left is: work for a tech giant until they fire your ass, like those 12,000 Googlers who got fired six months after a stock buyback that would have paid their salaries for the next 27 years.

We deserve better than this. We can get it.

  • FaeDrifter@midwest.social · 10 months ago

    It was impossible for a computer to be smart enough to beat grandmasters at chess, until it wasn’t. It was impossible to beat Go masters at Go, until it wasn’t.

    No software engineering jobs are getting replaced this year or next year. But considering the rapid pace of AI development, and considering how much code development is just straight up redundant… looking 20 years out, the picture isn’t so bright.

    It would be way better to start putting AI legislation in place this year. That or it’s time to start transitioning to UBI.

    • expr@programming.dev · edited · 10 months ago

      I am an actual (senior) software engineer, with a background in ML to boot.

      I would start to worry if we were anywhere close to even dreaming of how AGI might actually work, but we’re not. It’s purely in the realm of science fiction. Until you meet the bar of AGI, there’s absolutely no risk of software engineering jobs being replaced.

      Go and chess are games with a fixed and simple ruleset, and they are very well suited to what computers are really good at. Software engineering is the art of turning the ambiguous and ill-defined into something entirely unambiguous and precisely defined, and that is something we are so far from achieving in computers it’s not even funny. ML is ultimately just applied statistics. It’s not magic, and it’s far from anything we would consider “intelligence”.
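      To make the “applied statistics” point concrete, here is a minimal sketch (a toy example of my own, not anything from a production system): fitting a line by ordinary least squares, the same estimator a statistics textbook gives, which is what a single-feature linear “model” in any ML library ultimately reduces to.

      ```python
      import numpy as np

      # Toy data: y is roughly 3*x + 2 plus noise.
      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, size=100)
      y = 3 * x + 2 + rng.normal(0, 1, size=100)

      # "Training" is just the ordinary least-squares formula.
      X = np.column_stack([x, np.ones_like(x)])     # add a bias column
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # solve for the best-fit coefficients

      slope, intercept = beta
      print(f"learned slope={slope:.2f}, intercept={intercept:.2f}")

      # "Inference" is a dot product -- statistics, not magic.
      print("prediction at x=5:", slope * 5 + intercept)
      ```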

      I do think we need legislation targeting ML, but not because of “omg our jobs”. Rather, we need legislation to combat huge tech companies vacuuming up any and all data on the general public and using that data to manipulate and control the public.

      Also, LOL at “how much code development is straight up redundant”. If you think development amounts to just writing a bunch of boilerplate as though we were some kind of assembly line putting together the same thing over and over again, you’re sorely mistaken.

      • Not_mikey@lemmy.world · 10 months ago

        I think you overestimate what the average software developer is doing.

        Do I think that in 10 years AI will be patching the Linux kernel or optimizing AWS scaling functions? No. Do I think it will be creating functional CRUD apps with Django or Ruby on Rails? Yes, and I think that’s what a large share of software developers are doing. Even if it’s not a majority, a lot of the more precarious developers without a CS degree will probably lose their jobs. Not every developer is a senior engineer working on ML.
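        To give a sense of how formulaic that kind of work is, here is a rough sketch of what a typical Django CRUD resource looks like (Invoice is a made-up model, and this assumes an existing Django project rather than being a standalone script):

        ```python
        # models.py -- a typical "business object" (Invoice is hypothetical)
        from django.db import models

        class Invoice(models.Model):
            customer = models.CharField(max_length=200)
            amount = models.DecimalField(max_digits=10, decimal_places=2)
            paid = models.BooleanField(default=False)
            created = models.DateTimeField(auto_now_add=True)

        # views.py -- Django's generic views already handle create/read/update/delete
        from django.urls import reverse_lazy
        from django.views.generic import ListView, CreateView, UpdateView, DeleteView

        class InvoiceList(ListView):
            model = Invoice

        class InvoiceCreate(CreateView):
            model = Invoice
            fields = ["customer", "amount", "paid"]
            success_url = reverse_lazy("invoice-list")

        class InvoiceUpdate(UpdateView):
            model = Invoice
            fields = ["customer", "amount", "paid"]
            success_url = reverse_lazy("invoice-list")

        class InvoiceDelete(DeleteView):
            model = Invoice
            success_url = reverse_lazy("invoice-list")
        ```

        Almost nothing here is bespoke; swap the field names and you have the next resource.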

      • FaeDrifter@midwest.social · 10 months ago

        “It’s purely in the realm of science fiction.”

        This isn’t proof of anything; I would just like to point out that a lot of science fiction has become reality in the last few decades.

        “Go and chess are games with a fixed and simple ruleset”

        At the end of the day, what is a computer except a machine with a fixed and simple ruleset: logic gates?
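        As a toy illustration of that “fixed and simple ruleset” (my own sketch, nothing more): everything below is built out of a single NAND rule, and it already adds bits.

        ```python
        # One primitive rule: NAND.
        def nand(a, b):
            return 0 if (a and b) else 1

        # Everything else is just compositions of that one rule.
        def not_(a):    return nand(a, a)
        def and_(a, b): return not_(nand(a, b))
        def or_(a, b):  return nand(not_(a), not_(b))
        def xor_(a, b): return and_(or_(a, b), nand(a, b))

        def half_adder(a, b):
            """Add two bits: returns (sum, carry)."""
            return xor_(a, b), and_(a, b)

        for a in (0, 1):
            for b in (0, 1):
                s, c = half_adder(a, b)
                print(f"{a} + {b} -> sum={s}, carry={c}")
        ```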

        “ambiguous and ill-defined into something entirely unambiguous and precisely defined, and that is something we are so far from achieving in computers it’s not even funny”

        You don’t need AI to write you perfect C or JavaScript or HTML. You just need it to create an interface that lets an end user make the computer do what they want. I predict the AI itself won’t write in those languages; it will tend to replace them. Many orders of magnitude more computationally expensive, yes, but the hardware is quickly becoming cheaper to buy than software engineers are to pay.

        “If you think development amounts to just writing a bunch of boilerplate as though we were some kind of assembly line putting together the same thing over and over again, you’re sorely mistaken.”

        Obviously not; that’s why libraries and OOP and frameworks exist. I’m aware of that, and I’m not pretending I have anything to teach you about it either.

        And I’ll take the L if you have insider knowledge that massive creativity is required behind the scenes, in widespread fundamental overhauls of the way software works. But as far as I know, the fundamentals of code haven’t changed in decades, and the way users interact hasn’t changed much since smartphones became standard. I don’t see a capitalist incentive to pay for lots of new creativity instead of just making usable products.

    • tburkhol@lemmy.world · 10 months ago

      It was impossible for computers to beat chess and Go masters as long as the programs tried to play like humans, modeling high-level strategy and abstract values. The computers started winning when they got fast enough to brute-force the game: to search enormous numbers of possible continuations from every legal move and pick the best one.
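      A sketch of what that brute force looks like in practice (a generic game-tree search on a deliberately tiny game, not any real engine’s code): try every legal move, recurse on the resulting position, and keep whichever move scores best. Real chess engines add pruning, heuristics and enormous speed, but the shape is the same.

      ```python
      # Toy Nim: take 1-3 stones from a pile; whoever takes the last stone wins.
      # The search has no notion of "strategy" -- it just tries everything.

      def legal_moves(pile):
          return [n for n in (1, 2, 3) if n <= pile]

      def negamax(pile):
          if pile == 0:
              return -1, None            # opponent took the last stone: we lost
          best_score, best_move = float("-inf"), None
          for move in legal_moves(pile):
              score, _ = negamax(pile - move)
              score = -score             # the opponent's best outcome is our worst
              if score > best_score:
                  best_score, best_move = score, move
          return best_score, best_move

      score, move = negamax(10)
      print("from a pile of 10, take", move,
            "(a forced win)" if score > 0 else "(a losing position)")
      ```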

      This is basically the same difference between LLMs and “true” general AI. The LLMs are brute-forcing the next line of a screenplay, with no way to incorporate abstract concepts like truth or logic. If you mistake an LLM for an AI, you’re going to be disappointed in its performance. If you accept that an LLM is a way of averaging past communications, and accept that a lot of its training set was fiction, then it’s an amazing tool for generating consensus text (given that the consensus includes fantasies and lies). It’s not going to write new code, but it will give you an approximation of all the existing examples of some algorithm, an approximation that may introduce errors, like copy-pasting sequential lines from every Stack Exchange answer.
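      A toy version of that “averaging past communications” idea (a made-up two-word-context model over a tiny corpus; real LLMs are vastly larger neural networks, but the “predict a likely continuation of the text so far” framing is the point):

      ```python
      import random
      from collections import Counter, defaultdict

      # Tiny "training set"; a real model ingests terabytes of scraped text.
      corpus = (
          "the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the cat ."
      ).split()

      # Count which word tends to follow each pair of words.
      counts = defaultdict(Counter)
      for a, b, nxt in zip(corpus, corpus[1:], corpus[2:]):
          counts[(a, b)][nxt] += 1

      def generate(first, second, length=8):
          """Emit a statistically plausible continuation of the prompt."""
          out = [first, second]
          for _ in range(length):
              options = counts.get((out[-2], out[-1]))
              if not options:
                  break
              words, weights = zip(*options.items())
              out.append(random.choices(words, weights=weights)[0])
          return " ".join(out)

      print(generate("the", "cat"))   # fluent-sounding; truth not guaranteed
      ```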

      Computer graphics and computer game opponents are still doing the same things they were doing decades ago; the improvements mostly come from doing it all faster. General AI needs to do something different from LLMs and most other ML algorithms.