Analysing 22,000 tasks in the economy covering every type of job, the IPPR said 11% of tasks currently done by workers were at risk. This could, though, increase to 59% of tasks in the second wave as technologies develop to handle increasingly complex processes.

Too bad capitalism is slowing this development down so much. I’m pretty sure we could already have erased some 10% of jobs if we were working towards it as a society; instead, for now, we have good AIs that get dumbed down as much as possible in the name of profits and market share.

  • ☆ Yσɠƚԋσʂ ☆ · 23 points · 3 months ago

    We’re seeing one of the core contradictions of capitalism at play here. Capitalism presents a situation where companies aim to maximize profits by minimizing expenses, primarily by reducing the cost of labor, while simultaneously requiring a consumer base with disposable income to purchase their goods and services. Under the pressures of competition and shareholder demands for profitability, companies prioritize cost-cutting measures, shrinking the pool of disposable income. This cycle ultimately results in economic crashes, followed by readjustments and a repetition of the cycle. That’s why we see economic crashes every decade like clockwork. It’s important to keep in mind that AI is just the latest catalyst here.

    • bobs_guns · 8 points · 3 months ago

      Another side of this is AI as a means of production. If value is derived from socially necessary labor, AI makes a good chunk of that labor no longer necessary. This is especially true for services, and even more so for those involving little interaction with humans or the physical world. So, service-based economies will be especially disrupted as services are devalued, and the majority of service workers will see the strongest downward adjustment of wages.

      • ☆ Yσɠƚԋσʂ ☆ · 4 points · 3 months ago

        For sure, ultimately AI will end up automating a lot of human labor, and that’s going to feed the reserve army of labor in turn.

      • KrasnaiaZvezdaOPM · 4 points · 3 months ago

        …and the majority of service workers will see the strongest downward adjustment of wages.

        But this will actually affect everyone. Not only will unemployment reduce spending, generating even more unemployment, but those who lose their jobs will compete for other jobs, driving down all wages, which in turn means less spending and fewer jobs, in a loop that could very quickly cause far more unemployment than automation alone.

        But I guess how exactly it goes will depend on what governments and the people do.

    • KrasnaiaZvezdaOPM · 3 points · 3 months ago

      It’s important to keep in mind that AI is just the latest catalyst here.

      I do have to disagree with this, to some extent at least. With AI being used to do more and more jobs and driving up unemployment, this is likely to be the last crash of capitalism, and whatever people build after it, for better or worse, is likely to be a new system with a new, fully automated mode of production.

      • ☆ Yσɠƚԋσʂ ☆ · 10 points · 3 months ago · edited

        While AI has been receiving considerable attention and hype, it’s essential to understand that its capabilities in practical applications are somewhat limited. LLMs, for instance, excel at producing creative content such as writing and art, where the primary focus is on aesthetics. They also prove useful in situations that require analyzing extensive data and making predictions based on it. For example, these types of systems are employed in China to monitor the health of high-speed rail lines and predict potential problems so they can be addressed proactively.
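
        As a rough illustration of the kind of statistical prediction involved, here is a minimal sketch in Python of predictive maintenance on sensor data. It is purely hypothetical: the data, sensor, and threshold are made up, and the actual rail-monitoring systems mentioned above are of course far more sophisticated.

        import numpy as np

        rng = np.random.default_rng(seed=0)

        # Simulated hourly vibration readings from a hypothetical track-side sensor:
        # mostly healthy operation, followed by a slowly developing fault.
        healthy = rng.normal(loc=1.0, scale=0.05, size=500)
        fault = 1.0 + np.linspace(0.0, 0.6, 50) + rng.normal(scale=0.05, size=50)
        readings = np.concatenate([healthy, fault])

        # Learn a baseline from a period known to be healthy, then flag readings
        # that deviate strongly from it so an inspection can be scheduled early.
        baseline = readings[:400]
        mu, sigma = baseline.mean(), baseline.std()
        z_scores = np.abs(readings - mu) / sigma
        alerts = np.flatnonzero(z_scores > 4.0)

        if alerts.size:
            print(f"first alert at reading {alerts[0]} of {len(readings)}; schedule inspection")

        The point is just that the model extrapolates from past sensor statistics; it has no notion of what a rail line actually is, which is where the limitations below come in.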

        However, AI’s limitations become more apparent when it comes to tasks that require definitive real-world interactions or decision-making in complex, contextual scenarios. This is evident with self-driving cars, which are inherently unsafe due to the impossibility of ensuring that the AI algorithm will always make the correct decision. These models rely on statistical analysis of data without an understanding of the context or human perspective, making them unreliable for tasks that require nuanced decision-making.

        The main difference between human beings and AI when it comes to decision-making is that with people, you can ask why they made a certain choice in a given situation. This allows wrong decisions to be corrected and better ones to be encouraged. With AI it’s not as simple, because there is no shared context or intuition about how the physical world behaves. Humans develop that intuition by interacting with the world from the day we’re born, and it forms the basis of understanding in a human sense. As a result, AI lacks the capacity to genuinely understand the tasks it’s accomplishing and to make informed decisions.

        To ensure machines can operate safely in the physical world and interact effectively with humans, we’d need to follow a process similar to human child development. This involves training through embodiment and constructing an internal world model that lets the AI develop an intuition for how objects behave in the physical realm, and then teaching it language within that context. However, we’re nowhere close to being able to do that at the moment.

        • bobs_guns · 8 points · 3 months ago

          I think “excel” is a bit of a strong word to describe LLMs’ creative output. Ask them to write a joke and you will likely be disappointed for the foreseeable future.