Analysing 22,000 tasks across every type of job in the economy, the IPPR said 11% of tasks currently done by workers were at risk. This could, however, increase to 59% of tasks in a second wave, as technologies develop to handle increasingly complex processes.

Too bad capitalism is slowing this development down so much. I’m pretty sure we could already have erased some 10% of jobs if we were working towards it as a society; instead, for now, we have good AIs that get dumbed down as much as possible in the name of profits and market share.

  • KrasnaiaZvezdaOPM · 3 points · 3 months ago

    It’s important to keep in mind that AI is just the latest catalyst here.

    I do have to disagree with this, to some extent at least. With AI being used to do more and more jobs and driving unemployment up, this is likely to be the last crash of capitalism, and whatever people build after it, for better or worse, is likely to be a new system with a new, fully automated mode of production.

    • ☆ Yσɠƚԋσʂ ☆ · 10 points · 3 months ago (edited)

      While AI has been receiving considerable attention and hype, it’s essential to understand that its capabilities in practical applications are somewhat limited. LLMs, for instance, excel at producing creative content such as writing and art, where the primary focus is on aesthetics. They also prove useful in situations that require analyzing extensive data and making predictions based on it. For example, these types of systems are employed in China to monitor the health of high-speed rail lines and predict potential problems so they can be addressed proactively.

      However, AI’s limitations become more apparent in tasks that require definitive real-world interactions or decision-making in complex, contextual scenarios. This is evident with self-driving cars, which are inherently unsafe due to the impossibility of ensuring that the AI algorithm will always make the correct decision. These models rely on statistical analysis of data without an understanding of context or human perspective, making them unreliable for tasks that require nuanced decision-making.

      The main difference between human beings and AI when it comes to decision making is that with people, you can ask questions about why they made a certain choice in a given situation. This allows wrong decisions to be corrected and better ones to be guided towards. With AI, it’s not as simple, because there is no shared context or intuition for how to interact with the physical world. Humans develop that intuition by interacting with the physical world from the day we’re born, and it forms the basis of understanding in the human sense. As a result, AI lacks the capacity for genuine understanding of the tasks it’s accomplishing and for making informed decisions.

      To ensure machines can operate safely in the physical world and effectively interact with humans, we’d need to follow a similar process as with human child development. This involves training through embodiment and constructing an internal world model that allows the AI to develop an intuition about how objects behave in the physical realm. Then we’d have to teach it language within this context. However, we’re nowhere close to being able to do that sort of stuff at the moment.

      • bobs_guns · 8 points · 3 months ago

        I think excel is a bit of a strong word to describe LLMs’ creative output. Ask them to write a joke and you will likely be disappointed for the foreseeable future.