• Assian_Candor [comrade/them]@hexbear.net · 3 days ago

    AI is sick. DeepMind solved protein folding.

    Yes, slop is a real problem, but people who hate on or dismiss AI reflexively are being extremely ignorant and short-sighted. Unfortunately that’s most people on my beloved hexbear.

    • JustVik@lemmy.ml · 3 days ago (edited)

      AI for protein folding and LLM chatbots are significantly different, I think. At least the former was created for one clear scientific purpose.

      • CriticalResist8A · 3 days ago

        Significantly how? Both LLMs and AlphaFold are transformer-based neural networks. The LLM chatbot is trained on sequences of words, and AlphaFold is trained on sequences of amino acids. Certainly training AlphaFold was ‘easier’ in one sense: we know how real proteins are put together, so there are only so many structures it can plausibly produce, and there are physical checks to determine whether a predicted structure is real or impossible. That makes it easier to get a reliable output and makes it very good at one specific task, but both models work the same way under the hood.

        Word-prediction LLMs can’t have that kind of deterministic output because we use words for so many different things. It would be like asking a person to only ever communicate in poetry and no other way.
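        To make the ‘same under the hood’ point concrete, here’s a rough sketch of my own (not DeepMind’s or anyone’s real code): the same tiny transformer backbone can model word tokens or amino-acid tokens, and the only thing that changes is the vocabulary.

```python
# Illustrative sketch only: one transformer backbone, two vocabularies.
import torch
import torch.nn as nn

class TinySequenceModel(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)  # map back to vocabulary logits

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.backbone(self.embed(tokens))
        return self.head(h)  # per-position scores over the vocabulary

# A "chatbot" vocabulary of words vs. a "protein" vocabulary of residues.
word_vocab = ["the", "protein", "folds", "into", "a", "shape"]
amino_acids = list("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids

word_model = TinySequenceModel(vocab_size=len(word_vocab))
protein_model = TinySequenceModel(vocab_size=len(amino_acids))

# Identical architecture, different token stream.
sentence = torch.tensor([[0, 1, 2, 3, 4, 5]])  # indices into word_vocab
peptide = torch.tensor([[amino_acids.index(a) for a in "MKTAYIAK"]])
print(word_model(sentence).shape)    # torch.Size([1, 6, 6])
print(protein_model(peptide).shape)  # torch.Size([1, 8, 20])
```

        AlphaFold’s real architecture is of course far more involved than this toy; the point is only that the core ingredient is shared.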

        “one clear scientific purpose”

        Computer scientists in academia are using DeepSeek to solve new problems in new ways too. They especially like DeepSeek and other Chinese models because they’re open-weight and don’t obfuscate their inner workings (such as the reasoning chain), so they can fine-tune them for their specific needs.
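        As a rough illustration of what open weights get you (my example; the exact checkpoint name is an assumption, not something from this thread): the model can be pulled, inspected, and fine-tuned locally with standard tooling, and nothing it generates is hidden behind someone else’s API.

```python
# Hedged sketch: loading an open-weight checkpoint with Hugging Face
# transformers. The model id is an assumption; any open-weight model
# you have access to works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# With local weights, every parameter tensor is inspectable...
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params / 1e9:.2f}B parameters, all on your own disk")

# ...and the full generated text, reasoning chain included, is yours
# to read, log, or post-process however you like.
inputs = tokenizer("Why do proteins fold into specific shapes?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```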

        I have to assume the purpose of amino acids wasn’t so clear when we first found out about them, before we set out to investigate and, through extensive research and testing, figured out how they work and what they actually do. It’s on us to discover the laws of the universe; they don’t come to us beamed from heaven straight into our brains.

        • JustVik@lemmy.ml · 3 days ago

          Well, at least they differ in that AlphaFold has a specific goal whose output we can verify, perhaps not easily, and it has practical scientific benefits, while an LLM is trained to solve all tasks at once, without it even being clear which ones.

          • Assian_Candor [comrade/them]@hexbear.net · 2 days ago

            LLMs don’t train to “solve” anything. They’re just sequence predictors: they predict the next item in a sequence of words, and that’s it. Some predict better than others in specific scenarios.
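            To spell out what ‘just predicting the next item’ means, here’s a toy sketch of my own where the ‘model’ is nothing but a table of bigram counts; a real LLM swaps the table for a transformer, but the generation loop is the same.

```python
# Toy sketch of next-token prediction with greedy decoding.
# The "model" here is just a lookup table of bigram counts.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word after that".split()

# "Train": count which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, n_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = bigrams.get(tokens[-1])
        if not candidates:
            break  # nothing ever followed this word in training
        # Greedy decoding: always take the highest-scoring continuation.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))  # "the next word and the next"
```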

            Through techniques like RAG (retrieval-augmented generation) and multi-model frameworks you can tailor the output to fit specific tasks or use cases. This is very powerful for automating routine workflows.
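            A bare-bones sketch of the RAG idea, under toy assumptions of my own (keyword-overlap retrieval and made-up ticket text; real setups use embedding search and an actual LLM call):

```python
# Toy RAG sketch: retrieve relevant snippets, stuff them into the prompt,
# then hand the augmented prompt to whatever LLM you use. The retrieval
# here is naive keyword overlap purely for illustration.
documents = [
    "Ticket #101: VPN drops every hour; workaround is restarting the client.",
    "Ticket #102: Printer on floor 3 needs a new toner cartridge.",
    "Ticket #103: VPN certificate expires on the first of each month.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Why does the VPN keep disconnecting?")
print(prompt)
# In a real workflow this prompt would go to an LLM; the retrieved tickets
# ground the answer in your own data instead of the model's guesswork.
```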

          • PolandIsAStateOfMind · 3 days ago

            You can also verify the output of a chatbot or an artbot, and even more easily. For example, I have no clue about protein folding whatsoever, but (I hope) we can all tell word salad from coherent text.