One use of LLMs that I haven’t seen mentioned before is as a sounding board for your own ideas. By discussing your concept with an LLM, you can gain fresh perspectives from its generated responses.

In this context, the LLM’s actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.

Definitely recommend trying this trick next time you’re writing something.

  • loathesome dongeater · 9 points · 4 months ago

    I read about this on the cursed orange site. Some guy talked about going on a walk with his wireless earbuds in, talking to ChatGPT’s audio interface and discussing some worldbuilding he was doing.

    Are there any LLM services that can be reasonably used without paying? I tried some llamafiles, but it seems my laptop can’t handle them well.

    • ☆ Yσɠƚԋσʂ ☆ (OP) · 8 points · 4 months ago

      As long as you don’t care about your inputs being harvested, Gemini is currently free. I’ve been using GPT4All to run stuff locally, but if your laptop is having trouble with llamafiles, then it’s probably gonna have trouble with that too.

      • FuckBigTech347 · 2 points · 4 months ago

        On the topic of GPT4All, I’m curious: is there an equivalent of that, but for txt2img/img2img models? All the FOSS txt2img stuff I’ve tried so far is either buggy (some of the projects don’t even compile), requires a stupid amount of third-party dependencies, is made with NVidia hardware in mind while everyone else is second class, or requires unspeakable amounts of VRAM.

        • lurkerlady [she/her]@hexbear.net · 1 point · 4 months ago

          Automatic1111’s webui launcher, it’s Stable Diffusion. Fun fact: its icon is a pic of Ho Chi Minh.

          If you wait, Stable Diffusion 3 is coming out soon. NVidia will run faster because its tensor cores are better, unfortunately. SD is more ethical than others, you can load up models that are trained only on public art and pics.

    • lurkerlady [she/her]@hexbear.net · 5 points · 4 months ago

      Seconding GPT4All, it makes things quick and easy to run, and if you’re fancy you can stream the output from your computer to your phone. I run a capybara-hermes-mistral mix, but I’d suggest starting with Mistral Instruct until Claude 3 comes out.