One use of LLMs I haven’t seen mentioned before: treating them as a sounding board for your own ideas. By discussing your concept with an LLM, you can gain fresh perspectives from its generated responses.
In this context, the LLM’s actual comprehension is irrelevant. The value lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.
Definitely recommend trying this trick next time you’re writing something.
I’m pretty sure I tried that one, but it kept running out of VRAM. It also relies on the proprietary AMD/NVIDIA software stacks, which are a pain to set up. GPT4All is a lot better in that regard — they just use Vulkan compute shaders to run the models.
There’s also ComfyUI, but the learning curve is a bit steeper: https://github.com/comfyanonymous/ComfyUI
although there’s the CushyStudio frontend for it, which is more user-friendly: https://github.com/rvion/CushyStudio
ComfyUI seems like the most promising, but it also uses ROCm/CUDA, which don’t officially support any of my current GPUs (models load successfully, but it fails midway through computation). Why can’t everyone just use compute shaders lol.
Oh yeah that whole thing is just such a mess, another L for proprietary tech.
Could try out the turbo models, might help.