You can run multimodal models like LLaVA (a vision-language model built on top of LLaMA) on Ollama as well.
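
For example, asking LLaVA about an image is a single HTTP call to a local Ollama server. Here's a minimal sketch, assuming Ollama is running on its default port and you've already done `ollama pull llava`; `photo.jpg` is a placeholder for whatever image you have:

```python
import base64
import requests

# Query a local Ollama server running the LLaVA model.
# Assumes the default Ollama port (11434).
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",
        "prompt": "What is in this picture?",
        "images": [image_b64],  # multimodal input is passed as base64
        "stream": False,
    },
)
print(resp.json()["response"])
```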

The models themselves are distributed as platform-agnostic weights (e.g., GGUF or safetensors files), so the specific platform you run them with (Ollama, LocalAI, vLLM, llama.cpp, etc.) ends up being largely interchangeable.
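
As a sketch of what that interchangeability looks like in practice: both Ollama and vLLM can expose an OpenAI-compatible endpoint, so switching platforms can be as simple as changing the base URL. The ports below are the usual defaults, and the model tag is a placeholder for whatever you have loaded:

```python
from openai import OpenAI

# Both backends speak the OpenAI-compatible chat API;
# only the base URL differs. Ports are the usual defaults.
BACKENDS = {
    "ollama": "http://localhost:11434/v1",
    "vllm": "http://localhost:8000/v1",
}

client = OpenAI(base_url=BACKENDS["ollama"], api_key="unused")  # local servers ignore the key
reply = client.chat.completions.create(
    model="llama3",  # placeholder model tag; use whatever you have loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)
```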

Because of that, the only reasons to pick one platform over another are that it fits your specific use case best (depends), that it has the best support and ecosystem (Ollama, by far), or that it has the best raw performance (vLLM seems to win right now).