It would be interesting to have a Large Language Model (LLM) fine-tuned on ProleWiki and leftist books; it could be very useful for debunking arguments about leftist ideology. However, local models don't yet do search with cited sources, which makes them hard to trust. With citations, you can check whether the model is actually representing the source it cites or just making things up. In the future, when a local platform with search capabilities for LLMs becomes available, it would be interesting to prioritize leftist sources in the search results. Collaboratively curating a list of reliable leftist sources could facilitate this. What are your thoughts?
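To make the citation idea concrete, here's a minimal sketch of what "search with cited sources" could look like over a curated source list: retrieval returns each passage *together with* its source label, so a reader can check the citation against the actual text instead of trusting the model. The corpus entries and the naive keyword scoring are illustrative placeholders, not a real corpus or a real retriever.

```python
def retrieve(query, corpus, top_k=2):
    """Rank passages by naive keyword overlap and return them WITH their
    source labels, so every claim can be traced back and verified."""
    words = set(query.lower().split())
    scored = []
    for source, passage in corpus:
        overlap = len(words & set(passage.lower().split()))
        if overlap:
            scored.append((overlap, source, passage))
    scored.sort(reverse=True)  # highest keyword overlap first
    return [(source, passage) for _, source, passage in scored[:top_k]]

# Hypothetical curated source list (label, passage) -- placeholders only.
corpus = [
    ("Source A", "the state arises from class antagonisms"),
    ("Source B", "surplus value is produced by labour"),
    ("Source C", "weather patterns in the north atlantic"),
]

for source, passage in retrieve("what produces surplus value", corpus):
    print(f"[{source}] {passage}")
```

A real setup would swap the keyword overlap for proper embedding search, but the shape is the same: the source label travels with the passage all the way to the answer, which is exactly what makes the output checkable.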

  • comradeRichard · 11 months ago

    I think that would be awesome, I’m still learning a lot and that could be helpful.

    I do also believe that my understanding of communism and my place in it calls for a lot more interpersonal connections than I cultivate 🥴

    I’ve been messing with Bard a little, seeing what questions it will answer and how those answers line up with what I’ve been learning lately, versus the steady diet of propaganda I was fed in US schools. From the questions I’ve asked, it SEEMS to have been trained on a good bit of factual info, but clearly through the lens of “liberal democracy = good.” It will even acknowledge that, for many conflicts in my lifetime that initially destabilized certain areas, “fear of communism” was high on the list of reasons. Obviously it’s a program trained on certain information, with guardrails installed. If asked how much devastation the US has caused out of fear of communism, it will talk about that, but when I asked, for example, “why has the US supported fascists,” I hit a roadblock and it stopped answering lmao.

    Obviously that’s just anecdotal, me playing with an existing program, and I’d love to see one trained on a large variety of leftist theory and writings. It would still need to be used with the broad understanding that, in the end, it’s just a program that takes input, processes it, and spits some info out, and it shouldn’t be trusted as more than a pointer toward further study.