• amemorablename
    4 months ago

    Good points, a lot to think about.

    everyone wants to make it out of capitalism alive in any way possible.

    This part resonates with me in particular. I’ve had aspirations before to “make it out alive” via one artistic craft or another, and it’s possible I still could “make it” well enough to live off of that (primarily if I got lucky), but generative AI may make it harder to do so. But I also understand that capitalism is unsustainable, as was much of the western internet landscape even pre-generative-AI, so it’s sorta like… yeah, some of my potential opportunities may be evaporating, but so is the stability of capitalism as a whole. And living in the US, the stability of governance as a whole is in question, with the stuff being done to the federal workforce, the seeming efforts to consolidate power behind a single neo-fascist (for lack of a better term) faction, and so on. It feels very individualist for me to fret about whether I can personally succeed in making a living out of some craft while “the world burns,” so to speak.

    So yeah, I suspect some of the ire surrounding generative AI is due to individualism; people thinking about it like, “I was supposed to get [or had already gotten] mine and now I can’t get it [or it is going to be taken away],” rather than thinking of it like, “This is a progression of automation that has long been happening, and much like in the past, the working class needs to organize, because it’s never going to get fundamentally better until they have the levers of power.”

    I think one contradiction people against AI have is they say it’s both replacing your brain while also not being that good. It’s a complete contradiction because it can only be one or the other (is it better than human cognition or is it not?), and until one addresses the contradiction and resolves it, they will live ‘in utter chaos under heaven’ as Mao said (paraphrased lol), and it leads to problematic conclusions such as “people who use AI are lesser people because AI is not very good, so clearly if they use it, their brain must be worse than AI, that’s why they think they gain something from it”.

    Yeah, I think there’s a fair bit of elitist tropes wrapped up in thinking about AI as well. Human beings still don’t even understand our own consciousness all that well, much less the entire brain and its functioning, so it’s easy to fill in the gaps with nonsense like “people are stupid.” Arising out of that (it seems; I can’t demonstrate the connection cleanly), you get stuff like the people who hype “AGI” as something that will replace “human intelligence.” But what I never see in that realm is any accounting for the fact that human capability derives from the human form, not out of the ether (unless, I suppose, one believes in something metaphysical about it). So in order to believe a computer can reach the same capability, you have to believe it will be granted something metaphysical too. Otherwise, I’d think the only way for “AI” to get anywhere close to humanity is for some kind of bio-engineering to be able to create artificial human life. And at that point, we’re basically just talking about making babies without a woman needing to go through pregnancy.

    But I do think that when, say, China is getting into robotics, they are at least closer to understanding that particular problem: that for an AI to do certain human tasks, it needs to have a human-like form. Still though, none of that brings us fundamentally closer to a self-aware artificially-created lifeform (partly because we still don’t entirely know what that develops out of in the first place, in our own case; what cluster of factors crosses over into what we call sapience). It just brings us closer to tools that require less direction and maintenance than previous forms of tools. Which could eventually be used to replace us at certain kinds of tasks and thus change the labor landscape somewhat, but isn’t replacing us fundamentally.