It’s been a while since I’ve seen this meme template being used correctly
Turns out, most people think their stupid views are actually genius
So the problem isn’t the technology. The problem is unethical big corporations.
Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.
All it can do now and ever will do is destroy the environment by using oodles of energy, just so some fucker can generate a boring big titty goth pinup with weird hands and weirder feet. Feeding it exponentially more energy will do what? Reduce the amount of fingers and the foot weirdness? Great. That is so worth squandering our dwindling resources to.
Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.
We definitely don’t need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have shocking ability relative to other approaches (if limited compared to humans) to generalize to “nearby but different enough” tasks. And once they’re trained (and possibly quantized), they (LLMs and reinforcement learning policies) don’t require that much more power to run compared to traditional algorithms. So IMO, the question should be “is it worthwhile to spend the energy to train X thing?” Unfortunately, the capitalists have been the ones answering that question because they can do so at our expense.
For a person without access to big computing resources (me lol), there’s also the fact that transfer learning is possible for both LLMs and reinforcement learning. Easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on “normal” computers with FOSS tools.
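A toy sketch of that idea in pure Python (purely illustrative, with made-up numbers and a one-parameter “model”): “pretrain” a weight on a base task, then freeze it and fine-tune only a small extra parameter on a nearby task instead of retraining everything from scratch.

```python
# Toy transfer-learning sketch (illustrative only, not a real ML pipeline):
# "pretrain" the weight w on task A, then freeze it and fine-tune just
# the bias b on a nearby task B, instead of retraining from scratch.

def train(xs, ys, w, b, lr=0.05, steps=2000, freeze_w=False):
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y      # prediction error on one sample
            if not freeze_w:
                w -= lr * err * x      # update the "pretrained" weight
            b -= lr * err              # update the fine-tuned bias
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]

# Task A ("math"): y = 2x. The expensive pretraining phase.
w, b = train(xs, [2 * x for x in xs], w=0.0, b=0.0)

# Task B ("physics"): y = 2x + 3. Cheap fine-tune: w stays frozen.
w, b = train(xs, [2 * x + 3 for x in xs], w, b, freeze_w=True)
print(w, b)  # w stays near 2.0; b moves to roughly 3.0
```

In real life the frozen part is a big pretrained network and the fine-tuned part is a small head or adapter, but the shape of the trick is the same: far fewer parameters get updated in the cheap phase.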
all it does is remix a huge field of data without even knowing what that data functionally says.
IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.
IMO I’m waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists’ newest plaything.
Sorry about the essay, but I really think that AI tools have a huge potential to make life better for us all, but obviously a much greater potential for capitalists to destroy us all so long as we don’t understand these tools and use them against the powerful.
Since I don’t feel like arguing, I will grant you that you are correct in what you say AI can do. I don’t really think so, but whatever, say it can:
How will these reasonable AI tools emerge out of this under capitalism? And how is it not all still just theft with extra steps that is immoral to use?
Since I don’t feel like arguing
I’ll try to keep this short then.
How will these reasonable AI tools emerge out of this under capitalism?
How does any technology ever see use outside of oppressive structures? By understanding it and putting it to work on liberatory goals.
I think that crucial to working with AI is that, as it stands, the need for expensive hardware to train it makes it currently a centralizing technology. However, there are things we can do to combat that. For example, the AI Horde offers distributed computing for AI applications.
And how is it not all still just theft with extra steps that is immoral to use?
We gotta find datasets that are ethically collected. As a practitioner, that means not using data for training unless you are certain it wasn’t stolen. To be completely honest, I am quite skeptical of the ethics of the datasets that the popular AI products were trained on. Hence why I refuse to use those products.
Personally, I’m a lot more interested in the applications to robotics and industrial automation than generating anime tiddies and building chat bots. Like I’m not looking to convince you that these tools are “intelligent”, merely useful. In a similar vein, PID controllers are not “smart” at all, but they are the backbone of industrial automation. (Actually, a proven use for “AI” algorithms is to make an adaptive PID controller so that it can respond to changes in the plant over time.)
These datasets do not exist, you got that right.
I highly doubt there is much deep learning needed to keep a robot arm’s PIDs accurate. That seems like something a regular old algorithm can do.
A deep neural adaptive PID controller would be a bit overkill for a simple robot arm, but for say a flexible-link robot arm it could prove useful. They can also work as part of the controller for systems governed by partial differential equations, like in fluid dynamics. They’re also great for system identification, the results of which might indicate that the ultimate controller should be some “boring” algorithm.
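For reference, the “boring” baseline those neural variants build on is tiny. Here’s a minimal discrete-time PID loop, with made-up gains and a toy first-order plant (not any real controller’s tuning):

```python
# Minimal discrete-time PID sketch (toy gains and plant, purely illustrative).

def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt                   # I term: accumulated error
    derivative = (error - prev_error) / dt   # D term: error slope
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

setpoint, y = 1.0, 0.0          # drive plant output y toward the setpoint
state, dt = (0.0, 0.0), 0.01
for _ in range(2000):
    u, state = pid_step(setpoint - y, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    y += dt * (-y + u)          # toy first-order plant: dy/dt = -y + u
print(y)  # settles close to 1.0
```

An adaptive (possibly neural) version would retune kp/ki/kd online as the plant drifts; the control loop itself stays this simple.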
Idk. I find it a great coding help. IMO AI tech has legitimate good uses.
Image generation also has great uses without falling into porn. It enables people who don’t know how to paint to make some art.
Wow, great, the AI is here to defend itself. Working about as well as you’d think.
What?
I really don’t know what’s going on with the anti-AI people. But it’s getting pretty similar to any other denialism, anti-science, anti-progress… Completely irrational and radicalized.
Sorry to hurt your fefes, but I don’t like theft and that is what AI content ALL is. How does it “know” how to program? Code stolen from humans. How does it speak? Words stolen from humans. How does it draw? Art stolen from humans.
Until this shit stops being built on a mountain of stolen data and stolen livelihoods, the argument is over. I don’t care if you like stealing money from artists so that you can pretend you had any creative input into an AIs art output. You’re stealing the work of normal people and think it’s okay because it was already stolen once before by the billionaires who are now selling it to you.
Intellectual property is a capitalist invention.
Human culture is to be shared.
Oh right, we live under communism, where everyone’s needs are cared for. My bad
Oh wait, we aren’t and you are just a shithead who, once again, wants to tell me that stealing from other workers is good.
depends. for “AI” “art” the problem is both terms are lies. there is no intelligence and there is no art.
Define art.
Any work made to convey a concept and/or emotion can be art. I’d throw in “intent”, having “deeper meaning”, and the context of its creation to distinguish between an accounting spreadsheet and art.
The problem with AI “art” is it’s produced by something that isn’t sentient and is incapable of original thought. AI doesn’t understand intent, context, emotion, or even the most basic concepts behind the prompt or the end result. Its “art” is merely a mashup of ideas stolen from countless works of actual, original art run through an esoteric logic network.
AI can serve as a tool to create art of course, but the further removed from the process a human is the less the end result can truly be considered “art”.
Well said!
That’s like saying photoshop doesn’t understand the context and the meaning of art.
“Only physically painted art is art”.
Using AI to achieve a concrete piece of art can be pretty complex, and surely the artist can create something with an intended meaning with it.
i won’t, but art has intent. AI doesn’t.
Pollock’s paintings are art. a bunch of paint buckets falling on a canvas in an earthquake wouldn’t make art, even if it resembled Pollock’s paintings. there’s no intent behind it. no artist.
The intent comes from the person who writes the prompt and selects/refines the most fitting image it makes
that’s like me intending for it to rain and when it eventually would, claiming i made it rain because i intended for it.
How can you tell if an entity has intent or not?
comes with having a brain and knowing what intent means.
Yes, but where do you draw the line for AI having intent? Surely AGI has intent, but you say current AIs do not.
yes because there is no intelligence. AI is a misnomer. intent needs intelligence.
there is no intelligence and there is no art.
People said the exact same thing about CGI, and photography before that. I wouldn’t be surprised if somebody screamed “IT’S NOT ART” at Michelangelo or the people carving the walls of temples in ancient Egypt.
the “people” you’re talking about were talking about tools. I’m talking about intent. Just because you compare two arguments that use similar words doesn’t mean the arguments are similar.
Intent is not needed for art, else all the art in history where we can’t say what the author wanted to express, or the art that was misunderstood, wouldn’t be considered art. Art is in the eye of the beholder. Note that one of the first regulations of AI art that is always proposed is that AI art be clearly labeled as such, because whoever proposes it does know the above.
i didn’t say knowing the intent is needed. i believe in death of the author, so that isn’t relevant.
the intent to create art is, however, needed. the fountain is art, but before it became the fountain, the urinal itself wasn’t.
I get you but it’s really not necessary. In the case of (somewhat) realist art you can still recognize AI artifacts, but abstract art is already unrecognizable (and this is the precise reason they want AI art to be marked, so they won’t embarrass themselves with paeans over something churned out by a computer in a few seconds). Not to mention there is also art created by animals, and it is considered art, but it’s not created with intent, except maybe the intent of the people dipping the dog’s paw in paint. Thus we again just get to the distinction that art needs to be created by living things? It’s meaningless.
Anyway, i guess the next few years will make this even more muddled and the art scene will get transformed permanently. Hell, recently i’ve encountered some AI power metal music which is basically indistinguishable from the normal stuff, but in this case it mostly serves to show how uninspired and generic the entire genre is.
AI is a tool used by a human. The human using the tools has an intention, wants to create something with it.
It’s exactly the same as painting digital art. But instead of moving the mouse around, or copying other images into a collage, you use the AI tool, which can be pretty complex to use to create something beautiful.
Do you know what generative art is? It existed before AI. Surely with your gatekeeping you think that’s also not art.
I’m so sick of this. there are scenarios in which so-called “AI” can be used as a tool. for example, resampling. it’s dodgy, but whatever, let’s say the tech is perfected and it truly analyzes data to give a good result rather than stealing other art to match.
but a tool is something that does exactly what you intend for it to do. you can’t say 100 dice are collectively “a tool that outputs 600” because you can sit there and roll them for as long as it takes for all of them to turn up sixes, technically. and if you do call it that, that’s still a shitty tool, and you did nothing worth crediting to get 600. a robot can do it. and it does. and that makes it not art.
So you don’t know what generative art is. And you pretend to pontificate on art.
Generative art, which existed before even computers, is a form of art in which an algorithm creates the work, and that algorithm can be repeated easily. Humans can execute the algorithm, but computers can too, and generative art is mostly done with computers for obvious reasons. Those generative algorithms can be deterministic or non-deterministic.
And all this before AI, way before.
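A concrete example of such a repeatable generative algorithm, simple enough to run by hand on graph paper (this is the classic Rule 90 cellular automaton, used here purely as an illustration):

```python
# Rule 90 cellular automaton: a deterministic, easily repeatable
# generative algorithm of the kind that predates AI entirely.

width, steps = 33, 16
row = [0] * width
row[width // 2] = 1                 # single seed cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    # each cell becomes the XOR of its two neighbours (wrapping edges)
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```

Run it and a Sierpinski-triangle pattern emerges from a one-line rule; whether that output counts as “art” is exactly what’s being argued here.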
AI in its essence is just a really complex and large generative algorithm, which some people do not understand and thus are afraid of, like people used to be afraid of eclipses.
Also, you seem not to know that photographers also take hundreds or thousands of pictures with just the press of a button and then select the good ones.
cameras do not make random images. you know exactly what you’re getting with a photograph. the reason you take multiples is mostly for timing and lighting. also, rolling a hundred dice is not the same as painting something 100 times and picking the best one, nor is it like photographing it. the fact that you’re even making this comparison is insane.
If you know how to use an AI, you also know how it’s working and what you are going to get; it’s not random. It’s a complex generative algorithm where you put in the initial variables, nothing more.
the AI itself doesn’t know what it’s doing, and neither do you. the fact that you’re putting in words to change the outcome until the dice fall somewhat close to where you want them to fall doesn’t make it yours. you can’t add your own style to it, because you’re not doing it.
deleted by creator
Always has been
This has been going on since big oil popularized the “carbon footprint”. They want us arguing with each other about how useful crypto/AI/whatever are instead of agreeing about Pigouvian energy taxes and socialized control of the (already monopolized) grid.
Considering most new technology these days is merely a distillation of the ethos of the big corporations, how do you distinguish?
Not true though.
Current generative AI has its basis in the work of Frank Rosenblatt and other scientists working mostly in universities.
Big corporations have made an implementation, but the science behind it already existed. It was not created by those corporations.
The root problem is capitalism though, if it wasn’t AI it would be some other idiotic scheme like cryptocurrency that would be wasting energy instead. The problem is with the system as opposed to technology.
Right, but the technology has the system’s philosophy baked into it. All inventions encourage a certain way of seeing the world. It’s not a coincidence that agriculture yields land ownership, mass production yields wage labor, or in this case fuzzy plagiarism machines yield a transhuman death cult.
Sure, technology is a product of the culture and it in turn influences how the culture develops, there’s a dialectical relationship there.
So why take the heat off of AI, as if profiting from mass plagiarism is different when it has an API instead of flesh and bone?
Because as I explained in my original comment, if it’s not AI it’s going to be some other bullshit.
The root problem is human ideology. I do not know if we can have humans without ideology.
Nah, human ideology is much broader than a single economic system. The fact that people who live under capitalism can’t understand this just shows the power of indoctrination.
I’m not a fan of ideology.
What you’re saying is that you’re not self aware enough to realize that you have an ideology. Everyone has a world view that they develop to understand how the world works, and every world view necessarily represents a simplification of reality. Forming abstractions is how our minds deal with complexity.
I’m autistic.
Do you think people should be treated with respect? Do you think there should be consideration for your condition so you are not exempt from certain events, activities, opportunities?
These are matters of ideology. If you say yes to it, it is ideological in the same way when you say no to it. There is no inherent objective truth to these value questions.
Same for the economy. It doesn’t matter if you think that growth should be the main objective, or that equal opportunity should be the focus or sustainability or other things. You will have to make a value judgement and the sum of these values represent your ideology.
There is no inherent objective truth to these value questions.
I disagree. These values are based on objective observations.
What is an ideology to you?
The dictionary definition.
This sounds like some Žižekian nonsense. Capitalism’s Court Jester: Slavoj Žižek
I’m open to trying a non-Capitalist system, but I’m pretty sure hierarchical bullshit will happen and the majority will end up being exploited.
Whether anyone else is open to it before humans extinguish themselves, I don’t know.
If you think that sounds like “Žižekian nonsense”, then you obviously don’t understand what Žižek argues, because he clearly doesn’t say anything silly like “human ideology” (or “Žižekianism”, for that matter). The article you posted also does wonders completely breaking down Žižek as an abominable human being - while not truly engaging with his ideas. It is pretty worthless, takes things deliberately out of context, and, after rigorously defining him as a persona non grata, invests no proper effort to do what actual communists like Marx and Lenin did - acknowledge that even enemies like that can give contributions to understanding, and things to learn from, and work at doing so.
Does he sometimes spew bullshit? Absolutely. Does he believe in “human ideology” or spout anticommunism on a worse level than The Black Book of Communism, as the article wants to imply? Only if you deliberately misread and misinterpret him.
We apparently read different articles. I bet you didn’t even read it.
invests no proper effort to do what actual communists like Marx and Lenin did
😂
- https://en.wikipedia.org/wiki/Gabriel_Rockhill
- https://gabrielrockhill.com/
- https://criticaltheoryworkshop.com/about-2/
No one knows the compatible left better than Rockhill, because he did his graduate work under some of them, namely Derrida & Badiou.
- Foucault: The Faux Radical
- The CIA & the Frankfurt School’s Anti-Communism
- The Myth of 1968 Thought and the French Intelligentsia: Historical Commodity Fetishism and Ideological Rollback
- Imperialist Propaganda and the Ideology of the Western Left Intelligentsia: From Anticommunism and Identity Politics to Democratic Illusions and Fascism
Yeah, look, I did read the article, and the article, unlike the person who might very well have done that in their work, did not do that. All I see is the same flipping of materialist analysis into an ideological dogma, which becomes ahistoric, trying to repeat instead of following material developments towards communism. From a quick look at your links, there’s even a lot I agree with, especially in criticising the French intellectuals. It still reads like a polemic removed from reality, that values its own farts more than understanding and working towards change, but it has value. And the article you linked in the beginning does nothing but try to opportunistically recruit people away from one ideologue (which Zizek can definitely be called) to another idealist “team” that tries to redirect proletarian material interests and analysis. You seem to think it’s a contest of who can quote “great people” the best and who can be the most orthodox, which treats it all like a religion instead of a material movement to change the world and mode of production.
In the end, I fear, we will be on other sides of the river, each seeing “their idealist perversions” across from “our materialist analysis”, but I at least won’t cross the river for your side any time soon.
Okay, Holden Caulfield, best of luck with your own personal, non-phony, left-libertarian revolution.
Nice burn, even brought in the “libertarian”, at least be consistent, if I am a Zizekian heretic, I’m not an individualist libertarian who’s afraid of authority, I am of course a liberal anticommunist reactionary who won’t acknowledge the achievements of “really existing socialism”. You strike me as someone who would have written a hit piece on Marx for profiting from British imperialism and his capitalist buddy Engels, citing the letter and his drinking habits to make clear that he is an immature mind, then join some utopian socialist fringe group.
The root problem is never ideology, always material conditions. Ideology arises from material conditions and not the other way around.
But what if we use AI in robots and have them go out with giant vacuums to suck up all the bad gasses?
My climate change solution consultation services are available for hire anytime.
Careful! Last time I sarcastically posted a stupid AI idea, within minutes a bunch of venture capitalists tracked me down, broke down my door and threw money at me non stop for hours.
Robots figuring out that without humans releasing gas their job is a lot more efficient could cause a few problems.
Don’t worry, they will figure out that without humans releasing gasses they have no purpose, so they will cull most of the human population but keep just enough to justify their existence to manage it.
Although you don’t need AI to figure that one out. Just look at the relationships between the US intelligence and military and “terrorist groups”.
Don’t worry, they will figure out that without humans releasing gasses they have no purpose, so they will cull most of the human population but keep just enough to justify their existence to manage it.
Unfortunately this statement also applies to the 1%. And the “just enough” will get smaller and smaller as AI and automation replace humans.
It’s wild how we went from…
Critics: “Crypto is an energy hog and its main use case is a convoluted pyramid scheme”
Boosters: “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”
…to…
Critics: “AI is an energy hog and its main use case is a convoluted labor exploitation scheme”
Boosters: “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”
They’re not really comparable. Crypto and blockchain were solutions looking for problems to solve. They’re innovative and cool? Sure, but they never had wide-scale use. AI has been around for a while, it just got recently rebranded as “artificial intelligence”; the same technologies were called algorithms a few years ago… And they basically run the internet and the global economy. Hospitals, schools, corporations, governments, the militaries, etc. all use them. Maybe certain uses of AI are dumb, but trying to pretend that the thing as a whole doesn’t have, or rather doesn’t already have, genuine uses is just dumb.
I feel like you’re being incredibly generous with the usage of AI here. I feel as though the post and comment above refer to LLM/image generation AI. Those “types of ‘AI’” certainly don’t run all those things.
The term AI is very vague because intelligence is an inherently subjective concept. If we’re defining AI as something that has consciousness then it doesn’t exist, but if we’re defining it as a task that a computer can do on its own, then virtually everything that is automated is run by AI.
Even generative AI models have been around for a while. For example, a lot of the news articles you read, especially about the weather, aren’t written by actual people; they’re AI generated. Another example would be scientific simulations, which use AI to generate a bunch of possible scenarios based on given parameters. Yet another example would be the gaming industry: what do you think generates Minecraft worlds? The point here is that AI has been around for a while and is already being used everywhere. What we’re seeing with ChatGPT and these other new models is that they’re now being released for public access. It’s like a democratization of AI, and a lot of good and bad things are bound to come of it. We’re at the infancy stage of this now, but just like with the world wide web before it, these technologies are going to fundamentally change how we do many things from now on.
We can’t fight technology, that’s a losing battle. These AIs are here and they’re here to stay. So strap in and enjoy the ride.
I think you misunderstood me, I’m not trying to make some point about “LLMs aren’t ‘real AI’” or even what is and is not AI. I’m just saying the post is talking about that type of AI specifically and I wouldn’t say those types are controlling that much of the world.
Stupid AI will destroy humanity. But the important thing to remember is that for a brief, shining moment, profit will be made.
Line go up 🤓
deleted by creator
This conveniently ignores the progress being made with smaller and smaller models in the open source community.
As with literally every technological advance, the tech itself is not the problem, capitalism’s usage of it is.
The problem is the concentration of power, Sam “regulate me daddy” Altman’s plan is to get the government to create a web of regulation that makes it so only the big tech giants have access to the uncensored models.
Of course, as usual with capitalism and basically everything else: we had hoped to receive a tool that makes expressing themselves easy for workers lacking the time and training to do art, and instead we will get super-expensive proprietary software and monopolies, quite possibly gatekept by law. Again, just as with software, some hope lies in open source.
Nowadays you can actually get a semi-decent chat bot working on an N100 that consumes next to nothing even at full load.
I guess someone needs to tell google.
Someone needs to tell Google that AI-powered search is not working right now, and that they had better wait a few years before trying to massively implement it in a successful way.
Other AI fields are working really well. But search engine “instant AI answers” for general use are not at a stage where they should be as widely deployed as Google (or Microsoft) is trying to deploy them right now.
It’s almost as if Google chasing a quick buck is the issue.
The big companies are racing to get the best model, and they’re using highly inefficient GPUs to get there. Not just Google, Meta is doing it as well. They’re also completely missing their “climate target” goals because of it
Crazy how corporations do that
And all for some drunken answers and a few new memes
In my country this kind of AI is being used to more efficiently find tax fraud and to create chatbots for users to understand taxes, which, thanks to a much more reliable and limited training set, do not hallucinate and can provide clear sources for the information given.
Which magical country is this? Can I come?
;-)
I’m actually curious (kind of desperate for some good news nowadays). Not trying to make fun of you
Spain. AEAT is our tax authority and has begun using AI in recent years, as an early adopter. The Spanish government in general seems very favorable towards AI and is funding a nationally trained model.
Cool, thanks for the info and link
Personally I think AI systems will kill us dead simply by having no idea what to do: dodgy old coots thinking machines are magic and know everything, when in reality machines can barely approximate what we tell them to do and base their information on this terrible approximation.
Machines will do exactly what you tell them to do, and that is the cause of many software bugs. That’s kind of the problem: no matter how elegant the algorithm, fuzzy goes in, fuzzy comes out. It was clear this very basic principle was not even considered when Google started telling people to eat rocks and glue. You can’t patch special cases out when they are so poorly understood.
I don’t like using relative numbers to illustrate the increase. 48% can be minuscule or enormous depending on last year’s emissions.
While I don’t think the increase is minuscule, it’s still an unnecessary ambiguity.
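To make the ambiguity concrete with made-up baseline numbers (not Google’s actual figures):

```python
# The same "+48%" maps to wildly different absolute changes
# depending on the baseline (all numbers here are invented):
for baseline_mt in (0.1, 10.0, 1000.0):
    increase_mt = baseline_mt * 0.48
    print(f"baseline {baseline_mt:7.1f} Mt -> +48% adds {increase_mt:8.3f} Mt")
```

Stating the absolute change alongside the percentage removes the guesswork.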
The relative number here might be more useful as long as it’s understood that Google already has significant emissions. It’s also sufficient to convey that they’re headed in the wrong direction relative to their goal of net zero. A number like 14.3 million tCO₂e isn’t as clear IMO.
Can understand that, but I feel it’s dumbed down. Better to state the increase and then say it’s relative to [some relatable fact] perhaps?
This is the way
Not only the pollution.
It has triggered an economic race to the bottom for any industry that can incorporate it. Employers will be forced to replace more workers with AI to keep prices competitive. And that is a lot of industries, especially if AI continues its growth.
The result is a lot of unemployment, which means an economic slowdown due to a lack of discretionary spending, which is a feedback loop. There are only 3 outcomes I can imagine:
- AI fizzles out. It can’t maintain its advancement enough to impress execs.
- An unimaginable wealth disparity and probably a return to something like feudalism.
- A social revolution where AI is taken out of the hands of owners and placed into the hands of workers. This would require changes we’d consider radically socialist now, like UBI and strong af social safety nets.
The second seems more likely than the third, and I consider that more or less a destruction of humanity
wait until the curveless anon comes in
There are some pretty smart/knowledgeable people in the left camp
https://www.youtube.com/watch?v=2ziuPUeewK0
Miles is chill in my book. I appreciate what he is tackling, and hope he continues.
It seems that there are much worse issues with AI systems that are happening right now. I think those issues should take precedence over the alignment problem.
Some of the issues are bad enough right now that AI development and use should be banned for a limited time frame (at least 5 years) while we figure out more ethical ways of doing it. The fact that we aren’t doing that is a massive failure of our already constantly-fucking-up governments.
The way it’s done at this current moment is in no way sustainable. Once we start seeing dedicated hardware for doing AI on client-side devices, we can remove the need for massive GPU farms. AI is cool, but it’s like driving a tank to the grocery store. We need the Prius of AI.
Where’s the “If AI destroys humanity, we deserved it”?