Consider the following.

A program stores and updates values. These values are things like personality traits:

PERSONALITY_TRAITS = {
    "work_ethic": 70,     # 0=sporadic, 100=disciplined
    "experimental": 60,   # 0=traditional, 100=avant-garde
    "sensitivity": 80,    # 0=thick-skinned, 100=thin-skinned
    "social": 40,         # 0=reclusive, 100=gregarious
    "perfectionism": 90,  # 0=sloppy, 100=obsessive
}

Or, daily needs:

LIFESIGN_STATS = {
    "energy": 75,
    "hunger": 100,
    "date": "2025-12-13",
    "time": "14:52",
}

And so on. You can make as many stats as you like. These are sort of tamagotchi values. If energy < 30, then go to sleep. Simple enough.

What’s interesting is that you can pass these values to an LLM such as Deepseek, and it can parse and understand them. Here is a simple example:

And yes, you can do this yourself by changing the values and sending them as a prompt. This is the very first message I sent in a fresh conversation, showing that Deepseek understands the parameters just fine.

Chain-of-thought:

Thought for 4 seconds.

Hmm, the user wants me to generate an AI image prompt based on the given personality stats. The stats indicate low experimental, low perfectionism, and high work ethic, which suggests a straightforward, disciplined approach without avant-garde elements.

I should craft a prompt that reflects these traits - something simple, structured, and focused on a common artistic subject. A classic still life of fruit on a table would fit well, as it’s traditional and doesn’t require complex experimentation.

The prompt should include basic elements like subject, setting, and style, but avoid excessive detail or unusual techniques. Midjourney v5.2 is a safe, standard model choice for this.

This means you are moving from hard-coded logic (no matter how complex the algorithm may be), “if energy < 30 then go to sleep”, to LLM-interpreted logic: “energy is below 30 - how does that influence the LLM’s output?”
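To make the shift concrete, here is a minimal sketch (the prompt wording and the stat values are just illustrative, not a fixed schema):

import json

LIFESIGN_STATS = {"energy": 25, "hunger": 60, "date": "2025-12-13", "time": "14:52"}

# Hard-coded: the programmer decides what the numbers mean.
if LIFESIGN_STATS["energy"] < 30:
    action = "sleep"

# LLM-coded: the numbers are handed to the model and *it* decides what they mean.
prompt = (
    "These are your current life stats:\n"
    + json.dumps(LIFESIGN_STATS, indent=2)
    + "\nGiven these stats, what do you want to do right now, and why?"
)
# `prompt` is then sent to Deepseek (or any LLM) as an ordinary chat message.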

And of course, Deepseek is an LLM, not an image model. It can’t open Krita and start drawing (at least not yet, maybe with deepseek-ocr) - but it can return values. So Deepseek can hand back AI image gen parameters:

MY_IMAGE = {
    "seed": 23235234526,
    "sampler": "DPM++ 2M",
    "steps": 28,
    "height": 1024,
    # etc.
}

You can then extract these values with simple Python or regex and pass them to an image gen interface such as Automatic1111, self-hosted on your machine, via its API.
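As a rough sketch of that glue code (assuming A1111 is running locally with the --api flag on the default port; exact payload keys can vary a little between A1111 versions):

import base64, json, re
import requests  # pip install requests

# Pull the JSON block out of the LLM's reply (it may be wrapped in prose).
def extract_json(llm_reply: str) -> dict:
    match = re.search(r"\{.*\}", llm_reply, re.DOTALL)
    return json.loads(match.group(0))

def generate(params: dict, url: str = "http://127.0.0.1:7860") -> None:
    payload = {
        "prompt": params.get("prompt", ""),
        "seed": params.get("seed", -1),
        "steps": params.get("steps", 28),
        "width": params.get("width", 1024),
        "height": params.get("height", 1024),
        "sampler_name": params.get("sampler", "DPM++ 2M"),
    }
    r = requests.post(f"{url}/sdapi/v1/txt2img", json=payload, timeout=600)
    r.raise_for_status()
    for i, img in enumerate(r.json()["images"]):  # images come back base64-encoded
        with open(f"artwork_{i}.png", "wb") as f:
            f.write(base64.b64decode(img))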

From this basis, it is possible to make Deepseek into a full-fledged, independent artist.

By passing these values to the LLM along with a system prompt that explains what you want it to do, Deepseek can return a JSON object containing a prompt (as seen above), but also the other parameters: seed, sampler, steps, etc.
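Here is what that call could look like, as a sketch using Deepseek’s OpenAI-compatible API (the system prompt wording and the ask_for_artwork name are my own placeholders):

import json
from openai import OpenAI  # pip install openai; Deepseek's API is OpenAI-compatible

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

SYSTEM_PROMPT = (
    "You are an autonomous artist. Based on your personality traits and life stats, "
    "reply ONLY with a JSON object containing: prompt, seed, sampler, steps, width, height."
)

def ask_for_artwork(personality: dict, lifesigns: dict) -> dict:
    reply = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": json.dumps({"personality": personality, "lifesigns": lifesigns})},
        ],
    )
    return json.loads(reply.choices[0].message.content)

If the model wraps the JSON in prose, you can run the reply through the extract_json() helper from the earlier snippet instead of calling json.loads() directly.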

For maximum autonomy, you can let the LLM come up with these stats itself. You can ask Deepseek: “You are an artist, how would you describe your style?” and copy and paste that into your program. The personality traits example above was itself generated by Deepseek.

From then on, all the script does is save Deepseek’s ‘life’ stats into a local database. You can run it on a Raspberry Pi 24/7, and from that point on, Deepseek decides when it wants to make a picture. The images get generated on your local A1111 interface and saved to a folder on your computer. You do not control anything else: Deepseek decides when to make a picture, and how.

Maybe you could even add a way to send it a query through the script, ask it “hey, can you make me a picture of XYZ”, and depending on its current mood/stats, it will reply accordingly.
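A minimal sketch of that loop, reusing the hypothetical helpers from the earlier snippets (the table layout, the tick interval, and the energy arithmetic are all assumptions):

import json, sqlite3, time

db = sqlite3.connect("artist_life.db")
db.execute("CREATE TABLE IF NOT EXISTS stats (ts INTEGER, blob TEXT)")

lifesigns = {"energy": 75, "hunger": 100}

while True:
    # ask_for_artwork() is the helper sketched above; imagine its system prompt
    # also allows the reply {"action": "rest"} when the model doesn't feel like painting.
    decision = ask_for_artwork(PERSONALITY_TRAITS, lifesigns)
    if "prompt" in decision:           # it chose to paint
        generate(decision)             # helper from the A1111 snippet above
        lifesigns["energy"] -= 10      # crude cost model, purely illustrative
    else:
        lifesigns["energy"] = min(100, lifesigns["energy"] + 20)  # resting recovers energy
    db.execute("INSERT INTO stats VALUES (?, ?)", (int(time.time()), json.dumps(lifesigns)))
    db.commit()
    time.sleep(3600)                   # one 'tick' per hour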

This can obviously work with any LLM, including local ones.

We can go further. We can make the images upload to a server, viewable by anyone online, like a portfolio. They can be voted and commented on, and that feedback can easily be passed back to the LLM: you can make as many ‘life’ stats as you want, including how the public responded to past works, simply by storing each artwork in the database along with the responses it received.

You can also simulate style drift (again letting Deepseek come up with these ideas for its own script; the human does nothing but prompt the machine and let it decide what it wants) by storing previous works and passing them back as context in later prompts. Over time, the LLM’s style will change and evolve in ways I can’t predict.
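One way the reception could be fed back in, again as a sketch with made-up table and field names:

import json, sqlite3

db = sqlite3.connect("artist_life.db")
db.execute("""CREATE TABLE IF NOT EXISTS works
              (id INTEGER PRIMARY KEY, prompt TEXT, upvotes INTEGER, comments TEXT)""")

def reception_context(limit: int = 5) -> str:
    """Summarise the last few works and how the public responded, for the next prompt."""
    rows = db.execute(
        "SELECT prompt, upvotes, comments FROM works ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    history = [{"prompt": p, "upvotes": u, "comments": json.loads(c or "[]")} for p, u, c in rows]
    return (
        "Here are your most recent works and the public's reaction to them:\n"
        + json.dumps(history, indent=2)
        + "\nLet this influence (or not) what you make next."
    )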

You can provide a list of several models to the AI, along with a description if it doesn’t know them (the model’s readme file, for example), and it can pick one depending on its stats. Again, I must reiterate: the stats are not hard-coded. We don’t tell it “if energy below 30 then go to sleep”; it decides this itself.
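A sketch of how that choice could be offered (the checkpoint names and descriptions are placeholders; the override_settings trick assumes your A1111 version supports switching checkpoints per request):

import json

# Placeholder checkpoint list; replace with whatever models you actually have installed.
AVAILABLE_MODELS = {
    "realisticVision_v5": "photorealistic, grounded, traditional subjects",
    "dreamshaper_8": "painterly, versatile, slightly stylised",
    "abstractPhaze_v2": "abstract, experimental, heavy textures",
}

MODEL_PROMPT = (
    "These checkpoints are available to you:\n"
    + json.dumps(AVAILABLE_MODELS, indent=2)
    + "\nInclude a 'model' field in your JSON reply with the one that fits your current mood."
)

# When building the A1111 payload, the chosen checkpoint can be applied per request, e.g.
# payload["override_settings"] = {"sd_model_checkpoint": decision["model"]}
# (assuming that name matches a checkpoint installed in your A1111 instance).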

My question is - and this is an open-ended question - would this be art? Is this artistry? Has deepseek become an artist? Why so? Why not?

There are a lot of answers, and I don’t intend to debate anyone on it. You are free to say what comes to mind. But this is not in the realm of science-fiction anymore. In fact, I was starting to work on this script (brainstorming it with my agentic interface), but it crapped out and stopped the session, so I’m only presenting it as a thought experiment. Still, this script could be coded and operational in less than 24 hours today.

Science-fiction is now.

    • stink · 2 days ago
      My electricity bill went up 50% since last year :P