We are pretty close to the NFT art space, and 80% of all NFT art created has about as bright a future as that one serial-killer-looking kid whose mama has a lot of opinions on things but not the brain cells to back them up. I.e. Kevin is fucked.
Ok, so what if you like art but have neither the time nor the skill? Is it ok to just give up while still yearning to create something amazing?
Yes.
If you don't want to give up, though, and still want to try creating things without too much effort that still look pretty good (warranty not included), then you could do with some AI help.
Generative art can be just as bad as the rest of Kevin's family, or it can be as good as the one of them who breaks out and becomes the best darn artist they can be. That was Kevin's sister, Kevirina...
Here is a quick link to some more generative art resources I happened upon: Generative Art Tools
CLIP + Diffusion Models
CLIP is an AI model that learns to match images with natural-language descriptions. Basically, it gets told in plain language what is in an image, and after millions of image-caption pairs it starts to understand concepts like: "That is an NFT monkey in a tree, scratching its arse."
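To make that concrete, here is a toy sketch of CLIP's core idea: text and images get mapped into one shared embedding space, and a caption "matches" whichever image embedding it is closest to. The vectors and filenames below are made up for illustration; real CLIP produces high-dimensional embeddings from learned text and image encoders.

```python
import math

# Toy stand-ins for CLIP embeddings (real ones are e.g. 512-dimensional
# and come from trained encoders; these numbers are invented).
caption = [0.9, 0.1, 0.3]                      # "a monkey in a tree"
images = {
    "monkey_in_tree.png": [0.8, 0.2, 0.25],
    "city_at_night.png":  [0.1, 0.9, 0.6],
}

def cosine(a, b):
    """Cosine similarity: how aligned two embedding vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The caption "identifies" the image whose embedding it is closest to.
best = max(images, key=lambda name: cosine(caption, images[name]))
print(best)  # → monkey_in_tree.png
```

The real thing works the same way at scale: score every candidate against the text, pick (or steer toward) the highest similarity.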
For more on CLIP and how it is used with neural networks that generate images from text prompts, you can watch this nice introduction video.
There seem to be many videos and guides on the stuff, but mainly I found an example image in a tweet today and jumped straight into making things.
Needless to say, I will probably need to catch up on all the work I did not do while I was instead watching the image being generated, trying to guess "WTF is that".
The tweet I saw called it Disco-Diffusion, which I now know is the name of the model/notebook, and I ended up choosing this Version 5 as found in search.
I am not really sure if that is better than Version 4.1, which most videos seem to reference.
I would assume they are very similar, but I did have my doubts when things did not look so good while generating.
A very good tutorial on using this "notebook" as they call it is this video: Image Generation with CLIP + Diffusion models (Disco Diffusion 4.1)
He goes through starting with an initial image, but you really do not need one, and as a complete noob it is probably better to let the AI generate things from scratch so you can see what the process is.
This video will get you jumping right in blindly:
My attempts at prompting art
So the main idea is to describe what you want, in what I would now call more a search term than an imaginative prompt; the neural network will then attempt to find images and merge or evolve them into what it is you are trying to describe.
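The shape of that evolve-toward-the-prompt loop can be sketched in a few lines. This is NOT the real diffusion math (Disco Diffusion combines a denoising model with CLIP gradient guidance); it is just a toy showing the idea of starting from noise and repeatedly nudging the "image" toward whatever a scorer says matches the prompt better. All names and numbers here are illustrative.

```python
import random

random.seed(0)

# Stand-in for a CLIP text embedding of the prompt.
prompt_target = [random.gauss(0, 1) for _ in range(16)]
# Start from pure noise, like the notebook does without an init image.
image = [random.gauss(0, 1) for _ in range(16)]

def score(img):
    """Stand-in for CLIP similarity; 0 would be a perfect match."""
    return -sum((a - b) ** 2 for a, b in zip(img, prompt_target)) ** 0.5

# Each "step" nudges the image a little toward the prompt, which is why
# watching the intermediate frames feels like the picture slowly resolving.
for step in range(200):
    image = [a + 0.05 * (t - a) for a, t in zip(image, prompt_target)]

print(score(image))  # approaches 0 as the image "matches" the prompt
```

In the real notebook each step is one denoising iteration steered by CLIP, which is also why more steps means longer runs.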
The prompt is very important. As for quality settings etc., the defaults mostly seem fine, but keep in mind your generations could take anywhere from fifteen minutes to an hour and a half.
This depends on what kind of free machine has been allocated to your account and whether resources are available. Since the service runs on Google's cloud, you won't need fancy hardware yourself; the free tier provides you with whatever it can when you connect. Sometimes more will be available, sometimes less.
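If you want to see what the lottery gave you before committing to a long run, you can query the allocated GPU from inside the notebook. A minimal sketch, assuming the runtime has the standard `nvidia-smi` tool (CPU-only runtimes won't, which the fallback handles):

```python
import shutil
import subprocess

def gpu_info():
    """Report which GPU (if any) this Colab runtime was allocated.

    Falls back gracefully when no NVIDIA driver is present, e.g. on a
    CPU-only runtime or a local machine without a GPU.
    """
    if shutil.which("nvidia-smi") is None:
        return "no GPU allocated"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or "no GPU allocated"

print(gpu_info())  # e.g. "Tesla T4, 15360 MiB" on a lucky day
```

A beefier card is the difference between the fifteen-minute and the ninety-minute runs mentioned above.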
The image below was the very first one I tried, and I have to say I was expecting something a bit more epic.
Instead I just got a mushroom; at one point it looked like there were little people in it with a sword, but nope, it was just another fucking mushroom.
All in all, though, it is one of the best ones, and the detail is really cool. If I could take a picture like that, it would not be bad by any means.
Prompt: "Tiny lights overwhelming a big mushroom bully at sunset. Epic Fantasy."
Next I had my coworker try a prompt too; since I started mine on his PC, we started his on my PC using my Google account.
Oddly, my account generated an image in 15 minutes, while his kept estimating an hour and a half.
His prompt was: "A Samurai wielding a lightning blade..." I think the first version also mentioned a snake in the clouds. The results were kinda icky for the snake version -
Then we changed the prompt a bit to say "Knight" and to mention "Artstation", which is a big art website. Since CLIP effectively learned from images that exist online, the more your prompt can be linked to existing imagery, the better it can fulfil a style etc. Like naming a specific artist who posts a ton of examples.
So, as you can see, just changing it to a knight, which probably has more reference imagery online, and mentioning a website with similar styles across multiple artists, gives a somewhat more well-defined scenic image.
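This base-subject-plus-style-modifiers pattern is easy to script when you want to try several variations in one session. A trivial helper; the modifier strings below are just common examples, not anything official:

```python
def build_prompt(subject, modifiers):
    """Join a base subject with style modifiers, Disco Diffusion style.

    Modifiers that point at well-represented imagery online (a site like
    Artstation, a medium, a prolific artist) tend to steer CLIP harder.
    """
    return ", ".join([subject] + list(modifiers))

prompt = build_prompt(
    "A knight wielding a lightning blade",
    ["epic fantasy", "trending on artstation", "matte painting"],
)
print(prompt)
# → A knight wielding a lightning blade, epic fantasy, trending on artstation, matte painting
```

Swapping one modifier at a time between batches makes it much easier to tell which word actually changed the result.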
Still abstract and even just nonsensical but cool nonetheless.
Scenic is probably better
When I got home I started up another run, but only got 2 batches of 3 images each before running out of free allocation.
The first prompt was: "Coral cities rising up from the sea to the clouds with fish airships flying between them at sunset."
As you can see, it has hooked into that orange pretty heavily, and maybe I can improve the focus on the city style with some reworking of my wording.
The prompt really drives the art direction. Then again, maybe the same prompt with just a different noise starting point would already give you better results, or simply running more batches would. I only ran 3 batches, and each took 20 minutes.
The second prompt was: "A matte painting from the perspective of looking out of a window over a cloudy sea made of light orbs with a galaxy in the distance over the horizon."
To be fair, when I don't even know what I want, I should not expect something absolutely awe-inspiring. It is pretty good at the conceptual, and although it does produce abstract results, I think it does better when the input is very descriptive and has many components.
It is pretty fun to watch them generate. As a tip, you should also let it save the intermediate frames and not just the final ones; some of those are just as nice, or great as a starting point to continue painting on top of.
It would be interesting to see what imaginative pieces @insaneworks comes up with :P