What is this site for?
This site is a non-commercial research project showing how artist styles are interpreted in various AI models and how the technology evolves over time. It's also an educational tool to explore artistic styles, discover new artists, find non-living artists and those in the public domain, and collect commentary from living artists on AI art.
What makes this site unique?
This site makes extensive use of tags that group artists by all kinds of similarities. The tags prioritize visual similarities over the historical ways artists are typically grouped, which makes it easier to find artists that AI generators interpret with similar features.
Why is AI art controversial?
The datasets these AI models were trained on include copyrighted images from living artists, and the artists were not asked for permission to use their images for training. Some people think AI developers shouldn't be allowed to profit from copyrighted artwork used this way, while others contend that it's acceptable because the AI only learns an artist's style (which can't be copyrighted) and doesn't actually reproduce the artwork.
Furthermore, many artists are concerned that AI art is crowding them out on social media and in internet searches: when people search for an artist's name, AI-generated images come up alongside or instead of the original artwork, making it harder to find. The alternate viewpoint is that AI art can increase exposure for real artists when people see images generated using their names.
There's also concern AI art will damage the art industry because people will hire fewer real artists. AI art is faster, cheaper, and can produce results that are good enough for many commercial purposes. The alternative viewpoint is that AI art might expand the art industry by opening up new fields.
How does this site try to use AI art ethically?
This site is just a personal project for education and reference. It doesn't sell AI artwork and isn't selling a product or service. When possible, I link back to living artists and include their opinions on AI art. For people looking for prompt inspiration who don't want to use the names of living artists, I try to make it clear which artists are living, which are deceased, and which have work in the public domain.
Can I do whatever I want with the art of artists tagged as public domain?
Not necessarily. I tagged artists as public domain if I found at least some of their work in the public domain. However:
- This might apply to some but not all of their work.
- This might only apply in certain countries; copyright law doesn't work the same globally.
- This might only apply to certain formats of their work.
- Their original work may be in the public domain, but derivatives created by other people may not be.
- The internet could be wrong; you should double check if you plan to use their work commercially.
How do AI art generators work?
The AI generators covered on this site work by training a neural network on a large dataset of images. The neural network learns to recognize patterns in those images. Users provide the AI with a text prompt or a source image, and the AI matches that input against the patterns it learned during training to generate new images.
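The idea above can be sketched in a few lines of code. This is a deliberately toy illustration, not a real image generator: the functions `prompt_embedding`, `denoise_step`, and `generate` are invented names, and a vector of 16 numbers stands in for an image. It only shows the general shape of the process (start from random noise, repeatedly nudge it toward what the model associates with the prompt).

```python
import numpy as np

def prompt_embedding(prompt: str, size: int = 16) -> np.ndarray:
    """Stand-in for a text encoder: derive a fixed vector from the prompt.
    A real generator uses a trained language model here."""
    seed = sum(prompt.encode()) % (2**32)
    return np.random.default_rng(seed).standard_normal(size)

def denoise_step(image: np.ndarray, target: np.ndarray,
                 strength: float = 0.1) -> np.ndarray:
    """One iteration: move the noisy 'image' slightly toward the target.
    A real generator uses a trained neural network to predict this step."""
    return image + strength * (target - image)

def generate(prompt: str, steps: int = 50) -> np.ndarray:
    """Start from pure noise and denoise it step by step."""
    target = prompt_embedding(prompt)
    image = np.random.default_rng(0).standard_normal(target.shape)
    for _ in range(steps):
        image = denoise_step(image, target)
    return image

result = generate("a castle at sunset")
```

After enough steps, the output ends up close to whatever the "model" associates with the prompt; real diffusion generators do something analogous with billions of learned parameters instead of this single fixed target.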
How many AI art generators are there?
A lot, but this site focuses on the big three that are currently popular and publicly available: Stable Diffusion, DALL-E, and Midjourney.
Where did the AI get the image datasets?
Stable Diffusion mainly used the LAION dataset. Stable Diffusion is open source and users can train individual instances of the AI on additional datasets. Midjourney and DALL-E have not made their datasets public.
What's the difference between a generator and a service provider?
The service providers are the front end and the generators work behind the scenes. So you might use a service like Night Cafe or neural.love, but they're actually running prompts through Stable Diffusion. Some, such as CF Spark Art, use both Stable Diffusion and DALL-E. Over time, service providers may change which generators they're using.
Images generated on different services can still look very different even when they're using the same generator. There are a couple of reasons for this. First, the generators have many options that change the look of the output, and services can configure different default settings. Second, the underlying models can be fine-tuned on different datasets.