These images were created by feeding the original artwork or photograph into an Artificial Neural Network.
Artificial Neural Networks have spurred remarkable recent progress in image classification. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little about why certain models work and others don’t.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
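The stacked-layer forward pass described above can be sketched in a few lines of NumPy. Everything here is an illustrative assumption, not a detail from any particular trained network: the layer widths, the random weights, and the ReLU/softmax choices are simply a minimal, common setup.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy network: a flattened 8x8 "image" passes through stacked layers.
# These sizes are hypothetical; real classifiers are far larger.
layer_sizes = [64, 32, 16, 10]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image):
    activation = image              # the input layer
    for w in weights[:-1]:
        activation = relu(activation @ w)   # each layer feeds the next
    return softmax(activation @ weights[-1])  # the "output" layer

probs = forward(rng.random(64))
answer = probs.argmax()  # the network's "answer": the most probable class
```

Training would then consist of comparing `probs` against the desired label for each of millions of examples and nudging `weights` to reduce the error.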

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.
This is very much like pareidolia, the psychological phenomenon in which the mind perceives a familiar pattern in a stimulus (an image or sound) where none actually exists, such as seeing shapes or images when looking at clouds.
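The claim that early layers look for edges can be made concrete with a small sketch. A learned first-layer filter often ends up resembling a hand-designed edge detector; the Sobel-style kernel below is an assumption for illustration, not weights taken from any trained network.

```python
import numpy as np

# A vertical-edge filter, similar in spirit to what a trained
# first-layer "neuron" often learns (hypothetical example).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def correlate2d(image, kernel):
    """Valid-mode 2-D correlation: slide the kernel over the image
    and sum the elementwise products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy 6x6 image: dark left half, bright right half (a vertical edge).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

response = correlate2d(img, sobel_x)
# The response is strongest in the columns straddling the edge
# and zero in the flat regions on either side.
```

Each value in `response` is one unit's activation; a real first layer applies many such filters in parallel, and later layers combine their responses into progressively more abstract features.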