Neural Networks Interpreting Fractals

For me, one of the most amazing things about fractals is that at first glance they seem to resemble something from real life, but on closer inspection this turns out to be an illusion. With the advent of Google's Deep Dream and many other neural networks, I thought it would be an interesting experiment to see how AI would interpret a fractal picture. I've done two simple tests to see where this could go.
First, I gave a couple of fractal images to Google's similar image search algorithm, to see what Google would think is in the picture.
So I uploaded this image of a pure fractal:

What would Google 'think' is in this picture? Some similar images came up; not surprisingly, most were buildings, churches and the like:

With DeepArt you can re-render any image in the 'style' of another: you supply a base picture and a style picture. I used the fractal as the base image and the church as the style image to render this:

Here are some other examples, each made from a base image and a style image taken from its 'similar images' results:

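DeepArt doesn't publish its exact pipeline, but it builds on the neural style transfer technique introduced by Gatys et al. Below is a minimal sketch of that idea, assuming PyTorch and a recent torchvision: a pretrained VGG19 supplies feature activations, deep-layer activations pin down the base image's structure, Gram matrices of shallower activations capture the style image's textures, and the output image is optimized to match both. The filenames, layer choices, weights and iteration count are illustrative assumptions, not DeepArt's actual settings.

```python
# A rough sketch of Gatys-style neural style transfer.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# VGG expects ImageNet-normalized input.
mean, std = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
load = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean, std),
])

def features(img, layers):
    """Collect activations from the chosen VGG19 layers."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def gram(f):
    """Gram matrix: correlations between feature channels, which encode texture."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

content_layers = [21]              # conv4_2: preserves the base image's structure
style_layers = [0, 5, 10, 19, 28]  # conv1_1..conv5_1: capture the style's textures

content = load(Image.open("fractal.png").convert("RGB")).unsqueeze(0).to(device)
style = load(Image.open("church.jpg").convert("RGB")).unsqueeze(0).to(device)
target = content.clone().requires_grad_(True)  # start optimizing from the base image

style_grams = {i: gram(f) for i, f in features(style, style_layers).items()}
content_feats = features(content, content_layers)

opt = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    opt.zero_grad()
    feats = features(target, content_layers + style_layers)
    c_loss = F.mse_loss(feats[21], content_feats[21])
    s_loss = sum(F.mse_loss(gram(feats[i]), style_grams[i]) for i in style_layers)
    (c_loss + 1e4 * s_loss).backward()  # style weighted much heavier, as is typical
    opt.step()

# Undo the normalization and save the stylized result.
out = target.detach().squeeze(0).cpu()
out = out * torch.tensor(std).view(3, 1, 1) + torch.tensor(mean).view(3, 1, 1)
transforms.ToPILImage()(out.clamp(0, 1)).save("stylized.png")
```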
Another deep-learning algorithm colorizes black-and-white pictures. It has learned how to do this from thousands of real images, so it can tell the difference between a sky that is supposed to be blue and a tree that is supposed to be green.
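Colorizers like this (for example Zhang et al.'s "Colorful Image Colorization") typically work in Lab color space: the network sees only the lightness (L) channel and predicts the two color (ab) channels, which are then recombined with the original lightness. Here's a minimal sketch of that pipeline; `fake_colorizer` is a hypothetical stand-in for a trained network and just predicts "no color".

```python
# Sketch of the Lab-space colorization pipeline with a dummy model.
import numpy as np
from skimage import color, io

def fake_colorizer(L):
    # A real trained model would map lightness (H, W) to color channels (H, W, 2).
    return np.zeros(L.shape + (2,))

def colorize(gray_img, model):
    if gray_img.ndim == 2:               # promote a grayscale image to RGB
        gray_img = color.gray2rgb(gray_img)
    lab = color.rgb2lab(gray_img)        # split lightness from color
    lab[:, :, 1:] = model(lab[:, :, 0])  # the model fills in the color channels
    return color.lab2rgb(lab)            # back to RGB for display

result = colorize(io.imread("fractal_bw.png"), fake_colorizer)
```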

Feeding it fractals to see what colors it would 'think' they needed gave some interesting results. I made sure each image was black and white before I gave it to the algorithm. Most came back completely sepia, which still gave them a beautiful, aged look. Here are some of the most interesting results:
All processed using demos.algorithmia.com/colorize-photos/
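The black-and-white preprocessing itself is the easy part; a one-liner with Pillow is enough to strip the color (the filenames are placeholders):

```python
# Drop all color information before handing the fractal to the colorizer.
from PIL import Image

Image.open("fractal.png").convert("L").save("fractal_bw.png")
```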

Although all this technology is still very young, it's apparent to me that these techniques will come to play a large role in creating and interpreting art, and I find that very exciting.
There's something intensely satisfying about peering into a machine's mind.