“Seeing comes before words. The child looks and recognizes before it can speak.” So begins John Berger’s Ways of Seeing. Translating into the language of computer science, we might say:
Pixels come before text. The algorithm scans and recognizes before it can generate output.
We can’t continue this exercise. Berger’s passage continues, “There is also another sense in which pixels come before text. It is seeing which establishes our place in the surrounding world; we explain that world with words, but words can never undo the fact that we are surrounded by it.”
When we see, we convert an incomprehensible array of colors and shapes into metaphors we call words or categories. We say stone of what appears as an agglomeration of more or less irregular curves and jags. We make consciousness through an extemporaneous and continuous poetry which biologists call autopoiesis, or the making of self (“poet” has its roots in “maker”).
The painter Ben Shahn said you can’t invent the shape of a stone. You can cast it only in language.
Our perception of photographs follows this same course. We see, we categorize, we describe. But what of an abstracted computer mind? It has no need of situating itself in the world, or of casting data as anything other than data. Its duties consist entirely in receiving inputs and generating outputs.
Still, it can make judgments. Philosopher of mind John Searle distinguishes between epistemically subjective and epistemically objective judgments. That Courbet’s L’Origine du Monde is lewd lies in the realm of subjectivity. That it abounds with flesh tones is epistemically objective. A human makes both judgments easily, but only the former would be contentious.
When a human curates, she draws from a combination of objective and subjective judgments. She may select photographs of a single artist, or those that deal with similar content. Or she may pick them on the basis of an aesthetic, a purely subjective judgment. A dull curator or simple algorithm might group photos by color and shape.
When the Google Images algorithm curates, does it betray a subjectivity? When it scans pixels, does it re-cognize? When it generates output, another “visually similar” photograph, does there lurk beneath that decision a shadowy digital poetry?
-Owen Frank Davis, 2013
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.
How does artificial intelligence interpret and compare artistic formal decisions? Does Google’s image search reveal a subjectivity or consciousness in its algorithms? Or does it reduce photographs to their formal aspects – their shapes and colors, their discrete pixels?
DO BACKGROUND RESEARCH
The lens through which we view photographs changes every day. As more and more images are created, they are increasingly released into the vast space of the internet. Greater minds than mine have taken on the task of creating a system that organizes, interprets, categorizes, and presents all the photographs on the web.
Google has released an image search feature which allows users to view “visually similar” images. The true purpose of this feature has never been explicitly stated, but Google explains in the “how to” section that you can “search using a picture of your favorite band and see search results that might include similar images, webpages about the band, and even sites that include the same picture.” It’s unclear whether Google uses an image’s metadata to assist in this process.
The Google Visually Similar function approximates a form of consciousness by demonstrating aesthetic judgment.
TEST HYPOTHESIS BY PERFORMING EXPERIMENT
My Analysis –
I received twelve images from twelve different artists selected by Wandering Bears for this experiment and entered them into the Google Images Visually Similar feature. I found Google to be quite accurate at matching overall color patterns. I was surprised to find that the function detected subtle shifts in tone. I believe it was this accuracy that caused images with subject matter similar to the original to be matched. An image of a beach was matched with images of beaches. An image of a B&W hillside was matched with B&W images of hillsides. However, when the original image was not a “typical” scene – such as the image of shapes and color on a table, or achromatic lines of paint – the results were even more revealing. When not presented with a familiar shape (leaves, buildings, faces), Google found formally similar images with ease.
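Google has never disclosed how Visually Similar actually works, but the color-dominant matching described above is consistent with a classical computer-vision technique: comparing coarse color histograms. The sketch below is only an illustration of that idea, not Google’s method; the “images” are hypothetical flat lists of RGB pixels standing in for a beach photo, a similarly toned sea photo, and a green hillside.

```python
from collections import Counter

def histogram(pixels, bins=4):
    """Coarse color histogram: quantize each RGB channel into `bins` levels
    and return the fraction of pixels falling in each (r, g, b) bucket."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {bucket: n / total for bucket, n in counts.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical color distributions,
    0.0 when the two images share no color buckets at all."""
    return sum(min(h1.get(bucket, 0.0), h2.get(bucket, 0.0)) for bucket in h1)

# Toy stand-ins for photographs (hypothetical pixel data):
beach = [(30, 90, 200)] * 80 + [(240, 220, 180)] * 20   # mostly blue, some sand
sea   = [(35, 95, 205)] * 90 + [(250, 230, 190)] * 10   # a similar palette
hill  = [(40, 160, 60)] * 100                           # uniformly green

print(round(similarity(histogram(beach), histogram(sea)), 2))   # → 0.9
print(round(similarity(histogram(beach), histogram(hill)), 2))  # → 0.0
```

A matcher this crude would rank the sea photo far above the hillside for a beach query purely on color distribution, which mirrors the behavior observed in the experiment: strong matches on palette and tone, with no grasp of subject or meaning.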
Wandering Bears’ Analysis –
Much like Max, we quickly recognised Google’s difficulty in registering and appropriating basic colours and form. The software is well equipped to differentiate landscape- and portrait-focused images, comparing them to other shots including a model or landscape of the same colour tones (colour appearing to be the biggest contributing factor when the software matches imagery). However, with more minimalistic images such as textures or tonal studies, Google’s Visually Similar software would often relay the search to a variety of logos or digital graphics, struggling to adapt the original search to ‘similar’ images – posing the question: are the more abstract images more ‘original’?
Considering this software’s worth is difficult: essentially it does exactly what it intends to, successfully finding the user’s requested image and quickly directing you to its source. However, Google’s Visually Similar feature poses a number of important questions: what effect does this process have on the original work? To what extent does it alter its ‘worth’, and does this process in any way devalue an image? As always, it depends on your approach. On one hand, Google’s Visually Similar software is a fun way to compare and contrast photographs, to simply see what a machine considers similar. On a more cynical reading, much like Tumblr, Pinterest and Instagram, the software is yet another tool that pulls original artworks out of their context and into another endless chasm where voices are lost and valuable opinions are misunderstood.
The results confirmed that Google’s Visually Similar function is an effective tool for examining an image’s formal aspects objectively. Although the process is revealing in terms of how a program is able to break down shapes and colors, I found the results lacking. Like a child, Google groups photographs with similar colors and shapes without knowing their original context or meaning. Google is only able to break an image apart into its pixels, when in practice an image’s meaning is created when a human views it. There is some value in this feature, but I ultimately wonder if it reveals anything deeper than the random results of an algorithm reaching beyond its technical capabilities.
All images are copyright to their respective artists.