Abstract – Max Marshall

 

Article by Max Marshall

In an effort to further explore the way that Google’s “visually similar” search function reacts to and interprets imagery, I performed an exercise. In contrast to the experiment performed in September 2013, this exercise used repetitive searches as a form of exploration. I began by inserting an image of my own, Image AA, into Google’s visually similar feature and selecting what I considered to be the best-suited match, Image AB. I saved this image. I then inserted Image AB into the search function and selected its best match, Image AC. I repeated this process until I had two hundred images, the last of which is Image HR.
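For readers who think in code, the loop below is a minimal sketch of the procedure, not the method I actually ran: every search and every selection was performed by hand in the browser, and Google exposes no public API for its visually similar feature. The function names (find_visually_similar, select_best_match, iterate_search) are placeholders of my own for those manual steps.

```python
from pathlib import Path


def find_visually_similar(image: Path) -> list[Path]:
    """Hypothetical stand-in for one 'visually similar' lookup.

    In the actual exercise this was a manual search in the browser;
    there is no public API behind this placeholder.
    """
    raise NotImplementedError("performed by hand in the original exercise")


def select_best_match(candidates: list[Path]) -> Path:
    """Hypothetical stand-in for my subjective choice of the best-suited match."""
    raise NotImplementedError("a human judgment, not an algorithm")


def iterate_search(seed: Path, total: int = 200) -> list[Path]:
    """Feed each selected match back into the search until `total` images exist."""
    chain = [seed]                       # Image AA
    while len(chain) < total:
        candidates = find_visually_similar(chain[-1])
        chain.append(select_best_match(candidates))
    return chain                         # the final element corresponds to Image HR
```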

 

1-26: Images AA-AZ

27-52: Images BA-BZ

53-78: Images CA-CZ

79-104: Images DA-DZ

105-130: Images EA-EZ

131-156: Images FA-FZ

157-182: Images GA-GZ

183-200: Images HA-HR
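The labels simply count through the alphabet in base 26, two letters at a time. As a small illustration (the function name label_for is mine, not part of the project), the mapping from a 1-based image number to its two-letter label can be computed as follows:

```python
from string import ascii_uppercase


def label_for(index: int) -> str:
    """Two-letter label (AA..HR) for a 1-based image number (1..200)."""
    first, second = divmod(index - 1, 26)
    return ascii_uppercase[first] + ascii_uppercase[second]


# label_for(1) -> 'AA', label_for(27) -> 'BA', label_for(200) -> 'HR'
```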

 

The results are presented below in a short video. The images pass by at four per second, so the full sequence of two hundred runs fifty seconds.

The process of entering and selecting such a large number of images allowed me to explore and contemplate various topics relating to the internet, authorship, commerce, stock photography, portraiture, and accessibility. Certain types of images are produced in great quantity and are therefore more readily matched to other existing imagery; product images and stock photography are clear examples. The exercise also showed that Google’s algorithm recognizes some categories of images more easily than others: photographs of water or beaches are readily matched with similar vistas, as are certain body parts (hands, flesh). As a result, Google’s visually similar function gets caught in its own loop of imagery, recycling from the same image pool until a transition out can be found.

As the video makes apparent, images on the web conform to no shared standard of size, proportion, image quality, or color profile; Google handles them all in the same manner regardless.


www.latentimage.us / @thelatentimage