How to See Like a Machine:
A Review of Trevor Paglen’s Exhibition "Invisible Images" at Metro Pictures

2017

Seeing Trevor Paglen’s art for the first time, after only having read about it, I somewhat expected to step into a hard drive, and yet the gallery was so typical. White walls, chic withdrawn receptionists, tourists roaming about. It was an art show, rather than a shocking deep web exposé. And then it got weird.

It began with a tour group led in Hebrew, which I started to surveil; my dyed hair and English note-taking worked as camouflage for my Israeli identity. After all, who would suspect a 5’1” blonde girl could be listening in on a foreign conversation? The group, around twelve people, clustered around the piece at the center of the gallery, Machine Readable Hito (2017).

The tourists walked along the wall, some taking in the images, others discussing New York City in a vernacular particular to Israeli culture. The guide, attempting to redirect their attention, began discussing the piece, raising her voice as if shepherding young students. We all stared at the 16 x 4.5 foot wall, covered by, to my count, 360 Hito Steyerls.

Steyerl, a friend of Paglen’s,[1] is a Berlin-based artist and scholar whose work focuses on the digital realm, specifically image reproduction. Her likeness has appeared in galleries before, notably in her piece How Not to Be Seen: A Fucking Didactic Educational .MOV File (2013), in which she uses technology to shift her face and make herself “disappear.” It is a piece aimed (so to speak) at surveillance drones, a topic not foreign to Paglen himself.

The photos of Steyerl in the middle of Chelsea’s Metro Pictures stick to the aesthetics of the passport photo, the ultimate picture-as-categorization. Passport photos are used to identify people and are attached to information -- date of birth, nationality, eye color, identification number. In this instance, the images are not inspected by humans but by various facial-analysis algorithms. Some detect facial hair, while others calculate probabilities not only for age and gender but even for emotional state. While a portrait evokes curiosity about the identity of a person, this series reduces a woman to data and statistics, speaking to what is currently happening to people across the globe.[2]
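For readers curious what “machine readable” means in practice, here is a minimal sketch of the first step such systems take: locating a face and reducing it to coordinates. It uses OpenCV’s stock Haar-cascade face detector; the image path is a placeholder, and the attribute analysis that would follow (age, gender, expression) is only gestured at -- this is not the actual software behind the piece.

```python
# A minimal sketch of making a portrait "machine readable" with OpenCV.
# This is not the software used for Machine Readable Hito; it only shows
# the first step such systems take: finding a face and turning it into numbers.
import cv2

# Placeholder path -- any frontal portrait will do.
image = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# OpenCV ships with a pre-trained frontal-face Haar cascade.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# The person becomes a list of bounding boxes: x, y, width, height.
# A real pipeline would hand each crop to further models that guess
# age, gender, or emotional state as probabilities.
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), {w}x{h} pixels")
```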

Despite the enthusiasm of their guide, the Israeli group, median age of 50, seemed rather unfazed. They didn’t mind much when she hurried them past the video installation on the other side of the wall and skipped to the back room for lack of time. Quietly they shuffled forward, and I after them. As we entered the second gallery, the group spread out, each corner of the room taken over by a cluster of one to three people. The guide looked at her watch and sped up her spiel.

Gallery 2 holds pieces from Paglen’s series Adversarially Evolved Hallucinations (2017), known colloquially as Hallucinations. Almost all the images are dark, with blurred colors bleeding out of a black mist. Some have a lighter palette, but even these take on the tones of a gloomy, foggy day, or the red neon of an underground rave. The shapes on the prints are reminiscent of contemporary abstract paintings, or perhaps the photo-transformations of Lucas Samaras. The images have an odd feeling to them and are hard to pin down as representational, even though their titles are recognizable terms -- Porn, A Man, Vampire, etc. The images are foreign, almost unimaginable. This is because each is the result of a conversation between two programs.

With the assistance of a software platform called “Chair,” which was also used for the pieces “Fanon” (Even the Dead Are Not Safe) and Eigenface, Paglen taught an AI to recognize concepts drawn from literature, philosophy, and history, using image-based training sets similar to the ones used to train, say, Facebook’s AI to recognize faces and places in uploaded pictures. Paglen then taught a second AI to draw images -- ones that could confuse the first. The two programs communicated back and forth until an image was created that the first could not sort: an image with no basis in the first AI’s reality. These are what Paglen calls “hallucinations.”
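The back-and-forth Paglen describes closely resembles what machine-learning researchers call a generative adversarial network, in which a generator and a discriminator are trained against each other. The sketch below, in PyTorch, is only an illustration under that assumption; the network sizes, the stand-in training batch, and everything about Paglen’s “Chair” platform are placeholders, not his actual setup.

```python
# A minimal sketch of the adversarial back-and-forth described above (a GAN).
# This is NOT Paglen's "Chair" software; corpus, image size, and network
# shapes are placeholders chosen only to illustrate the two-program dialogue.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64    # flattened grayscale "image" (placeholder size)
NOISE_DIM = 100      # latent vector fed to the generator

# Program 1: the classifier/discriminator that has learned a "reality"
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),   # outputs P(image belongs to the corpus)
)

# Program 2: the generator that tries to produce images the first accepts
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) The discriminator learns to tell corpus images from generated ones.
    fakes = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) The generator updates so its images are read as belonging to the corpus.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a curated training set (e.g. images tagged "vampire"):
real_batch = torch.rand(32, IMG_DIM) * 2 - 1
print(training_step(real_batch))
```

In this standard framing, training is done when the discriminator can no longer reliably separate the generator’s images from the corpus -- the point at which, in Paglen’s vocabulary, a hallucination has been evolved.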

This choice of the word “hallucinations” terrifyingly anthropomorphizes machines. In fact, the entire show melds the artificial and the flesh. In Invisible Images, computers become not only human but the best of mankind. They are intelligent, learning to surveil, to sift through data and come to conclusions, but these machines are also artists, with more work in the gallery than that produced by Paglen himself. The people at Invisible Images, on the other hand, become machines, a concept particularly emphasized in viewing Behold these Glorious Times! (2017).

Spanning 12 minutes, the video runs quickly through recognizable images as well as hallucinatory patterns. The former are, for the most part, video clips of faces. The expressions go by quickly, but our human eyes register certain information instantaneously. Juxtaposed against them are abstract forms -- pixels, black and white patterns. Despite the abstraction, connections slowly begin to form. The viewer sitting before the screen starts to see as an algorithm does, collecting information as it flies by, categorizing images that resemble one another. A woman’s face is no longer hers; it becomes part of “expressions.” The viewer becomes convinced that he or she is learning. Machine learning.

As the images on the screen flickered in front of my eyes, the group of Israelis walked past me. They didn’t quite register until they were gone. At the end of the piece, I walked out onto the busy street and took out my phone to map my route. As I made my way to the subway, I noticed a stream of LinkNYC[3] kiosks. Around me were cameras and screens, phones upon phones. I could sense I was being watched, but I couldn’t see how the machines saw me.


________________
[1] Another Berlin-based friend of Paglen’s is filmmaker Laura Poitras, known particularly for her work with Edward Snowden and her documentary about him, Citizenfour (2014). It seems a coalition of data-centric artists has arisen in Berlin.
[2] Yes, including those without internet and social media.
[3] A Google-backed wifi kiosk project that provides free internet service, calls, and phone charging, while also collecting user data through tracking.