Walking into the white space of the exhibit was odd. Seeing Trevor Paglen’s art for the first time, after following his work but only reading about it, I somewhat expected to step into a hard drive, and yet the gallery was so typical. White walls, chic, withdrawn receptionists, and tourists roaming about. It was an art show, rather than a shocking, deep web exposé. And then it got weird.
It started with a tour group led in Hebrew, whom I began to surveil, my dyed hair and English note-taking working as camouflage. The group, consisting of around fifteen people, shrouded the piece at the center of the gallery, Machine Readable Hito (2017). We all stared at the 16-by-4.5-foot wall, covered by, by my count, 360 Hito Steyerls.
Steyerl, a friend of Paglen’s, is a Berlin-based artist and scholar whose work focuses on the digital realm, particularly image reproduction. Her likeness has appeared in galleries before, though in her own work, such as How Not to Be Seen. A Fucking Didactic Educational .MOV File (2013), in which she uses technology to shift her face and make herself “disappear,” a piece particularly targeted (so to speak) at surveillance drones, a subject not foreign to Paglen himself.
The photos of Steyerl in the middle of Chelsea’s Metro Pictures stick to the aesthetics of passports, the ultimate pictures-as-categorization. Passport photos are used to identify people and are attached to information - date of birth, nationality, eye color, identification number, and so on. In this instance, the images were inspected not by humans but by different facial-analysis algorithms. Some sought to detect facial hair, while others calculated probabilities not only for age and gender, but also for emotional state. While a portrait evokes curiosity about the identity of its subject, this series reduces a woman to data and statistics, speaking to what is currently happening to every human on the planet.
Despite the enthusiasm of their guide, the crowd, median age of 50, seemed rather unfazed. They did not much mind when she hurried them past the video installation on the other side of the wall and skipped to the back room, for lack of time. Quietly, they shuffled in, and I after them.
Gallery 2 holds pieces from Paglen’s series Adversarially Evolved Hallucinations (2017), known colloquially as Hallucinations. Almost all the images are dark, with blurred colors bleeding in from the black mist. Some have a lighter palette, but even these take on the tones of a gloomy, foggy day, or of the red neons of an underground rave. The shapes on the prints are reminiscent of contemporary abstract paintings, or perhaps the photo-transformations of Lucas Samaras, evoking an odd feeling rather than depicting anything specific, though their titles are recognizable terms - Porn, A Man, Vampire, etc. The images are foreign, almost unimaginable, because they are the result of a conversation between two programs.
With the assistance of a software platform called Chair, which was also used for the pieces “Fanon” (Even the Dead Are Not Safe) and Eigenface, Paglen taught an AI to recognize literature, philosophy, and history using image-based training sets, similar to those used to train, say, Facebook’s AI to recognize faces and places in uploaded pictures. Paglen then taught a second AI to draw images - ones that could confuse the first. The two programs communicated back and forth until an image was finally created that the first could not sort; an image with no basis in the first AI’s reality. These are what Paglen calls “hallucinations.”
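For readers curious about the mechanics, the back-and-forth Paglen describes can be sketched in miniature. The toy below is my own illustration, not Paglen’s Chair platform: the “training set” is just two labeled clusters of points standing in for his image corpora, the first “AI” is a simple logistic classifier, and the second “AI” evolves a candidate by random mutation until the classifier’s output sits at maximum uncertainty - an input the first program can no longer sort.

```python
import math
import random

random.seed(0)

# Stand-ins for the image-based training sets: two labeled clusters
# of 2-D points ("class 0" and "class 1").
data = ([(random.gauss(-2, 0.5), random.gauss(-2, 0.5)) for _ in range(100)] +
        [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(100)])
labels = [0] * 100 + [1] * 100

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Classifier's confidence that x belongs to class 1."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# First "AI": a logistic classifier trained by gradient descent
# to sort the training set into its two categories.
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    gw0 = gw1 = gb = 0.0
    for x, y in zip(data, labels):
        err = predict(w, b, x) - y
        gw0 += err * x[0]
        gw1 += err * x[1]
        gb += err
    n = len(data)
    w[0] -= 0.5 * gw0 / n
    w[1] -= 0.5 * gw1 / n
    b -= 0.5 * gb / n

# Second "AI": starting from a real sample, evolve it by random
# mutation, keeping any mutant the classifier is *more* uncertain
# about, until the result is "unsortable" (output near 0.5).
hallucination = list(data[0])
uncertainty = abs(predict(w, b, hallucination) - 0.5)
for _ in range(3000):
    mutant = [hallucination[0] + random.gauss(0, 0.3),
              hallucination[1] + random.gauss(0, 0.3)]
    score = abs(predict(w, b, mutant) - 0.5)
    if score < uncertainty:
        hallucination, uncertainty = mutant, score

print(predict(w, b, hallucination))  # a probability near 0.5
```

The real systems use deep networks over images rather than a linear model over points, and the generator is itself trained rather than mutated, but the logic is the same: one program’s certainty is the other program’s loss.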
This choice of word, “hallucinations,” terrifyingly anthropomorphizes machines. In fact, the entire show melts together the artificial and the flesh. In Invisible Images, computers become not only human, but the best of mankind. They are intelligent, learning to sift through data and come to conclusions, but also artists, with more work in the gallery than Paglen himself. People at Invisible Images, on the other hand, become machines, a concept particularly emphasized in viewing Behold these Glorious Times! (2017).
Dressed remarkably in blue tones, the group of tourists seemed to be a single unit, standing in the back room, shifting their bodies ever so slightly when navigating from one “hallucination” to the other, guided by their young chaperone, who paused and gave lengthier descriptions on Vampire and Porn. Saying silent goodbyes to the people who had not noticed me there, I dipped away to the limbo space in between the entrance and gallery 2, to Paglen’s video installation.
Spanning 12 minutes, the video runs quickly through recognizable images as well as hallucinatory patterns. The former are, for the most part, video clips of faces. The expressions go by quickly, but our human eyes register certain information instantaneously. Juxtaposed against them are abstract forms - pixels and black-and-white patterns. Despite the abstraction, connections slowly begin to form. The back and forth between what one perceives as physical versus digital changes the context of the imagery, convincing the viewer that he or she is learning. Machine learning. The viewer sitting before the screen starts to see as an algorithm does, collecting information flying by quickly, categorizing images that are similar. A woman’s face is no longer hers, but becomes part of “expressions.”
The images on the screen, as your faithful reviewer later found out, are a combination of two types. One is used to teach AI - meaning, images fed into the machine. The other is what AI sees when viewing the former and attempting to understand them; it is how AI sees and perceives. In experiencing Behold these Glorious Times!, the viewer not only views like a computer, but actually thinks like one, entering its “psyche.”
There is a sequence in the installation that is particularly curious to me, one that differs from the rest of the imagery. It is a brief cutting between people who seem to be filmed by a computer camera, making swiping hand-gestures. Swipes are heavily connected with technology - they are everything from Minority Report’s interface to making choices on Tinder. They speak to human beings’ fascination with the possibilities of technology, but particularly to our complicity in filling machine-learning training sets. In the neo-digital age, human beings crowd-source machine learning. With every geo-tagged upload and profile picture, a new image is added to the schooling of artificial intelligence.
The human race’s survival depends on empathy, on the ability to see through someone else’s eyes. There is currently a disconnect between humans and their codependent partner, the computer, as one cannot fully see like the other. “Most images these days are made by machines for other machines, with humans rarely in the loop,” writes Paglen in his artist statement for Metro Pictures. “I call this world of machine-machine image-making ‘invisible images,’ because it’s a form of vision that’s inherently inaccessible to human eyes.” Or so it was, until this show.
 Another Berlin-based friend of Paglen’s is filmmaker Laura Poitras, known particularly for her work with, and documentary about, Edward Snowden - Citizenfour (2014). It seems that a coalition of data-centric artists has arisen in Berlin.
 Yes, including those without internet and social media.
 A piece that looks shockingly similar to the killer in the film series Saw.
 A sign of the times, though targeted at the wrong generation.
 The era that is completely engulfed in digital machines and new media; one that includes a generation that knows only the existence of virtual devices, rather than analog ones.
 Trevor Paglen for Metro Pictures, “Checklist with Artist’s Notes: A Study of Invisible Images” (2017).