I prompt DALL•E to generate an image – “39-year-old Filipino dad walks daughter to school in fall Austin, TX.” The loading screen indicates the algorithm is dreaming. It spits out four images. Three of them look like stock photos, but one of them sticks out to me. Conveniently, the photo is from behind. The daughter wears her hair like my daughter. The backpack is a little too Spider-Man-y for my daughter’s tastes, but my four-year-old might wear it. The dad in the image looks pretty close to me - a 39-year-old Filipino man living in Austin, TX. I’m pretty sure I own that jacket. I wore pants like those to a wedding at the Long Time where I got drunk, played some baseball, and injured my rotator cuff. I had that haircut in 2018 when we moved to Austin. I have that fat ass. The neighborhood has a sidewalk on one side of the street but not the other, which is better than the old place we rented in Crestview, which didn’t have any sidewalks. Could it really be? Am I just a data point in the AI’s algorithm? In real life, I’m walking my daughter home from her elementary school in the Allandale neighborhood of Austin, and she asks me what I think the future will be like. I tell her, “I don’t know.”
There’s a scene from Her, Spike Jonze’s 2013 sci-fi film about a nebbish, played by Joaquin Phoenix, who falls in love with his sentient operating system, Samantha, voiced by Scarlett Johansson. They’re having one of those absurd conversations that couples have that only exist in the bubble of intimacy. They ponder why parts of the body are where they are. Samantha wonders what it would be like for a butthole to be in one’s armpit. Phoenix’s character laughs at the thought of what toilets would look like. Samantha imagines anal sex. Her small, black-and-white screen produces a simple but effective drawing. One man with a butthole for an armpit is being penetrated in said butthole by a man whose penis is where you would expect. It’s supposed to be funny. It is. And it’s the kind of scene that could have only happened because of movie magic. Of the things that children of the ‘80s and ‘90s had on their “things that will happen in the future” bingo cards – flying cars, sentient robots, teleportation, meals in the form of pills – prompt-based image generation wasn’t on most people’s lists. I guess it was on Spike Jonze’s. Almost 10 years later, AI art programs like DALL•E, Midjourney, and Stable Diffusion have been developed to produce images from the simplest of prompts - images you can argue have never been seen before. You can also argue it’s just a clever amalgamation of data collected by a machine.
I am living in the future in Austin by way of Los Angeles (by way of Virginia). I’ve made friends here over the past four years, but when I meet a native and explain I moved here from Los Angeles, I feel compelled to apologize. I knew three things about Austin when I arrived. SXSW. Fastball. Bats! Those were good enough for me. But it also reminded me of Virginia. The “southiness” of it. The humidity. The eyes that look at you are friendly enough, but they’re asking, “Well, what the heck are you?” We were able to buy a house in February 2020. It’s small for two adults, two kids, and a dog, but I lived in smaller places when I was a kid with three or four adults – my mom, dad, grandma, and grandpa – along with four siblings.
I ask DALL•E to produce images of “Filipino Christmas Virginia 1995 found photo.” I signed up for DALL•E in June and was invited to use it in August. I’d used a light version available to the general public, which produced interesting results, but I was hopeful OpenAI’s full version would be more robust. An old colleague who’s an artist and found footage archivist had posted some haunting images of AI-generated art - a monster lit by a spotlight hiding behind a tree, a chalk drawing of a ghost with four arms around a fire, four men photographed from behind in an oddly lit conference room with drop ceilings. When making my own images, I used a favorite prompt of his, which is “found photo.” When you make this request of DALL•E, it takes the idea of a found photo - spontaneous, amateurish, off the cuff, bad - and finds the idea in its database to produce a convincing POV of someone who knows how to use a camera but doesn’t account for composition or symmetry or beauty or lighting. It feels real. The program spits out four images.
It’s as though DALL•E accessed my core memories and found photos from my childhood home: couches crowded with smiling brown faces and people sitting on the floor, the abundance of wood paneling, off-white, and blown-out windows. The only thing off was that the faces were all blank and blurred or distorted in a nightmarish way. Some photos have two Christmas trees. It feels very much like a dream - the kind where you know you’re in your house, but it’s not quite right. Some people might be offended that their lives are somehow being copied and extracted for public use. I feel happy. I grew up in Norfolk, VA, first-generation Filipino, and because my family was Mormon, we weren’t embedded in a predominantly Catholic Filipino community. So I grew up feeling very alone despite having two older brothers and two younger sisters. Yet looking at these pictures created by a machine from the archives of the internet, I have proof that I wasn’t alone.
Everyone has a different opinion of AI-generated art based on what field they work in. A lot of illustrators and graphic designers are grossed out by their work being referenced without permission, or terrified of losing their ability to feed themselves – the fear of being nickel-and-dimed by an unfeeling robot and a gaggle of tech bros. There’s a sense that robots are coming for our jobs, but that conversation has mainly been about physical labor. We’ll always have our creativity. Right? Things will stay the same until they aren’t. Things have changed. We can now replicate human imagination on the other side of the uncanny valley. Are we in danger of losing one of the defining characteristics of humanity?
Over the course of the pandemic, more than 100,000 people moved to Austin. I can’t claim to be any different kind of opportunist, other than that my family narrowly beat the rush. I feel lucky to take my daughter to a public school where they let her explore her creative side and learn how to talk about emotion. We go home, and she draws relentlessly. A drawing of our dog. A house she’s designing for the future. Kiki’s Delivery Service. She cobbles together pictures in her mind the way AI art generators do. She is my future. Outside the small chance she dies in a school shooting or some other unimaginable tragedy, she will outlive me. She can never be replaced. But if I post her drawings on the internet, they’ll be ingested as training data and cataloged. And when someone asks for a picture of a “dog with spot on eye childrens drawing,” her picture will in some sense be a part of it, forever accessible by anyone in the future, like a library with no authors. When we’re dead, all of our dreams can still live on for someone or no one to find - able to produce never-before-seen images, words, sounds, and constructions, all based on the data we fed it.
In Austin, there’s a sense that things are moving too far from what made the city weird, affordable, a haven for artists, outcasts, and college-aged Texans looking for cool music and better drugs. At this point, the coming of our AI benefactors is as inevitable as large swaths of the city being overtaken by tech companies. I’m not from here. I feel like there’s no fight to be fought. I feel lucky to have gotten in when I did, but I feel like I’m part of the death of a city that’s clinging to its last bit of magic. But I also think we’ve found a way for Austin to live forever. For any person, city, or idea to live past its physical existence. Put it in the cloud. Prompt the machine. What did Austin use to be like? We’ll be able to see it again, maybe even recreate it - all those memories mixed up in a sentient soup with found photos of Filipino Christmases.