The mirror ball is a recurring motif in A. A. Debling’s work. It has variously stood in for the all-seeing eye of surveillance, the irradiating blast of a drone strike, and a vision of apocalypse. But beyond destruction, it might also represent hope. For many queer people, nightlife is a mode of survival. Whether from homophobic and transphobic violence or from the HIV/AIDS crisis of the 1980s and ’90s, queer spaces have provided shelter, uniting chosen family under the glow of the mirror ball.
COVID-19 poses a novel threat to this support system, with spaces shuttered indefinitely and their long-term financial survival in jeopardy. The proposed project draws parallels between the queer community’s ability to adapt, evolve, and survive a pandemic at the macro scale and the coronavirus’s own processes of mutation, adaptation, and survival at the molecular scale. Using artificial intelligence, the artist will produce a continually evolving visual form. Much like the virus itself, which displays processes we associate with life without “living” in a strictly biological sense, AI operates in a grey (or even “queer”) space between the living and the non-living. One might also draw a direct comparison between the neural network’s ability to produce and iterate imagery based on past archives and José Esteban Muñoz’s definition of queerness as an “ideality that can be distilled from the past and used to imagine a future.” This project, then, does not seek to use AI as the latest technological gimmick: its aim is to establish a link between the latent spaces of the neural network, the viral life-form, and the conceptual category of queerness. The resulting multi-channel video work will be a freewheeling meditation on mutation, mortality, and queer futurity.
The artist has compiled a dataset containing thousands of images, all connected by their thematic and formal links to the concept of survival. Images include visualizations of the novel coronavirus, instruments of surveillance, apocalyptic futures, and the spinning mirror ball.
This dataset has been used to train a generative adversarial network based on the StyleGAN2 architecture, which synthesizes new and hybrid forms from the source imagery. [The training process is pictured to the right]
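As a point of technical reference, the following is a minimal sketch of how new stills might be sampled from the trained generator, assuming the model was trained with NVIDIA's stylegan2-ada-pytorch implementation; the snapshot and output filenames are hypothetical, and the repository's dnnlib and torch_utils modules must be importable for the pickle to load:

```python
import pickle

import PIL.Image
import torch

# Load the exponential-moving-average generator from a training snapshot.
# (Assumes NVIDIA's stylegan2-ada-pytorch code is on the Python path,
# since the pickle references its dnnlib / torch_utils modules.)
with open('network-snapshot.pkl', 'rb') as f:  # hypothetical snapshot name
    G = pickle.load(f)['G_ema'].cuda()

z = torch.randn([1, G.z_dim]).cuda()   # a random latent code
c = None                               # no class labels: the model is unconditional
img = G(z, c)                          # NCHW float32 image in the range [-1, +1]

# Convert to 8-bit RGB and save a single still.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB').save('hybrid-form.png')
```

Each fresh latent code yields a different hybrid of the source imagery, which is what makes the network's latent space itself a natural subject for the work.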
The trained model is already producing convincing still images and video, but additional funding is needed to fully develop the project. Since many of the source images are individual frames pulled from stock video, the most urgent cost is the licensing fees for their use. Beyond being a legal requirement, licensing will also markedly improve the aesthetic results, as vestiges of the Shutterstock watermark will no longer appear in the AI-generated images. After image licensing and data processing, the next phase of development is to incorporate the algorithmically produced footage into a full-fledged video work roughly 15-20 minutes in length. The final phase is the design and construction of a multi-channel installation. Financial support is sought for each stage of the project's development.
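To move from single stills to continuous footage, one common approach is to interpolate between latent codes and render each intermediate point as a frame. The sketch below (with hypothetical keyframe counts and frame timing) uses spherical interpolation, which tends to keep the intermediate codes in the region of latent space the generator was trained on:

```python
import numpy as np

def slerp(t, z0, z1):
    """Spherical interpolation between two latent vectors.

    GAN latent codes are drawn from a high-dimensional Gaussian, so moving
    along the sphere (rather than a straight line) yields smoother morphs
    between generated frames.
    """
    z0_norm = z0 / np.linalg.norm(z0)
    z1_norm = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_norm, z1_norm), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Hypothetical walk through latent space: a handful of random keyframe codes,
# with interpolated frames in between, each rendered by the generator G
# loaded as in the earlier sketch.
rng = np.random.default_rng(seed=0)
keyframes = [rng.standard_normal(512) for _ in range(8)]   # 512 matches G.z_dim
frames_per_segment = 60                                     # ~2.5 s per morph at 24 fps

latent_path = []
for z0, z1 in zip(keyframes[:-1], keyframes[1:]):
    for i in range(frames_per_segment):
        latent_path.append(slerp(i / frames_per_segment, z0, z1))
# Each vector in latent_path is then passed to the generator to render one frame.
```

The rendered frame sequence could then be assembled and edited into the proposed 15-20 minute video with standard post-production tools.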