“There was a time in the ancient world – a very long time – in which the central cultural problem must have seemed an inexhaustible outpouring of books,” writes prominent Renaissance scholar Stephen Greenblatt. “Where to put them all? How to organize them on the groaning shelves? How to hold the profusion of knowledge in one’s head? The loss of this plenitude would have been virtually inconceivable to anyone living in its midst.” Centuries later, even for those of us not living in its midst, the vast corpus of creative works and knowledge produced, reproduced, and circulated during the Renaissance remains nearly impossible to conceive. For MEET Digital Culture Center’s Immersive Room, Refik Anadol Studio created a site-specific immersive art experience that emerged from RAS Lab’s most recent experiments in turning visual and textual Renaissance datasets into mesmerizing multimedia art pieces with machine learning algorithms.
The immersive multimedia installation features four chapters, each focusing on a separate dataset of paintings, sculptures, literary texts, and architectural works created between 1300 and 1600. These datasets were processed through custom GAN algorithms developed specifically to generate a dynamic, multidimensional shape matching the architecture and infrastructure of MEET. As the machine re-imagines these historical works of powerful imagination, elevated craftsmanship, and distinguished acumen, the installation shifts its immersive shapes, colors, and audio design, staging a walk through the machine’s latent data universe. By creating an audio-visual channel of multi-dimensionality through which to re-view the Renaissance corpus, the piece offers an entirely new and poetic way of renewing our connection to the living traces of art history.
About RAS: Machine Hallucinations
Machine Hallucinations is an ongoing project of data aesthetics based on collective visual memories of space, nature, and urban environments. Since the inception of the project in 2016 (Google AMI Residency), our studio has been utilizing machine intelligence as a collaborator to human consciousness, specifically DCGAN, PGAN, and StyleGAN algorithms trained on these vast datasets to unfold unrecognized layers of our external realities. We collect data from digital archives and publicly available resources and process it with machine learning classification models such as CNNs, variational autoencoders, and deep-ranking models to filter out people, noise, and irrelevant data points. The sorted image datasets are then clustered into thematic categories to better understand the semantic context of the data universe.
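The thematic clustering step described above can be sketched in miniature. This is an illustrative toy, not the studio's pipeline: the embeddings here are synthetic stand-ins for the features a CNN or variational autoencoder would extract from the archive images, and the simple two-cluster k-means below is an assumption about how such grouping might work.

```python
import numpy as np

def two_means(x, iters=10):
    """Toy k-means with k=2: group embeddings into two thematic clusters.

    Initialization is deterministic: the first embedding and the point
    farthest from it serve as the starting centroids.
    """
    c0 = x[0]
    c1 = x[np.linalg.norm(x - c0, axis=1).argmax()]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # distance of every embedding to every centroid, then nearest wins
        d = np.linalg.norm(x[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(2):
            centroids[j] = x[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "themes" in a 64-dimensional feature space,
# standing in for, say, portrait features vs. architectural features.
rng = np.random.default_rng(1)
theme_a = rng.normal(0.0, 0.1, size=(100, 64))
theme_b = rng.normal(3.0, 0.1, size=(100, 64))
labels = two_means(np.vstack([theme_a, theme_b]))
```

In practice a pipeline like this would run over CNN-derived embeddings of the full archive with many more clusters; the two-theme case simply makes the grouping visible.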
This expanding data universe not only represents the interpolation of data as synthesis, but also becomes a latent cosmos in which hallucinative potential is the main currency of artistic creativity. To capture these hallucinations from a multi-dimensional space, we use NVIDIA’s StyleGAN, StyleGAN2, and StyleGAN2-ADA, which give the machine a model through which to process the archive; the model is trained on subsets of the sorted images, creating embeddings in 1,024 dimensions.
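The "walk through the latent data universe" amounts to interpolating between points in the trained model's latent space and rendering each intermediate point through the generator. A minimal sketch of such a latent walk follows; the spherical interpolation (slerp) is a common convention for GAN latent spaces rather than a documented detail of this installation, and the `G(z)` generator call is only indicated in a comment.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    Often preferred over linear interpolation for GAN latent walks,
    since it keeps intermediate points at plausible norms.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:  # nearly parallel: fall back to linear blend
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Two points in a 1,024-dimensional latent space, per the embedding
# dimensionality described above.
rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=1024), rng.normal(size=1024)

# 30 frames of the walk; each z_t would be rendered as frame = G(z_t)
# by the trained generator to produce the moving imagery.
walk = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 30)]
```

Chaining many such segments between sampled latent points yields the continuous, self-regenerating imagery the installation projects.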
As a masterfully curated multi-channel experience, Machine Hallucinations brings a self-regenerating element of surprise to the audience and offers a new form of sensational autonomy via cybernetic serendipity.
Ho Man Leung
Raman K. Mustafa
15-Channel Projection
8-Channel Sound
20 m × 10 m Immersive Space