Emptyset

Blossoms

By Bryon Hayes
Published Oct 9, 2019

7
The use cases for artificial intelligence and machine learning continue to flourish and spiral outward, growing in number by the minute. Medical researchers use the technology to discover new molecules that could cure difficult diseases; manufacturers implement it to predict when their equipment will break down; retailers even use it to optimize store inventory levels. The possibilities seem practically endless.
 
The use of AI and ML in music, however, feels antithetical to the art form: computers, after all, have no soul. Yet innovators like Emptyset's Paul Purgas and James Ginzburg have spent the better part of 18 months working with programmers and sound synthesists to create an AI platform that can produce audio. Not just audio, but audible structures: music. They trained a neural network by feeding it their own electronic music, along with ten hours of improvised acoustic recordings, and Blossoms is the outcome.
 
The music that pours from the Emptyset AI is incredibly complex. Shades of past recordings peek through, such as on opening track "Petal," an algorithmic language carved out of electronic noise and delivered with a rhythmic cadence. More abstract notions take shape as well: "Blossom" buzzes, hiccups and chirps before finding its own fragmented sense of repetitive locomotion. The more amorphous tracks, such as "Bloom," shed any notion of moving through time; this particular piece exists as a series of electronic squirts that hover inside a sub-aquatic antechamber before completely evaporating into bleak nothingness.
 
Blossoms shows much promise for AI-augmented composition in the realm of electronic sound. A software version of the Beatles is unlikely to exist in our lifetime, but Purgas and Ginzburg have proven that the boundaries of technological possibility are completely mutable.
(Thrill Jockey)
