Synths and Sensibility
From Beethoven to Kraftwerk, innovative artists have used new technology to make music more human, not less
by Dan Cohen
As we struggle to understand how AI will impact our creativity — or, perhaps, replace it — it’s instructive to look back at other convergences of technology and art. The history of music provides rich examples: the modern era has seen radical technological changes, from the restructuring of the concert hall two hundred years ago, to the advent of recording devices over a century ago, to amplifiers, synthesizers, MIDI, and other digital interfaces in the second half of the twentieth century, to the complete composition and performance of music through software in our own century.
Where has this unimpeded technological march led music, musicians, and listeners?
In his insightful cultural history Listening in Paris, which examines why concert audiences went from chatty and occasionally even rowdy to seated and silent between 1750 and 1850, James Johnson traces the rise of modern musical performance as a large-scale shared human experience, one often framed in Romantic terms. Beethoven not only produced expansive music that challenged and engrossed the listener with dramatic shifts in tone, but also took advantage of the acoustics of new spaces, filling the hall with more musicians and instruments (many of them, like the trombone and timpani, relatively novel in these settings) and even adding choruses. It was not just a big sound, but a profoundly moving sound, one that demanded rapt (and silent) attention. The technology deployed by the revolutionary composers of Beethoven’s age altered what music sounded like, but their purpose was very human: to create a form of social experience that referenced deep emotions and concepts.
More recent musical technologies and acoustic environments, like the synthesizer and the dance club, have done much the same, as Simon Reynolds details in Futuromania: Electronic Dreams, Desiring Machines, and Tomorrow’s Music Today. From their inception, synths sounded like the icy, distant future, and indeed they were quickly put to use on the soundtracks of science fiction films filled with robots and aliens. But soon the champions of these avant-garde instruments incorporated them into music in ways that emphasized humanity rather than minimized it.
Look at the early adopters of synthesizers and what they conveyed with the technology, including Germany’s Kraftwerk, Japan’s Yellow Magic Orchestra, and the UK’s Human League. They may have worn retro sci-fi jumpsuits, but their futurism, even their roboticism, was simply humanism wrapped in much cooler garb. They sought to portray and understand contemporary human experiences through sound: Do swift forms of transportation connect us or isolate us? What does identity mean in a massive, anonymous city? Is our interaction with new media healthy? How do we find joy in the monotony of modern life?
Likewise, the electronic music that followed these pioneers, into spacier or harsher or chiller modes, from dub to industrial to ambient, did not cede composition and meaning to digital technology, but used that technology to create new kinds of shared human experiences, in the dance hall, rave warehouse, or afternoon chillout. This is why Ralf Hütter of Kraftwerk, and synth bands like Front 242, preferred the term “electronic body music” — synthesizers and drum machines were just practical technologies, being used by humans to make music that enveloped and engaged the human body.
Reynolds’ chapter on Auto-Tune (a revision of his Pitchfork article from 2018, “How Auto-Tune Revolutionized the Sound of Popular Music”) shows it to be a fascinating precursor to AI, and a likely template for the incorporation of AI into music-making — one that is bimodal, involving both imperceptible and extremely unsubtle interventions in sound. Auto-Tune was invented by a mathematician, Andy Hildebrand, who helped Exxon model oil fields and then “realized that the same math that he’d used to map the geological subsurface could be applied to pitch-correction.” Hildebrand thought Auto-Tune would be a handy, behind-the-scenes technology to smooth out the imperfections of human singing, and indeed it has been used in countless recording studios for that purpose, in the pursuit of “perfect” human notes.
But anyone who has heard Auto-Tune’s signature effect knows that alongside this “intended” use of the technology, artists playfully took Auto-Tune into the realm of the experimental, finding unexpected joy in the wildly inhuman sonics the software could produce. Dialing Auto-Tune’s Retune Speed setting to 0 rather than the standard range of 10–50, so that the singer’s pitch instantly jumped from one note to the next rather than shifting gradually as our vocal cords naturally do, generated a strange new sound that cutting-edge producers liked and that audiences responded to. Auto-Tune at 0 famously generated hits from artists like Cher, T-Pain, Kanye West, and Future, and it’s still used extensively today.
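The difference between the transparent and the extreme setting can be made concrete with a toy sketch. The code below is a hypothetical illustration of the general idea, not Antares’ proprietary algorithm: it pulls a stream of detected pitches (as fractional MIDI note numbers) toward the nearest semitone, with a retune time of 0 snapping instantly and larger values gliding gradually, which preserves the natural slides and vibrato of a human voice.

```python
def correct_pitch(detected, retune_ms, frame_ms=10.0):
    """Pull each detected pitch toward the nearest equal-tempered semitone.

    detected  -- pitches as fractional MIDI note numbers (e.g. 60.3)
    retune_ms -- 0 snaps instantly (the extreme "Cher effect");
                 larger values move only part of the way per frame,
                 smoothing imperceptibly rather than audibly
    frame_ms  -- analysis frame length (hypothetical value)
    """
    corrected = []
    for p in detected:
        target = round(p)  # nearest semitone
        if retune_ms == 0:
            corrected.append(float(target))  # hard snap
        else:
            # cover only a fraction of the remaining distance this frame
            alpha = min(1.0, frame_ms / retune_ms)
            corrected.append(p + alpha * (target - p))
    return corrected

# A voice drifting sharp from C4 (60) toward D4 (62):
drift = [60.0, 60.3, 60.7, 61.2, 61.6, 62.0]
print(correct_pitch(drift, retune_ms=0))   # → [60.0, 60.0, 61.0, 61.0, 62.0, 62.0]
print(correct_pitch(drift, retune_ms=50))  # gentle nudges toward each semitone
```

With a slow retune time the correction is a series of small nudges the ear reads as a well-pitched singer; at zero, the stairstep jumps between notes become the audible, robotic effect itself.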
It’s not my cup of tea, but I get it: a new sound, plucked from many possible digital variations, connected with listeners. Whether you like it or not, Reynolds is probably correct that Auto-Tune is the defining “instrument” of twenty-first-century popular music, and even musicians who initially disparaged it as a generator of “false notes” have come to incorporate it from time to time, as audiences’ ears have adjusted to hear the sound as recognizable and captivating.
Such is the way that music moves forward, through technological changes and related shifts in taste. As James Johnson concludes in Listening in Paris:
Set in the stream of time, listening becomes a dialectic between aesthetic expectations and musical innovations. It is a continuous negotiation conducted at the boundaries of musical sense. Change occurs when music accessible enough to meet listeners’ criteria for meaning is at the same time innovative enough to prod them into revising and expanding those assumptions.
Further reading on this theme: My Northeastern colleague Deirdre Loughridge’s recent book Sounding Human: Music and Machines, 1740/2020 is terrific and covers additional historical ground. Credit: I have shamelessly nicked the title of this post from Simon Reynolds; it was too good to pass up.