By Brandon Grill
Music has changed drastically throughout the decades, perhaps more in the past century than in any other time in history. While genres and styles were in constant development and evolution prior to the widespread use of electricity, the instruments used by composers and performers were largely the same ones that had been used a hundred years before. A massive leap in music production occurred when electronic instruments and tools were incorporated into performances and recordings to make and augment sounds. Much of today’s most popular music doesn’t involve a single physical instrument at all, with computers providing entire albums’ worth of work through simulated sounds. And while electronic music has shifted away from live performance, some artists choose to create their own instruments to play live, even when all the sounds are originally computer generated.
An important distinction to make when it comes to music and instruments is the difference between analog and digital. Analog, for our purposes, refers to instruments that express sound as something continuous, defined by some physical variable. Violins, for example, are analog instruments: the sound they create propagates through the air, and its pitch can vary in a continuous manner. Digital refers to instruments that express data in discrete terms, such as the zeros and ones of binary code. A digital synthesizer, for instance, reacts when the artist presses a button on the keyboard. The button is either off or on, with no state in between. Even pitch-changing effects and other seemingly continuous dials on digital instruments are most often discrete settings with a limited number of states.
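To make that distinction concrete, here is a minimal sketch (an illustration, not code from any actual instrument) of what "going digital" means: a continuous tone is measured only at discrete instants, and each measurement is snapped to one of a fixed number of levels.

```python
import math

def quantize(value, levels):
    """Snap a continuous value in [-1, 1] to one of a fixed number of
    discrete levels -- the essence of a digital representation."""
    step = 2.0 / (levels - 1)
    return round((value + 1.0) / step) * step - 1.0

# A continuous 440 Hz tone (concert A), sampled at discrete instants.
sample_rate = 8000  # samples per second
samples = [math.sin(2 * math.pi * 440 * n / sample_rate)
           for n in range(16)]

# An 8-level "digital" version: only 8 distinct amplitudes can survive,
# no matter how smoothly the original wave varies.
digital = [quantize(s, 8) for s in samples]
```

However many samples are taken, the quantized signal can only ever hold eight distinct values, whereas the analog original passes through infinitely many amplitudes between any two of them.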
One of the earliest electronic instruments is the theremin, made popular by thriller movie composer Miklós Rózsa (and perhaps more recently by Sheldon Cooper from The Big Bang Theory). As a musician moves their hands through the electromagnetic field created by the theremin, the resulting disruptions are translated into sound. The whole instrument is still analog, but it was nonetheless a pioneering step toward new musical possibilities.
One of the most significant leaps in bringing digital instrumentation into the mainstream of music was with the digital synthesizer. This device, most often set up as a keyboard, simply creates certain preset sounds when the keys are pressed and has built-in features to modulate pitch and volume or to loop sounds. Even once artists started using these instruments, at the core of the work was a skilled performance. A complex synth riff is as much of a challenge as playing a song on a piano.
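At its simplest, the synthesizer's job is a mapping: a discrete key press in, a preset sound out. The sketch below is a hypothetical toy voice, not any real synthesizer's design, assuming standard equal-temperament tuning where MIDI note 69 is A4 at 440 Hz.

```python
import math

def note_to_freq(midi_note):
    """Equal-temperament tuning: MIDI note 69 is A4 = 440 Hz,
    and each semitone step multiplies frequency by 2**(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def render_key(midi_note, duration=0.5, sample_rate=44100):
    """Return the discrete samples a bare-bones synth voice might
    produce for one key press: a plain sine 'preset' at the key's pitch."""
    freq = note_to_freq(midi_note)
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

a4 = render_key(69)   # press the A4 key: a 440 Hz tone
c4 = render_key(60)   # press middle C: roughly 262 Hz
```

A real instrument layers envelopes, filters, and modulation on top, but the core remains this on-or-off mapping from key to waveform.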
While writing songs before a performance has always been standard practice in most genres, a critical change occurred when entire songs could be finished at an artist’s leisure. Due to the nature of digital music, many sounds can be recreated using nothing but a computer program. Programs on the market today, such as GarageBand and Ableton, offer thousands of sounds, all able to be manipulated in limitless ways. This paved the way for DJs to get away with standing in front of a turntable, or in recent years a laptop, and simply pressing play. Recent breakthrough trends, however, are shifting the focus back into live performance and recording.
The gap between analog and digital instruments has yet to close, and no computer program will create a “natural” sounding orchestra or rock band performance any time soon; there are just too many variables involved in the sounds produced by physical instruments. In theory, though, every sound a digital instrument makes can be recreated exactly in software. So what keeps these electronic instruments so widespread on stage? Perhaps the best explanation is that some artists prefer the imperfection of an off-the-cuff performance, even when computer-generated sounds are involved.
Jeremy Morris, an assistant professor of Media and Cultural Studies at UW-Madison, suggests that contemporary music innovation is approached from two sides. Not only is music itself being constantly changed and developed, but the technology side of it has become its own entity. There appear to be two approaches to bridging the gap between disciplines. “Some artists are so acutely aware of their own practice and their own way of making music that they design an instrument to do what they want to do,” Morris says. Artists such as Imogen Heap are known for modifying their instruments to create unique sounds, and ever since the classical era, artists have been known to request instruments built to certain specifications. What’s arguably new is the development of technologies first, without clear musical demand for them. This interest has grown among developers and academics alike, leading to new movements and gatherings dedicated solely to music development.
One such gathering is the Music Tech Fest, a traveling event that showcases new developments in music technology, with an emphasis on novel and unique ways to create and control sounds. Recent appearances have included devices that scan brainwaves and sensors that detect subtle finger movements to create sounds. Morris characterizes these instruments by claiming that, “the tactility of performance is being put back into instruments.” Technologies such as the Reactable allow multiple people to surround a table and use blocks to control the sounds, creating an entirely interactive musical experience not unlike playing a piano with friends. As Morris describes, the festival’s theme is “an approach to the instrument, as opposed to an approach to the music.”
Music today stands in a unique position, where engineers are influencing music just as much as musicians are influencing the engineers. Technology is advancing at amazing rates in all fields, and perhaps it was only natural that technical creativity caught up with musical creativity. This could be seen as a complete upending of the source of innovation, but it is not something that musical engineers fear. Morris is a contributing thinker to the festival’s manifesto, which describes how “Music technologies help us explore what it means to be human, to create, and to participate.” Music is at a bewildering point in its history, but perhaps the work that comes out of these modern collaborations between engineers and artists will come to define the cultural era in which we live.