From electric guitars to samplers to drum machines and beyond, the music we love is only possible thanks to the technology used to create it. In many ways, the history of popular music is really a history of technological innovation. In this episode, we partnered with BandLab to unpack four inventions that changed music forever. Featuring author and journalist Greg Milner.
MUSIC FEATURED IN THIS EPISODE
Original music by Wesley Slover
Prelude by Ghostnaut
To Little, With Love by Elvin Vanguard
And All the Rest by Dream Cave
Subtractions by Epocha
To Find You (with KYAND) by Modera
Out Linear by Sweet Stare
Grumpalo by High Horse
Grand Theft by Katori Walker
One Day by Ten Towers
Hopscotch Bop by Stan Forebee
I Feel You by Yuppycult
Swing Step by iamalex
Here With You by Super Duper
Watch our video shorts on YouTube, Instagram, and TikTok.
Follow us on Reddit, Twitter, and Facebook.
Sign up for Twenty Thousand Hertz+ to get our entire catalog ad-free.
If you know what this week's mystery sound is, tell us at mystery.20k.org.
Visit bandlab.com/download to start creating and sharing music anytime, anywhere.
Buy Greg’s book Perfecting Sound Forever: An Aural History of Recorded Music.
You're listening to Twenty Thousand Hertz.
[music in]
What makes a song great? Of course, the writing, the performance, and the arrangement are all important.
But there's another huge factor that’s really easy to miss - the technology behind the music.
Andrew: In some ways, technology is like an invisible instrument.
That's Twenty Thousand Hertz producer Andrew Anderson.
Andrew: We don't always notice the role it plays, but without it, songs just don't sound the same.
There are so many examples of new inventions that transformed the sound of music… from magnetic tape, to electric guitars, to drum machines, and beyond.
Andrew: Developments like these can change the course of music history. And sometimes, they can even change the world.
Let's get into it.
[music out]
Andrew: Music recording began back in the late 1800s. And due to the limits of technology, these recordings sounded pretty rough. As an example, here's a track from 1888 called The Lost Chord.
[clip: The Lost Chord]
Andrew: But over the next hundred years, recorded music became a closer and closer replication of live sound, [sfx: tape machine on] thanks to inventions like reel-to-reel tape, multitrack recorders, and high-fidelity microphones like this one. [sfx: tap tap]
Andrew: As time went on, musicians expected their instruments to sound as pristine as possible when captured on record.
[clip: Benny Goodman Sextet - Limehouse Blues]
Andrew: Here's a tune by the Benny Goodman Sextet, from the early forties. By modern standards, it sounds pretty vintage, but you can hear that recording quality had already come a long way since the 1880s.
Andrew: But then in the 1950s, something strange started to happen.
[music in]
Greg Milner: All of a sudden, you had these sounds that were just dirty and messed up.
Andrew: That's journalist and author Greg Milner. Greg literally wrote the book on the history of music technology. And he says that the 1950s were a turning point.
Greg Milner: Musicians just found ways to like mess it up, to create sounds that were interesting if not sounds that were actually you know, “good” based on normal standards of fidelity.
Andrew: During that time, you had artists like Big Mama Thornton, Howlin’ Wolf and Gene Vincent using abrasive guitar tones that hadn't been heard before.
[music out into montage: Hound Dog, Moanin’ For My Baby, Baby Blue]
Andrew: Those dirty guitar sounds are some of the first examples of intentional distortion.
[guitar chords in]
Now in music, there are a few different kinds of distortion. One of the most common is called harmonic distortion, which adds overtones to the original sound.
[chords switch to distorted]
These extra frequencies make the sound feel richer and more powerful.
[distorted chords up, then back to clean]
Another common type of distortion is clipping. This is when the signal gets boosted so much that the top of the waveform gets completely flattened.
[chords switch to clipped]
This makes it sound squashed and harsh, like when you crank the volume all the way on a cheap speaker.
[clipped chords out]
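For the technically curious, that flattened-waveform effect can be sketched in a few lines of Python (using NumPy). This isn't any real pedal's circuit - just a toy model of hard clipping, with the FFT showing the new overtones that clipping creates.

```python
import numpy as np

def hard_clip(signal, threshold):
    """Flatten any part of the waveform that exceeds the threshold."""
    return np.clip(signal, -threshold, threshold)

# One second of a pure 110 Hz sine wave: a single frequency, no overtones.
sr = 8000                       # sample rate in Hz
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 110 * t)

# Boost the signal 3x, then clip it at 1.0 so the peaks flatten out.
clipped = hard_clip(3.0 * clean, 1.0)

# With a 1-second signal, FFT bin k corresponds to k Hz, so we can
# read off the energy at the fundamental and at its 3rd harmonic.
spectrum_clean = np.abs(np.fft.rfft(clean))
spectrum_clipped = np.abs(np.fft.rfft(clipped))

# The clean sine has essentially no energy at 330 Hz...
assert spectrum_clean[330] < 1e-6 * spectrum_clean[110]
# ...but the clipped version has a strong new overtone there.
assert spectrum_clipped[330] > 0.1 * spectrum_clipped[110]
```

Those new harmonics are the "extra frequencies" mentioned above - they're what makes a distorted guitar feel richer and more aggressive than a clean one.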
Andrew: There's some debate over the first recorded song to use distortion, but the most popular candidate is Ike Turner's “Rocket 88,” released in 1951.
[clip: Ike Turner - Rocket 88]
Greg Milner: The legend is that the amplifier fell off the truck on the way to the recording session and it created this messed up sound that Ike Turner really liked.
Andrew: Stories like this were pretty common back then, because at the time, there weren't any distortion devices for sale. So if you wanted that gritty sound, you had to improvise. For example, Dave Davies from The Kinks got creative with his amp to make the distorted sound on “You Really Got Me.” Here's Dave in an interview with VH1.
Dave Davies: I came across this little amp in the shop, and I just got a razor blade and started to cut the cone in the speaker. [sfx: cutting into amp] I don’t know why. And I plugged it in, [sfx: amp switches on] and it made that, “urrgh,” that amazing sound.
[clip: The Kinks - You Really Got Me]
Andrew: But in 1962, Gibson changed the game with the Maestro Fuzz-Tone guitar pedal.
Fuzz-Tone Commercial: It's mellow. It's raucous. It's tender. It's raw. It's the Maestro Fuzz Tone.
Andrew: The Fuzz-Tone took that broken amp sound, and turned it into a pedal that you could [sfx: switch sound] switch on [sfx: switch off sound] and off as needed.
Fuzz-Tone Commercial: Now, let's listen to some of the unbelievable effects that you can create with the Fuzz-Tone.
[clip: fuzztone demo]
Fuzz-Tone Commercial: Wasn't that something?
Andrew: Well, it was definitely something, but it turned out most guitarists didn't want that sound on their record. What the Fuzz-Tone really needed was a hit song to take it into the big time. And in 1965, it got exactly that.
[clip: The Rolling Stones - Satisfaction]
Andrew: Played by Keith Richards using the Fuzz-Tone, “I Can't Get No Satisfaction” took distortion into the mainstream.
Andrew: But strangely enough, Keith never meant for that guitar line to be used in the final mix. Instead, he recorded it as a placeholder for horn parts that were supposed to be added later.
[sfx: horn part]
Keith thought the distorted guitar sounded gimmicky. But when the rest of the band heard it, they liked it. So they took a vote...
Mick: All those in favour of keeping the guitar part? Aye.
Stones: Aye.
Mick: All those opposed?
Keith: Nay.
And the rest is history.
[clip: Satisfaction]
Andrew: The crunchy tone of “Satisfaction” sparked a never-ending quest for more and more distortion in rock music, from Jimi Hendrix…
[clip: Jimi Hendrix - Purple Haze]
Andrew: To Black Sabbath…
[clip: Black Sabbath - Iron Man]
Andrew: To Slayer…
[clip: Slayer - Angel of Death]
Andrew: But of course, distortion isn’t just limited to hard rock and metal. At this point, it’s a standard part of a musician’s toolkit, whether it’s a pop star like Britney Spears…
[clip: Britney Spears - Toxic]
Andrew: Or a rapper like Tyler the Creator.
[clip: Tyler, The Creator - MAGIC WAND]
Andrew: So why does distortion sound so good?
[music in]
On a scientific level, distortion makes instruments seem louder. For example, here's a guitar line that’s totally clean.
[clip: clean guitar part]
Now if I play that same part with distortion on it, it sounds louder, even though the average volume is exactly the same.
[clip: distorted guitar part]
Andrew: But more than that, distortion is really about a feeling.
Greg Milner: The way I think about it with distortion is it creates friction and traction, you know, it's a way for the sounds to really take hold. It's a way that both sounds so right and so wrong. It doesn't sound the way that music should sound. And in that sense, becomes this whole new truth of the way music can sound.
Andrew: Distortion had a massive impact on music, because it allowed musicians to create sounds that simply weren't possible before. But not long after, another invention came to the fore, one that meant musicians technically didn't have to play at all: the sequencer.
[music in]
Sequencers come in all shapes and sizes, but basically they're a type of computer that tells instruments - like synthesizers or drum machines - what to play.
All you have to do is program the notes you want to hear [sfx: someone pushing buttons], the order and speed to play them in and then push play.
[sfx: play button pressed, melody repeats]
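That program-the-notes idea can be sketched in a few lines of Python. This is a toy model, not any real sequencer's firmware: you list the steps, pick a tempo, and the code works out exactly when each note should fire.

```python
def sequence(steps, bpm, loops=1):
    """Return (time_in_seconds, note) events for a repeating pattern.
    Each step is a 16th note; None means a rest."""
    step_duration = 60.0 / bpm / 4
    events = []
    for loop in range(loops):
        for i, note in enumerate(steps):
            if note is not None:
                t = (loop * len(steps) + i) * step_duration
                events.append((round(t, 4), note))
    return events

# An eight-step bass pattern at 120 BPM, looped twice. A sequencer
# repeats this with perfect timing, every time -- forever, if you let it.
pattern = ["C2", None, "C2", "Eb2", None, "G2", None, "Bb2"]
events = sequence(pattern, bpm=120, loops=2)
# The first note fires at t=0, and the pattern restarts exactly at t=1.0.
```

The machine-perfect timing is the whole point: every repeat lands in exactly the same place, which is what Greg means by organizing sounds "in a way that seems perfect."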
Greg Milner: It's one of the earliest examples of being able to organize sounds in a way that seems perfect. You know, you can actually put things together, make things link up in a way that has very little to do with live performance. And you can essentially make something go on forever.
[music out]
Andrew: Now, sequencers have actually been around for a really long time. For example, the famous Big Ben clock at the Houses of Parliament in London plays a pre-programmed sequence of notes every quarter hour, known as the Westminster Chimes.
[clip: big ben chimes]
Andrew: These days, it's powered by electricity, but when it first chimed in the 1800s, it was basically a sequencer powered by clockwork.
Andrew: Those self-playing pianos you see in old Westerns are also an early type of sequencer. The data is actually stored on a roll of paper punched full of holes, and that tells the piano which notes to play. Here's one playing The Entertainer.
[clip: player piano]
Andrew: However, electronic sequencers didn't come along until the 1950s. And the music that was made with them tended to be… well, pretty weird. Here's a piece by Raymond Scott from 1962 that he created with a home-made sequencer.
[clip: Raymond Scott]
Andrew: Believe it or not, this was actually meant to help babies fall asleep...
Andrew: Electronic sequencers first became commercially available in the late 1960s, around the time that synthesizers hit the market. But at first, they were really expensive, so they weren't used much in pop music. Here's a rare example from Tonto's Expanding Head Band, released in 1971.
[clip: Tonto's Expanding Head Band]
Andrew: It wasn't until the mid 1970s that sequencers became affordable for working musicians. And one of the first people who really put them to good use was an Italian composer and producer named Giorgio Moroder.
[clip in: Donna Summer - Love To Love You, Baby]
Andrew: Giorgio was a disco pioneer who wrote songs like Donna Summer’s 1975 hit “Love To Love You Baby.”
Andrew: But a couple of years later, Giorgio saw a movie that changed his mind about how his music should sound. And that movie was...
[clip: Star Wars theme]
Andrew: Here's Giorgio in an interview with The Guardian.
Giorgio: I went to look at the movie Star Wars, which had a scene called La Cantina, where they supposedly played the music of the future.
[clip: Cantina Song]
Giorgio: And I didn't think it was the music of the future. It looked like, but it didn't sound like.
Giorgio: So I thought the only way to do it is to do it with the computers, only computers.
Andrew: Giorgio set about bringing his vision of futuristic music to life. He started by writing a traditional bass line on a bass guitar, which would have sounded something like this.
[clip: bass guitar line]
Andrew: Then, he took that part and programmed it into a sequencer, which then triggered the notes on a synthesizer. That meant he could speed it up, so it sounded more like this.
[clip: synth bass line]
Andrew: And finally, he added some delay so that each note would be played twice.
[clip: synth bass with delay]
Giorgio: And suddenly it sounded da da da da da da. I said, “Oh, that's a whole new…” That was the key moment.
[clip: I Feel Love]
Andrew: The song was Donna Summer’s “I Feel Love,” and that precise, sequenced groove inspired generations of electronic artists, from Eurythmics…
[clip: Eurythmics - Sweet Dreams]
Andrew: To The Chemical Brothers…
[clip: The Chemical Brothers - Out of Control]
Andrew: To LCD Soundsystem…
[clip: LCD Soundsystem - Someone Great]
[music in]
Andrew: Today, almost all pop music uses a sequencer in one way or another, whether it’s for a drum beat, a bass line, a synth pattern, or something else. Basically, if a song is made on a computer, it’s almost certainly going to involve a sequencer.
Greg Milner: It's so much a part of music, it almost is hard to separate it from music itself. It's so ingrained in the ethic of music. It's a good example of something that's become so prevalent it's almost hidden in plain sight.
[music in]
Andrew: In a way, sequencers made computers seem more human, because now computers could play music. But then, another invention would do the exact opposite, making humans sound almost like computers. And while its original intention was to perfect human performance, the end result was something very different.
That’s coming up after the break.
[music out]
MIDROLL
[music in - Our Prayer by the Beach Boys]
This is a Beach Boys song called Our Prayer, from 1966. The harmonies are so perfect that they almost sound artificial.
[music up]
For a long time, this kind of pitch-perfect singing was only possible with years of training, and natural talent.
[music out]
Andrew: But then in the 1990s, a musician and engineer had an idea that would make this kind of sound - or at least, something similar - accessible for everyone.
[music in]
Greg Milner: Andy Hildebrand was a flute player. He actually went to university on a music scholarship. But then he later earned his doctorate in electrical engineering and he was doing work for the oil industry.
Andrew: Andy's job was to use soundwaves to find oil underground. He'd fire very low frequency sound waves at the ocean floor [clip: bass notes], and then listen for the echoes. By analyzing those echoes, he could then tell if there might be oil or not.
Andrew: Before long, Andy realized that same technology could be used in music. Here he is remembering that moment in an interview with the Smithsonian Institution.
[music in]
Andy Hildebrand: I had a luncheon at a trade show with my distributor and my distributor's wife. And we were talking about “What project do we do next to make money?” And she says, "Well, why don't you make me a box where I could sing in tune?" And I said, “Well, that's a lousy idea,” so I didn't do a thing.
Andrew: But Andy kept thinking about it, and eventually had a change of heart.
Andy Hildebrand: About eight months later, I thought, “Well, that might actually be a good idea,” and I knew exactly how to do it, because of my geophysical technologies. And by the same trade show, twelve months later I could demonstrate it with a live singer, it worked in real time.
Andrew: Andy called his invention Auto-Tune. And before long, almost every major recording studio had his software.
[music out]
Essentially, Auto-Tune analyzes the pitch of a voice, and then raises or lowers it to the nearest in-tune note. So if a singer is flat…
[clip: Roof - Hungry (not fixed)]
…you can fix it.
[clip: Roof - Hungry (fixed)]
Andrew: You could even choose how many milliseconds it took to shift to the note you selected. For a rap song, since each word is so short, you could get away with a really fast change.
[clip: Katori Walker - Grand Theft [Clean Version]]
Andrew: But for a slow ballad, with long, drawn out notes, you’d set it to a longer time, so that the changes would be more subtle.
[clip: Ten Towers - One Day]
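The two ideas at work here - snapping a sung note to the nearest in-tune pitch, and a retune-speed dial measured in milliseconds - can be sketched in Python. This is not Auto-Tune's actual algorithm (which does real-time pitch detection on a live voice); the function names and numbers are just illustrative.

```python
import math

A4 = 440.0  # reference tuning in Hz

def nearest_semitone(freq_hz):
    """Snap a detected frequency to the closest equal-tempered note."""
    semitones = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones) / 12)

def retune(detected_hz, target_hz, elapsed_ms, retune_ms):
    """Glide from the sung pitch toward the target over retune_ms.
    A setting of 0 jumps instantly -- the 'robotic' extreme."""
    if retune_ms == 0:
        return target_hz
    frac = min(elapsed_ms / retune_ms, 1.0)
    return detected_hz + frac * (target_hz - detected_hz)

sung = 446.0                     # slightly sharp of A (440 Hz)
target = nearest_semitone(sung)  # snaps to 440.0

# Subtle ballad setting: 20 ms into a 100 ms retune, the pitch has
# only closed 20% of the gap, so the correction is hard to hear.
subtle = retune(sung, target, elapsed_ms=20, retune_ms=100)
# Zero setting: the pitch leaps straight to the note.
instant = retune(sung, target, elapsed_ms=20, retune_ms=0)
```

The slower the glide, the more natural the result - which is exactly why the zero setting was assumed to be useless.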
Andrew: Technically, the dial could go down to zero milliseconds, meaning an instant pitch change. But Andy figured no one would want to do that, since it would sound so unnatural. But then, in 1998, this happened.
[clip: Cher - Believe]
Greg Milner: I can remember hearing it for the first time, just thinking "Wait, I know that voice, but that doesn't sound like anyone.” So it's like, "Who is that?"
Andrew: “Believe” by Cher was a massive international hit, and it didn't take long for other musicians to start copying that hyper-Auto-Tuned vocal sound. It was all over Kanye West's album 808s & Heartbreak…
[clip: Kanye West - Love Lockdown]
Andrew: It even made the crossover to alternative music with Bon Iver.
[clip: Bon Iver - Woods]
Andrew: And perhaps most famously, T-Pain made that robotic sound his trademark.
[clip: T-Pain - I'm Sprung]
Andrew: In fact, Auto-Tune became so popular that Andy Hildebrand won a Grammy for his invention… Although during his acceptance speech, he acknowledged that not everyone loves it.
Andy Hildebrand: I created Auto-Tune. I put a dial on the front of the software, and just for fun, I let that speed go to zero, not knowing that I was going to corrupt music for the rest of my life.
[music in]
Andrew: But for Greg, those stylized robotic vocal sounds are what make Auto-Tune so interesting.
Greg Milner: The people who invented it thought of it as something that would be a way to perfect something that was supposedly imperfect. But instead, it was used essentially as a musical instrument, right? As a way to make something completely different and sound sort of crazy.
Andrew: However, when Auto-Tune is used to make a performance pitch perfect, it can cover up the beautiful imperfections that make a voice unique.
Greg Milner: Yeah, I mean, the human voice is amazingly expressive, right? It’s more so, maybe, than any instrument. I mean, if you can just like put a little catch in your voice and it changes the emotion of it completely.
Greg Milner: And so that was always what I found sort of disappointing about when Auto-Tune took over. If you take away the real artistic uses of Auto-Tune, you're left with example after example after example of people using it to make vocalists sound almost superhuman, to sound more perfect than they could ever be, and there's a sort of way in which your brain knows that.
[music out]
Andrew: Distortion, sequencers and Auto-Tune all brought brand new sounds to the masses. But there’s one type of software that truly democratized music making. The Digital Audio Workstation, commonly known by its acronym DAW.
[music in]
Now, Digital Audio Workstation sounds like a pretty technical term, but the idea is simple: putting a recording studio inside your computer.
With a DAW, you can record multiple tracks, play with things like panning and EQ, and add effects like chorus or delay.
And you can do it all without ever messing with tape, which can be hard to work with, takes up space, and degrades over time.
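A DAW's mixer stage can itself be sketched in a few lines of Python (using NumPy): each track gets a gain and a pan position, and everything is summed into one stereo buffer. This is a toy model with a simple linear pan law, not any real DAW's code.

```python
import numpy as np

def mix(tracks):
    """Sum mono tracks into one stereo buffer, like a DAW's mix bus.
    Each track is (samples, gain, pan), pan from -1 (left) to +1 (right)."""
    length = max(len(s) for s, _, _ in tracks)
    stereo = np.zeros((length, 2))
    for samples, gain, pan in tracks:
        left = gain * (1 - pan) / 2    # simple linear pan law
        right = gain * (1 + pan) / 2
        stereo[:len(samples), 0] += left * samples
        stereo[:len(samples), 1] += right * samples
    return stereo

# Two "tracks": a bass tone in the center, a lead tone panned hard right.
t = np.arange(8000) / 8000
bass = np.sin(2 * np.pi * 55 * t)
lead = np.sin(2 * np.pi * 440 * t)
out = mix([(bass, 0.8, 0.0), (lead, 0.5, 1.0)])
# The left channel now contains only the (centered) bass.
```

Because it's all just numbers in memory, every fader move is non-destructive and endlessly revisable - no tape required.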
Andrew: DAWs actually have a surprisingly long history, with the first one being released in 1977: the Soundstream. It could record four channels of digital audio, and had built-in effects. You could even edit the audio using an oscilloscope - which was kind of like an early computer monitor.
[music out]
Andrew: One of the first musicians to use the Soundstream was our disco hero from before: Giorgio Moroder. Here's his 1979 song E=MC2, which was recorded entirely on a Soundstream.
[clip: E=MC2]
Andrew: Soon enough, DAWs started to replace traditional recording setups for a number of reasons. One of them is adaptability.
[music in]
Greg Milner: It does for sounds what the word processor did for writing and for words. Everything becomes infinitely flexible. You can build on top of the rough draft. And that was really powerful.
Andrew: And most importantly, DAWs made music recording far more accessible.
Greg Milner: It put music solely in the mind of the music maker. It no longer mattered what you could do in a studio because you could come up with ways of doing it at home in your bedroom. That was, in a way, I think, the last frontier. Now any space could be a place to make music, because physical space didn't even really matter anymore.
[music out]
Andrew: By the early nineties, there were quite a few affordable DAWs that could run on your home computer, like Cakewalk and Pro Tools. Before long, artists like Nine Inch Nails, PJ Harvey and the Beastie Boys were making entire albums using DAWs - or "in the box," as it's called within the industry.
Andrew: Here's the song “Where It's At” from Beck's 1996 album Odelay, which was recorded almost entirely at his producers' home using a DAW.
[clip: Beck - Where It's At]
Andrew: But it took a few more years before a song that was recorded, mixed and mastered entirely in a DAW reached number one on the US billboard charts. And that song was…
[clip: Ricky Martin - Livin La Vida Loca]
Andrew: These days, the majority of music is created entirely in the box. And that allows artists to really focus on details, in a way that wasn't possible in the past.
Andrew: For example, when Billie Eilish and her brother slash producer Finneas create a song, they actually assemble the main vocal from dozens of takes. Here they are explaining that process in an interview with David Letterman.
Finneas: Here is the vocal take for Billie's song, “Happier Than Ever.” We got up to like 87 takes.
Billie Eilish: So pay attention. Different take, right? So this is all one take? And then different take. Different take. Different take. And you would never know!
Andrew: And as the internet took over, DAWs opened the window for more and more collaboration. Today, you can work on a recording, sync it to the cloud, then someone on the other side of the world can pop it open and make their own changes.
Andrew: For instance, during the pandemic, a group called Trip the Witch recorded an entire album remotely, without ever meeting each other in person.
[clip: Trip the Witch]
[music in: Wesley Slover - Cognisphere]
Andrew: As for the future, the only certainty is that technology will continue to inspire musicians to come up with new sounds.
Greg Milner: I think in a way though, technology changes music more than music changes technology. And the reason I say that is because there are, you know, many instances of technology changing music in ways that the people who built the technology had no idea were going to happen.
Our relationship with technology is messy, and exciting, and constantly evolving. I mean, there’s a night and day difference between how we use technology today versus thirty years ago. And that complex relationship comes through loud and clear in the music we make.
Greg Milner: I mean, obviously I'm biased and I would never tell everyone that they should be a music geek, but I think it really behooves people to think about what music sounds like and why it sounds like that, and how maybe it sounded like ten, twenty, thirty, forty… a hundred years ago, even, and why music has changed and what it says about what we want out of not just music, but sort of out of life itself.
[music out]
[music in]
Twenty Thousand Hertz is produced out of the sound design studios of Defacto Sound.
Andrew: This episode was written and produced by Andrew Anderson.
Other voices: It was story edited by Casey Emmerling, with help from Grace East. It was sound designed and mixed by Justin Hollis.
Andrew: Thanks to our guest Greg Milner. Greg's book Perfecting Sound Forever takes an even deeper dive into this topic, and is available wherever you buy books.
Thanks also to BandLab for partnering with us on this episode. To learn more, visit bandlab dot com.
I'm Dallas Taylor. Thanks for listening.
[music out]