
Amen Break: The world’s most sampled drum beat


This show was written and produced by James Introcaso.

There’s a sample of music that’s been heard around the world in over 2,000 songs. Odds are you’ve heard it many times and didn’t even realize you were listening to the same breakbeat. The amen break might be the most sampled piece of music in history. Where did it come from? This episode features interviews with artist Nate Harrison and Grammy-winner Richard Louis Spencer.

MUSIC IN THIS EPISODE

Umber - Aether
All I Know - Stray Theories
Tell Me A Story - Chad Lawson
Smooth Talk - Phillip Cuccias

AMEN BREAK EXAMPLES

Straight Outta Compton - N.W.A.
I Desire - Salt-N-Pepa
Futurama Theme - Christopher Tyng
Can't Knock The Hustle (Desired State Remix) - Jay-Z feat. Mary J Blige
Eyeless - Slipknot
In for the Kill (Skream's Let's Get Ravey Remix) - La Roux
Pigs - Tyler, The Creator
King of the Beats - Mantronix
Tundra - Squarepusher
Fear - Amen Andrews
Feel Alright Y'all - 2 Live Crew
Compton - The Game feat. Will.i.am
Red Eye - Big K.R.I.T.

Twenty Thousand Hertz is produced out of the studios of Defacto Sound and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out wetransfer.com for all of your file sending needs!

Donate to Richard Louis Spencer at amen.20k.org.

TRANSCRIPT

[SFX: Amen break at normal speed]

You’re listening to Twenty Thousand Hertz, the stories behind the world’s most recognizable and interesting sounds. I’m Dallas Taylor.

[SFX: End Amen break]

What you just heard is called the amen break, or ah-men break, depending on how you say it. Either way, it's likely the most sampled piece of music in the world. You've definitely heard it a million times, but you might have a hard time remembering from where. So, let's hear those six seconds again. This time, see if you can remember where you've heard it.

[SFX: Amen break at normal speed]

[SFX: Straight Outta Compton (radio edit)]

[SFX: I Desire]

[SFX: Streets on Fire]

[SFX: Futurama Theme]

The amen break has also been sped up.

[SFX: Can’t Knock the Hustle]

[SFX: Eyeless]

[SFX: Scary Monsters and Nice Sprites]

And it’s been slowed down.

[SFX: Minefiles]

[SFX: Pigs]

[SFX: King of Beats]

It’s been used in commercials.

[SFX: Jeep Commercial]

The amen break is sampled in over 2,000 songs and counting. If you search Amen Break you’ll find examples and curated playlists everywhere. But, where did this beat come from?

[Music in]

Nate: It is about a five or six second passage in the middle of a song called "Amen, Brother" that was recorded by a band in the late 1960s called The Winstons.

That's Nate Harrison, an artist and professor at Tufts University. Nate did extensive research on the break for an audio art project called "Can I Get An Amen?"

Nate: In the middle of the song, there's a drum breakdown where all the other instruments drop out.

[Music drops out to amen break]

The drummer, GC Coleman, does his thing for like five or six seconds.

[Music out]

He syncopates them in this interesting, weird way.

Imagine like a… four-on-the-floor standard beat, like a one, two, three, four.

[SFX: Standard Beat]

A breakbeat has a little bit more syncopation to it. The downbeats would happen maybe in between beats, and whatnot. It gives it a little bit of a funkier vibe.

A break is just short for a breakbeat. There's the Tighten Up break...

[SFX: Tighten up break]

… and there's the Funky Drummer break...

[SFX: Funky drummer break]

… and there's the Apache break.

[SFX: Apache break]

All of these breaks were taken from old records, just like the amen break.

[SFX: bump out Apache break]

More than a decade passed after The Winstons recorded "Amen, Brother" before the break began to show up in hip hop tracks. That's mainly because sampling music didn't really come into vogue until the 80s.

Nate: Samplers were actual, physical boxes, machines. They were about the size of a DVD player. Nowadays, it's all software on a computer.

Think of the golden era of hip hop music in the mid to late 80s and early 90s. That whole 10, 12 year period is predominantly a period in which hip hop music, particularly, is lifting samples from older records: drum samples (SFX), guitar riffs (SFX), horns (SFX), all that kind of stuff.

Samplers became popular around the same time musicians were starting to use drum machines and synthesizers. At first, it was kind of a novelty.

Nate: Sampling was new and interesting. It reproduced real sounds, in contrast to the kind of synthesized, artificial sounds (SFX). Early electro music, early breakdance music, had a very robotic kind of sound, a futuristic kind of sound to it. To introduce sampling into it was to sort of recover the aesthetics of an earlier moment.

Sampling also had one other powerful element that made it desirable - nostalgia.

[Music in]

Nate: When producers get their hands on samplers they realize they can start borrowing the sounds of records that they had grown up listening to.

A record company called Street Beat Records put out a series of albums called Ultimate Breaks and Beats. These compilations included songs perfect for sampling.

Nate: That included a bunch of different breaks, including the amen.

[Music out]

The amen wasn't the only breakbeat featured, but it did become the most sampled. In the US, it was big in hip hop, while in the UK it was used for jungle and drum and bass. But, of all the breakbeats to choose from, why did the amen become the most popular?

Nate: The first thing with that break is that it's really long. It's like a six second sample, so there's a lot of material to play with.

Six seconds might not seem like much, but in the early days of sampling, it was a ton of time.

Nate: People would dig through the crates of vinyl records at used record stores looking for samples. If they come across one clean bar of a drum sample, they're happy. That's why the amen break is such a treasure.

In addition to its length, the amen break has variety.

Nate: In the course of those five or six seconds, there are a few different snare drum hits. Each one of those snare drum hits is slightly different than the others, because GC Coleman hit the drum a certain way each time, slightly differently than he did on the previous hit.

You can choose between snares. You can start chopping up the amen break and rearranging the individual beats into other configurations. Pretty soon, you start getting into some really interesting patterns and textures.

[SFX: Cymbal crash from the amen break]

In addition to rearranging the break, a musician sampling it can speed it up...

[SFX: Fast amen break]

Slow it down…

[SFX: Slow amen break]

Or even play it backwards.

[SFX: Backwards amen break]
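
To make those manipulations concrete, here is a minimal sketch of how a producer might chop, speed up, and reverse a sampled break in software today. It assumes the numpy and soundfile libraries; the file names and the slice order are hypothetical illustrations, not a recreation of any particular track.

```python
import numpy as np
import soundfile as sf  # assumed available; any audio I/O library would do

# Load a roughly six second mono drum break (the file name is hypothetical).
audio, sr = sf.read("amen_break.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold stereo down to mono

# Chop the break into 16 equal slices and rearrange them into a new pattern.
slices = np.array_split(audio, 16)
new_order = [0, 2, 2, 5, 4, 4, 7, 6, 8, 10, 10, 13, 12, 12, 15, 14]
rearranged = np.concatenate([slices[i] for i in new_order])

# Speed it up the old-school sampler way: keep every other sample,
# which doubles the tempo and raises the pitch an octave.
sped_up = audio[::2]

# Play it backwards.
reversed_break = audio[::-1]

sf.write("amen_rearranged.wav", rearranged, sr)
sf.write("amen_fast.wav", sped_up, sr)
sf.write("amen_reversed.wav", reversed_break, sr)
```

That sample-dropping trick is why sped-up breaks on early jungle and drum and bass records sound faster and higher-pitched at the same time.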

The amen break’s length and versatility made it so prolific among electronic musicians in the UK that finding new ways to use it became an intellectual pursuit.

Nate: It branched out even farther into so-called IDM, or intelligent dance music, which was kind of the response to the rave and dance culture in the UK. They would call it, like, electronic dance music that you can't dance to. A lot of that music also used the amen break. Tom Jenkinson, also known as Squarepusher, used it thoroughly, thoroughly, thoroughly.

Squarepusher’s indulgent use of the amen break can be heard in his track “Tundra.”

[SFX: Tundra from 6:00 to about 6:10]

Nate: Luke Vibert was one of the first people to do really beyond weird things with it. He recorded under the name Amen Andrews.

[SFX: Fear from 3:30 to about 3:40]

Obviously, this intellectual use of the amen isn’t limited to the UK. Tons of American artists have used it too.

[SFX: Feel Alright Y'all - 2 Live Crew]

[SFX: Compton - The Game feat. Will.i.am]

[SFX: Red Eye - Big K.R.I.T.]

When it comes to the amen break, and sampling in general, there are a lot of legal and moral questions.

Nate: The entire aesthetic of the 'Amen Break,' and I would say breakbeat culture generally, is an aesthetic of copying.

In some respects that goes against current copyright laws. It's kind of a legally contentious practice.

That's definitely the strange, bittersweet part of sample-based music: on the one hand, it's kind of revivifying old forms and maybe generating some interest in those older forms. But it's also a taking, too.

GC Coleman, the drummer, didn't make any money, certainly not any royalties or residuals or anything, from all that sampling.

GC Coleman passed away in 2006, but a surviving member of The Winstons named Richard Louis Spencer wrote “Amen, Brother.” He still holds the copyright to the song. Like GC, Richard was never paid royalties from the massive sampling of the song. We’ll hear from him after this.

[Music out]

MIDROLL

[Music in]

“Amen, Brother,” the song from which the amen break is sampled, was recorded by The Winstons in 1969. They had no idea their song would make such a cultural impact.

Richard: It was just a throwaway piece.

[Music out]

That’s Richard Louis Spencer, a Grammy-winner and former member of the Winstons. He’s the one who wrote “Amen, Brother.”

Richard: We were a group of young men in Washington, D.C., during the club scene in the '60s.

We were a bar band. We played in places and played all the hits, and we were very good at it.

I was the tenor saxophone player in the group. I ended up writing and singing the song that became a hit for us, but I was a tenor player.

The Winstons performed as the backup band for Curtis Mayfield and the Impressions.

[Music in]

Richard: We played with them for about six or seven months, and Curtis and I became very good friends. He was a very nice guy. He's probably one of the purest people I ever met.

He was a very, very straight up guy.

It was he who encouraged me to write, cause he always said oh, you got some good ideas.

[Music out]

After some encouragement and advice from Curtis, Richard composed a song that won him the 1970 Grammy Award for Best R&B Song. It wasn't "Amen, Brother." It was a song called "Color Him Father."

[Music in]

Richard: When I wrote the words for "Color Him Father," I tried to call my dad. My dad had left us in 1958, while my mom was still having children, and I ran up on him in New York.

We began talking and stuff over the years. And so then one morning, I tried to call him and his phone was disconnected, and the first thing that came to my mind was, wow, this guy is gone again. I wrote this song, kind of as a letter to him.

So I took it to rehearsal and recorded it and it became a hit.

[Music out]

When the Winstons recorded the single for "Color Him Father," they needed a B side. As a band that played mostly covers and backup, they didn't have a lot of options.

The only other original the Winstons had was a chaser - filler music that engages a live audience as the announcer introduces the band. You still hear chasers today, mostly in late night talk shows whenever a new guest is introduced.

Richard: You have some music to bring them on. It was very short and when they went off, [SFX: sings the instrumental] amen, brother.

The Winstons made their instrumental chaser their B side and called it, “Amen, Brother.”

[Music in]

Richard: During that time everybody had drum breaks and we had been doing songs where Greg would play these drum beats.

Richard asked his drummer, GC or Greg, to play a breakbeat during “Amen, Brother.”

Richard: I said that just sounds too much like so-and-so and so-and-so, because I was kind of the leader of the band at that time. I said, why don't you take the piece from blah, blah, blah. Anyway, I told him two or three pieces he was going to put together, and he did.

It was just another drum break, only this one was a composite of a couple that Greg played.

That became the Amen break.

It was a filler. A throwaway, as they call it.

[SFX: Amen Break]

[Music out]

With a hit single and a Grammy-winning frontman, the Winstons were about to make it big.

[Music in]

Richard: The Winstons were flown into New York. Our manager had signed us, and they had set up this big 38-week tour opening for Creedence Clearwater, and it was just like the answer to a prayer. We were going to make pretty big money.

Then we had this big meeting, what I call a signing party, on the 126th floor over on Avenue of the Americas. It was a beautiful thing.

[Music out]

But what Richard didn’t know was that the rest of the Winstons weren’t in it for the long haul. They were planning on quitting the band.

Richard: They brought the contract around for us to sign, and they took that contract and said, well, we have to take them to our lawyers. I said, well no, this is not a negotiation. And it was pretty obvious right then that they had intended to quit.

The guys said, well, you can bring them back in the morning. These New Yorkers, they'd been through this before. They knew the group was finished.

[Music in]

After the Winstons broke up, Richard left the music industry and had an eclectic career.

Richard: I sat around for about two and a half years feeling sorry for myself.

I got a job working at a liquor store, delivering liquor in Georgetown. The same clubs where I used to hang out and spend $200 and $300 a night on booze, here I was pushing liquor up in there.

Then I got the job at the transit system driving a bus, and I absolutely loved it.

It was such a great thing for me because it's a people thing. Then I went back and enrolled in a university, and so I was working and attended university at the same time.

I worked in the transit system for 28 years. I had done my BA and my master's and I came back and I was in town here where I am now, I was only 58 years old.

I wasn't ready to call it quits. Plus I had a son. I had an 11-year-old son I brought home with me, so I went into teaching. I taught from 2000 to 2008 and I loved it.

[Music out]

Richard was busy. He was working, going to college, and taking care of his son. This was all in the '80s and '90s, before the internet. He had no idea that "Amen, Brother" was being sampled in all of these songs.

Richard: I had no idea about the whole amen break thing until almost the early 2000s. After I learned how to use the computer, I realized it was one of the most sampled pieces of music in history.

I was just amazed NWA had used it...

[SFX: Straight Outta Compton (radio edit)]

and Futurama...

[SFX: Futurama Theme]

I just looked at the list, and it was just kind of heartbreaking, because I realized my publisher had really just robbed me. I spoke to a lawyer about it and he said, well, it's been 10 years, and this, that, and the other.

There's a wine in Australia called the Amen Break, and here I was, sitting around eating sardines and drinking sodas and feeling sorry for myself, and somebody was getting paid.

Richard tried to move on with his life, but people kept bringing up the amen break.

[SFX: Phone vibrating]

Richard: I started getting calls from these young men from Great Britain, and they were almost worshiping that thing over there. It was in that whole jungle and drum and bass thing.

[SFX: Doorbell]

Some guy showed up with television cameras and they did an interview. They said it was for the BBC or something.

They started saying to me, man, you should be worth about $30 million.

Nate agrees with that estimate.

Nate: He'd certainly be a millionaire if he had gotten just a few pennies from every time somebody used the 'Amen Break.'

But Richard wasn't a millionaire. He hadn't collected anything from the thousands of songs that sampled "Amen, Brother." For years, he was asked to speak about his influential break while having to acknowledge he was never paid for it. Then a few years ago, he got an email from a UK-based DJ named Martyn Webster.

Richard: It seemed like he was suggesting that some of these people felt badly and they wanted to take up some money for me. I had never heard of a GoFundMe, to tell you the truth.

Martyn asked Richard if he could set up a GoFundMe page. The page allowed musicians around the world who sampled “Amen, Brother” to donate money as a thank you to Richard.

Richard: I said well fine, I had no idea what that meant. They started sending money around.

To date, the GoFundMe efforts have raised over thirty thousand dollars for Richard.

Richard: It was very nice of them, too, because these are young people who probably weren't even alive when “Color Him Father” and amen break and stuff came about.

[Music in]

Richard has never officially been paid royalties for the over two thousand known samples of the amen break, but when he looks back on his dynamic life, he's also got a lot to be proud of.

Richard: It was amazing even when I retired, there were people at Metro who never knew that I had a record out. Not that I was trying to hide it but it wasn't anything to talk about. It was great, I enjoyed it, move on.

I've been inducted into the North Carolina Music Hall of Fame, and I'm also in the D.C. Legendary Musicians Hall of Fame.

I published two books.

Mostly, I'm proud because I raised a young black man down south by myself. He graduated from Pfeiffer University. I'm very proud of him, and now he's coaching soccer at Georgetown Visitation in D.C., and he works with kids with special needs. And he's also the head coach of the varsity girls at Langley High School. Very proud of that.

I'm more proud of that than anything.

It's been a good, good life, man.

CREDITS

Twenty Thousand Hertz is produced out of the studios of Defacto Sound. If you do anything creative that also uses sound, go check out defactosound dot com. And don’t forget to reach out. We’d love to know who you are.

This episode was written and produced by James Introcaso… and me, Dallas Taylor. With help from Sam Schneble. It was edited, sound designed and mixed by Colin DeVarney.

Thanks to our guest, Nate Harrison. You can check out Nate's project, "Can I Get An Amen?" and all his other work at nkhstudio.com.

Thanks also to Richard Louis Spencer. Please consider showing him some monetary love for his contributions to the music industry. You can do that at amen dot 20 kay dot org. That's amen dot 20 kay dot org. We also put this link in the show description. Also, I hear there are a few celebrities in the music business that listen to the show. If that's you, show your respects by sending some money Richard's way.

The music in this episode is from our friends at Musicbed. Having great music should be an asset to your project, not a roadblock. Musicbed is dedicated to making that a reality. That's why they've completely rebuilt their platform of world-class artists and composers with brand-new features and advanced filters to make finding the perfect song easier and faster. Learn more at musicbed.com/new.

You can say hello, submit a show idea, give general feedback, read transcripts, or buy a t-shirt at 20k.org. We’re also on Facebook and Twitter. You can also sign up for our superfan newsletter at newsletter dot twenty-kay dot org. Hearing from listeners is the most fun thing about making this podcast, so please don’t hesitate to drop us a note.

Finally, be sure to tell all your friends about the show.

Thanks for listening.

[Music out]

Recent Episodes

Ultrasonic Tracking: Are our phones really listening to us?


This show was written and produced by Leigh McDonald.

Did you know your phone is a really good listener? Apps on your phone might be sending and receiving data over ultrasound. Ultrasonic communication is used for everything from tracking your daily habits to enabling light shows at music festivals. We hear from Yale Privacy Lab's Sean O’Brien and Michael Kwet, and privacy and technology counsel Katie McInnis. We also discuss the more positive uses of data over sound with LISNR CEO and co-founder Rodney Williams.

MUSIC IN THIS EPISODE

Autumn Eyes by The light The Heat
The Fairest Things by Chad Lawson
Fore by Steven Gutheinz
We Need Each Other by Dexter Britain
Gentle Without by Steven Gutheinz
Chasing Time by David A Molina
Butterflies (Night Hawk Remix) by Tony Anderson
Miles (instrumental) by Sonjo
Finding Glass by Steven Gutheinz

Check out Defacto Sound, the studios that produced Twenty Thousand Hertz, hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out wetransfer.com for all of your file sending needs!

Go to forhims.com/20k for your $5 trial month.

TRANSCRIPT

[SFX: Mall ambience]

You're listening to Twenty Thousand Hertz, the stories behind the world's most recognizable and interesting sounds. I'm Dallas Taylor.

[Music Start]

Put yourself in a shopping mall. What do you hear? Maybe the sound of clothes hangers sliding across a rack…

[Clothes hangers SFX]

… or a cash register ringing up a purchase…

[Cash register SFX]

...maybe it’s rustling shopping bags?

[Shopping bags SFX]

What you probably don't hear is this:

[Macy’s signal SFX]

Just on the edge of human hearing, at around 18 thousand to 20 thousand hertz, data is being transmitted over sound. It’s called ultrasonic communication, even though it might be audible to a child or someone with excellent hearing. The sample you just heard has been pitched down so the rest of us can hear it.

[SFX: Continue Macy's signal]

Sean: Ultrasonic tracking is sci-fi, right? It's the kind of thing that seems like it comes out of a comic book or a movie. And I think that gets under people's skin.

That's Sean O'Brien, from Yale University's Privacy Lab.

Sean: We do privacy and security work and we look at advertising trackers inside of mobile apps, such as the ultrasonic trackers.

Ultrasonic tracking is tracking that's done through your microphone.

Sean: If you have an app on your phone that allows microphone permissions, permissions to record onto your device, it can eavesdrop on you in the room and light up that microphone when you don't know it.

Providers can embed their ultrasonic tones, or beacons, as they call them, into television shows and advertisements.

Sean: Let's say for example you're playing a game...

[Tiny Wings SFX]

...and you have the television on…

[Jessica Jones SFX]

Sean: There could be a signal coming through that television that's ultrasonic, or near-ultrasonic in most cases, that you can't hear. That sound can be picked up by your microphone and processed by the app, which then communicates with a server on the internet so that advertisers can gain data about what you're watching and potentially where you are.

If you’re listening to this, you’re probably using your phone at this exact moment. You carry that little device with you everywhere. And it might be spying on you. It can listen to what’s around you, and give that information to advertisers so they can get you to buy stuff.

[Music Start]

Sean: Hopefully at this point people have at least heard of binary and understand that there are zeros and ones inside computers. Which still sounds pretty mystical. The ability to discern between different tones can be correlated to zeros and ones. Or, to use a simplified example, which we wouldn't see in the wild, the 26 letters of the alphabet. You could have 26 different ultrasonic frequencies that are slightly different, so we call it frequency shift keying, because there's shifts in the frequency, and they could do A through Z with these tones.

You can look at the waveform. So a microphone, and devices that are specifically designed to analyze sound, can look at these waveforms and get data from them in this way.

[Music out]
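
To make Sean's simplified alphabet example concrete, here is a minimal sketch of the frequency-shift-keying idea in Python, assuming numpy. The sample rate, symbol length, and frequency spacing are illustrative choices for this sketch, not the parameters of any real beacon provider.

```python
import numpy as np

SAMPLE_RATE = 44100        # CD-quality audio
SYMBOL_SECONDS = 0.05      # illustrative symbol length
BASE_FREQ = 18_000         # near-ultrasonic band mentioned in the episode
STEP_HZ = 75               # illustrative spacing between the 26 letter frequencies

def letter_to_tone(letter: str) -> np.ndarray:
    """Map 'a'..'z' to one of 26 closely spaced near-ultrasonic sine tones."""
    index = ord(letter.lower()) - ord("a")
    freq = BASE_FREQ + index * STEP_HZ
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq * t)

def encode(message: str) -> np.ndarray:
    """Concatenate one tone per letter into a single waveform."""
    return np.concatenate([letter_to_tone(c) for c in message if c.isalpha()])

def decode_symbol(samples: np.ndarray) -> str:
    """Find the loudest frequency in one symbol and map it back to a letter."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum)]
    index = int(round((peak - BASE_FREQ) / STEP_HZ))
    return chr(ord("a") + min(max(index, 0), 25))

waveform = encode("hello")
symbol_len = int(SAMPLE_RATE * SYMBOL_SECONDS)
decoded = "".join(
    decode_symbol(waveform[i:i + symbol_len])
    for i in range(0, len(waveform), symbol_len)
)
print(decoded)  # -> "hello"
```

A real deployment would layer on synchronization, error correction, and robustness to speaker limits and room noise, which is where the hard engineering lives.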

This technology is also being used outside the home. You can find it at sporting events, music festivals, and yes, even the mall.

Sean: They take a look at people in retail outlets and they try to do things like, for example if you're walking by a rack of clothing, they might send you an advertisement for some clothing on that rack. It might be a 50% off coupon, it might be some other kind of promotion, that's going to try to motivate you to buy that piece of clothing.

Remember that sound at the top of the show?

[Macy’s Signal SFX]

[Music start]

That was found at a Macy's department store nearby. The provider responsible for the ultrasonic beacon is Shopkick, and they're exclusively in retail stores. Shopkick has an app that lets you earn points and gift cards for walking into stores like Lord & Taylor, Yankee Candle and American Eagle. When you walk in, your phone picks up this ultrasonic beacon from the store speakers, and lets the app know that you're there.

The thing is, Macy’s doesn’t advertise integration with the Shopkick app. Shoppers can’t earn points for visiting. So it’s unclear how they’re using Shopkick’s technology. We reached out to Macy’s and Shopkick for interviews, but they declined.

[Music out]

While earning points and gift cards for simply walking around the mall is enticing, there’s a bigger picture here. And in this case, the bigger picture is big data.

Michael: There is an incredible amount of things that can be learned about an individual based on a small amount of data.

That’s Michael Kwet. He works with Sean at Yale Privacy Lab.

Michael: Companies can infer quite a bit about you. They can infer what your sexuality is, what your politics are, and we're learning that they're able to infer things about potentially your mental health based on the frequency of words you use, how often you swear.

And it’s not just ultrasonic tracking. Yale Privacy Lab found that over 75% of Android apps have some kind of tracker. Apps can use WiFi, Bluetooth and GPS to track your behaviors. And these trackers can work together to collect even more data from you. Here’s Sean again.

Sean: The message we're trying to bring is that this tracking is layer after layer after layer, really interwoven, very difficult to untangle the business relationships between these different trackers.

It's not just that when I go get an Android device or get an Apple device that Google or Apple are looking at me. It's that there's this entire ecosystem of trackers that are doing all kinds of nuanced things to track me, sharing data with each other, building profiles of us that can usually be used to work backwards and identify us, because they're unique to us.

Sean and Michael are confident that these trackers are in iOS apps, too. But Apple has more restrictions on their devices and software, so it’s harder to research.

Sean: We know that these trackers are also in iOS apps. We want to be very careful; at Yale Privacy Lab we always want to say that this is not a Google versus Apple thing. There are very strict laws in the United States specifically about circumventing DRM. That's digital restrictions management, or digital rights management, as they like to call it. Not being able to get around the pieces of software that lock down an iPhone, because you could go to a federal prison, is a big barrier for us as researchers.

[Music start]

While there are strict laws protecting proprietary information, there isn’t much protection for the consumer.

Katie: So to some extent, this is a little bit of a wild west, right? Like, this is kind of brand new technology.

That’s Katie McInnis. She’s a privacy and technology attorney. With Katie’s help, the Federal Trade Commission issued warnings to apps using ultrasonic trackers. The FTC is the government agency that protects consumers.

Katie: We wrote comments to the FTC about how users are tracked, and one of these methods was ultrasonic beacons, which we were highly concerned about, because it was really unclear to the user that their activities across devices were being correlated using an ultrasonic audio beacon. And we felt like, unlike other methods of tracking, this one had the least amount of consumer exposure.

[Music out]

The FTC warned apps against using SilverPush, which provides ultrasonic tracking in retail stores. And when they got the warning, SilverPush said they'd end their tracking program. But because of how the FTC works, they couldn't have prevented SilverPush from the start.

Katie: Unfortunately, in the U.S., we have a very fragmented system of privacy enforcement. The FTC doesn't really have rule-making authority, unlike most other agencies. And so they can't create prospective rules and then regulate future actions. They can only look at something retroactively, let's say, that was unfair and deceptive to users.

[Music start]

The researchers at Yale Privacy Lab found eight Android apps that still use SilverPush. Most of them are international, though, and outside the scope of US law.

One of the few laws that does protect consumers in the US is the FTC Act, which established the Federal Trade Commission. This act protects consumers against "unfair or deceptive acts or practices in or affecting commerce." Basically, it protects consumers against the shady stuff businesses sometimes try to pull. This act was signed into law by President Wilson way back in 1914, so it's pretty crazy that it's being used to regulate technology they never even dreamt of in the early 20th century.

[Music out]

In recent lawsuits against ultrasonic tracking providers, the Wiretap Act has been referenced.

[Music start]

This act not only protects our private conversations over the phone, but it also makes it illegal to spy on any kind of communication through a device. So it's no surprise that this act has been brought up in lawsuits against ultrasonic communication providers.

We’ll hear from one of those providers after the break.

[Music out]

MIDROLL

[Music in]

Lawsuits against ultrasonic communication companies have been popping up lately. These apps use your phone's microphone, so it's easy to see how this could be compared to wiretapping. But not all ultrasonic communication companies are in the advertising or tracking business. There are genuinely useful ways to use this new technology to make people's lives easier - just like Wi-Fi and Bluetooth have.

A company called LISNR describes their technology as "data-over-audio." And like most of the providers in this field, they use these near-ultrasonic tones to transmit information.

[Music out]

Rodney: It's really a modulation across a frequency range.

That's LISNR's CEO and co-founder, Rodney Williams.

Rodney: And we can push that frequency range up, or we can push that frequency range down, depending on the environment, to ensure that it's gonna be reliable. But our core infrastructure is built between 18,000 and 20,000 hertz.

So, that frequency range is important. Here's why...

Rodney: The FCC says that all audio up to 21,000 hertz is safe audio - safe as in health, it's not affecting your eardrum. We have competitors that actually use audio above 21,000 hertz. Technically that's not in the bandwidth of safe audio, and that's why really high frequency ranges outside of that bandwidth are regulated.

LISNR got their start as a marketing technology company, and they worked with some pretty big names.

Rodney: For example, for Discovery Communications, as you watch MythBusters, little quiz overlays would come up about the myth. Did they use water? And it would count down, and then your phone would start counting down and vibrating, and then you had nine seconds to hurry up and answer it. One of my favorites was the Budweiser Made in America music festival. What I loved was, at the end of the night, if it recognized that you walked past a gate, it actually sent you a message to get an Uber, and it gave you a coupon offer on an Uber.

I thought it was perfect, right? I mean, it's a bunch of kids obviously at a festival. Obviously they just need a ride home, and I mean, I just think that's the power of understanding when a consumer's inside of an experience, and being able to help them.

In order for this experience to work, you had to download the festival’s app so it could listen for the ultrasonic signals.

[Play clip: Crazy in Love by Beyonce (Live at Made in America)]



Here’s an example of how it might work. But, in this example we’ve lowered the frequency of the signal by four octaves, to a range where you can hear it.

Rodney: Yeah, so it would be in the Budweiser Made in America app. What would technically be happening is that we would actually be playing our tones throughout the venue, and the tones basically would have different location data, so that if it heard a certain tone, that meant you were in a certain area. Then it could understand how long you were in that area, and if you went from area 45 to area 46, and then to 47, obviously you're walking. And then we could basically trigger different messages based on where you are in relation to these tones.

[Music out]

Rodney: The magic behind it, which drove a lot of the engagement, is that there wasn't the battery drainage. It didn't use your cellular data, Wi-Fi data or GPS data to trigger the message.

[Music start]

Despite all their success, LISNR decided to end their marketing program and focus on other uses for the technology.

Rodney: In all transparency, it was mainly because of a lawsuit that we got - which actually just got dismissed, by the way, because our technology is fantastic and it does what we say it does - but it was a lawsuit that basically said that we were recording consumers' conversations for the purpose of advertising. I can't say too much, because I don't know what else has been released publicly, but what I can say is our technology just doesn't do that, right? It doesn't interpret sound. It can't hear a voice. It's not voice recognition. It's true data over audio.

[Music out]

One of the concerns with ultrasonic communication today is that you have to let apps use your microphone. Sean from Yale Privacy Lab says it’s hard to know exactly what they are doing with your microphone, and it might be possible to collect more data than intended, like human voices, for example.

Sean: The processing is happening on a server somewhere. The app is not going to spend a lot of processing power or use the capabilities of your phone to make that waveform more privacy-respecting before it sends that audio to whatever server it's talking to.

But LISNR says they took their technology offline once they stopped using it to track.

Rodney: The moment we went offline, locally encoding and locally decoding, LISNR has the inability to track. It's a completely offline method of wireless transmission, so it does not connect to a wireless server. It does not connect to a cloud.

This is a complicated problem. "The cloud," wherever that may be, is actually the vulnerable part of the system. Ultrasonic communication is just a tool to collect information, which could then be sent to the cloud.

If someone is going to try and steal information from you, they will most likely target a cloud server because they hold such massive amounts of information.

Back to Rodney.

Rodney: You can't hack the data transmission from a cloud server, because we are no longer connected to a cloud, so the cloud does not initiate a transfer or decode the transfer; it's all local. Then you have to be locally there. You have to know the algorithm, you have to know the encryption, and you have to be able to understand the time token. And if you were to get all of that, then good for you. I think it should be that hard.



When this technology is offline and more secure, it’s better suited for things like authentication and payment purposes.

[Music Start]

Rodney: There are some unique advantages to using this as an authentication method, and that's probably the biggest area of interest and growth for us. Early last year we landed Ticketmaster. A consumer's mobile phone would actually broadcast real-time ticketing data, the same ticketing data that would be sitting in a barcode. And instead of walking up, and getting your screen brightness correct, and then getting the right angle, you would literally just have to place your phone within 12 inches of a scanning device, and your phone would immediately authenticate and turn green, and you're allowed in.

We want our data to be with the individuals it's supposed to be with, not anyone else. In a perfect world, consumers have their data locally, and when they want to transmit it, they control the transmission and they control who it's delivered to. It's not tracked by a third party like Amazon or Apple or anyone; it's literally tracked by you.

When it comes to ultrasonic data transmission, it's up to each company to use their technology in ethical ways. For ultrasonic tracking companies like SilverPush or Shopkick, or really any company that tracks and collects data for advertising, transparency is especially important. And like everything else, transparency is on a spectrum, with open source code on one end, and a black box on the other.

[Music out]

Sean: A lot of this is black box, so we're making guesses from the outside, which is sort of the thing that's so scary. Inside the advertising industry, this kind of tracking is no secret to anyone. What the actual business practices are inside a specific business is the kind of thing that's hard for us to say.

We don’t really know what data ultrasonic tracking companies are collecting, or what they’re doing with it. And that means it’s hard to hold them responsible if they go too far.

Michael: What these advertisers want in this situation is of course to get people to buy their products.

But the degree of manipulation is pretty extensive, and so I think, as time marches on, the kinds of information and practices that we're seeing in the advertising industry are cause for alarm, because nobody really wants to be manipulated in this way. And a lot of this is being packaged into video games or into chat apps, so in order for us just to carry out our day-to-day lives, we're all being subjected to a lot of surveillance that is very concerning for our rights and liberties.

And remember, over 75% of Android apps have some kind of tracker, whether it's ultrasonic, Bluetooth, Wi-Fi or GPS. And these apps are built around the trackers, so that the apps won't even function without them. And even if they could, the companies make it really hard to opt out.

Michael: It's extremely hard to opt out. For Tinder, if you want to use the app, you have to turn on the location tracking.

[Music start]

But, if you turn location tracking off, then you can't use your map service. So, the problem is these companies understand that instead of giving you a straightforward option to opt into these kinds of things, they construct their apps and their privacy policies to make it onerous and difficult for users to opt out, and when you have maybe 40 apps in your phone, the opt out process becomes overwhelming for an individual and their tactic in the industry is to overwhelm individuals so that they just throw in the towel and say, "I want to play games. I want to talk to my friends. I'm just going to install it and click-through."

What we need is stronger transparency from the industry and greater awareness from consumers. It's easy to forget, but our phones are right there with us during the most intimate parts of our lives. That relationship is worth keeping safe and trustworthy. So, if an app requests access to your microphone but has no reason to do so, you should probably reconsider installing that app.

CREDITS

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film, and games sound incredible. Find out more at defacto sound dot com.

This episode was written and produced by Leigh McDonald...and me, Dallas Taylor. With help from Sam Schneble. It was sound designed and mixed by Nick Spradlin.

Thanks to Sean O’Brien and Michael Kwet from Yale Privacy Lab. Also thanks to privacy and technology counsel Katie McInnis and LISNR CEO and co-founder Rodney Williams.

The music in this episode is from our friends at Musicbed. Having great music should be an asset to your project, not a roadblock. Musicbed is dedicated to making that a reality. That's why they've completely rebuilt their platform of world-class artists and composers with brand-new features and advanced filters to make finding the perfect song easier and faster. Learn more at musicbed.com/new.

Did this episode change the way you think about your phone? Let us know on Twitter at 20k org. You can also give us feedback, submit a show idea, read episode transcripts, or buy a super cool 20k t-shirt through our website at 20k dot org.

Finally, the first person to decode the ultrasonic message we embedded at the top of the show will win a t-shirt. Just hit us up through our website, Twitter or Facebook with the message. Thanks for listening.

[Music out]

Recent Episodes

ASMR: Why certain sounds give you tingles


This show was written and produced by Carolyn McCulley.

Do certain sounds give you the head tingles? If yes, this episode is full of ear candy for you! In this episode, we learn all about the phenomenon called autonomous sensory meridian response—or ASMR for short. This soothing episode features researchers Giulia Poerio (University of Sheffield), Craig Richard (ASMRuniversity.com), and ASMR artists Gentle Whispering, Jellybean Green, and Somni Rosae - as well as the team at Defacto Sound!

MUSIC IN THIS EPISODE

Love is the Flower of Life - Chad Lawson
I Should Be Sleeping - Chad Lawson
All is Truth - Chad Lawson
D's Travels - Uncle Skeleton
Blackout - Stray Theories
Shoreline (No Drums) - Dario Lupo
Timeless - Dario Lupo

20K is hosted by Dallas Taylor and made out of the studios of Defacto Sound.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out wetransfer.com for all of your file sending needs!

TRANSCRIPT

A couple of quick notes before we start the show. First, this episode is best experienced in a quiet place using good headphones. But if you can't do that, I won't judge; the show still stands on its own. You just might not get the physical reaction... and that brings me to point number two: we're talking about a subject that could possibly give you a physical reaction. Don't worry, it's perfectly safe, and it can happen without anyone around you knowing, but you'll need to be really relaxed in order for it to happen. We've put lots of opportunities in this episode to trigger it, and I encourage you to actively think about it and try to experience it. OK, here we go.

[Play "What is ASMR?" video]

Maria: Hello, my name is Maria. And I’m here to tell you about ASMR.

You're listening to Twenty Thousand Hertz. I'm Dallas Taylor.

Maria: Autonomous Sensory Meridian Response. It’s a pleasant tingling feeling you experience when you hear unique soft voices. Or hear certain soothing sounds, such as tapping. Or both. Like sounds of me whispering or brushing your hair.

[music in]

Depending on your age and internet consumption, you either already know exactly what ASMR is or you have absolutely no idea what’s happening right now.

Jai: ASMR is basically... sounds that trigger almost a tingling sensation for people.

That's Jai Berger, a sound designer here at Defacto Sound.

Jai: Sometimes it's on the top of their head or on the back of their neck. But it can also be used as a relaxation tool. And certain sounds are different triggers for different people.

Colin: And honestly, it feels pleasurable. Which, I feel uncomfortable saying that, but it does.

That’s Colin DeVarney. Also a sound designer here at Defacto.

Colin: The things that trigger it are so random, and it's weird to say that I get pleasure from somebody unwrapping a gift or listening to them do a task quietly.

Nick: I don’t partake in the watching of the ASMR videos.

This is Nick Spradlin. Also, a sound designer here at Defacto.

Nick: The virtual haircut, or watching any of the ASMR videos, like just everyday sounds, I have no reaction to. Since I was a musician for so long, a lot of the stuff I listen to, I'll get chills with certain music, and I never really knew what that was called. And I think that's ASMR. That's my knowledge of it now: "Oh, I guess that's what that is."

Sam, what do you think about this whole thing?

[music out]

Sam: Honestly, it really creeps me out.

For me, it kinda feels like a tingle or a chill that goes up the back of my spine and into my brain.

To get everyone on the same page, we listened to some popular ASMR tracks all together.

[Play ASMR clip]

This one’s close for me, it’s close. Ok Sam, it sounds like you have something to say.

Sam: I really don’t like it.

[Play ASMR clip]

Colin: I actually kinda like this one.

Nick: This is enjoyable.

[Play ASMR clip]

Nick: As weird as this is, I like it for being this weird.

Colin: So this might have triggered my ASMR, but that guy's voice, I think, has now ruined it for me. It just gets me out of it.

[Play ASMR clip]

Oh I like that.

Colin: Me too.

Ok any thoughts?

Colin: I like that one more than most we've listened to.

I would just like to point out that Sam looks like she’s about to puke.

Sam: Oh my god, my ear. I don’t like it, it makes me so uncomfortable. Ugh.

Nick: I also disliked that one.

So you disliked it, Sam hated it, Colin and I got the little response, Jai?

Jai: I didn’t get a response, but I found it relaxing.

[Play ASMR clip]

Sam, can you describe what’s happening on screen right now?

Sam: Well, she has these fluffy windscreens on each of the mics and she is caressing them, gently.

Can you describe her facial expression?

Sam: She’s really into it.

[music in]

That last ASMR clip we heard was from one of the top ASMR artists on YouTube. Her name is Maria, and her YouTube moniker is "Gentle Whispering". In that clip she was whispering, but she also uses tapping, brushing, exhaling and even role-playing to make these ASMR-triggering sounds. In all, her videos have been viewed and listened to over 438 million times.

Giulia: Maria is an ASMR artist who has one of the highest numbers of subscribers.

That’s Giulia Poerio, a psychology researcher studying ASMR at the University of Sheffield.

Giulia: I find her personally very relaxing. She has this sort of Russian/American accent. She has amazing hand movements that are incredibly relaxing. She speaks in a very calming way and she's really good at explaining things.

[music out]

[Play clip of Gentle Whispering]

Giulia: It's sort of a tingling sensation that starts at the top of my head and spreads down through my limbs, as well. So one way that I really like to think about it is as if somebody has opened a can of fizzy drink under my skin, [SFX: fizzy drink bubbling up] so it's kind of bubbling and it's kind of warm and relaxing.

[SFX out]

Imagine one of those scalp massagers you see in Skymall. You know the one - it looks like an open-ended whisk with thin metal spindles and tiny metal nubs on the ends.

Giulia: You put them down on your head, and these little metal spikes move into your scalp, and that's kind of what ASMR feels like. It's very, very relaxing.

[music in]

What's interesting about ASMR is that it's a stimulus in one modality, like sound, that is producing a tactile sensation. So you are experiencing being touched through sound.

People generally fall into one of two categories—they either think that ASMR is something that everybody has and, "Oh, of course everybody experiences this sort of tingling feeling when they hear soft speaking." Or they think that they're the only person that's had it and they don't realize that it's something that other people experience.

[music out]

ASMR became popular with the rise of YouTube. Some credit a thread on Reddit around 2007 where people first started talking about “head tingles” in response to sound. But it really took off when a woman in the U.K. posted the first whispering video on YouTube in 2009. That was under the moniker, “WhisperingLife.”

[Play Whispering Life clip - “Hello, I thought I would make some videos of me whispering. I absolutely love listening to people whisper, which is really, really weird.”]

Giulia: A lot of people, when they find out about the ASMR experience and they find out that they can watch these videos on YouTube, they're like, "Wow, this is amazing, because this is something that I've experienced all my life and I didn't know that I could go and intentionally experience it."

By 2010, a cybersecurity professional named Jennifer Allen decided this experience needed a more scientific-sounding name. So she coined the ASMR label and created a Facebook group for fans to discuss the experience.

Giulia: I used to go and seek out ASMR experiences before I knew about the ASMR community, so I used to go and seek them out in my everyday life. Once I signed up for a credit card because the woman in the bank [SFX: background chatting] was really relaxing and was going through a form and all these sorts of things [SFX: Paper turning, marker circling] and it was amazing and I found it so relaxing.

I know, it's really odd, but the woman was so relaxing. And she was form filling [SFX: pen scratches], and that's one of the triggers, somebody filling out a form. Hotel check-in, I love checking into hotels or places. Or I love somebody taking information [SFX: keyboard typing] and typing things in. Yeah, it's really relaxing.

I've canceled the credit card so I was aware that it was literally only because I wanted her to carry on talking so much.

Even the advertising world has caught on to the allure of ASMR. Popular brands such as Ikea, KFC and Dove have created advertising produced with intentional ASMR triggers. In this ad for Dove Chocolate in China, a woman crinkles a chocolate wrapper, unwraps it and then pops a piece into her mouth.

[Play Dove Chocolate clip]

[music in]

If companies are using ASMR to sell their products, this must be a credible phenomenon, right? What is the science behind ASMR? Is there any? More on that in a moment.

[music out]

[MID-ROLL ADS]

[music in]

ASMR or autonomous sensory meridian response is a physical response to certain sounds. People who have this response often call them head tingles or sparkles.

Craig: I think the biggest term that best describes ASMR is relaxing.

That’s Dr. Craig Richard. He is a professor at Shenandoah University and founder of ASMRuniversity.com.

Craig: Some also will use terms like, it's comforting, and it's soothing. Then, usually associated with that, are head tingles. They're sometimes described as sparkly, and staticky, and for the most part enjoyable. Overall, that just leads to this relaxing, tingly sensation, that for some people it's great for dealing with stress. For other people, they use it to help them fall asleep.

[music out]

Like many people, Craig didn’t immediately connect with the concept when he first heard about it.

Craig: I'm a physiologist. So when I heard this term, I knew that was not a physiological term. I didn't really believe what they were saying, it just sounded like some woo-woo, or made up. Then, they gave an example of something that people who have ASMR find relaxing. They brought up Bob Ross.

[Play Bob Ross clip]

[music in]

Craig: That was when I made the connection for myself. Because I remember being a kid and I would come home from school, flip the channels, come across Bob Ross. I didn't have any interest in painting. I still have never painted. I don't, it's not the painting. It was him. It was his demeanor and disposition. I found it's super relaxing. I would put down a floor pillow, and I would watch him, and I never really saw him complete a painting, because I would end up falling asleep.

I didn't think much of that, until I heard … "Not everyone reacts to Bob Ross like that." I said, "Wow."

It’s still a mystery exactly why anyone experiences ASMR to begin with. But, there are a lot of theories.

Craig: One thing I wonder about, looking at a lot of the triggers and what's common to all these triggers that do stimulate ASMR in some individuals, is that they tend to be a lot of the same kind of triggers that you would use to soothe a baby. So it's whispering, it's talking softly, it's personal attention, it's light touching. All of that is very important. From the day we're born, we have to have some kind of innate response to be soothed by people who care for us.

[music out]

Back when Craig first discovered ASMR, there wasn’t much research. So he decided to do something about that.

Craig: That's when I started the website, ASMRuniversity.com, to kind of put forth some large theories, based upon my understanding of physiology.

We launched the survey a couple years ago and we've had over 23,000 responses. The top responses are they feel relaxed, they feel calmed, they feel soothed, they feel sleepy. We ask them about, what are the physical sensations you feel? Sure enough, tingles is number one and it's occurring in the head. So this is important, because it's confirming what a lot of ASMR artists, and a lot of ASMR consumers, are saying. That's important.

[Play ASMR clip - Somni Rosae: "Hello, welcome back to the spa. It's very nice to see you again. It's been a very long time since you visited the spa."]

[music in]

Somni: After five years of creating content myself, I've learned that our audience is not just made up of people who have ASMR. It is also people who are sleep-deprived.

That's Somni Rosae, an ASMR artist who does a lot of role-playing videos, mostly recreating a spa experience.

Somni: They are suffering, for example, from insomnia. They are currently experiencing a lot of stress or they have depression and as the number of views continues to grow, to me it shows that there is a need for quiet entertainment.

[music out]

We are providing quiet entertainment for individuals who are looking for something that helps them relax. Something like white noise [SFX: White noise] or pink noise [SFX: Pink noise] but instead there is more substance to it. The viewers enjoyed the sound combination of a human voice and the rain and on top of that the role play is about skin care. So they liked the pampering and the one-on-one attention that a character provided.

[play clip of the skin care role play]

Another ASMR artist, who goes by Jellybean Green, says her earliest memories of experiencing ASMR were in grade school. So once she became an established artist, she went back to something she remembered as very satisfying—peeling glue from her hands.

[Play ASMR clip - “Alright, so the glue has mostly dried on my hand. I’m not sure if I made it thick enough to get a good peel, but we’ll see.]

Jellybean Green: YouTube video comment sections aren't always the most eloquent places, but in the ASMR community, I've really noticed, in my videos, there are a lot of people for whom ASMR has been a life changer. Relief from insomnia, anxiety, depression, PTSD, and even if that relief is brief and temporary, anyone who's suffered from those things knows that a little bit of relief can go a long way.

[music in]

ASMR videos have helped me through some really difficult times. I have a long history of mental illness myself, anxiety, depression, OCD, and it's something I've been in and out of treatment for since I was very young. My favorite ASMR videos and my favorite ASMR content creators mean so much to me. When people take the time to let me know that my videos have helped them, it makes me feel like I've been able to pay that forward, pay what I received forward in some small way, and it's amazing. It's really, really gratifying.

We live in a noisy, stressful and distracting world... and ASMR offers us the chance to slow down and be in the moment. To be present with our bodies and minds. There's something really cool about hearing a simple, pure, and gentle sound… and having that jump from our sense of hearing to our sense of touch. It really speaks to the power of sound. It also reminds us that, as much as we know about sound and the human body, there's still a lot to find out. Maybe one day researchers will tell us all about it, but for now, go find a quiet place and a few YouTube videos, and try it out for yourself.

[music out]

[music in]

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound team that supports ad agencies, filmmakers, and video game developers. Check out recent work at defactosound dot com.

This episode was written and produced by Carolyn McCulley. And me, Dallas Taylor. With help from Sam Schneble, Nick Spradlin and Colin DeVarney. It was edited, sound designed and mixed by Jai Berger. Thanks to our guests – researchers Giulia Poerio and Craig Richard, and ASMR artists Somni Rosae and Jellybean Green.

You can listen to the ASMR artistry of Somni Rosae, Jellybean Green, Whispering Life, Gentle Whispering, ASMR Basic, The ASMR Nerd, and The Tingle Twins on YouTube. Craig Richard maintains the website ASMRuniversity.com. You can find links to the Youtube videos in this show on our website - twenty kay dot org.

The music in this episode is courtesy of our friends at Musicbed. They’ve completely rebuilt their platform with brand-new features and advanced filters to make finding the perfect song easier and faster. Learn more at musicbed.com/new.

Finally, tell your friends about this episode. I’ll be eternally grateful.

Thanks for listening.

[music out]

Recent Episodes

Broadway to Cirque du Soleil: Sound design for the stage

Live Theater Pic.png

This episode was written & produced by James Introcaso.

Amazing concerts, Broadway musicals, Cirque du Soleil performances, and other live shows live and die on their sound design. This is the story of how sound design for live performances went from zero to speakers in the seats and where the industry might go next. This episode features interviews with sound design legends Abe Jacob and Jonathan Deans.


MUSIC IN THIS EPISODE

Home - Chris Coleman
Gentle Without - Steven Gutheinz
Cedar - Blake Ewing
Iris - Steven Gutheinz
Desert Crossing - The Radial Conservatory
Feels Like Magic (Instrumental) - Sports
Messages - Steven Gutheinz


Twenty Thousand Hertz is hosted by Dallas Taylor and produced out of the studios of Defacto Sound.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Get a 4 week trial plus a digital scale at Stamps.com. Type in "20K". 

Get a 1 month trial for $5 at forhims.com/20k.

Check out Business Wars here.

View Transcript ▶︎

[SFX: Beyonce sings the National Anthem]

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

Back at the 2013 presidential inauguration, superstar Beyonce sang the Star-Spangled Banner… but she didn’t do it live. Beyonce admitted to lip syncing.

[SFX: Beyonce press conference clip]

“...Due to no proper sound check, I did not feel comfortable taking a risk. It was about the president and the inauguration, and I wanted to make him and my country proud, so I decided to sing along with my pre-recorded track.”

Beyonce’s decision to lip sync underscores the importance of taking sound very seriously, especially during a live performance. Big events rely on great designers, mixers, microphones, speakers, and a whole host of other things in order to sound effortless to an audience. At least, that’s true today... but it wasn’t always that way.

Abe: For generations there was no sound system in the theater. Everybody strained to listen in those days. Today, audiences are inundated with ear buds and other forms of mechanical reproduction so they no longer strain to listen in the theater.

That’s Abe Jacob, a sound design legend. Early in his career, Abe mixed concerts for musicians like The Mamas & the Papas, Peter, Paul, and Mary, and Jimi Hendrix.

Abe: In those days with Jimi, all of the band gear, all of the musical instruments, the little bit of lighting and the sound equipment, all fit in one 19 foot truck.

He even worked on the Beatles’ final touring concert in Candlestick Park.

[SFX: Beginning of “Yesterday” by the Beatles]

But Abe didn’t just do rock concerts. In 1968, he went to Broadway, where he was asked to do the impossible.

[SFX: Opening notes of “Superstar” from Jesus Christ Superstar play]

Abe: Coming in to do Jesus Christ Superstar in two days was a high point, because the previews had been canceled due to wireless microphone problems.

[SFX: Microphone feedback causes the music to end]

Before Abe, there weren’t any credited sound designers on Broadway, probably because there wasn’t much of a sound design.

Abe: We didn't do a great, great job in the very beginning.

It was basically area mic-ing that picked up the sound of voices. There were very simple audio mixers, rather than audio consoles.

It was a struggle for the audience to hear with such little thought put into the sound system. And things didn’t get much better when the first wireless microphones were introduced.

Abe: In early theater in the 60s, there was always one wireless microphone that was usually on the star, Carol Channing in Hello Dolly.

[SFX: Carol Channing begins to sing the title song in Hello Dolly]

It was a very large device. It looked almost like a small carrot that was hung around their neck underneath the costume. Because the microphone was underneath the costume, you got considerable cloth noise on the microphone [SFX: cloth noise], which tended to call attention to itself.

As Abe’s career grew, so did the technology available. As equipment became more complex, Abe needed more people to help him create live soundscapes. He started to take aspiring theatrical sound designers under his wing.

Jonathan: Abe is my mentor.

That’s Jonathan Deans, a four-time Tony-nominated sound designer. In addition to his many Broadway credits, Jonathan works on shows all over the world including Cirque du Soleil.

Jonathan: When I started my career there were no schools that were teaching the subject. Sound was still very much in its early stages for live musicals.

There wasn't really any technology, there wasn't anything to teach. I learned from actually doing things and just trying it. Everybody knew you were trying something and you're just putting something out there.

His first project with Abe was A Chorus Line at the Drury Lane Theater.

[SFX: Opening notes of “One” from A Chorus Line]

Jonathan: That was very interesting. Abe turns up with his show, Chorus Line, and they're going to put delays in the theater, delay speakers. It's like, "What is that?"

Audio delay systems are something Abe introduced to Broadway that helped every seat in the house have the same audio experience.

Abe: Today, you can put loudspeakers at almost every place in the theater. Before, it was two boxes hung on either side of the proscenium that were of sufficient volume to reach the last row of the house, but a little discourteous to the folks in the front row.

Put speakers further back in the theater so that the front systems didn't have to be quite as loud.

The idea is simple: put speakers all over the room so you don’t have to blast a single set of them up at the front. But doing this presents a new challenge.

[music in]

Sound is actually pretty slow. Imagine you’re sitting near the back of the theater. The sound from the speakers at the front of the room will hit you later than the sound from the speakers at the back of the room, creating a very mushy sound.

If one speaker has its timing off from the rest by even a fraction of a second, even the most beautiful music becomes messy. Take for example, the music we’re hearing right now.

[SFX: Music boosted for a moment]

It sounds great because the music in both of your speakers or headphones is in sync. Now we’re going to play one of your speakers just a fraction of a second behind the other.

[SFX: Music delayed]

The delay system syncs up all the speakers so all of the sound reaches everyone in the theater equally and at the same time.
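To put rough numbers on that idea, here's a minimal sketch. This is purely our illustration, not any production's actual configuration, and it assumes sound travels at roughly 343 meters per second; the speaker positions are made up.

```python
# A rough sketch (not any real production's setup): each fill speaker farther
# back in the house is delayed so its sound lines up with the sound arriving
# from the main front-of-house speakers. The 343 m/s speed of sound and the
# speaker distances are illustrative assumptions.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air

def fill_speaker_delay_ms(distance_behind_mains_m: float) -> float:
    """Delay, in milliseconds, for a fill speaker placed this many meters
    behind the main speakers, so both arrivals reach a listener together."""
    return distance_behind_mains_m / SPEED_OF_SOUND_M_PER_S * 1000.0

if __name__ == "__main__":
    for meters in (10.0, 20.0, 30.0):  # hypothetical speaker positions
        print(f"Fill speaker {meters:4.0f} m back: delay by {fill_speaker_delay_ms(meters):5.1f} ms")
```

At about 10 meters back, that works out to roughly 29 milliseconds of delay, a tiny offset that makes the difference between crisp and mushy.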

[music out]

As sound technology improved so did other theatrical effects. Moving lights, projectors, and moving pieces of scenery gave the sound design a new job.

Abe: One of the other functions of the sound system today is to overcome the inherent noise floor of a lot of theatrical productions. The sound of moving lights [SFX: Moving light], of television video projectors [SFX: Projector added to the noise of moving lights], of scenery [SFX: Scenery moving added to the cacophony], has all contributed to a higher level of background noise that the sound system has to overcome.

By introducing new technology into live theater, Abe changed more than just the sonic experience. His advancements allowed actors to change their performances from big to subtle.

Abe: Actors' voices need to rescale to reach the house at a proper level. They must be amplified. So what I ask myself is, "How much, and by whom?" If all the gain comes from the actor, the price is unnatural diction, inappropriate tonal and emotional cues, and stiff posture. But if the gain can come from a properly balanced acoustical system, then the actors can relax more, the speech becomes more natural, and the emotions meet the spoken word.

[SFX: Beginning of All that Jazz from Chicago]

Abe’s work didn’t just allow for intimate storytelling in large theaters, it also gave costume designers more freedom.

Abe: In Chicago in 1975, Gwen Verdon, the star, was going to wear a wireless microphone. She had on a very skimpy costume, and there was no place to hide the microphone or the transmitter. We came up with the idea of putting the transmitter and microphone in her wig. That was, I think, the first instance of the microphone being placed on the forehead of a performer.

The new mic placement was designed for the performer’s comfort, but it had an unintended bonus.

Abe: We discovered that the microphone on the forehead or above the ear was a much better placement for sound quality than being on the chest. It gave us a greater freedom of being able to mix the sound of that particular microphone and that performer.

As Abe brought live theater into the modern world and upgraded the sound systems of different productions, sound design became about more than just amplifying the sound of the show. It became about creating a soundscape that helped tell the story along with performances, costumes, lighting, and all the other creative elements of theater.

[SFX: Begin “Don’t Cry for Me Argentina” from Evita]

Abe: Evita, at that time, had six wireless microphones in use. That was a big step forward trying to get all six to work at once.

The thundering voice of Eva Peron on the balcony of the Casa Rosada certainly wasn't natural. But it was the reality of that situation, where she was addressing thousands of people and telling about her struggles to get where the Perons were. It was just the effect onstage that made you feel that you were part of that inauguration scene.

[SFX: continue song “Don’t cry for me Argentina….”]

[music in]

The best sound system and the best sound design is one that's basically invisible. Now we have the equipment with which we are able to do that much more easily. Modern microphones and loudspeaker systems are extremely linear and are capable of providing reinforcement with minimal detection. That's a goal.

Abe didn’t just increase the number of mics and speakers on Broadway, he also created jobs.

Abe: When I started out, it was basically just me and the sound operator.

There are now a lot of bodies involved.

Before I came to New York, the sound operator operated the sound from a console or from a mixer located backstage where he just turned the knobs up to a preset mark and that was it.

We were able to talk producers into giving up some seats and putting the sound equipment out in the house where the operator could hear.

They are the fifth member of the quartet.

Abe’s influence in live audio design was huge. He showed an entire industry that creating a soundscape is about more than just hanging speakers. He also trained so many amazing designers who took their talents all over the world. They changed every element of theater through sonic enhancement.

Abe: Sound reinforcement is not required for every production. Then again, neither are makeup, costumes, lighting, and staging.

Sound is as vital a creative element in the theater as any of the other design elements are.

The history of modern live event sound design began less than a hundred years ago. In that time, technology and methods have improved by leaps and bounds. How we’re doing things now, and where we might go in the future, is truly mind blowing. We’ll hear all about it in a minute.

[music out]

MIDROLL

[music in]

In a few short decades sound design for live events went from almost no microphones or speakers to a critical part of every production. So what is modern sound design for live performances like today? Here’s Jonathan Deans again.

Jonathan: Sound has become very complicated due to the equipment that is available. As equipment becomes more complicated, there's less imagination, and therefore less creativity.

Every live event audio designer puts in tons of hours just to get a sound system up and running. Once it’s working, that’s just the beginning of the creative process.

Jonathan: If you're doing a Cirque du Soleil show, you're going to be involved in it for two to three years with big chunks of time away from the home. I'm talking weeks and sometimes months. I've actually done a production we were in tech for 15 months.

Jonathan doesn’t have a single sound setup that works for every show. He picks equipment based on the show’s story and the size and shape of the venue.

Jonathan: As sound designers, we're confined to cabinets and speakers as we know them, but beyond that there's nothing intentionally similar from one production to another. It's not cookie cutting, and that excites me.

[music out]

Sound design in theater can be never-ending. Especially for an enormous show like Cirque du Soleil. Jonathan recently went back to tweak the sound for Love, a show that uses Beatles music and premiered a decade ago.

Jonathan: There was a refresh done of the production. There were new songs put in, and there were different acts that were put in.

I went to see the show with Giles Martin, Paul Hicks and Leon Rothenberg. Those are the four of us involved in the music. We said, "This needs a refresh," because it’s been running for ten years and the technology has evolved so much that the expectations are completely different.

That refresh turned into a total overhaul.

Jonathan: We ended up remixing the entire show just staying up all night when everyone had gone.

That’s because sound technology constantly evolves thanks to innovative ideas from designers like Jonathan.

Jonathan: What if? What if I could put speakers into the seats? What would that be like? What would the person look like when they're hearing it? What would they be feeling it from? Would they be hearing it from behind? Is that weird?

After sketching out his ideas, Jonathan then puts those ideas into action to see if they work.

[SFX: music from Ka]

Jonathan: I did a show called KA, and KA was the first time I had put speakers in the seats. There were two speakers in every seat in the theater, which is just under 2,000 seats. You could watch people walk in and sit down, and we were playing sounds.

I watched a couple coming in, and the lady sat down. She heard the sounds coming out of the speakers in the seats, and I could see her point and say, “Look at this sound coming out of the seat, the back of the seat.” The guy then leaned down, put his ear to his seat, and when he heard the sounds coming out of it, he went, “Oh wow.”

He took his jacket off, hung it on the back of the seat, and covered the speakers.

[SFX: Music gets muffled from jacket]

He sat there for the entire show. Having heard the speakers before, he knew that they were there making the sound, and his jacket, the shoulder part of it as it hugged the seat, went over the speakers. He watched the whole show like that.

[SFX: Ka music out]

Not every new idea works, but sometimes the only way to know is to experiment with a live audience. That trial and error leads to some pretty awesome innovations, like new ways to track the movements of the actors on stage.

Jonathan: You put the device on the actor. It's like an RFID tag. It transmits, so you know where they're standing, you know where their location is within a perimeter, which in this case is set up to be the stage, but it could be a bigger space if you like. And so you know that that person is standing there.

Jonathan is using a modern version of Abe’s delay system to perform an amazing feat of technical sound.

Jonathan: The time delay within that actor's voice going out of certain speakers, or all speakers, is changing as they go upstage or downstage, like it would if I were to move further away from you or closer to you. There is a time difference in when you receive me.

When you do that, when you track the actors, nobody notices, because it’s just natural.
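To make that concrete, here's a hypothetical sketch, not Jonathan's actual system, of how a tracked stage position could feed a constantly updating delay. All of the distances here are made up, and it again assumes a speed of sound of about 343 meters per second.

```python
# A hypothetical sketch of the tracking idea described above (not Jonathan's
# actual system): the tag reports how far the actor is from the audience, and
# the speaker feed is delayed so it arrives in step with the actor's own
# unamplified voice. All distances are illustrative.

SPEED_OF_SOUND_M_PER_S = 343.0

def tracked_delay_ms(actor_to_listener_m: float, speaker_to_listener_m: float) -> float:
    """Extra delay for the speaker feed so the amplified voice lands with the
    natural voice at the listener's position."""
    acoustic_time = actor_to_listener_m / SPEED_OF_SOUND_M_PER_S
    speaker_time = speaker_to_listener_m / SPEED_OF_SOUND_M_PER_S
    return max(0.0, (acoustic_time - speaker_time) * 1000.0)

# As the tag reports the actor walking upstage (farther away), the delay grows,
# just as it would if they were really moving away from you.
for actor_distance in (8.0, 12.0, 16.0):  # hypothetical positions in meters
    print(f"Actor at {actor_distance:.0f} m: add {tracked_delay_ms(actor_distance, 6.0):.1f} ms of delay")
```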

That same technology isn’t just for tracking actors on stage.

Jonathan: You can put it onto a person like Peter Pan, flying around the room. It could be a sound, like maybe it’s Tinkerbell following. You can put a sound effect that follows them, so as the person’s moving around, the sound goes around the surround system [SFX: Tinkerbell sounds move around the speakers]. And you’re doing it live, because it’s not something that’s fixed. It has to be done live, so you can track that person, and the sound effect that belongs to that person, or an instrument.

When it comes to the future of sound for Broadway, it turns out that not only is creative thinking encouraged, it’s required. Here’s Abe again.

Abe: Unfortunately, I think the future of Broadway theater sound may tend to shy away from so many wireless microphones. The radio frequency spectrum is getting very crowded.

Broadway theaters back to back are using anywhere from 40 to 60 transmission frequencies between wireless microphones, communications, walkie talkies, and things of that sort.

Broadway has many theaters in close proximity. That means each device in each theater needs to have its own frequency to work properly. Otherwise, you might be in the audience for Wicked…

[SFX: MUSIC: Defying Gravity from Wicked]

… and suddenly hear “Hakuna Matata” from The Lion King.

[SFX: Defying Gravity song mixed with Hakuna Matata]

[music in]

Abe: Maybe we go back to some kind of wired microphone that can be utilized in some form. There will always be wireless microphones, just not in the quantity that there are today.

The future of sound design is full of challenges, but it also has enormous potential.

Jonathan: What I enjoy most about my job is enjoying the audience, being part of that journey that hopefully can add something to their life even if it’s only for those three hours.

Abe: Theater in itself is important. It's an explanation of the life and the times that we are living in, or the lives and times of heroic events in the past. You can't take away the impact of drama on the world. And if sound can help create and contribute to that impact, then what we do is vital.

CREDITS

Twenty Thousand Hertz is made by the sound design team at Defacto Sound. If you’re interested in hearing what Defacto does, visit defactosound dot com. And if you work in the same industry, drop a quick hello.

This episode was written and produced by James Introcaso… and me, Dallas Taylor. With help from Sam Schneble. It was edited, sound designed and mixed by Jai Berger.

Thanks to our guests, sound design legends Abe Jacob and Jonathan Deans. Abe’s retired now, but you can check out more of Jonathan’s work at Designing Sound dot com.

The music in this episode is courtesy of our friends at Musicbed. Having great music should be an asset to your project, not a roadblock. Musicbed is dedicated to making that a reality. That’s why they’ve completely rebuilt their platform of world-class artists and composers with brand-new features and advanced filters to make finding the perfect song easier and faster. Learn more at musicbed.com/new.

Finally, go check out our website. There you can say hello, submit a show idea, give general feedback, read transcripts, or buy a t-shirt. That’s all at 20k.org. We’re also on Facebook and Twitter and I love hearing from listeners! So, reach out however you like.

Thanks for listening.

[music out]

Recent Episodes

Hidden Melodies: Discovering the music in our speech

Music of Speech Pic.png

This episode was written & produced by Katy Daily.

The way you speak has rhythm, timbre, and pitch. It’s more like music than you might think. We chat with The Allusionist host Helen Zaltzman, Martin Zaltz Austwick from Song by Song, Music Psychologist Dr. Ani Patel of Tufts University, and Drum Composer David Dockery on how musical our speech really is.

MUSIC IN THIS EPISODE

Weightless (instrumental) - Prague
Home - Blake Ewing
Le 15 Decembre - Brique a Braq
Everything is Moving, but not the Sky - Dario Lupo
Lights Out - Utah
No Sun - Steven Gutheinz
Years - Steven Gutheinz

20K is made out of the studios of Defacto Sound and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Get $50 off select Casper mattresses at casper.com/20k.

View Transcript ▶︎

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

The way we talk is really interesting… and lately, it’s something I’ve been thinking about a lot. See, apparently, when you start a podcast, you suddenly find out about every weird little quirk in your voice. Anyway, what makes someone’s voice interesting to listen to?

Well, recently, I came across a video on YouTube that completely changed the way I think about speech. Basically, it’s a drummer and a bassist playing along to one of the most famous scenes in Willy Wonka.

[David Dockery Drumming clip]

After hearing this, my mind tuned in on just how musical speech really is. Our voice isn’t an instrument only when we’re singing, it’s an instrument all the time.

[music in]

Everyday speech has a rhythm, a timbre, and a tonality… and without even thinking about it, your speech patterns are communicating a lot of underlying meaning.

I talked with a few other podcasting friends about this. I wondered if they think about these things when they’re tracking. Here’s Helen Zaltzman from “The Allusionist”.

[music out]

Helen: I have to think about it consciously, I consider that a big part of my job, because I want to convey some emotion, and some mood, and some tone, all in a couple of sentences.

...and this is Martin Zaltz Austwick from “Song by Song”.

Martin: I think everyone has to think about it. I think it's happening intuitively, rather than you thinking consciously about pitch and tone, and rhythm, in the way that you would a musical composition.

Just like different instruments, every voice is unique. Helen and Martin told me what other podcast hosts they love listening to.

Helen: I love hearing Phoebe Judge's voice. That, to me, is like hearing a really low woodwind instrument or something.

[Phoebe Judge clip with woodwind underneath]

She sounds like her thought processes are very clear, and she enunciates things in such a way…

Martin: She's not working stuff out in public.

Helen: Generally not, no. And maybe again, it's that kind of idea that a low voice and a slow voice is confident, and therefore something you can trust, and you should listen to.

[music in]

By that analogy, what would Helen’s voice be if it were an instrument?

Helen: Synthesizer.

Here’s a clip from Helen’s show.

[Helen Zaltzman clip]

How about the host of 99% Invisible, Roman Mars?

Martin: He’s like a kind of John Carpenter Moog synth.

[clip from 99% Invisible with Moog synth underneath]

Ira Glass, host of This American Life?

Martin: Viola? I think a stringed instrument.

Helen: Yeah I think stringed.

[clip from This American Life with viola underneath]

What about the host of The Memory Palace, Nate DiMeo?

Helen: I think Nate DiMeo might be a violin… Like a slow, mournful violin.

[clip from The Memory Palace with violin underneath. violin ends with verb out, brief pause to end segment, start narration]

Any time we speak, we’re singing. We unconsciously vary our rhythm and tonality to create our own unique songs. And with enough practice – you can tune this performance to have more meaning.

[music in]

Dr. Ani Patel: Music and speech are two primary forms of communicating with each other in kind of rich and nuanced and complex ways.

That’s Dr. Ani Patel, a music psychologist at Tufts University. He’s been studying how music and human speech overlap in our brains.

Dr. Ani Patel: One of the interesting things is that they sound very different. No one would ever confuse the sound of a cello playing a solo with the sound of a person talking. [SFX: Cello morphing into talking voices.] Yet, what an increasing amount of research is showing is that within our brains and our minds, there's more overlap than you might think in how we process those two types of signals; whether it's the rhythm, or the melody, or the structure.

Take for example this next clip. It’s a famous speech that’s been turned into musical data and played by a piano. [music out] See if you can guess the speech.

[play clip: Kennedy piano example]

Here it is again, with a hint.

[play clip: Kennedy piano + original speech example]

...and here’s that same clip played through a digital whistle...and to be clear, the original speech file is not playing along with this. This is all data.

[play clip: Kennedy whistle example]

[music in]

Dr. Ani Patel: That's the power of our internal models. When we have an expectation of what we're hearing and a pattern somewhat resembles that expectation, you can then perceive that thing.

When we speak with each other, we're using a very complicated sound that has many frequencies. Even a single vowel has a whole bunch of different frequencies in it. They have certain patterns where certain frequencies are emphasized more than others.

What that piano piece was doing was essentially re-creating that palette of frequencies and energies through piano sounds. It can't capture it exactly the way a human voice does, because a piano works in a very different way. It's using the pitches and frequencies that a piano can produce to try and recreate the energy shape of a speech sound.

You program something like a player piano to go through these frequency shapes in really rapid succession, the way that a voice does. Especially if you know what words to listen for, it's amazing how you can pick them out of this sound that sounds nothing like a human voice.
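As a purely illustrative aside, and not how the clip itself was actually produced, the basic move of snapping a speech frequency onto the nearest piano pitch is a simple hertz-to-MIDI-note conversion. The example formant values below are made-up assumptions.

```python
# Purely illustrative: converting a frequency in hertz to the nearest piano
# (MIDI) note, which is the basic move behind approximating speech sounds with
# piano pitches. The example formant values are made-up assumptions.

import math

def hz_to_midi(freq_hz: float) -> int:
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = MIDI note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_hz(midi_note: int) -> float:
    """Frequency, in hertz, of a given MIDI note number."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Hypothetical resonant peaks of a vowel, rounded to the notes a piano could play:
for formant_hz in (700.0, 1220.0, 2600.0):
    note = hz_to_midi(formant_hz)
    print(f"{formant_hz:6.0f} Hz -> MIDI note {note} (about {midi_to_hz(note):.0f} Hz)")
```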

[music out]

It’s still not fully understood why our brains blur the line of music and speech, but there are lots of ways to trick our minds. Take, for instance, the speech-to-song illusion. Psychologist Diana Deutsch found that certain phrases, when taken out of a passage and played in a loop, begin to sound like they’re being sung.

Dr. Ani Patel: It's such a powerful illusion that if you began to hear a phrase as sung, and then you go back and listen to the passage from which that phrase was excerpted, the rest of it will sound like speech. When you come to that phrase, it will just sound like that one phrase is sung [repeated phrase to create a musical pattern]. When you come to that phrase, it will just sound like that one phrase is sung and then it goes back to speech again. It's really wild.

[music in]

Dr. Ani Patel: What makes it interesting is that this doesn't happen for any phrase. You can also find phrases that if you take them out of context and loop them, they don't sound sung at all.

So something about certain sequences of words leads them to transform in this way.

How we choose to sing our words is powerful. It adds a whole new level of human connection. The majority of the time, this is a totally subconscious act, but there are some professions where it can’t be… it has to be thought about and practiced. We’ll hear more about that, in a moment.

[music out]

MIDROLL

[music in]

Recording your voice for a podcast, a radio show, or really anything very much feels like a performance. There’s this unconscious grey area between speech and music. I asked Helen and Martin how their hosting voices are different from their everyday voices.

Helen: I think conversational voice is higher, and you are also inviting a particular response from the other person.

Martin: Reading off a script is a completely different and hard skill from extemporizing, which is also a difficult and hard skill. It’s like, to bring a script to life is hard. I can’t read, I can’t sight read, for example. I just have to say something, like, five times, until it’s like, “Okay, that’s sort of what the words should be.”

Helen: I think it’s common, because often you’re trying to find these cadences, and sometimes it’s almost like scoring your spoken script. And sometimes there’s an unexpected cadence that you don’t work out until you’ve been through it a few times.

[music out]

Of course, podcast hosts aren’t the only ones who need to think about this stuff. Politicians, actors, and especially comedians have to master rhythm. Take for example this clip from King of the Hill.

[King of the Hill clip]

David: My name is David Dockery and I compose drum scores to famous TV and movie scenes on YouTube where I synchronize the drum beats with the actor's words.

I started it more or less just to kind of push myself to work on my timing, work on my musical phrasing and approach. I just saw it as a challenge more than anything else because it was bound to be timing-wise really complex because there's no meter to it. There's no pulse. So, I said if I could do that, it'll surely be good for my tempo and timing.

But what he discovered is that there was tempo and timing in a lot of these scenes. It’s just not as obvious. Once this rhythm was uncovered, it really highlighted the talent of these actors.

I was thinking of kind of iconic scenes that I’d seen.

[play clip: Willy Wonka scene with drumming]

When people get more emotional, the rhythm becomes so much more pronounced. You can actually measure it right here, because that is the point in the scene where I, as a drummer, start having more fun, I suppose.

[play clip: Willy Wonka scene with drumming]

I have so much more to work with in terms of Gene Wilder's delivery of those lines.

It's just perfectly in rhythm, I didn't have to do much with that at all. I just played a drum beat along with it.

[play clip: Willy Wonka scene with drumming]

I think the reason that those scenes work, where people are kind of at their most heated is just because they raise their voices and that always means it suits drums more because they are just by nature a really loud, obnoxious instrument. When people get really heated up about stuff I think they tend to employ more rhythm in their voice.

[music in]

Dr. Ani Patel: Excited speech is faster. It's more variable in its timing ... and dynamics.

Again, that’s Dr. Ani Patel.

Dr. Ani Patel: Sad speech is quiet. It's slow. It's got less pitch variation.

When we talk about rhythm in speech, one thing that's important to realize is that we're not talking about a steady beat you could tap your foot to. That's kind of obvious in some sense, right? We don't dance to ordinary speech.

But linguists will tell you that speech has rhythm. The way that the syllables are patterned in time, the way accents are put on words, the way phrases are created, all have a characteristic pattern in a given language. That's rhythm.

Same with melody. Where we put the pitch accent, how many words tend to get emphasized using pitch. When you put those two together, you end up with a very characteristic sound.

[music out]

The starkest contrast is between a happy voice and a sad voice.

[Parks and rec clip]

You can hear it in their voice. It's fast. There's a lot of pitch variability. The voice has a bright kind of timbre to it.

[Parks and rec clip]

What are you hearing that lets you read that emotion? Well, their voice is quieter. It's slower. There's much less pitch variation. It has a darker kind of timbre to it.

[music in]

The music of our speech works in tandem with our words. Together, they raise communication to another level, and arguably a more natural level. But, lately, our world has been moving away from this. We now tend to text, email, or comment on social media largely without our voices...and sometimes it actually feels kinda weird to just call someone. What are we losing when we don’t communicate with our voice?

Helen: What is the best way to convey how we speak, but in a readable form? I really don't know. You almost need the words, and then like a little graph of emotion to overlay it, so with the intonation.

Martin: I just think there's so much in the English language, where you can completely change the meaning of a sentence, just by the way that you say it.

Dr. Ani Patel: We have to remember that humans, over many hundreds of thousands of years of evolution, have become extremely attuned to the sounds of each others' voices, and to pulling out nuances and reading these kinds of signals that we give each other through our voices. And when we communicate through texts or through email, we're just not using that. So we're cutting off that rich part of how we read each other's emotions, feelings, intentions, thoughts, moods, and so on.

I think part of that is this emotional connection that happens when you hear a voice, as opposed to just reading a silent message.

CREDITS

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design and mix team that supports ad agencies, filmmakers, television networks and video game publishers. If you work in these fields, be sure to drop us a note at hi@defactosound.com.

This episode was written and produced by Katy Daily...and me, Dallas Taylor. With help from Sam Schneble. It was sound designed, edited, and mixed by Colin DeVarney.

Thanks to Helen Zaltzman from the Allusionist, and Martin Zaltz Austwick from Song by Song. You should immediately go subscribe to both of those podcasts! Martin also makes music under the name Pale Bird. Check out his music on Bandcamp or at martinzaltzaustwick.com. Also thanks to Dr. Ani Patel of Tufts University and David Dockery. You can find more videos of David drumming to film and TV scenes by searching “David Dockery” on YouTube. The clip of the drumming and bass guitar you heard at the top of the episode was from Fabiano Mexicano’s YouTube channel.

The music in this episode is from our friends at Musicbed. They represent more than 650 great artists, ranging from indie rock and hip-hop, to classical and electronic. Head over to music.20k.org to hear our exclusive playlist.

You can find us all over the internet by searching Twenty Thousand Hertz. That’s Twenty Thousand Hertz all spelled out.

We’d also love to hear from you! Especially your actual voice! If you want to tell us something record it as a voice memo and email it to hi@20k.org.

Thanks for listening.

[music out]

Recent Episodes

Epileptic Symphonies: Turning seizures into music

Seizure PIc.png

This episode originally aired on Sum of All Parts. Go subscribe!

Brant Guichard has heard "The Music" for as long as he can remember. Brant has a particular type of epilepsy where he hears what are called "musical auras" whenever he has a seizure. Brian Foo, aka the Data Driven DJ, introduces a different musical element to Brant's experience of seizure.


Twenty Thousand Hertz is produced out of the studios of Defacto Sound and hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Get a 4 week trial plus a digital scale at Stamps.com. Just click on the Radio Microphone at the top of the homepage and type in "20K". 

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

View Transcript ▶︎

You’re listening to Twenty Thousand Hertz, the stories behind the world’s most recognizable and interesting sounds. I’m Dallas Taylor.

[Play EEG sonification]

What you’re hearing right now are brain waves. Sort of. This is from an EEG machine. An EEG machine is used to record the electrical activity of the brain. Plug that data into a synthesizer, and you get something like what you’re hearing. It’s not exactly pretty. Actually it’s kind of spooky. But what did you expect to hear inside someone’s head?

Well, if the head we’re talking about is Brant Guichard’s, the answer would be…music.

Brant: “What I call The Music”.

Recently Joel Werner, the host of the podcast Sum of All Parts, talked to Brant to find out what exactly is going on inside his head. I’ll let Joel explain…

Joel: Brant Guichard has heard The Music for as long as he can remember...

Brant: The earliest memory I have was sitting on a bed in my room and enjoying a run of music as it went by me, waving my hands as it just ran through. And I didn't recognise it for what it was at the time. I listened to it, it finished, and I asked my friend who was sitting next to me watching the television and feeling bored, since we had absolutely nothing to do, and I asked him, "Did you hear that?" And he said, "What?" I asked him, "Music?" And he said, "What music?" It made me realize I was the only one who was hearing that. Nobody else was hearing this, this was all mine.

Joel: That was Brant’s first encounter with what he calls The Music - and in the thirty years since, it’s something that he’s heard multiple times every day.

Brant: The music starts by warping the sounds and things I hear. Then it adds its own rhythm and starts becoming stronger within my head. The pattern is never the same, it is never the same. It is always unique. Every time. A collection of repetitive noises together, warping together the noises around me into a rhythm, often taking any song I've heard and putting that into the mix. Anything I reach for in my memory, that will be placed into the mix as well. All the sounds, even speaking is part of the music. It's why I become absolutely still sometimes, it's because I don't want to make noise myself. I stop any noise I'm hearing if I can and I stop moving myself because that has the best chance of slowing it down a little bit.

It's partially in my control in that I don't have any control that it's going to run, so I reach and try to control where it goes. It's like sitting in a car without having any brakes and having the accelerator tied down, but you've got the hands on the steering wheel. You can control where it goes but you can't stop it.

Then I start developing a partial seizure, with part of my body losing control. And after this I will develop into a full seizure, but I will stay fully conscious at this point, although it will not look like I am. I will be on the ground with a grand mal, as most people think epilepsy is. But after this point it will continue developing, and past that point I will lose consciousness.

Joel: Brant has epilepsy - and it’s a particular type of epilepsy where he hears what are called musical auras.

Ingrid: So, when Brant has that music that he hears, that's actually the beginning of a seizure. It's a small seizure, as he told you, but if it doesn't progress to involve more of his brain, he remains aware and there's nothing to see. Only Brant can tell us about it.

Joel: Professor Ingrid Scheffer is Brant’s neurologist, and a world leading epilepsy researcher.

Ingrid: We don't really understand why one seizure progresses and another doesn't, except that we do know that almost everyone with epilepsy is more likely to have seizures if they're tired or stressed. And so, you might have some auras, but then they might progress if you're more tired or stressed. Or sometimes people will build up. They'll have a run of auras, which sort of heralds the fact that they're going to go into the biggest seizure.

Joel: There are a few different types of epilepsy that are related to sound. Like musicogenic epilepsy...

Ingrid: Where music may trigger a seizure.

Joel: Or reflex epilepsy..

Ingrid: Where a very loud noise may trigger a seizure. There's a young lady I look after who, if there's a loud noise as she's walking along the street, will suddenly have a tonic seizure and fall to the ground. She actually has to wear headphones all the time to try and dull down the sounds around her so she doesn't get a surprise.

Joel: But, musical auras, like Brant experiences, are unusual. Like, really unusual.

Ingrid: Gosh, I think I have seen one or two, but it's rare. I see lots and lots of people with focal epilepsy and many have auras, but hearing music is rare.

Joel: Do you remember when you first met Brant? Can you take us back to that moment?

Ingrid: Yes, I can remember when I met Brant. He was 18 years old at the time and he came along with his father, and he told me the story of his epilepsy. His epilepsy had begun quite early in life with some convulsive seizures as an infant, and these had occurred every year or so. Then from about the age of eight he developed awake seizures, and these would be preceded by an aura. Brant described an aura of music where he experienced what he called twisted sounds. These were initially pleasant, but by the time he was 11 years old a couple of years later, the sounds hurt and he was scared.

Brant: Strangely enough, when I was very young, it felt good to me. It was very enjoyable and it was something I liked a lot. I was one of those people that, at puberty, my epilepsy developed quite intensely. I started having proper fits and at this point, what I call the music, that started to become something I had perhaps five to twenty times a day and became extremely intense and started to scare me. I don't understand why but the auras, they became very strong and brought on fear to me at that point. Absolute, intense fear that left me a few years later. The fear was not there anymore. And I don't know why that fear occurred at that point.

Coming up, we’ll meet the data visualization artist who took Brant’s seizures and did something…kind of beautiful with them. That’s after the break.

[MIDROLL]

Brant Guichard has been hearing music in his head for thirty years. And, in a way, for the past ten minutes, you’ve been hearing it too…

The music you’ve heard so far in this episode… that music is intrinsically connected to Brant’s epilepsy. In fact, it is Brant’s epilepsy.

Brian: My name is Brian Foo. I'm a data visualisation artist at the Museum of Natural History in New York City.

Joel: By day, Brian works on data visualisations for the museum, but by night he’s the Data Driven DJ.

Brian: So the data driven DJ project is kind of an experimentation on different ways of expressing data as music.

Joel: Part science, part art form - this is data sonification; the process of representing numbers in sound.

Brian: So a lot of that is thinking about what are the strengths of music. Or what are the strengths of sound, as compared to more visual media like charts or graphs. So it's kind of using the fact that music is more felt, and you can kind of perceive things like change and time more intuitively.

Joel: The clicking of a Geiger counter, where faster clicks indicate higher radiation levels - this is one of the earliest and most practical examples of data sonification. Brian’s work on the other hand, is much more song-like..

Brian: Initially I was very interested in learning how to make music. You know, I had a particular skill set, which was computer science. I wanted to figure out a way in which I could learn music. I did some research into data sonification. I wasn't very satisfied with the current state of data sonification. I think a lot of times it's almost like listening to a chart. The question I always had is, why make it into sound if it's already fine as a chart? I kind of used that as the challenge for this project, to make kind of meaningful data music, essentially.

Joel: Brian’s take on songwriting is a process called algorithmic composition. He comes up with a bunch of rules, or algorithms, that tell the music what to do in response to a change in the data. The guiding principle of this approach is something Brian calls “uncreative creativity”.

Brian: Yes, so when I say uncreative creativity, when you think about traditional creativity, it's that artist who is just staring at a canvas and having a direct translation of their emotions or thoughts into the medium. But because my medium is code, when I press play, that's essentially the first time I'm hearing the music. I'm hearing the song. Usually the dataset is so complex, and the algorithm and the rules, there's so many of them, there's so many different variables, that it's really hard for my brain to generate that. That's mostly because I didn't really have a music background; it was hard for me to imagine what the music would sound like when I applied this algorithm to the data to generate the song. It's almost like I'm just designing the rules in which this song plays out, which is not a traditional way you would think about creativity. But that is where the creative act is: designing those rules. It's designing how you map the data to sound.

Joel: But mapping the data to sound isn’t something that happens quickly - or easily.

Brian: It's very much an iterative approach where I have to constantly tweak the algorithm, because usually the first time it just sounds like garbage. It sounds awful. I mean, I think that's the struggle between this creative aspect and the data science aspect, because if you want to stay true to the data, you can't really massage what the song sounds like. If there's a particular part of the song that I don't like, I can't change that one part of the song, because that would probably mean I'd have to change one part of the data. Usually when I tweak one little thing, it completely changes the whole song. So it's really tricky. It is just through brute force of throwing things together and constantly changing variables until it sounds good, as long as it retains that faithfulness to the data. You don't want to make the song sound good at the expense of not being faithful to the data.

Joel: And so we gave Brian some of Brant’s EEG data - the brain recordings of Brant having a seizure, an encounter with The Music. And Brian turned that data into a song.

Brian: And again, this is not a research project. This is a creative project. I wouldn't take what I'm saying as actual scientific research.

Joel: An EEG, or electroencephalogram is a measure of the brain’s electrical activity. It’s a really common research technique in neuroscience where it’s used to measure anything from a person's sleep behavior, to what’s going on in someone’s brain during a seizure.

Brant: They put you in a bed, they put little dots on your head and they say, "Feel comfortable," then they walk off on you. Then they take the drugs off you, and for most epileptics the drugs they take will make them quite drowsy so they can't sleep either. And I'm sitting in there with wax electrodes on my head and I'm thinking, "Yep, they're waiting for the fit, so I'm going to be stuck here." And I was.

Joel: Brant’s fit eventually came, and it’s the data recorded by those electrodes on his scalp that Brian transformed into the music you’ve been hearing in this episode.

The way Brian composed this song - or the algorithm he wrote composed this song - draws on three elements of the EEG data: the amplitude, or the height of the brain waves, which is a measure of how active the brain is; the frequency, or the number of brain waves that occur in a given amount of time, which is a measure of how alert the person is; and the synchrony, or the relative activity of different parts of the brain. And then? He maps changes in these three variables to changes in sound.

Brian: Amplitude, very conveniently, evokes this idea of whether the music is louder or softer. Obviously, the higher the amplitude, the louder the instrumentation. Frequency also has a good corollary to music. High frequency, the instruments are playing at a higher pitch. Lower frequency, at a lower pitch. Synchrony I use to control the percussion in the music. High synchrony, the more drums are playing in a synchronous pattern.
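Here's a minimal sketch of that kind of mapping. To be clear, this is our illustration, not Brian's actual algorithm, and every name, range, and value in it is an assumption.

```python
# A minimal sketch of the mapping described above (our illustration, not
# Brian Foo's actual code): amplitude drives loudness, frequency drives pitch,
# and synchrony drives how dense the percussion is. Names, ranges, and values
# are assumptions for the sake of the example.

from dataclasses import dataclass

@dataclass
class EEGWindow:
    amplitude: float  # 0.0-1.0, normalized brain-wave height
    frequency: float  # 0.0-1.0, normalized brain-wave rate
    synchrony: float  # 0.0-1.0, how in-step different brain regions are

@dataclass
class MusicParams:
    volume: float        # 0.0-1.0 instrument loudness
    pitch_midi: int      # MIDI note for the melodic layer
    drum_density: float  # 0.0-1.0 fraction of percussion hits that play

def map_window(window: EEGWindow) -> MusicParams:
    """Translate one analysis window of EEG data into musical parameters."""
    return MusicParams(
        volume=window.amplitude,
        pitch_midi=int(48 + window.frequency * 36),  # span roughly C3 to C6
        drum_density=window.synchrony,
    )

# A calm stretch of the recording versus the height of a seizure:
print(map_window(EEGWindow(amplitude=0.2, frequency=0.3, synchrony=0.1)))
print(map_window(EEGWindow(amplitude=0.9, frequency=0.8, synchrony=0.9)))
```

Run on a calm window versus a seizure window, the seizure comes out louder, higher, and more densely percussive, which is the kind of contrast this approach leans on.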

Joel: As well as mapping to loudness, amplitude also controls the vocals in the song.

Brian: The higher the amplitude, the more vocals are playing. The different parts of the brain have different kind of vocals associated with it. If all parts of the brain are firing very loudly, there's gonna be many vocals singing very loudly at the same time.

Joel: Vocals are a key part of this composition - they’re the dominant feature of the song. To generate them, Brian sampled the Imogen Heap song, “Hide and Seek”.

Brian: Yeah, I very deliberately used the Imogen Heap song for a few reasons. One, it's completely vocal. Part of the way in which I try to compose these songs is to think about what the listener should be experiencing in relation to this dataset. You know, this dataset represents a human being, another individual. It's also a very intimate dataset. It's their actual brain activity. Is it possible to produce empathy between a listener and the subject? I wanted to use a vocal element of the song, because it is a human subject. Another little trick that I did, or another concept that I tried to leverage... in psychology, or I don't know what field of study this is, there's something called phantom words, where you kind of stitch a bunch of random syllables together. People will hear words, regardless of whether you're giving them those actual words. I chopped up that song into syllables, and the algorithm stitches it together in various different ways, but it plays a little mental trick on people, where people would be hearing different words, and different people would probably hear different words. Again, it's trying to create an experience that's very personal, and kind of unique to the individual. It even lets the listener's brain do some work. Again, it's trying to connect the listener's brain to the subject's brain.

Joel: By sampling a well known song, Brian also plays into Brant’s experience of the musical aura, where songs he hears, or even thinks about during the seizure, are warped, twisted, and incorporated into The Music.

In addition to the vocals, Brian sampled strings from the Philharmonia sound sample library, and percussion from the American experimental rock band Swans. And then? He just let the algorithm do its thing.

I’ll play the song in full at the end of this episode, but first, what does Brant think of the song composed using his seizure data?

Joel: That's it. So what do you think? First impressions.

Brant: It reminds me of the graph actually. I've seen plenty of them. They always show it to me after they make them. Well at least I can say to other people, I sound interesting.

Brian: I try to think about the dataset not as a series of zeros and ones. It is a representation of an actual human being, with real experiences. I think the medium of music is very unique, in the sense that it will evoke a visceral response. My goal with this particular project is to think about what that response should be, as it relates to this particular dataset. I think that's where a lot of my creative energy goes: thinking about how to match what I believe this dataset is about, and how the listener should experience it, in this very visceral way, because with music you feel something. That's what I really like about music, as compared to say a chart, or a graph... I don't remember the last time I was moved by looking at a line chart. But it's a good match. I think it's the right medium for human data, a medium that has this very primal, visceral quality to it.

Twenty Thousand Hertz is produced out of the studios of Defacto Sound, a sound design team dedicated to making television, film, and games sound incredible. Watch and listen to the latest work at defacto sound dot com. While you’re there, be sure to reach out.

Joel: Sum of All Parts is produced by me, Joel Werner. Sophie Townsend is Story Editor. Jonathan Webb is Science Editor. Sound design by me and Mark Don.

I highly recommend subscribing to Sum of All Parts. It’s a podcast that tells extraordinary stories from the world of numbers. To hear more stories like this, search Sum Of All Parts in your favorite podcast player.

Additional help in this episode comes from Mike Nagel and Sam Schneble. It was mixed by Nick Spradlin. Also, thanks to Luciana Haill for letting us play the sonification data at the beginning of the episode. You can find more of her work at lucianahaill dot co dot uk/. That’s Luciana Haill dot co dot uk.

And learn more about our show, Twenty Thousand Hertz, at our website two zero k dot org. There you can find links to the things we’ve talked about in each episode, you can stream our archives, send us story tips, donate to the show, and even…buy stuff! The stickers are sticky and the t-shirts are soft. Also, follow us on Twitter or Facebook by using our handle two zero k org, or by searching for the name of our show, Twenty Thousand Hertz, all spelled out. Okay. I think that’s everything.

Thanks for listening.

Brant: Did you notice I had a fit during that interview?

Joel: I totally had no idea. What happened?

Brant: It was generally a three to four second lapse.

Joel: And what's it like for you? Like what's the experience?

Brant: That one wasn't a heavy aura, it was just a white noise one. Those ones generally don't have enough time to give me a strong rhythm. So they're there, I notice they're there, and then they're gone.

And now, in full, the data sonification of Brant’s seizure data by the Data Driven DJ Brian Foo.

[Seizure Song]

Recent Episodes

Hamilton: Crafting the sound of the Broadway smash hit

Photo by Joan Marcus

Photo by Joan Marcus

This episode was written & produced by James Introcaso.

Broadway’s award-winning, record-breaking, smash hit, Hamilton, is a musical unlike any other. Get the story from people in the room where it happens of how sound helps tell the musical’s story eight times a week. We talk to Nevin Steinberg, Hamilton’s Tony-nominated sound designer, Benny Reiner, Grammy-winning Hamilton percussionist, Anna-Lee Craig, Hamilton on Broadway A2, and Broadway sound design legend Abe Jacob.

Thanks to Atlantic Records and the producers of Hamilton for letting us use their music. Go buy their soundtrack wherever you buy music.

Check out Defacto Sound, the studios that produced Twenty Thousand Hertz, hosted by Dallas Taylor.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Check out Musicbed and use offer code "20K" on your first purchase.

View Transcript ▶︎

[MUSIC: “Alexander Hamilton Instrumental”]

You’re listening to Twenty Thousand Hertz. I’m Dallas Taylor.

Even if you haven’t seen Broadway’s eleven-Tony-winning, box-office-record-breaking smash hit, Hamilton, odds are... you’ve heard of it. It’s been in the news a lot lately thanks to the price - and rarity - of its tickets.

[SFX: News sound bite]

“Tickets for the universally acclaimed musical, Hamilton, are constantly selling out… almost 900 dollars a pop.”

[CLIP: Oscars]

“Lin-Manuel Miranda is here with us. I have to say, it’s weird to see you in a theater without having to pay $10,000.”

[CLIP: Stephen Colbert]

“I went and saw it… and then two hours later I’m going, ‘Why am I crying over Alexander Hamilton?’”

[CLIP: Obama]

“In fact, Hamilton, I’m pretty sure is the only thing that Dick Cheney and I agree on.”

Every element - from the performances to the lighting, to the staging, to the costumes - comes together to create a moving, inspiring, and unconventionally patriotic story that stays with you long after the curtain closes. Each piece of the production is meticulously crafted... and sound design - is no exception.

Abe: There were some moments in Hamilton, which only work so very well because of the subtlety of the sound design and the soundscape.

That’s Abe Jacob, a Broadway sound design legend.

Abe: I've been a sound designer for the last almost 50 years.

Abe is considered the godfather of sound design on Broadway. He worked on the original productions of revolutionary shows like Hair, Jesus Christ Superstar, and Chicago.

Abe trained many of the industry’s working sound designers today, or he trained the people who trained them.

[music out]

Abe: At the time when I started, nobody else was credited or titled sound designer, so I sort of started that industry. That's probably one of the things that I'm most proud of.

When Abe says that Hamilton has remarkable sound design, it’s a huge compliment.

Abe: The sound of Hamilton has got to be very difficult because you're listening to a hip-hop sort of musical style, as well as dialogue, as well as legitimate Broadway show tunes. The combination of all three of those elements coming out as a coherent whole is a very good example of the talent the sound designer came up with. There are a number of moments in the show where sound is very subtle, and yet it tends to solidify the whole meaning of the piece.

So, just who is the sound designer of Hamilton?

Nevin: I'm Nevin Steinberg and I'm a Broadway sound designer.

Nevin’s been working on Broadway shows since the ’90s. His work is extensive. He’s been working with Lin-Manuel Miranda on Hamilton from the beginning… and even before that.

[MUSIC: “In the Heights” plays and continues just under the first sentence]

Nevin: I worked on a Broadway show called In the Heights which was written by Lin-Manuel Miranda and directed by Tommy Kail. And I was part of a creative team that brought it to Broadway and the show won a Tony award and did very well.

When Hamilton was beginning its development, Tommy would occasionally just drop me an email or a phone call with information and kind of a little backstage look at what Lin was working on and what the plans were for this piece. I knew it was gonna be something interesting and exciting to work on, not to mention a lot of fun.

Nevin started his journey with Hamilton where every member of the creative team began - with the script.

Nevin: I'm given the text. And then after I've read it, the first conversation with the director happens. That's the beginning of the sound design for a show.

The director and I and representatives from the music department, including the composer or the orchestrator and music supervision, talk about the architecture, about what it looks like and how it moves, and then we talk about the venue. Is it a 200-seat venue? Is it a 2,000-seat venue? And we start to put ourselves in the position of an audience encountering this story and start to think about how sound can help communicate it.

That process can go on for years, and when it does, the sound design usually turns out better.

That’s exactly what happened with Hamilton. Nevin and the rest of the creative team talked about the show for years. He was there when the show was in pre-workshop mode with just a few actors and a piano.

[MUSIC: White House Poetry Slam Performance]

Nevin’s job as sound designer is to create the show’s soundscape. He puts the team in place that cares for all of the microphones, on both the actors and the orchestra, as well as all of the sound equipment. Nevin balances the volume and sound quality of all of the audio elements in the show to communicate the narrative. He doesn’t just make the show louder; he dictates the story’s dynamic range and makes decisions about how to process the actors’ voices to serve the mission of the story. It’s his team that blows the roof off of the theater with the introduction of this bombastic American spy…

[MUSIC: Yorktown]

[“HERCULES MULLIGAN!

A tailor spyin’ on the British government!

I take their measurements, information and then I smuggle it”]

Nevin’s team also sucks the life out of the room when the story gets intimate, to draw the audience in.

[MUSIC: Burn]

[“I saved every letter you wrote me

From the moment I read them

I knew you were mine

You said you were mine

I thought you were mine”]

It’s not just about volume. Sometimes Nevin and his team are asked to do something that’s never been done before, like in the song “Wait for It” sung by the character Aaron Burr.

Nevin: We had talked about Burr and his relationship to time and how sound might play a part in that.

At the top of “Wait for It,” Burr sings [play clip of “Wait for It”]. The question was: could we capture just that one word, “day,” and repeat it? Of course, the easy way to do that is to lock the tempo of the song, pre-record the actor singing “day,” and just tap it out as many times as you want using playback after he says it.

...but Alex Lacamoire, who wrote the orchestration and did the music direction for Hamilton, didn’t want to pre-record the actor singing. This is because every live performance is different: different audiences and their reactions, different actors, and sometimes different individual interpretations. Capturing the word from the live performance gives the actor the freedom to interpret the piece that night. So, Lacamoire asked Nevin to capture the single word, “Day,” live during every performance… something that hadn’t been done before.

Nevin: With the help of my extremely talented and very creative audio engineer Justin Rathbun, who continues to mix the show on Broadway, we tried to sort out a way to grab just that one word and send it to an electronic delay and feed it back into the system at just the right moment, so that we could capture that word and a few others in that song.

One of the things we discovered was that we could do it live.
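For the technically inclined, here's a rough sketch of that idea in Python. It is not the show's actual rig, which captures the word live on professional hardware; it's just a toy, offline model of grabbing the slice of audio that contains a word, sending it into a delay line, and feeding it back into the mix at decreasing levels. Every number and name here is an illustrative assumption.

```python
import numpy as np

SAMPLE_RATE = 48_000  # assumed sample rate for this sketch

def echo_captured_word(mix, word_start_s, word_end_s,
                       repeats=3, gap_s=0.6, feedback=0.5):
    """Copy the slice of audio containing the captured word and layer
    delayed, progressively quieter repeats back into the mix."""
    out = mix.copy()
    start = int(word_start_s * SAMPLE_RATE)
    end = int(word_end_s * SAMPLE_RATE)
    word = mix[start:end]
    gap = int(gap_s * SAMPLE_RATE)

    for n in range(1, repeats + 1):
        insert_at = end + n * gap                  # where this repeat lands
        if insert_at >= len(out):
            break
        gain = feedback ** n                       # each repeat is quieter
        stop = min(insert_at + len(word), len(out))
        out[insert_at:stop] += gain * word[:stop - insert_at]
    return out

# Tiny demo: a 4-second silent "mix" with a synthetic sung "word"
# (a 0.3-second 440 Hz tone) starting at the 1-second mark.
t = np.arange(4 * SAMPLE_RATE) / SAMPLE_RATE
mix = np.zeros_like(t)
word_idx = (t >= 1.0) & (t < 1.3)
mix[word_idx] = 0.5 * np.sin(2 * np.pi * 440 * t[word_idx])
with_echo = echo_captured_word(mix, word_start_s=1.0, word_end_s=1.3)
```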

[MUSIC: Wait for It]

[“Theodosia writes me a letter every day”]

Nevin: Once we knew we could do it, then it became an idea. It comes back in the coda of “We Know” in which Burr says...

[MUSIC: We Know]

[“We both know what we know”]

Nevin: This reinforces the idea that this character has a very special relationship to time.

Every second of Hamilton has been designed to pull you into the story… and, all of Nevin’s skills were put to the test in the show’s climax, where Hamilton and Burr finally duel.

I should warn you though that we’re about to spoil the end of this duel, but if you paid attention in history class, you should already know this.

[MUSIC: 10 Duel Commandments]

[“One, two, three, four

Five, six, seven, eight, nine…

It’s the Ten Duel Commandments”]

Nevin: Lin cleverly puts a duel early in act one so that the audience can understand both the rules of duels which are explicitly told in the song, in the lyric of the song, but also the style in which we're gonna present them so that later when we encounter it, we're already familiar with how the duel is gonna work.

From a sound point of view, we do the same thing. We sort of set up the duel early in act one with what I like to call a vanilla gunshot, which is when Laurens shoots Lee.

[SFX: Vanilla Gunshot]

That gunfire is pretty generic. It's sort of unremarkable, and that's intentional.

One thing to know about that final duel: the final gunshot is rather shocking, both in its volume and scale and in its complexity.

[SFX: Final Gunshot]

The lead up to it, I mean, really, all credit to the music department in terms of the way it's written.

[MUSIC: The World Was Wide Enough]

[“One two three four

Five six seven eight nine—

There are ten things you need to know”]

Nevin: The swells and the tick-tock sound and the bells, all of that is actually part of the orchestration, and brilliantly orchestrated by Alex Lacamoire. There were two other important participants: Scott Wassermann, who is our electronic music programmer, and Will Wells, who is an electronic music producer. These guys were responsible for crafting the samples you hear in that score.

[MUSIC: The World Was Wide Enough]

[“They won’t teach you this in your classes

But look it up, Hamilton was wearing his glasses”]

A lot of what you're hearing is coming straight out of what the band is doing, and as we lead up to the final encounter between Burr and Hamilton, Burr goes to fire his gun and we stop time.

[MUSIC/SFX: The World Was Wide Enough and gunshot]

[“One two three four five six seven eight nine

Number ten paces! Fire!—

SFX GUNSHOT”]

Nevin: The gunshot is actually reversed and choked so that we get the sense that we've stopped that bullet and all the staging supports that idea.

We see the characters freeze, we see one of our ensemble members who is actually referred to as the bullet, sort of pinch the air as though she's slowed the bullet down as it crosses the stage and is heading towards Hamilton. The ensemble dances the path of the bullet as Hamilton begins his final soliloquy which is one of the only moments of the show that there is no music.
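If you're wondering what "reversed and choked" can look like in practice, here's a minimal sketch, assuming a simple synthetic one-shot rather than the production sound: flip the sample backwards so it swells instead of decays, then cut it off abruptly so the swell never resolves. The tiny fade right before the cut is only there to avoid a click; the abrupt stop is what gives the "frozen" feeling.

```python
import numpy as np

SAMPLE_RATE = 48_000  # assumed

def reversed_and_choked(sample, choke_at_s=0.35, fade_ms=5):
    """Reverse a one-shot sample, then 'choke' it: cut the audio off
    abruptly at choke_at_s, with a very short fade so there's no click."""
    rev = sample[::-1].astype(float).copy()
    choke = int(choke_at_s * SAMPLE_RATE)
    fade = int(fade_ms / 1000 * SAMPLE_RATE)
    rev[choke:] = 0.0                                       # hard stop
    rev[choke - fade:choke] *= np.linspace(1.0, 0.0, fade)  # de-click fade
    return rev

# Stand-in "gunshot": a noise burst with an exponential decay.
n = int(0.8 * SAMPLE_RATE)
decay = np.exp(-np.linspace(0.0, 8.0, n))
shot = np.random.default_rng(0).normal(0.0, 0.3, n) * decay
stopped_bullet = reversed_and_choked(shot)
```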

[MUSIC: The World Was Wide Enough]

[“I imagine death so much it feels like a memory...”]

[MUSIC: The World Was Wide Enough]

[“...No beat, no melody”]

Nevin: The only sound here is the sound of wind which again has been produced from the orchestration and we take that sound and actually bring it into the room as Hamilton turns and talks, basically as he sees his life flash before his eyes.

[MUSIC: The World Was Wide Enough]

[“My love, take your time

I’ll see you on the other side”]

Nevin: When we release time, we repeat the last phrase. He aims his pistol at the air where he yells, "Wait," and then we have the final gunshot which reverberates through the theater.

[SFX: Hamilton yells, “Wait!” and then Final Gunshot]

[MUSIC “History Has Its Eyes on You INSTRUMENTAL”]

Nevin’s work on Hamilton is a massive labor of love. He continues to work on the show as it premieres in new cities all over the world. Back on Broadway, he’s left an incredible team in place to run the show every night along with some incredible musicians. We’ll hear from a few of them, in a minute.

[music out]

MIDROLL

[MUSIC: The Room Where It Happens]

[“I wanna be in

The room where it happens

The room where it happens

I wanna be in

The room where it happens

The room where it happens”]

So, what’s it like to do audio for Hamilton live? To be in the room where it happens?

Anna-Lee: No show is the same every night because there's a different crowd, they're responding differently, the energy is different.

That’s Anna-Lee Craig. Nevin hired her to work on Broadway’s Hamilton as the A2, or deck audio.

Anna-Lee: Deck audio is whoever is backstage supporting the audio team while the show is going on. I mic the actors, I make sure that the band is helped, I make sure that the system is working.

A typical day for her is pretty busy.

[MUSIC: “Non-Stop”]

Anna-Lee: We come in an hour and a half before the show starts. The first thing we do is turn on the system, make sure everything boots the way that we expect it to.

I'm going through each one of the mics and checking the rigging, checking the connectors, checking any custom-fit parts making sure they're clean and making sure that they sound consistent with how we expect them to sound based on whatever mic that is.

I have 30 wireless transmitters, that's the number of mics like on people, there are probably 70 mics in the pit. While I'm doing all that with the wireless mics, the mixer will also go through the pit and do a mic check on each one of the mics and make sure that they are going through the monitors. We're just like checking all the microphones and then once they're set and once we've done the mic checks, we chill until half hour.

We have 10 minutes of downtime just in case there's an emergency.

At half hour, we assist the backstage crew, assist mics getting on to actors.

That’s all BEFORE the show. During the show, she’s putting out fires.

Anna-Lee: I run interference.

Like a microphone broke on an actor and we need him to have another mic before his next line, but we can't fully take his wig off, so I put a halo on him and an extra transmitter in his pocket until the show gives me a long enough break to like put a real new mic on him.

A halo is an elastic loop that is tied to a microphone and you just put it on your head like a headband and then you get like perfect placement. It doesn't look as nice as like our custom-built mics but in a pinch like it will do the job.

[music end]

Justin Rathbun usually mixes Hamilton on Broadway. On his days off, Anna-Lee takes the reins and mixes the show, a job just as busy as being an A2.

Anna-Lee: The way that we mix the show, it's a line by line style. There's not more than one mic open unless everyone is actually singing. Hamilton, Burr, Hamilton, Burr, Hamilton, Burr, Angelica, Hamilton, Burr. It's just one mic open one at a time so that you just have a very clean direct sound and you're not being distracted by any other noise going into our mics.

Mixing audio line by line means Anna-Lee has to be locked into the show for every second of the performance. She’s constantly moving.

Anna-Lee: I'm controlling the band and the reverb with my right hand and the vocals with my left hand and sometimes I'm controlling them all with all 10 fingers. There's not any downtime. I couldn't go to the bathroom, it's impossible.

[MUSIC: “Aaron Burr, Sir”]

For Anna-Lee, being that busy and focused on the show is one of her favorite parts of the job. One of her favorite songs to mix is “My Shot.”

Anna-Lee: "My Shot" is like the introduction of Hamilton and also the spot where he decides that he's going to put himself out there and he meets these guys in a tavern.

[“I’m John Laurens in the place to be!

Two pints o’ Sam Adams, but I’m workin’ on three, uh!”]

Anna-Lee: You have to start like again relationship building. It's first just like four guys and they're beating on the table, they're table-rapping. That should feel like it's coming from the stage, but as Hamilton starts to proclaim his like manifesto or like he's going to do great things, the sound starts to expand with him and it gets louder.

[MUSIC: “Aaron Burr, Sir” right into “My Shot”]

[“If you stand for nothing, Burr, what’ll you fall for?...”

“Ooh

Who you?

Who you?

Who are you?

Ooh, who is this kid? What’s he gonna do?

I am not throwing away my shot!

I am not throwing away my shot!

Hey yo, I’m just like my country

I’m young, scrappy and hungry

And I’m not throwing away my shot!”]

Anna-Lee: Then Laurens says, let's get this guy in front of a crowd. Then the ensemble comes in. The level, the dynamic level, takes a step up.

[MUSIC: “My Shot”]

[“Let’s get this guy in front of a crowd

I am not throwing away my shot

I am not throwing away my shot

Hey yo, I’m just like my country

I’m young, scrappy and hungry

And I’m not throwing away my shot”]

Anna-Lee: We don't go all the way yet because we've got another journey to go on. Then the townspeople get added in and they're running through the streets. Laurens is saying, whoa. He's telling everybody else to like jump in, we add like some reverb in, and it's all through the town.

[MUSIC: “My Shot”]

[“Ev’rybody sing:

Whoa, whoa, whoa

Hey!

Whoa!

Wooh!!

Whoa!

Ay, let ‘em hear ya!”]

Anna-Lee: You can hear the sound surrounding you in the audience because the reverb is in surround. It's like you're a part of it. Then we suck back down into just like Hamilton's monologue.

[MUSIC: “My Shot”]

[“I imagine death so much it feels more like a memory

When’s it gonna get me?

In my sleep? Seven feet ahead of me?”]

Anna-Lee: That's when he finally starts talking to a crowd again and it builds to the fullest height, and then we go through the whole final chorus. Everybody who's in the show is singing "I am not throwing away my shot."

[MUSIC: “My Shot”]

[“And I am not throwing away my shot

I am not throwing away my shot

Hey yo, I’m just like my country

I’m young, scrappy and hungry

And I’m not throwing away my shot”]

Anna-Lee: You're like living that huge, loud, big moment for the rest of the song and then we like slam it for the button.

[MUSIC: “My Shot”]

[“Not throwin’ away my—

Not throwin’ away my shot!”]

For Anna-Lee, mixing the show live is about staying true to Nevin’s soundscape, but it’s also about feeling out actors and the audience during every performance.

Anna-Lee: You're also building with the actor's intensity. He's starting to like believe that everyone is onboard.

As you feel like you're winning over the people in the cast, you're also winning over the audience, and the audience can take more decibels. If they're not won over yet, you have to slowly build. Sometimes, if you feel like the crowd is not quite ready for that loud level yet, you can hear people start to rustle and you can almost feel people want to get up out of their seats. Once you reach that kind of fever pitch feel, you can really just go for it.

We can’t have an entire episode about sound in a musical like Hamilton without talking to one of the musicians.

Benny: My name is Benny Reiner and I play percussion.

Benny is part of Hamilton’s Grammy-winning orchestra. When he says he plays percussion, you might be thinking he sits behind a few drums and obscure instruments, but his job entails so much more than that.

Benny: My setup consists of different weird things. I have a MOTIF keyboard, which is an electric piano basically. I play little patches like vibraphone. I have a sampler on which I basically play anything that would come from a drum machine or a sample.

Then there's also what you just think of as percussion: tambourine, shakers, concert bass drum, random stuff like that.

That’s not all Benny does. Another big part of his job is running a piece of software called Ableton.

Benny: The fundamental function of it would be precise timekeeping. Most of the show is in the vein of hip hop and R&B and pop and a lot of contemporary elements. Since there is so much of that in Hamilton, the role of the metronome is to really keep that time together.

As humans, we don't have perfect time; no show is the same, because everything we do has variance in it.

The audience and actors on stage never hear the click track. It’s just for the orchestra to keep time. What the audience hears is this…

[MUSIC: “What’d I Miss”]

[“There’s a letter on my desk from the President

Haven’t even put my bag down yet

Sally be a lamb, darlin’, won’tcha open it?

It says the President’s assembling a cabinet...”]

And what the orchestra hears is this...

[MUSIC: “What’d I Miss” with added 178 BPM metronome]

[“There’s a letter on my desk from the President

Haven’t even put my bags down yet

Sally be a lamb, darlin’, won’tcha open it?

It says the President’s assembling a cabinet...”]

Ableton keeping time for the pit is important, but it has other functions as well.

Benny: There are certain track elements, stuff that really is impossible to play live. Stuff that's going through phasers or effects or there's information that gets sent from it to control certain lighting things.

Controlling lighting cues through the same software that keeps the orchestra in time means that vocals, orchestra, choreography, and lights all sync perfectly. That’s an important part of creating Hamilton’s moments of sensory immersion.
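Since the clip above adds a 178 BPM metronome, here's a small sketch of the arithmetic behind a click track. It is not Hamilton's actual Ableton session, just an illustration of how a tempo turns into evenly spaced, accented clicks; the blip lengths and pitches are assumptions.

```python
import numpy as np

SAMPLE_RATE = 48_000  # assumed

def click_track(bpm=178, bars=4, beats_per_bar=4):
    """Return a mono click track: a 20 ms blip on every beat, with a
    higher-pitched blip accenting each downbeat."""
    spb = 60.0 / bpm                                  # seconds per beat
    total = int(bars * beats_per_bar * spb * SAMPLE_RATE)
    track = np.zeros(total)
    blip_len = int(0.02 * SAMPLE_RATE)                # 20 ms click
    t = np.arange(blip_len) / SAMPLE_RATE

    for beat in range(bars * beats_per_bar):
        freq = 1500 if beat % beats_per_bar == 0 else 1000   # accent the "1"
        blip = 0.8 * np.sin(2 * np.pi * freq * t) * np.hanning(blip_len)
        start = int(beat * spb * SAMPLE_RATE)
        end = min(start + blip_len, total)
        track[start:end] += blip[:end - start]
    return track

clicks = click_track()   # four bars of 4/4 at 178 BPM
```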

While Benny has a lot of technical responsibility, when it comes down to it, he’s a musician at heart. You can tell that from the way he talks about his favorite song to perform, the love song “Helpless.”

[MUSIC: Helpless]

[“Ohh, I do I do I do I

Dooo! Hey!

Ohh, I do I do I do I

Dooo! Boy you got me helpless”]

Benny: I love "Helpless" just because it just feels great. You're in there, you're playing, you're just making things feel good essentially and that's one of my favorite things about music is just making people feel happy and making people feel warm.

[continue Helpless, then fade out]

[MUSIC: “Quiet Uptown Instrumental”]

Anna-Lee: Our main job is serving the story and the narrative.

When I'm mixing, I'm really a part of the show like an intimate, intricate cog in what makes Hamilton work. I know that everything that I do is very directly affecting the 1300 people watching the show.

I’m also an extrovert and a people person, and I get to be backstage. I make really strong friendships with actors and crew members and musicians.

Benny: The ultimate reason why I love doing what I do is the ability to connect with people.

You've got to really bring it if you want to really connect and relate to somebody. That's really fulfilling to me. If you put enough of yourself into something honestly and have that reciprocated, have somebody listen to it or witness it and really feel something, that's it for me.

Nevin: In some ways, Hamilton is one of the hardest things I've ever worked on, and in other ways, it was also one of the easiest things I've ever done, because it is so well written and so beautifully directed and staged and orchestrated that my job was simply to respond to it in a credible and exacting way.

This extends to everyone who worked on it. I mean Howell Binkley's lighting is exquisite and sharp and just so focused and Paul Tazewell's costumes tell the story so beautifully and so subtly throughout the play. It's extraordinary.

The set moves in such a way and gives you a background in such a way that you're never unsure about how you're going to encounter these characters and the story.

I love the people. They're some of the smartest people I know, in any field. I love the banter. I love the fact that part of my job is having conversations, laughing, making things, criticizing things, and striving to make things better all the time. I love the knowledge that an audience really has no idea what it is that I've done or even what we've done as a team to get them to feel a certain way or get them to look in a certain direction or get them to experience a moment in the way we've crafted it because we've done our job so quietly.

I love that. I like the theater. I like going to the theater. I like plays and musicals, so that I get to do this for a living is pretty exciting for me.

[music ends]

[MUSIC: “Who Lives, Who Dies, Who Tells Your Story Instrumental”]

CREDITS

Twenty Thousand Hertz is made by the sound design team at Defacto Sound. If you do anything creative that also uses sound to help tell that story, go check out defactosound dot com. And don’t forget to reach out. We’d love to know who you are.

This episode was written and produced by James Introcaso… and me, Dallas Taylor. With help from Sam Schneble. It was sound designed and mixed by Colin DeVarney.

Thanks to our guests Nevin Steinberg, Anna-Lee Craig, Benny Reiner, and Abe Jacob. Thanks also to the producers of Hamilton for their help with this episode. If you’re interested in seeing Hamilton, the show is on tour in the United States right now. Go to HAMILTON MUSICAL DOT COM for show dates and more information.

You can say hello, submit a show idea, give general feedback, read transcripts, or buy a t-shirt at 20k.org. ...we’re also on Facebook and Twitter. Hearing from listeners is the most fun thing about making this podcast, so please don’t hesitate to drop us a note.

Finally, be sure to tell all your friends about the show. Especially those people who won’t stop bugging you about Hamilton.

Thanks for listening.

[music out]

Recent Episodes

False Alarm: How the Emergency Alert System went wrong

EBS Pic.png

This episode was written & produced by Jim McNulty.

When people in Hawaii were falsely alerted to a ballistic missile threat, the first thing they heard was the sound of an emergency alert. For decades, this tone has alerted us to local weather emergencies and other important events, but it has never been used for its original purpose. In this episode, we explore the history of the Emergency Alert System and its predecessors. Featuring Kelly Williams from the National Association of Broadcasters, Frank Lucia, former EAS advisor for the FCC, and Wade Witmer from FEMA.


MUSIC IN THIS EPISODE

Secret Cricket Meeting - Live Footage
Time (Instrumental) - Joy Ike
The Sudden Suicide of Miss Machine - Oslo
The Idea That Was Stolen Away - Dexter Britain
Topaz - AM Architect
One Last Time - On Earth
Arietta - Steven Gutheinz
Longwave - The Echelon Effect
Waining Patience - Dexter Britain
Levels of Pride - Dexter Britain
Opus 101 - Dexter Britain
God Bless Us Everyone - Kino
Thesis - Blake Ewing

20K is hosted by Dallas Taylor and made out of the studios of Defacto Sound.

Follow Dallas on Instagram, TikTok, YouTube and LinkedIn.

Join our community on Reddit and follow us on Facebook.

Consider supporting the show at donate.20k.org.

To get your 20K referral link and earn rewards, visit 20k.org/refer.

Get a 4 week trial plus a digital scale at Stamps.com. Just click on the Radio Microphone at the top of the homepage and type in "20K". 

Thanks to Audioblocks for supporting this episode. Sign up at audioblocks.com/20k.

View Transcript ▶︎

[SFX: Tornado warning alert sound]

You're listening to Twenty Thous… [CUT OFF]

[SFX: EBS Data burst, two-tone alert]

COMPUTER VOICE: “YOU’RE LISTENING TO TWENTY THOUSAND HERTZ. THE STORIES BEHIND THE WORLD’S MOST RECOGNIZABLE AND INTERESTING SOUNDS. HOSTED BY DALLAS TAYLOR. AND THIS IS THE STORY BEHIND THE EMERGENCY ALERT SYSTEM.”

[music in]

So, that wasn’t a real emergency. But that distinctive, dissonant tone got your attention, didn’t it? And for good reason. That sound has accompanied weather alerts and other important warnings here in the U.S. for decades. It’s a key part of the Emergency Alert System, or EAS.

But had this been an actual emergency, it would have been followed by official news or instructions from local emergency managers… or the National Weather Service, or in the case of a national emergency—the President of the United States.

[music out]

Kelly: The Emergency Alert System is a mechanism that’s intended to allow the President to speak to the American people in times of catastrophic emergency.

That’s Kelly Williams, from the National Association of Broadcasters.

Kelly: The only part that's actually required for a broadcasted carrier is the President's message. All of the local uses of it, which is 99.9 percent of it, is voluntary. It was sort of a ... ‘Well, the government's put this system in place. It doesn't do anything most of the time. We’ll let states and cities and towns and counties use it.’

[music in]

The EAS is primarily used by local authorities. They send warnings by way of broadcast, as well as cable, satellite, and wireless providers. It falls under the authority of FEMA, the Federal Emergency Management Agency. But the coordination and enforcement of regulations, at least in regard to broadcasters, is handled through the FCC.

Wade: The FCC actually protects that tone in broadcast and in movies, so you're not allowed to use what's called the attention tone unless it's a real emergency, unless you have a waiver from the FCC that you're performing a test.

That’s Wade Witmer, FEMA Deputy for the Integrated Public Alert Warning Systems, or “IPAWS.”

[music out]

The FCC takes these alert tones so seriously, that airing this podcast on a broadcast station—and using the tones at the top like we just did—would be subject to significant fines!

Kelly: There have been a number of cases where a station gets an ad and it's in the ad. There was one for a movie. Olympus Has Fallen. And the ad agency that did the distribution, "Oh, let's put the EAS tones in."

[play Olympus has fallen commercial]

The ad resulted in $1.1 million in FCC fines against Viacom and Disney.

And more recently, a station in Jacksonville, Florida was fined $55,000 for airing this promo for the Jacksonville Jaguars football team...

[INSERT JACKSONVILLE JAGUARS HYPE SPOT WITH TONES: “This is an emergency…”]

Go Jaguars….

[music in]

So, why are these tones so protected? And how did we land on this tone in particular? To understand that, we have to go back to the height of the Cold War, and the systems that predated the EAS.

Frank: My name is Frank Lucia and I worked as an EAS Emergency Alert System Advisor to the FCC.

Frank spent 36 years at the FCC. He and Kelly have worked together on many of the national advisory committees over the years, and Frank continues to consult on issues regarding the EAS, even in retirement.

Frank: Back in the 50s, that's when the first advisory committees were set up by network industry people and broadcast engineers, to come up with a system that would work best.

[music out]

The system they developed was called CONELRAD, or Control of Electromagnetic Radiation.

Kelly: The way planes navigated back then was by listening to radio stations and zeroing in on them, so if the bombers are coming across the Atlantic, shut off the radio stations. Then you don't know where Cleveland is because there's no radio station.

Frank: The CONELRAD system was developed under President Truman. Its main mission was to have certain broadcast stations go off the air, while those that remained on the air were required to change frequencies to the 640 and 1240 signals.

Kelly: This idea was to have a channel that the public knew they could go to, so if you can find a real old radio, you will see a triangle on the dial, and you could tune to it and say, "Here's one of those two frequencies where there's emergency information." That started the practice of alerting the public, but also of getting the radiation down.

[music in]

In the early 1950s, President Truman was concerned about the growing conflict in Korea, and worried it could result in attacks on American soil.

[music out]

[play clip: PRESIDENT TRUMAN: “Remember this: If we do have another World War it will be an atomic war…. I do not want to be responsible for bringing that about.”]

Put yourself into that Cold War, “Duck-and-cover” mindset, and you realize that there was a very real concern that an atomic attack could happen at any time. The Federal Civil Defense Administration even ran public service announcements featuring big name stars, urging people to be prepared.

[play clip: BOB HOPE: “Friends, this is Bob Hope. If an atom bomb hit your home city, would you know what to do? For further details, consult your local office of Civil Defense.”]

When it came time to test this CONELRAD system, let’s just say the signal was very 1950s.

[MUSIC FLOURISH: THIS IS A CONELRAD DRILL! Ladies and gentlemen, for the next 15 minutes, this program is the only program you will hear in this area. By order of the FCC, all standard television and radio stations in the United States are off the air in the first daytime test of the CONELRAD system of emergency broadcasting.]

[music in]

Frank: In 1963, the Emergency Broadcast System was established to expand the use of emergency information to all stations. Under President Kennedy, they decided to let all the stations stay on their regular channels and provide the emergency information, because missile technology had advanced to the point where they used other guidance systems and didn't have to rely so much on tracking the signals of broadcast stations.

[play clip: JFK: Fellow Americans, the annual civil defense exercise… is a test of our program of peaceful preparedness. Should the United States ever be subjected to direct enemy attack, CONELRAD and the national emergency broadcasting system will come to our defense.]

[music out]

The Emergency Broadcast System, or EBS, is what most of us over the age of 35 remember from our youth. And it was the EBS that first debuted our iconic dual-frequency tone.

[play clip WCBS-TV TEST: This is a test. For the next 60 seconds this station will conduct a test of the Emergency Broadcast System. This is only a test. beeeep...]

Frank: The major networks wanted to come up with another way because it was hard for engineers to deal with some of the technical problems that CONELRAD brought about. The industry and the FCC began looking at developing a tone system. And so Bell Labs, at AT&T, decided they were going to find two frequencies that would be clear in terms of what was being used at that time. They came up with the two-tone attention signal.

The attention signal is made up of two distinct frequencies:

853 hertz... [BEEP]

and 960 hertz... [BEEP]

The resulting “chord” (if you will)... [BEEP]

It's dissonant. It’s unpleasant. And it gets your attention.
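Because the two frequencies are spelled out above, it's easy to hear this for yourself. Here's a minimal sketch that simply sums 853 hertz and 960 hertz sine waves; the duration and level are illustrative assumptions, not part of any spec.

```python
import numpy as np

SAMPLE_RATE = 48_000  # assumed

def attention_signal(seconds=8.0):
    """Mix equal-level 853 Hz and 960 Hz sine waves into the dissonant
    two-tone 'chord' described above. Duration and level are arbitrary."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return 0.5 * np.sin(2 * np.pi * 853 * t) + 0.5 * np.sin(2 * np.pi * 960 * t)

tone = attention_signal()  # write this to a WAV file to hear the effect
```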

Frank: After some years of testing, the commission decided, finally, we're going to implement the use of this attention signal by all broadcast stations and we're going to expand the use of EBS to develop state and local EAS plans. There would be key stations that would be monitored by the other broadcast stations, so that an emergency message could be distributed to all the stations in given markets.

[music in]

These key stations were called Primary Entry Points, or PEP stations. Instead of sending the alert signal directly to every station in America, stations are daisy-chained. Here’s Kelly:

Kelly: The primary entry point stations serve two purposes. One is, in the case of some catastrophe where everything's gone, they're designed to survive and stay on the air, and they cover 90-something percent of the population of the United States. But they are also one of the originating points from which the President's message would flow.

So, it comes from the White House to FEMA and they send the message, they go to the primary entry point stations and then there's a daisy chain. Stations around those monitor the PEP station, they pick up the message, they play it. You're not relying on wired connections, these are all over-the-air connections.

These PEP stations are so important to the nation’s warning system, that they’ve even been hardened against attack.

Wade: Today FEMA has a relationship with 74 radio stations. We actually install power generation equipment and some other resiliency components and communications gear.

The equipment we provide there is for them to primarily be able to operate for an extended period of time without commercial power.

[music out]

Frank: FEMA also worked with the Corps of Engineers to specifically look for transmitter sites that were not in what we would call high risk areas.

As an example, at that time, WABC's AM transmitter was not located in the city. It was out of this high risk area, but they wanted to harden that transmitter site, so that staff could go and actually man the station from the shelter at that transmitter site.

[music in]

Of course, to date, the President has never activated a national emergency alert for real. But that doesn’t mean there haven’t been a few scares. More on that after the break.

[music out]

MIDROLL

[music in]

Only the President can activate a national EAS alert. At least, that’s true today. But back in the 70s, weekly tests were issued out of NORAD, the North American Air Defense Command at Cheyenne Mountain, Colorado. And in February of 1971, radio listeners across the country got a big scare.

[music out]

[play clip: WOWO-AM FALSE ALARM: This station has interrupted its regular program at the request of the United States government to participate in the Emergency Broadcast System serving the Fort Wayne area… WOWO received this emergency announcement just a few moments ago. We did verify with a special message in code, and this is an emergency action directed by the emergency network and directed by the President...]

Frank: NORAD was in charge of activating the system. I guess, during a normal, weekly test, one of the operators at NORAD put in the wrong tape, so to speak. It was an actual alert tape, nationwide alert tape.

For more than 40 minutes, many newsrooms waited anxiously for instructions until finally the all-clear code was properly sent.

[play clip: WOWO-AM FALSE ALARM: This just in, and I’m quoting direct and being completely honest, it says “AT&T advises that the Air Force at Cheyenne Mountain in Colorado put the wrong message on tape.” If you think this hasn’t been something here at the studio… The Air Force evidently, then, at Cheyenne Mountain in Colorado put the wrong message tape on the wire.]

[music in]

After this, the activation procedure was taken away from NORAD and now only the President can issue a national alert. But over time, other challenges began to arise with the EBS.

Frank: In the late 80s, we were faced with a number of problems. Broadcasters were becoming increasingly automated, and the EBS was not easily engineered so that the messages could be automatically rebroadcast. Also, a lot of subscribers were watching cable.

At the same time, the National Weather Service was developing a technology to improve weather radio. It allowed data to be sent over audio channels, and it was called Specific Area Message Encoding.

[music out]

Kelly: What you're actually hearing is a data burst.

[SFX: data burst]

People refer to it as the duck quack. Its purpose is not to alert you. Its purpose is to send data down the line over the audio channel that a receiver can get and decode.

But the introduction of the data burst didn’t signal the end of our attention signal.

Frank: We wanted to keep the two-tone attention signal, because it had two functions. One, it still caught the attention of people who were used to hearing that attention signal. And two, there were still hundreds of old EBS receivers out there that the public had, that would turn on when they heard that attention signal.

So now with the Emergency Alert System, the content of the alert message is actually decoded from those three data bursts you hear at the top. And the group at the end is literally an “End of Message” signal that triggers the automated equipment to switch back over to normal programming.
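For the curious, here's a rough sketch of how data can ride on an audio channel, in the spirit of that "duck quack": each bit becomes a short burst at one of two tone frequencies, a technique called audio frequency-shift keying. The bit rate and the two frequencies below are my best recollection of the published SAME figures and should be treated as assumptions; real encoders and decoders also deal with preambles, checksums, and phase continuity, none of which are modeled here, and the example string is not a valid SAME header.

```python
import numpy as np

SAMPLE_RATE = 48_000
BIT_RATE = 520.83        # bits per second (assumed, per the SAME spec)
MARK_HZ = 2083.3         # tone used for a 1 bit (assumed)
SPACE_HZ = 1562.5        # tone used for a 0 bit (assumed)

def afsk_encode(message: str) -> np.ndarray:
    """Turn an ASCII message into an audio burst: each bit becomes a
    short tone at one of two frequencies (audio frequency-shift keying).
    Heavily simplified compared to a real SAME encoder."""
    samples_per_bit = int(round(SAMPLE_RATE / BIT_RATE))
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    chunks = []
    for byte in message.encode("ascii"):
        for i in range(8):                        # least significant bit first
            freq = MARK_HZ if (byte >> i) & 1 else SPACE_HZ
            chunks.append(0.8 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# Example: encode a header-like string (NOT a valid SAME header).
burst = afsk_encode("ZCZC-EXAMPLE-")
```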

[music in]

But it was the next technological leap that put the future of advanced warning right in the palm of our hands.

Frank: Something needed to be done to move warning into the digital age, the internet age. As cell phones proliferated, there was a need to capture that industry to be a part of public warning. As a result, the Wireless Emergency Alert System was born.

[music out]

[play clip: FEMA PSA: Tone. Many mobile devices will now bring you Wireless Emergency Alerts. Real-time information directly from local sources you know and trust. With the unique sound and vibration, you’ll be in the know wherever you are.]

Wade: The system officially came online in April of 2012, and the first alerts from the weather service, went through to cell phones starting in June of 2012. The National Center for Missing and Exploited Children began sending amber alerts through that exact same system.

[music in]

These sudden alert signals can definitely be startling...especially when you’re in a roomful of people where all the phones go off at once.

But… the ability to send Amber Alerts through the Wireless Emergency Alert system is saving children’s lives… and putting critical information in the hands of the public at a time when every second counts.

[play clip: FEMA VIDEO/CBS13: Reporter: At 2:30 the cell phone notification went out. At 2:34 Lally called 911. At 2:39 Kline was in custody and the 3 children were safe. Official: This really is a great example of how the Amber Alert system is supposed to work.]

This success story from California is one of the 40 successful rescues that FEMA attributes directly to Wireless Emergency Alerts.

[music out]

Wade: The system is limited to only 90 characters of space, so it's a very short message. Other key attributes that make it very effective is that it's broadcast from cell towers, so it's sent to any phone that happens to be associated with that tower at the time.

So, no matter where you go in the country, as long as you haven't opted out on it on your individual phone, you'll get a message that's being broadcast in that area.

This allows the system to target mobile users as they come into an alert area, not just in their home zip code. And through international cooperation, the system isn’t specific to American cell phones.

[music in]

Wade: If you travel to Canada and you have a phone that you got here in the United States that is Wireless Emergency Alerts capable, it will receive alerts in Canada and vice versa.

Canada plans to introduce their version late next year. I think they're shooting for late spring. There's public outreach and public service announcements going on in Canada right now talking to people about that tone.

[play clip: CANADIAN “ALERT READY”: It was Sean’s graduation. We were so proud. We all got together for a picnic. That’s when we heard, “BEE-DO BEE-DO” coming from the radio...”]

[music out]

And while Canada’s system is slated to go online in 2018, Japan’s “J-Alert” system is already operational. And in August of 2017, it sent out the most terrifying message possible. The message said a missile had been fired from North Korea, and urged citizens to take shelter in a sturdy building or basement.

[SFX : J-Alert tone and the real “Missile Overhead” message in Japanese]

Fortunately, this North Korean missile splashed down in the ocean. But events like this add to the tension across the Pacific. It serves as an all-too-real reminder of the original reasons behind these alert systems.

It’s also why the residents of Hawaii were terrified, when they heard this.

[Play clip: TONE. “The US Pacific Command has detected a missile threat to Hawaii. A missile may impact on land or sea within minutes. This is not a drill. Take immediate active measures.”]

For 38 minutes, people were scrambling for cover. There’s even a video floating around the internet where a family is putting their children into a manhole for safety.

What actually happened was an employee mistook a drill for a real warning about a missile threat. He responded by sending the alert without sign-off from a supervisor. That employee was later fired, and an Administrator resigned.

So even though human error has caused moments of confusion and widespread panic, it’s clear that the system is working. It’s quickly getting alert messages out to the people.

And the next generation of TV broadcasting will be able to support even more advanced options.

[music in]

Kelly: There's a functionality in ATSC three called wake-up bits. We can send something, make your TV go, "Hey," so if it's on, it'll react. If it's off, it might turn on.

Embedded in that is the ability to send a lot more rich data. So for example if you had an Amber Alert, you could send over the air a picture of the child, a picture of the car. All this could be downloaded in the background, loaded down into a phone if it had a receiver into it, or your TV set. You push a button, you go, "Oh, look," and you look and see what the car is.

[music out]

I think the important thing to keep in mind is all of this is there for the general public's protection. Don't tune it out.

If the President comes on, God save us, that means something really bad has happened. And that's never happened, ya know, where there has been a catastrophe large enough that the President had to use the EAS system to speak to the country.

[music in]

While broadcast TV and cellular devices can reliably receive warnings, there is still (ironically) a huge communication gap when it comes to internet streaming.

Wade: Today there's a slow transition away from people watching live content, my children rarely watch the television and when they do watch it, I don't think they watch anything that's live. They're streaming or they're watching on demand content or they're just sitting in the room where the TV may be, looking at their phone.

I think a big gap we have right now is streaming services. Netflix, YouTube, Hulu, the gaming systems, so Nintendo and Microsoft Xbox. People, I think, get consumed in those things, and the classic thing people say is that it's the kids in the basement who have no idea what's going on.

I think maybe some of them are more connected than we give them credit for. Those devices today can pop a window up on your screen or if you're listening to a podcast, they can insert a commercial or they can insert other messaging into that stream. We would love to work with them to enable them to insert emergency warnings into that also.

We're not there yet.

CREDITS

Twenty Thousand Hertz is produced out of the studios of Defacto Sound. A sound design and mix team that makes advertising, promos, and branded content sound incredible. If you also work in the television, film, game or advertising realm, drop us a note at hi at defactosound dot com. We’d love to hear from you.

This episode was written, produced and edited by Jim McNulty...and me, Dallas Taylor. With help from Sam Schneble. It was sound designed and mixed by Nick Spradlin. Special thanks to Frank Lucia, Kelly Williams at NAB, Wade Witmer from FEMA and the FEMA public affairs team.

The music in this episode is from Musicbed. They represent more than 650 incredible artists, spanning every music genre imaginable. Check them out at musicbed.com. We also have a playlist set up which features all of the music we’ve used at music.20k.org. Go listen to it the next time you’re working at your computer or in transit.

Finally, as always, tell your friends, family, and colleagues about the show. You can also connect with us through Twitter, Facebook, or on our website - 20 kay dot org.

Thanks for listening.

[music out]

Recent Episodes