Hillbrook School Podcast
Intentional growth of educators at Hillbrook and beyond

S7E7 - Melody and Machine: AI's Musical Impact

Transcript
Speaker A:

Well, hello and welcome back to the Hillbrook School podcast. My name is Bill Selak. I'm director of technology here, and I'm here with the one Derek Primo Silverman, the only.

Speaker B:

I am the music and the Derek. Some other things at the school.

Speaker A:

Such a funny intro.

Speaker B:

Yeah.

Speaker A:

How are you?

Speaker B:

Good. It's, you know, crazy week of getting ready for all these different shows and. Yeah, lots of time. Yeah.

Speaker A:

So speaking of shows, the winter concert. We had some tracks the kids sang along with.

Speaker B:

Yes.

Speaker A:

But those tracks that we heard were not the original tracks.

Speaker B:

Yes. You're talking about the show choir. I am, yes. So the show choir, you want to know where those tracks came from?

Speaker A:

I do. And I want to know how AI.

Speaker B:

Yes.

Speaker A:

Helped you with the thing.

Speaker B:

There definitely was some AI that helped in that one. So the show choir show is all arranged by me this year. They're all popular songs, and we did have professional studio musicians in Los Angeles track the stuff and send it over. But in the midst of all that, there's always these unknown things that occur. Yes.

Speaker A:

As they often are with middle school choirs.

Speaker B:

Yeah. And so strange little tempo manipulations that were needed and all sorts of things like that. And so after we got all that together, we actually did a recording with the choir just this week for them to continue to practice through the season. And that's where the AI really jumped in. I'm a big fan of the products from the company iZotope, and RX 8 is the current iteration of their audio manipulation software.

Speaker A:

And this is where, just to warn everybody, we're going a little bit deep down this rabbit hole, because this is a really cool example of how AI can impact. You're like, oh, I don't know. It feels like such a trite thing to say, be like, AI.

Speaker B:

Yeah. We're using AI everywhere in the world now. But in this one, check this out. This is like one of those places. I think about what robotics have done in surgery, where you can go in and they talk about how the surgeon's articulation of their cutting can have a 360-degree functional range, and they can have cameras in every direction. It's just things that we can't perceive as humans; we're able to use AI and computers to assist us at that level. And so now in audio, we can do that. I can go in and use an AI-driven plugin that just lets me get rid of all of the plosives off of everything. So when the kids sing "pay attention" and it's way too loud, the AI is just like, I know what that sound is, I'm going to soften that slightly. And it's crazy that we can do that. Or the fact that I just recorded the choir in a room with their track playing, and I just, again, used other products by the same company to remove all of the instrumental parts.

Speaker A:

Wait, so they're in a room singing?

Speaker B:

Yes.

Speaker A:

With the instrumental tracks. So when you hit record, the recording is of them with the instrumental tracks?

Speaker B:

Yes, that's right.

Speaker A:

And then you just run it through and say remove.

Speaker B:

And I just run it through this little bit of software, and it says, what do you want to keep? Vocals, drums, bass, keys? And I just say, I only want the vocals, and it cuts everything else out as if I had multitracked it.
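What these ML-based separators do under the hood (learning what a voice sounds like from spectrograms) is hard to show in a few lines. But the crude, pre-AI ancestor of "remove the vocals" is worth a sketch: center-channel cancellation, which works only because lead vocals are usually mixed dead-center. This toy Python example uses made-up sample values rather than real audio, and is an illustration of the idea, not of iZotope's actual method:

```python
# Center-channel cancellation: the crude, pre-AI "vocal remover."
# Modern ML separators learn what a voice *sounds like*; this trick
# only exploits *where* the voice is panned in the stereo field.

def remove_center(left, right):
    """Subtract the channels; anything panned dead-center cancels out."""
    return [(l - r) / 2 for l, r in zip(left, right)]

# Toy "mix" (synthetic numbers, not real audio): the vocal is
# identical in both channels, the bass is hard-panned left.
vocal = [0.5, -0.5, 0.5, -0.5]
bass = [0.2, 0.2, -0.2, -0.2]
left = [v + b for v, b in zip(vocal, bass)]
right = list(vocal)

instrumental = remove_center(left, right)
print(instrumental)  # the vocal cancels; half the bass survives
```

The limitation is visible in the output: anything else mixed to the center (kick, snare, bass in most modern mixes) would cancel too, which is why learned separation was such a leap.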

Speaker A:

Can you do this with other things? Like, I just watched the Thriller 40th-anniversary documentary.

Speaker B:

Okay.

Speaker A:

And so we get the isolated tracks, but it was them really, like, on two-inch tape, hitting solo and you hear the original drums, and then solo and you hear just the bass, like, from that two-inch tape reel. Reel-to-reel tape, for those playing along at home. This is not that. This is actually like the.

Speaker B:

Yeah, this is taking the final product, that final mixdown, where everything is just making one waveform altogether, and being able to just tell the AI, I want you to extract the sound of just the human voices. And it works, and it sounds good. It's crazy. Actually, last year our birthday song benefited from this software, because there were other vocals on that birthday song, but we wanted to record our own. And so I went in and did the same thing. This time it was the opposite. I zeroed out the vocals, and we got ourselves a nice instrumental track of the song we wanted. And then we had our own teachers here record the vocals for it. So that way it's more our Hillbrook style.

Speaker A:

Wait, so I listened to that every Friday morning at our all-school assembly, where I sang to every human at Hillbrook. And that original track had vocals in it? The one that's our Hillbrook birthday song?

Speaker B:

Nope.

Speaker A:

So you ran it through and said, get rid of the vocals. And then you're like, hey, Jamie and Vanessa, let's sing over this. And you did. And now we have our own. And we could. Theoretically, this feels like a silly question. I could do that for other songs.

Speaker B:

Oh, fully.

Speaker A:

Or I could be like, if we go back to thriller, I want to hear the bass line of Billie Jean.

Speaker B:

Oh, yeah.

Speaker A:

And it can grab that bass line.

Speaker B:

Totally.

Speaker A:

And I can listen to just the bass line.

Speaker B:

Just the bass line. And it doesn't sound off. Like all manipulations in these ways, you need to give it some parameters. It's always funny to me when it asks what level you want the pitch integrity at. Why would I want less pitch integrity? But, yeah, sometimes I have to redo the processing a couple of times to get the settings to really make it sound right. But generally, even on a first pass with the prefab settings, the stuff is clear.

Speaker A:

That's really cool. Yeah, I mean, I know a lot of when we talk about the arts and AI, a lot of people jump to image generation.

Speaker B:

Yes.

Speaker A:

Right. So, as of this recording in January 2024, it feels like 2024 is the future. Maybe we are living in the future, Derek.

Speaker B:

I think we are. I know when I was getting into first grade, the idea of a world where I can sit here on a bench outside and record into a digital system completely on a battery pack while doing all these things, this would have been, like, beyond sci-fi, unfathomable. Yeah. Right. That I'm going to send gigs of data through the air and not worry about it. Yeah, it does feel like the future. And this is one of those other places. Like, through quarantine, I really started to see some of the places where these AI tools were so useful.

Speaker A:

And this is before we think of DALL·E, like, the image generation before Midjourney, the.

Speaker B:

Yeah, you know, this is one of those things that, and this is why I think it's interesting, is that some of these audio software tools, the companies weren't outright just saying, hey, everybody, AI powered, because I think we both know five years ago, AI powered was still kind of a questionable term.

Speaker A:

I don't know that it meant anything to anyone.

Speaker B:

Yeah, it was like kids were talking about the quality of the AI in the enemies they're fighting in a computer game.

Speaker A:

No, no, that's an algorithm. There's no AI. Very specific. If this, then that direction.

Speaker B:

Yeah. And so this is one of those places where companies like isotope started to take their logarithms and let the AI have say in what they are doing. It was in the last ten years that these tools have really started to evolve and develop. And now it's the place where for a couple of go get yourself a plugin that will completely remove digital distortion. Like, we had a project during quarantine that was part of the inauguration day celebrations, and we had singers from all over the country. There was 150 people involved in this project who were submitting videos of themselves dancing to the choreography of this project, as well as singing. And most of it was recorded on.

Speaker A:

Phones, which look great and sound great sometimes.

Speaker B:

Yeah, exactly. Usually. But there's some of those people who were, like, in their bathroom screaming, and I get a recording that was completely distorted, and I was just blown away again, just like, isolating some of these things, recordings that in 2004, I would have just said, this is trash. Do it again. There's no way we can do anything with this. I put on the remove distortion tool.

Speaker A:

It worked.

Speaker B:

And all of a sudden, I'm hearing this beautiful, clear irish tenor voice coming through when before it was just static. And I just remember sitting there, I had to run into the living room and tell my wife, like, oh, my God, I just fixed that file. It's working perfect. And she's looking me and going, wait, the one that sounded like garbage? Yeah, the garbage file is sounding great now. It's a solo that we're going to use in the recording. And so these kinds of tools, they really have changed what we can think about as possible. And it's not even really getting into the generative aspect of AI because that's where we so much, like, Chat GPT and these things where we're looking at, oh, they're generating this text, they're generating these images, they're manipulating things in these ways, but in audio, it's like we have so many different ways that we can be thinking about it well.

Speaker A:

And so I wonder how this might impact students. I'm going to give you an example, maybe younger kids, and then you can walk me through how our older students might be able to do it. One of my favorite projects that Emily Hendrix, who is our tech support specialist, did with first grade is they used to build a whole city out of cardboard. Inevitably, it was about this time of year, in January, that cardboard city would be left outside because it was massive and would be completely destroyed. So it became a semi digital version, smaller, physical thing with QR codes. And then when you scanned a QR code, two things would happen. She worked with each first grader to record a piece of music that would accompany the theme of that place. So if you're a supermarket, right, what music do you hear? And then also do some Foley sounds. We brought in a Foley artist who walked us through the most memorable part of that. I remember in the multipurpose room, she was showing us how stormtroopers were exiting Kylo Ren's landing pod thing, the opening scene. And she had, like, shoulder pads and a helmet and a couple of other random things. And we were, like, watching a ten second clip, and she was, like, shaking it in front of a microphone, just like, that's the sound, right? And so we took that, and then the first graders made their own Foley sounds. Remember the time I had her first grader, and she figured out how to get, like, kind of a squeaky wheel and then just kind of clunking, clunking, clunking along the way? And so you could do that. You could mix it, and at one point, all of those sounds got together, and it was a horrific cacophony of 20 pieces of music and 20 sound effects together. But it was like, wow, this is the whole city. Which kind of symbolically was really interesting. So tools like that, like being able to put them in the hands of a first grader, I would imagine it's only going to get quicker, easier, and do more complex things. 
Well, I wonder what comes to mind for you.

Speaker B:

Well, so, actually, in talking about this concept of a soundscape, this is actually a thing that comes into play, that there's civil engineers who are thinking about this concept, as well as composers. I worked with a composer from Mexico for a while where we had done this project that was composing using the graphical analytics of sounds in nature. And so there actually had been audio recordings obtained from the. Was it the campfire that destroyed the town of paradise?

Speaker A:

Wow.

Speaker B:

And there was recordings of these fires, and then those recordings were put through some analysis, and using that data, we were then composing pieces based on the fires.

Speaker A:

Wow.

Speaker B:

And one of the things that this composer showed me was this concept of a full soundscape. And so that's the idea of taking, like, an ambient recording of the corner of Santa Clara and Market street in downtown San Jose and leaving your recording device out for 24 hours to record all of the sounds of the city in that location, and then to compile this into a fashion where we can look at the overall waves happening. What is the spectral analysis of downtown San Jose?

Speaker A:

Oh, interesting.

Speaker B:

And there's actually a lot of research into how understanding this is beneficial for our mental health. And these are things where hopefully AI could begin to assist us in having better well being in life, is that we have a lot of evidence that as great as the leaf blower is over there, that certain frequencies in that way, over and over are distressing for our subconscious. And so being aware of these things in a health sense is really an interesting thing. And so this concept of using recordings with AI behind them to actually recognize, is this a healthy sound environment?

Speaker A:

Spoiler alert. Probably not.

Speaker B:

Yeah. Well, and this is one of those things, I think, about our campus here, and I think we're pretty lucky that the worst it gets is an electric leaf blower off in the distance when we definitely know there are places in this just within 20 miles of us, that the frequencies are so clouded on every direction and that with the growing knowledge that consistent sounds below 50 actually be distressing for people. And same thing, over 10,000, it really makes you start to wonder, what are we doing to ourselves? What is our overall well being going to be if we're just constantly saturating ourselves with distressing sounds?

Speaker A:

Well, and what you're getting into that I'm going to name right here, is that a lot of people think of audio and music as this very specific thing, like 7th graders singing. But what we're actually talking about now is all of these connections, if you have that kind of as a foundation or as an expertise or even just dabbling. As my 7th grade year, I did this thing to be able to take that and then apply it to being a medical professional and well being, looking at being a civil engineer and being able to take those audio analyses and applying it in different ways. I mean, just being a composer and looking, partnering with a civil engineer and coming up with music based on that. There's so many things you get when you get this really interesting music. Plus something else, right?

Speaker B:

Yeah. Because vibrations and sound are just part of existence. They're just phenomenon. And so always that whole argument of what is music? Organized sound. Well, who said it's not organized in what way? And all these things. And so that's one of the places where I always get into so many kind of conversations with people about this stuff is what is it doing for us? What is the point of it? Actually, we just had an 8th grader say, well, what's the point of being in a musical? Well, it's different for each one of us. And that's.

Speaker A:

There is no point. Just kidding. There is.

Speaker B:

Yeah. Well, it's infinite in both directions. Right?

Speaker A:

Right.

Speaker B:

This is infinitely useless or useful, depending on who you are and what your perspective is on it. And I had a student yesterday who had gotten some music equipment for Christmas, and I was helping him with that, and we started talking about what goes into audio. And he got this little pa. And just explaining what an equalizer does. Yeah. And what that's manipulating. And then actually, in this situation, I said, well, since we're talking about EQ, I'm going to show you what happens if I use this little AI robot that's going to go in here and it's going to EQ this track for us. And he was like, wait, it'll just do it. How does it hear it? And I'm like, well, that's a complicated answer. Does it hear it? No, it's interpreted like the digital information. But yes, it's a very interesting thing that the AI could come back and we could tell the AI, well, I want to boost the warmth of this voice. I want to boost the brightness and the sibilance of this voice. And it wasn't just simply like, I'm going to turn up a couple frequencies. It was actually analyzing the recording and saying, wow, this is a very low human voice. And I can tell that the frequency range of the consonants are around here, and this one's really sticking out, and I'm going to balance. And after we looked at it, it's like, well, the AI just did, like 3 hours of work for us in 15 seconds.

Speaker A:

Yeah. And if we look at trying to make audio production specifically more accessible to students, there used to be a steep learning curve. Hours and years, decades of expertise building on it. And so I remember one of my first jobs right out of college, before I was in education, was working in a recording studio. And some of my recordings would be, like, stellar. And every once in a while, particularly if it's, I have like $200 and want to record five songs, there's only so much we can do with studio time in that amount of time. It's going to be rushed and sound good, so kind of that is a given. But also, I was letting myself off the hook a bit. Sometimes it just wouldn't sound great. And part of it is that I didn't have the expertise yet to know, like, with this person. This microphone works? Yes, with this person.

Speaker B:

Exactly.

Speaker A:

With this guitar amp, this microphone in this location.

Speaker B:

Well, and we were going back to an era where you couldn't just even log on to a forum and immediately say, hey, what's the best microphone for this?

Speaker A:

Oh, for sure. This was before that.

Speaker B:

Yeah, exactly. And so this is like the weird way this has moved so rapidly that it went from this almost learn by rote. Like, these audio engineers were treating this like almost a religion, that you had to come in and pray by their altar for nine years before they allowed you to know how the compressor really worked.

Speaker A:

It was really like, I was an intern at this recording studio for two years, basically, and got paid in studio time and could just, during the day, I could just watch them work. And that's how you learned being an apprentice.

Speaker B:

Yeah, exactly. Just the fact that we started to have the social ability to just log in somewhere and say, hey, guys, what's the best kind of microphone for a high voice? And then, of course, everybody disagrees, but you can see that what other people are thinking, rather than having your one mentor who you kind of were hoping was the guy who actually knew what was happening. And then we get to this place where now these AI have all that information programmed into them. And so it's like, what's the point? And obviously the expertise and the knowledge to be able to do it and control it on our end, that's always the big thing, right? When we start to say, well, what's the difference between an AI generated composition and a human composition? In the end, the AI, they still don't have that unity of vision where as a human, we can look at it and say, I want to create this. The AI, they're just kind of using what they know, which is based on how little we know how to teach them.

Speaker A:

Yeah, well, in generative AI, it's really just, here's a massive data set, and it's just guessing the next character to give you.

Speaker B:

Yeah. And it's just really well made guesses.

Speaker A:

It's gotten very good at that.

Speaker B:

Yeah. And so it's a very interesting thing to look at how this can, for an educational situation, allow students to get around the learning curve. It doesn't replace the vast experience and knowledge and dedication that it takes to really master the craft, but it does mean that a student can walk in and use something like garageband with a couple additional plugins to be able to create some really amazing music in a really relatively limited amount of time and space. And hopefully that's something that would be inspiring enough to make that student feel a desire to actually pursue this to a place of not just fun, but understanding well.

Speaker A:

And if we put our teacher hats on for a bit and start to throw some teacher lingo at people as we start to scaffold those learning experiences, it used to be you'd have to either really give them a tightly controlled step by step thing together, or we build a thing together, or let me kind of fill out most of a template and then share it with you. And then students would do that, but there was less ownership. And so now that barrier to, wow, this is my own thing. That sounds good. It's so much lower. And so what's so exciting about that, and I would say to a lesser extent, to all types of content areas, is that the ability to go from, like, I don't really know a thing to wow look at this thing I just made is getting faster and easier. And so much of what we do with students is shuttle them from these made up things called a schedule, go from this to this to this to this. And that asks a lot of students. And it's also really hard to dive deep into projects. And so if we can get things done faster, it's going to let students feel more success. And then those students that might have been like, I don't know, this music thing might have them go, oh, look what I just made. Look what I just made. And then they go on to become medical professionals, civil engineers, all those other things. And they have a richer, more kind of I own it foundation.

Speaker B:

I would actually say that with the 6th grade class this last quarter, when we got into their music unit, we kind of had that experience. We started out just looking at percussion, and we just talked about percussion around the world. Basically. We looked at the students who were in the room and said, where is your heritage in the world? And let's look at what kind of percussion happens to that place. And of course, there's percussion music with every culture in the world. So we got an interesting mix of things. Then we started into a project of just kind of doing some reading music, and we were using just buckets and pencils and just making our own little bucket beats as a class. Then from there, once they had some understanding of the grammar of the language and such, I put them on the task of composing their own bucket ensembles. And so they did that. And then the next evolution of this was we then took garageband out on their iPads and we went over sampling, and everybody was tasked with creating some of their own samples. And then we went over creating a beat. And so then we were utilizing sampling and beats, and before we knew it, in about two weeks time, all these kids were very passionate about their composition and the ownership that was happening. As these kids, they could not wait to share what they had. And when we didn't have enough time to get through all of them, they said, well, then, we need to finish that tomorrow. Wow. Okay. People didn't care about when it was. I remember similar projects in school, but it was like you say, dictated where it was, you will do this, you will do that, it will happen like this. And this is the assignment. We all felt like we were just being walked through some exercise, and that's.

Speaker A:

All it felt like being done to you.

Speaker B:

Exactly, yes. And watching the ownership happen. And then I get these kids, they're still emailing me, this class is over. Now it ended, and they're still emailing me revisions of these songs. Yes. So, yeah, that's the thing where these tools, as it makes it more and more accessible to the students, we get a chance to see that inspiration. And that's really, I think, where even in a learning area with music, AI lets us move the students in a way that is still energetic and creative rather than monotonous and drudgerous.

Speaker A:

Yes, there it is. All right, so I got to head to carpal duty in a minute. I want to know, for people that are listening, one thing that you can say, like, they can jump into this, the one that you just reminded me of that this will be my share, is an app called Loopy HD. So if you don't want to jump into garageBand, that feels like a bit much. Loopy is a great way to make your own. Tap on a bucket, tap on a cardboard box, loop it, and you can just build on that and it becomes a really surprisingly simple and visual way to make music, particularly if you've not done this professionally. How about you?

Speaker B:

I just keep coming. I can't say enough about what GarageBand is as an educational tool. I have watched it be successful recording production software, midi production software. I've had just every aspect of it. It works so fluidly and on so many different devices that I just keep using it.

Speaker A:

It's phenomenal. I always thought about it as a primarily macOS piece of software, but GarageBand on iPad is phenomenal.

Speaker B:

Oh, yeah.

Speaker A:

I've begun actually using GarageBand on my phone for my own podcast, including intro music, outro music all.

Speaker B:

Yeah. And just as a pocket recording tool, it's superior to any other recording device. And then, of course, we can be able to implement these plugins from a company like isotope.

Speaker A:

It's exciting, exciting times. All right, it's time for me to head to carpool duty. Derek, thanks for joining us.

Speaker B:

Yeah.

Speaker A:

And we should be looking for.

Speaker B:

Oh, man, we got too many things going on. We got 13 with the musical at the high school, beauty and the beast, junior with the middle school. It's all going on.

Speaker A:

That's awesome. Well, great. Thanks so much for joining us.

Speaker B:

Yeah, thank.

Episode Notes -

In this melodic episode of the Hillbrook School Podcast, host Bill Selak sits down with the multifaceted Derek Primo Silverman, music and "other things" maestro at Hillbrook. The conversation strikes a harmonious blend of music, technology, and education as they delve into the intricacies of the Winter Concert and the role AI played in refining the show choir's performance. Derek shares the behind-the-scenes magic of arranging popular songs, the unexpected tempo tweaks, and the innovative use of AI-driven audio manipulation software from iZotope to polish the choir's recordings.

The duo also explores the broader implications of AI in audio, from enhancing student creativity to its potential for improving urban soundscapes and overall well-being. As they discuss the transformative power of tools like GarageBand and Loopy HD, the episode resonates with the excitement of making music production more accessible and inspiring for students.

Tune in for an enlightening symphony of ideas, illustrating how AI is not just changing the future of music but also amplifying the educational experience at Hillbrook School. Listen in as Bill and Derek riff on the future of sound, and don't miss the buzz about upcoming musicals "13" and "Beauty and the Beast Junior."