Episodes

  • Followers of this podcast will remember two central characters from Season 1: Milton Babbitt, the Princeton Music professor and avant-garde composer who was an early devotee of electronic music; and Babbitt’s protégé, Godfrey Winham, a composer whose work at Princeton made it possible for the masses to hear music made on a computer.

    Both men had one partner in common: soprano Bethany Beardslee, one of the great voices of her generation. For Babbitt, Beardslee’s voice brought his compositions to life. Winham married Beardslee, and they had two children before his tragic passing in 1975.

    Beardslee, still alive at age 98, could not be reached for the main Season 1 podcast. But after it aired, we got in touch with her son, Chris, who set up a microphone so we could interview her. In this remarkable interview, she looks back at a time when Babbitt and Godfrey Winham – as well as Beardslee herself – were changing the sound of music. Chris contributed his own memories of his father during the conversation.

    Beardslee, who was born in 1925 in Michigan, is best known for her collaborations with prominent mid-century composers such as Babbitt, Igor Stravinsky, Pierre Boulez, George Perle and Sir Peter Maxwell Davies. She delivered spot-on performances of works by atonal composers such as Arnold Schoenberg, Alban Berg and Anton Webern. Contemporary composers came to rely on her to perform their challenging works.

    “Were there no Bethany Beardslee, she could not have been invented,” Babbitt once said of her. Beardslee’s career spanned the early 1950s to the late 1990s. She received an honorary doctorate from Princeton in 1977.

    A prominent vocalist, Beardslee has given many interviews over the years, and we discuss her career during our conversation. But few prior interviews have focused on her memories of her late husband, whose story we told in Episode 3 of Season 1, “The Converter.”

    Godfrey Winham was the first recipient of a doctorate in musical composition at Princeton. Beyond his advances in music generation software, digital speech synthesis, and the development of reverb for art’s sake, he was also a fascinating character.

  • In this episode, Stanley Jordan does something remarkable: He recreates a lost computer music composition, and premieres it here for the first time.

    Part Pac-Man, part symphony orchestra, this three-minute piece is a testament to Jordan's early musical journey through technical terrain, setting the stage for a career in which he has used technology to create dazzling ear candy.

    “Haydn Seek was a composition I made in 1980 while I was studying computer music at Princeton,” Jordan wrote. “It was spawned from an assignment in a composition class with J.K. Randall, in which we were to take an existing piece of music and compose a new one using something that we liked in the original.”

    Jordan based his composition on “Piano Sonata in A Major” by Franz Joseph Haydn. He was also studying computer music with Paul Lansky during this period, so he decided to make his composition for computer.

    “This was an exciting time because computers were just beginning to be used as musical instruments. At that time computer music was only available at academic institutions and most of the music was very abstract. I loved that stuff, but I was more interested in bringing something new to traditional forms,” such as “Switched-On Bach,” he said.

    Jordan loved how Haydn got so much material out of a few simple patterns. In “Haydn Seek,” he takes Haydn's original themes from the first movement, and expands on them in his own way, using more contemporary harmonies. At the beginning, everything is a condensation of Haydn's main themes, taken exactly as Haydn composed them.

    But then he starts blending in his own new material, growing and expanding until the piece is completely his own, but still related to Haydn's original main theme.

    The original version of “Haydn Seek” was incomplete and the materials were lost, so he recreated it here from memory, completing it using only compositional techniques and harmonic knowledge that he had at the time.

  • Stanley Jordan was about to play The Tonight Show with Johnny Carson, and with seconds to go before cameras rolled, the sound wasn’t coming out of his guitar. His guitar tech was sweating bullets. Was he able to hit his mark? And what lesson did he learn from the experience?

    In this second episode of Season 2 of "Composers & Computers," Jordan talks about his time at Princeton, including his work with two of his mentors, who were big names in the field of electronic music: Milton Babbitt and Paul Lansky. He discusses the time Dizzy Gillespie’s jaw dropped when Jordan took the stage during a concert at Richardson Hall with Benny Carter.

    And he talks about why he went through the tedious process of composing music on a computer at a time when computers didn't easily generate sound.

    “The idea was so thrilling for me, because I had this sound in my head, and I knew that if I could just get the right numbers, create the right code, I knew there was a way to realize that sound,” Jordan said. “So I didn’t mind trudging through the snow at midnight. I think sometimes when something is challenging, I think it’s more meaningful.”

  • Stanley Jordan ’81 grew up in Silicon Valley, making circuits as a kid, watching his father become one of the world's first professional computer programmers. But it wasn’t until Jordan arrived at Princeton that the young musician learned how to fuse his love of music with his fascination with technology.

    In Episode 1 of the new season of “Composers & Computers,” we begin our deep dive into the technology-filled life story of Jordan, who went on to a career as an acclaimed jazz musician. We explore how he was initially drawn to Stanford to work with John Chowning, whose invention of FM synthesis powered Yamaha’s digital keyboards, but through a twist of fate at the admissions office, found himself headed to Princeton instead.

    Chowning himself told Jordan that it was a fortuitous outcome, and Jordan explains why that proved true: at Princeton he met two mentors who would have a major effect on his musical path, Milton Babbitt and Paul Lansky.

    We’ll look at how he developed his trademark two-hand percussive “touch technique” while he was a student at Princeton. And he’ll talk about his time at the Computer Center, including the time he dropped his punch cards on the floor.

  • Stanley Jordan. Innovator of the two-handed touch technique to coax a sweet, percussive sound out of an electric guitar. After graduating from Princeton University in 1981, he went on to become an acclaimed jazz musician known for his guitar pyrotechnics.

    He’s played on the Tonight Show with Johnny Carson. He’s been nominated for four Grammy awards. His debut album, Magic Touch, sat atop the Billboard jazz charts for more than 11 months. He’s played to audiences around the world, crossing genres with virtuosity and a hunger for musical adventure.

    But forty-five years ago, if you were looking for Stanley Jordan on any given midnight, you might just find him in a sub-basement laboratory in the Engineering Quadrangle. Because his passion at the time, the thing that attracted him to Princeton in the first place, was computer music.

    This new, second season of “Composers & Computers” tells the three-part story of Stanley Jordan’s time at Princeton, and how, despite years of touring and building his reputation as a jazz master, he never really stopped being a computer musician.

  • In our epilogue episode, we look at how an engineering professor, Naomi Leonard, is collaborating with dancers to show how birds fly in a flock without bumping into each other; how robots can reflect our humanity back at us; and how other people’s rhythmic movements affect our nervous systems. Engineering faculty at Princeton are increasingly working with artists to create an array of projects, and in the process, shining light on how people use, and perceive, the marvels that engineers create. We wrap up this oral history podcast series with a story about how a computer musician got a late-night call from Stevie Wonder to talk shop, and in the process may have changed the way the music legend thought about digital voice synthesis.

  • Has digital music reached the point of diminishing returns? Has it all been done, and heard, before? At the start of a new millennium, a crew of Princeton engineers and musicians answered this question with a resounding no, building the now-famous Princeton Laptop Orchestra. As a Princeton music grad student in the late 1990s, Dan Trueman worked with his adviser, Perry Cook, on building an unorthodox digital instrument that was played with all the expression of a fiddle but sounded more like a robot. And rather than running the sound through a single speaker pointed at the audience, they created a 360-degree ball of speakers, so that the device had the sonic presence of an acoustic instrument. When Trueman returned as a member of the faculty several years later, he and Cook (who had a joint appointment in engineering and music) set their minds to creating an entire ensemble of those unorthodox instruments. And in the process, they created a whole new genre of music and digital creative expression.

  • Paul Lansky is the most celebrated and musically influential of the computer musicians at Princeton, and it isn’t only because he was famously sampled by Radiohead on their classic album “Kid A.”

    His work expanded the boundaries of computer music and speech synthesis for art into territory far from the art’s musically difficult twelve-tone beginnings. In the words of current Princeton Music Professor Dan Trueman, “He invites you to listen however you want… It’s this place you go and you find your own way.” Or as his former student Frances White said, Lansky was able to bring “computer music into a much more open and beautiful place.”

    This episode is a celebration of the life’s work of Paul Lansky, as well as his collaboration with a Princeton engineer, Ken Steiglitz, that made much of that work possible. We’ll hear a wide sweep of his computer music from throughout his multifaceted career. And we’ll look at Lansky’s work building software, as well as the similar efforts of fellow composer Barry Vercoe, whose CSound technology left a lasting imprint on software that musicians still use today.

  • Imagine using a computer to synthesize music, but not being able to hear it as you built it. That’s how it was in the 1960s: musicians heard what they were composing only in their mind’s ear, until the project, usually riddled with mistakes, was finished and processed at a far-off lab.

    This presented a challenge to the Princeton interdisciplinary team of engineer Ken Steiglitz and composer Godfrey Winham. They worked to build a device that would translate the ones and zeros generated by the IBM into analog sound, the only form of sound human beings can hear. The work they did together represented a watershed in the use of computers as a tool to create music.

    Winham saw the potential of the computer as a musical device, and spent his best years building tools to make the giant machine more user-friendly to musicians. And Steiglitz was uniquely positioned to help Winham realize his vision.

    This episode is the poignant story of their teamwork, as well as of the community of composers that created a wild batch of music on the IBM, music that has been largely forgotten. But we’ve found it, and there are lots of clips of that music in this episode. We’ll take a detailed look at how humans are able to hear digital music. And we’ll explore the amazing story of Godfrey Winham, Princeton’s first recipient of a doctorate in music composition. Beyond his advances in music generation software, digital speech synthesis, and the development of reverb for art’s sake, he was a fascinating character.

  • When the Computer Center opened along with the Engineering Quadrangle at Princeton in 1962, who knew that the Music Department would be one of its biggest users? The composers were there at all hours, punching their cards and running huge jobs overnight on the room-sized, silent IBM 7090. Working without the ability to hear what they were creating, listening only to the music in their minds, these classical music composers managed to synthesize some of the trippiest music you’ll ever hear. But it was also the sound of progress, as they broke new ground in how digital music is created. Some of their advances live on to this day in music synthesis software.

    Much of the music you’ll hear on this episode was created by James K. Randall, the Princeton music professor who is credited with showing the computer’s early promise for creating nuanced music.

  • This episode is the story of what happened when a Princeton composer, who was inspired to create some of the most challenging music ever written, decided it could be most reliably performed by a machine. His work to realize that machine led to the birth of the electronic synthesizer as a device upon which one could compose music. And it led, indirectly, to the digital music revolution.

    The device wasn’t a computer – it was an early analog synthesizer in Manhattan, co-owned by Princeton and Columbia.

    This episode will take you inside Milton Babbitt’s work with his “robot orchestra.” You’ll get to hear the music it made, and how Babbitt and the engineers who built it carved out a path that would lead to digital music as we know it today.

  • A revolution in music happened in the Princeton Engineering Quadrangle, but chances are, you don’t know the story. Sixty years ago, some music-loving computer engineers happened upon some musicians who were enamored with a new computer installed on the third floor.

    The work they did together helped turn the computer – at the time, a hulking, silent machine – into a tool to produce music. Their innovations made it easier to hear that music. That was no mean feat back then. Then they made it possible for a computer to make that music better, more nuanced. And they helped make it possible for computers to synthesize speech. What computers are able to do today to help musicians realize their vision owes a lot to the work done at Princeton.

    Much of this history has been effectively lost, gathering dust in far-off libraries. And the music they made has been largely forgotten as well. Over the five episodes of this series, we will tell that story. You’ll get to meet some fascinating people. And you’ll get to hear that music. It’s the sound of history being made. Engineers thrive on collaborations and intersections. Their collaborations with artists continue to this day.

    We look forward to sharing with you “Composers & Computers,” coming in early May.