Shaping the Future of Music

by Jordan Rudess

Jordan Rudess is a Content Creator, Musician, & Keyboardist of Dream Theater

In the past year, we’ve seen AI seep into almost every industry imaginable. From art to coding, and yes, even music, AI is changing the way we work, regardless of our profession or skill level. As a creator in the music industry, I’m watching, and participating in, the emergence of AI in all aspects of music. From composition to streaming services, everything is in rapid transition because of this technology.

There is no doubt that many people are scared or intimidated by the rapid progression of technologies like generative AI. All of us grew up with some sort of sci-fi fantasy in which ‘the machines’ take over. While this technology will bring about change, it’s not something we should fear. I believe it’s our responsibility to be aware of this change and to help guide the way we use it. From a general point of view, I believe it’s important to use AI to magnify our human experience and our senses, instead of allowing it to dull who we are as people. AI can be a great teacher if used that way!

While every day brings more use cases and applications of AI in the music industry, I’ve decided to highlight three examples of how AI is changing the future of music, and how, as an artist, I’m fully jumping on board and testing some of them for myself. From the initial brainstorming stage, to basic sound design, all the way to fine-tuning certain sounds to achieve a desired result, I believe AI will open up a whole world of new possibilities. Let’s explore these three use cases so you can get a better understanding of what I mean.

As we know, all musicians draw inspiration from artists they look up to and connect with. Whether that’s someone from a previous decade or an entirely different genre of music, we don’t exist in a vacuum, which means we’re constantly influenced by the things we see and hear. Sometimes this isn’t a conscious process, but for musicians it largely is. I consider the keyboardists Keith Emerson, Tony Banks, Rick Wakeman, and Patrick Moraz to be some of my biggest inspirations, while my favorite bands and artists include Gentle Giant, Yes, Genesis, Pink Floyd, Emerson, Lake & Palmer, King Crimson, Jimi Hendrix, Autechre, and Aphex Twin.

When writing new music and brainstorming ideas, I try to think as creatively and innovatively as possible. This has led me to one of my current projects. Suppose I want to use AI as a tool to assist me in writing a piece that mixes the styles of the Polish composer Frédéric Chopin, the iconic (and more modern) Beatles, and the German composer Johann Sebastian Bach. This mashup combines genres and spans three different centuries, which I hope will result in an entirely new sound. I know that I’d like measures 43 through 50 to reflect a contrapuntal idea, meaning those measures will carry two or more independent melodic lines. To help me achieve this, I can ask an AI program, such as ChatGPT, to offer ideas in that style and then learn from them, thereby growing my own musical universe with the help of the information AI provides.
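
To make this concrete, here’s a minimal sketch of what that kind of request could look like in code, using the OpenAI Python SDK. The model name and prompt wording are my own illustrative assumptions, not a fixed recipe; the point is simply that you can describe a stylistic mashup and a structural constraint in plain language and learn from what comes back.

```python
# A minimal sketch of the brainstorming request described above, using the
# OpenAI Python SDK. Model name and prompt wording are illustrative
# assumptions, not a prescribed workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I'm writing a piece that blends the styles of Chopin, the Beatles, "
    "and J.S. Bach. For measures 43 through 50 I want a contrapuntal "
    "passage with two independent melodic lines. Suggest a harmonic "
    "outline and describe the contour of each line, so I can develop "
    "the idea myself at the keyboard."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```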

To piggyback off the idea of using generative AI to brainstorm ideas that mix existing styles and genres of music, AI can go so far as to help musicians with the more basic elements of sound design. This is a use case artists are already toying around with, and we’re seeing some exciting results. Let’s say, for example, I’m in the studio and I’d like to create a sound that morphs between a piano and an accordion. The right (i.e., properly trained) AI could help me decide over what amount of time I’d like my track to evolve from one sound to the next. Not only does this open up endless new possibilities in the world of sound design, but it also helps artists streamline their process and cut down on the time it would normally take to create such an effect.
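
As a toy illustration of where that “amount of time” parameter fits, here’s a simple linear crossfade between two recorded sounds, written in Python with NumPy. A real AI-assisted morph would interpolate timbre rather than just volume, so treat this as a sketch of the knob, not the technique itself.

```python
# A toy illustration of the "morph over time" idea: a linear crossfade
# between two recorded sounds over a chosen duration. A true timbral
# morph would do far more than blend volumes; this only shows where the
# time parameter fits in.
import numpy as np

def crossfade(piano: np.ndarray, accordion: np.ndarray,
              sample_rate: int = 44100, morph_seconds: float = 4.0) -> np.ndarray:
    """Blend from `piano` into `accordion` over `morph_seconds`."""
    n = min(len(piano), len(accordion))
    # Ramp from 0.0 to 1.0 over the chosen morph time, then hold at 1.0.
    fade = np.clip(np.arange(n) / (sample_rate * morph_seconds), 0.0, 1.0)
    return (1.0 - fade) * piano[:n] + fade * accordion[:n]
```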

Another interesting use case, in a related but different field, is using music as a tool for meditation and using AI to make that a much more personal experience. Think about it: meditation is already an incredibly personal practice, yet most of the time we’re limited to preset playlists and the range of a few instruments. I believe that with AI we can fine-tune the sounds each individual needs to hear to achieve the results they’re looking for. AI will give us the ability to dial in the exact tempo, length, and instrumentation needed to create a custom track generated explicitly for you and your intended goals. In this way, AI can not only streamline processes, but also further enable music to help humanity.
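
As a heavily simplified sketch of what “tempo, length, and instrumentation as parameters” might mean, here’s a tiny Python generator for a meditation drone whose loudness swells at a chosen pace for a chosen duration. The parameter values (including the pitch) are placeholders for what an AI system might select per listener, not anything an existing product does.

```python
# A heavily simplified sketch of a "personalized" meditation track: a sine
# drone whose amplitude pulses at a chosen tempo for a chosen length. The
# parameters stand in for what an AI system might select per listener.
import numpy as np

def meditation_drone(minutes: float = 5.0, bpm: float = 50.0,
                     pitch_hz: float = 136.1, sample_rate: int = 44100) -> np.ndarray:
    """Return a mono drone whose loudness pulses at `bpm` for `minutes`."""
    t = np.arange(int(minutes * 60 * sample_rate)) / sample_rate
    tone = np.sin(2 * np.pi * pitch_hz * t)                      # steady drone pitch
    swell = 0.75 + 0.25 * np.sin(2 * np.pi * (bpm / 60.0) * t)   # pulse at the chosen tempo
    return 0.2 * tone * swell                                    # scale down for headroom
```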

Overall, I believe the introduction of generative AI is a wonderful advancement and, who knows, potentially one of the most influential innovations of our century. I find the limitless potential creatively stimulating, and I’m looking forward to embracing the opportunity to change the future of music. The horse is already out of the barn, so to speak; with no way to stop the progress of AI, I’d much rather go along for the ride and take an active part in shaping what is yet to come!
