Building on a 30-year legacy that began with its groundbreaking Wavestation synthesizer, Korg has announced the new Wavestate Wave Sequencing Synthesizer. To learn more about this exciting new synth, we reached out to Dan Phillips, Manager of Product Development for Korg R&D.

The HUB: For someone who’s completely new to the idea of Wave Sequencing and Vector Synthesis, what is it and why would someone want to use it? What type of application does it have?

Dan Phillips: Wave Sequencing and Vector synthesis are both about transforming raw materials (waveforms or samples) into animated, evolving sounds which change over time.

Vector Synthesis starts with four sounds, conceptually laid out as the four points of a diamond shape (like the points on a compass). You can then use an x/y joystick and/or a two-dimensional "Vector Envelope" to fade between the four sounds in real-time. Each note has a separate Vector Envelope, and can be playing a different mix of the sounds; it’s more complex than simply controlling four faders on a mixer.
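To make the geometry a bit more concrete, here is a minimal Python sketch of one possible mixing law – purely illustrative, with invented names, and not how the Prophet VS or Wavestate actually computes the blend:

```python
# Illustrative sketch only: one simple way to turn a joystick position
# into four mix weights. The layout and the mixing law are assumptions,
# not Korg's implementation.
#
# The four sounds sit at the compass points of a diamond:
#   A = west (-1, 0), B = north (0, +1), C = east (+1, 0), D = south (0, -1)

CORNERS = {"A": (-1.0, 0.0), "B": (0.0, 1.0), "C": (1.0, 0.0), "D": (0.0, -1.0)}

def vector_mix(x, y):
    """Map a joystick position (x, y in -1..+1) to normalized mix weights."""
    raw = {}
    for name, (cx, cy) in CORNERS.items():
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        raw[name] = max(0.0, 1.0 - dist / 2.0)   # closer corner -> louder
    total = sum(raw.values())
    return {name: level / total for name, level in raw.items()}

print(vector_mix(0.0, 0.0))    # center: an equal blend of all four sounds
print(vector_mix(-1.0, 0.0))   # hard left: sound A dominates the mix
```

A two-dimensional Vector Envelope simply drives x and y over time instead of (or in addition to) the joystick, and each note runs its own copy of that envelope.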

Wave Sequences play a series of different samples over time, crossfading (or optionally just switching) from one to the next. So, as you hold down a note, the sound changes over time. If the crossfades are short or abrupt, the transitions between the samples create a rhythmic phrase. If the crossfades are longer, Wave Sequences can produce complex, evolving timbres, such as rich pads or leads. As with Vector Synthesis, all of this happens on a note-by-note basis, so each note can be playing something different.
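As a rough illustration, a wave sequence can be pictured as a list of steps, each naming a sample plus how long to hold it and how long to crossfade into the next step. The structure below is hypothetical – invented field names, not Korg's data format – but it shows how short or zero crossfades lean rhythmic while long ones lean toward evolving pads:

```python
# Hypothetical sketch of a wave sequence as data (names invented):
# each step is a sample, a hold time, and a crossfade time into the next step.

from dataclasses import dataclass

@dataclass
class Step:
    sample: str          # which sample or multisample this step plays
    duration_ms: float   # how long the step lasts
    xfade_ms: float      # crossfade time into the following step

# Abrupt transitions: the changes themselves form a rhythmic phrase.
rhythmic_seq = [
    Step("kick_hit",  duration_ms=250, xfade_ms=0),
    Step("vox_chop",  duration_ms=250, xfade_ms=0),
    Step("noise_hat", duration_ms=125, xfade_ms=0),
]

# Long crossfades: a slowly morphing, evolving pad timbre.
pad_seq = [
    Step("glass_pad",   duration_ms=2000, xfade_ms=1500),
    Step("choir_ahh",   duration_ms=2000, xfade_ms=1500),
    Step("bowed_metal", duration_ms=2000, xfade_ms=1500),
]
```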

Generally speaking, Vector Synthesis is good for creating simple motion effects, with easy physical control from the joystick; Wave Sequences are good for more complex, sophisticated creations (and with the Wavestate, for the first time, we've enabled direct physical control of them, as well). The two can be used together, as they are in the Wavestate and as they were in the original Wavestation. For instance, you can use Vector Synthesis to fade between four different Wave Sequences. 

Sound-wise, rhythmic patterns, pads, arpeggios, and soundscapes are some of the obvious applications. In general, both Vector Synthesis and Wave Sequences are well-suited to any sound that can benefit from organic, controllable motion – and that includes leads, orchestral sounds, and other categories that you might not expect.

Keyboardist Mikael Jorgensen demos the new Korg Wavestate.

The HUB: Take us back to the dawn of Wave Sequencing: Wavestation. Where did it come from? What was the inspiration?

DP: I think the story starts with the Sequential Circuits Prophet VS, released in 1986. The VS was the first instrument to feature "vector synthesis." John Bowen, who created all of the factory sounds for the VS, had the idea to move the vector so that one of the waveforms was faded to silence, change that waveform, and then fade it back in. It turned out that this wasn't practical on the VS, but he held on to the concept, which he termed "Wave Sequencing."

Shortly thereafter, Sequential Circuits fell on hard times, and in 1988 Yamaha purchased the company. Some of Sequential Circuits' developers continued to work for Yamaha, including (among others) founder Dave Smith and two key people from the VS project: Scott Peterson and John Bowen. Fairly soon afterwards, the ex-Sequential team was transferred from Yamaha to Korg, and the Korg R&D group was created.

At some point, new engineers also joined the team, including several from Ensoniq. Ever since, we've been Korg's engineering group in California, working with the engineering groups at Korg Inc.'s head office in Tokyo.

Korg R&D's first project was to implement John's Wave Sequencing idea, with samples instead of the VS's single-cycle waveforms, and using an oscillator/filter/effects chipset that Korg had already developed. That was the Wavestation, of course. 

After the Wavestation product line, which included Wavestation SR and Wavestation A/D, Korg R&D did the original legendary-but-unreleased OASYS project, the 1212 I/O (the first affordable multi-channel I/O card), and the OASYS PCI. We then worked with Korg Inc. in Japan on the legendary-and-actually-released OASYS workstation and the Kronos. We've also created fundamental technologies used in many other Korg products, most recently including the Grandstage and VOX Continental.

The HUB: Were you surprised by how Wavestation was utilized? It turned up in everything from “Nightly News” themes and segments, to movie soundtracks, to arguably its most heard use, the startup sound for Apple’s Mac computers.

DP: It was great to hear the synth in so many different contexts! I remember sitting in the theater watching "Buffy the Vampire Slayer" (the movie, before the television show), and it seemed like half of the score was a single note on a Wavestation. And it worked! 

The HUB: While the original Wavestation wound down in the mid ‘90s, the technology never really went away and popped up years later on the OASYS. Can you tell us a little bit about that?

DP: When we started to work on the new OASYS in 2001, we wanted the instrument to be capable of all kinds of synthesis methods: sample playback and virtual analog of course, physical modeling, electro-acoustic stuff like drawbar organs, FM, etc. Wave Sequencing was already a signature of Korg R&D, so of course we had to include that too!

With the OASYS, I worked with software engineer Bill Jenkins to make a bunch of improvements to the Wave Sequencing concept. In the original Wavestation, tempo support was an afterthought; the idea of rhythmic wave sequences was something that emerged during the voicing process. In the OASYS, we built tempo into the design from the start. We also added step sequencer value outputs, so that the wave sequence could control other parameters in the sound engine. There were other improvements, such as crossfade shapes, modulation of overall duration, etc. And, of course, we had a much larger palette of samples to work with!

The HUB: Korg released Wavestation as a software plug-in, and later as an iOS app. Were you involved with those projects at all?

DP: The plug-in and iOS app are cool products! The app team at Korg Inc. in Japan created them, though they did ask Korg R&D for a little advice from time to time.

The HUB: How did the Wavestate project start?

DP: A couple of years ago, we started to talk about making instruments with a unique “Korg R&D” perspective. Korg’s President Seiki Kato suggested a “new Wavestation” as inspiration. The Wavestation was the first synthesizer I worked on at Korg, and it’s dear to my heart, so that seemed like a great place to start! 

We then thought about what a Wavestation would be in the 21st century. Clearly it had to sound gorgeous. It had to be truly unique, to do things that no other instrument could do, including the original Wavestation. It had to be “re-imagined” rather than a simple re-release. It should make wave sequencing more approachable, more immediate, and more fun than the slightly spreadsheet-like approach of previous implementations. The Wavestate, and Wave Sequencing 2.0, was the result of that process.

Korg Wavestate Synthesizer

The HUB: What did you want to accomplish with Wave Sequencing 2.0?

DP: Original Wave Sequencing can create all sorts of great sounds, but there were a few things that we thought could be improved. Even though original Wave Sequences are often beautiful and complex, they tend to play and repeat the same way. We wanted to create sounds that would be less repetitive, more organic. We also wanted the Wavestate to be much deeper and more flexible than the original instrument, while simultaneously being much more physically-oriented and easy to use. It's a hardware instrument, and we wanted to take advantage of what hardware can offer!

To those ends, the knobby interface (including the Mod Knobs), the Lane presets, and randomization all make it easy to get entirely new sounds without even thinking about individual Steps. And then, we also made it easy to work with Steps, with front-panel buttons for step selection and soloing, and all parameters for each Step shown on a single screen on the display without paging or “menu-diving.” 

It probably helps to understand a bit about what's new in Wave Sequencing 2.0, compared to the original. The fundamental difference is the concept of “Lanes.” 

With original-style Wave Sequences, each step of the sequence includes a Multisample, a pitch, a duration, etc. With the OASYS and Kronos, there were also step sequencer output values. The combination of these settings is a complete event, and each step plays the same way every time.

Wave Sequencing 2.0 splits apart the timing, the sequence of samples, the melody, and the step sequencer, so that each can be manipulated independently. We also added new characteristics, such as shapes and gate times. Each of these is a “Lane,” and each Lane can have a different number of steps and its own start, end, and loop points.

Every time the sequence moves forward, the individual Lanes are combined to create the output. For instance, a sample may be matched with a different duration, pitch, shape, gate length, and step sequence value every time that it plays. You can modulate each Lane’s start, end, and loop points separately for every note, using velocity, LFOs, envelopes, Mod Knobs, or other controllers. Each note in a chord can be playing something different!
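One way to picture the Lane idea is as several independent loops of different lengths, advanced in parallel and combined at every step. The Python sketch below is purely illustrative – hypothetical names and a much-simplified model – but it shows why lanes of different lengths keep producing new combinations:

```python
# Hypothetical sketch of "Lanes": each lane is an independent loop with its
# own length and loop point; the current value of every lane is combined to
# form each played step. Names and numbers are invented for illustration.

from itertools import islice

def lane(values, start=0, end=None, loop=0):
    """Yield values forever: play start..end once, then keep looping from `loop`."""
    end = len(values) if end is None else end
    i = start
    while True:
        yield values[i]
        i += 1
        if i >= end:
            i = loop

sample_lane = lane(["piano", "bell", "vox", "noise"])   # 4 steps
pitch_lane  = lane([0, 3, 7, 10, 12])                    # 5 steps (semitones)
timing_lane = lane([250, 125, 125])                      # 3 steps (milliseconds)

# Because the lane lengths differ, the pairings keep shifting over time:
for sample, pitch, dur in islice(zip(sample_lane, pitch_lane, timing_lane), 8):
    print(f"play {sample:<6} transposed {pitch:+3d} semitones for {dur} ms")
```

Modulating a lane's start, end, or loop point per note would simply change the arguments each voice passes to its own copies of these loops.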

Each Lane type – Multisample, timing, pitch, shape, gate, step sequencer – has its own set of presets, drawing from the over 1,000 Wave Sequences included with the factory voicing. You can easily mix-and-match presets for the different Lanes, and by putting the different pieces together, create something entirely new (or let randomization do this for you!). The Lane presets, in conjunction with all of the front-panel Lane controls, transform Wave Sequence creation into an intuitive, explorational, physical process.

The “Lane” concept is inspired, in part, by the methods of 20th-century serialist composers such as Pierre Boulez. Being able to play with everything in real-time, and with everything happening separately for each note, brings it to an entirely new level.

We also added different types of randomization, as another way of achieving unexpected, organic results. First, for each Lane, the order of the steps can be randomized on each loop repetition. You can vary the range of steps included in the randomization via knobs or other real-time controllers. 
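As a rough sketch of that first kind of randomization, each pass through a lane could shuffle only the steps inside a modulatable window and leave the rest in place (hypothetical code, with the window standing in for the knob-controlled range):

```python
# Hypothetical sketch: reshuffle part of a lane on every loop repetition.
import random

def next_pass(steps, shuffle_from, shuffle_to):
    """Return one loop pass with steps[shuffle_from:shuffle_to] reordered."""
    window = steps[shuffle_from:shuffle_to]
    random.shuffle(window)
    return steps[:shuffle_from] + window + steps[shuffle_to:]

pitch_steps = [0, 3, 5, 7, 10, 12]
for _ in range(3):
    # a knob or other controller could widen or narrow the shuffled range
    print(next_pass(pitch_steps, shuffle_from=2, shuffle_to=6))
```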

Second, each Step of every Lane has a probability value from 0 to 100%. Each time the system prepares to use a Step, that probability is calculated. If the probability is not met, the Step is skipped. And, this probability can be modulated independently for each Step! 
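In rough, hypothetical pseudocode terms (not Korg's code), that per-Step probability check behaves something like this:

```python
# Hypothetical sketch of per-Step probability: each step rolls against its
# own probability every time it comes up; failing the roll skips the step.
import random

steps = [
    {"sample": "pluck",  "prob": 1.00},   # always plays
    {"sample": "breath", "prob": 0.50},   # plays about half the time
    {"sample": "click",  "prob": 0.25},   # plays only occasionally
]

def surviving_steps(steps):
    """Return the steps that pass their probability check on this pass."""
    return [s for s in steps if random.random() < s["prob"]]

for _ in range(4):
    print([s["sample"] for s in surviving_steps(steps)])
```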

Finally, all of this can be used in synergy with the arpeggiators. Each new arpeggio note can move to a new Step, and still use all of the controls for Lane loop point modulation and randomization. That produces some really cool, useful effects.

The result of all of this is organic, ever-changing sounds that respond to real-time control. 

The HUB: With such a deep synth engine, how did you narrow down what you wanted people to be able to control in real-time?

DP: We brought up all of the basics to dedicated knobs on the front panel: filter, envelopes, LFOs, effects, and so on. But, ultimately we didn't want people to have to choose! We decided that, as a philosophy, you should be able to modulate anything, either from controllers or programmable modulation sources, unless it's really not feasible to do so. As a result, almost all of the front-panel knobs can be modulated, along with a large number of additional on-screen parameters. Even parameters in individual Wave Sequence steps can be modulated, so there can be more than 1,000 potential modulation targets in a single Program.

The eight Mod Knobs are an important part of this, since unlike the dedicated front-panel knobs for the filter, envelopes, LFOs, etc., they can be programmed separately for each Program, to do whatever makes the most sense for the sound. And, of course they can control many different parameters at once.
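Conceptually, a Mod Knob behaves like a macro: one knob value fans out to several destinations, each with its own depth. Here's a hypothetical sketch of that idea (invented parameter names, not the Wavestate's actual parameter set):

```python
# Hypothetical sketch of a per-Program macro knob driving several parameters.

def apply_mod_knob(knob_value, routings, params):
    """Offset each routed parameter by knob_value * depth."""
    for target, depth in routings:
        params[target] += knob_value * depth
    return params

params   = {"filter_cutoff": 0.40, "lane_loop_end": 0.75, "reverb_mix": 0.20}
routings = [("filter_cutoff", +0.5), ("lane_loop_end", -0.25), ("reverb_mix", +0.3)]

print(apply_mod_knob(0.8, routings, dict(params)))
```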

We also implemented a new way to create modulation routings, which is really fast and hardware-oriented. I hope that people have fun with this part of the instrument.

Korg Wavestate Mod Knob Assignments

The HUB: Can you speak to the voicing process a little bit?

DP: Yes, absolutely – the voicing (aka sound design) plan was a critical part of the product concept. We knew that we wanted it to sound unlike any other instrument, with a strong emphasis on motion within the sounds, rhythmic or otherwise. We also knew that we wanted hands-on control over the aspects of that motion, which Wave Sequencing 2.0 is pretty unique in providing, and that places demands on all aspects of the instrument: the hardware, the software, and the voicing itself.

The sample selection was certainly informed by our vision of the end results. I did the sample selection and editing (when necessary) by myself. My one rule was that, Marie Kondo-style, every sample had to "spark joy." I took a selection of my personal favorites from the Kronos and Krome, as well as some new, previously unreleased samples, mostly from unusual sources; I sampled my own hand-pan drum, for instance. We also got a fantastic bank of samples from Plugin Guru, and naturally included all of the samples from the Wavestation.

John Bowen was in charge of sound development for the original Wavestation, with a team that included John “Skippy” Lehmkuhl, Jack Hotop, Peter Schwartz, and others. We’re very happy that John, Skippy, and Peter all made many sounds for the new Wavestate, along with the incredibly talented Airwave. Sometimes, voicing starts only after development is complete, but in this case we brought everyone into the loop pretty early in the process; they were our initial beta-test group, and really helped to put the final polish onto the product.

I had a seminal experience at a NAMM show long ago, probably in 1991. I was covering booth duty for someone, behind the desk at our keyboard demo station. There were maybe four keyboards total: three workstations (probably T-series) and one Wavestation. People would walk up, put on the headphones and play for a bit, and then ask questions: how many notes could it play at once, what was the bit depth, how many sounds did it have, how big was the PCM ROM, and so on. After a while I noticed that all of these questions had been about the workstations; in the meantime, there had been one guy just playing the Wavestation for about 15 minutes, headphones on, barely looking up. Finally, he took off the headphones, looked at me, said "Gotta have one," and walked off. No questions about numbers of this or megabytes of that; it was all about the sound. Ever since then, that's the reaction I've been going for; it's the goal of the Wavestate voicing, and I hope it's what people feel when they get their hands on the instrument!

The HUB: Final question! Looking back over the past 30 years, is this where you imagined the technology would be today?

DP: With Wave Sequencing in particular, I've always wanted it to be more dynamic and integrate more with modulation, so in those senses yes. I wasn’t thinking about “lanes” then, though!

The HUB: Thanks so much for your time, Dan!