Recently, I sat down with Stephan Schmitt and had a fascinating conversation about a wide range of topics. He talked about his first synthesizers, the early days of Native Instruments, and the future of sound synthesis. He also gave us some insight into his musical tastes and philosophy — something that has greatly influenced his own instrument designs.
Studies and First Jobs
What did you study?
I studied electrical engineering in Braunschweig; it took me a while to finish my studies because I had a lot of other interests, including making music! During those student years, I was already working with musicians and filmmakers. I also had various jobs in music stores, studios, and PA rental places. I finally finished my studies in 1988 and came to Berlin because I was offered my first job as an electronics developer. It was my first “real” job where I really started making money.
Did you always want to come to Berlin?
Not really. But Braunschweig was relatively isolated because it was very close to the East German border; it was somewhat cut off from everything else. I knew a lot of people in Braunschweig, especially artists and creative-oriented people — and Berlin was very attractive to them. At that time, people who studied art in Braunschweig generally left for either Berlin or Cologne after graduating.
Many of those who went to Berlin were fascinated by the squatting movement and alternative scenes — the “wild life” — in West Berlin. That all seemed a bit artificial to me and I didn’t find it particularly attractive. When I came to Berlin, I had some contact with the “scene”, but worked very long hours and so I really didn’t have that much time to do anything else. I didn’t come to Berlin to experience the “scene”; I came for the job. I could have gone to another city just as easily. I got the job offer from the same man who supervised my thesis; so he was also my boss at this company.
What was your job there?
I was a developer of electronic circuits for fiber optics systems. This was a relatively new field and the push came from Telekom, which had a near monopoly in communications technology in West Germany. We were dealing with large communications hubs for high-volume traffic. It involved very high-frequency systems and was actually quite a challenge, because my background was more in measurement and control systems, not high-frequency technology.
It didn’t have anything directly to do with music, did it?
No, not at all. Obviously, signal processing is essential to audio and it’s also a central theme in the fiber optics communications technology that I was working on. But that was about the only connection.
Where did your audio technology background come from?
I was always building devices, and it always involved audio. Even before I studied electrical engineering, I made a lot of audio electronics — for myself and for friends who played in bands. I enjoyed soldering and modifying things and building stuff, which taught me quite a bit about electronics and gave me an advantage. I didn’t even realize this until towards the end of my university years, when we all had to start doing real hands-on, physical projects. Later, when I started working, I also realized that I was a good developer, not because of what I had studied, but because of my hobby.
Why did you quit that first job in Berlin?
I didn’t want to contribute to what I thought was a step towards governmental control over society; it’s rather interesting when you think about that from today’s perspective! I was politically left-leaning — or more accurately “alternative-oriented” — and felt that this technology had politically questionable ramifications in terms of its inherent surveillance possibilities. I often argued with my boss over whether or not that technology was really “progress”. At that time, we really didn’t have an idea of what the internet would become. We didn’t realize that the internet could become something so “democratic”, as it has turned out to be in many ways. And anyway, I was tired of working sixty hours per week on this stuff, with a lot of overtime and deadline stress.
So what did you do after quitting?
I had saved some money — it was a well-paid job at the time — and I lived on those savings to start making a lot more music!
What are your musical roots?
In Braunschweig I had played in bands and had been involved with music projects — I was always active, either as a keyboardist, as a mixer doing front-of-house (F.O.H.) sound, or working on studio productions. I also worked with a theater making sound collages, lighting, and live sound. It was a very interesting experience that I could also draw from later on.
I always played keyboards — a number of different synthesizers — but never had enough money to buy something like a Prophet 5. Instead of the Prophet 5, I had a Korg PolySix and various Casio synthesizers. I also had the small DX100, because I couldn’t afford a DX7. When I began to work here in Berlin, I was able to buy my own DX7. For five years, that’s all I played. At some point, Yamaha started building sample-based voices into their FM synths and I really didn’t like that at all. I didn’t want to spend money on a synth that I would only use half of!
I worked a lot with a combination of a DX7 and an Atari-based editor called SynthWorks from Steinberg. For a long time, that was my environment for playing and experimenting with FM sounds. Because of my intense work schedule I didn’t have that much contact with the scene at the time, so I played mostly alone.
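[Note: For readers unfamiliar with FM synthesis — the principle behind the DX7’s sound engine — the core idea is one sine wave modulating the phase of another. It can be sketched in a few lines of Python; the frequencies and modulation index below are illustrative choices, not DX7 presets.]

```python
import math

def fm_tone(carrier_hz, mod_hz, mod_index, sample_rate=44100, seconds=0.1):
    """Render a two-operator FM tone: a sine carrier whose phase is
    modulated by a second sine wave (the 'modulator')."""
    n = int(sample_rate * seconds)
    out = []
    for i in range(n):
        t = i / sample_rate
        modulator = math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return out

# A 2:1 modulator/carrier ratio with a moderate index yields a bright,
# harmonically rich spectrum -- the kind of sound FM is known for.
samples = fm_tone(220.0, 440.0, mod_index=3.0)
```

Raising the modulation index adds sidebands and brightens the timbre; integer carrier/modulator ratios give harmonic tones, while non-integer ratios give bell-like, inharmonic ones.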
Did you give any public performances at that time?
Back then, I was sharing an apartment with Berthold Türcke, a friend who was a sort of musical role model to me. He was a classically-trained pianist and composer and had studied in the USA under disciples of Schönberg. I lived at that place for more than a year, but he had been living in Berlin for longer and had a network of contacts.
At one point he realized that I had been playing synthesizer in my room, with my headphones on, and wanted to hear what I was doing. He reacted very positively and told me that what I was doing actually sounded good. Thanks to his encouragement, I started working toward live performances.
Where did you play?
The contacts that I had at the time were in the art scene, so I started playing at exhibitions, galleries, and ateliers, where there was a listening public that was very open to experimental things.
How did you start playing experimental music?
I was always into a wide spectrum of musical styles. In the seventies, when I was in high school, I was involved with a number of bands and played the typical rock styles of the time. At some point jazz-rock came along as a sort of “more intelligent” or “sophisticated” music and I listened to — and tried to play in — that style.
Through Berthold, I was exposed to more experimental, avant-garde music. I also did some concerts with him, which included performing some John Cage pieces. Whenever he needed someone to set up or play something “electronic”, I was the man [laughs]. Later, when he realized that I had become a better keyboard player, he wrote a composition for both of us and we played it here in Berlin a few times. When he organized concerts of works by composition students, he also asked me to play; I felt a little bit out of place playing in those concerts! It was another musical world for me, but it was very interesting.
What did you listen to when you were a teenager?
My first records were from the Rolling Stones, but I also listened to a lot of blues. One of the first records I ever bought was B.B. King, and for a while I bought almost nothing but B.B. King records. I was always fascinated by his style, and blues guitar in general.
You often speak about the guitar with great respect; why didn’t you ever learn to play it?
Because everybody played guitar back then! But also — and maybe this is a bit of a personal thing — when everybody does the same thing, I always want to do something different. As a guitarist, it was hard to differentiate yourself from everyone else.
But it’s true that the music that I was listening to back then was very guitar-oriented. For example, the Irish guitarist Rory Gallagher, or Ten Years After with Alvin Lee on guitar, even though they had a good organist [laughs]. Or Deep Purple… Led Zeppelin was also really important to me. And I just liked guitar music that had a real sound…
Then the psychedelic music came along and there was Pink Floyd, then jazz rock: Chick Corea, Al Di Meola, as well as the European jazz rock scene… I have been to way too many jazz rock concerts! [laughs]
After a while, I became less interested in virtuosic music or highly refined harmonic structures — “head-oriented” music, like the jazz rock stuff — and became more attracted to music that had a real point of view, that had an attitude, a style of its own, music that had something to say, even to the point of being provocative. I never got into punk, but new wave and the message behind it was interesting to me. I also liked the “German wave” like Nina Hagen, Ideal, etc.
There was also a time when I listened to a lot of Frank Zappa. His music wasn’t just jazz rock; he drew from a wide range of musical styles including the avant-garde… Looking at it today, it seems too eclectic, too “constructed”. But I do realize that he has influenced me quite a bit, although I also know that that time is over; there are other things now.
The people I know in the contemporary art scene listen to interesting music and that has also been an influence on me. Artists often seem to be looking for culturally inspiring sources. One of my best friends is a painter, and he often worked at night and listened to John Peel’s radio shows on the BBC. He was always bringing by new discs; there was a lot of New York avant-garde associated with the Knitting Factory. This kind of music later influenced me more than Zappa, in terms of my own personal musical directions.
Did you listen to much Brian Eno, Robert Fripp, or Velvet Underground?
The Velvet Underground didn’t appeal to me that much, even though it was very much a music that artists listened to… I have several King Crimson discs, but only the phase with Adrian Belew! I respected Eno as one of the most radical and intelligent innovators, but it took some time before I could enjoy his (sometimes very minimalistic) music.
The Idea for Reaktor
How did you get the idea to make something like Reaktor (called Generator at the beginning)? After all, you came much more from the world of hardware and not software…
As a hardware developer I was one of the few people who focused on analog technology. Many electronic engineers specialized in digital systems and software for microcontrollers, because that had become the dominant technology by then. But in signal processing, you have to really know about analog phenomena, too.
Also, I was always very involved with synthesizers. I played a number of them during my student days, but I also had jobs in music stores where I sold them and therefore had access to a lot of equipment. Plus, I regularly went to the Frankfurt Musikmesse, thoroughly read the specialized magazines on synths and music production, and was fully aware of what was available.
At some point, I realized that Yamaha had gone very “deep” with FM synth technology, but then quit working in that direction. I had pushed these instruments to the limits of what was possible and wondered why they didn’t develop their synthesizers further. I was disappointed that they were bringing out machines that were no longer compatible with previous, more sophisticated synths, and instead manufacturing mostly sample-based “sound modules”.
So the industry started focusing on these “romplers”. They had very limited user interfaces, and editing was difficult. The sounds were strongly predefined by the built-in waveforms and most people just used the factory preset sounds. So the synthesizer evolution just kind of stopped. The interesting stuff was only available second-hand. This situation frustrated me. I thought: “If I’m serious, I’ll have to develop my own instruments!” I mean, I had a good background in signal processing technology, so this wasn’t just wishful thinking.
But I also knew that developing synthesizers like those made by Roland and Yamaha was a very expensive endeavor; you need a lot of capital to develop special chips. At the time, it wasn’t possible to use any standard processors in synths; everything was done with ASICs (Application Specific Integrated Circuits), and you’d need hundreds of thousands of dollars to invest in that. The alternative was to use DSPs, which were starting to get fast enough to handle audio synthesis and processing as purely software-based systems.
One day I was at a vacation house where I found a “c’t” magazine [Note: “Computer-Technik”, a German periodical]. During the time I was there, I would keep picking it up and would page through it, read some articles… and that was when the “click” happened: I could actually just use a completely normal PC, meaning that I could develop something on a very open system.
A number of possibilities opened up: I could design something with a DSP card, or maybe even use the internal processors in real time — after all, CPUs would soon be powerful enough to handle real-time audio. It was really exciting! I realized at the time that I had the possibility — as a small or one-man company — to develop something that could compete with Yamaha and Roland; at least I could do what they no longer did in terms of synthesizer development.
Creating a Modular Synthesizer
Did you already have much programming experience before then?
No, but after the Wall came down I worked for a company in East Berlin, developing digitally controlled mixing consoles. Computer technology and software played an important role. I was much more a hardware developer at the time, but I headed up the whole development there and so I worked a lot with software engineers.
I really enjoyed working with software people to create specifications and I learned quite a bit at that time: how software is conceived and specified and how software projects should be managed.
At some point I realized that software wasn’t all that difficult and that I could also do it — but I never would have started alone! There was a lot that I didn’t know at the beginning. That’s why Volker Hinz was a very important partner for me. I met him at that East Berlin company, where he worked as a student. I mentioned to him that I was not planning on staying at that job forever — the company was in the process of being dismantled anyway. I told him: I have an idea…
Volker had learned to program in C on his own and had worked with various systems. I was confident that he would be able to learn a new system, a new environment. He was still a student — I couldn’t have paid much of a salary — and he had time. I was living off of unemployment, which helped support us at the start. I worked on concepts and specifications and wrote some signal processing code while Volker set up our development environment and did most of the coding for the first test versions.
Was it clear at the beginning that you were building something for native processors?
No. At the beginning, we weren’t sure whether we were going to use DSP cards or the “native” main processor for audio processing. We could actually have saved the money we spent on an expensive DSP development platform that was never really used! When the Pentium and Windows 95 came out, we realized that we could do signal processing on those mass-market systems. Intel had a thing called “Native Signal Processing”; they provided a software library to support signal processing on the Pentium chip itself. Strangely, this project was put on hold — maybe the peripheral hardware manufacturers were against it, because there were modem and audio functions that would have replaced the need for such external hardware.
Is that where the “Native” part of “Native Instruments” comes from?
Yes, the idea of using the word “native” in “Native Instruments” came from Intel’s “Native Signal Processing” initiative, back in 1995. But Volker and I had been working on this since the beginning of 1994, laying down the foundations of Generator. There was a lot of learning-by-doing; Volker didn’t have a great amount of professional experience at the time and I had practically none in this domain, so things went slowly at first. It took about two years before we could really show anything.
When did you first show Generator at the Musikmesse?
It was in 1996. I also had a part-time job working for a mixing desk company and went to the Musikmesse for them. We were involved with mixers for theaters and were showing them at the Musikmesse. My boss allowed me to put a computer in the corner on a small table to show people a very early version of Generator.
Nobody knew what we were doing; nobody even noticed that we were there! I had made some flyers about Generator and didn’t really know what to do with them. Then I had the idea to go to the booths of the German magazines specialized in synths and studio equipment — Keyboards and Keys — and to leave some flyers there.
Surprisingly enough, on the next day, journalists from these magazines actually came to our booth, wanting us to show them Generator! They were really fascinated. They even prominently mentioned us in their Musikmesse reports published just afterwards, saying that it was one of the most exciting things at the show! That was very, very important for us.
Some people who had read about us in the magazine contacted us and asked if they could work with us. Two of the first people to join us were Michael Kurz (who later developed the B4 and the Pro–52) and Bernd Roggendorf (who later co-founded Ableton). Some months later Bernd introduced us to his friend Daniel Haver, now the CEO of Native Instruments.
When did you actually start selling Generator?
Wieland Samolak ordered the first system from us at the end of 1996. We had developed on Windows PCs and he was a Mac user — that posed a problem! So he ordered a specially configured PC with Generator installed on it! This was an important order for us; I got my hands on a computer and made sure that the software ran reasonably well so that we could turn a working system over to him. I rented a car and drove this computer to the “Synthesizerstudio Bonn”, a legendary place that was the go-to synthesizer shop in Germany at the time. Wieland had been working there.
Shortly before this we had registered the company officially (on May 3, 1996) in order to start doing business. It was then called the “Schmitt und Hinz GbR”.
With Bernd, we had the first really professional software developer on board. He was experienced in software engineering and optimization, and more advanced techniques, helping us develop our projects based on a professional software framework. He later became the head developer of Ableton.
Did Generator work “out of the box” at the beginning?
For the first version, we had to develop our own sound card. Back then, audio interface latencies were too high for reasonable real-time performance. With our own card we managed to achieve latencies down to 5 milliseconds, which was very low for the time.
I have a tremendous amount of respect for Volker because he taught himself to develop drivers back then, which is notoriously difficult. The hardware was fairly simple and straightforward; we had the whole signal chain — software, driver, and hardware — under our control, and that’s how we got the latency low enough to play an instrument live. By the way, our card actually didn’t sound so bad — some people really liked it!
The guys from Emagic realized what we were doing at the time and they offered us their Audiowerk 8, and we could then adapt their driver to our system. This meant that in addition to a low latency, we could also have a multi-channel output.
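[Note: The latency figures mentioned here follow directly from buffer size and sample rate. The sketch below shows the relationship; the buffer sizes used are illustrative, since the actual configuration of the NI card is not documented here.]

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    """One-way latency (in milliseconds) contributed by an audio
    buffer of the given size at the given sample rate."""
    return 1000.0 * buffer_samples / sample_rate_hz

# Roughly 220 samples at 44.1 kHz corresponds to the ~5 ms latency
# mentioned above; typical consumer sound cards of the era buffered
# far more audio than that, hence their much higher latencies.
low = buffer_latency_ms(220)
```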
Was it always your plan to make a fully modular system?
No, that was not clear at the beginning. I was actually never a big fan of large modular systems, not to speak of the fact that nobody could really afford them back then! And I was not into the cult of the old analog gear; I was always more interested in smaller, more compact, more playable systems, with polyphony and velocity sensitivity.
But software is developed in a modular way. You try to make components that you can re-use. By the time they are debugged and really working, you’ve invested a lot of time into them and so they should be used again and not have to be re-invented each time. And when you develop things in a modular fashion, you create a kind of comprehensible structure.
Since we already had a modularity through our software development, I thought, “Why shouldn’t we also give the user the flexibility that this modular development provides?” This way, users could use these building blocks to make their own configurations.
With a radical modular approach you get a graphical programming language. Max and Kyma were already out at that time. I had a little experience with Max on NeXT stations and found it an interesting challenge to come up with something with similar flexibility, but easier to use.
So, you certainly could have developed a ready-made type thing — like the Pro–53, for example, but you expressly decided to create a modular system?
I always had the feeling that the industry seriously restricted users and dictated what we users supposedly wanted in terms of synthesizers: whether they had three LFOs or just one, what would modulate what, what the user interface would look like, and so on. Everything was very strongly predetermined. Plus, things often wouldn’t be developed further and might even be taken away in later versions! This experience with industry-created instruments made me want to empower musicians, to make it possible for them to have more freedom with their instruments.
Are you saying that Reaktor has political beginnings?
[Laughs] Yes! It’s all about emancipation! But seriously, for me it was about breaking free from the dependence on the industry and I believed that other users — other musicians — would appreciate this as well.
In retrospect, we realized that not that many people look for that kind of flexibility. That’s why Reaktor was not a big commercial success in the beginning, even though we were certainly a pioneer for software synthesizers. At the time there wasn’t that much available. In 1997 there were things like Propellerheads Rebirth and there was Dave Smith’s software synth company Seer Systems. Both companies were working on more compact instruments, not modular systems. For instance, Rebirth included a 303 emulation — a nice instrument to start with, for people that didn’t really have that much background knowledge and just wanted to create techno-type patterns. My approach was to be radically modular, to keep all of the technical possibilities in the foreground. It was actually much more modular than the classic “modular” systems, because we provided the panel elements — the knobs, buttons, faders, displays — as modular elements as well. Even small mathematical operations were (and still are) modular components in Reaktor.
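[Note: The “radically modular” idea — even small mathematical operations as patchable components — can be illustrated with a toy sketch. This is an editorial illustration of the concept in Python, not Reaktor’s actual architecture.]

```python
import math

def sine_osc(freq, sample_rate=44100):
    """Oscillator module: renders n samples of a sine at `freq` Hz."""
    def module(n):
        return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
    return module

def gain(factor):
    """A 'small mathematical operation' packaged as its own module."""
    def module(samples):
        return [factor * s for s in samples]
    return module

def patch(osc, *processors):
    """Wire modules together into one playable instrument."""
    def render(n):
        signal = osc(n)
        for proc in processors:
            signal = proc(signal)
        return signal
    return render

# Everything -- oscillator, multiplier, patch -- is a reusable block.
quiet_sine = patch(sine_osc(440.0), gain(0.5))
samples = quiet_sine(100)
```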
Again, Generator — Reaktor’s predecessor — wasn’t a big commercial success. From the very beginning, it was clear that the content — the instruments that were delivered with Reaktor — was the key to commercial success. In those days, people often asked us to do a 303 emulation! This freedom that I envisioned with such a deeply modular system was not embraced by a very large number of people. We did earn quite a bit of respect in the audio community for Generator and Reaktor; everybody knew that there was a great amount of know-how behind these products. It was not just about software synthesis; it was also an incredibly flexible tool. But for Native Instruments the first big commercial breakthrough came with the Pro–52 and B4 emulations [Note: the Pro–52, later Pro–53, was a Prophet 5 emulation and the B4, later B4II, was a B3 Hammond Organ emulation].
Wasn’t that about the time that the VST plugin format became available?
When we started, VSTi didn’t exist — the plugin formats for the sequencers supported only effects. In 1998, we discussed the requirements with Steinberg and in 1999 VSTi finally became available. The first really usable third-party VSTi instrument was the Pro–52. It was co-distributed by Steinberg. And with the Pro–52 — by using already-known instrument emulations — we gained a lot of customers. People wanted to see and hear something that they already knew and understood (as hardware, in this case); the knobs and buttons were all the same as on the old Prophet 5 synthesizer from Sequential Circuits; people knew what the knobs did and what the instrument would sound like. The same went for the B4. With Reaktor instruments, this isn’t necessarily the case because the user interface can be custom designed by whomever creates the instrument.
By the way, we made prototypes of both the B4 and the Pro–52 in Reaktor in order to get the right sound.
NI: The First Years
Did any other company try to buy you out at the beginning?
Yes, as a matter of fact, several companies were interested in acquiring us. When Daniel Haver joined the company in 1997, he brought a lot of business know-how; he knew how to negotiate (certainly better than we could!) such offers and to deal with contracts. And as you can see we have managed to stay independent and to become one of the “big players” ourselves.
In 1999, we obtained some venture capital that was needed to really establish NI on the market. Daniel was very active in building up a distribution network, pushing the sales and establishing the brand, which continues to be very strong!
What was the connection between Ableton and Native Instruments?
Gerhard Behles (now CEO of Ableton) was also involved in adding sampling functionality, especially granular synthesis, to Generator; Generator and Transformator were later combined and released as Reaktor. He and Bernd left us in 1999 to create Ableton. Both of them were a little bit frustrated because Reaktor was such an “engineer-oriented” product. Bernd liked Propellerheads Rebirth very much; his thinking was that you should really let people have fun with this stuff, without technical or musical experience. This was an on-going discussion at the time. Bernd and I often argued about this, although we have always gotten along very well on a personal level. I have a tremendous amount of respect for him and what he and Gerhard have built up.
But my thinking was: some things are inherently a bit complex and you just can’t simply make them easy. I also didn’t want to build hard-wired sequences or music in my instruments and really believed in letting the users make the music that they wanted to make, and not tell them what kind of music they should make.
It’s interesting, I often think of these discussions that Bernd and I had and think: yes, he fulfilled his dream of making a tool that also allows non-musicians to make music: easy and light. I had another vision and maybe that made for a more difficult path for Native Instruments. After all, Ableton became commercially successful faster than NI did.
Generator was relatively expensive at the beginning, wasn’t it?
Yes, but it came with its own sound card. When ASIO and DirectX came out, we didn’t need to provide our own sound card, and we could drop the price.
When did the first Mac version come out?
I think it was around the year 1999. When we first started out, Apple had a small and decreasing market share, especially here in Germany. That’s why I felt it was enough to just develop a Windows version at first. But soon it became evident that this market would be important to the company.
Samples vs. Synthesis
You are not a big fan of sample-based instruments, at least for your personal use — why?
I actually experimented with using samples very early on, including during the times when I was working in music stores, where I had access to samplers. For example, I used a Prophet 2000 to experiment with collages and atmospheric sounds as well as with rhythmically based sounds. But I have always felt that using samples for truly playable instruments is too rigid and not expressive enough. This is especially true if you compare it with a synthesizer, for example with filters or FM.
Plus, velocity-dependence of samples is a big problem. With a lot of work, with meticulous multisampling, you can get around this. This works all right with a few instruments like piano or strings that you could use in digital pianos. But if you want a flexible and responsive instrument which allows you to change the sound very fast, and where the sound is elastic, that you can sculpt and form — and is at the same time something that is dynamically playable, sensitive, and reacts to velocity, controllers, and pedals and so forth — then sampling is not the answer.
Let’s just take two aspects: playable, expressive real-time control and sound design. With sampling, it takes a long time to get a finished instrument because you have to multi-sample velocity and key ranges. Plus, it’s a sort of “frozen” instrument, something you can’t easily change afterwards. It’s worthwhile for reproducing well-known, popular instruments. And even then, the sampler only really works well in a mix, not as a solo instrument, like a solo trumpet or violin or something. Basically, there is still a huge gap between a sampled instrument and the real thing played by a musician. It’s worlds apart in terms of the quality and nuance of the sound.
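[Note: The velocity problem described here can be made concrete with a small sketch. A multisampled instrument typically maps velocity onto a handful of discrete recorded layers, while a synthesizer can map it continuously onto a parameter such as filter cutoff. The thresholds and ranges below are illustrative, not taken from any particular product.]

```python
def sample_layer(velocity, thresholds=(32, 64, 96, 127)):
    """Multisampled instrument: velocity selects one of a few discrete
    recorded layers, so the timbre jumps at each threshold."""
    for i, threshold in enumerate(thresholds):
        if velocity <= threshold:
            return i
    return len(thresholds) - 1

def synth_cutoff_hz(velocity, base_hz=200.0, depth_hz=8000.0):
    """Synthesizer: velocity modulates filter cutoff continuously,
    so every velocity value produces a slightly different timbre."""
    return base_hz + depth_hz * (velocity / 127.0)

# Two neighboring velocities land in the same sample layer (identical
# timbre), while the synth responds to every single velocity step.
```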
In most cases, sampling instruments are meant to be played with keyboards, but that’s the wrong surface to play most of these instruments with. For example, the sliding of a bass sound is seldom very convincing when played on a keyboard.
I am mainly interested in expressive and highly individual playing, as well as a very individual sound for each musician. Synthesizers offer the possibility of providing this degree of individuality and flexibility. It’s very important for finding your own way of playing as a performer as well as for developing interesting sounds. Sound design with synthesizers gives me much richer possibilities than with sampling. Plus, it gives me a huge amount of real time control which I have over the sound while I am playing it. This is what I look for and work on when I am building instruments: individual sound and expressive playability.
Maybe this is also a bias I have as a keyboard player, where I have velocity control, pedals, wheels and knobs…
Spark, CHA-OSC, and Prism
Over the past couple of years you have created a number of very interesting new instruments like Spark, CHA-OSC, and Prism. These instruments have a very particular sound and color, something that you don’t find elsewhere. Where does this sound character come from?
Well, the sound could be called hybrid, for sure, but much of the character comes from carefully designed feedback paths and distortion, a.k.a. wave shaping. Of course, at Native Instruments I have worked on various instrument concepts for many years. I have contributed to, worked on, or collaborated on products like the Pro–52, FM7, Massive, Kontakt, Kore and, of course, the Reaktor Factory Library instruments. We have also analyzed, played, and listened to a large number of instruments.
So you created these instruments to augment the NI product palette?
No, not at all! Spark, CHA-OSC and Prism come from my own personal desire to have relatively simple, playable instruments for my own personal use. I had been experimenting quite a bit in this direction for quite some time. Actually, I wanted a replacement or successor for my old DX7, something that would allow me to make music similar to what I had made with that synth. But I didn’t want to go back to those techniques. I mean, I could have just used the FM8, which is a fantastic successor to the DX7 that can do so much more and is so much more flexible! But it is also too complex for what I wanted. I wanted something simpler; one big advantage of Reaktor is that it gives you the possibility to develop very special synths, even very focused, special-purpose devices. I wanted something more trimmed down, with fewer components; I tried to combine just a few elements to create the kinds of sounds I had been looking for.
So Spark started out as a sort of personal project…
Exactly. But when I showed it to some of the NI sound designers, they were very interested in using it to create sounds. And that’s how it became Spark. I was really surprised how much sonic variety the sound designers — both the external sound designers and the internal NI sound design department — were able to get out of these instruments! It was much more than I had thought; that was also very inspiring to me. It was a sort of proof-of-concept that you don’t necessarily need a really “high-end” machine with an endless feature list to make very interesting sounds. Plus, the relative simplicity of the instrument motivated the sound designers to really do a lot with it.
You see, I’m not a big fan of “monster instruments” that can do everything. I am also drawn more to particularly raw and metallic sounding instruments. This isn’t something you could normally get from pure analog or virtual-analog synthesizers. My instruments incorporate things like ring modulation, FM, wave shaping, sync possibilities and so on. Through many, many versions I managed to make an instrument where these components were linked together, along with a relatively comprehensible GUI.
The positive reactions from Spark encouraged me to continue developing this kind of instrument, so I then made CHA-OSC, then Prism as well.
Doesn’t Prism also employ a relatively unexplored sound synthesis technique?
Yes, absolutely. Prism was built to show off the new possibilities of modal synthesis that the Reaktor 5.5 update provided, with the new Modal Bank and Sine Bank modules. That also meant that the feedback technique was a big part of Prism, because it uses the filter-resonators that work nicely in a feedback loop. I was really fascinated with the possibilities of modal synthesis. When I started playing around with modal structures I thought “Wow — there’s really a lot you can do with this technique!”
Prism started out as a modal synthesis test instrument that I built in order to show people what this kind of synthesis could sound like and what kind of possibilities it opened up. It also grew out of what I learned from making Spark, as well as experience from other projects.
It’s also a synthesizer that could be classified as a physical modeling instrument because you can get results that sound quite natural. On the other hand, that wasn’t what I was trying to create; I’m more interested in providing musicians with something individual, where they not only have their own music, style, and interpretation, but also their own instrument, their own individual sound source. Prism offers a huge amount of possibilities for this. That was important for me — not that I could produce a perfectly natural flute or harp sound, but that the synth can really be used by musicians to create their own sound.
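The core idea of modal synthesis discussed above, a bank of resonators that ring like the partials of a struck object, can be sketched as a sum of exponentially damped sinusoids. This is only a conceptual illustration, not the Reaktor Modal Bank implementation; the mode frequencies, decay times, and amplitudes are invented values chosen to sound vaguely bell-like:

```python
import math

def modal_strike(modes, sr=48000, dur=0.5):
    """Render a 'struck object': each mode is (frequency_hz, t60_seconds, amplitude).

    Every mode is a sine wave with an exponential decay, so the result is
    what an ideal bank of resonators would output after an impulse excitation.
    """
    n_samples = int(sr * dur)
    out = [0.0] * n_samples
    for freq, t60, amp in modes:
        # decay rate so the mode drops by 60 dB (factor 1000) after t60 seconds
        damp = math.log(1000.0) / t60
        for n in range(n_samples):
            t = n / sr
            out[n] += amp * math.exp(-damp * t) * math.sin(2 * math.pi * freq * t)
    return out

# An inharmonic (metallic) set of partials -- hypothetical values
bell = modal_strike([(500.0, 0.4, 1.0), (1170.0, 0.3, 0.6), (2090.0, 0.2, 0.4)])
```

Because the mode frequencies need not form a harmonic series, this technique naturally covers the metallic, inharmonic territory that is hard to reach with classic subtractive oscillators.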
In your opinion, what are the most exciting areas in synthesis nowadays?
In the past few years, a whole lot has become possible. But I think the main thing to be developed is usability. If you manage to make a very usable editing and control interface you can create really playable instruments.
Additive synthesis has become much easier, mainly because of more powerful CPUs. In Reaktor, the new Sine Bank module gives access to this area, exemplified by Lazerbass and especially Razor.
With a good user interface, additive synthesis becomes a very powerful synthesis technology. Razor appeals to a lot of people with a type of user interface and sound that is already known to them; a sort of sawtooth-subtractive sound with a filter-function. But actually all of that is very open: the waveform spectrum is open, the filter is open and really everything is quite freely definable. The concept represents just one way of defining and controlling the parameters. In additive synthesis, parameters may interact in so many ways to create new sounds.
I also think that there is quite a bit still to be explored in the realm of additive re-synthesis. For this you analyze the spectra of a sample at various points. That brings us back to sampling in a way, because you use samples as a sort of starting point, similar to how wavetable synthesis works, but instead you can work with spectra, combining the frequency and time domains to produce different kinds of sounds which you can morph between. I think this is an area where a lot could happen. Samplers can become vastly more elastic and expressive.
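To make the additive idea above concrete, here is a purely illustrative sketch (not Razor's engine): a sawtooth-like tone built as a sum of sine partials at amplitude 1/k, where a single made-up "brightness" rolloff parameter plays the role the filter cutoff plays in a subtractive synth. Every partial amplitude remains freely definable, which is exactly the openness described above:

```python
import math

def additive_saw(freq, n_partials, brightness, sr=48000, n_samples=512):
    """Additive rendering of a sawtooth-like tone.

    Partial k gets amplitude 1/k (the sawtooth series), scaled by
    brightness**(k-1), a per-partial gain curve acting like a low-pass filter.
    """
    out = []
    for n in range(n_samples):
        t = n / sr
        s = 0.0
        for k in range(1, n_partials + 1):
            rolloff = brightness ** (k - 1)  # steeper rolloff = darker tone
            s += (1.0 / k) * rolloff * math.sin(2 * math.pi * k * freq * t)
        out.append(s)
    return out

dark = additive_saw(110.0, 32, 0.7)     # strong rolloff: muted high partials
bright = additive_saw(110.0, 32, 0.98)  # flat spectrum: brighter tone
```

In a real additive engine, the `rolloff` curve would not be a fixed formula at all; any per-partial amplitude (and frequency) trajectory can be defined, which is why the interface, not the signal path, becomes the hard design problem.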
In other words, you think that sampling and synthesis technologies might be joined in the future?
For sure, the gap will get smaller. In general we probably won’t see many “revolutionary” new synthesis technologies, but a lot more “hybrid” techniques with samples, FM, additive or modal synthesis, many sorts of filters, physical modeling… For example, just look at what Mike [Mike Daliot, designer of a number of very successful synthesizers at Native Instruments] did with Massive; that’s a really hybrid synthesizer. There is a lot that you can develop further in that direction, too!
So do you think in the next few years we are going to see more instruments that will focus on usability?
It doesn’t really help to bring out products with more and more new features with new knobs and buttons; the GUIs will just get more and more incomprehensible if instruments are simply expanded in terms of feature-sets and GUI elements. It’s actually most interesting when several different parameters change at once, not just one knob per parameter. In other words, if you make one knob able to control a number of related parameters at once, that can be much more usable and musically interesting. Morphing is an example of this.
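A minimal sketch of that idea: one macro knob in the range 0.0 to 1.0 linearly interpolates a whole set of parameters between two stored snapshots, so a single control moves many related parameters at once. The parameter names and values here are hypothetical:

```python
def morph(snapshot_a, snapshot_b, knob):
    """Linearly interpolate every parameter between two stored settings.

    knob = 0.0 gives snapshot_a, knob = 1.0 gives snapshot_b; one physical
    control therefore sweeps an entire region of the parameter space.
    """
    return {name: (1.0 - knob) * snapshot_a[name] + knob * snapshot_b[name]
            for name in snapshot_a}

# Two hypothetical synth settings
soft = {"cutoff": 800.0, "resonance": 0.1, "drive": 0.2}
harsh = {"cutoff": 6000.0, "resonance": 0.7, "drive": 0.9}

halfway = morph(soft, harsh, 0.5)  # halfway["cutoff"] is 3400.0
```

The musical point is that the sound designer decides once which parameter combinations are related, and the performer afterwards needs only one gesture to travel between them.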
But this means that you have to put a great deal of thought and work into creating user interfaces that are clearer and more intuitive, but at the same time still give the user a great deal of freedom. That’s really the hard part: how do you make an instrument that has hundreds of parameters really accessible? This applies to both hardware and software! I think this is really the challenge over the next few years. The question really comes down to this: how can you build a truly haptic interface — for controlling a large number of parameters — that makes musical sense, where you really use the possibilities that we have with both hands and maybe other parts of our bodies?
Couldn’t touch screens help accomplish this?
I don’t think so. Touch screens are very interesting alternatives to classical hardware elements like knobs, buttons, and faders. But they really lack the haptic element, the real sensory feedback. After all, they are just flat, and you have to look at them to really know exactly where your hand or fingers are.
I think that instruments will be differentiated more in their usability than in their features or signal processing possibilities. Nowadays, there is so much possible with advanced audio engines that it takes a long time before you can really use them. Take Absynth, for example. It has a huge palette of possibilities, but takes time to figure out how to use it, e.g. how to change an envelope. There are different levels and windows, and there are hundreds of parameters… The sonic potential is incredible, though — you can do so much with it, but the learning curve is steep.
Anyway, you don’t need to develop one synthesizer for all sounds; it’s fine to have very specialized instruments. This is an advantage that Reaktor has, for example. Reaktor is an environment where you can load a large number of smaller instruments, even at the same time. Each one can have a special character, a specific user interface. For example, an instrument that is very much centered on filters can have a GUI that really just focuses on the relevant filter parameters through the user interface elements. Or an instrument that specializes in percussive sounds might have very easy-to-use envelope generators or component-mixing elements as the focus of the GUI.
In other words, I think the main evolution in the coming years will be instruments with a strongly individual character tied to very specific user interfaces, especially in terms of software, where there is not a huge cost-factor involved with developing new or different user interfaces. Hardware interfaces are, of course, much more expensive. So the trend you see now is, for example, eight knobs, all lined up [laughs]! They are not hard-wired, but freely assignable. Maybe for hardware, the future is more “modular” elements…
Learning and Teaching
Over the years, you have often been involved in various educational projects, such as developing the SoundForum synth, as well as teaching — what do you think are the possibilities of using Reaktor as an educational tool?
From the very beginning, it was clear that Reaktor was a good tool for showing, demonstrating, and learning about signal processing, sound synthesis, and audio effects processing. With Reaktor, you can very quickly get results that you can hear — and often quite good-sounding results — with just a few modules. But our marketing has always been more focused on the wider customer base of end users, musicians, producers, and DJs — much more than on university and conservatory students or teachers.
My experience from teaching courses and workshops and being involved in magazine tutorials and books is that you have to really prepare the material. You need to prepare the modules and macros and instruments in advance, otherwise the learner is overwhelmed by all of the possibilities and complexities of what’s available. If you open the Structure of an instrument that Mike Daliot or Lazyfish have made — which are in the factory library — then the learner will think: “I’ll never get this — it’s too much!” and then will just close the Structure and never open it again.
But at the same time, with Reaktor you can have a lot of fun and get some really good results with very few components — that is a big advantage!
It’s really too bad that we (at Native Instruments) haven’t invested more time and energy in this direction. That’s why it’s good that big brain audio exists to offer workshops! It’s clear that the workshops are a success because people really want — and need — this kind of opportunity!
Stephan, thank you so much for your time!