Quote:
Originally Posted by DIGITAL SCREAMS
Interesting replies so far guys...thx. However im primarily interested in how they create a specific sonic character using code....i.e. Nords and Virus' sound different.....so how do they do it?
DS
Historically, sound generation hardware in digital instruments was parameterized by simple control software. That is, the hardware that actually generated the line-level signal that we can eventually hear was instructed what to do in basic terms and then triggered.
A sample playback engine pretty much just feeds the sound hardware the waveform it's meant to reproduce -- amplitudes at various frequencies varying over time (a 3D map). Some systems go further and implement compression schemes between storage and the sound hardware, but the idea is the same.
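As a toy sketch of that idea (this doesn't come from any particular engine -- the function and names are invented for illustration), playing a stored table of amplitudes back at a different rate is how a sampler repitches a recording:

```python
import math

def render_sample(table, rate_ratio, n_out):
    """Play back a stored waveform at a new pitch by resampling.

    rate_ratio > 1.0 pitches the sample up, < 1.0 pitches it down.
    Linear interpolation between stored amplitudes; real engines
    use higher-order interpolation to reduce artifacts.
    """
    out = []
    pos = 0.0
    for _ in range(n_out):
        i = int(pos)
        if i + 1 >= len(table):
            break  # ran off the end of the recording
        frac = pos - i
        out.append(table[i] * (1.0 - frac) + table[i + 1] * frac)
        pos += rate_ratio
    return out

# A one-cycle sine "recording" played back an octave up (ratio 2.0):
cycle = [math.sin(2 * math.pi * n / 64) for n in range(64)]
octave_up = render_sample(cycle, 2.0, 32)
```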
ROMplers and most full-featured samplers take this a bit further, and the basic output may be further manipulated by effects, envelope, filtering, and other hardware that refactors the sound in the output pipeline to achieve desired results.
For early digitally controlled synthesizers (the DX7, D-50, etc.) the sound hardware was governed by more general parameters -- e.g. starting frequency and waveform descriptors, wavetable offsets, oscillation and modulation settings, etc. -- and these were organized into patches.
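In software terms a patch is just a saved bundle of those parameter values. A minimal sketch, with parameter names invented for illustration (the pitch formula is the standard equal-tempered MIDI mapping, note 69 = A440):

```python
def note_to_hz(midi_note, coarse_tune=0):
    """Equal-tempered pitch: MIDI note 69 is A at 440 Hz."""
    return 440.0 * 2 ** ((midi_note + coarse_tune - 69) / 12)

# A "patch": parameter values handed to the sound hardware on recall.
patch = {
    "waveform": "sawtooth",   # which oscillator shape to select
    "coarse_tune": 12,        # +1 octave, in semitones
    "lfo_rate_hz": 5.5,       # vibrato speed
}

# Playing MIDI note 69 (A4) through this patch sounds an octave higher:
freq = note_to_hz(69, patch["coarse_tune"])  # 880.0 Hz
```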
These approaches are somewhat old-fashioned, however, by the standards of modern DAWs and VA-type sound modules.
Basically, even as late as a few years ago, commodity DSPs in sound modules and CPUs in conventional computers didn't have the memory or the muscle (in terms of clock cycles or bandwidth among componentry) to arbitrarily manipulate sound in "real time".
I.e., if you wanted purely digital tracks solely from a normal computer, they could only be set up in advance, rendered in batches into digital audio formats, and then played back -- with no useful control of sonic properties during that playback.
Now, however, not only can conventional computers generate a plausible, complete waveform directly from math expressed in common programming languages, but these waveforms can be completely rendered into a routeable signal with no significant delay.
Additionally, the input from hardware and software control metaphors (e.g., MIDI-based control surfaces or VST-type effects plug-ins) can be factored into waveform generation inline, enabling a conventional computer to work as a performance tool.
(...That doesn't mean it doesn't require practice, but we know that...)
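A minimal sketch of that block-by-block rendering idea, with a changing amplitude standing in for incoming controller data (nothing here reflects any specific product's engine):

```python
import math

SR = 48_000   # sample rate, Hz
BLOCK = 64    # samples rendered per control update

def render_block(freq, phase, amp):
    """Render one block of a sine oscillator. Phase is carried
    across blocks so parameter changes between blocks don't click."""
    out = []
    step = 2 * math.pi * freq / SR
    for _ in range(BLOCK):
        out.append(amp * math.sin(phase))
        phase = (phase + step) % (2 * math.pi)
    return out, phase

# Pseudo-performance: the amplitude changes between blocks, as if a
# MIDI controller were being moved while audio keeps rendering.
phase = 0.0
signal = []
for amp in (0.2, 0.5, 1.0):
    block, phase = render_block(440.0, phase, amp)
    signal.extend(block)
```

Real engines work the same way, just with far more parameters arriving between (or even within) blocks.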
So now if a sound module designer needs to support a wide variety of extensible sound generation techniques, they don't need custom-made, impossible-to-update, integrated circuits dedicated to each approach on board.
To achieve their goals, they can just play math games until they find a set of algorithms that work well together with a useful abundance of adjustable variables to produce interesting sounds.
Either that, or they take sounds which are already cool for one reason or another and examine them carefully using hardware or software analysis tools, constructing algorithms that may produce waveforms with the same qualities.
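Two-operator FM is a classic example of such a "math game": a couple of adjustable variables (here, a modulator ratio and index) reshape the harmonic content dramatically. A toy sketch, not anyone's actual implementation:

```python
import math

def fm_sample(t, carrier_hz, ratio, index):
    """Two-operator FM: a modulator bends the carrier's phase.
    `ratio` and `index` are the adjustable variables -- small
    changes to either give very different harmonic content."""
    mod = math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * mod)

# index = 0 is a pure sine; raising it adds sidebands (brightness):
pure   = [fm_sample(n / 48_000, 220.0, 2.0, 0.0) for n in range(256)]
bright = [fm_sample(n / 48_000, 220.0, 2.0, 4.0) for n in range(256)]
```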
Whatever mechanism is used, these days it's largely just software. C or C++, assembly, even Java can be set up to produce complete waveforms (amplitudes within a range of frequencies) from mathematical expressions fast enough that a usable signal may be had almost instantly.
The upshot is that you can create sounds with tremendously sophisticated properties.
The downside, if you will, is that when the actual sound generation was in the hands of hardware the designer could only partially control, the sound designer just had to do their best to roll with that hardware's peculiarities. Now they are responsible for almost the whole thing.
In many cases the skill of the device designer can be seen not in an abundance of algorithm parameters available to the musician, but in the availability of the few most critical direct or derived ones (i.e., parameters that control others) needed to be as expressive as possible.
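A sketch of one such derived parameter -- a single "brightness" control fanning out to several underlying settings at once (all names and mappings here are made up for illustration):

```python
def apply_brightness(brightness):
    """One derived control driving several underlying parameters.
    brightness is normalized 0.0 (dark) to 1.0 (bright)."""
    return {
        "filter_cutoff": 0.2 + 0.8 * brightness,    # open the filter
        "env_attack_s": 0.30 * (1.0 - brightness),  # snappier attack
        "osc_mix_bright": brightness,               # blend in a brighter osc
    }

dark = apply_brightness(0.0)
lead = apply_brightness(1.0)
```

One knob, three coordinated changes -- which is why a well-chosen macro can feel more expressive than a page of raw parameters.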
If you do it on a normal computer, it's a DAW. Do it in a sound module with a dedicated, high-speed bus (the communications medium interconnecting its parts) and ultra-fast, high-end waveform and effects rendering hardware and you have the same idea made more useful.
Plus, your average high-end sound module (like a Virus, Nord, etc.) has snazzy control surfaces that facilitate creativity and is usually solid state -- nothing moving around inside like a hard drive -- so it's more likely to endure in a production or performance environment.