tubatimberinger wrote: I keep experiencing distortion. After a while I have been able to isolate it to a specific pitch, "G" on the top space (bass clef).
I did a few experiments, and as best I can determine at present, this does not involve the NOTION 4 native reverb . . .

BACKGROUND

(1)
Musical pitch and frequency are logarithmic, as are loudness and panning, although panning has linear aspects as well . . . One of the things I discovered during the three or so months when I temporarily went Country Western and decided to start singing bass is that singing bass is not so easy, because in this register the difference in frequency from one note to the next is very small . . .
Using string bass at standard tuning, where "Concert A" is 440 Hz, as an example, the low-pitch "E" string is 41.203 Hz, and the low-pitch "A" string is 55 Hz . . . The notes in this range are {E1, F1, F#1, G1, G#1, A1}, and the difference from the lowest to the highest is approximately 13.8 cycles per second . . .
Scientific Pitch Notation (wikipedia)

For singing, this requires very precise accuracy, since each note in this range differs from the next by only 2 or 3 cycles per second (or "Hertz" [Hz]), which is a tiny difference at best . . .
If the pitch is off by 2 or 3 cycles per second, then you entirely miss the note and are singing another note . . .
In contrast, in higher registers it is possible to miss a note by a few cycles per second without it being noticed, because adjacent notes there are separated by much larger frequency differences . . .
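The adjacent-note differences described above are easy to verify from the standard equal-temperament formula; here is a minimal Python sketch, assuming A4 = 440 Hz and 12-tone equal temperament (MIDI note numbers E1 = 28 through A1 = 33):

```python
# Equal-temperament frequencies for the low bass register, assuming
# "Concert A" (A4) = 440 Hz; MIDI note numbers: E1 = 28 ... A1 = 33.

def midi_to_hz(n, a4=440.0):
    """Frequency in Hz of MIDI note n under 12-tone equal temperament."""
    return a4 * 2.0 ** ((n - 69) / 12.0)

names = ["E1", "F1", "F#1", "G1", "G#1", "A1"]
freqs = [midi_to_hz(n) for n in range(28, 34)]

for name, f in zip(names, freqs):
    print(f"{name}: {f:7.3f} Hz")

# Adjacent notes differ by only 2-3 Hz down here:
for i in range(1, len(freqs)):
    print(f"{names[i-1]} -> {names[i]}: {freqs[i] - freqs[i-1]:.3f} Hz")

print(f"E1 to A1 span: {freqs[-1] - freqs[0]:.1f} Hz")  # about 13.8 Hz
```

Running this shows E1 at roughly 41.2 Hz, A1 at exactly 55 Hz, and semitone gaps of about 2.4 to 3.1 Hz, which is why bass intonation is so unforgiving . . .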
(2)
Instrument Sampling Techniques . . .
There are two general ways to sample instruments: chromatic and non-chromatic. Chromatic sampling records every note; and since it requires more samples, chromatically sampled instrument libraries typically cost more and are larger than non-chromatically sampled libraries . . .
Non-chromatically sampled instrument libraries have samples for some but not all of the notes. As an example, C4 ("Middle C") and D4 might be sampled while C#4 is not, with the result that when a C#4 is specified in the music notation, the software "engine" for the sampled instrument library computes the C#4 using an algorithm which, among other things, does a bit of logarithmic pitch-shifting, probably by multiplying the playback rate by a pitch ratio. The base note will be either C4 or D4, the two nearest actually sampled notes to C#4; if C4 is used as the base note, then the C#4 is computed upward from C4, but if D4 is used, then the C#4 is computed downward . . .
For notes that differ by only a half-step, or perhaps no more than three half-steps, the computed "missing" notes are very realistic and typically sound the same as actually sampled notes; but this depends on the playing style, dynamics, articulation, pitch, and other characteristics. As an example, if the instrument is an electric guitar played through a Fender amplifier with tremolo at a fixed rate, then the tremolo rate for computed notes will be either slower or faster than the tremolo rate for actually sampled notes: slower when the nearest actually sampled note is higher, and faster when it is lower . . .
[NOTE: I think there is a way to keep the tremolo rate constant for computed notes, but doing this requires knowing that tremolo is used, and it probably requires a much more complex set of algorithms, which probably would take too long to run. Anything is possible, but whether something is practical is another matter . . . ]
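As an illustration of why a baked-in tremolo speeds up or slows down, here is a minimal sketch of the playback-ratio arithmetic; the function name and the 5 Hz tremolo rate are assumptions for illustration, not anything NOTION or any particular sampler engine actually uses:

```python
# Sketch: computing a "missing" note by resampling a neighboring sample.
# Playing a sample back at ratio r = 2 ** (semitones / 12) shifts the
# pitch, but it also scales every other periodicity in the recording,
# including a baked-in tremolo. (Illustrative only; real engines vary.)

def resample_ratio(semitones):
    """Playback-speed ratio needed to shift a sample by `semitones`."""
    return 2.0 ** (semitones / 12.0)

tremolo_hz = 5.0  # hypothetical tremolo rate baked into the sample

# C#4 computed upward from a sampled C4 (+1 semitone): faster tremolo
r_up = resample_ratio(+1)
print(f"from C4: ratio {r_up:.4f}, tremolo {tremolo_hz * r_up:.2f} Hz")

# C#4 computed downward from a sampled D4 (-1 semitone): slower tremolo
r_down = resample_ratio(-1)
print(f"from D4: ratio {r_down:.4f}, tremolo {tremolo_hz * r_down:.2f} Hz")
```

The two ratios are reciprocals (about 1.059 and 0.944), so every periodic feature in the sample, not just the fundamental, shifts by the same factor . . .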
In the same way that notes can be sampled chromatically or non-chromatically, the same applies to dynamics, articulations, and other playing styles and techniques; yet another set of variables includes the types of microphones, room acoustics, digitizing techniques, and so forth . . .
Some instruments are sampled at various dynamic levels (pianissimo, forte, and so forth), and some instruments are sampled in every possible articulation, but this is nearly never the case, because it takes a lot of time and requires a lot of resources (musicians, instruments, studios, recording engineers, digitizers, and so forth). The typical reality is that dynamic levels are computed, primarily by raising or lowering the MIDI volume level, which ranges from 0 to 127 . . .
Doing dynamics this way can work nicely, but it is not the same as having a musician play the instrument at the specific volume level, because when trained musicians play at different dynamic or volume levels, they usually change the way they play, which in turn affects pitch, attack and release times, texture, and so forth . . .
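To make the MIDI-volume approach concrete, here is a small sketch; the marking-to-volume table and the power-law gain curve are assumptions for illustration only (synthesizers and notation programs differ, and NOTION's actual mapping is not documented here):

```python
import math

# Hypothetical marking-to-volume table -- the values are assumptions
# for illustration; NOTION 4's actual mapping is not documented here.
DYNAMICS = {"pp": 33, "p": 49, "mp": 64, "mf": 80, "f": 96, "ff": 112}

def approx_gain_db(volume, curve=2.0):
    """Rough attenuation in dB for a MIDI volume value (1-127),
    assuming a power-law gain curve gain = (volume / 127) ** curve
    (an assumption; real synths use different curves)."""
    return 20.0 * math.log10((volume / 127.0) ** curve)

for mark, vol in DYNAMICS.items():
    print(f"{mark:>2}: MIDI volume {vol:3d} ~ {approx_gain_db(vol):6.1f} dB")
```

Note that only the level changes: the attack, release, and texture of the underlying sample are untouched, which is exactly what a trained musician would change when playing softer or louder . . .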
(3)
Volume level affects loudness, pitch, texture, and other characteristics . . . As shown in the updated Fletcher-Munson curves (see below), the perception of pitch and texture depends on the volume level . . .

Equal Loudness Curves (wikipedia)

(4)
You need a calibrated full-range studio monitor system running from 10 Hz to 20,000 Hz at 85 dB SPL to hear everything accurately, and you need a sonically neutral sound isolation studio . . . [NOTE: This is explained in great detail in my ongoing topic in the IK Multimedia FORUM . . . ]
The Fabulous Affordable Studio Monitor System Project (IK Multimedia FORUM)

THOUGHTS

I hear the problem, but it is not caused by any single factor . . . Instead, it is caused by a combination of factors, and finding a solution will be difficult; but I think there is a solution. The primary reason is that I think trained musicians playing real instruments can play the pieces you are doing correctly, and part of their training includes listening to each other and making what probably are very subtle adjustments to the way they are playing their instruments in real-time, on the fly . . .
As explained above, the differences among the various notes, dynamics, playing styles, and so forth are very small in the bass clef register, and it is unlikely that the NOTION 4 native virtual instruments are sampled in the required detail to do what you want to do; and intuitively, I am not certain that there is any commercially available sampled sound library that has all the required samples . . .
However, if you play Harmonium and Tuba, then you can create your own sampled sound libraries and use them with a suitable VSTi virtual instrument engine like Kontakt 5 (Native Instruments) or MachFive 3 (MOTU); but it will take a while, since you will need to sample every note at every dynamic level for every articulation, and so forth, which certainly can amount to hundreds of thousands of samples, all of which need to be recorded professionally and then digitized . . .
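A back-of-envelope count shows how quickly the samples add up; every per-category count below is an assumption for illustration only:

```python
# Back-of-envelope sample count for a fully chromatic library;
# all counts below are assumptions for illustration only.

notes         = 38  # e.g. a tuba range of roughly three octaves, chromatic
dynamics      = 6   # pp, p, mp, mf, f, ff
articulations = 8   # sustain, staccato, marcato, and so forth
round_robins  = 4   # alternate takes per combination to avoid repetition
mic_positions = 3   # close, room, ambient

total = notes * dynamics * articulations * round_robins * mic_positions
print(f"{total:,} samples for ONE instrument")
```

Even with these modest assumed counts, a single instrument already needs nearly 22,000 samples, so a small family of instruments sampled this way easily reaches the hundreds of thousands mentioned above . . .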
You need a calibrated full-range studio monitor system and a sonically neutral sound isolation studio . . . In one of the experiments, I created a NOTION 4 score and entered the first six or so measures for Euphonium, and it was obvious what was happening, so I did a few more experiments to isolate the primary problem; and this is the reason for providing the detailed background (see above), because there are several things happening, each of which affects something . . .
Among other things, I hear hiss and noise, as well as comb-filtering, and these are made more prominent by specifying lower dynamics ("pp", "p", and so forth); in another experiment I did not use any dynamics or articulations, and the problem was less prominent, although still obviously present . . .
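For reference, comb filtering is what you get when a signal is summed with a delayed copy of itself, which carves notches at regular frequency intervals; this sketch computes those notch frequencies for a simple feed-forward comb (the function is purely illustrative, not anything NOTION does internally):

```python
# Sketch: a feed-forward comb filter sums a signal with a delayed copy,
# y[n] = x[n] + g * x[n - d], which notches out the frequencies where
# the delayed copy arrives phase-inverted. (Illustrative only.)

def comb_notches(delay_ms, count=5):
    """First `count` notch frequencies (Hz) for a feed-forward comb
    with the given delay and a unity-gain delayed copy."""
    return [(2 * k + 1) * 1000.0 / (2.0 * delay_ms) for k in range(count)]

print(comb_notches(1.0))  # 1 ms delay -> notches at 500, 1500, 2500, ... Hz
```

The characteristic "hollow" sound comes from these evenly spaced notches sweeping through the spectrum whenever the delay changes . . .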
If I do a few more experiments, I can determine whether NOTION 4 is changing the dynamics by varying the MIDI volume level (0 to 127), but I am reasonably certain that this is what NOTION 4 is doing; and when this is done, it changes other characteristics of the specifically computed sounds for each note. This is one of the reasons that I generally avoid all dynamics and articulations whenever possible, since I do not want any computed notes, and in most instances the only way to get the actual sample for each note is to specify only the note, with no other music notation. There are sampled sound libraries which, for example, have one French horn sampled being played pianissimo and another French horn sampled being played forte; if you use the pianissimo French horn, then you are using the actual sample rather than something computed by an arbitrary algorithm, and the same thing happens when you specify the forte French horn. But doing this generally requires switching channels rather than using "p" or "f", unless there are corresponding rules that do the same thing as switching channels, as contrasted to simply lowering or raising the MIDI volume level . . .
Explaining the more relevant aspects of what I am hearing might require ten more posts as long as this one, but one way to understand some of the more subtle acoustic physics is to watch this YouTube video of a presentation by Ethan Winer at the 2009 Audio Engineering Society (AES) convention, where the most relevant information is in the last half of the video, which is fabulous . . .
Audio Myths Workshop (Ethan Winer et al.) -- AES Show 2009 (YouTube video)

Fabulous!
~ ~ ~ Continued in the next post ~ ~ ~