benmmustech wrote: . . . can someone point me to a document or something of the like that tells me what markings actually does something to the music on the stave.
The simple high-level information is that all the articulations and dynamics in NOTION 3 work very nicely and quite precisely, but the fact of the matter is that it all depends (a) on the specific sound samples in the library for each instrument and (b) on any customizing that has been done for those instruments which have standalone user interfaces, support rules, and so forth and so on . . .
Additionally, it depends on the way the specific instrument was played when it was sampled, and there is yet another aspect which in some respects is a bit more subtle, although the consequences are not so subtle . . .
Using SampleTank (IK Multimedia) as an example of a VSTi virtual instrument, you can create custom sampled sound libraries and loops for an instrument, where the basic strategy is to record a separate note, played in a particular style, for each note that can be played on the instrument, which for electric guitar at standard tuning means playing and recording a set of individual notes ranging from E3 to perhaps D6, depending on the number of frets . . .
The notes are recorded in a Digital Audio Workstation (DAW) application like Digital Performer (MOTU) on the Mac, and once they are recorded, each individual note is exported as a separate WAVE file with a specific name that follows a standard naming convention, which can include the scientific pitch notation name of the note as part of the suffix; this makes the work that SampleTank does in analyzing and converting the audio files to sound samples easier and more precise . . .
All these individual note audio files are put into a single folder, which then is referenced by SampleTank to analyze and to create the custom sampled sound library for the instrument played with a specific articulation and dynamic . . .
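For reference, here is a minimal Python sketch of the kind of note-per-file naming I am describing, where one WAVE file name is generated per semitone with the scientific pitch notation note name as the suffix; the "EGtrTrem_" prefix and the E3 to D6 range are assumptions made for illustration only, so check the SampleTank documentation for the exact naming convention it expects . . .

```python
# A minimal sketch (not SampleTank's actual requirements) of generating one
# WAVE file name per semitone, with the scientific pitch notation note name
# as the suffix. The "EGtrTrem_" prefix and the E3..D6 range are assumptions
# made for illustration only.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_name(midi_note: int) -> str:
    """Convert a MIDI note number to scientific pitch notation (C4 = 60)."""
    octave = midi_note // 12 - 1
    return f"{NOTE_NAMES[midi_note % 12]}{octave}"

E3, D6 = 52, 86                      # MIDI numbers for the assumed guitar range
for midi_note in range(E3, D6 + 1):
    print(f"EGtrTrem_{midi_to_name(midi_note)}.wav")
# -> EGtrTrem_E3.wav, EGtrTrem_F3.wav, ..., EGtrTrem_D6.wav
```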
For example, you might have played the notes on the electric guitar when there was a tremolo effects pedal set at a specific pulsing rate, depth, and so forth, and in this case each note will have the same pulsing rate, depth, and so forth. If you sampled the full range of notes, then there are no "missing notes" or "gaps" in the sequence of notes, which is very important to understand . . .
However, if you only record a sample of every third or perhaps fifth note in the chromatic scale, then there will be (a) real notes and (b) emulated notes, which is where everything moves into a surreal universe for several reasons . . .
When there are "gaps" or "missing notes" in a sampled sound library, the SampleTank engine will compute the non-sampled notes using various algorithms, where it might start with the audio for the nearest lower real note and then adjust it mathematically to create a nearby higher note, but it also might start with the nearest higher real note and then adjust is mathematically to create a nearby lower note, which for pitch is not so difficult to do and generally sounds very realistic but with a few caveats . . .
One of the caveats is that the emulated note is created basically by multiplying the pitch of the real note by some value, where multiplying by a value greater than one (1) produces a higher pitched note but multiplying by a value less than one (1) produces a lower pitched note, and while this works very nicely for pitch, frequency, harmonics, overtones, and whatever, it does not work so well for time-sensitive effects like vibrato, tremolo, echo, and so forth . . .
In other words, you can begin with a real "Middle C" (C4 in scientific pitch notation) and multiply it by some value greater than 1 to create a C#4, and the algorithmically computed note (C#4) will sound nearly as convincing as the original real note (C4) in terms of pitch, frequency, harmonics, overtones, and whatever. However, using the electric guitar with fixed-rate tremolo example, the tremolo for the C#4 note will be faster than the tremolo for the C4 note, because the tremolo effect is part of the audio sample and cannot easily be separated from it, so as the audio sample effectively is multiplied, so is the embedded tremolo rate . . .
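To put some numbers on it, here is a minimal sketch, assuming an equal-temperament semitone ratio of 2^(1/12) and an assumed 5 Hz tremolo baked into the C4 sample, which shows how a resample-style pitch shift raises the embedded tremolo rate right along with the pitch . . .

```python
# A minimal numeric sketch of why resample-style pitch shifting also speeds up
# anything baked into the sample: shifting C4 up one semitone multiplies every
# frequency in the file by 2**(1/12), including the tremolo pulsing rate.
# The 5 Hz tremolo rate is an assumed example value, not a measured one.

SEMITONE = 2 ** (1 / 12)          # ~1.0595, the equal-temperament semitone ratio

c4_pitch     = 261.63             # Hz, the real sampled note (Middle C, C4)
tremolo_rate = 5.0                # Hz, pulsing rate baked into the C4 sample

csharp4_pitch   = c4_pitch * SEMITONE       # ~277.19 Hz, sounds like a real C#4
csharp4_tremolo = tremolo_rate * SEMITONE   # ~5.30 Hz, noticeably faster pulsing

print(f"C#4 pitch:   {csharp4_pitch:.2f} Hz")
print(f"C#4 tremolo: {csharp4_tremolo:.2f} Hz (was {tremolo_rate:.2f} Hz)")
```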
And while all the articulations and dynamics work very nicely and precisely in NOTION 3, what they do is highly dependent on the specific sampled sounds for each instrument, and the general rule I use is that the best strategy is to use a specific set of samples for the exact articulations and dynamics you need, because otherwise with some articulations and dynamics you get variations due to the real notes and algorithmically computed emulated notes, and you can get these variations even when every note is sampled, as explained later using a pizzicato example (see below) . . .
The other strategy is to select a set of samples that (a) have no "gaps" or "missing notes" and (b) were played with no articulations or dynamics at all, which maps basically to a complete set of plain notes where each semitone in the full range of the instrument is sampled as a separate audio file . . .
If you do the combinatorial mathematics on this, then it becomes clear that doing a complete set of samples for every possible combination of articulations and dynamics might take billions of years, so the sensible strategy is to make an effort to keep everything as simple as possible, which is the strategy I use here in the sound isolation studio, where as a general rule I avoid using any articulations and dynamics whenever possible . . .
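As a back-of-the-envelope illustration, with completely made-up counts that do not come from NOTION 3, SampleTank, or any particular library, even modest assumptions multiply out to a very large number of separate recordings . . .

```python
# A back-of-the-envelope count of how many individual recordings a "complete"
# sample library would need. Every count below is a made-up illustrative
# number, not a figure taken from NOTION 3, SampleTank, or any real library.

notes         = 35   # every semitone from E3 to D6 on the example guitar
articulations = 8    # sustain, staccato, palm mute, harmonics, slides, ...
dynamics      = 6    # pp, p, mp, mf, f, ff
vibrato_modes = 3    # none, slow, fast
round_robins  = 4    # alternate takes of each combination

total = notes * articulations * dynamics * vibrato_modes * round_robins
print(f"{total:,} separate recordings")   # 20,160 even for these toy numbers
```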
In other words, I prefer to select a specific sample where the instrument was played by the musician in the style (articulations and dynamics) that I want, and in some instances I do some of the articulation work in the DAW application using the NOTION 3 generated audio. For example, if I want tremolo at a fixed rate for every note, regardless of whether the note is real or emulated, then I do the tremolo using a VST tremolo effects plug-in in the DAW once I have recorded the NOTION 3 generated audio as a soundbite in Digital Performer via ReWire . . .
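As a sketch of what that post-render approach amounts to, and using assumed rate, depth, and sample rate values rather than the behavior of any particular plug-in, a tremolo at this stage is basically an amplitude LFO applied uniformly to the already-rendered audio, so real and emulated notes pulse at exactly the same speed . . .

```python
# A minimal sketch of applying tremolo after the fact, the way a VST tremolo
# plug-in would: an amplitude LFO at a fixed rate multiplied into the
# already-rendered audio. The rate, depth, and sample rate are assumed values.

import numpy as np

def apply_tremolo(audio: np.ndarray, sample_rate: int,
                  rate_hz: float = 5.0, depth: float = 0.5) -> np.ndarray:
    """Multiply the signal by a sine-shaped gain LFO (classic tremolo)."""
    t = np.arange(len(audio)) / sample_rate
    gain = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return audio * gain

# Example: one second of a plain 440 Hz tone standing in for the rendered audio.
sr = 44100
tone = 0.5 * np.sin(2.0 * np.pi * 440.0 * np.arange(sr) / sr)
wobbled = apply_tremolo(tone, sr, rate_hz=5.0, depth=0.6)
```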
And common sense strongly suggests that the result of using a pizzicato articulation in music notation on a violin whose sampled sounds were recorded with the violinist playing the violin with a bow is not going to sound the same as a sample recorded with the violinist actually playing the violin pizzicato . . .
All the articulations and dynamics work, but the way they affect specific sampled instruments will vary depending on the way the instruments were played by the musician when the individual note samples were recorded . . .
Explained another way, NOTION 3 does all the articulations and dynamics required to control and play a cowbell very precisely, but if the sample is a tuba played with vibrato, then it is not going to sound like a cowbell . . .
In other words, NOTION 3 sends the correct commands and parameters, but if you instruct NOTION 3 to send the commands and parameters for a piece of chocolate pie but the restaurant does not have chocolate pie, then you are not going to get chocolate pie . . .
Summarizing, everything works, but the generated audio is highly dependent upon the specific sampled sounds for each instrument, and in some instances, as best as I can determine, NOTION 3 also does a bit of computational work for certain types of articulations and dynamics. So the best strategy is to learn how the various sampled instruments sound and how articulations and dynamics specified via music notation affect the sampled instrument sounds, which basically requires one to have the ability to remember virtual festivals of very specific nuances for all the instruments in sound sample libraries, or at least to be able to find things quickly "by ear", which is fabulous . . .
Fabulous! 
P. S. As an example, one of the things I like to do is to count and memorize things, which I first started doing as an entertaining thought exercise after working in a grocery warehouse for a while, during which time I realized the vast importance of being able to identify every single item in a grocery store, as well as its size, quantity, price, and so forth. It just so happens that this is relatively easy for me to do, where one of my curious activities involves memorizing everything sold at a Walmart Supercenter, with a few exceptions, most of which are items that change frequently like clothing styles, not because it is boring, but because there is no point memorizing stuff that is virtually guaranteed not to be on the shelves in a few months, so I skip the clothing sections and some of the other seasonal sections like Christmas decorations . . .
And this skill is very useful with sampled sound libraries, where for example I was curious about French Horn samples in Miroslav Philharmonik (IK Multimedia) last year, so I started counting them but stopped after about an hour or so when the count went over 150 . . .
In other words, there are somewhere in the range of 150 to perhaps 250 very specific sound samples for French Horns in various articulations and dynamics, individually and in several different types of ensembles and sections, which is one of the things about Miroslav Philharmonik that I like, because if I need a French Horn played by the musician in a specific articulation, dynamic, and style, it probably is there somewhere. Once I find it, the NOTION 3 generated audio for the French Horn sample I selected in the Miroslav Philharmonik standalone user interface sounds the way I want it to sound without needing to use any articulations and dynamics in the music notation, because the French Horn was played by the musician using the articulations and dynamics that I want, which effectively does not require either Miroslav Philharmonik or NOTION 3 to do any arbitrary computing to create emulated notes, since everything already is there in the real notes that were sampled . . .
For reference, the particular skill to which I am referring generally is called "eidetic memory", which colloquially is called "photographic memory" but applies to all the senses (sight, hearing, taste, smell, touch, and whatever), and my perspective is that most people have eidetic memory capabilities but that the eidetic memory capabilities are more developed in some folks . . .
There are various strategies for developing and enhancing natural eidetic memory skills, and this is one of the reasons that I do the Walmart Supercenter exercise, which I make FUN by realizing that it is totally strange but makes a bit of sense in the grand scheme of everything . . .
Mnemonics are another way to improve memory skills, where two examples are (a) "Kings Play Chess On Fancy Green Sofas", which is a mnemonic for remembering the classic parts of biological taxonomy (Kingdom, Phylum, Class, Order, Family, Genus, Species), and (b) "I Don't Play Lydian Mode A Lot", which is a mnemonic for remembering the classic musical modes (Ionian, Dorian, Phrygian, Lydian, Mixolydian, Aeolian, Locrian) . . .
And there are other types of exercises, where the general idea is that you can experiment with different strategies to determine which ones work best for you, but it is important to understand that memorizing stuff requires a bit of work, some of which is simply dull, boring, and repetitive, but the more you exercise your mind, the more stuff you can do with your mind, for sure . . .
For sure! In this respect, perhaps the most important thing is to determine what you truly enjoy doing and want to do passionately, which typically coincides with an odd epiphany and for me occurred when I realized that I like the way a Fender vacuum tube amplifier smells when it is warmed up, which basically provided the clue that I really like electric music . . .
And once you have the required epiphany, then you soon realize that everything involved in what you truly like to do is important, which in turn makes it not so completely and totally dull, boring, and repetitive, because while initially it might be dull, boring, and repetitive, sooner or later once you learn it, then (a) there you are and (b) you just know it, which is vastly useful in one way or another, where for example one might accurately suggest that most of the people on this planet probably are dumbfounded that anyone would notice and then remember that Elvis Presley does a stellar uvular trill on the "h" of "hound dog" toward the end of his hit record "Hound Dog", which is fabulous . . .
Uvular Trill (wikipedia)

[NOTE: The stellar uvular trill occurs on the "h" of the second "hound dog" in the last verse, chorus, or whatever at approximately 2:01 in the following remastered version, and it sounds more like a snare drum roll, but it is a uvular trill . . . ]
"Hound Dog" (Elvis Presley) -- Youtube music videoFabulous! 
P. S. "Hound Dog" is one of the songs I study and have been studying for over half a century, along with "Walk Don't Run" (The Ventures), "She Loves You" (Beatles), and a few others, and the more I listen to these songs the more stuff I hear, which is part of what makes them unique as Gestalts, and there are some vastly useful techniques that one can glean from studying these songs, where for example a uvular trill is a very specific and typically highly advanced vocal articulation for a singer, which makes it all the more amazing that Elvis did it probably intuitively without having any idea what it was called or anything else other than the fact that it worked nicely for the song . . .