These are my thoughts at present on the general ideas of (a) locking tracks and (b) using MIDI sequencer-style velocity controls for music notation . . .
[
NOTE for SouLcRusaDer_kA: My reply to your original post follows later, and it includes a detailed overview that explains how to use both NOTION 4 and a DAW application to create the scenario where you can use the DAW application to edit MIDI velocity at the individual note level. It is an advanced activity, but it is supported. This is a bit complex, so it takes me a while to work through everything logically, hence the short novel . . . ]
(1) Locking a track in the music notation universe does not make a lot of sense conceptually, since (a) it is easy to create a copy of a NOTION 4 score via "Save As . . . "; (b) when working with music notation nothing changes unless you specifically change it; and (c) one can "lock a track" by recording its generated audio as a soundbite in a Digital Audio Workstation (DAW) application via a ReWire 2 session, noting that in the grand scheme of everything I consider NOTION 4 to be part of a complete digital music production system rather than a complete digital music production system by itself . . .
[IMAGE: Complete Digital Music Production System]

(2) Regarding velocity, I think that doing this using traditional marks is the preferred strategy when one is focused on music notation, but I understand the logic which suggests using the MIDI sequencer strategy, at least from the perspective that NOTION 4 ultimately communicates with VSTi virtual instrument engines via MIDI, hence everything happens based on MIDI messages and parameters . . .
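To make the MIDI part concrete, the following is a minimal sketch in Python of the three raw bytes that comprise a MIDI Note On message, which is the level at which velocity actually travels from the notation application to the VSTi virtual instrument engine, noting that the helper function name here is mine for illustration and is not from NOTION 4 or any particular library . . .

```python
# A minimal sketch of the raw bytes in a MIDI Note On message,
# which is how a notation program ultimately tells a VSTi to play a note.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message.

    channel:  0-15  (MIDI channels 1-16)
    note:     0-127 (60 = Middle C, i.e., C4)
    velocity: 0-127 (how "hard" the note is struck)
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    status = 0x90 | channel          # 0x9n = Note On for channel n
    return bytes([status, note, velocity])

# Middle C on channel 1 at a moderately loud velocity:
print(note_on(0, 60, 96).hex())      # -> "903c60"
```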
But I think there are other considerations, one of which involves the activities of arranging, producing, and mixing, with the latter two activities being best performed in a DAW application for a variety of reasons, noting that in some respects arranging is a part of composing and producing, although I usually promote it to a higher level, since it is an important activity . . .
With the caveat that producing and mixing strategies and techniques generally are strongly dependent on specific genres, my perspective on the idea of adjusting the velocity of individual notes is that it tends to fascinate and to mesmerize composers who do not understand producing and mixing in the digital music production universe, because even for more traditional musical genres it is vastly frivolous, noting that the frivolity is the direct consequence of the basic rules of acoustic physics, which in the digital music production universe are the defining rules . . .
Explained another way, devoting great attention to the way each note is played makes good sense when one is working with real musicians who are playing real instruments or singing, since in this scenario the most subtle nuances are possible when the musicians and singers are skilled; but when everything switches to virtual instruments, digitized sampled sounds, and MIDI, there are other considerations, because it is completely and totally different in every respect and will continue to be so for perhaps another few decades, depending primarily on how quickly advances in computing machines and artificial intelligence algorithms occur, where at present the fact of the matter is that the technology is not available in any practical way, if the specific technologies even exist at present . . .
Why do I suggest this?
Great question! When one attends what I call a "traditional" concert by a symphonic orchestra where there is no sound system and everything is real, the rules are different, and in this scenario music notation and the conductor provide guidance, but ultimately people (musicians and singers) and real instruments are making the sounds in a way that makes it practical to focus intensely on minutiae . . .
However, in the digital music production universe, everything is vastly different and, among other things, one of the basic rules of acoustic physics provides the clue that quite a few key aspects of audio are logarithmic and geometric, which specifically is the case with such things as volume, panning, pitches, tones, textures, harmonies, and so forth, in both the analog and the digital subverses, where there is a bit more of what one might call "wiggle room" in the analog subverse, but even then everything is done according to the rules of electromagnetism and mechanical physics . . .
As it pertains to volume, this maps to a combination of volume level and perceived loudness, where the basic rule of acoustic physics is that generally for a sound to be perceived as being twice as loud, its power needs to be increased roughly 10 times, hence the logarithmic unit called the "decibel (dB)", although there are different types of decibels, where one of the key types is a unit of sound pressure level ("dB SPL") and is different from the decibel used for volume sliders in a DAW application or in the NOTION 4 Mixer . . .
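For reference, here is a small sketch of the decibel arithmetic, assuming the standard definitions of 10 times the base-10 logarithm for power ratios and 20 times for amplitude ratios . . .

```python
import math

# A small sketch of the decibel arithmetic described above: decibels are
# logarithmic, and a 10x increase in power is +10 dB, which is roughly
# the point at which a sound is perceived as "twice as loud".

def db_from_power_ratio(ratio: float) -> float:
    return 10.0 * math.log10(ratio)

def db_from_amplitude_ratio(ratio: float) -> float:
    # Power goes as the square of amplitude, hence the factor of 20.
    return 20.0 * math.log10(ratio)

print(db_from_power_ratio(10.0))      # 10.0 dB  (10x power, ~twice as loud)
print(db_from_amplitude_ratio(2.0))   # ~6.02 dB (doubling the amplitude)
```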
The dynamic range of normal human hearing is quite amazing, and after devoting a bit of attention to some difficult calculations involving physics and chemistry, as best as I can determine the human hearing apparatus can detect the sound made by a single electron vibrating, or more specifically the changes in standard atmospheric pressure made by the motion of a single electron, and this is on the extreme pianissimo side of the dynamic range of normal human hearing, where the other side includes considerably more violent perturbations . . .
[
NOTE: The logic I used for the calculation is that an electron has a definite size, which I think can be approximated, which is what I did; and then I mapped it to how much air it would move at standard atmospheric pressure. There were some intermediate presumptions and calculations, but I think it makes a bit of sense, especially since it fits nicely with the universe of quantum electrodynamics and the human eye being able to perceive a single photon, since an electron can transform into a photon, more or less, depending on the way one interprets Feynman diagrams . . . ]
[IMAGE: In this Feynman diagram, an electron and a positron annihilate, producing a photon (represented by the blue sine wave) that becomes a quark–antiquark pair, after which the antiquark radiates a gluon (represented by the green helix).]
[SOURCE: Feynman Diagrams (Wikipedia)]
[The value of the minimal threshold of hearing] has wide acceptance as a nominal standard threshold and corresponds to 0 decibels. It represents a pressure change of less than one billionth of standard atmospheric pressure.
[
NOTE: In this usage, "decibels" is "dB SPL", as indicated by the information in the second sentence, and this is completely and totally different from the "0 dB" setting of a volume slider in a DAW application or the NOTION 4 Mixer, as is the case with "0 dB" for an external digital audio and MIDI interface like the MOTU 828x or the PreSonus AudioBox 1818VSL, where for the external digital audio and MIDI interfaces "0 dB" maps to a setting of "10" on a Marshall electric guitar amplifier with the Nigel Tufnel (Spinal Tap) option, which extends the volume level to "11" or its more colloquial value "+6" . . . ]
[SOURCE: Threshold of Hearing (HyperPhysics)]
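Here is a hedged sketch of the "dB SPL" computation, using the standard 20 micropascal reference pressure, which also checks the "less than one billionth of standard atmospheric pressure" statement from the HyperPhysics quote . . .

```python
import math

# Sketch of the "dB SPL" calculation, which uses a fixed physical reference
# pressure of 20 micropascals (the nominal threshold of hearing), unlike the
# relative "0 dB" of a DAW volume slider, which has no physical reference.

P_REF = 20e-6          # 20 micropascals, the 0 dB SPL reference pressure
P_ATM = 101325.0       # standard atmospheric pressure in pascals

def db_spl(pressure_pa: float) -> float:
    return 20.0 * math.log10(pressure_pa / P_REF)

print(db_spl(P_REF))   # 0.0 dB SPL, by definition
print(P_REF / P_ATM)   # ~2e-10, i.e., less than one billionth of standard
                       # atmospheric pressure, matching the quote above
```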
There is another very important aspect to this, and it involves what happens when the velocity parameter is changed for a digitally sampled sound, where specifically this is done as a computation rather than by having the real musician play the note at a different speed or with different force . . .
The digitally sampled sound has an intrinsic or native velocity, but since (a) the corresponding MIDI note event parameter ranges from 0 to 127 and (b) changing the velocity maps to specifying a different value for the parameter, this does not change the way the real musician played the real instrument, hence (a) the new value is entirely arbitrary and (b) the result is not the same as having the real musician play the real instrument at the desired velocity . . .
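As an illustration of the computation, a simple sample player might map velocity to gain with something like the following sketch, where the specific power curve is an assumption of mine for illustration, since real sample engines vary widely and may also crossfade between separately recorded velocity layers . . .

```python
# A sketch of why changing the velocity parameter is a computation, not a
# new performance: a simple sample player might just scale the gain of the
# one recorded sample by a curve like the one below. The exponent and the
# curve itself are illustrative assumptions, not any particular engine's.

def velocity_to_gain(velocity: int, exponent: float = 2.0) -> float:
    """Map a MIDI velocity (0-127) to a linear gain (0.0-1.0)."""
    return (velocity / 127.0) ** exponent

for v in (32, 64, 96, 127):
    print(v, round(velocity_to_gain(v), 3))
# 32 0.063, 64 0.254, 96 0.571, 127 1.0 -- the same recorded sample, only
# quieter or louder; the musician's original playing never changes.
```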
Yet another aspect involves sampled sound libraries which are not chromatically sampled, which in simple terms maps to only a subset of notes actually being sampled and all the intermediate or non-sampled notes being computed by various algorithms that have logarithmic components, where for example in a sampled sound library where C4 (a.k.a. "Middle C" in scientific pitch notation) and D4 are sampled but C#4 is not sampled, the sound for C#4 will be computed typically either (a) by using the sampled sound for C4 and increasing its pitch via computation or (b) by using the sampled sound for D4 and decreasing its pitch via computation, noting that this is not necessary for chromatically sampled sound libraries where every note is sampled in the 12-tone universe, although if a chromatically sampled sound library is not sampled for 24-tone purposes, then computations are required, as is the case when certain types of fluctuating articulations like tremolo and vibrato are specified but there is no corresponding set of sampled sounds . . .
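For the pitch computation, the key ratio in the 12-tone universe is 2 raised to the power of the number of semitones divided by 12, as shown in the following sketch, which assumes a naive resampling approach rather than whatever specific algorithm a given sample engine actually uses . . .

```python
# Sketch of the pitch computation for a non-chromatically sampled library:
# shifting a sample by n semitones in 12-tone equal temperament maps to a
# playback-rate ratio of 2^(n/12). This is the naive resampling approach;
# real engines may use more sophisticated time/pitch algorithms.

def playback_ratio(semitones: float) -> float:
    return 2.0 ** (semitones / 12.0)

print(playback_ratio(+1))    # ~1.0595: C4 sample sped up to approximate C#4
print(playback_ratio(-1))    # ~0.9439: D4 sample slowed down to approximate C#4
print(playback_ratio(+0.5))  # ~1.0293: a quarter-tone step for 24-tone needs
```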
~ ~ ~
Continued in the next post ~ ~ ~