Attention:

Welcome to the old forum. While it is no longer updated, there is a wealth of information here that you may search and learn from.

To partake in the current forum discussion, please visit https://forums.presonus.com

Notion 3, DISCO Songs, and Sparkles

A Forum to Discuss NOTION

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Mon May 02, 2011 3:49 pm

dcuny wrote:Because of this, it's convenient to use roman numerals to convert a chord pattern to the scale degrees, with upper case for major chords (I, IV, V) and lower case for minor chords (ii, iii, vi). For example, the chord progression in C major:

C | Am | F | G | C

and in D major:

D | Bm | G | A | D

both map into the same roman numerals:

I | vi | IV | V | I


I like this, because it is a chord pattern that I use a lot, along with the variations that the Beatles used, where instead of going downward to the relative minor third, they go up to the second or something, which at the moment I forget, but I noticed a long time ago, since whatever the second chord is, the pattern fits just as well with " I | vi | IV | V | I ", which is what I call the "Sleepwalk" pattern . . .
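As an aside, the mapping dcuny describes is mechanical enough to sketch in a few lines of Python, where the helper below is hypothetical (not from any library) and assumes plain diatonic triads in a major key with no accidentals . . .

```python
# Sketch: convert chord names to roman numerals relative to a major key,
# assuming natural (unaltered) major-scale harmony. Hypothetical helper.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]            # semitone offsets of the scale
NUMERALS = ["I", "ii", "iii", "IV", "V", "vi", "vii°"]

def to_roman(key, chords):
    """Map chord names like 'Am' to roman numerals in the given major key."""
    scale = [(NOTES.index(key) + step) % 12 for step in MAJOR_STEPS]
    result = []
    for chord in chords:
        root = chord.rstrip("m")                # strip the minor marker
        result.append(NUMERALS[scale.index(NOTES.index(root))])
    return result

print(to_roman("C", ["C", "Am", "F", "G", "C"]))  # ['I', 'vi', 'IV', 'V', 'I']
print(to_roman("D", ["D", "Bm", "G", "A", "D"]))  # ['I', 'vi', 'IV', 'V', 'I']
```

Both progressions print the same numerals, which is the whole point of the notation . . .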

"Sleepwalk" (Santo & Johnny) -- YouTube record

I like the Nashville Number System, but the only practical way to do the relative minor third is to use a minus sign, which is a bit awkward (1, -3m, 4, 5), so I am intrigued by the Roman numeral system, since it avoids having to use negative numbers, and it looks better than (1, -3m, 4, 5) or (1, 6m, 4, 5), although I suppose it could use a minus sign to indicate that a chord is played lower rather than higher . . .

Your explanation and overview is very clear, and while I already knew some of the information in one way or another, it is helpful to read it in a logical overview that includes advanced concepts, which gives me something to ponder for a while, for sure . . .

For sure! :)
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
User avatar
Surfwhammy
 
Posts: 1137
Joined: Thu Oct 14, 2010 4:45 am

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Thu May 26, 2011 3:08 am

I devoted a good bit of the past few weeks to pondering the general concepts involved in doing what I call "sparkling" an instrument, which basically is a technique for putting the notes played by an instrument into motion within what I call the "Spherical Sonic Landscape™", where for reference my avatar for this FORUM is one vector plane of the Spherical Sonic Landscape . . .

Putting the notes played by an instrument into motion is something that has fascinated me for a long time, but it is not such an easy thing to do with real instruments in a traditional DAW application, although there are ways to do it . . .

Doing it entirely within a DAW application effectively requires either (a) recording individual notes in some number of tracks, which with a real instrument is extraordinarily difficult to do, or (b) editing the panning for every note played by a real instrument to adjust its panning location, which also is extraordinarily difficult to do, where in both instances "extraordinarily difficult" maps to "imprecise" and "not easily repeated", both of which are problematic . . .

On the other hand, doing Eddie Kramer style slow panning in a rainbow pattern is not difficult to do in a DAW application, and there are editing tools that make it possible to smooth the panning curve, and there are other ways to create motion after the fact in a DAW application, as well . . .

[NOTE: Eddie Kramer produced the Jimi Hendrix Experience albums that had "flying guitars", and he also produced a lot of the KISS albums, among other activities, and he is one of my favorite producers . . . ]

However, being able to control the motion of notes at the individual note level with great precision is stellar, and it adds a virtual festival of motion techniques to the musical palette, which is where music notation, virtual instruments, and Notion 3 come into play, for sure . . .

For sure!

Music notation is very precise, and it is repeatable, which is extraordinarily important for this type of work, since among other things it makes it possible to do experiments without needing to redo everything . . .

"Sparkling" a virtual instrument in Notion 3 is done by making some number of identical instrument "clones", where each one is the same virtual instrument and begins with all the same notes, which tends to be easier than doing the notes for each cloned instrument separately . . .

The total number of instruments required for "sparkling" depends on the type of "sparkle", and it can be two instruments or as many as eight, which is a good upper limit, and each instrument is set in the Notion 3 Mixer to a specific panning location . . .

[NOTE: At present, the way I do it most of the time is like putting a checker on each square of a checkerboard, where the first row is the original instrument and the next seven rows are "clones" of the notes of the original instrument, as well as being set to the same virtual instrument. Then, one at a time I replace notes with equal-valued rests, which is like removing checker pieces from the checkerboard, where the overall goal is to create a geometric pattern that defines the motion. There are other ways to create the pattern, but this way is easy, although it takes a while, which for eighth notes can be an hour or two, but (a) it works and (b) it is faster than any other way of doing it, plus (c) it is accurate and precise in every respect . . . ]
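The checkerboard procedure in the NOTE above can be sketched in Python, where the notes are just a list and a rest is a placeholder string, which is a hypothetical model of the score rather than anything Notion exposes . . .

```python
# Sketch of the "checkerboard" sparkle: make 8 identical copies of a note
# list, then in each copy keep every 8th note (offset per clone) and turn
# every other position into an equal-valued rest.
REST = "rest"

def sparkle(notes, clones=8):
    """Spread a note sequence across `clones` staves, one note per stave in
    rotation; every other position becomes a rest."""
    return [[note if i % clones == k else REST for i, note in enumerate(notes)]
            for k in range(clones)]

melody = ["E4", "G4", "B4", "E5", "E5", "B4", "G4", "E4"]   # E minor arpeggio
for stave in sparkle(melody):
    print(stave)
```

Each column of the printout has exactly one sounding note, which is the geometric pattern that defines the motion . . .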

The panning control is a rainbow shaped arc, and for eight instruments I divide each half of the arc into four separate locations, ranging from far-left to top-center for the left side and top-center to far-right for the right side, which works nicely and makes it practical to send the four left-side instruments to one bus, while the four right-side instruments are sent to another bus, and this has the advantage of requiring just two stereo tracks in the DAW application . . .

The easiest way to understand the panning configuration is in terms of half of a pie, where there are eight sections or slices, and each section or slice is the panning location for a specific instrument . . .

I keep the volume levels at 0dB in the Notion 3 Mixer, since this works best for recording the Notion 3 generated audio in Digital Performer 7 via ReWire as soundbites, and I can adjust the volume in Digital Performer 7 once the soundbites are recorded, and it is possible to do some types of panning adjustments as well, where for example I can narrow the panning range . . .

The following screen capture shows an instrument that has been "sparkled" for two measures, which in this instance is one of the SampleMoog (IK Multimedia) harpsichords in the newest version of the "basic rhythm section" for "(Baby You Were) Only Dreaming" (The Surf Whammys):

Image

[NOTE: In this pattern, the staves from top to bottom map to far-left to top-center to far-right, so for example the top staff plays its notes at far-left, while the bottom staff plays its notes at far-right. So, the left side of the "V" plays notes in a rainbow arc from far-left to far-right, but the right side of the "V" plays notes from far-right to far-left in sequence. But the notes, themselves, also follow a pattern, which on the left side of the "V" is a simple E minor arpeggio {E4, G4, B4, E5, E5, B4, G4, E4} and on the right side of the "V" is a simple G major arpeggio {G4, B4, E5, G5, G5, E5, B4, G4}, both of which also are "V" patterns, which makes it two "V" patterns of notes panned in a larger "V" pattern, which due to the pitch and panning of the notes and instruments is like the trajectory or path traced by a windshield wiper. And there are a lot of patterns that work nicely for "sparkles". In a practical sense, this is easy to do with music notation and virtual instruments in Notion 3, but it would not be so easy for an orchestra to do, although with a bit of practice it is possible, at least in theory, except that for a real orchestra to do the panning aspects, it would require a vastly different seating configuration, since for example the violins would need to be in a straight row from the left of the stage to the right of the stage, and doing far-left and far-right kick drums would require a kick drum on the left side of the stage and another kick drum on the right side of the stage. I think this would be vastly fascinating, but the musicians would need to drink a lot of coffee at first, since it would be like playing a very strange but highly geometric variation of musical chairs, where the notes move around but each musician stays in their respective chair, and conducting it would be a lot of FUN, too . . . ]

Currently, it takes me several hours to "sparkle" an instrument, which is fine with me, since I like the way it sounds, for sure . . .

For sure!

As with most things, the more you do it, the easier and faster it becomes, which is the way it is working with "sparkling" and is the reason I decided to ponder it mathematically for a few weeks, since understanding the mathematics and geometry provides clues to developing a useful set of rules, which is something I discovered when I decided to make sense of electric guitar whammying several years ago, an effort that took a few years, since the rules for whammying are quite strange and a bit counterintuitive until you do it for a while, where the key bit of information is that whammying is a multistep activity which for the most part needs to be started before it is heard and then needs to be done primarily in the "in-between" spaces of everything else, otherwise it is not heard and simply blends into obscurity . . .

Another way to consider "sparkles" is in terms of arpeggios, where a pattern of notes are played in sequence in different panning locations, which is where geometry comes into play, as does symmetry . . .

From another perspective it is like ornamenting harpsichord notes, and there is a similar set of rules for "sparkling" . . .

This is the newest version of "(Baby You Were) Only Dreaming" (The Surf Whammys), and it has two newly added "sparkled" instruments, both of which are different flavors of SampleMoog harpsichords, and each of the two instruments has its notes spread over eight instruments (the original instrument and seven "clones"), which maps to 16 instruments, and I also added a single stereo track of Heavy Metal rapid double-kick drums playing triplets at top-center, which is fabulous . . .

Image

[NOTE: There now are 48 instruments, which are spread over three Notion 3 scores, all of which are separate but synchronized, which is the easy way to do it, and the Notion 3 generated audio is recorded in Digital Performer 7 via ReWire as soundbites. I do not use effects in these Notion 3 scores, because it is easier to do effects in Digital Performer 7 on the soundbites for a variety of reasons, including keeping everything within the resource limitations of Digital Performer 7, Notion 3, ReWire, and the various IK Multimedia virtual instruments, all of which run in 32-bit application space. This is a headphone mix, which is the way I mix when I am working on a song, and it is easiest to hear the "sparkles" when you listen with studio-quality headphones like the SONY MDR-7506 (a personal favorite) . . . ]

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- Basic Rhythm Section May 25, 2011 -- MP3 (9.4MB, 282-kbps [VBR], approximately 4 minutes and 35 seconds)

Fabulous! :D

P. S. For comparison purposes, this is the previous version of "(Baby You Were) Only Dreaming", which has 31 virtual instruments, where several instruments are "sparkled" but two at a time, with a few being "sparkled" three at a time and one being "sparkled" four at a time, really . . .

[NOTE: This version is shorter by approximately 10 seconds, since in the new version one of the Moog harpsichords has echo, so I let the echo repeat and fade, which added 10 seconds, more or less . . . ]

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- Basic Rhythm Section May 9, 2011 -- MP3 (9.2MB, 281-kbps [VBR], approximately 4 minutes and 23 seconds)

Really! :)

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Sat May 28, 2011 10:03 pm

Since the last set of Moog Harpsichord sparkles were not so easy to hear, I decided to double them with a set of Sonik Synth 2 (IK Multimedia) "Psaltery Harp" sparkles, which works nicely . . .

But since I do not like to do an exact double of an instrument, I added a few phrases to the new set of Psaltery Harp sparkles, which for reference has the notes spread over 8 clefs (original clef and 7 clones) . . .

Image
Psaltery Harp and Bow ~ Schoenhut Piano Company

Psaltery Harp (Schoenhut Piano Company)

[NOTE: The new Psaltery Harp sparkles sound primarily like tiny metallic tinkly bells, but for lower notes they sound a bit like a tambura or a droning sitar playing a sustained phrase, which I think depends on whether the sampled notes are picked or bowed . . . ]

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- May 28, 2011 -- MP3 (9.4MB, 281-kbps [VBR], approximately 4 minutes and 35 seconds)

And after thinking about it for a moment, I realized that since I already had an 8-clef sparkle defined in the previous Notion 3 score, all I needed to do was to clone the previous Notion 3 score, which I did by doing a "Save As", and then I only needed to click on the name of the instrument in each track in the Notion 3 Mixer to change virtual instruments, since I have all the IK Multimedia virtual instruments loaded in SampleTank 2.5 XL, so this makes it a bit easier to do a set of sparkles, at least in terms of the initial setup and configuration . . .

[NOTE: The way I keep track of Notion 3 scores for a song is very simple, and it begins with creating a folder for the song. Next I create the basic Notion 3 score, which I use for composing the structure of the song, including drums, bass, chords, and at least a tiny bit of melody or counterpoint. Once this is completed, I record the Notion 3 generated audio in Digital Performer 7 via ReWire as soundbites. When the soundbites are recorded and everything is closed and saved, I open the score in Notion 3 and do a "Save As" where I append a sequential number to the file name and then start working on sparkles or another set of instruments, but while I change some of the existing instruments and clear their notes, I keep a small subset of instruments constant so that I know where I am in the song. If it is a complex song with a lot of instruments, then I keep a list of what each specific Notion 3 score does, but for the most part I just keep moving forward, since by the time the first Notion 3 score is done the song essentially is defined. There are some additional techniques that I use to keep everything as simple as possible, where one technique for 8-clef sparkles is to send the left-side sparkles to Bus A and to send the right-side sparkles to Bus B, so that instead of having 8 stereo soundbites in Digital Performer 7, an 8-clef sparkle requires only two stereo tracks, and it could be done with one stereo track if I sent everything to a single Bus, but I like to split the left-side sparkles on one stereo track and the right-side sparkles on another stereo track, since this provides more panning options in Digital Performer 7 . . . ]

It takes about 5 to 10 minutes to change the virtual instrument for each of the eight clefs, but this is faster than deleting existing clefs and then starting from scratch, so this new technique is very nice . . .

And the same eight clefs will work for other types of sparkles that need fewer clefs, where the primary criterion is that a subset of the existing panning locations works . . .

Currently, this is the way I have the panning locations defined for the 8-clef sparkle:

Code:
L.1 =  (-1.0 pan L, -0.8 pan R)
L.2 =  (-0.7 pan L, -0.5 pan R)
L.3 =  (-0.4 pan L, -0.2 pan R)
L.4 =  (-0.1 pan L, +0.0 pan R)
R.4 =  (+0.0 pan L, +0.1 pan R)
R.3 =  (+0.2 pan L, +0.4 pan R)
R.2 =  (+0.5 pan L, +0.7 pan R)
R.1 =  (+0.8 pan L, +1.0 pan R)
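For reference, the table follows a simple enough pattern that it can be generated programmatically, and this Python sketch reproduces it, assuming the pan dots run from -1.0 (far-left) to +1.0 (far-right) with the innermost slot on each side clamped at top-center . . .

```python
# Generate the 8-clef panning table: four left-side slots stepping by 0.3,
# each 0.2 wide (clamped at 0.0 for the innermost slot), then the right
# side as the mirror image of the left side.
def eight_clef_pans():
    left = []
    for k in range(4):
        lo = round(-1.0 + 0.3 * k, 1)
        hi = min(round(lo + 0.2, 1), 0.0)       # clamp innermost slot at center
        left.append((lo, hi))
    right = [(-hi, -lo) for lo, hi in reversed(left)]
    return left + right

for name, dots in zip(["L.1", "L.2", "L.3", "L.4", "R.4", "R.3", "R.2", "R.1"],
                      eight_clef_pans()):
    print(name, dots)
```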


I think that there is a rule for panning locations that is similar to the rule for loudness and volume, where for loudness and volume the rule is that for something to be twice as loud the volume needs to increase 10 times, which is the reason that decibels are logarithmic rather than linear, as is the case for volume and tone controls on a guitar . . .

Far-left and far-right are easy panning locations to hear, as is top-center, but the intermediate locations are a bit more difficult to spotlight, so I am doing some experiments to determine whether there is a simple high-level rule that applies to the way panning locations need to be specified and controlled to position instruments at the hour locations on the top half of a circular clockface (9:00, 10:00, 11:00, 12:00, 1:00, 2:00, 3:00), where the easy locations are (9:00, 12:00, and 3:00) . . .

And from the perspective of acoustic physics, this is a fascinating use for Notion 3, since it is very precise in its panning locations, and the notes are generated from virtual instruments, which makes everything accurate and repeatable, so there are fewer variables, which is very important when one is trying to make sense of what essentially is an analog phenomenon that involves both the physical hearing apparatus and the perceptual apparatus of the mind, where the reality is that the perceptual apparatus of the human mind does a bit of additional processing that includes the creation of auditory illusions, where for example two identical sounds arriving in a very short time (typically 5 to 15 milliseconds) are combined by the perceptual apparatus of the human mind to create the illusion of a single sound that is louder than each of the two real sounds, with the general idea being to provide a clue that something moving rapidly is approaching, which most likely is a defense mechanism designed to spotlight something like the paw steps of a tiger or whatever in an effort to bring the rapidly occurring sounds to the forefront of consciousness as an alerting cue . . .

So, when stuff like that is happening, what might appear to be intuitively correct rules for controlling sound locations tend to be less intuitive . . .

One of the more interesting rules is that if the same sound is played simultaneously at far-left (-1.0 pan L, -0.8 pan R) and at far-right (+0.8 pan L, +1.0 pan R), it is perceived at top-center when one listens with headphones, which might suggest that positioning a sound at 10:00 or 11:00 might require playing two notes, where one note is at top-center but the other note is at far-right, although the specific location for "top-center" and "far-right" probably depends on whether the note needs to be perceived at 10:00 or 11:00 . . .

Intuitively, I think there should be a very specific algorithm or rule for panning locations that does not require elaborate reverberation or echo . . .
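One standard candidate for such an algorithm is the constant-power pan law, where the channel gains are the cosine and sine of the pan angle, so the total power stays the same at every position, and this is a sketch of that one possibility rather than the rule being sought . . .

```python
import math

# Constant-power (-3 dB) pan law: position 0.0 is far-left (9:00),
# 0.5 is top-center (12:00), and 1.0 is far-right (3:00). The cos/sin
# gains keep L^2 + R^2 constant, so nothing gets louder or softer as
# it moves across the arc.
def constant_power_gains(position):
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

left, right = constant_power_gains(0.5)   # top-center
# both channels land near 0.707 (-3 dB each), and the power sums to 1.0
```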

It took me approximately two years and lots of experiments to discover the rules for whammying, and they certainly are strange rules, so if the panning location rules are similar, then perhaps it will not take so long to discover them . . .

Some of the rules of perception are patently strange, if not surreal, and one of the rules for vision is that the human eye only has receptors for the color red in a very limited location on the retina, so the reality is that if one is standing in the center of a 1,000 acre perfectly level field of red roses in full bloom and "sees" red roses everywhere, approximately 75 percent of the "red" roses are colored arbitrarily by the visual perception apparatus of the human mind based on statistical probabilities resulting primarily from ancestral genetic knowledge . . .

The field of red roses illusion is related to the Purkinje effect, which basically explains what happens in low light, where for example at night a field of red roses will appear to be a field of charcoal roses, since the rods are doing most of the work, but during the day in bright light the reality is that rods do not "see" red, so only the cones actually "see" red, which maps to the raw information going to the visual perception apparatus of the human mind being a blend of (a) the way the field of "red" roses looks in low light and (b) the way the field of "red" roses looks in bright light, where there is more low-light information than bright light information, since (a) there are more rods than cones and (b) cones are located primarily in a very specific and much smaller area of the retina (the fovea centralis or "fovea"), so the visual perception apparatus of the human mind makes a few logical inferences and simply "colors" all the roses "red" . . .

[NOTE: If you look straight ahead, then full color vision is in a circle with a radius of approximately 10 degrees, so if you rotate this forward by 90 degrees onto the typical headphone "rainbow" arc that runs from far-left to top-center to far-right, which is like having a very large round clockface with a big hole in the center through which you put your head and then look forward toward 12:00, you see full color approximately in the region from 11:00 to 1:00, and as you continue to look straight ahead at 12:00 all you see at 9:00 is grayscale, which also is the case at 3:00, and to add a bit more complexity, everything is upside-down, which makes the entire thing a bit mind-boggling and is one of the reasons that a significant part of the human mind is dedicated to visual perception, although some of the visual perception apparatus serves a dual function for processing auditory information very rapidly (in the range of 24 milliseconds for some auditory information but 30 to 60 milliseconds for other auditory information), which leads me to hypothesize that the only practical way to play 20 to 30 notes per second on lead guitar is to move the bulk of the mentation into the frontal eye fields (FEF) region of the brain, which I can do if I simply do not think about it, at all, in any immediately conscious way, since the fact of the matter is that it takes vastly too long to think about a note consciously, hence instead of thinking about notes, you simply play them, which at first is patently surreal but eventually makes sense when you realize that your unconscious mind knows a lot more than one might imagine consciously . . . ]

Frontal Eye Fields (wikipedia)

Image
[SOURCE: Fovea Centralis (wikipedia) ]

Image
[SOURCE: Purkinje Effect (wikipedia) ]

It is stuff like this that I think is happening with panning locations, so the strategy here in the sound isolation studio is to do a series of experiments to determine if there are any simple high-level rules, and Notion 3 is a stellar application for doing this . . .

Lots of FUN! :)
Last edited by Surfwhammy on Thu Aug 11, 2011 1:54 am, edited 1 time in total.

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Fri Jun 10, 2011 11:20 pm

I have been doing some experiments with "sparkles" to discover the rules for "sparkling" the notes of an instrument, and I am making a bit of progress, which at present mostly maps to determining three basic facts:

(1) Panning is logarithmic, in the sense that the locations between (a) far-left and top-center and (b) top-center and far-right are not linear, which has the consequence that positioning an instrument at far-left, top-center, or far-right is easy, since these are distinct and essentially are "all-or-nothing" locations, but positioning an instrument anywhere else follows a somewhat counterintuitive set of rules . . .

If instead of using far-left to top-center and then to far-right to describe a "rainbow" arc for panning, you use the upper hemisphere of a round clock face, then far-left maps to 9:00; top-center maps to 12:00; and far-right maps to 3:00 . . .

Positioning an instrument at 9:00, 12:00, or 3:00 is easy, but positioning an instrument at 10:00 or 2:00 is not so easy, as is the case with positioning an instrument at 11:00 and 1:00, and the difficulty of positioning instruments at what one might call "in-between" locations is the logarithmic aspect . . .

Volume and tone controls have audio tapers (as described below), but I am not certain that balance and panning controls also have an audio taper. If balance and panning controls have an audio taper, then perhaps the problem is that the taper should be different, which is something I plan to investigate in more detail, because panning controls tend to behave more like a very narrow bandpass filter with high "Q", similar to the following diagram . . .

Image

(2) Volume and perceived loudness are logarithmic, which I already knew, and this is the reason for example that volume and tone controls on an electric guitar or radio have potentiometers with what is called an "audio taper", which is a fancy name for a "logarithmic taper", since for a sound to be perceived as being twice as loud, the volume level of the sound needs to increase 10 times, hence the "decibel" is a logarithmic unit . . .

(3) Pitch is both logarithmic and geometric, since the perceived loudness of a particular pitch or frequency is dependent both on the volume of the note and the frequency of the note, where the volume aspect is logarithmic but the frequency aspect is geometric . . .

The geometric aspect is that octaves double in frequency, so for example the open low-pitch "A" string on an electric bass is 55-Hz but one octave higher is 110-Hz, which is the A at the 2nd fret of the low-pitch "G" string . . .

However, the intervals in an octave are divided into "cents", where there are 1,200 cents in an octave, and "cents" are logarithmic . . .

Cents (wikipedia)
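Items (2) and (3) can be put into numbers with the standard formulas, where the loudness doubling is a common rule of thumb rather than an exact psychoacoustic law, but the cents formula is exact . . .

```python
import math

# Decibels are 10 * log10(power ratio), so a 10x power increase is +10 dB,
# which is perceived as roughly twice as loud. Cents are 1200 * log2(f2/f1),
# so a doubled frequency is exactly one octave (1,200 cents) and an
# equal-tempered semitone is 100 cents.
def power_ratio_to_db(ratio):
    return 10 * math.log10(ratio)

def cents(f1, f2):
    return 1200 * math.log2(f2 / f1)

print(power_ratio_to_db(10))   # 10.0 dB for a 10x power increase
print(cents(110.0, 220.0))     # 1200.0 cents: any doubled frequency is one octave
```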

And since a "sparkled" series of notes has both (a) specific panning locations and (b) specific pitches, there is yet another interrelationship, since the pitch of the note affects its perceived loudness, which in turn affects the perceived panning location of the note, which makes the entire thing a bit complex, really . . .

Really!

Intuition suggests that if you set the panning location to 10:00 and then play a series of notes, the notes will be perceived as coming from the 10:00 panning location, but this is not the way it works when the notes vary in pitch, so there is a bit more work that needs to be done to cause all the notes in a series to be perceived as coming from the 10:00 panning location . . .

NEW SONG

This is a new Surf Whammys song that appeared when I was doing the recent experiments with "sparkles", which mostly was a bit of serendipity due to putting a note in the wrong place, which then sounded interesting, so I decided to use it as the basis for a song, where the serendipitous note is at the end of the first measure (where for reference the notes are eighth notes, which makes the serendipitous note the eighth note in the sequence of eighth notes), which is fabulous . . .

[NOTE: There are two instruments at present and they are IK Multimedia virtual instruments. Everything is done with music notation in Notion 3, and I used the T-RackS 3 Deluxe "Master 1" mastering suite on the Master stereo track in the Notion 3 Mixer, where this is the MP3 file converted by iTunes from the Notion 3 exported WAVE audio file. It is mixed specifically for headphones, which is the best way to hear the "sparkled" Psaltery Harp notes. The synthesizer bass is positioned at top-center, but it is not "sparkled" . . . ]

"Sparkles" (The Surf Whammys) -- MP3 (4.1MB, 291-kbps [VBR], approximately 1 minute and 55 seconds)

Fabulous! :)

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Sun Jun 12, 2011 12:01 am

I found more information on panning, and one of the things I found is that there is a "panning rule" or "panning law", which is defined in wikipedia as follows:

The pan control or pan pot (panoramic potentiometer) has an internal architecture which determines how much of each source signal is sent to the two buses that are fed by the pan control. The power curve is called taper or law. Pan control law might be designed to send -4.5 dB to each bus when centered or 'straight up' or it might have -6 dB or -3 dB in the center. If the two output buses are further combined to make a mono signal, then a pan control law of -6 dB is optimum. If the two output buses are to remain stereo then a pan control law of -3 dB is optimum. Pan control law of -4.5 dB is a compromise between the two.


[SOURCE: Audio Panning (wikipedia) ]
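The arithmetic behind the quoted pan laws is easy to verify, since at center each bus receives the center gain, so the -6 dB law makes the mono sum come back to approximately unity, while the -3 dB law keeps the stereo power approximately constant, which this sketch checks . . .

```python
# Verify the quoted pan laws: convert the center attenuation from dB to a
# linear per-bus gain, then check the mono sum (-6 dB law) and the stereo
# power sum (-3 dB law). The figures are approximate because "-6 dB" is
# the conventional label for a gain of 0.5 (exactly -6.02 dB).
def center_gain(law_db):
    return 10 ** (law_db / 20)

mono_sum = 2 * center_gain(-6.0)            # ~1.0: unity when summed to mono
stereo_power = 2 * center_gain(-3.0) ** 2   # ~1.0: constant power in stereo
print(round(mono_sum, 3), round(stereo_power, 3))
```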

But the most intriguing bit of information is that while a single panning knob actually does panning for a monaural track, when there is a single knob for a stereo track it is doing balancing rather than panning, which explains a few of the things I have noticed in my experiments . . .

The balance control takes a stereo source and varies the relative level of the two channels. The left channel will never come out of the right speaker by the action of a balance control. A pan control can send the left channel to either the left or the right speakers or anywhere in between. Note that mixers which have stereo input channels controlled by a single pan pot are in fact using the balance control architecture in those channels, not pan control.


[SOURCE: Audio Panning (wikipedia) ]

Balance can mean the amount of signal from each channel reproduced in a stereo audio recording. Typically, a balance control in its center position will have 0 dB of gain for both channels and will attenuate one channel as the control is turned, leaving the other channel at 0 dB.


[SOURCE: Stereophonic Balance (wikipedia) ]
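The quoted distinction shows up clearly in two toy functions, where the taper is linear for simplicity and neither function is any DAW's actual implementation . . .

```python
# Balance only attenuates one channel of a stereo pair; a true pan control
# can route a mono signal anywhere, including entirely into the opposite
# speaker. Knob range is -1.0 (full left) to +1.0 (full right).
def balance(left, right, knob):
    if knob >= 0:
        return left * (1.0 - knob), right    # turning right dims the left channel
    return left, right * (1.0 + knob)        # turning left dims the right channel

def pan_mono(signal, knob):
    return signal * (1.0 - knob) / 2, signal * (1.0 + knob) / 2

print(balance(0.8, 0.3, 1.0))   # (0.0, 0.3): left is silenced, never moved right
print(pan_mono(0.8, 1.0))       # (0.0, 0.8): the whole signal goes right
```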

In some respects, I suppose this might be obvious to everyone on the planet except me, but if the "panning" control for a stereo track in Digital Performer 7 and Notion 3 actually is controlling stereophonic balance, then I would call it a "balancing" control rather than a "panning" control . . .

I need to do some experiments to verify what is happening, but I think that the "panning" controls in Digital Performer 7 actually are balance controls, while it is likely that the "panning" controls in Notion 3 in fact are panning controls . . .

The Notion 3 panning control has three dots, which distinguishes it from the more simplistic "panning" control in Digital Performer 7, where in the Notion 3 Mixer the leftmost dot normally is "left" and the rightmost dot normally is "right", but you can reverse the "left" and "right" dots, at which time the background color changes to pink or red as a visual indicator that you have reversed the locations, and this tends to suggest that the Notion 3 "panning" control is a hybrid control which has two functionally separate individual channel "pan" controls (represented visually by the outermost dots), which is a very elegant way to represent the underlying algorithms and functionality . . .

And this maps to needing to do yet another set of experiments, which I think will be interesting . . .

Based on what I have discovered so far, it appears likely that the stereo track "panning" controls in Digital Performer 7 are following a "pan rule", even though they actually are balance controls, but which "pan rule" they follow is something I need to research, although this tends to confirm the logarithmic taper hypothesis . . .

It is possible that the panning controls for stereo tracks in Notion 3 also follow a "pan rule", but the specific details are not so clear, and in some respects the documentation is a bit counterintuitive to what I have observed with headphone listening . . .

Immediately above the fader are controls for panning (specifying the left-right placement of the instrument in stereo). With the stereo sounds of NOTION3, you have two dimensions to specify: left/right placement and “width” of the sound.

You drag the L dot (for the left speaker) and/or the R dot (for the right speaker) to specify the sonic placement of the instrument in a stereo field. The further away you want the instrument to sound, the closer you bring the L and R dots together anywhere across the axis. The default placement (left stereo channel far left and right stereo channel far right) is optimum for close- to medium-mic'd sounds.


[SOURCE: "Notion 3 User Guide" (Notion Music) ]

The counterintuitive aspect when listening with headphones is that when I want to place an instrument at top-center in the front (proximal) rather than in the back (distal), I move the L and R dots very close together and center them in the middle, where {-0.1 pan L, +0.1 pan R} is a very tight proximal panning location setting . . .

For example, the synthesizer bass in "Sparkles" (The Surf Whammys) has a panning setting of {-0.3 pan L, +0.3 pan R}, and it does not sound far away, really . . .

"Sparkles" (The Surf Whammys) -- MP3 (4.1MB, 291-kbps [VBR], approximately 1 minute and 55 seconds)

Really!

I have the Panorama 5 (Wave Arts) VST plug-in, which does very realistic 3D audio imaging when you listen with headphones, and it also has algorithms for 3D audio imaging for loudspeaker listening. However, it uses reverberation, reflection (which affects phase), and echoes as distance and angle cues, and these advanced algorithms require a lot of space or headroom in the mix, so it is not so effective when there are a lot of instruments . . .

As you can see in the user-interface for Panorama 5 (below), there are a lot of parameters that affect the perceived location of a sound, which is one of the reasons that Panorama 5 is such an advanced 3D audio effect, and it also is the reason that Panorama 5 is a very "heavy" VST plug-in with respect to processing requirements, since it uses elaborate mathematical algorithms to adjust the various distance and angle cues for a sound . . .

[Image: Panorama 5 (Wave Arts) ~ 3D Audio Imaging VST Plug-in]

For purposes of "sparkling", the general goal is to avoid needing to use what essentially are binaural effects, which I think is possible to some degree and is the reason for the current foray into panning, balancing, and so forth and so on. With Notion 3 I can control the notes of a "sparkled" instrument very precisely, which includes being able to control the pitch of notes very precisely, because in the "sparkling" technique (a) I can put a note on any of the perhaps 8 staves or clefs and (b) I can control the pitch of the note by virtue of composing it very specifically for this purpose . . .

Binaural Recording (wikipedia)

In other words, if a note needs to be a specific pitch at a particular panning location to be perceived in a certain location, then this is not difficult to do in Notion 3 so long as (a) there is a rule for it and (b) I can discover the rule, since (c) I can compose to the rule . . .

I tend to do everything "by ear", which works nicely most of the time, but "sparkling" involves creating and manipulating auditory illusions, and one of the realities of auditory illusions is that by definition you cannot trust what you hear, since the general fact of auditory illusions is that what you hear is very different from what actually is happening with the audio, hence it helps to know the applicable rules . . .

Auditory Illusions (wikipedia)

Explained another way, knowing the rules for an auditory illusion makes it possible to create and to control the auditory illusion by adjusting and setting various parameters using a mathematical formula rather than "by ear" . . .

Once the auditory illusion is created and is working correctly, you can verify it "by ear", but you need to know the underlying rules to make it happen . . .

And in the "always trust your ears" department, it is nice to have verified my hypothesis that "panning" has a logarithmic aspect, which is fabulous . . .

Fabulous! :)

P. S. From a different perspective, about two years ago I realized that I needed to focus on the producing, recording, mixing, and mastering aspects of digital audio production, which is entirely different from the focus of composing, playing instruments, and singing, although it includes a bit of arranging work, since (a) arranging is a key aspect of producing and (b) arranging involves managing and coordinating instrumental and vocal parts so that they do not compete for the same sonic spaces. This is yet another new area of detailed focus here in the sound isolation studio, where the general rule is that each instrumental and vocal part needs to have its own unique space within the mix . . .

And I have a better understanding of the vast importance of what George Martin did when he produced the Beatles . . .

The Beatles were excellent composers, musicians, and singers, but George Martin made them sound good on records, for sure . . .

For sure! :idea:
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
Surfwhammy
Posts: 1137
Joined: Thu Oct 14, 2010 4:45 am

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Sat Jun 18, 2011 7:50 am

After pondering the newly discovered information about panning and balancing controls, which included a bit of additional research on various aspects of acoustic physics and psychoacoustics, I made a few tiny adjustments to the "sparkled" Psaltery Harp in the Surf Whammys' new song, "Sparkles", which at present continues to have only two actual instruments (a synthesized bass at top-center and a "sparkled" Psaltery Harp playing notes spread over a rainbow arc) . . .

[NOTE: Technically, "acoustics" is the broader category, but I prefer to focus on acoustic physics, since in one way or another physics covers everything, and it generally makes more sense to me. On a related note, it appears that Alexander Graham Bell was unable to read the German edition of Hermann von Helmholtz's stellar book on acoustics and consequently Bell misinterpreted some of Helmholtz's diagrams as suggesting that Helmholtz had discovered a way to transmit voice over wire, hence motivating Bell to continue doing his research, when in fact Helmholtz had not discovered a way to transmit voice over wire, which if nothing else provides clues to the value (a) of logically constructed diagrams and (b) of considering ideas that might make no immediate sense but appear intriguing in one way or another . . . ]

Acoustics (wikipedia)

Psychoacoustics (wikipedia)

"Sparkles" -- June 18, 2011 -- MP3 (4.2MB, 296-kbps [VBR], approximately 1 minute and 55 seconds)

The primary change in this version is that I adjusted the volume levels of the eight panning locations using a combination of "by ear" intuition and the general principles of the panning rule, where these are the new volume levels and corresponding locations:

Code: Select all
L.1 =  (-1.0 pan L, -0.8 pan R); volume = 0 dB
L.2 =  (-0.7 pan L, -0.5 pan R); volume = -0.3 dB
L.3 =  (-0.4 pan L, -0.2 pan R); volume = -0.4 dB
L.4 =  (-0.1 pan L, +0.0 pan R); volume = -0.5 dB
R.4 =  (+0.0 pan L, +0.1 pan R); volume = -0.5 dB
R.3 =  (+0.2 pan L, +0.4 pan R); volume = -0.4 dB
R.2 =  (+0.5 pan L, +0.7 pan R); volume = -0.3 dB
R.1 =  (+0.8 pan L, +1.0 pan R); volume = 0 dB
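For reference, the settings above can be restated as a small data structure and checked for left-right symmetry, as in this hypothetical Python sketch (the tuples simply restate the table; nothing new is assumed):

```python
# (pan L, pan R, volume in dB) for each stave, far-left to far-right
locations = {
    "L.1": (-1.0, -0.8,  0.0),
    "L.2": (-0.7, -0.5, -0.3),
    "L.3": (-0.4, -0.2, -0.4),
    "L.4": (-0.1,  0.0, -0.5),
    "R.4": ( 0.0,  0.1, -0.5),
    "R.3": ( 0.2,  0.4, -0.4),
    "R.2": ( 0.5,  0.7, -0.3),
    "R.1": ( 0.8,  1.0,  0.0),
}

# Each L.n mirrors R.n: the pans are negated and swapped, and the
# volume adjustments are equal, so the rainbow arc is symmetrical.
for n in (1, 2, 3, 4):
    l_pan_l, l_pan_r, l_vol = locations[f"L.{n}"]
    r_pan_l, r_pan_r, r_vol = locations[f"R.{n}"]
    assert (l_pan_l, l_pan_r) == (-r_pan_r, -r_pan_l)
    assert l_vol == r_vol
```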


And I lowered the volume level of the synthesized bass at top-center (-0.3 pan L, +0.3 pan R) by approximately 2 dB . . .

Additionally, I lowered the "Pre-delay" on the Notion 3 Reverb to 5 and lowered the "Room" to 30 . . .

RESULT

This improved the location of notes on the rainbow panning arc, but there continued to be a bit of a gap at top-center, so I changed the panning of the left and right buses, which were set specifically to limit left notes to the left quadrant and right notes to the right quadrant . . .

Instead, I changed the left bus to be a full rainbow arc and did the same for the right bus, which made an improvement in the overall location of the notes . . .

[NOTE: The Psaltery Harp set {L.1, L.2, L.3, L.4} is routed to Bus A, while the Psaltery Harp set {R.4, R.3, R.2, R.1} is routed to Bus B, and the locations from far-left to top-center to far-right in order are {L.1, L.2, L.3, L.4, R.4, R.3, R.2, R.1}, where for example L.1 is far-left and R.1 is far-right . . . ]

Code: Select all
OLD:

Bus A =  (-1.0 pan L, +0.0 pan R); volume = 0 dB
Bus B =  (+0.0 pan L, +1.0 pan R); volume = 0 dB

NEW:

Bus A =  (-1.0 pan L, +1.0 pan R); volume = 0 dB
Bus B =  (-1.0 pan L, +1.0 pan R); volume = 0 dB


This is the revised version where Bus A and Bus B each are set to the full rainbow panning arc . . .

[NOTE: At first, I thought that it would help to set the Bus A and Bus B panning to focus them on their respective quadrants, but it did not work the way I hypothesized, although it was a useful experiment, so this version shifts the specific panning to the individual tracks, which works better. The music notation for this version is provided as a separate PDF file for those folks who are interested in mapping the notes to the perceived panning locations, since (a) there are variations in the panning patterns and (b) it is easier to follow when you know the specific variations. However, it is equally important to remember that the panning locations are fixed and do not change for each of the respective 8 staves. So, the panning pattern is changed by the way the notes are placed on the staves, where for example a simple two-position "sparkle" that alternates notes from far-left to far-right will have notes only on the L.1 and R.1 staves, since L.1 maps to "far-left" and R.1 maps to "far-right", all of which in some respects is vastly complex, but it works . . . ]

[NOTE: Whenever possible, I do everything on the treble clef, since I hear in my mind the three octaves it spans (including notes approximately 3 to 5 lines below and above the treble clef, where for example Middle C [C4] is one line below the treble clef and High A [A5, the A above High C (C5)] is one line above the treble clef), and this is one of the things that Notion 3 makes possible, since for example I can set the treble clef so that notes are played two octaves lower, which is what I do for bass parts, and I also can set it to play notes one octave lower, which I might do for cello and viola parts. So in the following score, everything is on the treble clef but the notes are played in various registers . . . ]

"Sparkles" (Music Notation) -- PDF (86.9KB, 5 pages)

"Sparkles" -- June 18, 2011 Full Bus Panning -- MP3 (4.2MB, 298-kbps [VBR], approximately 1 minute and 55 seconds)

I think this version has better coverage of the notes at top-center, but I continue to hear differences in the volumes of individual notes as the panning location moves to the sides, so there most likely is a pitch-related aspect to the perceived loudness, which is yet another experiment that might involve specifying some very simple and subtle dynamic marks, although adjusting the respective volume levels is another way to do it and has the advantage of not requiring any dynamic marks or articulations . . .

[NOTE: Actually, after looking at the printed score for a while, I remembered that I already had resorted to using dynamic marks to lower the volume of the notes as they get nearer in panning location to top-center, so when this is combined with additionally lowering the volume slider levels in the Notion 3 Mixer, it appears likely either (a) that there is no panning rule being applied specifically to the panning controls or (b) that any panning rule being applied is subtle at best, which suggests yet another series of experiments. It also suggests that dynamic marks alone are insufficient to level the perceived loudness of notes across the full rainbow panning arc, which is a useful bit of information and is fine with me, since my focus is on discovering the rules. In other words, for the most part it does not matter to me what the rules are, so long as (a) there are rules and (b) I am able to discover them, which is one of the things I like about Notion 3 . . . ]

The notes of the "sparkled" Psaltery Harp are beginning to follow a more clearly distinct and balanced rainbow arc pattern, which is most obvious at the start of the song where the notes are eighth notes and move across the full range of the rainbow arc, as contrasted to the quarter note section in the second half of the song, where each measure has notes all at the same panning location most of the time, except in the last few measures before the chords are played, which is fabulous . . .

Fabulous! :)

P. S. For reference, the panning patterns for the Psaltery Harp in these versions of "Sparkles" generally are very simple, but there are a lot more possible permutations. The goal of this first set of experiments is to discover how to get a smoothly distinct rainbow panning arc with good clarity of locations . . .

Once the relative loudness of notes is kept constant, then this essentially removes loudness as a variable, which then shifts the focus to (a) pitch and (b) location . . .

If pitch is constrained to three octaves, then this maps to 37 notes, and there are 8 possible panning locations for each of the 37 notes, which at any given time maps to 296 possibilities, except that for a series of notes it becomes wildly geometric, since it becomes a matter of determining, for example, the number of different ways one can select 8 of 296 possible items with respect to order, which makes it permutations rather than combinations . . .

This is the mathematical formula for permutations:

Code: Select all
P(n,r) = n! / (n - r)!

Where:   n = the number of items in the full set
         r = the number of items selected, in order


[NOTE: The key to understanding this is observing (a) that each of the 37 notes can appear in 1 of 8 possible panning locations and (b) that the same "note" (for example Middle C or C4) is considered to be a different actual note based on each of the 8 possible panning locations. So, instead of there being only C4 irrespective of panning location, there actually are {C4.L.1, C4.L.2, C4.L.3, C4.L.4, C4.R.4, C4.R.3, C4.R.2, C4.R.1}, and if only this set of 8 items is considered, then there are 40,320 permutations, which is one of the reasons that my perspective is that there are more songs than there are grains of sand in the known universe, although I continue to ponder whether it is a finite or an infinite number. And for reference, remembering that 0! = 1 is useful, hence in the special case where r = n, the formula simplifies to n! (or "n factorial"), which for n = 8 is 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1, really . . . ]

Doing the arithmetic produces the result that there are 53,567,374,264,732,540,800 possible ways to select 8 of what one might call "pan-notes" from a set of 296 possible "pan-notes" with respect to order (approximately 53.6 quintillion ways, or roughly 53.6 million trillion ways, if you prefer), which is a bit mind boggling, for sure . . .
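The arithmetic is easy to verify with Python's standard library, as a quick check (`math.perm` requires Python 3.8 or later):

```python
import math

# 37 pitches x 8 panning locations = 296 distinct "pan-notes"
pan_notes = 37 * 8

# Ordered selections of 8 pan-notes: P(296, 8) = 296! / (296 - 8)!
print(math.perm(pan_notes, 8))   # 53567374264732540800

# Special case r = n: P(8, 8) = 8! = 40,320
print(math.factorial(8))         # 40320
```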

For sure! :ugeek:

P. S. Another way to put the number of possible songs into perspective is to consider the number of melodies that can be created with 32 "pan-notes" out of the three-octave set of all 8 panning location notes, which is useful since 32 notes are more than enough for a lot of simple melodies, really . . .


This is the full number, which is approximately 2.12 times 10 raised to the 78th power, presuming I counted the digits correctly, which makes it approximately 2.12 quinvigintillion in at least one large number naming system, where for reference one trillion is 1 followed by 12 zeroes (US "short scale" naming system):

2,119,900,272,613,596,819,664,957,036,261,367,616,930,115,268,457,494,193,659,283,440,979,148,800,000,000

Based on the current view that the universe is approximately 13.75 billion years old, if you wrote 1 million songs every day since the so-called "big bang", you would have written only approximately 5 times 10 raised to the 18th power (or 5 followed by 18 zeroes) songs . . .

[SOURCE: Permutations Calculator (Calculator Soup) ]
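Both the big number and the "big bang" comparison can be sanity-checked in a few lines of Python (a hedged sketch; `math.perm` requires Python 3.8 or later):

```python
import math

# Ordered selections of 32 pan-notes from the set of 296: P(296, 32)
melodies = math.perm(296, 32)
print(len(str(melodies)))    # 79 digits, i.e., approximately 2.12 x 10^78

# Writing 1,000,000 songs every day for approximately 13.75 billion years:
songs = 13_750_000_000 * 365.25 * 1_000_000
print(f"{songs:.2e}")        # approximately 5.02e+18
```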

Really! :idea:

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Sun Jun 19, 2011 11:38 am

As yet another experiment, I made the motion of the notes across the rainbow panning arc more regular, where the first half of the song has eighth notes that go back and forth across the rainbow panning arc from left to right and then right to left, except for the last two measures of eighth notes where the pattern changes but still in a symmetrical way, since I like symmetry . . .

[NOTE: The eighth note pattern is similar to the way the "eye" of the robot in "The Day The Earth Stood Still" moves from one side to the other, which has a mesmerizing effect . . . ]

I made a similar change in the last two measures of quarter notes but kept the original quarter note rainbow panning arc pattern, since it makes it easier to hear the different panning locations, because each panning location is held for a full measure of quarter notes, for sure . . .

"Sparkles" -- June 19, 2011 -- PDF (90KB, 5 pages)

"Sparkles" (The Surf Whammys) -- June 19, 2011 -- MP3 (4.2MB, 298-kbps [VBR], approximately 1 minute and 55 seconds)

For sure!

And after listening to this more symmetrical version, it appears that there should be a set of rules that defines the ways various panning patterns affect the mood of the listener, where intuitively one might suggest that regular panning patterns are more mesmerizing, while chaotic panning patterns are more emotively energized, although this is best left to yet another set of experiments, which is fabulous . . .

Fabulous! :)

P. S. As should be obvious, one of the truly stellar aspects of Notion 3 is that everything is absolutely precise, since it is done with clearly defined music notation, and this makes it possible to do these types of experiments in an elegantly mathematical and geometric way . . .

Swapping panning positions for notes takes a bit of copying and pasting, but I devised a technique where I insert a new measure; copy and paste notes to the desired rainbow panning arc locations in the new measure; and then delete the original measure, which takes a few minutes per revised measure, hence is practical . . .

Intuitively, there should be a way to do this in a computer algorithm, which is yet another concept I am pondering . . .

For example, instead of doing the "sparkling" manually, there should be a way to select a particular instrument and then cause it to be "sparkled" according to various parameters that drive a computer algorithm, which might be a fascinating feature in a new version of NOTION . . .
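As a purely hypothetical sketch of what such an algorithm might look like (the function name, the parameters, and the back-and-forth pattern are my assumptions for illustration; nothing like this currently exists in NOTION):

```python
def sparkle(notes, num_locations=8):
    """Distribute a monophonic line of notes across panning locations
    in a back-and-forth (left-to-right, then right-to-left) pattern,
    returning (location_index, note) pairs, where location 0 is
    far-left and location num_locations - 1 is far-right."""
    # Triangle-wave sequence of locations: 0, 1, ..., 7, 6, ..., 1, 0, 1, ...
    sequence = list(range(num_locations)) + list(range(num_locations - 2, 0, -1))
    return [(sequence[i % len(sequence)], note) for i, note in enumerate(notes)]

# Eight eighth notes sweep from far-left (0) to far-right (7):
# sparkle(["C4"] * 8) -> [(0, "C4"), (1, "C4"), ..., (7, "C4")]
```

A feature like this would let a future version of NOTION "sparkle" any selected instrument automatically, driven by parameters such as the number of panning locations and the motion pattern . . .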

[NOTE: Anyone who has been in a garage band or has played popular songs in a nightclub setting understands the fact that every popular song of the past century, essentially beginning with the first Edison wax cylinder recording system, follows a clearly defined pattern, hence none of this stuff is a huge mystery. And for reference, all music follows patterns, where the only significant differences among popular songs and complete symphonies is the total number of patterns, although once you do the mathematics it is not so uncommon for a complete symphony to be based entirely on a handful of very simple melodies, where in some instances all the movements or whatever of a symphony simply are variations of one simple melody . . . ]

As best as I have been able to determine, what one might call the "motion" aspect was explored only occasionally until recently, and then only on studio albums with big budgets. Over the past decade, however, the emergence of the iPod and ear buds as the popular music listening platform has changed everything in a significant way, since "motion" fits nicely with ear bud and headphone listening, as it does with high-end car audio system listening. One of the more curiously fascinating aspects of high-end car audio systems is that they tend to be acoustically tuned to their specific interior cabins, which from the perspective of acoustic physics makes such high-end car audio systems very similar to headphones . . .

[NOTE: The important aspect of high-end car audio systems is that the systems are designed by acoustic engineers at the factory, and as such are tuned very precisely to the interior cabin spaces of such vehicles. In contrast, so-called "after market" systems might be precisely tuned, but it depends. As a general rule, a listening room should have a flat response where there are no favored pitches and frequencies, with the goal being that you hear the recorded music as accurately as possible. Tuning a listening room for this purpose can be vastly complex, and it can require installation of sound absorbers, diffusers, and a lot of other stuff, as well as careful selection of amplifiers and loudspeakers, and adjusting the performance of the amplification system via equalization, which is not so practical for a lot of folks but is both practical and possible for a vehicle manufacturer to do with a high-end audio system for its cars (at least some models). Tuning a listening room or home theater makes a significant difference in the overall listening experience, but it takes a bit of expertise, some specialized audio processors, and for certain rooms various materials and devices for handling troublesome acoustic behaviors and characteristics of the room, itself . . . ]

And it is abundantly obvious that "motion" is a central aspect of the popular music of the so-called "Youth of Today" and all the other teenage mutants, really . . .

[NOTE: Based on experiments already done and verified, it is virtually trivial to do this stuff with music notation and IK Multimedia virtual instruments in Notion 3 and Digital Performer 7 on the Mac, although it takes a few hundred hours to do the "sparkling", but so what . . . ]

"Till The World Ends" (Britney Spears) -- YouTube music video

[NOTE: For proof of concept purposes, the key bits in "(Baby You Were) Only Dreaming" (The Surf Whammys) are (a) the asynchronous Dubstep, Techno, and Trance stuff in the interludes and (b) the "sparkles" in the verses, all of which are done with music notation and IK Multimedia virtual instruments in Notion 3, where the asynchronous stuff is enhanced with carefully designed and controlled digital reverberation and echo units in Digital Performer 7, which is an excellent way to have a bit of FUN with the more asynchronous aspects of what essentially is a blend of white, pink, brown, red, and grey noise . . . ]

Color of Noise (wikipedia)

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- Basic Rhythm Section -- MP3 (9.4MB, 281-kbps [VBR], approximately 4 minutes and 28 seconds)

Really! :ugeek:

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Fri Jul 15, 2011 3:52 pm

As it does every once in a while, like clockwork, IK Multimedia is having yet another "group buy" promotion, where initially you get one free software-only product when you purchase a qualifying hardware or software product, and when 5,000 people participate (hence the "group buy" aspect) everyone gets a second free software-only product . . .

15 Year Anniversary Group Buy (IK Multimedia)

And since I already have nearly everything IK Multimedia currently offers, I did a bit of checking and decided to get the ARC System Crossgrade as the qualifying hardware product and CSR Classik Studio Reverb as the first free software-only product, where the rule for the "group buy" is that the free product(s) need to be downloadable, which is fine with me . . .

ARC System (IK Multimedia)

CSR Classik Studio Reverb (IK Multimedia)

I had been pondering the idea of calibrating the monitoring loudspeakers here in the sound isolation studio, but even though I decided to focus on discovering how to do producing, mixing, and mastering several years ago, I continue to have a bit of a musician's and singer's mindset, so I put the idea into the "hold that thought" category until last week, when I had a moment of clarity and decided that having calibrated loudspeaker monitors probably makes a lot of sense . . .

It takes about 15 minutes to do loudspeaker monitoring system calibration, and then the ARC System does a series of computations to determine the required correction, which it saves as a preset that is used by the ARC System VST plug-in in your digital audio workstation (DAW) application as the last component on the Master stereo output channel when you are mixing with loudspeaker monitors, which also works with the Notion 3 Mixer when you are using the Notion 3 Mixer as your DAW . . .

[NOTE: When you are finished mixing and are ready to do the bounce to disk of the mixed Master stereo output track, you disable the ARC System VST plug-in, since it makes corrections for the studio loudspeakers so that the uncorrected audio actually is correct, which makes sense if you think about it for a while . . . ]

The ARC System uses technology that is similar to the technology used in the THX system for motion picture theaters, and it works very nicely here in the sound isolation studio, where at present I am using Acoustic Research Powered Partner Multimedia Loudspeakers for monitoring, although more recently they were made by Advent and might not be available these days . . .

[Image: Advent AV570 70-Watt 2-Way Powered Multimedia Speaker System]

I got them about 10 years ago, and they are surprisingly good, in part because they have heavy metal cases and each unit weighs approximately 10 pounds, and they have 5-inch polypropylene woofers and 1-inch fluid-filled polycarbonate tweeters . . .

After doing the ARC System calibration, the frequency response is basically flat, and this is making it much easier to mix, since I can hear everything more accurately . . .

[NOTE: The orange curves are the actual room measurements, and the white curves are the room measurements after the ARC System corrections are applied. There are other target curves, but I am using "flat response" at present . . . ]

[Image: Sound Isolation Studio Loudspeaker Calibration ~ ARC System]

In retrospect, I should have done this a long time ago, but I was focused on other aspects of making sense of producing, mixing, and mastering, as well as making sense of music notation and virtual instruments, which is the way it works every once in a while, where the important thing is to make progress consistently in one way or another even when the specific sequence might be a bit illogical, since it all comes together sooner or later . . .

I need to get a sound pressure level (SPL) meter, since I found information in several places indicating that it is important to mix at an SPL of 85 dB, due in part to the experimental equal loudness curves determined by Fletcher and Munson in the 1930s, which have been updated and transformed into an ISO standard more recently . . .

[NOTE: Some folks suggest that 80 dB SPL is better, but I found more references to 85 dB SPL being the best level for loudspeaker mixing . . . ]

[Image: Equal Loudness Curves ~ Fletcher-Munson (blue) and ISO 226:2003 (red)]

Fletcher-Munson Curves (wikipedia)

[Image: Equal Loudness Contours ~ Original ISO 226 (blue) and updated ISO 226:2003 (red)]

Equal Loudness Contours (wikipedia)
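As a related aside, the frequency dependence shown in these curves is what standard weighting filters approximate. This is a Python sketch of the IEC 61672 A-weighting curve, which is loosely based on the 40-phon equal loudness contour (the formula is the published standard one, not anything from the Notion 3 or ARC System documentation):

```python
import math

def a_weighting_db(f):
    """A-weighting (IEC 61672) in dB at frequency f in Hz,
    normalized to approximately 0 dB at 1 kHz."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

# The ear is much less sensitive to low frequencies at moderate levels:
# a_weighting_db(1000.0) is approximately 0 dB
# a_weighting_db(100.0) is approximately -19 dB
```

This is one reason mixing level matters: at lower monitoring levels the bass seems to disappear, while at 85 dB SPL the curves are comparatively flat . . .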

So, after doing the ARC System calibration, I did a new mix for "(Baby You Were) Only Dreaming", and then I decided to do a prototype vocal track, since I had not actually sung the melody and was curious to hear how it sounded. The single vocal track is the first time I sang the song, which is one of the more unusual things I started doing about five years ago, based in part on (a) thinking mistakenly that Paul McCartney only does vocals on the first or second take and (b) thinking quite correctly that it is an excellent way to discover how to compose and sing melodies in real-time on the fly, although more recently when I am wearing the "Producer Hat" I tend to think that actually composing a melody in advance and then practicing singing it makes a bit of sense in the grand scheme of everything, for sure . . .

[NOTE: All the instruments are done with music notation and IK Multimedia virtual instruments in Notion 3 and then recorded in Digital Performer 7.23 via ReWire as soundbites, and selected instruments are "sparkled" in Notion 3, as described in earlier posts, although in some instances I enhance the "sparkles" with reverberation and MOTU Echo when I switch to focusing on mixing. In this version, the instruments were mixed with ARC System calibrated loudspeaker monitors, but I did the vocals with headphones, so this is a hybrid loudspeaker-headphone mix, since it is easier to do the reverberation and echoes for the vocals when I listen with headphones, but I will do a full loudspeaker mix later, probably after I add the real lead guitar stuff . . . ]

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- July 15, 2011 -- MP3 (9.2MB, 278-kbps [VBR], approximately 4 minutes and 26 seconds)

For sure!

Doing the instrument mix with calibrated loudspeakers created enough headroom to have a bit of FUN with CSR Classik Studio Reverb on the single vocal track, which I cloned and enhanced with MOTU Echo to add some precisely tuned rapid repeats on the tails of words and phrases. I also did pitch correction and individual note adjusting with the Melodyne Editor (Celemony), which makes the "melodic speaking" more like singing, and it is an interesting technique . . .

There is so much instrumental counterpoint that it is not so easy to compose and sing a melody for the verses, since there also are a lot of words and the song has a very fast tempo (200 BPM), but doing the pitch correction and individual note adjustments in the Melodyne Editor provided a few clues to an actual melody for the verses, although there are a few parts of the first verse that have a melody . . .

The song is not finished, but it is coming along nicely . . .

Yet another producing, mixing, and mastering thing I realized is that even though I generally am happy with whatever happens the first time I sing a song, there is merit to devoting as much attention to vocals as to instrumentation, so this is early in the vocal part of developing the song. One of the things I am planning to do is to play the melody for the verses on lead guitar, perhaps using a Rocktron Banshee II "talk box", which is similar in some respects to George Harrison playing parts of the melody on guitar in "Lucy In The Sky With Diamonds" (Beatles) while John Lennon sings the more surreal verses, which basically are nearly monotone but with a lot of reverberation and tight echoes. For "(Baby You Were) Only Dreaming" this makes a bit of sense, because for all practical purposes it is "inspired by" the Beatles song, which is fabulous . . .

[Image: "The Absinthe Drinker" ~ Viktor Oliva]

"(Baby You Were) Only Dreaming" (The Surf Whammys)

[CHORUS] ~ A

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

[1ST VERSE] ~ B

Absinthe in camera
Sailing the seas
In search of green auras
As much as I please

The telephone rings
But nobody's home
An imagined young lady
Sits there all alone

As velvet clouds of rolling fog
Dance around and wander through your mind
When the telephone operator tells you
To insert another dime if you want more time . . .

[CHORUS] ~ A

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

[BRIDGE] ~ C

I'm not myself tonight
Möbius was right!
Are you upside-down and your left is your right?

[CHORUS] ~ A

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

[2ND VERSE] ~ B

Desire leads you to a lake
With a waterfall, so you cross it
And there are sweets on the road
That make the ducks quack . . .

All of them are happy
When you slide
Beyond the poppy flowers
That cast shadows on it all

And it's pretty strange

Bumper cars driven by toads
Arrive nearby and offer their wares
Select one and get on board
Where you are floating but then you feel so lost . . .

[CHORUS] ~ A

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

[BRIDGE] ~ C

Are you the same as you were last night? All right!
But, is your head squared on tight?
Are you upside-down and your left is your right?

[CHORUS] ~ A

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

Baby you were only . . .
Baby you were only . . .
Baby you were only dreamin'

[3RD VERSE] ~ B

You are on a train with a chimera
And the conductor has a scarf
That he is tying in the mirror
For the workers who deliver chocolates

All at once the telephone operator
Who is wearing glasses
Arrives at the door
And is gracefully distracted . . .

"Look at the rainbows and sparkles!", she says
"Look at the rainbows and sparkles!", she says
"Look at the rainbows and sparkles!", she says!
Ahhhhhhh . . .

©2011 RAE Multimedia


Fabulous! :)
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!
User avatar
Surfwhammy
 
Posts: 1137
Joined: Thu Oct 14, 2010 4:45 am

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Wed Jul 20, 2011 2:20 pm

I have a topic over in the IK Multimedia FORUM that I am using to make suggestions for new products, and the current focus is on a super advanced programmable custom echo unit, so as part of my ongoing research I did a bit of sleuthing to determine the different types of VST plug-in echo units that are available at the dawn of the early-21st century, and there are some interesting echo units, although none of them are sufficiently programmable to be able to replace automation in a high-end digital audio workstation (DAW), but so what . . .

So what!

I found two Lexicon-style VST plug-in units that look very interesting, and I found two quite fascinating free echo units, one which is based on the Roland Space Echo and another which is based on what apparently was the first custom echo unit . . .

But most importantly, I discovered the Timeless 2 VST plug-in echo unit by FabFilter Software Instruments, which is a bit beyond stellar, and although it is not programmable in a DAW automation type of way, its echoes can be controlled by a variety of methods, including LFO generators and different types of panning curves and trajectories, as well as envelopes for filters and so forth . . .

[NOTE: Timeless 2 has a virtual festival of parameters that a DAW application can control via Automation, and it also has parameters that can be controlled via MIDI, but this is different from the super advanced concept of having its own programming language or internal Automation environment, where instead of the DAW application doing the Automation, the VST plug-in provides its own Automation system that makes it possible to design and create vastly different custom echoes that are "played" by Automation at different points along the timeline identified by measure, beat, and tick (MBT) timestamps or whatever, which from my perspective makes more sense for the VST plug-in echo unit to do than for the DAW application to do, since doing it via Automation in a DAW application is a bit of an enormous hassle, really . . . ]
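As a rough illustration of the LFO idea, here is a toy Python sketch of a sine LFO sweeping a delay time back and forth. This assumes nothing about how Timeless 2 actually implements its modulation internally; the function and parameter names are hypothetical:

```python
import math

def lfo_delay_ms(t, base_ms=120.0, depth_ms=40.0, rate_hz=0.5):
    """Delay time at time t (in seconds): a sine LFO sweeps the
    delay between base_ms - depth_ms and base_ms + depth_ms,
    completing one full cycle every 1 / rate_hz seconds."""
    return base_ms + depth_ms * math.sin(2.0 * math.pi * rate_hz * t)

# Sample the sweep over the first two seconds.
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"t = {t:.1f} s -> delay = {lfo_delay_ms(t):.1f} ms")
```

An echo unit evaluating something like this once per audio block gets a delay time that drifts smoothly instead of sitting at a fixed value, which is the basic mechanism behind LFO-controlled echo parameters.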

Image

Timeless 2 (FabFilter Software Instruments)

I have been experimenting with the full-featured demo version of Timeless 2, and it certainly does what I need it to do, but it also does more stuff than at present I can comprehend in any immediately conscious way, so the only decision making here in the sound isolation studio involves deciding whether I want to get only the Timeless 2 VST plug-in or the Mixing Bundle (which currently is on sale at a 25 percent discount), where I am leaning toward the Mixing Bundle, since the other processors have very nice visual meters, real-time audio displays, and so forth, which is important because it is easier for me to fine-tune parameters when I can hear and see what is happening, since some aspects of fine-tuning are a bit on the subtle side, which maps to being perceived but not in such an immediately conscious way . . .

[NOTE: To be precise, when I refer to the "subtle side" this maps to the range from 5 milliseconds to perhaps as much as 350 milliseconds, since depending on the tempo even at 350 milliseconds it is more a matter of an easily perceived and very distinct echo, where the general rule is that the shorter the duration the more difficult it is to control, since the auditory perception apparatus of the human mind creates a virtual festival of what essentially are auditory illusions toward the goal of simplifying all the raw auditory information sent to it by the ears and all that biomechanical stuff, where one example is the "Haas Effect", which is quite fascinating and is one of the many techniques used in commercial broadcasting to make advertisements appear louder than they actually are, which also is the reason it is used in songs, since it just as easily makes the singing appear louder without needing to increase the actual volume level, hence it tends to conserve overall headroom and dynamics, which in turn allows more room for other stuff . . . ]

Haas found that humans localize sound sources in the direction of the first arriving sound despite the presence of a single reflection from a different direction. A single auditory event is perceived. A reflection arriving later than 1 ms after the direct sound increases the perceived level and spaciousness (more precisely the perceived width of the sound source). A single reflection arriving within 5 to 30 ms can be up to 10 dB louder than the direct sound without being perceived as a secondary auditory event (echo). This time span varies with the reflection level. If the direct sound is coming from the same direction the listener is facing, the reflection's direction has no significant effect on the results. A reflection with attenuated higher frequencies expands the time span echo suppression is active. Increased room reverberation time also expands the time span of echo suppression.


[NOTE: In many respects, this is the Rosetta Stone of what happens in very short time range, and one can read it several hundred times while checking definitions and studying various graphs in an advanced book on Acoustic Physics and discover even more vastly useful information. My best current hypothesis on this is that it originated as a survival mechanism, since for example at a very primitive level (a) it causes the sounds of the paws of a rapidly running tiger to be perceived as being much louder than they actually are and (b) it focuses the directional or locational auditory cues very precisely, both of which can be quite handy if one wants to avoid being a happy meal for a hungry tiger. Of course, for vocal producing at the dawn of the early-21st century, the more practical application is to make it easier to put Elvis' voice literally inside the minds of all the lovely ladies in the audience, which certainly is one of the goals here in the sound isolation studio, although the focus is on getting Pretend Elvis™ into Electric Underpants™ . . . :D ]

[SOURCE: Haas Effect (wikipedia) ]
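A minimal sketch of how the Haas Effect is put to work for widening and perceived loudness, assuming plain Python lists as audio buffers (the function and parameter names here are hypothetical, not from any particular plug-in):

```python
def haas_widen(mono, sample_rate=44100, delay_ms=15.0, level=0.8):
    """Return (left, right) where the right channel is a delayed,
    slightly attenuated copy of the mono input. With the delay in
    the roughly 5 to 30 ms window described by Haas, the ear fuses
    the two copies into a single louder, wider source instead of
    hearing a separate echo."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    left = list(mono)
    right = [0.0] * delay_samples + [s * level for s in mono]
    left += [0.0] * delay_samples  # pad so both channels match in length
    return left, right
```

Push `delay_ms` much past 30 to 40 ms and the illusion collapses: the delayed copy separates into an audible slapback echo rather than added width, which is exactly the boundary the quoted passage describes.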

For reference, there are quite a few ways to use echo units, but I like to use echo units to augment and enhance reverberation, where generally (a) I view reverberation in terms of creating a virtual room but (b) I view echoes in terms of a combination of tail extending and phrase repeating, both of which have a syrupy flavor . . .

Explained another way, the extreme flavor of reverberation is being in the Taj Mahal or an enclosed swimming pool room that has metal walls and big glass windows where there is so much reverberation that you have to shout to have a conversation with someone who is standing two feet in front of you, but echoes are focused on what happens after a word or phrase is spoken or sung, hence primarily are involved with the tails of words or at the more extreme end with entire phrases . . .

In other words, (a) reverberation creates and defines the overall perceived audio space in terms of room size and acoustic characteristics but (b) echoes determine what happens (b.1) to the ends of words and (b.2) to entire phrases when longer delay times are used . . .

As an example, consider the word "tight" . . .

Short echo will cause the "t" at the end to be repeated ("tight t t t"), but a longer echo will cause the entire word to be repeated (tight tight tight), and while reverberation will do a little bit of the former, it does not do the latter, and even with shorter single duration repeats, reverberation tends to blur everything, so yet another distinction in echo is the clarity of repeats, where "slapback" echo is a classic example of a short but clear echo repeat that is distinctly different from reverberation . . .

[NOTE: Technically, reverberation and echo are just variations and flavors of the same thing, but with reverberation there is a virtual festival of echoes that creates what one might call a cloud of blurred and diffused fog, while echoes are more like clear and distinct raindrops, which is a nice analogy, metaphor, or simile . . . ]
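The "clear and distinct raindrops" behavior can be sketched as a simple feedback delay line, where each repeat is a clean, attenuated copy of the input rather than a diffused wash. This is a toy Python sketch of the general technique, not the internals of any particular echo unit:

```python
def feedback_echo(signal, sample_rate, delay_ms, feedback, tail_repeats=4):
    """Discrete echo: the input recurs every delay_ms milliseconds,
    each repeat attenuated by another factor of feedback. Unlike
    reverberation, which smears thousands of tiny reflections into
    a blurred cloud, every repeat here stays clear and distinct."""
    d = int(sample_rate * delay_ms / 1000.0)
    out = list(signal) + [0.0] * (d * tail_repeats)
    for n in range(1, tail_repeats + 1):
        gain = feedback ** n
        for i, s in enumerate(signal):
            out[i + n * d] += s * gain
    return out
```

With a short `delay_ms` you get the "tight t t t" flavor, where only the consonant tail repeats audibly, and with a longer `delay_ms` the entire word repeats ("tight tight tight"), which is the distinction drawn above.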

I did a new version of "(Baby You Were) Only Dreaming" that has the Timeless 2 VST plug-in echo unit on one of the two vocal tracks, and this made it possible to have a bit more FUN with echoes, really . . .

[NOTE: One of the more fascinating aspects of the Timeless 2 VST plug-in echo unit is that it is very melodic, which is obvious during the first bridge or interlude, where it creates a spectacular "flying zoom" artifact from the tail of the last word of the phrase ("I'm not myself tonight"), which sounds like a "whoosh" or whatever . . . ]

Image

"(Baby You Were) Only Dreaming" (The Surf Whammys) -- July 17, 2011 -- MP3 (9.2MB, 276-kbps [VBR], approximately 4 minutes and 26 seconds)

Really!

From a different perspective, this specific type of echo creates what one might call a "syrupy" flavor, where (a) when a word ends abruptly you hear rapid distinct repeats but (b) when the end of a word is sustained the echoes increase the duration of the sustain, which overall maps to being able to control the "syrupy" aspects of the singing and basically is a way to emulate some of the vocal techniques that for the most part only highly skilled singers are able to do, where one example is the way that Don Everly sustains the end of the word "all" at 0:50 in "Cathy's Clown" during his first vocal solo by singing it as "all-laa", which is a way to emulate an echo unit and is a technique that singers typically learn either in the studio as part of what I call "working an echo unit" or in a liturgical choir that performs in large cathedrals where there is a lot of reverberation (which requires emphasizing the ends of words so that they are distinct and is one of the more paradoxical aspects of "syrupy" singing), which is fabulous . . .

[NOTE: I think it is Don Everly, but regardless of whether it is Don or Phil, he is the Everly brother on the right in the YouTube video . . . ]

"Cathy's Clown" (The Everly Brothers) -- YouTube music video

Fabulous! :)

P. S. The way vocal processing is done is a key consideration for designing and playing instrument "sparkles", which is a layering activity, since the general goal is for instrument "sparkles" to coexist in a happy way with all the vocal "sparkles", where yet another rule is that for the most part (a) instrument "sparkles" need to be primarily dry, but (b) vocal "sparkles" need to be primarily wet, which is where Notion 3 becomes so vastly important, because it is the only practical way to create primarily dry instrument "sparkles" easily . . . :ugeek:
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!

Re: Notion 3, DISCO Songs, and Sparkles

Postby Surfwhammy » Fri Jul 29, 2011 9:00 am

After discovering that the instructions for microphone selection in the ARC System were a bit confusing, which led to selecting the wrong microphone, I ran the calibration again but this time after selecting the correct microphone, which is identified by a thin orange line rather than by the model number, since the model numbers are the same . . .

Once the loudspeaker monitors were correctly calibrated, I decided to do a bit of work on "Feel Me", which is another of the many DISCO and Pop songs I am developing for my pretend musical group (The Surf Whammys), and after getting all the Notion 3 generated audio recorded in Digital Performer 7.24 as soundbites via ReWire, I did a new loudspeaker mix and then composed and recorded the lead vocal track on the fly in real-time, although I sang it once to set the microphone level as a bit of practicing, which is the current compromise in my new strategy to make an effort to compose vocal melodies in advance and to practice singing . . .

[NOTE: As a bit of background, in the 1970s an audio engineer told me that Paul McCartney did the vocals for "Ram" on the first or second take, and I thought this was vastly cool, so about five years ago I decided to do everything on the first or second take as an experiment both for lead guitar solos and vocals, although I already had been doing this for decades with bass guitar. It is a fascinating strategy, although at first it was patently strange, if not a bit frightening, but the key to doing it is discovering how to avoid being judgmental. The interesting aspect is that the unconscious mind already knows a virtual festival of information, so once you discover how to play and sing without actually thinking in any immediately conscious way you can do a lot of quite amazing things that simply cannot be done any other way, and once I got past the often sheer terror of not knowing which notes and phrases were going to appear, I discovered that there is a way to rewire the frontal eye fields (FEF) region of the brain so that it works interactively with the auditory cortex, which is the only way one can play and sing elaborate musical phrases in real-time on the fly when some of the phrases contain rapid series of notes in the 25 to 50 millisecond range, which basically is faster than the double kick-drumming of a skilled Heavy Metal drummer, since (a) this is one of the amazing functions of the frontal eye fields region of the brain and (b) it simply is impossible to contemplate or ponder anything this rapid in an immediately conscious way. On the other hand, yet another thing I discovered more recently is that the audio engineer was being very specific about perhaps one or two songs rather than every song in general, since it now is clear that Paul McCartney actually composes and practices his singing in advance, but so what! It was a great experiment, and it transformed the way I play lead guitar, which is fabulous . . . ]

Image
Brodmann Areas on the Lateral Surface of the Brain

Frontal Eye Fields (wikipedia)

Curiously, the frontal eye fields region is located in the section of the brain bounded by the 4, 6, and 8 Brodmann areas on the lateral surface of the brain, as shown in the diagram (see above), and this literally maps to the "top of the head", so when folks say that they played a lead guitar solo or composed and sang a melody "off the top of my head", if there are a lot of rapid phrases, then this is a scientifically accurate observation . . .

Overall, I think this is easier to do for folks who primarily play and sing "by ear", which is the best way to discover how to do it, but regardless the key is to create a happy space where you can play and sing whatever appears in your mind in real-time without being bothered by wasting time trying to determine whether intervals, notes, and phrases are "good", "bad", or "indifferent", which is the suspending judgment aspect . . .

Another thing I discovered is that while whatever you are playing or singing in real-time on the fly might appear at the moment to sound odd, if you ignore that frivolous thought and continue, then when you listen to it later, it is considerably more logical than you might have thought when you were doing it . . .

The FACT of the matter is that in Western music, there are 12 notes appearing in perhaps 8 octaves, and all of them are good . . .

If you are thinking Ionian but everything is Phrygian or Locrian, then it might sound a bit odd, but all you need to do is to switch to the correct mode or scale, and then there you are . . .
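The modes mentioned here are all rotations of the same whole-step and half-step pattern, which a short Python sketch makes concrete (note numbers are MIDI, where middle C is 60):

```python
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole/half-step pattern of Ionian
MODES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

def mode_steps(name):
    """Each mode is the major-scale step pattern rotated to start
    on a different scale degree."""
    i = MODES.index(name)
    return MAJOR_STEPS[i:] + MAJOR_STEPS[:i]

def scale(root_midi, name):
    """MIDI note numbers of one octave of the given mode,
    starting from root_midi."""
    notes = [root_midi]
    for step in mode_steps(name):
        notes.append(notes[-1] + step)
    return notes

print(scale(60, "Ionian"))    # C major: all white keys from C
print(scale(64, "Phrygian"))  # E Phrygian: all white keys from E
```

So "switching to the correct mode" really is just shifting which degree of the same 12-note material you treat as home, which is why everything remains workable even when it momentarily sounds odd.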

For vocals, I use a few key processors, and one of them is the Melodyne Editor (Celemony), which I use to do pitch correction and to do a bit of manual fine-tuning, which works nicely so long as the manual fine-tuning is within the range of a few semitones . . .

I also use TrackPlug 5 (Wave Arts) for doing various types of vocal "mid scooping", as well as noise gating and a bit of gentle compression, and I like CSR Classik Studio Reverb (IK Multimedia), since it has a very big reverberation space . . .

Most recently, I started using Timeless 2 (FabFilter Software Instruments), which is a wildly fantastic VST plug-in echo unit, and I was able to discover how to set it to emphasize the melodic breathing between notes and phrases in the single vocal track for "Feel Me", which is fabulous . . .

Image
Timeless 2 ~ FabFilter Software Instruments

[NOTE: The image for the Timeless 2 echo unit is general, and I used a very different custom echo for the vocal track. This is a loudspeaker mix done with calibrated loudspeaker monitors. The vocals are a tiny bit hot, but it is a development version, and I will do a bit more fine-tuning once I do the real lead guitar solos and backup vocals. All the instruments in this version are done with music notation and IK Multimedia virtual instruments in Notion 3, and it is mastered with T-RackS 3 Deluxe (IK Multimedia) in conjunction with the ARC System VST plug-in, which is used when mixing and mastering but is disabled when the final version is bounced to disk . . . ]

Image

"Feel Me" (The Surf Whammys) -- July 28, 2011 -- MP3 (8MB, 300-kbps [VBR], approximately 3 minutes and 38 seconds)

Fabulous! :)

P. S. Although it might not be so obvious, "sparkles" are processed in part by the frontal eye fields region of the brain, as well as by the auditory cortex, of course. So, from this perspective, one of the primary functions of "sparkles" is creating a more immersive listening experience by including additional regions of the brain in the overall auditory perception apparatus . . .
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!