
Rit & a tempo-synced delay


Postby wglmb » Wed Mar 30, 2011 2:36 pm

I have a piece with a rit in it. Several of the instruments are being sent to a bus with a delay effect on it, which is tempo-synced.
When playback reaches the rit, the delay effect can't seem to cope, and I get some messy clicking noises.
Is there any way I can mute the bus when playback reaches the rit? Or, stop sending any audio to it when playback reaches the rit?
Or any other ideas for fixing this?

Re: Rit & a tempo-synced delay

Postby Surfwhammy » Mon Apr 04, 2011 3:44 am

As best as I can determine, there are two possibilities for what a "rit" might be:

(1) Ritardando — less gradual slowing down (more sudden decrease in tempo than rallentando) (abbreviation: rit. or, more specifically, ritard.)

(2) Ritenuto — slightly slower; temporarily holding back. (Note that the abbreviation for ritenuto can also be rit. Thus a more specific abbreviation is riten. Also sometimes ritenuto does not reflect a tempo change but a character change instead.)

[SOURCE: http://en.wikipedia.org/wiki/Tempo#Terms_for_change_in_tempo ]

Both of these are similar, although there clearly are differences, but so what . . .

So what!

The key bit of information is that the delay is "tempo-synced", which most likely causes a problem either (a) as the result of the tempo-syncing algorithm doing a bit of what in computer science is called "look-ahead" and detecting a change in tempo or (b) as the consequence of not doing any look-ahead or perhaps doing it badly . . .
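To make the tempo-sync issue a bit more concrete, here is a minimal sketch in Python (a generic illustration, not Notion 3's or any plug-in's actual algorithm) of how a tempo-synced delay derives its delay time from the tempo, and why a "rit" forces the delay buffer to change length mid-playback, which is where clicks tend to come from when the change is not smoothed . . .

[code]
# Generic sketch: how a tempo-synced delay time depends on the tempo, and why
# a ritardando forces the delay buffer length to change mid-stream (an abrupt
# resize or re-read of the buffer is heard as a click).

def delay_seconds(bpm, note_fraction=0.25):
    """Delay time for a tempo-synced delay; 0.25 = quarter note."""
    beats = note_fraction * 4           # fraction of a whole note -> beats
    return beats * 60.0 / bpm           # seconds per beat * number of beats

sample_rate = 44100

for bpm in (120, 110, 100, 90):         # tempo falling during the rit
    t = delay_seconds(bpm)
    print(f"{bpm:>3} bpm -> {t * 1000:6.1f} ms -> {int(t * sample_rate):6d} samples")

# 120 bpm needs a 22050-sample buffer, 90 bpm needs 29400 samples, so the
# plug-in has to stretch, crossfade, or jump -- and jumping produces clicks.
[/code]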

There could be other causes of the problem, as well, but the important thing is to devise a solution, which can be done in several ways, depending on how you are doing everything . . .

If you are getting the Notion 3 generated audio into a Digital Audio Workstation (DAW) via ReWire and recording it as soundbites, then you will be able to have more control of the delay effect, and there are several ways to fine-tune everything, but it depends on the DAW application . . .

I use Digital Performer 7, and I get the Notion 3 generated audio into Digital Performer 7 via ReWire, where it is recorded as soundbites with each instrument being on a separate stereo track . . .

When that step is completed, I do all the effects and mixing exclusively in Digital Performer 7, which provides more control and makes it possible to do very elaborate effects . . .

And instead of running several instruments via a bus to a grouped delay effect, you can run each instrument individually to its own specific delay effect or a series of delay effects, and the delay effects can be bussed to their own separate tracks as well, which then makes it possible to do additional work . . .

Another strategy is to separate a single instrument track into parts, where for example you might cut the instrument part where the "rit" occurs and then copy it to a new instrument track, where the "rit" part simply has no delay . . .

However, if the DAW application supports automation, which Digital Performer does, then another strategy is to automate the delay unit, which will be a VST plug-in in the Digital Performer and Mac universe. You can turn off the delay unit just before the start of the "rit" and then turn it on when the "rit" is completed, and this can be done in a gradual way so the transition from delay to dry is smooth. There are special automation editing tools for doing this, and they work like graphic drawing tools, where you can adjust the curve or shape of the specific parameter that controls how much of the instrument signal is sent to the delay unit or how much of the delay is heard, and so forth and so on . . .
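As a rough illustration of the automation shape (the beat positions and fade lengths here are invented for the example, and this is only a sketch of the envelope, not any DAW's actual automation data), the delay send can be ramped down to zero just before the "rit" and ramped back up afterwards . . .

[code]
# Sketch of the automation idea: ramp the delay send down before the rit and
# back up afterwards instead of switching it off abruptly. The beat positions
# and fade lengths are made up; in a DAW you would draw this same shape with
# the automation line/curve tools.

def send_level(beat, rit_start, rit_end, fade=2.0, normal=1.0):
    """Delay-send level as a function of playback position (in beats)."""
    if beat < rit_start - fade:              # well before the rit: full send
        return normal
    if beat < rit_start:                     # fade out into the rit
        return normal * (rit_start - beat) / fade
    if beat <= rit_end:                      # during the rit: dry, no delay
        return 0.0
    if beat < rit_end + fade:                # fade back in after the rit
        return normal * (beat - rit_end) / fade
    return normal

# Example: the rit spans beats 64..72, with a 2-beat fade on either side.
for b in (60, 63, 64, 68, 72, 73, 75):
    print(f"beat {b:5.1f}: send = {send_level(b, 64, 72):.2f}")
[/code]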

This is not the only way to do it, but all the various ways are similar, and they involve working with the instrument(s) and delay(s) at a finer level of detail, which basically is a matter of creating what I call a "custom echo" effect . . .

As an example of custom echo effects, you can listen to the European Single of "Who Owns My Heart" (Miley Cyrus), which has a virtual festival of highly customized echo effects, perhaps at least a hundred of them. For example, there are approximately eight custom echoes in the first 13 seconds of the song, and what appear to be single custom echoes in one or two instances are actually groups of cascaded or pasted and combined custom echoes. These types of echoes are done one at a time and involve a lot of work, since they cannot be done by setting the parameters of an echo unit or plug-in, because each one is different, which makes it impossible to do with a preset, really . . .

[NOTE: This is the official YouTube music video for the European Single of "Who Owns My Heart" (Miley Cyrus), and it is easier to hear the highly customized echo effects when you listen with studio-quality headphones like the SONY MDR-7506 (a personal favorite) . . . ]

http://www.youtube.com/watch?v=iVbQxC2c3-8

[NOTE: I like this particular live performance of "Who Owns My Heart" since it has a lot of raw energy, and the key bits of information from my perspective are (a) that there are two electric guitars, electric bass, drumkit, keyboard synthesizer, and two backup singers, which is not a particularly elaborate band by any definition and (b) that it is abundantly clear that Miley is a stellar singer when she does the solo bridge (or whatever it is called) that starts at 2:36 and extends for approximately 13 seconds, which is just enough to "get 'er done" without straining her voice. Curiously, I think that this is the way most people hear the studio recording, at least in terms of being able to identify all the layers and levels of extraordinarily elaborate and detailed instrumentation and vocal processing. For musicians, it can appear to be "normal" that everybody should be able to hear everything instantly, but I think that for the most part, this is not the way it works. Nevertheless, all the complexity, layering, and elaborate minutiae are important, because even if listeners do not hear much of it in any immediately conscious way, they hear it subconsciously, and it produces essentially the same effect no matter how one hears it . . . ]

http://www.youtube.com/watch?v=-aiNGX2vzs0

Really!

There are several strategies for doing highly customized echo effects, and some of them are done with vocal processing software like the Melodyne Editor (Celemony), which is what I use, but you also can use echo units and other types of VST plug-ins, so it really depends on the effect and what it needs to do . . .

Summarizing, I think that you will need to work with the instrument track(s) and delay effect(s) at a finer granularity or level of detail, and there are several ways to do this, with the general strategy being to discover a way to get more intimate control over what is happening, which tends to require working with smaller parts of things in more detail, which overall is best done in a DAW application that supports automation, cutting, copying, and pasting, as well as more elaborate effects routing and so forth, for sure . . .

For sure! :)

P. S. This YouTube video provides a bit of information on one of the ways that highly customized vocal echoes are created with the Melodyne Editor, and it provides a visual way to see what you are doing and the way it works in terms of notes, pitch, formants, duration and so forth . . .

[NOTE: You can do Auto-Tune (Antares) types of effects with the Melodyne Editor, but they are not done automatically, so each one is a custom-designed effect. The reason I like the Melodyne Editor is that it is not so focused on doing the "Cher Effect". It has very complex and powerful capabilities, and it makes it possible to do some quite amazing things with instruments and voices, since the first thing it does is analyze the audio for an instrument or voice, with the result being that all the individual components of notes, pitch, formants, duration, and so forth and so on are identified and digitized in a way that makes it easy to apply various mathematical and musical algorithms to everything. In the more "way out there" department, for example, you can use the Melodyne Editor on a piano track and change the mode from Ionian to Lydian or one of several types of minor scales that are not musical modes, per se, which is quite fascinating. You also can change the key, but there are limits on how far down or up you can move notes, since at some point it sounds artificial, but if you keep things within a few semitones, everything sounds very natural. Once you understand how this works, as an example, you can listen to the official recorded version of "Who Owns My Heart" (see above) and then listen to a live performance where there are not nearly so many highly customized and tailored vocal effects, at which time it becomes very easy to determine what is real as contrasted to what is highly customized and tailored in the digital universe of the recording studio . . . ]

http://www.youtube.com/watch?v=7o8yTwUMwow

The examples in the YouTube tutorial are doing custom echoes for a vocal track, but it works just as easily with instrument tracks, which is fabulous . . .

Fabulous! :)

P. P. S. "Believe" is the song by Cher that was one of the first instances of using Auto-Tune technology in a hit song or whatever, and it is the source of the colloquial name "Cher Effect" for this type of Auto-Tune vocal processing . . .

It also has custom-designed echoes, really . . .

http://www.youtube.com/watch?v=LbXiECmCZ94

Really!

And there is a stellar live performance of "Believe" that makes it easier to understand the basic strategies used in the vocal processing, where doing everything in a live performance requires two separate microphones, one for the AUTO-TUNE sections and one for the standard echo singing, and although it is not shown, there probably are two separate vocal producers working the various controls and effects for the two microphones, which is fabulous . . .

[NOTE: Even with all the elaborate vocal effects, this only works in a live setting when the singer is extraordinarily precise, since ultimately it is the singer who does the important work . . . ]

http://www.youtube.com/watch?v=DDeke8gGLp4

Fabulous! :)
The Surf Whammys

Sinkhorn's Dilemma: Every paradox has at least one non-trivial solution!

Re: Rit & a tempo-synced delay

Postby wglmb » Wed Apr 06, 2011 2:11 pm

Thanks for your reply.
Both of your suggested work-arounds had occurred to me, but I was hoping someone would know a neater way of doing things. Ah, well.

I'm reluctant to apply the effects in my DAW, so I'll probably go with the split-the-instruments-up approach.

Re: Rit & a tempo-synced delay

Postby Surfwhammy » Thu Apr 07, 2011 5:27 am

wglmb wrote:Thanks for your reply.
Both of your suggested work-arounds had occurred to me, but I was hoping someone would know a neater way of doing things. Ah, well.

I'm reluctant to apply the effects in my DAW, so I'll probably go with the split-the-instruments-up approach.


QUESTION: Which DAW are you using?

If this is for performing, then it probably requires a different type of strategy, but if you are using Notion 3 to generate the audio based on music notation, then it is easier to do this in the DAW application, albeit with a few caveats . . .

With a DAW that supports automation and soundbite editing, this is easy to do, and it does not require a lot of extra work, although it does require a bit of editing and fine-tuning of the soundbites and echo unit (delay) . . .

The basic rule for echo work is that when the parameters need to change (as contrasted to being set one time and then remaining constant), then there are two general ways to do this:

(1) Use a single set of echo units (which can be one echo unit or several that cascade) but automate their behaviors to change the parameters as needed . . .

(2) Split the soundbite or recorded material into parts and then work with each part separately, which is done by creating new tracks and then cutting sections of the soundbite or recorded material and pasting them into the newly created tracks at the same locations as the original, which then allows you to have a fixed-parameter echo for each of what then are separate tracks (a rough sketch of this idea follows below) . . .
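As a rough sketch of strategy (2), with a made-up split point and a deliberately plain fixed-parameter delay (not any particular plug-in), the material can be cut where the "rit" begins so the delay only ever runs at a constant setting and the "rit" section stays completely dry . . .

[code]
# Sketch of strategy (2): split the recorded material at the point where the
# rit begins, keep a fixed delay on the first part, leave the rit section dry.
# The split point and delay settings are invented for the example.

import numpy as np

def simple_delay(x, sample_rate, delay_s=0.5, feedback=0.4, mix=0.35):
    """Very plain fixed-parameter feedback delay."""
    d = int(delay_s * sample_rate)
    y = np.copy(x)
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]          # feed the delayed signal back in
    return (1.0 - mix) * x + mix * y         # blend dry and delayed signals

sr = 44100
track = np.random.default_rng(1).standard_normal(sr * 8).astype(np.float32)

rit_start = 6 * sr                           # pretend the rit begins at 6 seconds
before, during = track[:rit_start], track[rit_start:]

# "Two tracks": the first keeps its fixed delay, the second has none at all.
processed = np.concatenate([simple_delay(before, sr), during])
[/code]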

Both of these strategies can appear to be a bit complicated, but once you do them a few times, it becomes second nature . . .

Another useful bit of information about echo and reverberation units is that it tends to be a good idea to apply a bit of frequency filtering before you send the audio to the echo and reverberation units, which might provide a third strategy . . .

I found an excellent book that explains in great detail the various recording and equipment strategies used at Abbey Road Studios to record the Beatles, and one of the more useful bits of information is that the audio sent to the various reverberation and echo units was filtered so that very low frequencies and very high frequencies were not sent to those units, where, as I recall, the low frequencies were brick-walled at 500 Hz and the high frequencies were brick-walled at 10 kHz . . .

http://www.recordingthebeatles.com/

So, nothing lower than 500 Hz was sent to the echo and reverberation units, and nothing higher than 10 kHz was sent, based on the general idea that lower frequencies had too much amplitude, hence caused problems with the levels, and higher frequencies just added hiss and noise . . .
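As an offline illustration of that idea (using SciPy as a stand-in, with a high-order Butterworth band-pass approximating the brick-wall behavior; the 500 Hz and 10 kHz values are the ones from the book), only the mid band of the signal would be fed to the echo and reverberation sends, while the dry track keeps its full bandwidth . . .

[code]
# Sketch: band-limit the copy of the signal that feeds the echo/reverb send,
# roughly 500 Hz - 10 kHz, and leave the dry track untouched. A high-order
# Butterworth band-pass stands in for a true brick-wall filter here.

import numpy as np
from scipy.signal import butter, sosfilt

def bandlimit_send(audio, sample_rate, low_hz=500.0, high_hz=10_000.0, order=8):
    """Return a band-limited copy of `audio` to feed an echo/reverb send."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

# Example: the send copy loses the sub-500 Hz rumble and the >10 kHz hiss.
sr = 44100
dry = np.random.default_rng(0).standard_normal(sr).astype(np.float32)
send = bandlimit_send(dry, sr)
[/code]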

If you are doing this entirely within Notion 3, it might be possible to use an equalizer, notch filter, bandpass filter, or something similar to adjust the audio that is sent to the delay unit . . .

Wave Arts has an interesting VST plug-in that makes it possible to work with separate regions or "bands" of audio, which is useful for example when there are noises in different frequency ranges but the recorded material otherwise is fine . . .

MultiDynamics 5 ~ Wave Arts

http://wavearts.com/products/plugins/multidynamics/

[NOTE: There are several ways that "brick wall" is used, and one way refers to the volume, where for example a brick-wall limiter creates a ceiling or upper limit for the volume and does not allow the volume to increase beyond the upper limit or ceiling, but there also are brick-wall filters that only let frequencies or pitches in a specified range pass through the filter, and the "brick wall" aspect refers to the restriction being absolute . . .

If you set the lower "brick wall" to 500 Hz and the upper "brick wall" to 10 kHz, then for example a 400-Hz sound effectively encounters a "brick wall" that does not let it pass through . . .

In other uses, a "brick wall" is similar or perhaps the same as a "hard knee", while a less restrictive boundary or filter is more like a "soft knee" . . .

And the "knee" aspect refers to the concept that when graphed the curves look a bit like a knee and calf of a person sitting in a chair, where a "hard knee" is when the person is sitting upright in an "attention" position but a "soft knee" is when the person is a bit more relaxed and has their legs extended more in front of the chair, which is an analogy, metaphor, or simile that apparently is intuitive to nearly everyone except me, although it eventually made a bit of sense . . .

One of these techniques might work, but overall the rule tends to be that when you need an echo or reverberation to behave in a very specific way, which in some respects is the general case, then you need to control it very precisely, which essentially maps to creating a custom echo or reverberation effect . . .

Echoes and reverberation add depth, space, and motion to music, and I like to use elaborate echoes for lead guitar and vocals, which is fabulous . . .

Fabulous! :)

P. S. On a related note, while it can appear to require a lot of extra work to create custom echoes and other types of special effects, this mostly is a matter of the way musicians and composers view everything, which is entirely different from the perspective of audio engineers and producers, and it is something that took me a while to comprehend in an immediately conscious way . . .

One way to understand this is in terms of artists and photographers, as contrasted to printers . . .

Artists focus on painting pictures and photographers focus on capturing images on film or, more recently, in digital media, but when it comes to having the pictures and photographs printed, the focus shifts to paper, ink, printing presses, and so forth and so on, and this is where a skilled printer or lithographer becomes very important. For example, in lithography it might require printing several plates of very specific types of ink to reproduce a particular shade of red, so while doing a four-color separation might be fine for general work, if the goal is to produce a high-quality lithograph, then the expertise of the lithographer becomes very important and basically determines how everything happens . . .

And this is the way it works with composers, musicians, singers, and songs, where those folks do their work but then the audio engineers and producer make it sound good on the particular recording media, and while everything begins with the composer, musicians, and singers, the way it ultimately sounds is determined by the expertise and skills of the audio engineers and producer . . .

Re: Rit & a tempo-synced delay

Postby wglmb » Thu Apr 07, 2011 8:49 am

I use Mixcraft (relatively cheap, powerful, and easy to learn!).
I know it'd be easier to do what I want in Mixcraft, but my normal workflow is to get all the instruments sounding how I want them in Notion, then export them as audio and import them into Mixcraft, where I do tiny adjustments and add vocals.
So it's really just force of habit that makes me want to do it this way! Silly, I know ;)

Re: Rit & a tempo-synced delay

Postby Surfwhammy » Thu Apr 07, 2011 10:23 pm

wglmb wrote:I use Mixcraft (relatively cheap, powerful, and easy to learn!).
I know it'd be easier to do what I want in Mixcraft, but my normal workflow is to get all the instruments sounding how I want them in Notion, then export them as audio and import them into Mixcraft, where I do tiny adjustments and add vocals.
So it's really just force of habit that makes me want to do it this way! Silly, I know ;)


I visited the Mixcraft website, and there is a comparison chart that shows all the stuff it does compared to Cubase Studio 5, Cakewalk SONAR Home Studio 7 XL, and Acid Pro 7, so it looks like Mixcraft is a full-featured DAW application . . .

Mixcraft supports automation, and there is an example of a track with something being controlled by automation, where the automation is done as a line that has an arc of points in the middle, which typically indicates that the level or whatever is lower and then gradually increases and decreases (the arc of dots), followed by returning to the original level . . .

This also suggests that there are "drawing" tools for editing automation lines, which is very important, since the "drawing" tools allow you to make everything work smoother than simply recording the particular control as you move it . . .

As noted, I use the Notion 3 Mixer to adjust levels and panning when I am working on music notation and need to hear everything in playback, but once I am happy with the music notation and the VSTi instruments, I save the project, and then I set all the tracks to 0 dB and remove all the effects, including the Notion 3 Reverb on the Master stereo output track, after which I assign ReWire channels and then do a "Save As" where I append "ReWire" to the file name . . .

This way, I have two versions of the score--one of which has the ReWire channels, tracks at 0 dB, and so forth . . .

Then, I close Notion 3 and start Digital Performer 7, which is the DAW I use . . .

In Digital Performer 7, I create new tracks and assign them ReWire channels to match the ReWire channels of the Notion 3 score, and then I start Notion 3, where I use Digital Performer 7 as the ReWire host for purposes of recording the Notion 3 generated audio via ReWire as Digital Performer 7 soundbites . . .

And based on various experiments, I avoid using channels 1 through 10, so I start with the 11-12 pair of channels, since the pairs are 1-2, 3-4, 5-6, and so forth, where the number pattern is odd-even . . .

Additionally, I limit Notion 3 scores to 25 instruments, where 5 of them are common to all the subscores for a song . . .

Using subscores requires a tiny bit more work, as well as keeping all the subscores in a common folder and using a naming convention for the file names, but this is not difficult to do, and it works very nicely, since most of the VSTi instruments I use are "heavy" in terms of application and system resources, so keeping them in subsets of 25 avoids problems . . .

The reasons I do all the effects work in the DAW (Digital Performer 7) are (a) that there is more control and (b) that I can have more "heavy" VST effects in Digital Performer 7 than in the Notion 3 Mixer, which is important since some of the effects plug-ins (VST) I use are vastly "heavy", where two examples are AmpliTube 3 (IK Multimedia) and Panorama 5 (Wave Arts) . . .

And I like to use a lot of individual components from T-RackS 3 Deluxe (IK Multimedia) on drumkit tracks, bass tracks, vocal tracks, and any instrument that needs a stronger level, vacuum-tube blur, punch, or whatever . . .

Whatever!

The T-RackS 3 Deluxe components also are "heavy" but not quite to the extent of AmpliTube 3 and Panorama 5, and there is a limit to how many of them the Notion 3 Mixer can handle, which generally is in the range of 10 or so when T-RackS 3 Deluxe also is used for mastering the Notion 3 Master stereo output track . . .

Explained yet another way, I treat Notion 3 as if it were a musician or singer, and the goal is to record it dry at a strong level . . .

The only adjustments I make and keep in the ReWire version of a Notion 3 subscore are the panning settings done to create what I call "sparkles", but for the other tracks in Notion 3 I set their panning to the full range, which makes it easier to work with panning in Digital Performer 7 once the Notion 3 generated audio has been recorded as soundbites in Digital Performer 7 . . .

Again, this certainly can appear to be a bit confusing, but once you do it a few times, it is not the least bit confusing . . .

And the reason for mixing everything in Digital Performer 7 is that by not placing arbitrary restrictions on the Notion 3 generated audio tracks--other than the panning settings for "sparkles"--this provides the most flexibility and control over the instruments for purposes of ensuring that the instruments do not overpower the singing, which for the most part is not something that I can determine until I actually record the singing and decide how I want to enhance it with custom echoes and so forth . . .

The problem with doing instrument mixes is that some of the instruments inevitably will dominate, which is fine when it is an instrumental song, but when there is singing the general rule is that the singing dominates, except during instrument breaks, where for example the lead guitar becomes the dominant force . . .

And from this perspective, it is much easier to do all the mixing in the DAW, which is the way I do it . . .

Overall, it requires more work, but this is the way it is done by major studios for hit records, so I think it makes sense for me to do it this way, since I want to have hit records . . .

At first, I tried doing everything with real instruments, but I never was able to get good levels, which mostly is a matter of not having a lot of stellar microphones, so after doing some experiments with Notion 3 and VSTi instruments I realized that this strategy is much better and the levels are excellent . . .

I continue to do real rhythm and lead guitar, for which it is easy to get good recording levels, and I do real singing, which also is easy with respect to getting good recording levels, since I have two reasonably good microphones and I do a lot of post-processing with the Melodyne Editor (Celemony) . . .

As an example, this is the first song done (a) with music notation and VSTi instruments in Notion 3 that are recorded in Digital Performer 7 via ReWire as soundbites and (b) real singing done in Digital Performer 7, and it is a headphone mix, which is what I do when I am working on a song, as is the case with this song, since I have not added the real lead guitar solos and backup vocals . . .

http://www.surfwhammys.com/Im-Going-Goo-Goo-Over-Ga-Ga-11-28-2010-2-DP7.mp3

Once everything is recorded, I switch to doing loudspeaker mixing, but for now it is a headphone mix, which is fabulous . . .

Fabulous! :)

