Posts Tagged ‘software’

This post is going to be a tutorial on doing mashups in Logic Pro using some of its built-in tools, mainly Flex Time and the EXS24 sampler. The first thing I want to talk about is housekeeping. When you’re doing a mashup, every sample or loop needs its own track. You can sometimes make exceptions for loops or samples from the same song, or from similar songs, but if your computer has the power, I still recommend putting them on separate tracks. The reason is quite simple: every song is mixed differently, so when doing a mashup of all these different songs, you’re going to have to mix each one differently to get them to work, as well as change their tempo, their pitch and numerous other things.

As you can see here, I’ve put all my samples/loops on different tracks, and some of them have drastically different settings. Now, on to collecting the samples/loops themselves. Some producers will disagree with me here, but I personally find the best way is to just cut the sample in the arrange window, then test it by looping it in the sample editor. The picture below shows the cut file selected, which makes it show up in the sample editor. By clicking the button with the two arrows following each other, then clicking the speaker button next to it, you can hear the selection played as a loop. Then it’s just a matter of some careful listening to get your selection perfect.

Next up is speed and beat editing, so that everything in your mashup sounds like it’s at the same speed, with beats that sync up perfectly. Logic makes this process very simple with its Flex Time tool. Once you have your loop cut, set the tempo you want your mashup to be at, select the sample you want to move to that tempo, and drag its end to the number of bars the sample goes for. In the case below, it’s 4.
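Flex Time does all the stretching for you, but the arithmetic behind tempo-matching is worth seeing once. Here’s a minimal Python sketch (the 100 BPM loop and 128 BPM project tempo are made-up example values, not from this mashup):

```python
# Sketch: the arithmetic behind tempo-matching a loop.

def stretch_factor(source_bpm: float, target_bpm: float) -> float:
    """How much the audio must be stretched (>1) or shrunk (<1)."""
    return source_bpm / target_bpm

def bar_length_seconds(bpm: float, bars: int, beats_per_bar: int = 4) -> float:
    """Duration of a number of bars at a tempo, in seconds."""
    return bars * beats_per_bar * 60.0 / bpm

# A 100 BPM loop pulled into a 128 BPM project gets shorter:
print(stretch_factor(100, 128))    # 0.78125
print(bar_length_seconds(128, 4))  # 7.5 seconds for 4 bars at 128 BPM
```

Dragging the loop’s end out to 4 bars is effectively asking Logic to apply that stretch factor for you.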

Now, once the tempo syncing is done, you’ll sometimes find the beats don’t land perfectly on the beat, so you have to nudge them a little. To do this, just click a spot in the loop so that the locators come up, then drag the locators left or right to sync the beat.

The final technique for getting the mashup to work is making sure all your loops and samples are in the same key. This is done pretty easily using the pitch shifter: just move it up or down as much as you need. (It takes some practice finding the right keys, but once you get the hang of it, it’s easy.)
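Under the hood, shifting pitch by semitones is just a frequency ratio. A small Python sketch of the maths (the 3-semitone example is my own, not from the post):

```python
def semitone_ratio(semitones: float) -> float:
    """Frequency ratio for a pitch shift of n semitones (equal temperament)."""
    return 2.0 ** (semitones / 12.0)

print(semitone_ratio(12))  # an octave up doubles every frequency: 2.0
print(semitone_ratio(3))   # 3 semitones up multiplies frequencies by ~1.189
```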

Another technique I like to use in mashups, or any kind of modern music production, is the stutter. This is just a simple matter of clipping sections out of an audio file in tempo.
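“In tempo” just means the cut points fall on exact note divisions. A rough Python sketch of where those cuts land (the 128 BPM tempo, 44.1 kHz sample rate and 1/16-note division are assumed example values):

```python
# Sketch: where the cut points fall when stuttering "in tempo".

SAMPLE_RATE = 44100  # CD-quality rate, assumed for illustration

def slice_length_samples(bpm: float, division: int) -> int:
    """Length of one slice in samples; division=16 means 1/16 notes."""
    beat_seconds = 60.0 / bpm            # one beat (1/4 note) in seconds
    slice_seconds = beat_seconds * 4 / division
    return round(slice_seconds * SAMPLE_RATE)

def cut_points(bpm: float, division: int, n_slices: int) -> list:
    """Sample positions to cut at for n_slices consecutive slices."""
    step = slice_length_samples(bpm, division)
    return [i * step for i in range(n_slices + 1)]

print(cut_points(128, 16, 4))  # [0, 5168, 10336, 15504, 20672]
```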

There’s one last technique I use in my mashups: creating a custom sampler instrument in the EXS24 sampler. What I’ve done here is cut each beat out of a synth riff, saved the beats as a series of audio files, then imported them into the EXS24 (this is done by opening the synth, clicking Edit and dragging in the files). Then I assigned each of the samples to a note on the keyboard (seen at the bottom of the picture) so that playing the notes in order plays the riff. This lets you play the notes in any order and completely reinvent the riff.
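Conceptually, the sampler instrument is just a table from keyboard notes to audio slices. A toy Python sketch of the idea (the file names and the C3 starting note are hypothetical; the EXS24 handles the real mapping in its edit window):

```python
# Toy sketch: a sampler instrument as a note-to-slice table.
# File names are hypothetical stand-ins for the exported riff beats.

riff_slices = ["riff_beat1.wav", "riff_beat2.wav",
               "riff_beat3.wav", "riff_beat4.wav"]

# Assign each slice to a consecutive MIDI note, starting at C3 (note 60):
keymap = {60 + i: name for i, name in enumerate(riff_slices)}

def play(notes):
    """Return the slices triggered by a sequence of key presses."""
    return [keymap[n] for n in notes]

print(play([60, 61, 62, 63]))  # in order: the original riff
print(play([63, 60, 63, 61]))  # reordered: a reinvented riff
```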

To listen to my mashup and hear some of these techniques, you can find it on SoundCloud here.

The Mix was an intense process. Without many instruments we had to create a full sound. With the help of Logic 9, I think it went pretty well.

All songs were recorded in the one session, and each instrument used the same track across songs, so these settings were applied to all the songs before they were split off and minor tweaks were made (most of which probably won’t be mentioned because they’re not really very interesting). Sam wanted his EP sound to be really consistent, so we decided doing it like this would be a fun, challenging and interesting approach.

I started mixing the first song, which was probably the thinnest of the lot, with nothing but guitars and vox. It’s quite an intense and raw song, and I wanted the EP to start strong. I started with the guitars and did the usual: pan each mic to either side, high-pass and low-pass EQ to keep out sounds outside the instrument’s frequency range, some low-mid cuts and a slight high-mid boost to eliminate any masking, a compressor and a noise gate. This got it sounding pretty good, but after mixing everything else the guitar just seemed to not… sit right. It was a little overpowering in the high mids, so I decided to do something I had tried in a different mix: I put a stereo spreader on both guitar mic tracks and spread the upper mids a little, order 11 for one and 12 for the other, so the same frequencies weren’t being spread on either side. With each track already panned to one side, this pushed some of those frequencies just a little closer to the middle, widening the sound at the loss of some of its power. Perfect!

On the left we have the left guitar mic and its plugins, on the right, the right!

The vocals were a lot simpler: just a matter of EQ, compression, a noise gate, and a tinyyyyyyy little bit of delay.

Now, the lead guitar was interesting. For the electric lead solos we had two mics, one dead on and one to the side, so I panned them to reflect this. This sat the guitar slightly to one side but still gave a nice full sound, almost as if a band were playing and the guitarist had come up to do a solo, standing just to the right of the singer.

Here you can see the centered mic on the left and the panned right mic on the right, as well as the EQ, compression, noise gate and delay decisions I made.

Next were the acoustic guitar solos. There wasn’t really anything special done here; they were recorded with a single mic, so I opted against any crazy stereo imaging. Quite a decent amount of EQ and delay, more compression than I normally try to use, then a good ol’ noise gate.

The only other major instrument recorded was the double bass. Once again a pretty simple process, though I decided to go for some more creative EQ, trying to keep the clarity of the double bass whilst removing resonances and fixing some masking problems.

There were some other instruments used, including an egg shaker, a MIDI glockenspiel, a MIDI kick drum and some whistling, but I opted against covering those as they were very simple processes. If anyone is interested, let me know and I’ll show you how I did them, or help as best I can with mixing those instruments.

The final part of the mix was the reverb. This was done by setting up an auxiliary channel strip, putting the Space Designer reverb on it, then bussing all tracks to it. I don’t know very much about designing reverbs, in reality or in plugins, but the preset my ears agreed with most was the small booth reverb. It just sounded right and fitted the style perfectly.

I also used a concert hall reverb on the vocal echoes in the first song, ‘Heaven’.

If anyone has any advice for my mixing process for future mixes, please don’t hesitate to send it my way!

Also, please look up Sam Luff on Facebook, or soon iTunes! He will be extremely grateful and is very much worth your time!

Audio Technology magazine posted this on Facebook; I found it rather humorous.

This is a very interesting idea I had never even thought about! As we all know, every piece of hardware and software sounds different; this would be a brilliant way to get some really original synth sounds, live and in the studio.

Glitzerstrahl

Here’s something that came as a big revelation to me when I first saw it. Guitar effect pedals are an awesome addition to your synth setup!

( Keep in mind that I’m still a beginner in this field ;-), I’m easily impressed. )

I already had a distortion/fuzz pedal (the Plimsoul) for my Strat that I really, really like the sound of, and putting it in front of, for example, the Animoog running on the iPad, or the Meeblip, gives a whole new range of possible sounds. I especially like how it gives you a truly tactile and immediate way of manipulating effects while playing live. Also, since it’s actual hardware sitting in the signal path before your instrument reaches the interface, it puts zero load on the CPU of the host computer.

The obvious downside of course is that you cannot go back and manipulate parameters or dry/wet mix…

A synth creates sound through tone generation, manipulation and amplification.

Generation:

A basic synth will often have a single oscillator (the generator) with the ability to generate some kind of sound wave, the most common being the square, saw, triangle and sine waves.

To start building a synth sound, first select your wave. The square wave has quite a ‘woody’ sort of sound and is often good for bass synths; the saw wave is quite sharp and a common lead synth wave; the triangle wave sounds similar to the square wave but a bit duller and harmonically weaker; and the sine wave is very pure and lifeless. This is because the shape of the other waves creates harmonics; if you were to filter out everything but the frequency of the note you were playing, you would end up with something that sounds like a sine wave anyway.
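If you want to see the shapes behind those descriptions, here’s a minimal Python sketch of my own that evaluates one cycle of each basic wave (phase runs from 0 to 1):

```python
import math

# One cycle of each basic wave, as a function of phase t in [0, 1).

def sine(t):     return math.sin(2 * math.pi * t)
def square(t):   return 1.0 if t % 1.0 < 0.5 else -1.0
def saw(t):      return 2.0 * (t % 1.0) - 1.0
def triangle(t): return 1.0 - 4.0 * abs((t % 1.0) - 0.5)

# Sampling a few phase points shows each characteristic shape:
for wave in (sine, square, saw, triangle):
    print(wave.__name__, [round(wave(t / 8), 2) for t in range(8)])
```

The sharp corners and jumps in the square and saw are exactly what generate the extra harmonics; the smooth sine has none.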

These waves are often not the only choices: many synths offer you the option of combining two waves, and some offer noise waves, or even the ability to create your own wave shape.

So, depending on your synth, those are your choices… for one oscillator. Very often synths will have multiple oscillators, giving you the ability to layer sounds on top of each other and blend their volumes for an even or one-sided combination. There’s usually a pitch control per oscillator too, most often used to put an oscillator up or down an octave, but sometimes used for other intervals. Be careful with intervals other than an octave: it might sound cool at first, but it can create problems in the harmonic progression of your song. Individual detune controls are often available as well, adding some crunch by putting the oscillators slightly out of tune with each other.
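Detune amounts are usually given in cents (hundredths of a semitone), and the maths is the same ratio idea as pitch shifting. A quick Python sketch (the ±7-cent spread on A440 is an arbitrary example of mine):

```python
def detune_ratio(cents: float) -> float:
    """Frequency ratio for a detune in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# Two oscillators on A440, spread +/-7 cents:
base = 440.0
print(round(base * detune_ratio(+7), 2))  # slightly sharp
print(round(base * detune_ratio(-7), 2))  # slightly flat
# The small frequency difference makes the pair beat against each other,
# which is where the "crunch" comes from.
```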

Many synths also allow you to choose how many unison voices the synth has, which essentially means the synth layers multiple copies of itself on top of itself. This is a great way to create a really huge sound, but don’t get carried away or you won’t leave any space in your mix for anything else!

If you’re just starting out, I suggest you use one or two oscillators and stick with basic wave shapes. One of my favourite sounds is 2-3 oscillators all set to a basic saw wave, which sounds kind of boring, but here’s where it gets more interesting.

Manipulation:

The first manipulator is the ADSR envelope (an envelope shapes how a parameter, usually volume, changes over the life of a note; feel free to call it the manipulator thing). ADSR stands for Attack, Decay, Sustain, Release.

The attack is how long it takes for the synth to go from silence to peak volume.

The decay is how long it takes to go from peak volume down to the sustain volume.

The sustain is the volume held from the end of the decay period until the note you’re playing is released.

And the release is the time taken for the sound to fade to silence after the note is released.

This is best shown using this graph:
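Graph aside, the four stages can also be written out as a function of time. A minimal Python sketch (all the timing and level values are illustrative, not from any particular synth):

```python
# ADSR envelope level (0..1) at time t seconds after the key is pressed.
# attack/decay/release are durations; sustain is a level, not a time.

def adsr(t, attack=0.1, decay=0.2, sustain=0.6, note_length=1.0, release=0.3):
    if t < attack:                        # ramp up from silence to peak
        return t / attack
    if t < attack + decay:                # fall from peak to the sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    if t < note_length:                   # hold while the key is down
        return sustain
    if t < note_length + release:         # fade out after the key is released
        return sustain * (1.0 - (t - note_length) / release)
    return 0.0

print([round(adsr(t / 10), 2) for t in range(15)])
```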

Slow attack and release times are most common in pads to allow the chord to swell up and fade away.

Decay and release are only experimented with excessively for more unusual kinds of sounds.

The next manipulators are the filter and resonance:

The filter is used to roll off frequencies above a certain point, and sometimes you’re given control of how fast they roll off as well. This is often used to filter out unwanted harmonics, or high frequencies that are masking another sound in the mix and that you feel you don’t need. Another common use is to have the filter set low and automate it to open up gradually, allowing the synth to grow and fill out the audio spectrum. If you don’t know how to automate, leave a comment and I’ll do a post on it, as I won’t be going into that in this post.
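To make the roll-off idea concrete, here’s a one-pole low-pass filter in Python, about the simplest filter there is (real synth filters are much steeper; the 2 kHz cutoff and 44.1 kHz rate are arbitrary example values):

```python
import math

# A one-pole low-pass filter: the simplest version of "roll off the highs".

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Smooth a signal so fast (high-frequency) changes are attenuated."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)   # move a fraction of the way toward the input
        out.append(y)
    return out

# A sudden jump gets smoothed into a gradual rise:
print([round(v, 3) for v in one_pole_lowpass([0, 0, 1, 1, 1, 1], 2000)])
```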

The resonance control creates a resonant peak around the filter’s cutoff frequency, making those frequencies more prominent in the sound. This is often used to create a bigger bass sound, or to emphasise some interesting harmonics in lead sounds.

Other manipulators are –

The LFO: This stands for ‘Low Frequency Oscillator’. It doesn’t generate sound itself; it’s still a waveform, but one used to control other aspects of the synth. A common use is to have it control the volume of a bass synth so that the volume fluctuates at the rate the wave is set to, with the wave often synced to the song tempo, and there we have… WUB WUB!
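Here’s a small Python sketch of that wub: a slow, tempo-synced wave used as a gain control (the 140 BPM tempo and 1/8-note wobble rate are assumed example values):

```python
import math

# An LFO used as a gain control: a slow raised-cosine wave synced to tempo.

def lfo_gain(t, bpm=140, division=8, depth=1.0):
    """Gain between 0 and 1 at time t seconds; one wobble per note division."""
    rate_hz = (bpm / 60.0) * (division / 4.0)   # wobbles per second
    # Starts at full volume, dips to (1 - depth), and comes back: wub.
    return 1.0 - depth * (0.5 - 0.5 * math.cos(2 * math.pi * rate_hz * t))

def wub(samples, sample_rate=44100, bpm=140):
    """Apply the LFO gain to an audio buffer, sample by sample."""
    return [x * lfo_gain(i / sample_rate, bpm) for i, x in enumerate(samples)]
```

Point the same LFO at the filter cutoff instead of the volume and you get the classic filter-wobble sound.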

Other uses are to control things like the filter, the pitch, the resonance or anything really, and you can get some really insane sounds.

Effects: These can be anything from chorus, flange, delay, reverb, or many other things, but I won’t be going into these in this post either.

Unison Detune: This is often a single knob to control the detune for all the voices in the synth that aren’t given actual oscillators.

I’m sure there are lots of synths with manipulators I haven’t mentioned, but these are the ones I’ve found to be most common. If I’ve forgotten an important one, let me know!

Finally, the amplifier:

This isn’t really something you need to think about unless you’re using a hardware synth that requires an amp to make sound, in which case you’ll just have to buy an amp and get familiar with it, because all amps are very different. Software synths are amplified by your DAW or your computer (if it’s a standalone synth) and played through your speakers.

There is also another kind of synthesis called FM or ‘Frequency Modulation’ synthesis, but I will save that for another post as it gets quite complicated.

Thanks for reading, hope you found this helpful. If there’s anything you’d like me to do a post on, let me know!

Setup and Recording:

Guitar and bass: The guitar and bass were both recorded using an Axe FX pre-amp running into a Helix Board 18 FireWire digital mixer, set up with a recording track in Logic. The tone/virtual amp setup for the guitar was an off-axis miking of a virtual Recto Orange amp. The bass was run through the Axe FX and into the mixer as a direct signal.

Drums: The drums were programmed with MIDI using Superior Drummer software.

Synths: The synths were programmed with MIDI using the Nexus plugin.

Voice: All vocals were recorded in a small room with little treatment, using an AKG Perception 220 running into a DI interface and then into a recording track in Logic. A pop filter was used; the vocalist stood a few inches from the pop filter, with the filter a few inches from the mic.

Processing and Effects:

Guitar: The two guitar parts were panned hard left and hard right, both processed with the CLA Guitar plugin from Waves to add a little reverb, compression and EQ colour; some frequencies were then EQed out separately. The intro riff was notch-EQed for effect.

Bass: The bass was EQed with a high roll off and some mids cut out, then compressed lightly.

Synths: The synths were given some drastic EQ dips, peaks and roll-offs to help stop masking. Delay and reverb were applied in the plugin whilst creating the sound.

Drums: In the plugin, the kick and snare were EQed and compressed. The toms were noise gated and compressed, and some were also filtered. All drums except the overheads and hats were bussed to a separate channel and given parallel compression; this bus was then sent to the main out with everything else at a lower level to add an intense thickness to the sound. The drums were all panned to the appropriate places, as heard from where a drummer would be sitting. Out of the plugin (i.e. on the drums as a group), a multipressor was applied to compress the upper mids, as well as a limiter with a low threshold to keep the drums from peaking.

Vocals: All sung vocals were given a tight reverb, relatively strong compression, some stereo widening (particularly on the harmonies) and some EQ for colouring. Often harmonies were pushed down in volume to keep them from masking the main melody. Screamed vocals were triple tracked, EQed, had reverb applied, and compressed slightly. One track was panned half left, one half right, and one widened to maximum.

I checked phase correlation using a correlation meter plugin, and the frequency spectrum using a multimeter plugin.

Mastering: Mastering was done using iZotope Ozone 4. In the paragraphic equalizer some lower mids were cut, as well as some of the highs, as there was a lot of hiss. A limiter was added with a -4.2dB threshold to really improve the loudness of the track. A harmonic exciter was added to enhance the higher frequencies in particular. A multipressor was used to compress the frequency bands by slightly different amounts, as the mix needed it to help it translate to the other speakers in my house; crossovers were set at 80Hz, 324Hz and 5.05kHz. A multiband stereo imager was used to centre the frequencies below 80Hz a little and widen the frequencies above 5.05kHz quite a lot.

iZotope Ozone 4 also has a built-in correlation meter, level meters and phase monitoring, making it easy to keep tabs on phase cancellation and peaking. I converted the mix to mono for a listen as well, however, just to be sure.