Welcome to the Sound On Sound Recording and Mixing podcast channel where you’ll find shows packed with Hints & Tips about getting the most out of the recording, mixing and mastering process.
More information and content can be found at www.soundonsound.com/podcasts | Facebook, Twitter and Instagram - @soundonsoundmag | YouTube - https://www.youtube.com/user/soundonsoundvideo
Hello and welcome to the Sound On Sound recording and mixing podcast channel with me, Sam Boydell.
In this episode we are going to look at a really important skill set for today's music producers: how we can create more realistic-sounding orchestras from our sample libraries. Our aim is to go from this (audio) to this (audio).
Before we move into the technical, let's think about this conceptually and understand why we want to do it: we're music makers. We're here to convey emotions and transport the listener into the worlds that we've created for them. This is true whether you're an artist or, like me, a commercials composer, and the first step to achieving those goals is to make something that's truly believable.
So, let's get started. I'm going to assume that you have at least a computer with a DAW, and from there I would highly advise that you get hold of a couple of things that make the process a lot easier.
Firstly, a MIDI keyboard controller. It can be quite basic, but it does need to have a couple of sliders to play with, because later on we're going to be mapping those to the different parameters the orchestral samples offer us. The reason for this is that a huge part of the process is getting into the mindset of an orchestral session musician, and the best way to do that is by using our hands and playing the sample libraries in live.
I appreciate people work differently, and maybe you're not a keyboard player. Some will choose to click the information in manually, and we'll be doing that to a large extent too, but there are other styles of MIDI controller out there. As an example, I've got a triple-axis MIDI finger ring that's really fun for this type of application, because I can play in with one hand whilst the other hand is moving all the parameters as I feel the music, waving in the air like a conductor would.
But, for now, the examples we're going to work through are based around a typical MIDI keyboard. Secondly, if you can afford to bring a microphone into your setup, particularly a condenser-style mic with multiple polar patterns, then we can use it for many applications to further enhance the realism and the uniqueness.
So, just a quick side note on uniqueness. We are all using the same sounds now, so throughout this we should keep thinking about how we can take these great samples that are on offer to us and make them work for us, make them unique to us. We'll touch on that a couple of times as we go through.
So we've got our setup, we are now going to need some sounds and the better the sounds are to begin with, the easier this process will be, particularly if the library has multiple variations, different playing styles, adaptable microphone positions and so on. That tends to be what you're paying for, you're paying for that extra variation. In terms of free libraries, there are some out there that are really amazing for the fact they are free.
Spitfire Audio and Pianobook are really, really great places to start, but there are plenty of other companies out there now doing free versions of their paid libraries. Something I should note is that some need you to have the full, paid version of Kontakt, so do be aware of that, because obviously it's going to add to the cost. But you should definitely be exploring different sample creators so that your sound continues to be unique, so that we're not all using the same sounds.
Another thing to bear in mind as you seek out sounds is something called room tone: the sound of the room that the sample library has been recorded in. As listeners, we subconsciously pick up on room tone as a historical characteristic. The famous scores we know and love have a particular sound about them, and when you research how they were recorded, you come to realize how strongly room tone plays its part.
A famous example is John Williams, who used to use Abbey Road for his scores; it has this classic sound about it that we remember, in particular that famous brass sound. And then you've obviously got Hans Zimmer's work, which often gets recorded at AIR Studios.
The clue being in the name, it just imparts a beautiful, airy space around its players. So we do need to think about these things. Realism comes partly from the fact that these orchestras are all recorded in the same space; they're in these really distinctive spaces that our ear is almost trained to hear. And it's very common for these composers to re-amp their computer-made sounds or synths into that same room.
That's because they know that, as listeners, we want to feel like we are physically there, transported to the center of the orchestral recording, and it's what helps to keep everything blended together. A further note on room tone, or reverbs: when you use multiple reverbs, they eventually start to blend into one, and then there is actually no room tone at all because they're smearing together.
So that can be a creative technique to use, but it can also be a negative. Certainly when we're thinking about realism, we want people to feel like they are inside the orchestral recording as it happens. So let's get started. I've got a short piano motif, and we're going to replicate its notes in a string part that will come after it.
The piano part is for context, so that we can really feel when the strings come in. What we're going to do is listen to each iteration as we go, as we build up this realism factor that we're trying to develop. And we're going to start off at its most basic stage, just as you would if you were writing a demo part.
The string sound that we'll be using is Spitfire Audio's Chamber Strings. I want to point out at this stage that this isn't a pay-to-win game. These tools are amazing and there are reasons why I bought them, but for many, many years I operated with some very old string libraries and a lot of free ones.
Having that restriction actually led me to figure out a lot of these techniques on my own, because I had to manually create a lot of the playing styles that are now available to us, or work out how to manually replicate the idea of different mic positions. So it is very much possible with whatever you can afford.
Having said that, these libraries make it a lot easier, because they've got all these amazing playing styles already. In addition, going back to the idea of room tone, a company like Spitfire worked hard to record all their samples at AIR Studios, and I really love that sound. Having various different sample libraries available to us that were recorded in the same space again comes back to the idea of it feeling a bit more real, a bit more live, as opposed to having lots of different libraries recorded in different spaces, where those spaces merge together and become a non-space. So let's have a little listen to the piano motif, followed by our basic start point, which is a simple long-note string ensemble patch.
Sounds like a computer-generated string patch to me. The aim here is to include everybody, at all different levels, and as an educator I do see this quite often: people have clicked in the notes that they want played. The first thing is that you can hear all of the notes playing at the same velocity.
Velocity is the amount of pressure you put on a key, and therefore the volume the note comes out at. Add to that over-quantizing, quantizing being the snapping of notes onto a grid of sixteenths or eighth notes, and you have two things that a real player will never be able to achieve. So this is one of the main things that we're going to hit upon.
But to start with, we simply need to break the ensemble up into its parts. So, we've got basses, we've got cellos, violas, violins 1 and violins 2. What we want to do is separate them all out, so that we've got them all on separate tracks, and this will allow us to affect each part individually. We're not going to be able to go as far as you could, though.
Chamber Strings itself has three to four players per section, so three to four viola players, three to four bass players. The library and patches you're using determine how far you can break things down, but if you were to go really deep into this, you could find a sample library that allows you to affect each individual player.
On the same note, if we're working with a library patch and all we have available to us is a patch that incorporates 10 to 15 players, then of course we're going to have to work in a slightly different fashion. All of the techniques still apply, but we have to realize that we're working within that limitation.
So we've broken the ensemble up into its parts; with Chamber Strings we've got our five sections. To start with, we just need to do some basic panning. Samples need to have their own space and their own feel. Again, we're trying to create width, we're trying to create individual players. So, simply starting with the concept of an orchestra playing live in front of you, where do they sit?
Well, traditionally they do sit in certain areas of the stage. Thinking deeper into this, when we think about mixing techniques we think of bass down the center and mid-to-high frequency information towards the sides in general, and that comes primarily from the idea of orchestration. There are some really good tools out there that actually show you this in a diagram as you're choosing between the different patches you want to play with.
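To make the seating idea concrete, here is a minimal sketch of a traditional string layout expressed as DAW pan values. The section names and numbers are my own illustrative assumptions rather than settings from this episode, and as mentioned below you can afford to be fairly loose with them.

```python
# Rough sketch: a traditional string seating plan written out as DAW pan
# values (-100 = hard left, 0 = center, +100 = hard right). The numbers are
# illustrative assumptions, not settings taken from this episode.
SECTION_PAN = {
    "violins_1": -45,   # first violins out to the listener's left
    "violins_2": -20,
    "violas":     20,
    "cellos":     40,
    "basses":     10,   # bass energy kept close to the center
}

def pan_label(value: int) -> str:
    """Turn a pan value into a mixer-style label, e.g. 'L45', 'C' or 'R20'."""
    if value == 0:
        return "C"
    side = "L" if value < 0 else "R"
    return f"{side}{abs(value)}"

for section, pan in SECTION_PAN.items():
    print(f"{section:>10}: {pan_label(pan)}")
```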
But generally, we can just be a bit random about it, as long as they have their own space. So let me play you that, how it sounds now.
And listening back to the original start point.
Interesting how some of the parts now poke out. That's because the individual section samples will have been recorded slightly differently to the ensemble patch, I imagine, with different microphone techniques and so on. For this next stage we move on to what I call Humanize, which is the MIDI Humanize function available in most DAWs.
In particular, I'm going to be using Logic Pro. In Logic, you simply double-click a MIDI region, and there's a Functions menu to the left-hand side of the piano roll that comes up. Click Functions, go down to MIDI Transform, and you've got all these different MIDI transformations you can make, one of them being Humanize.
So if we click Humanize, it comes up with a preset of things we can change: the position, the velocity, and the length of each note. As we touched on earlier, humans, believe it or not, cannot all play at one velocity at exactly the same time as each other, so we need to introduce randomisation.
People are going to be a little bit early or a little bit late, and they'll vary the length of each note and the velocity they play it at. However, we are working within MIDI's value range of 0 to 127 for velocity. And when you're working with sample libraries, it's important to understand that they're created using bands of that value range.
So, say, 0 to 40, 40 to 80, and 80 to 127. They may use three bands, they may use four, and each band triggers a different sample, whether that's a slightly different expression or, if we just think about it in terms of volume, 0 to 40 being very lightly played and 80 to 127 being as hard as the player can play. So we have to factor that in as we move through.
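To make that concrete, here is a tiny sketch of how a library might map incoming velocity to a dynamic layer. The band edges are the example values above; the layer names are my own illustrative labels, not how any particular library organises its samples.

```python
# Sketch of how a sample library might map MIDI velocity (0-127) to dynamic
# layers. The band edges follow the example above; the layer names are
# illustrative assumptions.
def dynamic_layer(velocity: int) -> str:
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must be between 0 and 127")
    if velocity <= 40:
        return "pp_sample"   # very lightly played recording
    if velocity <= 80:
        return "mf_sample"   # medium recording
    return "ff_sample"       # played as hard as the player can

# A humanized velocity that drifts across a band edge (say 79 -> 81) will
# trigger a different recording entirely, which is one reason to keep the
# randomisation small.
for v in (30, 79, 81, 120):
    print(v, "->", dynamic_layer(v))
```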
Again, going back to the idea that if we can play these parts in and get a natural velocity range in there already, then once we come to this humanisation process it becomes a lot easier, because we simply want to shift things by very, very small amounts: left, right, up and down, the length of notes, and so on.
We're simply trying to replicate what would happen in a real-life situation. The default value of 10 can be a little too much for me, so I end up moving it to 6 or 7. Let's have a listen to how that now sounds.
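Before we listen, if it helps to picture what a humanize transform is doing under the hood, here is a minimal sketch under my own assumptions about how a note might be represented; the ±6 default mirrors the value I settle on above, and the clamping reflects the 0 to 127 range and the dynamic-layer caveat.

```python
import random

# Minimal sketch of a MIDI humanize pass. Each note is a plain dict here
# (position and length in ticks, velocity 0-127); DAWs and MIDI libraries use
# their own structures, so treat this purely as an illustration.
def humanize(notes, amount=6):
    out = []
    for note in notes:
        out.append({
            # nudge the timing slightly early or late
            "position": max(0, note["position"] + random.randint(-amount, amount)),
            # vary how long the note is held
            "length": max(1, note["length"] + random.randint(-amount, amount)),
            # vary how hard it is struck, clamped so we stay in the MIDI range
            # and don't drift too far towards another dynamic layer
            "velocity": min(127, max(1, note["velocity"] + random.randint(-amount, amount))),
        })
    return out

chord = [{"position": 0, "length": 960, "velocity": 90} for _ in range(4)]
print(humanize(chord))
```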
And listening back to the original start point.
So at this point we've barely done anything really, but it's already starting to feel a lot more natural. It's at this stage that we want to start bringing our own ideas to the section of music, our own feeling. So we need to look at what the digital humanisation has just done, both positively and negatively.
A little note of caution: the first note in any MIDI region, certainly within Logic, has to be exactly on the bar where the region starts, or after it, to be played. Anything pushed before it will fall outside the MIDI region and therefore won't play. Let's have a very quick listen back to the post-humanisation version so I can point out a few things.
So I can hear that the velocity at the beginning is much stronger than the rest of the notes. It doesn't quite gel perfectly together. I think the ending note needs to be longer so it just feels more emphatic. So a couple of basic changes and this is now what it sounds like.
Once more.
And I just want to chime in here and mention other orchestral instruments and beyond: drums, percussion, brass in particular, vocals, whatever it may be. All these techniques still apply, in particular where we're going to go with it now, which is bringing in human feel. You've got to really understand particular instruments and how a person may play them. Oftentimes you'll see a brass part where every note reaches towards the next note. Well, a brass player is going to have to take a breath most of the time, so we have to add in those little breaths. We have to understand how their breath manipulates the sound, and ask whether the sample library we're working with is achieving that as well as it can.
It's all of these tiny percentage gains that are going to create a much greater result at the end. So let's take the next step. We're going to become the player now by adding in MIDI automation, and it's not just volume automation: we're going to use our keyboard's sliders, or you can use the pencil tool with your mouse, to draw in complex MIDI automation on parameters like vibrato or expression.
So we need to do two things to get going. Firstly, we have to set our DAW to merge new recordings into the existing MIDI region as we record over it. In Logic, for instance, you go to the Recording tab in the settings, where you choose how overlapping recordings are handled when cycle is on or off; setting this to merge gets rid of the default action, which would be to create loads of take folders as you record over. The second thing we need to do depends on your DAW: we want to map the sliders on our MIDI keyboard, or whatever controller it may be, to the functions within the sample instrument. We're talking about things like vibrato and expression, whatever you'd like; we need a way for the hardware to control those parameters. Every DAW is a little different, but generally within Kontakt Player you would select the parameter you want to work with, move it around a bit, move the hardware controller you want to use, then go back to Kontakt Player, right-click on that parameter and select its learn MIDI CC automation option.
So what I'm going to be doing now is, again, putting less emphasis on that really harsh note at the beginning; I want to fade it in, essentially. At the end I want this sort of snapping-back feeling. We're also going to be introducing vibrato now, or at least playing around with the abilities we have, and finally there'll be some overall volume automation on the whole string section.
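Under the hood, that fade-in gesture is just a stream of MIDI CC values. Here is a rough sketch of the sort of data a drawn or ridden swell produces, assuming, hypothetically, that the library reads dynamics on CC1, which is common for orchestral libraries but worth checking against your own library's manual; the tick lengths and values are illustrative.

```python
# Sketch of the controller data behind a "fade in" gesture. Assumes the
# library reads its dynamics on CC1, which is common for orchestral libraries
# but not universal -- check your own manual.
def cc_ramp(cc_number, start, end, length_ticks, step_ticks=60):
    """Return (tick, cc_number, value) events ramping from start to end."""
    events = []
    steps = max(1, length_ticks // step_ticks)
    for i in range(steps + 1):
        value = round(start + (end - start) * i / steps)
        events.append((i * step_ticks, cc_number, value))
    return events

# A one-bar swell (3,840 ticks at 960 PPQN in 4/4) from quiet up to fairly loud.
for tick, cc, value in cc_ramp(cc_number=1, start=20, end=100, length_ticks=3840):
    print(f"tick {tick:5d}  CC{cc} = {value}")
```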
So I've now got the string section within a group, or a bus, on which I can then automate the final output, just so we can add that extra push in the direction we want. Let's have a listen to how this sounds now.
So it's starting to really get there. The automation part, like all mixing, is where you put in the most time and effort, and in my opinion it makes the biggest difference. It starts to really impart your own emotion into the music, and it just sounds more natural. So we're coming back to room tone now. With your own pieces, you need to think about what space the orchestra is going to sit in within the track.
In mine, I've got this very upfront, intimate piano sound, and I want the orchestra to be the opposite: deep and wide. Therefore, I'm going to be utilizing the microphone positions available to me so that it feels like it's set back further. In addition, I'm going to add a reverb, a hall-setting reverb that will just give it extra depth, especially as I want that tail to drift off better than it currently does.
Probably what happened when I was making this is that I felt the room mics alone were not doing enough, which is quite common. Additionally, there's the double reverb: even though I said earlier that too many reverbs won't achieve what we want, the second reverb just adds a very subtle bit of smearing within the notes.
From one note to another, it just imparts that depth, that extra naturalness. Also, what I want you to listen out for is how I've kept the reverb quite natural, in that I haven't EQ'd out various frequencies. I haven't EQ'd all the low end out; I want the low end to come at me and engulf me. I've probably gone a bit far with it here, just for this example, but generally when you go and watch an orchestra live, the bass energy bouncing around the room really does engulf you. So let's take a listen.
It's got a really nice tail to it. For reference, I'm going to chuck in the piano as well.
Great. So let's keep pushing. We want to inject more depth and dynamics into the orchestra; in this case, we want it to sound larger than the sum of its parts. We're going to do this by understanding all the techniques our session players make available to us through the sample library. So we're looking at things like flautando or, in particular here, portamento, which I'm going to incorporate.
Certainly on the first note and the last note. Portamento is the sound of sliding from one note to another, just like you would on a guitar; it's the same on a string instrument, where the player slides from one note to the other. I want it to be very subtle, so I'm creating a new patch on a new track that has a portamento function, and I'm adding it just to the first note and the last, not simply copying and pasting the whole MIDI region onto a different instrument. And this is what it now sounds like.
So it's very subtle there, but all these two or three percent differences really do add up by the end. A quick note on playing styles: this is obviously a very short, specific section of music. When we're looking at a whole piece, and at how and to what extent the orchestra is being utilized within it, playing styles are one of the biggest things we're going to have to incorporate to build depth and dynamics from part to part.
Therefore, it's really important that we continue to develop our knowledge of terms like con sordino or flautando, just so that you've got them available to you. And I will note, going back to this being a paid-for library, that for many years I didn't have these styles available to me, and I would have to create them manually by re-engineering the sample somehow or merging two samples together.
So it was that kind of manual creation that I suppose we just take for granted now. But it is possible. This leads on to the final stretch, where we're going to continue to layer as we just have done, really building up all the various playing styles and adding them in very subtly in order to emphasize certain notes and create dynamics.
In my experience, samples struggle with that side of playing, the human player side. If you can imagine a whole orchestra that is really feeling the piece together, you're going to get much more vivid volume and vibrato spikes as each player feels it out. I've found the easiest way to replicate this, if you have enough CPU power that is, is simply to double up the notes: find a slightly different patch, push it hard on a section, and then bring the volume right down to feed it in subtly amongst your original parts.
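Here is a small sketch of that doubling trick using the same illustrative note format as earlier: copy the part onto a second, hypothetical patch, push the velocities up so a harder layer is triggered, then rely on a lowered channel fader to tuck it in underneath. The patch name, the velocity push and the -18dB trim are my own assumptions, not figures from the episode.

```python
import copy

# Sketch of the layering trick: duplicate the part onto a second patch, push
# it harder (so the louder, more vivid samples are triggered), then sit it
# well below the original with a channel trim. The patch name and numbers
# are illustrative assumptions.
def make_doubling_layer(notes, velocity_push=25, channel_trim_db=-18.0):
    doubled = []
    for note in copy.deepcopy(notes):
        note["velocity"] = min(127, note["velocity"] + velocity_push)
        doubled.append(note)
    return {
        "patch": "alt_strings_patch",   # hypothetical second patch
        "channel_trim_db": channel_trim_db,
        "notes": doubled,
    }

original = [{"position": 0, "length": 1920, "velocity": 72}]
print(make_doubling_layer(original))
```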
In addition to all the layering I've done, I've added a flautando section at the start and the finish, which really helps show off the dynamics of the playing and helps carry the music into the next sections. Like I was saying before, try to imagine that this is a small section of a film score.
The more variety you can add that contributes to dynamics, the better. So we need to go back to imagining what a 60-piece orchestra would do. Most of the time, people making string arrangements just don't go far enough in replicating all the subtle additions that session players contribute.
Let's go even further again. I've added a fake series of sounds that replicate, you know, bums on seats. I have a violin, not that you really need one, and I've recorded myself multiple times subtly fidgeting, bringing the violin out to play, my breath. There's also a blank room tone in there just to help merge the two, and I've also re-amped the whole piece through those same microphones in the same room I was using.
If you aren't familiar with re-amping, there's plenty of information out there. Very quickly: I'm sending the music out to some speakers in another room, a live room, and then using microphones to pick it back up. It's just so that I can add some uniqueness, some realism, to the whole thing.
I'm not really blessed with a great room, though, so I've actually had to feed all of that back into the additional reverb I placed over the whole orchestra earlier, and that has helped meld the two together. So let's have a little listen to where we're at now.
Let's hear that again, but let's hear the initial demo as well.
Some big changes. Each step hasn't felt like much, but we've now come a really long way with it. Which leads us on to the final step: the mixing and mastering techniques. There's something about using multiple samples within a track that just creates a lot of upper midrange, so I've placed a de-esser over the orchestral group that works to pull that area down if it gets over the top.
We're talking 2.5 to 4.5 kHz; it's just where you get a lot of that sharpness in the sound, which we don't really want. The additional room tones weren't melding together as much as I'd hoped, so I've added a little bit of compression to the whole piece, just 1dB or so of gain reduction with medium attack and release settings.
That's because I don't want to remove any of the dynamics I've worked really hard to put in place; I simply want to create a bit of a gluing effect. And then lastly, I've added an EQ. This is quite an important one, because samples tend to be quite dark, and it's all those finger sounds we really want to emphasize for our realism.
So I've added a 5dB shelf at 6kHz and half a dB at 2.7kHz, and I've set 140 percent on the width knob that this particular EQ comes with, so that we can really start to feel wrapped up in this orchestra. It may seem counterintuitive that I'm boosting half a dB at 2.7kHz while also using a de-esser to pull that same region down, but orchestral material naturally has that kind of sharpness to it; we just don't want too much of it.
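To keep those last few bus moves in one place, here they are summarised as a plain settings sketch. The structure and key names are my own shorthand; only the figures quoted above come from the episode.

```python
# Summary of the orchestral bus chain described above, written out as plain
# data. The key names are my own shorthand; only the numbers come from the
# episode itself.
orchestra_bus_chain = [
    {"processor": "de-esser",
     "target_range_hz": (2500, 4500),   # pulls harshness down only when it peaks
     "mode": "dynamic"},
    {"processor": "compressor",
     "gain_reduction_db": 1.0,          # gentle glue rather than level control
     "attack": "medium",
     "release": "medium"},
    {"processor": "eq",
     "high_shelf": {"freq_hz": 6000, "gain_db": 5.0},   # lift air and finger noise
     "bell": {"freq_hz": 2700, "gain_db": 0.5},
     "stereo_width_percent": 140},
]

for stage in orchestra_bus_chain:
    print(stage["processor"], "->", stage)
```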
So that's where the two things work in combination really well. So finally, we're going to listen to where I ended up. I'm going to play you the initial sample, then the piano, and then the final version will come in, and I will A/B between the two a couple of times.
There we have it. Thank you for listening, and be sure to check out the show notes page for this episode, where you'll find further information, along with web links and details of all our other episodes. And just before you go, let me point you to the soundonsound.com/podcasts web page, where you can explore what's playing on our other channels.
This is a Sam Boydell production for Sound On Sound.