Category Archives: Guitar Heroez

Some place to talk about guitars, music, some of the recordings that I am up to… basically anything music related.

I Don’t Care What You Call It… I Call It Music

I recently got into what felt like a really useless argument about musical genres. It felt useless because genre has become a tool for musical division. It was bad enough back when the record companies used it for marketing purposes. Nowadays, it seems like people use it just for the sake of differentiating themselves from something already known, so that other people will use their name instead of an already established one.

Even worse is that I find these names rather confining. I feel locked in. It feels to me like if I say I play a certain type of music, I can’t play any other type of music for fear of losing my current fanbase. I don’t want to feel like that. I like to write and play music. I really don’t care what you care to call what I play. I just call it music. It’s the music that moves me at the moment. It’s also the music that I like to listen to at the moment. Honestly, if it sounds like music, I’ll probably listen to it. And if I feel like writing it, who’s going to stop me?

I already had this moment with writing “Dancing With The Moon”. My son, Joshua, heard me mixing and said, “Wow! That’s not like any music you’ve ever written… I like it.” If anything, Josh validated my defining moment. And even better is that Josh didn’t even put a label on it. He just called it music.

Getting Dirty

For years, I was somewhat of a purist when it came to recording. Everything going into my computer had to be clean and plug-in free so that I could sculpt it within Logic. For some things, like acoustic guitars, that was good to a point, though even that had its moments. But for many other things, like electric guitars, it became totally uninspiring to record. What I really wanted was a tone that would excite me, and quite often it was right at my fingertips. Yet I felt I had to wait to apply it to my sound, and the wait would often result in not getting the sound that I originally intended. The main reason, as far as electric guitar went, was that there was no real interaction between the guitar and the speakers, and whatever interaction there was had a digital response, producing squeals instead of feedback, if there was any feedback at all. So, by the time the song was ready to mix, it would already be boring, and mixing would feel like nothing more than an act of turd polishing.

You could say I’ve had enough of that. If I was going to get a sound that excited me, I had to get dirty with my recording. This meant knowing the sound I wanted for the song, which is usually the case, and sticking with it to the end. Of course, this means making a commitment. But, heck, I wouldn’t really change things once I record them, so I’m good.

These days, almost any track that is not of a software instrument is recorded with some type of treatment on them. Electric guitars will go through my POD or my Rockman, using one of the amp patches. I will also make sure that I am not using headphones when I record them because I can get some really cool interaction between the guitar and speakers, and it doesn’t sound digital. For my acoustic guitars, I have been taking advantage of some console emulations, such as the Universal Audio 610-A, and I’ll drive it through either an 1176 or LA-2A and Pultec emulations. Even vocals get treated now. I’ll do anything that gets me closer to the sound that I want to hear right away.
The result lately is that my last few mixes have got me even more excited because I am getting the sound I am looking for right from the start. Everything sounds right in place and I have been able to pull mixes together within a matter of hours, instead of days because I was fiddling for that sound and feel after the fact. And, the mix to me is far better than I imagined. Even Joelle has given me the “don’t change a thing” after listening to them.

I’m not saying that this is going to work for everyone. However, it has worked wonders for me. If I know how I want something to sound, I’m going to record it right away, rather than set it all up later. And most of all, I’m not afraid to get dirty.

Bad News + Campaign Slogan + Old Guitar Riff + Piano Lessons = Song

To say that some blessings come disguised as problems would be an understatement. By now, it is no secret that I work for BlackBerry. When the news was leaked that 4500 employees would be laid off, many of us dismissed it as rumor because there were way too many media outlets and blogs grasping at straws for a story, trying to drum up interest in what was becoming a dying industry. After all, what sparks interest better than fear? It just so happened that the news was true in this case, and it felt like someone dropped a bomb. There were a lot of emotions going through me, to the point where I was feeling overwhelmed, and a lot of feelings that needed to be sorted out.

To me, there was only one way I could think of to get it all down. That was to write a song about it. As to where I got the music for this song, I had an old guitar idea that I had recorded and it never quite worked for me in that form. I have been learning how to play piano and have been practicing fairly diligently. I decided to try to play the guitar riff on the piano and I was instantly hooked on the song. All of a sudden, the lyrics containing all of my feelings were starting to bleed all over my BlackBerry keyboard, figuratively speaking.

What was missing was some form of resolution to my feelings. I needed a statement that said that no matter what was happening, no matter what I was feeling, and no matter what does happen to me at BlackBerry, I will go forward from all of this because I believe in something greater than a place of employment. Plus, I also needed to state that I would do this and help others as well, whether I was on the inside or the outside. All of a sudden it hit me: BlackBerry’s campaign slogan. We will keep moving. To take it a step further, I expanded on it to state that we will not stop. We’ll keep moving on. That was the song. Our feelings may be hammered and bruised, and in some cases broken, but we will keep moving on. What a beautiful statement.

The song is dark and full of emotions, but at the same time, it is filled with hope. There is a glimmer of light that shines through the darkness.

The song is a blessing to me because without the onslaught of emotions, how would I have written this? I needed to experience that bad news in order to come up with what I felt is a good song. It means that I am alive and so are my songwriting abilities. I do feel blessed in not only the ability to write such a song, but also in the ability to keep moving on.

Given Up Reality… For Music

I think I have discovered half of my songwriter’s block: I’ve been trying too hard to keep my music real. I’ve been so preoccupied with guitars that sound like guitars, bass lines that come from a real bass, and drums that sound like a drummer played them. It probably stems from my earlier days, when I used to write songs with a Yamaha digital drum machine and a Roland R-8m drum module, and people would comment that the drums weren’t real. Not real, even though those units used digitally sampled drums. I became obsessed with making drums real, and my obsession got the best of me as I lost something bit by bit. I was so caught up in how real things sounded, as if a band were playing them, that I forgot it was all about making music. I think I almost forgot what the fun was all about. I got caught up in a paradox of reality rules: the guitar may have been real, but the amp isn’t (it’s a POD); same goes for the bass; the drums are digitally hyper-sampled. Or I insisted on using an acoustic guitar when what I wanted was something that sounded like… an acoustic guitar. Where’s the music in all this reality?

It took my POD and the greatest birthday present Joelle ever gave me, a James Tyler Variax, to help me lose what is real and discover what is musical. This is almost as great as the present itself. I remember watching one of the Line 6 videos in which Sean Halley did this totally weird tuning and model setup in the POD, and after hearing it, I decided I had to try my own variant. After setting up my own model and tuning and adding some effects, it was almost magical. For the first time, I didn’t think about how real the guitar sounded. I thought about how musical it sounded. I was hearing music once again and I was excited. I was even more excited that the music I was hearing started turning into a song. Adding Reason 7 into the mix, no pun intended, brought out even more music. At that point, I didn’t care if it was going to sound real. I only cared if it was going to sound like music.

I then found myself grabbing a bunch of Joelle’s percussion instruments, like her one-shots and her shakers, and I sampled them into Reason and started making music with them right away. If anything, all it took was one or two samples. It didn’t take much time at all, and the next thing you know, I was putting a song together. It got me so excited that I have been waking up early in the morning to work on this song. It excites me because it sounds like music. I feel like being given back music was God’s special gift to me. Almost as special as being saved.

Does the song sound real? Who cares! It’s music!

Reflections off a bad mix

As my Christmas and New Year’s holidays come to a close and I go back to work, I gleaned a lot out of the week. To start it off, I was excited about a song I was working on. The song was inspired by noodling on my guitar synthesizer, and it quickly fleshed itself out. It was an exciting start. One thing led to another and a song was born. I got the guitars, lead and vocals down. I even made a string arrangement to add to the song. The vocal harmonies seemed fantastic, and I thought the vocal effects were chilling. Everything seemed to fit.

So what went wrong? The mix… that’s what.

I spent the past few days working on the mix trying to get things right, only to be disappointed in the end. It sounded limp and lifeless. Other than the vocals, there was nothing to get excited about. It was a total letdown. I felt like I lost a few days out of this.

However, instead of feeling like I lost my entire holidays over this, I may as well learn something from this mix, so it won’t feel like a total waste.

The first thing I learned was not to rely on a set of drum samples just because they sampled a famous drummer’s kit. What may have been good for his band is not necessarily good for a mix in other people’s work. I couldn’t get the type of oomph I needed for the song, no matter how hard I tried. And the cymbals were recorded only through the overheads, all at once, so I couldn’t separate them. Nothing worked. I should have been smart enough early in the mix to switch kits to something I knew I could work with.

I also learned how poorly they mixed with the bass guitar. I loved how the bass sounded as it was, and it was more prominent in the song than other parts. The drums are supposed to reinforce the bass; these didn’t even seem like they were part of the same song, let alone reinforcing it.

My next mistake was trying to compensate with levels. I pushed everything to the point where the buss compressor turned everything to mush. Now nothing sounded right.

To make matters worse, I thought I could cover things up by applying my tape emulation setting in Ozone. That was nothing more than an exercise in turd polishing. Let’s face it: if it sounds bad, nothing will make it good. What was I thinking?

So, where do I go from here? I think I am probably going to give this song a bit of a rest for now so I can plan my strategy for the next time I mix it. If anything, I have other song ideas that I would like to work on for now. I can’t brood over this song. There were a lot of things I got out of writing the song as well, such as programming some real nice drum lines and some real cool harmonies. If anything, I could apply them to other song ideas. So, really, nothing was wasted. If it wasn’t for the reflections of this bad mix, I probably wouldn’t have anything to think about for my next mixes to make them better.

Convoluted Reverbs

Joelle is right… I am turning into a grumpy old fart. Well, I am because I can’t believe how much time I waste trying to find the perfect sounds and settings and all the things that are messing with making music. I already grumped out about playing guitar. However, despite all of my grumpiness, I feel that I have become a better person for it. I’m starting to hear music.

However, there’s another thing that got in my way of music: Convolution Reverbs. Or, should I say convoluted?

Heck, why does wanting some ambience in my sound have to be so complicated? What happened to just dialing in a space like we used to with old hardware reverbs like my Ibanez SD1000? Heck, I could dial in a type, decay and density, along with a few other parameters, and get instant results. I find myself spending so much time looking for the right space, time and other garbage that I lose time on the music itself. Sure, if I’m looking for a medium-sized wood room and know that it’s there, I may dial it up and move on. However, for most ambiences, I can get away with a simple algorithmic reverb that I can make sound thick or thin to my liking. What do I care if it sounded like a plate or a hall if the track sounds like music? Or even more so, would you know or care if I stuck a delay before the reverb and made it sound like an airplane hangar or a bat cave? Do you know the difference between the various concert hall impulse responses? Do you know what an impulse response is? Do you care? I really don’t anymore.

Give me a simple reverb. If I can dial a great sound out of it without having to think of whether it sounds like a room or a hall or a plate, then what else do I really need? Like I said already, I want to make music. I don’t feel like being a programmer for certain rooms. I just want to plug in and play, and if it doesn’t sound like music, I’ll tweak it.
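For what it’s worth, the difference I’m grumbling about fits in a few lines of code. This is just an illustrative toy (the function names and numbers are mine, not from any plug-in): a convolution reverb smears your signal against every sample of a recorded impulse response, so the “space” is whatever room somebody captured, while a dead-simple algorithmic reverb can be as dumb as a comb filter with two knobs.

```python
def convolution_reverb(dry, ir):
    # Convolution reverb: the dry signal convolved with a recorded
    # impulse response. The "room" is whatever the IR captured --
    # you audition IRs instead of turning knobs.
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += d * h
    return out


def comb_reverb(dry, delay=2205, decay=0.5, repeats=5):
    # Toy algorithmic reverb: a train of decaying echoes. Two knobs,
    # delay (roughly "size") and decay (roughly "tail length"),
    # and you dial in a sound in seconds.
    out = [0.0] * (len(dry) + delay * repeats)
    for i, d in enumerate(dry):
        out[i] += d
    for r in range(1, repeats + 1):
        for i, d in enumerate(dry):
            out[i + r * delay] += d * decay ** r
    return out
```

Feed a single click into `comb_reverb` and you get the click followed by echoes at half, quarter, eighth the level: a reverb tail you shaped with two numbers, no impulse response library required.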

Okay, I’ve been a grumpy ol’ fart long enough… I need to make some music (yes, I programmed this to go up while I am sleeping – grumpy ol’ farts need their sleep too).

Ruined by Technology

These days, I’m starting to think that the more plug-ins I get for Logic, the worse off I am when it comes to actually trying to make music. I find that I wind up playing more with the plug-ins, trying to find the “perfect” sound, only to find that there is no perfect sound. There’s nothing magical that is going to become my inspiration for a song. I wind up with plug-ins whose interfaces seem to resemble the cockpit of a Boeing 747, and I find that I can’t do anything with them. Perhaps you can call me old fashioned, having come from a hardware background, but I like a synthesizer plug-in to look like a synthesizer. Perhaps that’s why I can use Propellerhead’s Reason so well. Everything looks familiar, right down to hooking up the cables in the back. I don’t need to think… I just make music with it.

Then there’s guitar… I can’t believe how I let myself get ruined by technology when it came to playing my guitar. All of these software amplifier plug-ins with hundreds of amps, cabinets, microphones, and stomp boxes. Heck, at first I thought I was in wonderland. I played for days with all of that virtual equipment. Was I making music with it? No! I was too busy trying to find the perfect amplifier setup. Well, I never found that perfect setup, because I was looking for an imperfect sound. The software amps sounded too perfect. What was missing was the feeling that went with the sound. Sure, no one can tell the difference when you’re listening to a finished recording (and I made a couple of them). However, the challenge for me was getting it down to that recording stage. There was just something missing when I played my guitar through it. In fact, there were many things missing: things like tonality changes as you play, as well as feedback from the speakers.

Nonetheless, I gave up on software amplifiers and have gone back to hardware when recording guitar. Sure, it’s through a POD HD and a tube pre. You can argue that it’s not an amplifier. However, it gives me both the tonality changes with my playing and real feedback. To me, that’s what an amp does, and when I record with it, I can now feel it. I know that recording through my POD HD is final once it hits the computer, but who cares? I can’t think of a single time, back when I recorded my guitar through software, that I actually needed to change my amplifier afterwards. If anything, it helps me commit to a sound, as well as a song. The past few songs have been done this way, and I haven’t regretted it. I have fewer amplifier tones available, but I find that I use very few tones as it is.

If anything, I want to play music, rather than feel like I have to be a programmer to make music. Freeing myself from this techno-yoke has allowed me to do just that, and I feel much better for it. I find that I am writing music these days. I’m willing to go as far as to believe that half of my stagnation was because I was ruined by technology. That’s a place I don’t want to go back to. Playing music makes me feel good. Having my computer play most of it doesn’t.

Musical borders on the Internet… STILL!!

I’m developing a real disdain for the music industry these days. Put it this way: I have been looking for two albums in Canada. However, they are not available without paying really questionable amounts to get them into Canada. Yet if I just happened to live in the USA, I could download them off the iTunes Store, Amazon, and even the online Christian stores.

Isn’t this supposed to be the age of the Internet? Perhaps there was once a business case for deciding whether it was feasible to ship physical product around the world. In this case, however, we are talking about an inventory that does not deplete, costs very little to store and absolutely nothing to mass produce. So why the heck am I restricted from getting these albums in Canada? It makes no business sense. And then the music industry complains of a download problem. If there is a problem, the music industry created it! They’ve attached borders to the Internet and prevented willing customers from purchasing items from their favorite artists. And all in the name of what? There can’t be money involved, because I am not even allowed to spend it.

It leads me to a serious dilemma here. I could download them from a torrent, but that would be a totally wrong move. Gotta love that one: stealing Christian music to praise our King of Kings and Lord of Lords. I could pay the outrageous prices for an “import”, which is stupid because I live 90 minutes from the border. I am not going to move to the USA for the sake of two albums. I don’t even know how questionable it is to have a friend in the USA buy and download them for me so I can reimburse them. I suppose it’s better than stealing, and I am still paying for it.

If anyone in the music industry is reading this, which I really doubt they are, break down the borders and let the rest of the world experience the joy of music. There are no more fences in the industry. There’s a new inventory model… it’s called no inventory. Embrace it and allow us to purchase what we want where we want. I’m sure it will solve a bunch of these problems that you’ve created. And, your artists will build a more global fanbase. Isn’t that what they would like in the first place?

— Posted from iCandy that doesn’t bear fruit!

Rhythmic Digititis… AKA MIDI Doesn’t Kill Music – People Do!

I subscribe to a periodic e-mail from an up-and-coming project studio engineer, and normally there’s some great stuff in it. However, there was this one email from him telling how he would never use MIDI drums again because they don’t have that real-drummer feel.

My response to this: the next person who says something like that to me, I will string up by their treble clefs until 8th notes start dropping out. The problem is not MIDI. The problem is the person programming the MIDI notes. Recorded MIDI, in and of itself, is nothing more than a snapshot of your performance. A sequencing program is to MIDI what a tape recorder is to audio. One of the main differences is that MIDI is just a bunch of numbers, and you can associate any sound with it. The other main difference is that you can enter MIDI notes one at a time, or compose a score and have the computer play it.

So, with all of this freedom with MIDI, what’s the problem?

The problem is that people who program MIDI notes into a computer are often only doing half the job: they’re supplying the note. They’re forgetting the feel of the note. The feel can be in the form of an accent, a flam, a grace note, or even a ghost note or roll. It’s these little things that people often forget, and then they go on and on about how their drums don’t feel like a drummer or their piano parts sound like they’re being played by a Borg. Building a piece of music is often like painting a picture. Leonardo da Vinci would probably never have rushed the Mona Lisa. Sure, he could probably paint quickly, but that’s because he knew his art to the point of knowing what kind of stroke goes where. Same thing with music. People rush through the drums and such because they want to record the guitar, which they often spend a lot more care recording. They can pick out the notes or phrases that bother them there. Why don’t they take this type of care with MIDI? The main reason is that they are unfamiliar with that part of their art and don’t take the time to get to know it. They believe that because it’s a computer, it’s automatically supposed to know what to do.

I’ve been playing with MIDI for almost 30 years. Prior to that, I was trying to figure out ways to make programmed drum machines sound more realistic, due to the fact that I would probably be relegated to working with them for the rest of my natural born life. Having discovered programs like Apple Logic and FXPansion’s BFD2 helped open a lot of new doors for me. However, it didn’t replace the fact that human feel was necessary if it were to sound like human drummers.

So how does one combat rhythmic digititis?

It’s rather easy. Listen to drummers. Listen to pianists. Listen to brass and wind players. Use your ears to pick up those little nuances. If you’re going to program those types of nuances, make sure that you have sample libraries and romplers that will support them. Recently, I’ve been playing with Garritan Jazz & Big Band 3 and discovered how to create trumpet kisses on it. In the right spot, it can play with your mind and emotions, and all of a sudden you’ll be thinking, “that’s a trumpet”. I have also been analyzing MIDI grooves from various drummers who played them on MIDI drums. Through that, I’ve learned to program my rhythms a few ticks ahead of or behind the beat. I’ve also learned to play them with my fingers on my padKontrol. I’ve watched videos of Neil Peart and how he subtly puts in ghost notes. And once you are done recording your instruments, revisit the drums and see how they fit. Don’t be afraid to change things once you have pieced them together. You wouldn’t think twice about changing the guitar.
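That tick-nudging and ghost-note business is simple enough to sketch in code. This is just a hypothetical, simplified stand-in for a real MIDI track: events are (tick, pitch, velocity) tuples, the note numbers follow the usual drum-map convention, and the function names are mine, not from any sequencer.

```python
import random

PPQN = 480  # ticks per quarter note, a common sequencer resolution


def humanize(events, max_nudge=6, seed=0):
    # Nudge each hit a few ticks ahead of or behind the grid, the way
    # a drummer plays slightly around the beat. Pitch and velocity are
    # left alone; only the timing moves.
    rng = random.Random(seed)  # seeded so the "feel" is repeatable
    out = []
    for tick, pitch, velocity in events:
        nudged = max(0, tick + rng.randint(-max_nudge, max_nudge))
        out.append((nudged, pitch, velocity))
    return out


def add_ghost_note(events, tick, pitch, velocity=25):
    # Drop in a quiet ghost note (think Neil Peart's subtle snare work)
    # and keep the track sorted by time.
    return sorted(events + [(tick, pitch, velocity)])
```

Usage is as dumb as it looks: take a rigid kick/snare groove, `humanize` it, then `add_ghost_note` a soft snare just before beat two. The point isn’t this particular code; it’s that feel is data you can put in, not something MIDI forgot.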

And if you’re going to say that you’re not a drummer, I’ll tell you to find grooves recorded by a real drummer, such as the Platinum Samples Steve Ferrone or Bobby Jarzombek MIDI groove libraries. You don’t need to invent your own rhythms for most songs. Certain fills for certain parts can be programmed in, but the rest can be handled by an experienced drummer, even a virtual one.

There are plenty of ways to make it real in a digital world… stop making excuses as to why it’s not.

The Apple iTunes Conspiracy Theory

Last week, I made a rather horrible discovery: I found that my iTunes had its EQ turned on. Doesn’t sound so bad, does it? Well, it gets worse… at least to me it does. The EQ settings were set to “Spoken Word”. I’m sure you are asking how this is bad. For me, it is bad in two ways:

The first way is that I already use a graphic EQ in the studio. I have it set to match the characteristics of the room relative to the listening position. Great pains go into these settings, as I often have to endure both white and pink noise generators for a couple of hours. Not only that, it drives the rest of my family crazy ‘cuz all they hear is a constant noise ranging from a hiss to a jet engine. So, you can best believe that after all of that setup, I rely on it heavily to ensure that what I hear through those monitors will translate to other speakers. An added EQ in the chain colours everything. It changes what I am listening to and guarantees that what I am hearing is not correct. So, when I play my reference mixes, they are all tainted by iTunes’ EQ. This affects how I mix, because I want my mix sonically similar to the references. The Spoken Word setting boosts the mid-range excessively, so I end up doing funny things with the mid-range in order to make my mix sound like the references in iTunes. The worst part is that I never even listen to my mixes through iTunes on my computer. I just use it to sync to my iPod and listen there.

The second way compounds the first. Because I am not hearing those references properly, I am making misguided mix decisions, believing what I am hearing. These decisions carry over to other speakers, such as my car, headphones, etc. All of a sudden, things start to sound really weird and I find myself doing unnecessary things to make the mix sound reasonable when it translates. This has often resulted in mixes that felt like they were either missing something or had too much of it. All I can say is that the mix ends up way off.

How did I discover this? Stupidly enough, by playing a raw mix in iTunes. It didn’t sound anywhere near what I had just done. In fact, it was worse… much worse. After shutting the iTunes EQ off (I already ensure Sound Check and Sound Enhancer are disabled) and re-listening to my references for a half hour, I discovered that I had a more solid mix. Everything was right where I wanted it to be. Not only that, it translates rather nicely now.

You’re probably thinking, why all the huff? Why didn’t you just turn it off sooner?

Well, I never turned on the iTunes EQ in the first place. And I would never set it to Spoken Word. So it wouldn’t have dawned on me to look there first. And, like I said, I don’t listen to my mixes through iTunes… I can do a Quick Look and preview things there. The main reason I use iTunes is that it hooks me up to my iPod in a rather brainless way. However, I think that brainless has its price tag.

I’m almost convinced that Apple turns on the EQ with either an iTunes update or an OS X update. One of the two. I’m almost thinking that Apple at times is arrogant enough to believe that it not only knows what we want in our settings, but that it knows better than we do. I will be watchdogging my settings on a regular basis. If I find that Apple has touched any of my settings without my permission from this point onward, you can bet they will be on the receiving end of a 64-bit tongue lashing!