Anonymous asked: for a long time I have been extremely curious about your vocal processing techniques. ever since listening to "here in heaven 2" I have been trying to imitate the effect with no success. could you maybe share some of your vocal processing knowledge? I have tried plug-ins such as melodyne and auto tune, and even the ableton pitch controls. it all just leads to frustration
it’s just the “vocal transformer” effect in garageband set to the “male to female” preset with the pitch correction set fairly high
whenever i have tried to do it live, i have used the antares autotune exo plug-in with the “transpose” dial set to +12 and the “formant” button pressed, but it usually sounds terrible
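(for anyone trying to recreate this outside those specific plug-ins: “+12” just means transposing up one octave, i.e. doubling every frequency, while the “formant” control tries to keep the resonances of the voice in place so the shifted vocal doesn’t turn into a chipmunk. here’s a tiny python sketch of the semitone math only, not anything from the actual plug-ins’ code:)

```python
def semitone_ratio(n_steps: float) -> float:
    """frequency multiplier for a pitch shift of n_steps semitones."""
    # equal temperament: each semitone multiplies frequency by 2**(1/12)
    return 2.0 ** (n_steps / 12.0)

# the "transpose +12" setting doubles every frequency (one octave up)
print(semitone_ratio(12))             # 2.0
# e.g. a 220 hz note sung into the mic comes out at 440 hz
print(220.0 * semitone_ratio(12))     # 440.0
# without formant preservation the vocal-tract resonances double too,
# which is what makes a naive octave-up shift sound cartoonish
```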
for a long time i had this bad habit of seeing vocals as just raw information to be tweaked and molded the way i would any other sound. any sample or synth sound that i put in my tracks gets put through a million effects before it sounds like i want it to sound, especially when you are dealing with samples of recordings from vastly different eras and genres that were not designed to sound good together; that’s the only way to get them to coexist peacefully in the mix. so i would record vocal tracks that, in terms of the quality and timbre of my voice, did not sound good, but did hit the notes i needed the vocal track to be hitting while saying the words i needed it to be saying. then i would process the everloving shit out of those vocal tracks until they barely resembled the actual sound that comes out of my mouth when i go to sing in real life.
this was a mistake. the result is that the instrumental parts of most of the songs i made until recently are not really designed to work with the way my voice actually sounds. a lot of old elite gymnastics songs, for example, are out of my range. because i just accepted that my vocals always sound bad without effects, i forced myself to write vocal parts i had to strain to hit the notes for, without ever considering changing the key of the song to fit my range better. there are songs with really distorted, claustrophobic production that just always clash with live vocals, because there’s no way to drown a live vocal in the 20 layers of distortion and reverb i used on the recorded version in real time. and putting the vocal tracks from the recorded versions into the backing track, which is a completely normal thing that a lot of people do and that i do not object to on any grounds, wasn’t an option either, because they clash with the sound of live vocals even more than the rest of the track does.
if heavy vocal processing is integral to your project, or you have no intention of ever performing those vocals live, then none of that applies to you. but for me and my project, the heavy vocal processing on everything was totally just the result of insecurity, and to me it is basically my greatest failure as an artist so far. i don’t know how many people have told me that they didn’t really start to get into elite gymnastics until they heard the how to dress well cover of “here, in heaven,” where you can actually hear the words. even my own preconceptions about what the project is were totally shaken and rewritten by hearing that cover. i think vocal processing is really cool, and i recognize that there are a lot of really interesting things being done with it, and that in general it is kind of a new sound that identifies a piece of music as a vital thing that is happening right this second. but i also know that i messed up by trying to use it as a veil to hide my singing and my lyrics behind, because i was afraid of what people would think if confronted with them directly. that was a weak impulse, and it ended up putting a wall between the content of the music and the people who were trying to connect with it. i wouldn’t want that to happen to anyone else.