Coming from a background in audio engineering, I thought I'd take this opportunity to collect some of the principles of sound and sound production. It's been almost 15 years since my degree was handed to me, so I'm a little rusty on some points, but most of it features in every single audio file I produce - I'm confident that there's something to take away from this tutorial for just about every community member reading this.
This topic is locked on purpose; I want to keep the tutorial clear of interaction for maximum information density. A separate topic to discuss and ask questions on the matter can be found here. Also, there's no way I can share everything I have to say in a single post, so this topic will be expanded with additional information as time goes on. So without further ado, let's get all the way down to basics and start with
What is sound?
To understand sound engineering - the discipline of manipulating sound - we must first understand what sound actually is. Wikipedia has the following to say on this topic:
In physics, sound is a vibration that propagates as an acoustic wave, through a transmission medium such as a gas, liquid or solid.
So basically, we need a vibration and a medium to make this happen. In the real world, for example, using your vocal cords to talk makes them vibrate. This vibration is carried over to the air molecules - a gaseous medium - sending them bouncing up against each other until the ones up against your eardrum cause that to vibrate as well, at which point your brain translates the vibration back into the speech it just sent to your vocal cords. But as Wikipedia says, the medium can be a liquid or solid substance as well - so your own head counts as a medium too. The vibration of your vocal cords is also sent to your eardrums straight through your own skull, and that's the reason why most people are so freaked out by hearing a recording of themselves: you are so used to hearing the composite signal of sound coming through both the air and your skull at the same time that it sounds completely unnatural to hear your voice the way other people hear it, with the sound transmitted through the air alone.
Sound waves
What the vibration does, whether it's your voice, a passing car or your favorite band's music, is create sound waves. In a basic and poorly drawn form, they look like this:
Amplitude says something about the energy contained in the wave - the bigger the amplitude, the more energy, so the louder the sound. The wavelength is inversely related to the frequency of the sound, which determines its pitch - the shorter the wavelength, the higher the frequency; the longer the wavelength, the lower the frequency. (Specifically, wavelength = speed of sound / frequency: in air, where sound travels at roughly 343 meters per second, a 20Hz wave is about 17 meters long, while a 20kHz wave measures under 2 centimeters.) Human hearing is theoretically equipped to pick up sounds within the frequency range of 20 to 20,000 Hz (or 20Hz to 20kHz, as it's usually displayed). Hz, short for Hertz, literally means 'per second', so a 20Hz wave occurs 20 times per second, and a 20kHz wave occurs 20,000 times per second.
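To make amplitude and frequency a little more tangible, here's a minimal sketch in Python (using numpy and the standard library's wave module - the 440Hz frequency and 0.5 amplitude are just example values I picked, nothing prescribed) that synthesizes one second of a pure sine wave and writes it out as a WAV file you can listen to:

```python
import wave

import numpy as np

sample_rate = 44100    # samples per second (CD quality)
frequency = 440.0      # pitch in Hz - the A above middle C
amplitude = 0.5        # 0.0 = silence, 1.0 = full scale; bigger = louder

# One second of timestamps, then the sine wave itself.
t = np.arange(sample_rate) / sample_rate
samples = amplitude * np.sin(2 * np.pi * frequency * t)

# Convert the -1..1 floats to 16-bit PCM and write a mono WAV file.
pcm = (samples * 32767).astype(np.int16)
with wave.open("sine_440hz.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16 bits per sample
    f.setframerate(sample_rate)
    f.writeframes(pcm.tobytes())
```

Double the frequency value and the pitch jumps up an octave; halve the amplitude and the same tone comes out noticeably quieter.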
Each sound source occupies its own frequency range - sounds that consist of only a single frequency, called (simple) sine waves, don't occur naturally. What this means for the sound of your voice is something we'll look at in the next post, in which we will tackle dynamic processing - manipulating the amplitude - and equalization - manipulating the frequency spectrum.
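As a small illustration of that last point, here's another sketch (again Python with numpy; the 110Hz fundamental and the 1/n loudness weights are arbitrary choices for demonstration, not a model of any real voice or instrument) that builds a richer tone by stacking sine waves at whole-number multiples of a fundamental frequency - which is roughly how real sound sources end up occupying a whole frequency range rather than a single line:

```python
import numpy as np

sample_rate = 44100
fundamental = 110.0                       # base pitch of the tone, in Hz
t = np.arange(sample_rate) / sample_rate  # one second of timestamps

# Stack the fundamental and seven overtones: harmonic n sits at
# n * 110Hz and gets quieter (amplitude 1/n) the higher it goes.
tone = np.zeros_like(t)
for n in range(1, 9):
    tone += (1.0 / n) * np.sin(2 * np.pi * fundamental * n * t)

tone /= np.max(np.abs(tone))              # scale back into the -1..1 range
```

Play each of those eight sine waves on its own and it sounds thin and artificial; played together they fuse into a single, fuller note. The character of a voice or instrument comes largely from the balance between these components - which is exactly the balance that equalization lets you manipulate.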