Physical foundations of sound - Part 2: Phase. Phase shift.


Not all sound waves are created equal: even two recordings of the same source will differ slightly in volume and tone. However, some sound waves are identical (or nearly identical), and when they overlap, audio phasing can occur. But what is sound phasing?

Audio phase is one of those phenomena that is difficult to understand. Once you do understand it, however, you can take your work to a whole new level. In this article, we'll look at the basics of audio phase, why it matters, and how to solve phase problems in your projects.

What is "phase" in audio?

The phase of a sound indicates a point in time within a given sound wave. Sound waves have three main components: amplitude, wavelength, and frequency:

Amplitude refers to the strength (and perceived loudness) of a wave at a particular point in time; for a perfectly symmetrical, repeating sound wave such as a sine wave, the wavelength measures the distance between two equivalent points in adjacent cycles; and frequency (perceived as pitch) is the number of complete cycles the wave goes through per second.

The phase of the sound wave tells us where exactly we are in this cycle. In audio production, the relationship between two or more waveforms is important; the absolute phase of an individual sound wave is not particularly important for reasons we will discuss below.

Sound speed

The speed of sound depends directly on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: its elasticity and its density. The speed of sound in solids depends directly on the type of material and its properties. In gaseous media, the speed depends on only one type of deformation of the medium: compression and rarefaction. The pressure changes in a sound wave occur without heat exchange with the surrounding particles, which is why the process is called adiabatic.

The speed of sound in a gas depends mainly on temperature - it increases as the temperature increases and decreases as the temperature decreases. Also, the speed of sound in a gaseous medium depends on the size and mass of the gas molecules themselves - the smaller the mass and size of the particles, the greater the “conductivity” of the wave and, accordingly, the greater the speed.

In liquid and solid media, sound propagates on the same principle as in air: by compression and rarefaction. But in these media, in addition to the same dependence on temperature, the density of the medium and its composition and structure matter a great deal. All else being equal, the lower the density of the substance, the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complex and is determined in each specific case, taking into account the arrangement and interaction of the molecules or atoms.

Speed of sound in air at 20 °C: 343 m/s. Speed of sound in distilled water at 20 °C: 1481 m/s. Speed of sound in steel at 20 °C: about 5000 m/s.
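
As a rough illustration of the temperature dependence in air, here is a minimal sketch using the common dry-air (ideal-gas) approximation c ≈ 331.3 · √(1 + T/273.15) m/s. The 20 °C result matches the figure above; real air (humidity, pressure) will deviate slightly.

```python
import math

def speed_of_sound_air(temp_celsius: float) -> float:
    """Approximate speed of sound in dry air (ideal-gas model), in m/s."""
    return 331.3 * math.sqrt(1.0 + temp_celsius / 273.15)

for t in (0, 20, 40):
    print(f"{t:>3} degC -> {speed_of_sound_air(t):.1f} m/s")
# 20 degC gives roughly 343 m/s, matching the value quoted above.
```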

Why is phasing important?

Audio mixing is the act of combining separate but cohesive elements so that each component is heard as the performer, producer, and engineer intended. That means you will be juggling countless sound waves, each differing in frequency, amplitude, harmonic overtones, and so on. Inevitably, some waves will drift in and out of phase with each other at different times. When two signals are "in phase" with each other, their peaks and valleys line up in time. Understanding phase is critical to optimizing mixes. Phase problems are at the root of many mixing problems and have a major impact on the overall sound.

When the waves collide

To simplify things, imagine two perfectly symmetrical and repeating sine waves, one in the left channel, the other in the right. When both waves are perfectly aligned, their amplitudes match at every moment, meaning you'll hear the same sound from both sides.

Connect these channels together and play them back simultaneously and you get what is called "constructive interference", since the combination of these in-phase waves doubles the resulting amplitude. Conversely, if these channels were completely "out of phase" (i.e., one channel's wave is at its trough exactly when the other is at its peak), their peaks and valleys would cancel each other out. This is called "destructive interference" or "phase cancellation".
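
The effect is easy to reproduce numerically. The sketch below (plain NumPy; the 440 Hz test tone and 48 kHz sample rate are arbitrary choices) sums a sine wave once with an identical copy and once with a 180-degree-shifted copy, then prints the resulting peak levels.

```python
import numpy as np

sr = 48000                       # sample rate, Hz
t = np.arange(sr) / sr           # one second of time
f = 440.0                        # test tone frequency, Hz

left      = np.sin(2 * np.pi * f * t)             # reference sine wave
in_phase  = np.sin(2 * np.pi * f * t)             # identical copy (0 deg shift)
out_phase = np.sin(2 * np.pi * f * t + np.pi)     # 180 deg shift

print("peak of left + in-phase copy    :", np.max(np.abs(left + in_phase)))   # ~2.0, constructive
print("peak of left + out-of-phase copy:", np.max(np.abs(left + out_phase)))  # ~0.0, destructive
```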

Antiphase

Now let's assume that the pressures produced by both speakers (sound waves) change by the same amount but in opposite directions. That is, one speaker emits "plus" waves while the other emits "minus" waves. This can happen when the listener has accidentally swapped the connection terminals of one of the channels (the left channel, for example).

To put it more simply: the drivers in the right speaker push outward while the drivers in the left speaker pull inward, trying to reproduce the same frequency at the same time. One speaker creates a pressure of, say, 1 pascal, and the other creates a pressure of minus 1 pascal. This effect is called antiphase.

The overall sound level at the listening position should, in theory, tend to zero, but that does not mean no sound will be heard at all. What does happen is that the "sound stage", the "picture" of a piece of music, can be seriously damaged, and in some spots in the room the sound really will fade, though not completely. The sound becomes "blurred" and some frequency components disappear from the overall signal.

We will not go into complicated scientific formulations or cite formulas here. We can simply say that the sound from the second speaker reaches the listener shifted relative to the first (do not forget that the same signal is fed to both speakers!), and in this case the shift is exactly 180 degrees. Why is that? Let's put some numbers on it; that makes it easier to see.

360 degrees corresponds to one full period of the signal; 180 degrees is half the period.
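
A small sketch of that arithmetic (the helper name and the 100 Hz / 1 kHz examples are only illustrations): a phase shift is a fraction of the period, so at any single frequency it can be expressed as a time offset. Note that the swapped-terminal case above is a polarity flip, not a real delay: it looks like 180 degrees at every frequency at once, whereas a fixed time delay corresponds to a different phase angle at each frequency.

```python
def phase_to_delay_ms(phase_deg: float, freq_hz: float) -> float:
    """Time offset (in milliseconds) that corresponds to a given phase shift
    at a given frequency: one full period equals 360 degrees."""
    period_ms = 1000.0 / freq_hz
    return (phase_deg / 360.0) * period_ms

print(phase_to_delay_ms(180, 100))   # 5.0 ms  (half of the 10 ms period of 100 Hz)
print(phase_to_delay_ms(180, 1000))  # 0.5 ms  (half of the 1 ms period of 1 kHz)
```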

How phase in audio works in practice

The exact scenario described above doesn't happen often in the real world, since these ideal, fundamental sound waves aren't what you're working with, but the theory still applies. Whether you're recording a single instrument or multiple instruments with any number of microphones, phasing will be a factor that cannot be ignored. Phase interaction also occurs when layering samples on acoustic drums, using different plugins on the same tracks, parallel processing, etc. Simply put, the phase of a sound is a factor when combining two or more signals—the more coupled those signals are, the more significant the phase becomes.

If you record an instrument using two separate microphones (a stereo recording), the incoming fundamental frequencies (i.e. the notes played) will be the same in each channel. However, since each microphone sits in a unique spatial position, the sound arrives at each microphone at a slightly different time. As a result, the sound waves of each channel will be similar in some respects but different in others. Various frequencies can be boosted, cut, or virtually eliminated depending on the phase relationships between the two channels. As you can imagine, adding one or more microphones further complicates things, increasing the likelihood of phase problems. And if two microphones point at the source from opposite directions, the polarity of one of them usually needs to be flipped to combat cancellation (i.e. silence).
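
To get a feel for the numbers, here is a small sketch that converts an extra path length into a per-frequency phase offset and finds the lowest frequency at which the two signals arrive 180 degrees apart. The speed-of-sound constant and the 0.3 m path difference are assumptions made for the example.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degC

def mic_phase_offset_deg(path_diff_m: float, freq_hz: float) -> float:
    """Phase offset between two mics whose distances to the source
    differ by path_diff_m, at a given frequency."""
    delay_s = path_diff_m / SPEED_OF_SOUND
    return (360.0 * freq_hz * delay_s) % 360.0

def first_cancellation_hz(path_diff_m: float) -> float:
    """Lowest frequency at which the two mic signals arrive 180 degrees apart."""
    delay_s = path_diff_m / SPEED_OF_SOUND
    return 1.0 / (2.0 * delay_s)

# Example: the second mic is 0.3 m (about a foot) farther from the source.
print(mic_phase_offset_deg(0.3, 500))   # ~157 degrees of offset at 500 Hz
print(first_cancellation_hz(0.3))       # ~572 Hz: expect a dip around here when summed
```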

Drum dilemma

When it comes to recording drums, phase problems often run rampant. After all, most modern drum recordings use at least 5 microphones (and sometimes as many as 20) to capture each component, the entire kit, and the room reflections. It doesn't help that cymbals resonate at high frequencies, and that capturing the low and high frequencies of the snare and kick drum often requires two microphones each. If you're not strategic with your microphone placement and setup, your initial drum recording can turn into a mess that is nearly impossible to mix. Luckily, the handy polarity ("phase flip") switch found on many preamps and consoles, and on some microphones, can quickly solve polarity problems during recording, whether you're recording drums, acoustic guitar, or anything else.

Thinking about phase

Stereo miking is not the only culprit behind phase problems. You can run into phase issues even when recording to a single channel, especially if your recording room is not properly treated. Sound waves bounce easily off acoustically reflective surfaces. These reflections essentially duplicate the original sound, sending back another, quieter and tonally different version of it after a certain amount of time, depending on your proximity to the surface, the size and shape of the room, and so on.

When these reflections arrive at the microphone along with the direct sound, destructive or constructive interference can occur, altering the resulting tone and volume. Intentional use of delay and reverb effects can also cause phase problems. To complicate matters further, you may hear phase problems on playback even if the recording itself has none. This can happen if your speakers are "out of phase", i.e. wired with incorrect polarity.

Frequency spectrum of sound and frequency response

Since pure single-frequency waves practically never occur in real life, it becomes necessary to decompose the sound across the audible range into overtones, or harmonics. For this purpose there are graphs that show how the relative energy of the sound vibrations depends on frequency. Such a graph is called a sound frequency spectrum. The frequency spectrum of a sound comes in two types: discrete and continuous. A discrete spectrum plot shows individual frequencies separated by empty space. A continuous spectrum contains all sound frequencies at once.


In practice, a graph of the amplitude-frequency characteristic (the frequency response) is used most often. It shows how the amplitude of the sound vibrations depends on frequency across the entire audible spectrum (20 Hz - 20 kHz). Looking at such a graph, it is easy to see, for example, the strengths and weaknesses of a particular speaker or of the acoustic system as a whole: the areas of strongest energy output, frequency dips and peaks, attenuation, and the steepness of the roll-off.
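
A discrete spectrum of the kind described above can be computed directly with an FFT. The sketch below builds a toy tone from a 220 Hz fundamental plus two quieter harmonics (all values are arbitrary) and prints its strongest spectral lines.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
# A toy "instrument" tone: a fundamental at 220 Hz plus two quieter harmonics.
signal = (np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)  # normalized amplitudes
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

# Print the three strongest spectral lines (a discrete spectrum in this idealized case).
for i in np.argsort(spectrum)[-3:][::-1]:
    print(f"{freqs[i]:7.1f} Hz  amplitude {spectrum[i]:.2f}")
```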

How to Find Audio Phase Problems

As your hearing develops, you will be able to hear phasing when it occurs. Of course, the human ear alone cannot easily detect all phase problems, so additional tools and techniques come to the rescue.

Listening to your mix (and to individual tracks) in mono rather than stereo can reveal certain phase issues. If the sound becomes duller or thinner when you sum the mix to mono, you may be dealing with phase cancellation. Likewise, if a sound that sits in the center of the stereo image disappears when summed to mono, you are likely dealing with out-of-phase audio. You can also identify audio phase problems using visual plugins designed with phase in mind (we'll look at these in more detail below).
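
As a rough sketch of what such a mono check measures, the hypothetical helper below sums a stereo pair to mono, compares the mono level with the stereo level, and prints the left/right correlation (values near +1 are safe; values near -1 mean heavy cancellation in mono). The 200 Hz test tone and the 0.8 scaling are arbitrary.

```python
import numpy as np

def mono_compatibility_report(left: np.ndarray, right: np.ndarray) -> None:
    """Rough mono-compatibility check for a stereo signal (a simplified sketch)."""
    mono = 0.5 * (left + right)
    stereo_rms = np.sqrt(np.mean(left**2 + right**2) / 2)
    mono_rms = np.sqrt(np.mean(mono**2))
    corr = np.corrcoef(left, right)[0, 1]   # +1: in phase, -1: out of phase
    print(f"correlation {corr:+.2f}, mono level {20 * np.log10(mono_rms / stereo_rms):+.1f} dB vs stereo")

# Example: the right channel is a polarity-flipped, slightly quieter copy of the left.
t = np.arange(48000) / 48000
tone = np.sin(2 * np.pi * 200 * t)
mono_compatibility_report(tone, -0.8 * tone)   # correlation -1.00, big level drop in mono
```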

Phase shift

Now we have finally reached the point where we can examine the question: what is a phase shift?

Phase is the temporal relationship between two signals. Over one period of oscillation it runs from 0 to 360 degrees, then again from 0 to 360, and so on. You can think of it as describing where within the period the signal is at a given instant. We do not hear phase itself; what we hear is the phase shift of one signal relative to another.

Wikipedia puts it this way: phase shift is the difference between the initial phases of two quantities that vary periodically in time with the same frequency.

Phase shift is a dimensionless quantity and is measured in degrees, radians, or fractions of a period.
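
In practice you can estimate the phase shift between two signals numerically. The sketch below is a simplified approach that assumes both signals share one dominant frequency and reads the phase difference off their strongest FFT bin; the 300 Hz test tones are arbitrary.

```python
import numpy as np

def phase_shift_deg(a: np.ndarray, b: np.ndarray) -> float:
    """Estimate the phase shift (in degrees) of b relative to a, assuming both
    signals share one dominant frequency; uses the phase of the strongest FFT bin."""
    spec_a, spec_b = np.fft.rfft(a), np.fft.rfft(b)
    k = np.argmax(np.abs(spec_a))                      # dominant frequency bin of a
    diff = np.angle(spec_b[k]) - np.angle(spec_a[k])
    return float(np.degrees((diff + np.pi) % (2 * np.pi) - np.pi))  # wrap to -180..180

sr = 48000
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 300 * t)
b = np.sin(2 * np.pi * 300 * t + np.radians(90))       # b leads a by 90 degrees
print(phase_shift_deg(a, b))                           # ~90.0
```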

How to fix or prevent phase problems

Since there are so many potential sources of phase problems, it is critical to arm yourself with the knowledge, tricks, and tools that will help you prevent and resolve such difficulties.

Know the 3:1 ratio for microphone placement

This method is used when working with two microphones: the second microphone should be placed at least three times farther from the first microphone than the first microphone is from the sound source being recorded. If one microphone is six inches from the guitar's sound hole, the second microphone should be placed at least 18 inches (1.5 feet) from the first. This technique doesn't always work and some adjustment may be required, but it's a good starting point for minimizing phase issues when recording with two microphones.
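
The logic behind the ratio can be sketched with the inverse-distance law: under the idealized assumption of a point source in a free field, tripling the distance drops the bleed into the second microphone by roughly 9.5 dB, which keeps the interference shallow when the two mics are summed. This is only a rule-of-thumb estimate, not a model of a real room.

```python
import math

def bleed_level_db(distance_ratio: float) -> float:
    """Relative level of the same source in the far microphone versus the near one,
    assuming a point source in a free field (inverse-distance law)."""
    return -20.0 * math.log10(distance_ratio)

print(bleed_level_db(3.0))  # about -9.5 dB: the basis of the 3:1 rule of thumb
print(bleed_level_db(2.0))  # only about -6.0 dB: interference will be more audible
```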

The Mid/Side recording technique is designed to minimize potential phase problems.

Mono mix

Mixing in mono may seem counterintuitive, given that most tracks will be heard in stereo. However, as mentioned above, some instances of phase shift may go undetected when listening in stereo, and switching tracks to mono at various intervals during mixing can reveal phase problems that you might otherwise miss. Simply put, mixing to mono will help you better understand the context of your mix as a whole and clear up any ambiguities before returning to stereo.

Use audio phase plugins

In addition to placing microphones properly and mixing in mono, you can also use various plugins to eliminate phase interference and to visualize what is actually happening. Fortunately, there is currently no shortage of phase-correction tools. Notable examples include the InPhase plugin from Waves, In-Between Phase from Little Labs, and Eventide's Precision Time Align. And if you get good enough at recognizing phase problems by ear, simply nudging tracks slightly earlier or later in time can also correct them. This trick doesn't always work, especially if your track strictly adheres to a grid.

Moving Waveforms

Perhaps the easiest way to solve phase problems is simply to move similar waveforms into alignment. If two waveforms of the same signal are not aligned, phasing is bound to occur, so nudging one of them left or right on the timeline can quickly correct the situation. There are even plugins that align waveforms automatically to save you the hassle: Sound Radix's Auto-Align and MeldaProduction's MAutoAlign are two popular options.
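
Alignment tools generally find the time offset between two takes of the same event and shift one of them to compensate. Here is a heavily simplified, single-offset sketch of that general idea using cross-correlation; the toy "snare" signal and the 2 ms offset are invented for the example, and it is not the actual algorithm of the plugins named above.

```python
import numpy as np

def align_by_cross_correlation(reference: np.ndarray, delayed: np.ndarray) -> np.ndarray:
    """Shift `delayed` so it lines up with `reference`, using the lag at which
    their cross-correlation peaks (a simplified, single-shift sketch)."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)   # positive lag: `delayed` arrives late
    return np.roll(delayed, -lag)

sr = 48000
t = np.arange(int(0.05 * sr)) / sr                             # a 50 ms snippet is enough here
snare_close = np.sin(2 * np.pi * 200 * t) * np.exp(-50 * t)    # toy close-mic transient
overhead = np.roll(snare_close, 96)                            # same hit arriving 2 ms later

aligned = align_by_cross_correlation(snare_close, overhead)
print(np.allclose(aligned, snare_close))   # True: the 2 ms offset has been removed
```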

Using phase to your advantage

So far we have discussed phase interference mainly as a problem to be solved. In reality, phasing is not inherently bad; it is simply an acoustic phenomenon that can be manipulated in several ways. Eliminating or correcting phase interference is, of course, one way to deal with it.

However, if you know what you're doing and what you want from your mix, you can use phase as another mixing tool. For example, by manipulating the phase relationship between guitar tracks, you can shape the tone of the resulting track (the same applies to the tone of any instrument or vocal), much like with an equalizer. Certain devices (like the Neve Portico 5016 and Radial's Phazer) contain phase-shift circuits that let you choose which frequencies to reinforce and which to cancel, giving you unique tone-shaping capabilities.

Prologue

Musicians, music lovers, and fans of "high-end" sound often use terms in conversation that everyone seems to understand: spectrum, phase, frequency, square wave, depth and localization of the soundstage, and other specialized words. Yet quite often even some of the "experts" cannot fully explain what these things really are.

Concepts such as “Phase shift” are very often mentioned when designing crossovers for acoustics. We already talked in detail about crossovers a little earlier.

With the Internet, looking up any of these questions is not a problem. Without it, you could go to the library, find a couple of genuinely scientific books and read the theory itself. But everyone has become so busy these days that there is barely time even to look things up online. So let's try to find a simple explanation: what is a "phase shift"?

What do these terms actually mean? Is it possible to “feel” their true meaning? Yes, definitely possible. Now we will try to understand the question - “What is a phase shift?”

Questions and answers about phase

Still have questions about audio phase? Let's answer some frequently asked questions.

What is phase music?

Phase music deliberately uses the properties of phase as a compositional tool. Phase music often involves minimalistic, similar sounds (i.e., notes) with slight changes in frequency, tone, and/or tempo to create effects such as echo, delay, flanging, phasing, and others.

What is phasing when mixing?

During the mixing process, phasing can occur when there is a slight time delay between identical or related signals. This phasing can lead to unwanted changes in tone and volume, but can also be used creatively.

How do you diagnose phase problems?

You can identify phase issues in your music by developing your ear, mixing in mono, and using plugins designed to detect phase.

What is comb filtering?

Comb filtering is a type of phasing that occurs when a signal is summed with a slightly delayed copy of itself, resulting in both constructive and destructive interference, typically due to room reflections and/or stereo recording. The phenomenon gets its name because its frequency response resembles the teeth of a comb.
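
The characteristic peaks and notches are easy to compute. The sketch below evaluates the magnitude response of "signal plus a copy delayed by 1 ms" (the delay value is an arbitrary choice for the example) at a few frequencies: peaks appear at multiples of 1/delay and deep notches at the odd multiples of half that.

```python
import numpy as np

delay_s = 0.001   # a copy of the signal arriving 1 ms late (e.g. a strong reflection)

freqs = np.array([250, 500, 1000, 1500, 2000], dtype=float)
# Magnitude response of "signal + delayed copy": |1 + e^(-j 2 pi f tau)|
gain = np.abs(1 + np.exp(-2j * np.pi * freqs * delay_s))
for f, g in zip(freqs, gain):
    db = 20 * np.log10(max(g, 1e-9))   # clamp exact nulls so log() stays finite
    print(f"{f:6.0f} Hz  {db:+7.1f} dB")
# Peaks (+6 dB) at 1000 and 2000 Hz, deep notches at 500 and 1500 Hz: the "comb".
```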

Can the human ear hear phase?

Although the human ear cannot detect the absolute phase of a waveform, it is sometimes sensitive to relative phase. For example, many people notice an audible change when two identical sine waves are combined (since the result is louder) or when a phaser effect is added to a signal.

Resonance phenomenon

Most solid bodies have a natural resonant frequency. The effect is easiest to understand with an ordinary pipe that is open at only one end. Imagine that a loudspeaker is connected to the other end of the pipe and can play one constant frequency, which can later be adjusted. The pipe has its own resonant frequency; in simple terms, this is the frequency at which the pipe "resonates", or produces its own sound. If the frequency of the loudspeaker (after adjustment) coincides with the resonant frequency of the pipe, the perceived volume increases several times over. This happens because the loudspeaker excites vibrations of the air column in the pipe, and once the resonant frequency is hit, the contributions add up. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency, their efforts add up and result in an audibly louder sound. This phenomenon is easy to observe with musical instruments, since the design of most instruments contains elements called resonators.

As you might guess, a resonator serves to amplify a certain frequency or musical tone. Examples include a guitar body, whose sound hole is coupled to the air volume inside it; the tube of a flute (and pipes in general); and the cylindrical shell of a drum, which is itself a resonator for a particular frequency.
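
For the pipe example above, the idealized resonator that is closed at one end and open at the other has resonances at odd multiples of c/(4L), ignoring end corrections. A minimal sketch under that assumption (the 0.5 m length is arbitrary):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degC

def quarter_wave_resonances(length_m: float, count: int = 3) -> list[float]:
    """Resonant frequencies of a pipe closed at one end and open at the other:
    f_n = (2n - 1) * c / (4 L), i.e. only odd harmonics of the fundamental."""
    return [(2 * n - 1) * SPEED_OF_SOUND / (4 * length_m) for n in range(1, count + 1)]

# A 0.5 m pipe resonates near 171.5 Hz, 514.5 Hz, 857.5 Hz, ...
print([round(f, 1) for f in quarter_wave_resonances(0.5)])
```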
