The question we must ask ourselves is whether we can approach the creation, editing, and production of audio professionally on Linux using only free software.
To answer this question adequately, we must start with the (thorny) issue of latency. Without getting into overly technical territory, we can define audio latency as the time between the moment a sound is generated and the moment it is heard. In other words, it is a delay between the generation of a sound and the sound itself.
This is a fundamental issue because, although zero latency is impossible (we will explain why later), the latency that does exist must not be perceptible to the human ear, or it would be impossible for us to manage the system and "touch" or "play" sounds in real time.
Consider that sound propagates at about 340 meters per second; if we emit a sound one meter away, it takes just under 3 milliseconds to reach us (2.9 milliseconds, to be precise). Under this premise, we realize that every musician already accepts some implicit latency.
For example, an acoustic guitarist has the sound source about 40 cm away, so it takes roughly 1.2 milliseconds to hear it. A violinist, about 10 cm from the sound source, hears it after about 0.3 milliseconds. An electric guitarist standing 3 meters from the amplifier has to put up with a latency of nearly 9 milliseconds. The example could be made even more complex if we imagine a group of instrumentalists spread across a stage, all trying to play together and in time. But that is not the issue at hand, so we will leave it for a better occasion.
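The figures above all come from the same simple relation: latency = distance / speed of sound. As a quick sanity check (assuming 340 m/s), a shell one-liner with awk reproduces them:

```shell
# Acoustic latency in milliseconds: distance (m) / 340 (m/s) * 1000
for d in 0.4 0.1 1 3; do
  awk -v d="$d" 'BEGIN { printf "%g m -> %.1f ms\n", d, d / 340 * 1000 }'
done
# 0.4 m -> 1.2 ms
# 0.1 m -> 0.3 ms
# 1 m -> 2.9 ms
# 3 m -> 8.8 ms
```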
A computer, as we have already said, also has latency, but in this case it is not due to the distance between transmitter and receiver; it comes from the delay between the generation of a sound and its actual emission. As in the previous case, there are acceptable latency limits below which the delay is imperceptible to the human ear: for a "real-time" system, a latency of up to about 11 milliseconds is usually tolerable.
So how do we deal with the audio latency of a computer running GNU/Linux? The answer is simple: by using an RT (real-time, or "lowlatency") kernel [and, in addition, a sound server, but we will talk about that later]. The "normal" Linux kernel is multitasking and has a priority control system, but its processes cannot be interrupted just anywhere, and that is where latency appears. An RT kernel controls process priorities and can preempt running tasks more readily, managing latency much more efficiently.
We can also help reduce latency by using more modern and powerful hardware. To install a low-latency kernel on GNU/Linux (for example, on a Debian-based distro), we would proceed as follows:
sudo apt-get install linux-headers-lowlatency
sudo apt-get install linux-lowlatency
sudo update-grub
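After rebooting into the new kernel, you can confirm that the low-latency build is actually running by inspecting `uname -r`. A minimal sketch (the `is_lowlatency` helper is an illustrative name of our own, not part of any tool):

```shell
# Illustrative helper: does a kernel release string look like a lowlatency build?
is_lowlatency() {
  case "$1" in
    *lowlatency*) echo "yes" ;;
    *)            echo "no"  ;;
  esac
}

# After rebooting, check the running kernel:
is_lowlatency "$(uname -r)"
```

If this prints "no", the machine most likely booted the default (generic) kernel, and the low-latency entry must be selected from the GRUB menu.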
Once the low-latency kernel is installed, to manage incoming and outgoing audio connections in our Linux operating system we will use JACK (JACK Audio Connection Kit, http://jackaudio.org/), a sound server that provides low-latency connections between applications.
This is the main screen of QjackCtl (https://qjackctl.sourceforge.io/), a graphical front-end for JACK that makes interacting with the server much easier.
And this is the configuration window, where we can see (bottom right) that this configuration achieves a latency of 5.8 milliseconds (any value below 11 milliseconds is more than acceptable).
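The latency JACK reports follows directly from its buffer settings: latency ≈ frames per period × number of periods ÷ sample rate. A figure of 5.8 ms corresponds, for example, to 128 frames/period with 2 periods at 44100 Hz (values assumed here for illustration; your configuration may differ):

```shell
# JACK buffer latency in ms: frames/period * periods / sample rate
awk -v frames=128 -v periods=2 -v rate=44100 \
    'BEGIN { printf "%.1f ms\n", frames * periods / rate * 1000 }'
# 5.8 ms
```

Halving the frames/period halves the latency, at the cost of more frequent interrupts and a higher risk of audio dropouts (xruns) on slower hardware.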
Having configured JACK (we will deal with this at a later time), we are ready to start working with our audio software (composition, editing, production, ...) in our favorite GNU/Linux distribution. In fact, there are Linux distributions specifically geared toward audio processing at all levels (Ubuntu Studio, KXStudio, Musix, AVLinux, ...). The configuration of the JACK audio server, the choice of software, and the most appropriate distribution are all topics we will deal with later.
See you later and don't let the music stop!