-
I won't repeat the difference between audio signals, sound signals, and sounds, since other answers have already covered it. The relationship between video signals, image signals, and images can be understood by analogy with the audio case: the video signal is the umbrella term, covering TV signals, still-image signals, and moving-picture signals; the image signal is that signal after conversion (for example by a picture tube), or the image information you obtain from the outside world; and the image itself is the information source, which is captured and transformed into something else. The conversion to an electrical signal works through a changing electric field, which is the principle behind the CRT: the image is converted by some means into an electrical signal, the electrical signal produces a changing electric field, and electrons are deflected differently by that field and strike different positions on the screen to form the image.
-
A signal is generally an electrical signal, which must be restored before the human ear can hear it, while sound generally refers to vibrations in the 20 Hz to 20 kHz range. The two most common ways of converting sound into an electrical signal are (1) the condenser microphone and (2) the moving-coil microphone.
A signal is information received under some encoding (by a sensor array, for instance); an image signal is the signal before the image is restored; an image is something the human eye can distinguish directly. These terms are fairly loose, and their exact meaning depends on the context in which they appear.
-
An audio signal is a carrier of information in the frequency and amplitude variations of regular sound waves carrying voice, music, and sound effects. It is an electrical signal; audio equipment such as stereos can receive these signals and then reproduce the sound.
Sound refers to the impression produced through hearing by the sound waves a vibrating object generates; it is a subjective, human definition.
A sound signal refers to the carrier of sound, that is, the sound wave. Like an audio signal, it is just a carrier.
To put it simply: a person speaks and produces sound; the sound travels to a recording device as a sound signal (sound waves); the recording device generates an audio signal, which is stored or later restored to sound by playback equipment.
Images and image signals can be explained the same way: people create images with various tools; people, or cameras and video recorders, pick the images up as light of various colors (the image signal); the devices convert them into electrical (radio-frequency) signals, and a display device restores them. If you have any questions, please leave a message!
-
Signals can be classified in many ways: by mathematical relationship, value characteristics, energy and power, how they are processed and analyzed, their characteristics as functions of time, and whether their values are real numbers.
They can be divided into deterministic and non-deterministic (random) signals, continuous and discrete signals (often loosely identified with analog and digital signals), energy and power signals, time-domain and frequency-domain signals, time-limited and band-limited signals, real and complex signals, and so on.
1. Analog signal.
An analog signal is one whose waveform varies in imitation of the information it carries. Its main feature is that its amplitude is continuous and can take infinitely many values; in time it may be either continuous or discontinuous.
2. Digital signal.
A digital signal is discrete not only in time but also in amplitude, and can take only a finite number of values. Telegraph signals and pulse-code-modulation (PCM) signals, for example, are digital signals. A binary signal is a digital signal that uses combinations of the two digits "1" and "0" to represent different information.
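As a rough sketch of that idea (Python; the `to_binary` helper is illustrative, not from any library), a quantized sample value can be written out as a fixed-width string of the digits "1" and "0":

```python
def to_binary(sample: int, bits: int = 8) -> str:
    """Represent a non-negative quantized sample as a fixed-width bit string."""
    if not 0 <= sample < 2 ** bits:
        raise ValueError("sample out of range for the given bit width")
    return format(sample, f"0{bits}b")

# An 8-bit digital signal can only take 2**8 = 256 discrete amplitude values.
print(to_binary(5))    # 00000101
print(to_binary(255))  # 11111111
```

Each fixed-width group of bits is one sample; a stream of such groups is what a PCM signal carries.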
-
The digitization of sound goes through three stages: sampling, quantization, and encoding.
Sampling is the process of discretizing a time-continuous analog signal along the time axis. Two related concepts are the sampling frequency and the sampling period: the sampling period is the time interval between two adjacent sampling points, and the sampling frequency is its reciprocal. In theory, the higher the sampling frequency, the more faithfully the sound is restored and the more realistic it sounds. To avoid distortion, the sampling frequency must be greater than twice the highest frequency in the sound.
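The sampling step described above can be sketched in a few lines of Python (the function name and parameters are illustrative):

```python
import math

def sample_sine(freq_hz: float, fs_hz: float, n_samples: int):
    """Sample a sine wave of frequency freq_hz at sampling rate fs_hz.

    The sampling period is 1/fs_hz; per the Nyquist criterion, choose
    fs_hz > 2 * freq_hz to avoid aliasing.
    """
    period = 1.0 / fs_hz  # time between adjacent sampling points
    return [math.sin(2 * math.pi * freq_hz * n * period) for n in range(n_samples)]

# A 1 kHz tone sampled at 8 kHz: 8 samples cover exactly one cycle.
samples = sample_sine(1000.0, 8000.0, 8)
```

At 8 kHz the criterion is comfortably met for a 1 kHz tone; at 1.5 kHz it would not be, and the samples would describe a different, lower-frequency wave.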
The main task of quantization is to convert each sample, whose amplitude is a continuous value, into a discrete-valued representation. The quantized sample is represented in binary, which can be understood as completing the conversion of the analog signal to binary. The usual precision is 8-bit, 16-bit, 32-bit, and so on; the higher the quality, the more storage space is needed.
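A minimal illustration of quantization in Python, assuming amplitudes normalized to [-1.0, 1.0] (the `quantize` helper is a sketch, not a library function):

```python
def quantize(sample: float, bits: int = 8) -> int:
    """Map an amplitude in [-1.0, 1.0] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    sample = max(-1.0, min(1.0, sample))  # clip to the full-scale range
    return int(round((sample + 1.0) / 2.0 * (levels - 1)))

print(quantize(-1.0))  # 0   (lowest of 256 levels at 8-bit precision)
print(quantize(1.0))   # 255 (highest level)
```

Going from 8 to 16 bits multiplies the number of levels from 256 to 65,536, which is why higher bit depths sound cleaner but take more space.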
Encoding is the last step in digitizing sound. After sampling and quantization, the analog sound signal is already in digital form, but to make it easier for a computer to store and process, we encode it to reduce the amount of data.
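As one concrete (hedged) example of the whole pipeline, Python's standard-library `wave` module can store sampled, quantized audio as uncompressed 16-bit PCM in a WAV container; compressed encodings such as MP3 go further and actually reduce the data volume:

```python
import math
import struct
import wave

# One second of a 440 Hz tone: sampled at 8 kHz, quantized to 16-bit integers.
fs = 8000
samples = [int(32767 * math.sin(2 * math.pi * 440 * n / fs)) for n in range(fs)]

# Encode as 16-bit PCM inside a WAV container.
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)  # mono
    f.setsampwidth(2)  # 2 bytes = 16 bits per sample
    f.setframerate(fs)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

The file name and tone frequency are arbitrary choices for the sketch; playback equipment reverses the process, decoding the PCM data back into an analog waveform.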
-
Strictly speaking, the question should be about the difference between a sound signal and an audio signal, because "audio" generally refers to the frequency of sound.
Audio signal: a carrier of information in the frequency and amplitude variations of regular sound waves carrying voice, music, and sound effects. It is an electrical signal that audio devices such as stereos can receive and then turn back into sound.
Sound: the impression created through hearing by the sound waves a vibrating object generates; a subjective, human definition. Sound signal: the carrier of sound, namely the sound wave. Like an audio signal, it is just a carrier.
Audio signals travel as electric currents, while sound signals travel through the air. To put it simply: a person speaks and produces sound; the sound travels to the recording device as a sound signal (sound waves); the recording device generates an audio signal, which is stored or later restored to sound by playback equipment.
-
There are two types of sound signals: digital and analog.
A digital signal must be converted by a decoder into an analog signal before a linear power amplifier can use it (common sources are the digital signals recorded on CDs, discs, and memory cards, and digital streams). An analog signal can drive a linear power amplifier directly (audio tape, vinyl records, etc.).
Audio output signal types and applications:
1. The output and input jacks on a stereo system or active speakers. "Output" means sending the audio signal from the device on to other equipment. Connect the input to a signal source such as a mobile phone, a computer, or a preamplifier output using a 3.5 mm cable (like a headphone connector), power the speaker, and it can produce sound.
2. In most practical applications, the core of a live electronic performance system is the mixer and its peripheral equipment. The mixer and peripherals here are the same sound-reinforcement mixer and sound-processing outboard gear used in general live sound reinforcement, but they are connected and configured differently.
This choice helps control performance costs and keeps live operation practical. Most importantly, a general-purpose mixer offers the best compatibility across different electronic programs, and the composer can skillfully use its functions to finish the arrangement during pre-performance setup.
-
According to their frequency, signals can be divided into three types: low-frequency, medium-frequency (intermediate-frequency, IF), and high-frequency signals. Low-frequency signals are commonly used for audio signal processing, medium-frequency signals are typically used for modulation and demodulation, and high-frequency signals are commonly used in wireless communications and radar systems.
A low-frequency signal is a signal with a frequency below about 20 kHz and is typically used for audio processing. For example, in music recording, low-frequency signals carry the sound of bass and drums. Low-frequency signals are also used for voice, as in telephone communications.
IF signals are those with frequencies between roughly 20 kilohertz and 300 megahertz. IF signals are often used for modulation and demodulation, such as in radio and television transmission. In these applications, IF signals help convert audio and video signals into electromagnetic waves for transmission through space.
High-frequency signals are signals with frequencies above 300 megahertz, commonly used in wireless communications and radar systems. In these applications, a high-frequency carrier transmits information to a receiver, where it is demodulated to recover the original information.
High-frequency signals are also used in radar systems to detect the position and speed of objects.
In short, by frequency, signals can be divided into three types: low-frequency, medium-frequency, and high-frequency. These signals have a wide range of applications in both communication and radar systems and are an indispensable part of the modern communications landscape.
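The three-band scheme in this answer could be sketched as a small Python function (the band edges follow this answer's rough divisions, not a formal standard such as the ITU band plan):

```python
def classify_signal(freq_hz: float) -> str:
    """Classify a signal by frequency, using this answer's rough band edges."""
    if freq_hz < 20_000:             # below ~20 kHz: audio range
        return "low-frequency"
    elif freq_hz < 300_000_000:      # up to 300 MHz: IF / broadcast range
        return "intermediate-frequency"
    return "high-frequency"          # above 300 MHz: wireless, radar

print(classify_signal(440))     # low-frequency (an audio tone)
print(classify_signal(100e6))   # intermediate-frequency (FM broadcast)
print(classify_signal(2.4e9))   # high-frequency (Wi-Fi band)
```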
-
1. Audio is a technical term; the word has come to describe sound-related equipment and its functions within the audible range in general. All the sounds humans can hear are called audio, and that may include noise. Once sound is recorded, whether speech, singing, or an instrument, it can be processed by digital audio software or made into a CD; the sounds themselves do not change, because a CD is itself just one kind of audio format.
Audio, in everyday usage, is simply sound stored on your computer. With a computer plus a suitable audio card, what we usually call a sound card, we can record any sound, and its acoustic characteristics, such as pitch, can be stored as files on the computer's hard disk.
2. Audio is a technical term that generally describes sound-related equipment and its functions in the audio range. (1) Audio refers to sound waves the human ear can hear, with frequencies between 20 Hz and 20 kHz. (2) It can also refer to a file in which sound content is stored.
-
Originally they are the same thing; they differ only in purpose, and hence in name.
The audio signals we deal with are relatively low in frequency, so I also call them low-frequency signals. But there is a difference between a signal generator and a low-frequency signal generator: a plain signal generator only outputs the frequency we want, without a power-amplification stage, so its output impedance is relatively high; a low-frequency signal generator has a power-amplification stage, so its output impedance is low and it can drive a load directly. A "10 Hz to 10 MHz signal generator" is not necessarily an audio (low-frequency) generator, so don't buy the wrong one.
-
The difference lies in the category of signal waveform produced.
A signal generator is an instrument that produces an electrical test signal with the required parameters. By signal waveform, generators fall into four categories: sinusoidal, function (waveform), pulse, and random-signal generators. Signal generators, also known as signal sources or oscillators, are widely used in production practice and in engineering.
Various waveform curves can be expressed by trigonometric equations. Circuits that can generate a variety of waveforms, such as triangle waves, sawtooth waves, rectangular waves (including square waves), and sine waves, are called function signal generators.
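Those four basic waveforms can be sketched numerically, for instance in Python (a toy, single-sample version of what a function generator produces; the names and phase convention are illustrative):

```python
import math

def waveform(kind: str, phase: float) -> float:
    """One sample of a unit-amplitude waveform at the given phase in [0, 1)."""
    phase %= 1.0
    if kind == "sine":
        return math.sin(2 * math.pi * phase)
    if kind == "square":
        return 1.0 if phase < 0.5 else -1.0       # rectangular, 50% duty cycle
    if kind == "sawtooth":
        return 2.0 * phase - 1.0                  # linear ramp, -1 to +1
    if kind == "triangle":
        return 4.0 * phase - 1.0 if phase < 0.5 else 3.0 - 4.0 * phase
    raise ValueError(f"unknown waveform: {kind}")

# At a quarter of the cycle, sine and square both sit at their peak.
print(waveform("sine", 0.25), waveform("square", 0.25))
```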
-
Depending on the range, signal generators include audio signal generators as well as other types, such as optical, pulse, and microwave generators.
-
"Frequency band" refers to the width of the range from a signal's lowest frequency to its highest.
For example, the range of sound frequencies the human ear can hear is 20-20,000 Hz, and this range is called the "band width" or "bandwidth".
Audio amplifiers (including speakers) differ in how well they amplify, transmit, and reproduce high versus low signal frequencies. That is, within the signal's 20-20,000 Hz band, full-range reproduction cannot always be achieved: if the circuit's band is narrow at the top, treble is lacking and instruments like the suona suffer (the treble sounds muffled); if the bass is missing, drums suffer (the bass sounds dull), and music appreciation is limited.
The relationship between sound quality and the signal's frequency band: if the band is narrow, treble or bass will be attenuated or lost.
More precisely, an amplifier's "frequency response" matters even more than its "band": a poor frequency response means the amplifier amplifies different frequencies within the band unevenly, which directly affects "sound quality".
It's Shannon's sampling theorem.
To ensure the signal is not distorted, the sampling frequency should be at least twice the highest frequency of the signal.
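A quick numerical check of the theorem (Python sketch, with illustrative frequencies): a 900 Hz tone sampled at 1 kHz, well below its Nyquist rate of 1.8 kHz, produces exactly the same samples as a 100 Hz "alias", so the two cannot be told apart after sampling:

```python
import math

fs = 1000.0            # sampling rate, Hz
f_true = 900.0         # above fs/2, so it will alias
f_alias = fs - f_true  # the expected alias at 100 Hz

# Samples of the 900 Hz tone match a sign-flipped 100 Hz tone exactly,
# because sin(2*pi*900*n/1000) = -sin(2*pi*100*n/1000) for integer n.
for n in range(10):
    s_true = math.sin(2 * math.pi * f_true * n / fs)
    s_alias = -math.sin(2 * math.pi * f_alias * n / fs)
    assert abs(s_true - s_alias) < 1e-9
```

This is why the sampling rate must exceed twice the highest signal frequency: below that, distinct frequencies collapse onto the same sample sequence.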