A brief history of data compression technology

Updated on 2024-06-08
4 answers
  1. Anonymous user, 2024-02-11

    Hello, dear, according to your question, what you describe falls under data compression technology. Data compression refers to a technical method that reduces the amount of data in order to shrink storage space and improve the efficiency of transmission, storage, and processing, without losing information. Alternatively, data can be reorganized according to a certain algorithm so as to reduce its redundancy and the space needed to store it.

    Data compression includes lossy compression and lossless compression.

  2. Anonymous user, 2024-02-10

    Reduce storage space

    Image data compression cannot improve the clarity, contrast, or brightness of an image, but it can reduce the storage space required and increase the speed at which image data is transmitted. Image compression is the basis of image storage, processing, and transmission: its goal is to store and transmit images with as little data as possible. Image data can be compressed for two reasons: a certain amount of distortion is permitted in image encoding, and image data contains redundancy.

    In most cases, the compressed image is not required to be exactly the same as the original; a small amount of distortion is allowed, as long as it is not visible to the human eye. This provides favorable conditions for improving the compression ratio: the more distortion that can be tolerated, the higher the compression efficiency that can be achieved. Image data is compressible because it contains a large amount of statistical redundancy as well as physiological (visual) redundancy; removing this part of the data does not affect the perceived image quality, and even removing some fine image detail has no fatal impact on the quality of the actual image.

    In computer science, data is a general term for all the symbols that can be input into a computer and processed by a program: the numbers, letters, symbols, and analog quantities fed into an electronic computer for processing. Computers store and process an ever wider range of objects, and the data that represents those objects becomes correspondingly more complex.

  3. Anonymous user, 2024-02-09

    Data compression can be classified in several ways; depending on the criterion used, there are three common divisions:

    1. Real-time compression and non-real-time compression.

    Real-time compression converts voice or image signals into digital signals, compresses them at the same time, and then transmits them over the Internet in real time. Real-time compression is generally used for transmitting image and sound data. Non-real-time compression is performed only when needed and has no immediacy requirement.

    Non-real-time compression generally does not require special equipment; the appropriate compression software can simply be installed on a computer and used directly.

    2. Digital compression and file compression.

    Digital compression refers to compressing data that is temporal in nature and is often acquired, processed, or transmitted in real time. File compression refers to compressing data that will be saved on physical media such as disks, for example the data of an article, a piece of ** data, or a piece of program code.

    3. Lossless compression and lossy compression.

    Lossless compression exploits the statistical redundancy of data, so its compression ratio is generally low. It is widely used for text data, programs, and image data in special applications that require the data to be stored exactly. Lossy compression takes advantage of the insensitivity of human vision and hearing to certain frequency components in images and sounds, and allows a certain amount of information to be lost during compression. A small demonstration of how lossless compression exploits redundancy is sketched below.
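
    As a small illustration of the statistical-redundancy point above, here is a minimal Python sketch (using only the standard-library zlib module; the variable names are illustrative) that runs the same lossless compressor over highly repetitive bytes and over nearly random bytes:

```python
# Lossless compressors exploit statistical redundancy: repetitive
# input shrinks dramatically, while random input barely shrinks.
import os
import zlib

redundant = b"abc" * 10000      # 30000 bytes with strong redundancy
random_ish = os.urandom(30000)  # 30000 bytes with almost none

print(len(zlib.compress(redundant)))   # on the order of 100 bytes
print(len(zlib.compress(random_ish)))  # close to the original 30000
```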

  4. Anonymous user, 2024-02-08

    1. RLE algorithm: also known as run-length encoding, this is a very simple lossless compression algorithm. It replaces each run of duplicate bytes with a short description of the repeated byte and the number of repetitions (see the RLE sketch after this list).

    2. Huffman's algorithm: one of the best-known methods in lossless compression. It replaces each symbol with a binary code whose length is determined by how often that symbol appears: common symbols are represented with very few bits, while uncommon symbols require many bits (see the Huffman sketch after this list).

    3. Rice algorithm: for data consisting largely of small numerical values, Rice encoding can obtain a better compression ratio (see the Rice sketch after this list).
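
    A minimal Python sketch of the run-length idea from item 1; the function names rle_encode and rle_decode are illustrative, not from any particular library:

```python
# Run-length encoding: replace each run of identical bytes with a
# (count, byte) pair. Runs are capped at 255 so the count fits one byte.

def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(encoded), 2):
        count, value = encoded[i], encoded[i + 1]
        out += bytes([value]) * count
    return bytes(out)

data = b"aaaaabbbccccccc"
packed = rle_encode(data)
print(packed)                      # b'\x05a\x03b\x07c'
assert rle_decode(packed) == data  # round-trips losslessly
```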
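
    A minimal Python sketch of Huffman code construction from item 2, assuming symbol frequencies are counted from the input itself; build_codes is an illustrative name, not any library's API:

```python
# Huffman coding: repeatedly merge the two least frequent subtrees,
# so frequent symbols end up near the root with short codes.
import heapq
from collections import Counter

def build_codes(text: str) -> dict:
    # Heap entries: (frequency, tiebreaker, {symbol: code_so_far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prefix the two subtrees' codes with 0 and 1, then merge them.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = build_codes("abracadabra")
# 'a' appears 5 times out of 11, so it receives the shortest code.
print(sorted(codes.items(), key=lambda kv: len(kv[1])))
```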
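
    A minimal Python sketch of Rice encoding from item 3, for non-negative integers; the parameter k and the name rice_encode are illustrative:

```python
# Rice coding with parameter k: each value n is split into a quotient
# (n >> k), written in unary, and a remainder, written in k binary bits.

def rice_encode(values, k: int) -> str:
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        # Quotient as q ones plus a terminating zero, then the
        # remainder in exactly k binary bits.
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

# With k=2: 0 -> '000', 1 -> '001', 2 -> '010', 3 -> '011', 9 -> '11001'
print(rice_encode([0, 1, 2, 3, 9], k=2))
```

    With k = 2 the values 0 through 3 each cost only three bits, while larger values grow solely in the unary quotient part, which is why the method pays off when the data is dominated by small numbers.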
