Abstract:
Data Compression
Due to limitations in data storage and bandwidth, data of all types has often required compression, a need that has spawned many different compression methods. In some situations the fidelity of the data can be compromised and unnecessary information discarded (lossy compression), while in others the data is useful only if its fidelity is fully preserved, which requires methods that reduce storage requirements without discarding any information (lossless compression).
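To make this distinction concrete, the short Python sketch below (an illustration only, not drawn from the dissertation; the zlib module and the sample values are assumptions chosen for the example) contrasts lossless compression, which recovers the data exactly, with a lossy reduction that discards precision:

    import zlib

    # Lossless compression: the original bytes are recovered exactly.
    original = b"statistical data " * 100
    compressed = zlib.compress(original)
    assert zlib.decompress(compressed) == original
    print(len(original), "->", len(compressed), "bytes (lossless)")

    # Lossy reduction (illustrative): round measurements to one decimal,
    # discarding detail the application may not need.
    measurements = [3.14159, 2.71828, 1.41421]
    reduced = [round(x, 1) for x in measurements]   # [3.1, 2.7, 1.4]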
The theory of data compression has received much attention over the past half century, with some of the most important work done by Claude E. Shannon in the 1940s and 1950s. At present, fields such as Information and Coding Theory, which encompass a wide variety of sciences, continue to make headway into the interesting and highly applicable topic of data compression.
Quantization
Quantization is a broad notion used in several fields, especially in the sciences, including signal processing, quantum physics, computer science, geometry, music and others. The concept of quantization is related to the idea of grouping, dividing or approximating some physical quantity by a small set of discrete measurements.
Data Quantization involves the discretization of data, or the approximation of large data sets by smaller data sets.
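As a simple illustration of this idea (a sketch under assumed data, not the method developed in the dissertation), the Python fragment below approximates a large sample of real values by a small codebook of eight levels together with integer indices:

    import numpy as np

    rng = np.random.default_rng(0)
    samples = rng.normal(size=1000)        # assumed statistical data set

    # Codebook of 8 equally spaced levels spanning the observed range.
    levels = np.linspace(samples.min(), samples.max(), 8)

    # Map each sample to the index of its nearest level.
    indices = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    quantized = levels[indices]

    # The 1000 floats are now described by 1000 small integer indices
    # plus the 8-entry codebook, at the cost of some approximation error.
    print("mean squared error:", np.mean((samples - quantized) ** 2))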
This mini dissertation considers how data of a statistical nature can be quantized and compressed.