# EE4740 Data compression: Entropy and sparsity perspectives

**Topics:** Data compression and its connections to information theory and compressed sensing

The course provides a comprehensive overview of data compression and its connections to information theory and compressed sensing. The content is presented in two parts: the first part covers data compression based on entropy, and the second part discusses compression based on sparsity.

The first part introduces the fundamentals of the mathematical theory of information developed by Shannon. The course starts from probability theory and discusses how to mathematically model information sources. Entropy is established as a natural measure of the description length of optimally compressed data. Then, Shannon’s source coding theorem is discussed, followed by the entropy encodings used in lossless data compression, namely Huffman coding and arithmetic coding.
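To illustrate these ideas, the following is a minimal Python sketch (not part of the official course material; all function names are illustrative) that computes the entropy of a discrete memoryless source and builds a Huffman code for it. For the dyadic source used here, the average codeword length meets the entropy bound exactly.

```python
import heapq
from math import log2

def entropy(probs):
    """Shannon entropy of a discrete source, in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

def huffman_code(probs):
    """Build a binary Huffman code; returns {symbol index: codeword}."""
    # Heap items: (probability, tiebreaker, {symbol: partial codeword}).
    heap = [(p, i, {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prefixing 0 and 1.
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = [0.5, 0.25, 0.125, 0.125]  # dyadic source: Huffman is optimal here
code = huffman_code(probs)
avg_len = sum(p * len(code[i]) for i, p in enumerate(probs))
print(entropy(probs), avg_len)  # → 1.75 1.75 (bits per symbol)
```

For non-dyadic sources the average Huffman codeword length exceeds the entropy by strictly less than one bit per symbol, which is the gap that arithmetic coding narrows by encoding long symbol blocks jointly.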

The second part introduces the notion of sparsity and gives an overview of the area of compressed sensing. It starts with the classical techniques to solve underdetermined linear systems. Then, the core problem of compressed sensing, namely l0 norm minimization with linear constraints, is introduced. The compressed sensing problem is motivated using example applications from wireless communication, control, and image processing. The l0 norm minimization problem is shown to be NP-hard, and several classical approximate algorithms like l1 norm minimization, orthogonal matching pursuit, and thresholding algorithms are discussed. Building on the classical solutions, more advanced solution techniques are then covered. The first approach is the sparse Bayesian framework, which introduces general statistical tools such as majorization-minimization and expectation-maximization. The second, state-of-the-art approach is the deep learning framework based on autoencoders and generative adversarial networks. This part closes with the theory of compressed sensing. Here, the mathematical concepts of spark, null space property, and restricted isometry property are introduced. Using these concepts and properties, the theoretical performance guarantees of the standard compressed sensing algorithms are studied.
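As a taste of the greedy algorithms mentioned above, here is a minimal NumPy sketch of orthogonal matching pursuit (again illustrative, not course material): given measurements y = Ax of a k-sparse vector x through an underdetermined matrix A, it greedily selects the column most correlated with the residual and refits by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x with y ≈ A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)  # underdetermined: 30 equations, 50 unknowns
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.0, -2.0, 1.5]           # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
print(np.linalg.norm(A @ x_hat - A @ x_true))    # residual after 3 greedy steps
```

In this regime (random Gaussian A, few nonzeros relative to the number of measurements), OMP typically recovers the support exactly; the restricted isometry property studied at the end of the course makes such success conditions precise.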

**Learning objectives**

At the end of the course, the student should be able to:

- Explain the basic notions and core theorems on the analysis and design of information representation.
- Derive the limits of data compression and construct codes that meet these limits.
- Understand the concept of sparsity and compressive sensing from an optimization perspective.
- Debate the functioning of different sparse recovery approaches, namely convex optimization, greedy, thresholding, Bayesian, and deep learning techniques.
- Discuss the theoretical underpinnings of sparse signal recovery and performance guarantees of compressed sensing algorithms.
- Connect to recent literature on sparse signal recovery that builds upon the presented tools and techniques.

Although largely self-contained, the course refers to EE4C03 Statistical Digital Signal Processing, ET4386 Estimation and Detection, EE4530 Applied Convex Optimization, SC42140 Signal Analysis & Learning-2D, and SC42150 Statistical Signal Processing.

**Literature**

- T. M. Cover and J. A. Thomas, “Elements of Information Theory,” John Wiley & Sons, Inc. (2015)
- S. Foucart and H. Rauhut, “A Mathematical Introduction to Compressive Sensing,” Applied and Numerical Harmonic Analysis, Birkhäuser, New York, NY (2013)

## Teachers

### dr. Geethu Joseph

Compressive Sensing, Sparse Signal Processing, Signal Processing for Communication, Linear Dynamical Systems

### Nitin Myers

Last modified: 2022-06-19

### Details

| Credits | 5 EC |
|---|---|
| Period | 0/0/4/0 |
| Contact | Geethu Joseph |