On the 15th of March, our doctoral student Yauhen Yakimenka defended his PhD thesis at the intersection of two different fields of computer science: error-correction codes and compressed sensing.
Most of the information on the internet is now transmitted over fibre optics. It’s extremely fast, but that speed comes at a price: the errors must also be corrected extremely fast. So-called low-density parity-check (LDPC) codes are often used to tackle this problem, and they are decoded with message-passing algorithms.
While message-passing algorithms are indeed fast, they fail more often than “traditional” decoding methods. In the thesis, we studied the theoretical limits of bridging these two families of algorithms.
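To give a flavour of what a message-passing decoder does, here is a toy Python sketch (not the thesis code, and with a tiny Hamming code standing in for a real LDPC code) of iterative decoding on the erasure channel: any parity check that touches exactly one erased bit recovers it, and the process repeats until nothing changes.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code — a toy stand-in for a
# large sparse LDPC matrix; the peeling idea is the same.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def peel(H, received):
    """Iterative erasure decoding: `received` holds 0/1 bits with None for
    erasures. A check touching exactly one erasure determines that bit."""
    r = list(received)
    progress = True
    while progress and None in r:
        progress = False
        for row in H:
            idx = list(np.flatnonzero(row))
            erased = [i for i in idx if r[i] is None]
            if len(erased) == 1:               # this check solves its lone unknown
                i = erased[0]
                r[i] = sum(r[j] for j in idx if j != i) % 2
                progress = True
    return r                                   # may still contain None (failure)

codeword = [1, 0, 1, 1, 0, 1, 0]
received = list(codeword)
received[1] = received[4] = None               # the channel erased two bits
print(peel(H, received))                       # -> [1, 0, 1, 1, 0, 1, 0]
```

The sketch also shows the failure mode: if the erased positions cover a so-called stopping set, no check ever sees a single erasure, the loop stalls, and the decoder gives up even though exhaustive (maximum-likelihood) decoding might still succeed. That gap between the two families of decoders is exactly the kind of phenomenon studied in the thesis.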
Error-correction codes
The problem that error-correction codes solve is fundamental for digital communication: how to transmit information without errors over a channel that introduces errors. At first, it sounds contradictory. However, surprisingly enough, it is indeed possible. Let us do a short thought experiment.
Imagine that you are sitting in a bar with your friends. You are having a conversation, but because of the noise around you, sometimes you cannot hear what the other person has just said. What do you usually do in such a situation? Of course, you ask them to repeat! Now, if you know that it is rather noisy and your friends often cannot hear you, why not say everything twice without waiting to be asked? Then your friend has a much better chance of understanding you the first time.
This is the main trick used in error-correction codes. However, the “speech” now takes the form of electrical current, radio waves, infrared light or almost any physical phenomenon you can think of. The “noise” consists of the imperfections of the real world, such as interference between different signals. Of course, the techniques and algorithms invented for information transmission are much more sophisticated than simple repetition, but the key trick is the same: repeat your message, in some form, or at least some parts of it.
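As a minimal sketch of the repetition trick (a toy illustration, not a practical code), here is the bar conversation in Python: every bit is sent three times over a noisy channel, and the receiver takes a majority vote within each group of copies.

```python
import random

def encode(bits, r=3):
    # Repeat every bit r times: "say everything several times".
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    # Majority vote inside each group of r copies.
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

def noisy(bits, p=0.1):
    # Binary symmetric channel: each bit flips with probability p.
    return [b ^ (random.random() < p) for b in bits]

msg = [1, 0, 1, 1, 0]
print(decode(noisy(encode(msg))))  # equals msg unless 2+ copies of a bit flip
```

A single flipped copy is outvoted by the other two, so occasional noise is harmless; the price is sending three times as much data, which is why real codes repeat information in far more economical ways.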
Compressed sensing
According to some sources, 2.5 quintillion bytes of data are created every day; a quintillion is a one followed by 18 zeros in the British and US naming convention. Merely storing all of this becomes a problem in itself. A significant part of the data we produce consists of various measurements of the real world.
Compressed sensing is based on the following observation: many measurements of real-world quantities and processes are sparse, i.e. most of the time one observes zeroes. Why not compress them on the fly, at the very moment they are measured? But since new measurements can arrive every millisecond, one needs efficient algorithms to capture and reconstruct the data quickly.
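As a hedged illustration of the general idea (generic compressed sensing, not the algorithms from the thesis): instead of storing a length-n sparse signal, one stores m ≪ n random linear measurements of it, and a recovery algorithm — here orthogonal matching pursuit, one classic choice — reconstructs the signal later. The sizes below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 20, 3                    # signal length, measurements, sparsity
A = rng.standard_normal((m, n))        # random measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                              # store 20 numbers instead of 50

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column that best
    explains the residual, then re-fit the coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

print(np.linalg.norm(omp(A, y, k) - x))   # tiny: x recovered from y alone
```

The point is that the measurement step is just a matrix multiplication — cheap enough to do on the fly — while all the cleverness sits in the reconstruction algorithm.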
One way of doing this is the interval-passing algorithm (IPA). Here we start to see similarities with error-correction codes: the IPA is fast but sometimes fails where more “traditional” methods would succeed.
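The real IPA passes interval messages along the edges of a bipartite graph between measurements and unknowns; the simplified sketch below (plain interval constraint propagation, a toy variant of my own rather than the published algorithm) only conveys the flavour. Each non-negative unknown keeps an interval [lo, hi], and every measurement y_c — the sum of the unknowns it touches — is used to tighten those intervals until, with luck, they collapse to single points.

```python
import numpy as np

# Binary measurement matrix: 4 sums over 6 non-negative unknowns.
A = np.array([[1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [1, 0, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])
x = np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0])   # sparse, non-negative
y = A @ x                                       # the stored measurements

# Each unknown starts at [0, smallest measurement it takes part in].
lo = np.zeros(A.shape[1])
hi = np.array([y[A[:, v] == 1].min() for v in range(A.shape[1])])

for _ in range(20):                             # a few sweeps suffice here
    for c, row in enumerate(A):
        idx = np.flatnonzero(row)
        for v in idx:
            others = idx[idx != v]
            # x_v = y_c minus the other unknowns in this measurement,
            # so their intervals bound x_v from both sides.
            lo[v] = max(lo[v], y[c] - hi[others].sum())
            hi[v] = min(hi[v], y[c] - lo[others].sum())

print(np.allclose(lo, hi), lo)                  # intervals collapsed to x
```

Just like the erasure decoder above, this propagation can get stuck on unlucky supports, halting with intervals that never collapse — which is precisely where the parallels to message-passing decoding begin.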
In the thesis, we went further and uncovered many more similarities and parallels. That is an excellent example of how science works: knowledge about one object allows you to apply similar methods to another.