We are currently in the midst of significant changes in communication paradigms. One of these changes is the move to cloud-based services instead of local applications (Office 365, YouTube, Spotify, etc.). One can argue about the motivations behind this trend, but it allows providers to offer customers the most up-to-date solutions and integrations with other software. To make this change possible, service providers are moving to more complex, massively distributed communication models that ensure low latency and cost-efficiency.

Ivo Kubjas

For motivation, let us look at an example. Say we are working with Google Docs in Tartu. The page is very responsive, and the document is probably located in Google’s data centre in Finland. The next day, we take an overnight flight to Sydney and want to work on the same document. If it were served from Finland, then due to the physical limit imposed by the speed of light, any change would take around 0.3 seconds to appear in the document. We would certainly not be satisfied with that and would move back to using Word.
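As a back-of-envelope check of the 0.3-second figure (the numbers below are our own rough assumptions, not measurements): light in optical fibre travels at roughly two thirds of its vacuum speed, and real routes are longer than the great-circle distance.

```python
GREAT_CIRCLE_KM = 15_000        # approx. Helsinki-Sydney distance
FIBRE_SPEED_KM_S = 200_000      # roughly 2/3 of the speed of light in vacuum
PATH_INFLATION = 2.0            # assumed detour factor of real routes

# round trip: there and back again, along an inflated path
rtt = 2 * GREAT_CIRCLE_KM * PATH_INFLATION / FIBRE_SPEED_KM_S
print(f"estimated round-trip time: {rtt:.2f} s")   # ~0.30 s
```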

Instead, Google hosts multiple copies of our documents all over the world. Now, when we connect from Sydney, Google serves the document from a nearby data centre, and we again have an enjoyable user experience while working on our important things. We even invite a colleague from Madrid to collaborate. As our colleague is served a copy of the document near Madrid, Google needs to synchronize all document changes so that everyone sees them in real time.

However, duplicating the information across tens or hundreds of servers is expensive and may introduce inconsistencies. So, the servers need to occasionally perform full synchronization, or store only parts of the documents in different locations.

In the dissertation, we looked at more general definitions of workflows similar to the scenario above. Instead of using algorithmic approaches (e.g., rsync), we applied algebraic methods, hoping that they would give a more thorough insight into the nature of specific problems and improve on the existing state of the art. We identified the fundamental properties of decentralized systems and described all practical scenarios as combinations of these properties. Through this systematic approach, we were able to identify both already-solved and new problems. We then looked at some of these scenarios in detail.

We first tried to improve on the problem of data synchronization. In data synchronization, there are two or more servers, each with its own data set, and the goal is to find the union of all the sets. A naive solution is to transmit the sets in full, but this becomes inefficient when the number of elements missing from each set is small compared to the sizes of the sets. There exists an optimal solution that transmits only the elements that each server is missing; however, it requires prior knowledge of the number of unique elements at every server. We showed that even if this estimate is inexact, iteratively executing the data synchronization protocol still makes the sets converge.
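To make the optimal approach concrete, here is a minimal sketch of characteristic-polynomial set reconciliation in the spirit of Minsky, Trachtenberg and Zippel; this is our own toy implementation over a small prime field, not the dissertation's protocol. Each server evaluates its characteristic polynomial chi_S(z) = prod over x in S of (z - x) at d public points and transmits only those d values, which suffice whenever d is at least the size of the symmetric difference.

```python
P = 97  # small prime field; set elements live in {0, ..., P-1}

def chi_eval(s, z):
    """Evaluate the characteristic polynomial chi_S(z) = prod_{x in s}(z - x) mod P."""
    out = 1
    for x in s:
        out = out * (z - x) % P
    return out

def solve_mod_p(mat, rhs):
    """Gaussian elimination over GF(P); returns a solution vector or None."""
    n = len(rhs)
    m = [row[:] + [r] for row, r in zip(mat, rhs)]
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col]), None)
        if piv is None:
            return None                       # singular: the size guess was off
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], P - 2, P)      # inverse via Fermat's little theorem
        m[col] = [v * inv % P for v in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % P for a, b in zip(m[r], m[col])]
    return [row[n] for row in m]

def reconcile(ea, eb, pts, ma, mb):
    """Recover (A\\B, B\\A) from evaluations, guessing |A\\B| = ma and |B\\A| = mb.

    Interpolates monic f (degree ma, roots A\\B) and g (degree mb, roots B\\A)
    from the identity f(z) * chi_B(z) = g(z) * chi_A(z) at the sample points.
    """
    n = ma + mb
    rows = [[pow(z, j, P) * b % P for j in range(ma)] +
            [-pow(z, k, P) * a % P for k in range(mb)]
            for z, a, b in zip(pts[:n], ea[:n], eb[:n])]
    rhs = [(pow(z, mb, P) * a - pow(z, ma, P) * b) % P
           for z, a, b in zip(pts[:n], ea[:n], eb[:n])]
    sol = solve_mod_p(rows, rhs)
    if sol is None:
        return None
    def roots(coeffs):                        # coefficients low -> high, monic
        return {x for x in range(P)
                if sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P == 0}
    return roots(sol[:ma] + [1]), roots(sol[ma:] + [1])

A = {3, 7, 19, 42, 55}
B = {3, 7, 19, 60}                     # the symmetric difference has 3 elements
pts = [90, 91, 92, 93, 94, 95]         # public sample points
ea = [chi_eval(A, z) for z in pts]     # these few values are all that A transmits
eb = [chi_eval(B, z) for z in pts]
print(reconcile(ea, eb, pts, 2, 1))    # -> ({42, 55}, {60})
```

If the guessed sizes are too small, the linear system typically becomes singular or the recovered polynomials fail verification, signalling that another round with a larger guess is needed; this retry loop is the intuition behind the convergence result, though the dissertation's actual protocol and analysis are more refined.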

Then, we looked at a scenario where the underlying network topology of the servers is not complete. In such a case, not every pair of servers can communicate directly; transmissions have to pass through other servers. This allows us to model networks that are closer to the wireless networks deployed in practice. Additionally, we further constrained the problem by allowing the servers to query for specific elements (instead of all missing elements). We discovered a characterization of the network topology, called ρ-solvability, which gives the minimum number of rounds needed to satisfy the requests of all the servers. For 1-solvable networks, we gave an optimal protocol. For arbitrary ρ-solvable networks, we gave a protocol that uses the least number of rounds.
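As a rough illustration of why the topology dictates a minimum number of rounds, consider the following toy simulation; this is our own sketch, not the dissertation's protocol or its definition of ρ-solvability. Each round, every server forwards everything it currently holds to its direct neighbours, so a request cannot be satisfied in fewer rounds than the hop distance to the nearest server holding the requested element.

```python
# Path topology 0 - 1 - 2 - 3: servers 0 and 3 cannot talk directly.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
holdings = {0: {"a"}, 1: {"b"}, 2: {"c"}, 3: {"d"}}
requests = {0: {"d"}, 3: {"a"}}        # servers ask for specific elements

rounds = 0
while any(not req <= holdings[s] for s, req in requests.items()):
    # every server sends its current holdings to all neighbours at once
    incoming = {s: set() for s in neighbours}
    for s, items in holdings.items():
        for n in neighbours[s]:
            incoming[n] |= items
    for s in neighbours:
        holdings[s] |= incoming[s]
    rounds += 1

print(rounds)   # 3: exactly the 0-3 hop distance in this path graph
```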

Our last contribution is the study of function computation on synchronized data. This problem is a variation of the general data synchronization problem: instead of finding the union of the servers’ sets, we compute the value of some function on the union. Our main goal was to study whether this modification allows reducing the number of transmitted bits compared to simply synchronizing the sets and then computing the function on the synchronized set. The answer was positive: for some functions the reduction is substantial, while for others the improvement is minor.
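As a toy illustration of why the answer depends on the function (our own example, not one from the dissertation): for f = max, the maximum over the union factors through the local maxima, so each server transmits a single value regardless of the set sizes; for f = sum over the union's distinct elements, local sums double-count the shared elements, so some information about the overlap must still be exchanged.

```python
A = {3, 7, 19, 42, 55}
B = {3, 7, 19, 60}

# f = max: two transmitted values, independent of |A| and |B|
assert max(max(A), max(B)) == max(A | B)

# f = sum over the union: naive combination of local sums is wrong,
# because the shared elements 3, 7, 19 are counted twice
assert sum(A) + sum(B) != sum(A | B)

# correcting it needs sum(A & B), i.e. information about the overlap
assert sum(A) + sum(B) - sum(A & B) == sum(A | B)
```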
