SOFSEM
47th International Conference on Current Trends in Theory and Practice of Computer Science, Bolzano-Bozen, Italy, January 25–29, 2021

Keynotes


Thomas Erlebach

Algorithms that Access the Input via Queries

There are many interesting settings where an algorithm cannot freely access the whole input: for example, reading the entire input may take too much time, or parts of the input may be uncertain and it is costly to obtain precise information. In such settings, one is interested in algorithms that make queries (or probes) to access parts of the input until sufficient information has been gathered to output a solution. The focus then typically lies on using as few queries as possible, either in terms of their absolute number or in comparison with the best possible number of queries for the given instance (competitive analysis). We will discuss different models and results in such settings, especially the model of computing with explorable uncertainty. In that model, an algorithm is initially given intervals that contain the exact input values, and queries can be made to reveal the exact values. We will discuss recent results for the case where a limited number of queries can be made in parallel, and the case where untrusted predictions of the exact values are available. Based on joint work with Michael Hoffmann, Murilo Santos de Lima, Nicole Megow and Jens Schlöter.
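As a toy illustration of the explorable-uncertainty setting (a sketch of the model only, not the algorithms of the talk), consider determining the minimum of several values that are known only through intervals; a natural strategy is to query the item with the smallest lower bound until the answer is certain. The Python sketch below is hypothetical, with query standing for whatever costly operation reveals an exact value.

    # Toy sketch of explorable uncertainty: each value is known only to lie
    # in an interval; a query reveals the exact value at unit cost.

    def minimum_with_queries(intervals, query):
        """Determine the minimum value, querying as needed.

        intervals: list of (low, high) bounds on the unknown values.
        query: function i -> exact value of item i (one query per call).
        """
        revealed = {}                     # index -> exact value
        while True:
            unqueried = [i for i in range(len(intervals)) if i not in revealed]
            best_known = min(revealed.values(), default=float("inf"))
            if not unqueried:
                return best_known
            i = min(unqueried, key=lambda j: intervals[j][0])
            if best_known <= intervals[i][0]:
                return best_known         # no unqueried value can be smaller
            revealed[i] = query(i)        # pay one query, learn the exact value

    values = [7, 3, 9]                    # hidden exact values
    print(minimum_with_queries([(5, 10), (2, 6), (8, 12)], lambda i: values[i]))  # 3

On this instance a single query (of the second item) suffices even though all three values are uncertain; competitive analysis compares an algorithm's query count against such instance-optimal counts.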

Thomas Erlebach received a PhD in Computer Science from Technische Universität München in 1999. From 2000 to 2004 he was an Assistant Professor in Theory of Communication Networks at ETH Zürich. Since September 2004 he has been with the Department of Computer Science (now School of Informatics) at the University of Leicester, first as Reader and then, from April 2007, as Professor. His research interests lie in approximation and online algorithms for combinatorial optimisation problems, algorithmic aspects of communication networks, temporal graphs, and computing with uncertainty. He is a member of the Editorial Board of Theoretical Computer Science, has served on over 60 program committees of international conferences, and has published more than 130 papers in refereed conference proceedings and journals.

Renée J. Miller

Towards Knowledge Exchange: State-of-the-Art and Open Problems

I'll discuss my group's experience in bringing data exchange to knowledge graphs. This experience includes the development of Kensho, a tool for generating mapping rules and performing knowledge exchange between two Knowledge Bases (KBs). I'll discuss the challenges addressed in Kensho, including handling the open-world assumption underlying KBs, managing the rich structural complexity of KBs, and coping with incomplete correspondences between property paths. From this, I'll highlight some of the many open and important problems related to knowledge exchange, including how knowledge translation can inform the task of KB integration and population.
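As a purely hypothetical illustration of what knowledge exchange involves (the rule, data, and names below are invented for exposition and are not Kensho's), a mapping rule may have to align a property path in the source KB with a single property in the target KB:

    # Hypothetical toy rule: the source path bornIn/locatedIn corresponds
    # to the target property birthCountry.
    source = {
        ("alice", "bornIn", "Toronto"),
        ("Toronto", "locatedIn", "Canada"),
    }

    def apply_path_rule(triples, p1, p2, target_prop):
        """Materialize target facts for the rule p1/p2 -> target_prop."""
        out = set()
        for (s, p, o) in triples:
            if p == p1:
                for (s2, q, o2) in triples:
                    if s2 == o and q == p2:
                        out.add((s, target_prop, o2))
        return out

    print(apply_path_rule(source, "bornIn", "locatedIn", "birthCountry"))
    # {('alice', 'birthCountry', 'Canada')}

Even this toy version hints at the difficulties named above: under the open-world assumption the absence of a source triple licenses no conclusion, and real correspondences between property paths are often incomplete.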

Renée J. Miller is a University Distinguished Professor of Computer Science at Northeastern University. She is a Fellow of the Royal Society of Canada, Canada's National Academy of Science, Engineering and the Humanities. She received the US Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the United States government on outstanding scientists and engineers beginning their careers. She received an NSF CAREER Award, the Ontario Premier's Research Excellence Award, and an IBM Faculty Award. She formerly held the Bell Canada Chair of Information Systems at the University of Toronto and is a Fellow of the ACM. Her work has focused on the long-standing open problem of data integration and has achieved the goal of building practical data integration systems. She and her colleagues received the ICDT Test-of-Time Award and the 2020 Alonzo Church Award for Outstanding Contributions to Logic and Computation for their influential work establishing the foundations of data exchange. Professor Miller is an Editor-in-Chief of the VLDB Journal and former president of the non-profit Very Large Data Base (VLDB) Foundation. She received her PhD in Computer Science from the University of Wisconsin, Madison, and bachelor's degrees in Mathematics and in Cognitive Science from MIT.

Merav Parter

Resilient Distributed Computation

Following the immense recent advances in distributed networks, the explosive growth of the Internet, and our increased dependence on these infrastructures, guaranteeing the uninterrupted operation of communication networks has become a major objective in network algorithms. Modern instantiations of distributed networks, such as the Bitcoin network and cloud computing, additionally introduce new security challenges that deserve urgent attention in both theory and practice. In this talk, I will present a recent framework for obtaining fast, resilient, and secure distributed algorithms for fundamental graph problems. We will mainly provide general compilation schemes based on exploiting the high connectivity of the graph, and highlight major open problems in this area. Based on joint work with Yael Hitron and Eylon Yogev.
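A standard building block behind such connectivity-based compilers (shown here as a simplified sketch, not the specific schemes of the talk) is to route each message over 2f+1 edge-disjoint paths, so that even if an adversary corrupts messages on up to f paths, the honest copies form a majority:

    from collections import Counter

    def send_over_disjoint_paths(message, num_paths, corrupt):
        """Simulate one message sent over edge-disjoint paths.

        corrupt: indices of adversarially controlled paths (at most f of
        them); with num_paths = 2f+1 the f+1 honest copies outnumber any
        corrupted value, so a majority vote recovers the message.
        """
        received = [("garbage" if i in corrupt else message)
                    for i in range(num_paths)]
        value, _ = Counter(received).most_common(1)[0]
        return value

    f = 2
    print(send_over_disjoint_paths("commit", 2 * f + 1, corrupt={0, 3}))  # 'commit'

By Menger's theorem, a graph of connectivity 2f+1 guarantees that such a system of disjoint paths exists between every pair of nodes, which is how the high connectivity of the graph is exploited.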

Merav Parter is a faculty member in the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science. Before joining Weizmann, she was a Fulbright and Rothschild Postdoctoral Researcher in the EECS department at MIT. She was a 2012 Google European Fellow in Distributed Computing. Her research interests include reliable distributed communication, graph theory, and neural networks. Parter's research is supported (in part) by the European Research Council (Starting Grant DistRes, 2020-2025), the Israel Science Foundation (ISF), and the joint NSF-BSF program.

Celine Scornavacca

Reconstructing Phylogenetic Networks from Sequences: Where We Stand and What to Do Next

Phylogenetics is the field of evolutionary biology that studies evolutionary relationships among species through the analysis of molecular data. Graphs are used to depict the relatedness and shared history of multiple species. Traditionally, these graphs are rooted leaf-labelled trees, devised to represent evolutionary histories in which the main events are speciations (at the internal nodes of the tree) and descent with modification (along the branches of the tree). However, the last few years have witnessed a growing appreciation of reticulate evolution, that is, events causing inheritance from more than one ancestor. Classic examples of such events are hybrid speciation, horizontal gene transfer and recombination. Such events cannot be represented on trees, and in these cases rooted leaf-labelled DAGs are more appropriate. There remains a strong demand for practical algorithms to reconstruct such DAGs from genomic data, mainly because existing methods can only be applied to a small number of taxa. In this talk, I will present several recent advances towards efficient and scalable algorithms for this task.
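To make the contrast concrete, here is a minimal sketch (with invented taxa) of such a rooted leaf-labelled DAG: a reticulation event shows up as a node with two parents, which no tree can accommodate.

    # Toy phylogenetic network: leaf B descends from a hybridization
    # between the lineages x and y, so node 'h' has in-degree 2.
    network = {                 # parent -> children
        "root": ["x", "y"],
        "x":    ["A", "h"],
        "y":    ["h", "C"],
        "h":    ["B"],
    }

    def in_degrees(net):
        deg = {}
        for children in net.values():
            for c in children:
                deg[c] = deg.get(c, 0) + 1
        return deg

    print([v for v, d in in_degrees(network).items() if d >= 2])  # ['h']

Reconstruction methods must infer both the tree-like parts and the placement of such reticulation nodes from sequence data, which is one reason existing methods scale poorly.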

Celine Scornavacca is a Centre National de la Recherche Scientifique (CNRS) research director (Section 51: Modeling and analysis of biological data and systems: approaches through computer science, mathematics, and physics), based at the Institute of Evolutionary Sciences of Montpellier (ISEM). She received a Ph.D. in Computer Science from the University of Montpellier in 2009. After a postdoctoral position at the University of Tübingen, she joined the Institut des Sciences de l'Evolution de Montpellier (ISEM) as a CNRS research associate in October 2011. Her research interests include parameterized complexity, supertree and network methods for phylogenetics, combinatorics and bioinformatics.

Jiří Wiedermann and Jan van Leeuwen

Towards Minimally Conscious Finite-State Controlled Cyber-Physical Systems: A Manifesto

Incidents like the crash of Lion Air Flight 610 in 2018 challenge the design of reliable and secure cyber-physical systems that operate in the real world and cope with unpredictable external phenomena and error-prone technology. We argue that their design needs to guarantee minimal machine consciousness, expressing that these systems must operate with full awareness of (the state of) their components and the environment. The concept emerged from our recent effort to develop a computational model for conscious behavior in robots, based on the theory of automata. Making systems minimally machine conscious leads to more trustworthy systems, as it strengthens their behavioral flexibility in varying environments and their resilience to operation or cooperation failures, whether of their components or of the system as a whole. The notion of minimal machine consciousness has the potential to become one of the defining attributes of Industry 4.0.
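One speculative reading of this idea in miniature (my illustration, not the authors' automata-theoretic model) is a finite-state controller that keeps an explicit model of the state of its own components and lets that awareness override commands:

    # Toy self-aware controller: act only when the reported component
    # states match the expected ones; otherwise fall back to a safe mode.
    EXPECTED = {"sensor": "ok", "actuator": "ok"}

    def step(command, reported):
        for component, expected in EXPECTED.items():
            if reported.get(component) != expected:
                # Awareness of a component fault overrides the command.
                return f"safe-mode (fault in {component})"
        return f"execute {command}"

    print(step("climb", {"sensor": "ok", "actuator": "ok"}))      # execute climb
    print(step("climb", {"sensor": "faulty", "actuator": "ok"}))  # safe-mode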

Jiří Wiedermann belongs to the first generation of computer scientists graduating in the former Czechoslovakia. His research interests include ad-hoc networks, algorithm design and analysis, algorithmic systems, applied algorithmics, artificial intelligence, artificial life, cognitive systems, computational complexity theory (including non-uniform complexity theory), data structures, distributed computing, embodied robotics, information technology, intelligent algorithms, languages and automata, machine learning, non-standard computational models (amorphous computing, molecular computing, nano-computing, super-Turing computability, etc.), networks and network modeling, network algorithms, neurocomputing, parallel computing, philosophy and computing, theoretical computer science, and more. In recent years he has focused mainly on algorithms and models for AI inspired by the modeling of higher-level human abilities such as machine consciousness and understanding. He has published two monographs and more than 150 papers in scientific journals and conference proceedings, and some of his results have entered monographs and textbooks in computer science. Through his invited talks, publications, and work in organizing important European and national computer science conferences, he has contributed to the development of informatics at both the national and international levels. In the 1990s he served as vice-president of the European Association for Theoretical Computer Science (EATCS). Between 2000 and 2012 he was the director, and since 2017 he has been the deputy director, of the Institute of Computer Science of the Academy of Sciences of the Czech Republic. Professor Wiedermann was a member of the board of directors of ERCIM (the European Research Consortium for Informatics and Mathematics) from 1997 to 2010 and is a member of Academia Europaea (London) and of the Czech Learned Society (Prague).

Jan van Leeuwen is a distinguished professor of Computer Science and former head of the Department of Information and Computing Sciences at Utrecht University (the Netherlands). His research focuses on the algorithms, theories and philosophy of the computing sciences. He has published extensively on the design and analysis of algorithmic systems, on computability theory and non-classical computing, on networks and formal methods, and on the science-philosophical views of computational systems. He edited the well-known two-volume Handbook of Theoretical Computer Science and is a former series editor of the Lecture Notes in Computer Science (Springer). He was the editor, with Barry Cooper, of "Alan Turing: His Work and Impact" (Elsevier). Jan van Leeuwen was vice-dean of the Faculty of Sciences at Utrecht University, vice-president of EATCS, and co-founder and vice-president of Informatics Europe. He received the Bolzano Honorary Medal of the Czech Academy of Sciences in 1999, an honorary doctorate in the Natural Sciences from RWTH Aachen in 2008, and the ACM Distinguished Service Award in 2013. Jan van Leeuwen is a member of the Royal Netherlands Society of Sciences and Humanities and of the Academia Europaea.